From kennelson11 at gmail.com  Tue May  1 00:02:36 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 01 May 2018 00:02:36 +0000
Subject: [openstack-dev] [All] [Elections] Rocky TC Election Results
Message-ID:

Hello Everyone :)

Please join me in congratulating the 7 newly elected members of the
Technical Committee (TC)!

- Thierry Carrez (ttx)
- Chris Dent (cdent)
- Sean McGinnis (smcginnis)
- Davanum Srinivas (dims)
- Zane Bitter (zaneb)
- Graham Hayes (mugsie)
- Mohammed Naser (mnaser)

Full results:
https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d

Election process details and results are also available here:
https://governance.openstack.org/election/

Thank you to all of the candidates, having a good group of candidates
helps engage the community in our democratic process.

Thank you to all who voted and who encouraged others to vote. We need to
ensure your voices are heard!

Thank you for another great round.

-Kendall Nelson (diablo_rojo)

[1] https://review.openstack.org/#/c/565368/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amy at demarco.com  Tue May  1 00:17:31 2018
From: amy at demarco.com (Amy)
Date: Mon, 30 Apr 2018 19:17:31 -0500
Subject: [openstack-dev] [All] [Elections] Rocky TC Election Results
In-Reply-To:
References:
Message-ID: <3481FE79-7520-48F2-8670-340CC9A95637@demarco.com>

Congrats to all who were elected!

Amy (spotz)

Sent from my iPhone

> On Apr 30, 2018, at 7:02 PM, Kendall Nelson wrote:
>
> Hello Everyone :)
>
> Please join me in congratulating the 7 newly elected members of the Technical Committee (TC)!
>
> Thierry Carrez (ttx)
> Chris Dent (cdent)
> Sean McGinnis (smcginnis)
> Davanum Srinivas (dims)
> Zane Bitter (zaneb)
> Graham Hayes (mugsie)
> Mohammed Naser (mnaser)
>
> Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d
>
> Election process details and results are also available here: https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates helps engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote. We need to ensure your voices are heard!
>
> Thank you for another great round.
>
> -Kendall Nelson (diablo_rojo)
>
> [1] https://review.openstack.org/#/c/565368/
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com  Tue May  1 01:18:05 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 1 May 2018 11:18:05 +1000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To:
References:
Message-ID: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com>

On 30/04/18 20:16, Flint WALRUS wrote:
> I would very much second that question! Indeed it has been one of my
> own wonderings for some time.
>
> Of course GraphQL is not intended to replace REST as is and has to
> live in parallel

Effectively, a standard initial architecture is to have GraphQL sitting
aside (in parallel) and wrapping REST, and to develop the GraphQL schema
along the way.

It seems too early to tell, but GraphQL being the next step in API
evolution, it might ultimately replace REST.
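To make that a bit more concrete, here is a minimal sketch of the
"sitting aside" approach in Python, using graphene (one of the GraphQL
libraries listed at graphql.org/code). The REST endpoint and field names
below are hypothetical, purely to illustrate the wrapping:

    import graphene
    import requests


    class Server(graphene.ObjectType):
        id = graphene.ID()
        name = graphene.String()


    class Query(graphene.ObjectType):
        servers = graphene.List(Server)

        def resolve_servers(self, info):
            # Hypothetical REST endpoint being wrapped; a real service
            # would need its actual URL, auth token, etc.
            resp = requests.get('http://cloud.example.com/v2.1/servers')
            return [Server(id=s['id'], name=s['name'])
                    for s in resp.json()['servers']]


    schema = graphene.Schema(query=Query)

    # A client asks for exactly the tree of data it needs, in a
    # single request against the schema:
    result = schema.execute('{ servers { id name } }')
    print(result.data)

The consumer only ever sees the schema, never the REST call behind it,
which is where the decoupling comes from.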
> but it would likely greatly accelerate all requests within heavily
> loaded environments

+1

> .
>
> So +1 for this question.
> On Mon, 30 Apr 2018 at 05:53, Gilles Dubreuil wrote:
>
>     Hi,
>
>     Remember Boston's Summit presentation [1] about GraphQL [2] and how it
>     addresses REST limitations.
>     I wonder if any project has been thinking about using GraphQL. I haven't
>     found any mention or pointers about it.
>
>     GraphQL takes a completely different approach compared to REST. So we can
>     finally forget about REST API description languages
>     (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and HATEOAS (the hypermedia
>     approach, which doesn't describe how to use it).
>
>     So, once past the point where 'REST vs GraphQL' is like comparing SQL
>     and no-SQL DBMS and therefore have different applications, there is no
>     doubt the complexity of most OpenStack projects makes them good
>     candidates for GraphQL.
>
>     Besides topics such as efficiency, decoupling, and no need for version
>     management, there are many other powerful features, such as an API
>     schema out of the box and better automation down the track.
>
>     It looks like the dream of a conduit between API services and consumers
>     might have finally come true, so we could move on and worry about other
>     things.
>
>     So, has anyone already started looking into it?
>
>     [1] https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql
>     [2] http://graphql.org
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gilles at redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.therond at gmail.com  Tue May  1 01:31:56 2018
From: gael.therond at gmail.com (Flint WALRUS)
Date: Tue, 01 May 2018 01:31:56 +0000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com>
References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com>
Message-ID:

Yes, that was indeed the sense of my point.

OpenStack has to provide both endpoint types for a while, for backward
compatibility, in order to smooth the transition.

For instance, it would be a good idea to contact the Postman dev team
once GraphQL starts to be integrated, as that would allow a lot of ops to
keep their day-to-day tools by just converting their existing collections
of requests.

Or, alternatively, to provide a tool with at least similar features.

On Tue, 1 May 2018 at 03:18, Gilles Dubreuil wrote:

>
> On 30/04/18 20:16, Flint WALRUS wrote:
> I would very much second that question! Indeed it has been one of my own
> wonderings for some time.
>
> Of course GraphQL is not intended to replace REST as is and has to live
> in parallel
>
>
> Effectively, a standard initial architecture is to have GraphQL sitting
> aside (in parallel) and wrapping REST, and to develop the GraphQL schema
> along the way.
>
> It seems too early to tell, but GraphQL being the next step in API
> evolution, it might ultimately replace REST.
>
>
> but it would likely greatly accelerate all requests within heavily
> loaded environments
>
>
> +1
>
>
> .
>
> So +1 for this question.
> On Mon, 30 Apr 2018 at 05:53, Gilles Dubreuil wrote:
>
>> Hi,
>>
>> Remember Boston's Summit presentation [1] about GraphQL [2] and how it
>> addresses REST limitations.
>> I wonder if any project has been thinking about using GraphQL. I haven't
>> found any mention or pointers about it.
>>
>> GraphQL takes a completely different approach compared to REST. So we can
>> finally forget about REST API description languages
>> (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and HATEOAS (the hypermedia
>> approach, which doesn't describe how to use it).
>>
>> So, once past the point where 'REST vs GraphQL' is like comparing SQL
>> and no-SQL DBMS and therefore have different applications, there is no
>> doubt the complexity of most OpenStack projects makes them good
>> candidates for GraphQL.
>>
>> Besides topics such as efficiency, decoupling, and no need for version
>> management, there are many other powerful features, such as an API
>> schema out of the box and better automation down the track.
>>
>> It looks like the dream of a conduit between API services and consumers
>> might have finally come true, so we could move on and worry about other
>> things.
>>
>> So, has anyone already started looking into it?
>>
>> [1]
>> https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql
>> [2] http://graphql.org
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> -- 
> Gilles Dubreuil
> Senior Software Engineer - Red Hat - Openstack DFG Integration
> Email: gilles at redhat.com
> GitHub/IRC: gildub
> Mobile: +61 400 894 219
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com  Tue May  1 02:36:54 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 1 May 2018 12:36:54 +1000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To:
References:
Message-ID: <643c4acd-3d7c-4cbe-32e2-750e81475ea1@redhat.com>

On 01/05/18 07:21, Matt Riedemann wrote:
> On 4/29/2018 10:53 PM, Gilles Dubreuil wrote:
>> Remember Boston's Summit presentation [1] about GraphQL [2] and how
>> it addresses REST limitations.
>> I wonder if any project has been thinking about using GraphQL. I
>> haven't found any mention or pointers about it.
>>
>> GraphQL takes a completely different approach compared to REST. So we
>> can finally forget about REST API description languages
>> (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and HATEOAS (the hypermedia
>> approach, which doesn't describe how to use it).
>>
>> So, once past the point where 'REST vs GraphQL' is like comparing SQL
>> and no-SQL DBMS and therefore have different applications, there is no
>> doubt the complexity of most OpenStack projects makes them good
>> candidates for GraphQL.
>>
>> Besides topics such as efficiency, decoupling, and no need for version
>> management, there are many other powerful features, such as an API
>> schema out of the box and better automation down the track.
>>
>> It looks like the dream of a conduit between API services and
>> consumers might have finally come true, so we could move on and worry
>> about other things.
>>
>> So, has anyone already started looking into it?
>>
>> [1]
>> https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql
>>
>> [2] http://graphql.org
>
> Not to speak for him, but Sean Dague had a blog post about REST API
> microversions in OpenStack and there is a Q&A bit at the bottom about
> GraphQL replacing the need for microversions:
>
> https://dague.net/2017/12/11/rest-api-microversions/
>
> Since I don't expect Sean to magically appear to reply to this thread,
> I thought I'd pass this along.
>

Thanks Matt for the link.

During Denver's PTG we discovered that consumers tend to use 3rd-party
SDKs, and we also discovered that, ironically, nobody - besides Sean ;) -
has the bandwidth to work full time on SDKs either. That was and still is
the driver for more automation and therefore for having projects produce
an API schema.

One aspect is GraphQL being a descriptive language. It allows consumers
to be entirely decoupled from producers. So instead of an SDK, consumers
rely on a GraphQL client library (these are standardized [1]). The focus
becomes the data and not how to transfer the data. Effectively, services
make their data available through a schema and clients request a tree of
data against it. Sure, at the end of the day, it's still an HTTP
conversation taking place and returning a JSON structure (when not
uploading/downloading a file or the like). The big difference (among
other things) is that one and only one transaction is used.

The second aspect is automation, which can take place because the schema
is provided up front; that's the Graph part.

In the Q&A, Sean said SDKs "build their object models"; yes, that's true,
and with GraphQL we have "fat clients", but as we've also seen the SDK is
replaced with a GraphQL client, cutting the "man in the middle" off!

As for the rest of the answer, it seems to me there are other aspects to
be looked at from different angles.

Cheers

[1] http://graphql.org/code/

From gdubreui at redhat.com  Tue May  1 03:00:12 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 1 May 2018 13:00:12 +1000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To:
References:
Message-ID:

On 01/05/18 11:31, Flint WALRUS wrote:
> Yes, that was indeed the sense of my point.

I was just reinforcing it, no worries! ;)

> OpenStack has to provide both endpoint types for a while, for backward
> compatibility, in order to smooth the transition.
>
> For instance, it would be a good idea to contact the Postman dev team
> once GraphQL starts to be integrated, as that would allow a lot of ops
> to keep their day-to-day tools by just converting their existing
> collections of requests.

Shouldn't we have a common consensus before any project starts pushing
its own GraphQL wheel?

Also, I wonder how GraphQL could open new architecture avenues for
OpenStack. For example, would it make sense to also have a GraphQL broker
linking OpenStack services?

> Or, alternatively, to provide a tool with at least similar features.
> On Tue, 1 May 2018 at 03:18, Gilles Dubreuil wrote:
>
>
> On 30/04/18 20:16, Flint WALRUS wrote:
>> I would very much second that question! Indeed it has been one
>> of my own wonderings for some time.
>>
>> Of course GraphQL is not intended to replace REST as is and has
>> to live in parallel
>
> Effectively, a standard initial architecture is to have GraphQL
> sitting aside (in parallel) and wrapping REST, and to develop
> the GraphQL schema along the way.
>
> It seems too early to tell, but GraphQL being the next step in
> API evolution, it might ultimately replace REST.
>
>> but it would likely greatly accelerate all requests within
>> heavily loaded environments
>
> +1
>
>> .
>>
>> So +1 for this question.
>> On Mon, 30 Apr 2018 at 05:53, Gilles Dubreuil wrote:
>>
>>     Hi,
>>
>>     Remember Boston's Summit presentation [1] about GraphQL [2]
>>     and how it addresses REST limitations.
>>     I wonder if any project has been thinking about using
>>     GraphQL. I haven't found any mention or pointers about it.
>>
>>     GraphQL takes a completely different approach compared to
>>     REST. So we can finally forget about REST API description
>>     languages (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and
>>     HATEOAS (the hypermedia approach, which doesn't describe how
>>     to use it).
>>
>>     So, once past the point where 'REST vs GraphQL' is like
>>     comparing SQL and no-SQL DBMS and therefore have different
>>     applications, there is no doubt the complexity of most
>>     OpenStack projects makes them good candidates for GraphQL.
>>
>>     Besides topics such as efficiency, decoupling, and no need
>>     for version management, there are many other powerful
>>     features, such as an API schema out of the box and better
>>     automation down the track.
>>
>>     It looks like the dream of a conduit between API services and
>>     consumers might have finally come true, so we could move on
>>     and worry about other things.
>>
>>     So, has anyone already started looking into it?
>>
>>     [1]
>>     https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql
>>     [2] http://graphql.org
>>
>>
>>
>>     __________________________________________________________________________
>>     OpenStack Development Mailing List (not for usage questions)
>>     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> -- 
> Gilles Dubreuil
> Senior Software Engineer - Red Hat - Openstack DFG Integration
> Email: gilles at redhat.com
> GitHub/IRC: gildub
> Mobile: +61 400 894 219
>

-- 
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gilles at redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From joshua.hesketh at gmail.com  Tue May  1 03:43:23 2018
From: joshua.hesketh at gmail.com (Joshua Hesketh)
Date: Tue, 1 May 2018 13:43:23 +1000
Subject: [openstack-dev] Overriding project-templates in Zuul
In-Reply-To: <87o9i04rfa.fsf@meyer.lemoncheese.net>
References: <87o9i04rfa.fsf@meyer.lemoncheese.net>
Message-ID:

On Tue, May 1, 2018 at 1:58 AM, James E. Blair wrote:

> Hi,
>
> If you've had difficulty overriding jobs in project-templates, please
> read and provide feedback on this proposed change.
>
> We tried to make the Zuul v3 configuration language as intuitive as
> possible, and incorporated a lot that we learned from our years running
> Zuul v2. One thing that we didn't anticipate was how folks would end up
> wanting to use a job in both project-templates *and* local project
> stanzas.
>
> Essentially, we had assumed that if you wanted to control how a job was
> run, you would add it to a project stanza directly rather than use a
> project-template. It's easy to do so if you use one or the other.
> However, it turns out there are lots of good reasons to use both.
> For example, in a project-template we may want to establish a
> recommended way to run a job, or that a job should always be run with a
> set of related jobs. Yet a project may still want to indicate that the
> job should only run on certain changes in that specific repo.
>
> To be very specific -- a very commonly expressed frustration is that a
> project can't specify a "files" or "irrelevant-files" matcher to
> override a job that appears in a project-template.
>
> Reconciling those is difficult, largely because once Zuul decides to run
> a job (for example, by a declaration in a project-template) it is
> impossible to dissuade it from running that job by adding any extra
> configuration to a project. We need to tread carefully when fixing
> this, because quite a number of related concepts could be affected. For
> instance, we need to preserve branch independence (a change to stop
> running a job in one branch shouldn't affect others). And we need to
> preserve the ability for job variants to layer on to each other (a
> project-local variant should still be able to alter a variant in a
> project-template).
>
> I propose that we remedy this by making a small change to how Zuul
> determines that a job should run:
>
> When a job appears multiple times on a project (for instance if it
> appears in a project-template and also on the project itself), all of
> the project-local variants which match the item's branch must also match
> the item in order for the job to run. In other words, if a job appears
> in a project-template used by a project and on the project, then both
> must match.
>

I might be misunderstanding at which point a job is chosen to be run and
therefore when it's too late to dissuade it. However, if possible, would
it make more sense for the project-local copy of a job to overwrite the
supplied files and irrelevant-files? This would allow a project to run a
job when it otherwise doesn't match.

What happens when something is in both files and irrelevant-files? If the
project-template is trying to say A is in 'files', but the project-local
says A is in 'irrelevant-files', should that overwrite it?

Cheers,
Josh

>
> This effectively causes the "files" and "irrelevant-files" attributes on
> all of the project-local job definitions matching a given branch to be
> combined. The combination of multiple files matchers behaves as a
> union, and irrelevant-files matchers as an intersection.
>
> ================ ======== ======= =======
> Matcher          Template Project Result
> ================ ======== ======= =======
> files            AB       BC      ABC
> irrelevant-files AB       BC      B
> ================ ======== ======= =======
>
> I believe this will address the shortcoming identified above, but before
> we get too far in implementing it, I'd like to ask folks to take a
> moment and evaluate whether it will address the issues you've seen, or
> if you foresee any problems which I haven't anticipated.
>
> Thanks,
>
> Jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Louie.Kwan at windriver.com  Tue May  1 04:04:20 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Tue, 1 May 2018 04:04:20 +0000
Subject: [openstack-dev] [masakari] Masakari Project Meeting time
In-Reply-To: <134F3A2C-7FB4-41FD-BEA8-A529EDA42BC9@nttdata.com>
References: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
 <134F3A2C-7FB4-41FD-BEA8-A529EDA42BC9@nttdata.com>
Message-ID: <47EFB32CD8770A4D9590812EE28C977E963077B4@ALA-MBC.corp.ad.wrs.com>

It seems most of the team members are on vacation this week.

By the way, 03:00 UTC is fine as well.

-----Original Message-----
From: Bhor, Dinesh [mailto:Dinesh.Bhor at nttdata.com]
Sent: Wednesday, April 25, 2018 9:09 PM
To: Kwan, Louie
Cc: Sampath Priyankara (samP); openstack-dev at lists.openstack.org; Young, Ken
Subject: Re: [openstack-dev] [masakari] Masakari Project Meeting time

+1

This time may not fit for attendees who work in the IST time zone, as it
will be 07:30 AM in the morning.

Thanks,
Dinesh

> On Apr 26, 2018, at 12:06 AM, Kwan, Louie wrote:
>
> Sampath, Dinesh and others,
>
> It was a good meeting last week.
>
> As briefly discussed with Sampath, I would like to check whether we can adjust the meeting time.
>
> We are in the EST time zone; the meeting is right at our midnight, 12:00 am.
>
> It would be nice if the meeting could be started ~2 hours earlier, e.g. could it be started at 02:00 UTC instead?
>
> Thanks.
> Louie
>
> Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From zhipengh512 at gmail.com  Tue May  1 04:14:10 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 1 May 2018 06:14:10 +0200
Subject: [openstack-dev] [All] [Elections] Rocky TC Election Results
In-Reply-To: <3481FE79-7520-48F2-8670-340CC9A95637@demarco.com>
References: <3481FE79-7520-48F2-8670-340CC9A95637@demarco.com>
Message-ID:

Congratulations to the newly elected TC members!

On Tue, May 1, 2018 at 2:17 AM, Amy wrote:

> Congrats to all who were elected!
>
> Amy (spotz)
>
> Sent from my iPhone
>
> On Apr 30, 2018, at 7:02 PM, Kendall Nelson wrote:
>
> Hello Everyone :)
>
> Please join me in congratulating the 7 newly elected members of the
> Technical Committee (TC)!
>
>
> - Thierry Carrez (ttx)
> - Chris Dent (cdent)
> - Sean McGinnis (smcginnis)
> - Davanum Srinivas (dims)
> - Zane Bitter (zaneb)
> - Graham Hayes (mugsie)
> - Mohammed Naser (mnaser)
>
>
> Full results: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates
> helps engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote. We need to
> ensure your voices are heard!
>
> Thank you for another great round.
> -Kendall Nelson (diablo_rojo)
>
> [1] https://review.openstack.org/#/c/565368/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com  Tue May  1 04:31:09 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Tue, 1 May 2018 14:31:09 +1000
Subject: [openstack-dev] [All] [Elections] Rocky TC Election Results
In-Reply-To:
References: <3481FE79-7520-48F2-8670-340CC9A95637@demarco.com>
Message-ID: <03667a7b-9d5f-aabc-3334-142363dbe04c@redhat.com>

Bravo, well done!

On 01/05/18 14:14, Zhipeng Huang wrote:
> Congratulations to the newly elected TC members!
>
> On Tue, May 1, 2018 at 2:17 AM, Amy wrote:
>
>     Congrats to all who were elected!
>
>     Amy (spotz)
>
>     Sent from my iPhone
>
>     On Apr 30, 2018, at 7:02 PM, Kendall Nelson wrote:
>
>> Hello Everyone :)
>>
>> Please join me in congratulating the 7 newly elected members of
>> the Technical Committee (TC)!
>>
>>   * Thierry Carrez (ttx)
>>   * Chris Dent (cdent)
>>   * Sean McGinnis (smcginnis)
>>   * Davanum Srinivas (dims)
>>   * Zane Bitter (zaneb)
>>   * Graham Hayes (mugsie)
>>   * Mohammed Naser (mnaser)
>>
>>
>> Full results:
>> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d
>>
>> Election process details and results are also available here:
>> https://governance.openstack.org/election/
>>
>> Thank you to all of the candidates, having a good group of
>> candidates helps engage the community in our democratic process.
>>
>> Thank you to all who voted and who encouraged others to vote. We
>> need to ensure your voices are heard!
>>
>> Thank you for another great round.
>>
>> -Kendall Nelson (diablo_rojo)
>>
>> [1] https://review.openstack.org/#/c/565368/
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> -- 
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co., Ltd
> Email: huangzhipeng at huawei.com
> Office: Huawei Industrial Base, Longgang, Shenzhen
>
> (Previous)
> Research Assistant
> Mobile Ad-Hoc Network Lab, Calit2
> University of California, Irvine
> Email: zhipengh at uci.edu
> Office: Calit2 Building Room 2402
>
> OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gilles at redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Tue May  1 09:53:45 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Tue, 1 May 2018 11:53:45 +0200
Subject: [openstack-dev] [neutron] CI meeting
Message-ID:

Hi,

I have to cancel today's Neutron CI meeting because of public holidays in
many countries. The next meeting will be on 8th May.

— 
Best regards
Slawek Kaplonski
skaplons at redhat.com

From simon.leinen at switch.ch  Tue May  1 10:56:27 2018
From: simon.leinen at switch.ch (Simon Leinen)
Date: Tue, 1 May 2018 12:56:27 +0200
Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey
In-Reply-To: ("Valéry Tschopp"'s message of "Mon, 30 Apr 2018 15:55:23 +0000")
References:
Message-ID:

[Resending for a colleague who's not on openstack-dev -- SL.]

Yeap, because of the bug [#1677217] in the standard
AggregateImagePropertiesIsolation filter, we have written a custom Nova
scheduler filter.

The filter AggregateImageOsDistroIsolation is a simplified version of the
AggregateImagePropertiesIsolation, based only on the 'os_distro' image
property.

https://github.com/valerytschopp/nova/blob/aggregate_image_isolation/nova/scheduler/filters/aggregate_image_os_distro_isolation.py

[#1677217] https://bugs.launchpad.net/nova/+bug/1677217

Cheers,
Valery

On 29/04/18, 23:29, "Ed Leafe" wrote:

> On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote:
>>
>> Based on that, we can definitely say that SameHostFilter and
>> DifferentHostFilter do *not* belong in the defaults. In fact, we got
>> our defaults pretty spot on, based on this admittedly very limited
>> dataset. The only frequently occurring filter that's not in our
>> defaults is AggregateInstanceExtraSpecsFilter.

> Another data point that might be illuminating is: how many sites
> use a custom (i.e., not in-tree) filter or weigher? One of the
> original design tenets of the scheduler was that we did not want to
> artificially limit what people could use to control their deployments,
> but inside of Nova there is a lot of confusion as to whether anyone is
> using anything but the included filters.

> So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack?
> -- Ed Leafe

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From Tim.Bell at cern.ch  Tue May  1 13:10:56 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Tue, 1 May 2018 13:10:56 +0000
Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey
In-Reply-To: <20180501083033.GF9259@sanger.ac.uk>
References: <20180501083033.GF9259@sanger.ac.uk>
Message-ID:

You may also need something like pre-emptible instances to arrange the
cleanup of opportunistic VMs when the owner needs their resources back.
Some details on the early implementation are at
http://openstack-in-production.blogspot.fr/2018/02/maximizing-resource-utilization-with.html.

If you're in Vancouver, we'll be having a Forum session on this
(https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21787/pre-emptible-instances-the-way-forward)
and notes are welcome on the etherpad
(https://etherpad.openstack.org/p/YVR18-pre-emptible-instances)

It would be good to find common implementations, since this is a common
scenario in the academic and research communities.

Tim

-----Original Message-----
From: Dave Holland
Date: Tuesday, 1 May 2018 at 10:40
To: Mathieu Gagné
Cc: "OpenStack Development Mailing List (not for usage questions)", openstack-operators
Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey

    On Mon, Apr 30, 2018 at 12:41:21PM -0400, Mathieu Gagné wrote:
    > Weighers for baremetal cells:
    > * ReservedHostForTenantWeigher [7]
    ...
    > [7] Used to favor reserved host over non-reserved ones based on project.

    Hello Mathieu,

    we are considering writing something like this, for virtual machines,
    not for baremetal. Our use case is that a project buying some compute
    hardware is happy for others to use it, but when the compute "owner"
    wants sole use of it, other projects' instances must be migrated off or
    killed; a scheduler weigher like this might help us to minimise the
    number of instances needing migration or termination at that point.
    Would you be willing to share your source code please?

    thanks,
    Dave
    -- 
    ** Dave Holland ** Systems Support -- Informatics Systems Group **
    ** 01223 496923 **    Wellcome Sanger Institute, Hinxton, UK    **

    -- 
    The Wellcome Sanger Institute is operated by Genome Research
    Limited, a charity registered in England with number 1021457 and a
    company registered in England with number 2742969, whose registered
    office is 215 Euston Road, London, NW1 2BE.

    _______________________________________________
    OpenStack-operators mailing list
    OpenStack-operators at lists.openstack.org
    http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators

From stdake at cisco.com  Tue May  1 13:12:55 2018
From: stdake at cisco.com (Steven Dake (stdake))
Date: Tue, 1 May 2018 13:12:55 +0000
Subject: [openstack-dev] [Zun][Kolla][Kolla-ansible] Verify Zun deployment in Kolla gate
In-Reply-To:
References:
Message-ID: <5CB2BF71-0A91-49BE-80EF-696063D0577D@cisco.com>

Mark,

The major constraint here is gate memory (which is maxed at 8GB). This is
barely enough to run compute-kit (which is tested).
Now that multiple nodes are a thing, it may be possible to run compute-kit
on one node, and other services on other nodes. (IOW the environment has
changed, and may be more conducive to adding more gating).

Cheers
-steve

From: Mark Goddard
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, April 30, 2018 at 4:34 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [Zun][Kolla][Kolla-ansible] Verify Zun deployment in Kolla gate

Hi,

This is something I've been thinking about recently. In particular, I
noticed a patch go by to fix an issue in the magnum role that has been
broken and fixed previously. Kolla needs to up its game in terms of CI
testing. At the very least, we need tests that verify that services can be
deployed. Even if we don't verify that the deployed service is functional,
this will be an improvement from where we are today.

As with many things, we won't get there in a single leap, but should look
to incrementally improve test coverage, perhaps with a set of milestones
spanning multiple releases. I suggest our first step should be to add a
set of experimental jobs for testing particular services. These would not
run against every patch, but could be invoked on demand by commenting
'check experimental' on a patch in Gerrit. For many services this could be
done simply by setting 'enable_<service>=true' in config.

There are many paths we could take from there, but perhaps this would be
best discussed at the next PTG?

Cheers,
Mark

On Mon, 30 Apr 2018, 14:07 Jeffrey Zhang wrote:

Thanks, Hongbin.

In Kolla, one job is used to test multiple OpenStack services. There are
already two test scenarios:

1. without ceph
2. with ceph

Each scenario tests a series of OpenStack services, like nova, neutron,
cinder, etc. Zun and kuryr are not tested now, but I think it is OK to add
a new scenario to test network-related services, like zun and kuryr.

For tempest testing, there is a WIP bp for this [0]

[0] https://blueprints.launchpad.net/kolla-ansible/+spec/tempest-gate

On Sun, Apr 29, 2018 at 5:14 AM, Hongbin Lu wrote:

Hi Kolla team,

Recently, I saw there are users who tried to install Zun by using
Kolla-ansible and reported bugs to us whenever they ran into issues (e.g.
https://bugs.launchpad.net/kolla-ansible/+bug/1766151). The increase of
this usage pattern (Kolla + Zun) made me think that we need to have CI
coverage to verify the Zun deployment setup by Kolla.

IMHO, the ideal CI workflow should be:
* Create a VM with different distros (i.e. Ubuntu, CentOS).
* Use Kolla-ansible to stand up a Zun deployment.
* Run Zun's tempest test suite [1] against the deployment.

My question for the Kolla team is whether it is reasonable to set up a
Zuul job as described above, or do such CI jobs already exist? If not, how
do we create one?
[1] https://github.com/openstack/zun-tempest-plugin

Best regards,
Hongbin

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com  Tue May  1 13:56:28 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 01 May 2018 09:56:28 -0400
Subject: [openstack-dev] [All] [Elections] Rocky TC Election Results
In-Reply-To:
References:
Message-ID: <1525182877-sup-4920@lrrr.local>

Excerpts from Kendall Nelson's message of 2018-05-01 00:02:36 +0000:
> Hello Everyone :)
>
> Please join me in congratulating the 7 newly elected members of the
> Technical Committee (TC)!
>
> - Thierry Carrez (ttx)
> - Chris Dent (cdent)
> - Sean McGinnis (smcginnis)
> - Davanum Srinivas (dims)
> - Zane Bitter (zaneb)
> - Graham Hayes (mugsie)
> - Mohammed Naser (mnaser)
>
> Full results:
> https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_98430d99fc2ed59d
>
> Election process details and results are also available here:
> https://governance.openstack.org/election/
>
> Thank you to all of the candidates, having a good group of candidates helps
> engage the community in our democratic process.
>
> Thank you to all who voted and who encouraged others to vote. We need to
> ensure your voices are heard!
>
> Thank you for another great round.
>
> -Kendall Nelson (diablo_rojo)
>
> [1] https://review.openstack.org/#/c/565368/
>

Congratulations to the new and returning TC members!

I hope that some of the folks who ran this time but did not make the
cut will join us in discussions and help with initiatives they would
have participated in if they were elected. It is always good for us
to have more perspectives.

Doug

From emilien at redhat.com  Tue May  1 14:03:55 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 1 May 2018 07:03:55 -0700
Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live
In-Reply-To:
References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org>
 <5AE74CF2.9010804@openstack.org>
Message-ID:

On Mon, Apr 30, 2018 at 10:25 AM, Emilien Macchi wrote:

> On Mon, Apr 30, 2018 at 10:05 AM, Jimmy McArthur wrote:
>>
>> It looks like we have a spot held for you, but did not receive
>> confirmation that TripleO would be moving forward with Project Update. If
>> you all will be recording this, we have you down for Wednesday from 11:25 -
>> 11:45am. Just let me know and I'll get it up on the schedule.
>>
>
> This slot is perfect, and I'll run it with one of my tripleo co-workers
> (Alex won't be here).
>

Jimmy, could you please confirm we have the TripleO Project Updates slot?
I don't see it in the schedule.

Thanks,
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Tue May 1 14:08:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 01 May 2018 10:08:23 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" Message-ID: <1525183287-sup-1728@lrrr.local> The TC has had an item on our backlog for a while (a year?) to document "constellations" of OpenStack components to make it easier for deployers and users to understand which parts they need to have the features they want [1]. John Garbutt has started writing the first such document [2], but as we talked about the content we agreed the TC governance repository is not the best home for it, so I have proposed creating a new repository [3]. In order to set up the publishing jobs for that repo so the content goes to docs.openstack.org, we need to settle the ownership of the repository. I think it makes sense for the documentation team to "own" it, but I also think it makes sense for it to have its own review team because it's a bit different from the rest of the docs and we may be able to recruit folks to help who might not want to commit to being core reviewers for all of the documentation repositories. The TC members would also like to be reviewers, to get things going. So, is the documentation team willing to add the new "constellations" repository under their umbrella? Or should we keep it as a TC-owned repository for now? Doug [1] https://storyboard.openstack.org/#!/story/2001702 [2] https://review.openstack.org/565466 [3] https://review.openstack.org/565498 From jimmy at openstack.org Tue May 1 14:18:16 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 01 May 2018 09:18:16 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <5AE74CF2.9010804@openstack.org> Message-ID: <5AE87728.1020804@openstack.org> Apologies for the delay, Emilien! I should be adding it today, but it's definitely yours. > Emilien Macchi > May 1, 2018 at 9:03 AM > > Jimmy, could you please confirm we have the TripleO Project Updates > slot? I don't see it in the schedule. > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Emilien Macchi > April 30, 2018 at 12:25 PM > > This slot is perfect, and I'll run it with one of my tripleo > co-workers (Alex won't be here). > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 12:05 PM > Alex, > > It looks like we have a spot held for you, but did not receive > confirmation that TripleO would be moving forward with Project > Update. If you all will be recording this, we have you down for > Wednesday from 11:25 - 11:45am. Just let me know and I'll get it up > on the schedule. > > Thanks! 
> Jimmy
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Alex Schultz
> April 30, 2018 at 11:52 AM
> On Mon, Apr 30, 2018 at 9:47 AM, Jimmy McArthur wrote:
>> Project Updates are in their own track:
>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223
>>
>
> TripleO is still missing?
>
> Thanks,
> -Alex
>
>> As are SIG, BoF and Working Groups:
>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218
>>
>> Amy Marrich
>> April 30, 2018 at 10:44 AM
>> Emilien,
>>
>> I believe that the Project Updates are separate from the Forum? I know I saw
>> some in the schedule before the Forum submittals were even closed. Maybe
>> contact speaker support, or Jimmy will answer here.
>>
>> Thanks,
>>
>> Amy (spotz)
>>
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>> Emilien Macchi
>> April 30, 2018 at 10:33 AM
>>
>>
>>> Hello all -
>>>
>>> Please take a look here for the posted Forum schedule:
>>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
>>> You should also see it update on your Summit App.
>> Why doesn't TripleO have a project update?
>> Maybe we could combine it with TripleO - Project Onboarding if needed but it
>> would be great to have it advertised as a project update!
>>
>> Thanks,
>> --
>> Emilien Macchi
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> Jimmy McArthur
>> April 27, 2018 at 11:04 AM
>> Hello all -
>>
>> Please take a look here for the posted Forum schedule:
>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
>> You should also see it update on your Summit App.
>>
>> Thank you and see you in Vancouver!
>> Jimmy
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> Jimmy McArthur
> April 30, 2018 at 10:47 AM
> Project Updates are in their own track:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223
>
> As are SIG, BoF and Working Groups:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mgagne at calavera.ca  Tue May  1 14:35:13 2018
From: mgagne at calavera.ca (Mathieu Gagné)
Date: Tue, 1 May 2018 10:35:13 -0400
Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey
In-Reply-To: <20180501083033.GF9259@sanger.ac.uk>
References: <20180501083033.GF9259@sanger.ac.uk>
Message-ID:

Hi Dave,

On Tue, May 1, 2018 at 4:30 AM, Dave Holland wrote:
> On Mon, Apr 30, 2018 at 12:41:21PM -0400, Mathieu Gagné wrote:
>> Weighers for baremetal cells:
>> * ReservedHostForTenantWeigher [7]
> ...
>> [7] Used to favor reserved host over non-reserved ones based on project.
>
> Hello Mathieu,
>
> we are considering writing something like this, for virtual machines,
> not for baremetal. Our use case is that a project buying some compute
> hardware is happy for others to use it, but when the compute "owner"
> wants sole use of it, other projects' instances must be migrated off or
> killed; a scheduler weigher like this might help us to minimise the
> number of instances needing migration or termination at that point.
> Would you be willing to share your source code please?
>

I'm not sure how battle-tested this code is, to be honest, but here it is:
https://gist.github.com/mgagne/659ca02e63779802de6f7aec8cda612a

I had to merge 2 files into one (the weigher and the conf), so I'm not
sure if it still works, but I think you will get the idea.

To use it, you need to define the "reserved_for_tenant_id" Ironic node
property with the project ID to reserve it (through the Ironic API).

This code also assumes you have already filtered out hosts which are
reserved for a different tenant. I included that code in the gist too.

On a side note, our technicians generally use the forced host feature of
Nova to target specific Ironic nodes:
https://docs.openstack.org/nova/pike/admin/availability-zones.html

But if a customer buys and reserves some machines, they should get those
first, before the ones in the "public pool".
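In case it helps, the general shape of the weigher is roughly this (a
trimmed-down sketch, not the exact code from the gist; the property
lookup is a placeholder you would adapt to however your deployment
exposes the Ironic node property to the scheduler):

    from nova.scheduler import weights


    class ReservedHostForTenantWeigher(weights.BaseHostWeigher):
        """Favor hosts reserved for the requesting project."""

        def _weigh_object(self, host_state, weight_properties):
            # weight_properties is the RequestSpec of the boot request.
            project_id = weight_properties.project_id
            reserved_for = self._get_reservation(host_state)
            # Hosts reserved for this project win; all others tie at 0.
            return 1.0 if reserved_for == project_id else 0.0

        def _get_reservation(self, host_state):
            # Placeholder: adapt to wherever the "reserved_for_tenant_id"
            # node property ends up (e.g. in host_state.stats).
            return host_state.stats.get('reserved_for_tenant_id')

You would then add the class to [filter_scheduler]/weight_classes in
nova.conf so the scheduler loads it.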
--
Mathieu

From dougal at redhat.com  Tue May  1 14:57:11 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Tue, 1 May 2018 15:57:11 +0100
Subject: [openstack-dev] [mistral] Mistral Monthly May 2018
Message-ID:

Hey Folks!

Welcome to the first Mistral Monthly newsletter!

# Intro

I am going to try sending out a monthly newsletter summarising what is
going on in the world of Mistral. This is an experiment, so feedback would
be very welcome! If you have something you want me to share for next time,
please let me know (or if I missed something this round, please reply
here).

There is no fixed structure, but that may happen over time. I'll aim to
send it on the 1st of each month.

# Releases

There were quite a few releases in April. I expect there will be a bugfix
release for Queens and Pike this month.

- Rocky
  - Rocky-1 Milestone
    https://docs.openstack.org/releasenotes/mistral/unreleased.html
  - mistral-lib 0.5.0 [We don't publish the release notes, we need to fix that]
  - python-mistralclient 3.4.0
    https://docs.openstack.org/releasenotes/python-mistralclient/unreleased.html
- Queens
  - Mistral 6.0.2 https://docs.openstack.org/releasenotes/mistral/queens.html
- Pike
  - Mistral 5.2.3 https://docs.openstack.org/releasenotes/mistral/pike.html

Rocky-2 is due to be released by June 8th.

# Office Hours

- https://etherpad.openstack.org/p/mistral-office-hours

The Mistral office hours have been happening regularly on Mondays and
Fridays now. Participation has been good and it has seen a wider and more
varied attendance than the weekly Mistral meetings.

We usually chat about bugs and features that people are interested in. If
there is nothing specific, we take the time to do some bug triage.

Please bring yourself and topics to discuss! If none of the current time
slots suit you, please propose an additional one.

# Notable changes and additions

- In Rocky we will have support for Swiftservice and Vitrage actions.
- Workflow executions can no longer be deleted while they are still
  running unless the force parameter is provided. This is a backwards
  incompatible change, but makes the default behaviour much safer.
- Support for py_mini_racer, an easier-to-install JavaScript
  implementation, was added. Users now have the choice of pyv8, v8eval or
  py_mini_racer.

# Milestones, Reviews, Bugs and Blueprints

(This will be more interesting in the next edition, when we can see what
changes. Stackalytics is also down, so I can't grab all the stats I
wanted)

- We currently have 109 open bugs
- We now have zero untriaged bugs! This is largely due to the
  collaboration during office hours.
- Two bugs are "critical"
- The number of open bugs reduced from around 180 a month ago
- 44 bugs are targeted towards Rocky-2 - that is ambitious to say the
  least. During Rocky-1 we fixed 23 bugs
- 8 blueprints are targeted for Rocky-2 (3 were implemented in Rocky-1)

That's all for this time. See you next month!

Dougal

[If you get this far, please ping d0ugal on freenode and let me know. I am
just curious to know roughly how many folks are reading. Thanks!]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brad at redhat.com  Tue May  1 15:12:59 2018
From: brad at redhat.com (Brad P. Crochet)
Date: Tue, 01 May 2018 15:12:59 +0000
Subject: [openstack-dev] [mistral] Help with test run
In-Reply-To:
References:
Message-ID:

On Fri, Apr 27, 2018 at 5:23 AM András Kövi wrote:

> Hi,
>
> Can someone please help me with why this build ended with TIMED_OUT?
> http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/

I'm not seeing any particular reason for it. Is it happening consistently?

> Thanks,
> Andras
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-- 
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org  Tue May  1 16:11:51 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 01 May 2018 09:11:51 -0700
Subject: [openstack-dev] [mistral] Help with test run
In-Reply-To:
References:
Message-ID: <1525191111.2120341.1357066832.0E6B2880@webmail.messagingengine.com>

On Fri, Apr 27, 2018, at 2:22 AM, András Kövi wrote:
> Hi,
>
> Can someone please help me with why this build ended with TIMED_OUT?
> http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/

Reading the job log, the job setup only took a few minutes. Then the
unittests start and run continuously until the timeout happens at 30
minutes. Chances are that the default 30-minute timeout is not sufficient
for this job. Runtime may vary based on cloud region and the presence of
noisy neighbors.

As for making this more reliable, you can increase the timeout in the job
configuration for that job. Another approach would be to make the
unittests run more quickly. I notice the job is hard-coded to use
concurrency=1 when invoking the test runner, so you are only using ~1/8 of
the available CPUs. You might try increasing this value, though you will
likely need to make sure the tests don't conflict with each other.

Clark

From corvus at inaugust.com  Tue May  1 17:02:32 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 01 May 2018 10:02:32 -0700
Subject: [openstack-dev] Overriding project-templates in Zuul
In-Reply-To: (Joshua Hesketh's message of "Tue, 1 May 2018 13:43:23 +1000")
References: <87o9i04rfa.fsf@meyer.lemoncheese.net>
Message-ID: <87bmdzwbpz.fsf@meyer.lemoncheese.net>

Joshua Hesketh writes:

> I might be misunderstanding at which point a job is chosen to be run and
> therefore when it's too late to dissuade it. However, if possible, would it
> make more sense for the project-local copy of a job to overwrite the
> supplied files and irrelevant-files? This would allow a project to run a
> job when it otherwise doesn't match.

Imagine that a project with branches has a job added via a template.

project-config/zuul.yaml@master:

  - job:
      name: my-job
      vars: {jobvar: true}

  - project-template:
      name: myjobs
      check:
        jobs:
          - my-job:
              vars: {templatevar: true}

project/zuul.yaml@master:

  - project:
      templates:
        - myjobs
      check:
        jobs:
          - my-job:
              vars: {projectvar: true}

project/zuul.yaml@stable:

  - project:
      templates:
        - myjobs
      check:
        jobs:
          - my-job:
              vars: {projectvar: true}

The resulting project config is:

- project:
    jobs:
      - my-job (branches: master; project-local job)
      - my-job (branches: master; project-template job)
      - my-job (branches: stable; project-local job)
      - my-job (branches: stable; project-template job)

When Zuul decides what to run, it goes through each of those in order,
evaluates their matchers, and pulls in parents and their variants for
each that matches.
So a change on the master branch would collect the following variants to
apply:

  my-job (branch: master; project-local job)
    my-job (job)
    base (job)
  my-job (branch: master; project-template job)
    my-job (job)
    base (job)

It would then apply them in this order:

  base (job)
  my-job (job)
  my-job (branch: master; project-template job)
  my-job (branch: master; project-local job)

Further restricting a project-local job with a "files:" matcher would
cause the project-local job not to match, but the project-template job
will still match, so the job gets run. That's the situation we have
today, which is what I meant by "it's too late to dissuade it".

Regarding the suggestion to overwrite it, we would need to decide which
of the possible variants to overwrite. Keep in mind that there are 3
independent matchers operating on all the variants (branches, files,
irrelevant-files). Does a project-local job with a "files:" matcher
overwrite all of the variants? Just the ones which match the same
branch? That would probably be the most reasonable thing to do.

In my mind, that effectively splits the matchers into two categories:
branch matchers, and file matchers. And they would behave differently.
Zuul could collect the variants as above, considering only the branch
matchers. It could then apply all of the variants in the normal manner,
treating files and irrelevant-files as normal attributes which can be
overwritten. Then, once it has composed the job to run based on all the
matching variants, it would only *then* evaluate the files matchers. If
they don't match, then it would not run the job after all.

I think that's a very reasonable way to solve the issue as well, and I
believe it would match people's expectations. Ultimately, the outcome
will be very similar to the proposal I made except that rather than
being combined, the matchers will be overwritten. That means that if you
want to expand the set of irrelevant-files for a job, you would have to
copy the set from the parent.

There's one other aspect to consider -- it's possible to create a job
like this:

- job:
    name: doc-job

- job:
    name: doc-job
    files: docs/index.rst
    vars: {rebuild_index: true}

Which means: there's a normal docs job with no variables, but if
docs/index.rst is changed, set the rebuild_index variable to true.

Either approach (combine vs overwrite) eliminates the ability to do this
within a project or project-template stanza. But the "combine" approach
still lets us do this at the job level. We could still support this in
the overwrite approach, however, I think it might be simpler to work
with if we eliminated it as well and just always treated files and
irrelevant-files matchers as overwriteable attributes. It would no
longer be possible to implement the above example, but I'm not sure it's
that useful anyway?

> What happens when something is in both files and irrelevant-files? If the
> project-template is trying to say A is in 'files', but the project-local
> says A is in 'irrelevant-files', should that overwrite it?

I think my statement and table below were erroneous:

>> This effectively causes the "files" and "irrelevant-files" attributes on
>> all of the project-local job definitions matching a given branch to be
>> combined. The combination of multiple files matchers behaves as a
>> union, and irrelevant-files matchers as an intersection.
>> >> ================ ======== ======= ======= >> Matcher Template Project Result >> ================ ======== ======= ======= >> files AB BC ABC >> irrelevant-files AB BC B >> ================ ======== ======= ======= I think in actuality, both operations would end up as intersections: ================ ======== ======= ======= Matcher Template Project Result ================ ======== ======= ======= files AB BC B irrelevant-files AB BC B ================ ======== ======= ======= So with the "combine" method, it's always possible to further restrict where the job runs, but never to expand it. As to your question about what if something appears in both -- in my "combine" proposal, I would continue to treat them independently, so if a project-local variant has an "irrelevant-files:" matcher, and a project-template variant has a "files:" matcher, then the usual nonsense would happen: they'd both have to match. So a job with "files: tests/" and "irrelevant-files: docs/" would never run because it's impossible to satisfy both. That actually matches some current behavior we have -- a job must match a "real" job definition as well as at least one project-template or project-local job definition, so a "job" with "files: tests/" and a project-template with "irrelevant-files: docs/" will behave the same way today. However, if we go with the "overwrite" proposal, since we're altering the behavior of the non-branch matchers anyway, that might be a fine time to pair them up and treat them as a unit. Then, whichever is the latest would win. So a job with "files" and a project-template with "irrelevant-files" would end up only having an irrelevant-files attribute. Then a further project-local stanza with only "files" would eliminate the irrelevant-files attribute leaving only the files value. Okay, let's summarize: Proposal 1: All project-template and project-local job variants matching the item's branch must also match the item. * Files and irrelevant-files on project-template and project stanzas are essentially combined in a set intersection. * It's possible to further reduce the scope of jobs, but not expand. * Files and irrelevant-files are still independent matchers, and if both are present, both must match. * It's not possible to alter a job attribute by adding a project-local variant with only a files matcher (it would cause the whole job to run or not run). But it's still possible to do that in the main job definition itself. Proposal 2: Files and irrelevant-files are treated as overwriteable attributes and evaluated after branch-matching variants are combined. * Files and irrelevant-files are overwritten, so the last value encountered when combining all the matching variants (looking only at branches) wins. * Files and irrelevant-files will be treated as a pair, so that if "irrelevant-files" appears, it will erase a previous "files" attribute. * It's possible to both reduce and expand the scope of jobs, but the user may need to manually copy values from a parent or other variant in order to do so. * It will no longer be possible to alter a job attribute by adding a variant with only a files matcher -- in all cases files and irrelevant-files are used solely to determine whether the job is run, not to determine whether to apply a variant. I think both would be good solutions to the problem. 
The key points for me are whether we want to keep the "alter a job attribute with variant with a files matcher" functionality (the "rebuild_index" example from above), and whether the additional control of overwriting the matchers (at the cost of redundancy in configuration) is preferable to combining the matchers. -Jim From emilien at redhat.com Tue May 1 17:22:55 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 1 May 2018 10:22:55 -0700 Subject: [openstack-dev] Overriding project-templates in Zuul In-Reply-To: <87bmdzwbpz.fsf@meyer.lemoncheese.net> References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net> Message-ID: On Tue, May 1, 2018 at 10:02 AM, James E. Blair wrote: [...] > Okay, let's summarize: > > Proposal 1: All project-template and project-local job variants matching > the item's branch must also match the item. > > * Files and irrelevant-files on project-template and project stanzas are > essentially combined in a set intersection. > * It's possible to further reduce the scope of jobs, but not expand. > * Files and irrelevant-files are still independent matchers, and if both > are present, both must match. > * It's not possible to alter a job attribute by adding a project-local > variant with only a files matcher (it would cause the whole job to run > or not run). But it's still possible to do that in the main job > definition itself. > > Proposal 2: Files and irrelevant-files are treated as overwriteable > attributes and evaluated after branch-matching variants are combined. > > * Files and irrelevant-files are overwritten, so the last value > encountered when combining all the matching variants (looking only at > branches) wins. > * Files and irrelevant-files will be treated as a pair, so that if > "irrelevant-files" appears, it will erase a previous "files" > attribute. > * It's possible to both reduce and expand the scope of jobs, but the > user may need to manually copy values from a parent or other variant > in order to do so. > * It will no longer be possible to alter a job attribute by adding a > variant with only a files matcher -- in all cases files and > irrelevant-files are used solely to determine whether the job is run, > not to determine whether to apply a variant. > > I think both would be good solutions to the problem. The key points for > me are whether we want to keep the "alter a job attribute with variant > with a files matcher" functionality (the "rebuild_index" example from > above), and whether the additional control of overwriting the matchers > (at the cost of redundancy in configuration) is preferable to combining > the matchers. > In the case of TripleO, I think proposal 2 is what we want. We have stanzas defined in the templates definitions in openstack-infra/tripleo-ci repo, but really want to override the file rules per repo (openstack/tripleo-quickstart for example) and I don't think we want to have them both matching but so the last value encountered would win. I'll let TripleO CI squad to give more thoughts though. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Tue May 1 17:53:12 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 01 May 2018 10:53:12 -0700 Subject: [openstack-dev] Overriding project-templates in Zuul In-Reply-To: <87bmdzwbpz.fsf@meyer.lemoncheese.net> (James E. 
Blair's message of "Tue, 01 May 2018 10:02:32 -0700") References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net> Message-ID: <87k1snuut3.fsf@meyer.lemoncheese.net> corvus at inaugust.com (James E. Blair) writes: > So a job with "files: tests/" and "irrelevant-files: docs/" would > never run because it's impossible to satisfy both. Jeremy pointed out in IRC that that's not what would happen. So... let me rephrase that: > So a job with "files: tests/" and "irrelevant-files: docs/" would do > whatever it is that happens when you specify both. In this case, I'm pretty sure that would mean it reduces to just "files: tests/", but I've never claimed to understand irrelevant-files and I won't start now. Anyway, the main point is that Proposal 1 doesn't change the current behavior which is "everything must match" and Proposal 2 does, meaning you only get one or the other. -Jim From whayutin at redhat.com Tue May 1 19:29:08 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 01 May 2018 19:29:08 +0000 Subject: [openstack-dev] Overriding project-templates in Zuul In-Reply-To: References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net> Message-ID: On Tue, May 1, 2018 at 1:23 PM Emilien Macchi wrote: > On Tue, May 1, 2018 at 10:02 AM, James E. Blair > wrote: > [...] > > Okay, let's summarize: >> >> Proposal 1: All project-template and project-local job variants matching >> the item's branch must also match the item. >> >> * Files and irrelevant-files on project-template and project stanzas are >> essentially combined in a set intersection. >> * It's possible to further reduce the scope of jobs, but not expand. >> * Files and irrelevant-files are still independent matchers, and if both >> are present, both must match. >> * It's not possible to alter a job attribute by adding a project-local >> variant with only a files matcher (it would cause the whole job to run >> or not run). But it's still possible to do that in the main job >> definition itself. >> >> Proposal 2: Files and irrelevant-files are treated as overwriteable >> attributes and evaluated after branch-matching variants are combined. >> >> * Files and irrelevant-files are overwritten, so the last value >> encountered when combining all the matching variants (looking only at >> branches) wins. >> * Files and irrelevant-files will be treated as a pair, so that if >> "irrelevant-files" appears, it will erase a previous "files" >> attribute. >> * It's possible to both reduce and expand the scope of jobs, but the >> user may need to manually copy values from a parent or other variant >> in order to do so. >> * It will no longer be possible to alter a job attribute by adding a >> variant with only a files matcher -- in all cases files and >> irrelevant-files are used solely to determine whether the job is run, >> not to determine whether to apply a variant. >> >> I think both would be good solutions to the problem. The key points for >> me are whether we want to keep the "alter a job attribute with variant >> with a files matcher" functionality (the "rebuild_index" example from >> above), and whether the additional control of overwriting the matchers >> (at the cost of redundancy in configuration) is preferable to combining >> the matchers. >> > > In the case of TripleO, I think proposal 2 is what we want. 
> We have stanzas defined in the templates definitions in > openstack-infra/tripleo-ci repo, but really want to override the file rules > per repo (openstack/tripleo-quickstart for example) and I don't think we > want to have them both matching but so the last value encountered would win. > I'll let TripleO CI squad to give more thoughts though. > > Thanks, > -- > Emilien Macchi > I agree, Proposal #2 makes the most sense to me and seems more straight forward in that you have the ability to override and that the project overriding would need to handle both files and irrelevant-files from scratch. Nice write up > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Tue May 1 19:51:19 2018 From: aj at suse.com (Andreas Jaeger) Date: Tue, 1 May 2018 21:51:19 +0200 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <1525183287-sup-1728@lrrr.local> References: <1525183287-sup-1728@lrrr.local> Message-ID: On 05/01/2018 04:08 PM, Doug Hellmann wrote: > The TC has had an item on our backlog for a while (a year?) to > document "constellations" of OpenStack components to make it easier > for deployers and users to understand which parts they need to have > the features they want [1]. > > John Garbutt has started writing the first such document [2], but > as we talked about the content we agreed the TC governance repository > is not the best home for it, so I have proposed creating a new > repository [3]. > > In order to set up the publishing jobs for that repo so the content > goes to docs.openstack.org, we need to settle the ownership of the > repository. > > I think it makes sense for the documentation team to "own" it, but > I also think it makes sense for it to have its own review team > because it's a bit different from the rest of the docs and we may > be able to recruit folks to help who might not want to commit to > being core reviewers for all of the documentation repositories. The > TC members would also like to be reviewers, to get things going. > > So, is the documentation team willing to add the new "constellations" > repository under their umbrella? Or should we keep it as a TC-owned > repository for now? I'm fine having it as parts of the docs team. The docs PTL should be part of the review team for sure, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From melwittt at gmail.com Tue May 1 19:56:52 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 1 May 2018 12:56:52 -0700 Subject: [openstack-dev] [nova] Virtuozzo CI status Message-ID: Hi Stackers, Lately, I noticed the Virtuozzo CI has been having some problems, for example on a recent example run [0], the job link is broken: "The requested URL /22/563722/4/check/check-dsvm-tempest-vz7-exe-minimal/d1d1707 was not found on this server." Prior to that, I noticed that the image the job was using wasn't passing the ImagePropertiesFilter and failing to have a successful run because of it. 
Can anyone from the Virtuozzo subteam comment on the status of the third
party CI?

Thanks,
-melanie

[0] https://review.openstack.org/563722

From allprog at gmail.com  Tue May  1 20:14:24 2018
From: allprog at gmail.com (=?Windows-1252?Q?Andr=E1s_K=F6vi?=)
Date: Tue, 1 May 2018 20:14:24 +0000
Subject: Re: [openstack-dev] [mistral] Help with test run
In-Reply-To: 
References: ,
Message-ID: 

Thanks Brad for the check. Yes, it's quite consistent.

_____________________________
From: Brad P. Crochet
Sent: Tuesday, May 1, 2018 5:13 PM
Subject: Re: [openstack-dev] [mistral] Help with test run
To: OpenStack Development Mailing List (not for usage questions)

On Fri, Apr 27, 2018 at 5:23 AM András Kövi > wrote:
Hi,

Can someone please help me with why this build ended with TIMED_OUT?
http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/

I'm not seeing any particular reason for it. Is it happening consistently?

Thanks,
Andras

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-- 
Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
Principal Software Engineer
(c) 704.236.9385
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From allprog at gmail.com  Tue May  1 20:19:04 2018
From: allprog at gmail.com (=?Windows-1252?Q?Andr=E1s_K=F6vi?=)
Date: Tue, 1 May 2018 20:19:04 +0000
Subject: Re: [openstack-dev] [mistral] Help with test run
In-Reply-To: <1525191111.2120341.1357066832.0E6B2880@webmail.messagingengine.com>
References: ,
 <1525191111.2120341.1357066832.0E6B2880@webmail.messagingengine.com>
Message-ID: 

Thanks Clark,

I'm pretty sure the UTs would conflict with each other, so raising the
concurrency is probably a big undertaking. Though it's definitely worth
looking into in the future. Raising the job timeout a little bit may be
the simplest solution.

Thanks again,
Andras

_____________________________
From: Clark Boylan
Sent: Tuesday, May 1, 2018 6:12 PM
Subject: Re: [openstack-dev] [mistral] Help with test run
To: 

On Fri, Apr 27, 2018, at 2:22 AM, András Kövi wrote:
> Hi,
>
> Can someone please help me with why this build ended with TIMED_OUT?
> http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/

Reading the job log the job setup only took a few minutes. Then the
unittests start and are running continuously until the timeout happens at
30 minutes. Chances are that the default 30 minute timeout is not
sufficient for this job. Runtime may vary based on cloud region and
presence of noisy neighbors.

As for making this more reliable you can increase the timeout in the job
configuration for that job. Another approach would be to make the
unittests run more quickly. I notice the job is hard coded to use
concurrency=1 when invoking the test runner so you are only using ~1/8 of
the available cpus. You might try increasing this value though will
likely need to make sure the tests don't conflict with each other.

Clark

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jimmy at openstack.org Tue May 1 20:20:51 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 01 May 2018 15:20:51 -0500 Subject: [openstack-dev] OpenStack PTG Update Message-ID: <5AE8CC23.20909@openstack.org> Hey everyone, Good news! We have extended the early bird registration for the upcoming PTG in Denver to May 18, 6:59 UTC. After that time, the price will increase from USD $199 to USD $399. Please keep in mind that the OpenStack Foundation doesn’t profit on these events. Our goal is to provide the absolute best community experience/opportunity/value for the money. If you are concerned about cost and your organization will not fund your travel, you can apply for Travel Support . If we can answer any further questions about the cost or cost increase, just let us know. Thank you and we look forward to seeing you in Denver! Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Tue May 1 20:21:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 01 May 2018 16:21:09 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: References: <1525183287-sup-1728@lrrr.local> Message-ID: <1525206014-sup-535@lrrr.local> Excerpts from Andreas Jaeger's message of 2018-05-01 21:51:19 +0200: > On 05/01/2018 04:08 PM, Doug Hellmann wrote: > > The TC has had an item on our backlog for a while (a year?) to > > document "constellations" of OpenStack components to make it easier > > for deployers and users to understand which parts they need to have > > the features they want [1]. > > > > John Garbutt has started writing the first such document [2], but > > as we talked about the content we agreed the TC governance repository > > is not the best home for it, so I have proposed creating a new > > repository [3]. > > > > In order to set up the publishing jobs for that repo so the content > > goes to docs.openstack.org, we need to settle the ownership of the > > repository. > > > > I think it makes sense for the documentation team to "own" it, but > > I also think it makes sense for it to have its own review team > > because it's a bit different from the rest of the docs and we may > > be able to recruit folks to help who might not want to commit to > > being core reviewers for all of the documentation repositories. The > > TC members would also like to be reviewers, to get things going. > > > > So, is the documentation team willing to add the new "constellations" > > repository under their umbrella? Or should we keep it as a TC-owned > > repository for now? > > I'm fine having it as parts of the docs team. The docs PTL should be > part of the review team for sure, > > Andreas Yeah, I wasn't really clear there: I intend to set up the documentation and TC teams as members of the new team, so that all members of both groups can be reviewers of the new repository. Doug From davanum at gmail.com Tue May 1 20:48:13 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 1 May 2018 16:48:13 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <1525206014-sup-535@lrrr.local> References: <1525183287-sup-1728@lrrr.local> <1525206014-sup-535@lrrr.local> Message-ID: On Tue, May 1, 2018 at 4:21 PM, Doug Hellmann wrote: > Excerpts from Andreas Jaeger's message of 2018-05-01 21:51:19 +0200: >> On 05/01/2018 04:08 PM, Doug Hellmann wrote: >> > The TC has had an item on our backlog for a while (a year?) 
to >> > document "constellations" of OpenStack components to make it easier >> > for deployers and users to understand which parts they need to have >> > the features they want [1]. >> > >> > John Garbutt has started writing the first such document [2], but >> > as we talked about the content we agreed the TC governance repository >> > is not the best home for it, so I have proposed creating a new >> > repository [3]. >> > >> > In order to set up the publishing jobs for that repo so the content >> > goes to docs.openstack.org, we need to settle the ownership of the >> > repository. >> > >> > I think it makes sense for the documentation team to "own" it, but >> > I also think it makes sense for it to have its own review team >> > because it's a bit different from the rest of the docs and we may >> > be able to recruit folks to help who might not want to commit to >> > being core reviewers for all of the documentation repositories. The >> > TC members would also like to be reviewers, to get things going. >> > >> > So, is the documentation team willing to add the new "constellations" >> > repository under their umbrella? Or should we keep it as a TC-owned >> > repository for now? >> >> I'm fine having it as parts of the docs team. The docs PTL should be >> part of the review team for sure, >> >> Andreas > > Yeah, I wasn't really clear there: I intend to set up the documentation > and TC teams as members of the new team, so that all members of both > groups can be reviewers of the new repository. +1 Doug > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From ianyrchoi at gmail.com Tue May 1 21:26:37 2018 From: ianyrchoi at gmail.com (Ian Y. Choi) Date: Tue, 1 May 2018 14:26:37 -0700 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: References: <1525183287-sup-1728@lrrr.local> <1525206014-sup-535@lrrr.local> Message-ID: > >> > So, is the documentation team willing to add the new "constellations" > >> > repository under their umbrella? Or should we keep it as a TC-owned > >> > repository for now? > >> > >> I'm fine having it as parts of the docs team. The docs PTL should be > >> part of the review team for sure, > >> > >> Andreas > > > > Yeah, I wasn't really clear there: I intend to set up the documentation > > and TC teams as members of the new team, so that all members of both > > groups can be reviewers of the new repository. > > +1 Doug > > I think a new specialty team in Docs team structure would fit well into this purpose : https://docs.openstack.org/doc-contrib-guide/team-structure.html Note that the purpose of including docs core members is mentioned in: http://lists.openstack.org/pipermail/openstack-docs/2016-June/008760.html . With many thanks, /Ian -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue May 1 21:37:12 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 01 May 2018 21:37:12 +0000 Subject: [openstack-dev] [api] REST limitations and GraghGL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> Message-ID: Ok, here are my two cents regarding GraphQL integration within Openstack and some thoughts around this topic. 
1°/- Openstack SDK should still exist and should be in my humble opinion a critical focus as it allow following benefits for large and medium companies : • It provide a common and clean structure for Openstack developments and should be used either by projects or tools willing to integrate Openstack as it will then create some sort of standard. For instance, here in my job we have A LOT (More than 10 000 peoples working within around 130 teams) of teams developing over Openstack using the SDK as a common shared base layout. That allow for teams to easily share and co-develop on projects. Those teams are spread around the world and so need to have clean guidelines as it avoid them reinventing the wheel, they’re not stuck with someone else obscure code created by another persons on the other side of the world or within a different timezone. Additionally it streamline our support and debug processes. • We should get a common consensus before all projects start to implement it. This point is for me the most important one as it will fix flaws we get currently with the rest APIs development within Openstack. First it will avoid a fresh developer to be confused by too many options. Honestly, I know we are open etc, but this point really need to be addressed as it is the main issue that I face with Openstack advocacy since many years now. Having too many options even if explained within the documentation daunt a lot of people to quickly give a hand with projects. For instance I have a workmate that is currently working on an internal tool which ask me how should he implement its project REST interfaces. I told him TO NOT use WSME and to stick with what have been done by a major project. Unfortunately he choose to copy what have been done by Octavia which is actually using... WSME... GraphQL gives us the opportunity and ability to fix Openstack development inconsistencies by providing and enforcing a clean guideline regarding which library should be used and in which way. That would also have the side effect to easy the entry level for a new Openstack developer. • New architecture opportunities. For sure that will bring new architecture opportunities, but the broker thing is not a good idea as each project should be able to be autonomous. I personally don’t like centralized services as it bring SPOF. Let’s take the AMQP example. For now most of Openstack deployments use a RabbitMQ or broker like system. Even if each (well at least major vanilla projects) services can (and should) use ZeroMQ. I do myself use RabbitMQ but my last weeks were so much debugging/investigation hell that we now plan to have a serious benchmark and test of ZMQ. One thing that I would love to see with GraphQL is a better distributed and traceable model. Anyway, I’m glad someone started this discussion as I feel it is a really important topic that would highly help Openstack on more than just interfacing topics. Le mar. 1 mai 2018 à 05:00, Gilles Dubreuil a écrit : > > > On 01/05/18 11:31, Flint WALRUS wrote: > > Yes, that’s was indeed the sens of my point. > > > I was just enforcing it, no worries! ;) > > > > Openstack have to provide both endpoints type for a while for backward > compatibility in order to smooth the transition. > > For instance, that would be a good idea to contact postman devteam once > GraphQL will start to be integrated as it will allow a lot of ops to keep > their day to day tools by just having to convert their existing collections > of handful requests. 
> > > Shouldn't we have a common consensus before any project start pushing its > own GraphQL wheel? > > Also I wonder how GraphQL could open new architecture avenues for > OpenStack. > For example, would that make sense to also have a GraphQL broker linking > OpenStack services? > > > > > Or alternatively to provide a tool with similar features at least. > Le mar. 1 mai 2018 à 03:18, Gilles Dubreuil a > écrit : > >> >> >> On 30/04/18 20:16, Flint WALRUS wrote: >> >> I would very much second that question! Indeed it have been one of my own >> wondering since many times. >> >> Of course GraphQL is not intended to replace REST as is and have to live >> in parallel >> >> >> Effectively a standard initial architecture is to have GraphQL sitting >> aside (in parallel) and wrapping REST and along the way develop GrapgQL >> Schema. >> >> It's seems too early to tell but GraphQL being the next step in API >> evolution it might ultimately replace REST. >> >> >> but it would likely and highly accelerate all requests within heavily >> loaded environments >> >> >> +1 >> >> >> . >> >> So +1 for this question. >> Le lun. 30 avr. 2018 à 05:53, Gilles Dubreuil a >> écrit : >> >>> Hi, >>> >>> Remember Boston's Summit presentation [1] about GraphQL [2] and how it >>> addresses REST limitations. >>> I wonder if any project has been thinking about using GraphQL. I haven't >>> find any mention or pointers about it. >>> >>> GraphQL takes a complete different approach compared to REST. So we can >>> finally forget about REST API Description languages >>> (OpenAPI/Swagger/WSDL/WADL/JSON-API/ETC) and HATEOS (the hypermedia >>> approach which doesn't describe how to use it). >>> >>> So, once passed the point where 'REST vs GraphQL' is like comparing SQL >>> and no-SQL DBMS and therefore have different applications, there are no >>> doubt the complexity of most OpenStack projects are good candidates for >>> GraphQL. >>> >>> Besides topics such as efficiency, decoupling, no version management >>> need there many other powerful features such as API Schema out of the >>> box and better automation down that track. >>> >>> It looks like the dream of a conduit between API services and consumers >>> might have finally come true so we could move-on an worry about other >>> things. >>> >>> So has anyone already starting looking into it? >>> >>> [1] >>> >>> https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql >>> [2] http://graphql.org >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> -- >> Gilles Dubreuil >> Senior Software Engineer - Red Hat - Openstack DFG Integration >> Email: gilles at redhat.com >> GitHub/IRC: gildub >> Mobile: +61 400 894 219 >> >> >> > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From arvindn05 at gmail.com  Tue May  1 22:26:58 2018
From: arvindn05 at gmail.com (Arvind N)
Date: Tue, 1 May 2018 15:26:58 -0700
Subject: Re: [openstack-dev] [nova][placement] Trying to summarize
 bp/glance-image-traits scheduling alternatives for rebuild
In-Reply-To: 
References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com>
 <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com>
 <530903a4-701d-595e-acc3-05369697cf06@gmail.com>
Message-ID: 

Reminder for operators: please provide feedback either way.

In the case of rebuilding an instance using a different image, where the
image traits have changed between the original launch and the rebuild, is
it reasonable to ask users to just re-launch a new instance with the new
image?

The argument for this approach is that, given that the requirements have
changed, we want the scheduler to pick and allocate the appropriate host
for the instance.

The approach above also gives you consistent results vs the other
approaches, where the rebuild may or may not succeed depending on how the
original allocation of resources went.

For example (from Alex Xu), if you launched an instance on a host which
has two SRIOV nics: one is a normal SRIOV nic (A), the other one has some
kind of offload feature (B).

So, the original request is: resources=SRIOV_VF:1

The instance gets a VF from the normal SRIOV nic (A).

But with a new image, the new request is:
resources=SRIOV_VF:1 traits=HW_NIC_OFFLOAD_XX

With all the solutions discussed in the thread, a rebuild request like the
above may or may not succeed depending on whether nic A or nic B was
allocated during the initial launch.

Remember that in a rebuild, new allocations don't happen; we have to reuse
the existing allocations.

Given the above background, there seem to be 2 competing options.

1. Fail in the API saying you can't rebuild with a new image with new
required traits.

2. Look at the current allocations for the instance and try to match the
new requirements from the image with the allocations.

With #1, we get consistent results in regards to how rebuilds are treated
when the image traits change.

With #2, the rebuild may or may not succeed, depending on how well the
original allocations match up with the new requirements. #2 will also
need to account for handling preferred traits or granular resource traits
if we decide to implement them for images at some point...

[1] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html
[2] https://review.openstack.org/#/c/560718/

On Tue, Apr 24, 2018 at 6:26 AM, Sylvain Bauza wrote:

> Sorry folks for the late reply, I'll try to also weigh in the Gerrit
> change.
>
> On Tue, Apr 24, 2018 at 2:55 PM, Jay Pipes wrote:
>
>> On 04/23/2018 05:51 PM, Arvind N wrote:
>>
>>> Thanks for the detailed options Matt/eric/jay.
>>>
>>> Just few of my thoughts,
>>>
>>> For #1, we can make the explanation very clear that we rejected the
>>> request because the original traits specified in the original image and the
>>> new traits specified in the new image do not match and hence rebuild is not
>>> supported.
>>>
>>
>> I believe I had suggested that on the spec amendment patch. Matt had
>> concerns about an error message being a poor user experience (I don't
>> necessarily disagree with that) and I had suggested a clearer error message
>> to try and make that user experience slightly less sucky.
>>
>> For #3,
>>>
>>> Even though it handles the nested provider, there is a potential issue.
>>>
>>> Lets say a host with two SRIOV nic.
One is normal SRIOV nic(VF1), >>> another one with some kind of offload feature(VF2).(Described by alex) >>> >>> Initial instance launch happens with VF:1 allocated, rebuild launches >>> with modified request with traits=HW_NIC_OFFLOAD_X, so basically we want >>> the instance to be allocated VF2. >>> >>> But the original allocation happens against VF1 and since in rebuild the >>> original allocations are not changed, we have wrong allocations. >>> >> >> Yep, that is certainly an issue. The only solution to this that I can see >> would be to have the conductor ask the compute node to do the pre-flight >> check. The compute node already has the entire tree of providers, their >> inventories and traits, along with information about providers that share >> resources with the compute node. It has this information in the >> ProviderTree object in the reportclient that is contained in the compute >> node resource tracker. >> >> The pre-flight check, if run on the compute node, would be able to grab >> the allocation records for the instance and determine if the required >> traits for the new image are present on the actual resource providers >> allocated against for the instance (and not including any child providers >> not allocated against). >> >> > Yup, that. We also have pre-flight checks for move operations like live > and cold migrations, and I'd really like to keep all the conditionals in > the conductor, because it knows better than the scheduler which operation > is asked. > I'm not really happy with adding more in the scheduler about "yeah, it's a > rebuild, so please do something exceptional", and I'm also not happy with > having a filter (that can be disabled) calling the Placement API. > > >> Or... we chalk this up as a "too bad" situation and just either go with >> option #1 or simply don't care about it. > > > Also, that too. Maybe just provide an error should be enough, nope? > Operators, what do you think ? (cross-calling openstack-operators@) > > -Sylvain > > >> >> Best, >> -jay >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Arvind N -------------- next part -------------- An HTML attachment was scrubbed... URL: From najoy at cisco.com Tue May 1 22:41:02 2018 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Tue, 1 May 2018 22:41:02 +0000 Subject: [openstack-dev] networking-vpp 18.04 for VPP 18.04 is now available Message-ID: <44659D6E-E9CA-44A8-9F06-BA52BDE77215@cisco.com> Hello Everyone, In conjunction with the release of VPP 18.04, we'd like to invite you all to try out networking-vpp 18.04 for VPP 18.04. VPP is a fast userspace forwarder based on the DPDK toolkit, and uses vector packet processing algorithms to minimize the CPU time spent on each packet and maximize throughput. networking-vpp is a ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. 
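If you want a quick taste before digging in, a minimal devstack
local.conf along these lines is roughly what to aim for -- treat this as
an illustrative sketch only, the exact option names may differ, and the
README mentioned below is the authoritative reference:

  [[local|localrc]]
  # the devstack plugin deploys both the mechanism driver and VPP itself
  enable_plugin networking-vpp https://github.com/openstack/networking-vpp
  # select the vpp ML2 mechanism driver
  Q_PLUGIN=ml2
  Q_ML2_PLUGIN_MECHANISM_DRIVERS=vpp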
This version has a few additional enhancements and several bug fixes, along with supporting the VPP 18.04 APIs: - L3 HA is fully supported for VLAN, VXLAN-GPE and Flat Network Types - IPv6 VM addressing supported for VXLAN-GPE - Deadlock prevention in eventlet Along with this, there have been the usual upkeep as Neutron versions change, bug fixes, code and test improvements. The README [1] explains how you can try out VPP and networking-vpp using devstack: the devstack plugin will deploy the mechanism driver and VPP itself and should give you a working system with a minimum of hassle. It will use the etcd version deployed by newer versions of devstack. We will be continuing our development between now and VPP's 18.07 release. There are several features we're planning to work on (you'll find a list in our RFE bugs at [2]), and we welcome anyone who would like to come help us. Everyone is welcome to join our biweekly IRC meetings, held every other Monday, 0800 PST = 1600 GMT. -- Naveen & Ian [1]https://github.com/openstack/networking-vpp/blob/master/README.rst [2]http://goo.gl/i3TzAt -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevinzs2048 at gmail.com Wed May 2 01:11:22 2018 From: kevinzs2048 at gmail.com (Shuai Zhao) Date: Wed, 2 May 2018 09:11:22 +0800 Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun In-Reply-To: <0512CBBECA36994BAA14C7FEDE986CA6042947EB@BGSMSX102.gar.corp.intel.com> References: <0512CBBECA36994BAA14C7FEDE986CA6042947EB@BGSMSX102.gar.corp.intel.com> Message-ID: Thanks Hongbin, The article is really great! On Mon, Apr 30, 2018 at 2:40 PM, Kumari, Madhuri wrote: > Thank you Hongbin. The article is very helpful. > > > > Regards, > > Madhuri > > > > *From:* Hongbin Lu [mailto:hongbin034 at gmail.com] > *Sent:* Sunday, April 29, 2018 5:16 AM > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > *Subject:* [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun > > > > Hi folks, > > > > FYI. I wrote a blog post about a comparison between AWS Fargate and > OpenStack Zun. It mainly covers the following: > > > > * The basic concepts of OpenStack Zun and AWS Fargate > > * The Kubernetes integration plan > > > > Here is the link: https://www.linkedin.com/pulse/aws-fargate- > openstack-zun-comparing-serverless-container-hongbin-lu/ > > > > Best regards, > > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed May 2 03:56:27 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 1 May 2018 20:56:27 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 19th Edition Message-ID: Welcome to the nineteenth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. 
Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129800.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Some efforts will be done during Rocky for splitting out controlplane, see spec: https://review.openstack.org/#/c/523459 +--> We have 5 more weeks until milestone 2 ! Check-out the schedule: https://releases.openstack.org/rocky/schedule.html +------------------------------+ | Continuous Integration | +------------------------------+ +--> Ruck is Wes and Rover is Gabriele. Please let them know any new CI issue. +--> Master promotion is 0 day, Queens is 0 day, Pike is 3 days and Ocata is 1 days. Kudos folks! +--> Still working on libvirt based multinode reproducer, see https://goo.gl/DYCnkx +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Containerized Undercloud upgrades has made good progress, non-voting CI job almost ready +--> Major efforts in tripleoclient for all-in-one (sorry for all the Merge Conflict) but it was needed +--> ansible-role-container-registry was imported in RDO, now moving forward to consume it in THT. +--> No progress on container updates before undercloud deployment. +--> Still working on parity items between instack-undercloud and containerized undercloud +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> config-download now the default with tripleo-heat-templates! +--> Rewriting enable-ssh-admin.sh in python +--> WIP around importing ansible role for keystone tasks. +--> Progress on OpenStark operations Ansible role: https://github.com/samdoran/ansible-role-openstack-operations +--> Working on Skydive transition +--> Working on improving performances when deploying Ceph with Ansible. +--> More: https://etherpad.openstack.org/p/tripleo-config-download- squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Working on Network Wizard. +--> Finishing config-download integration +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Working on splitting workflows repository. +--> Efforts around Workflow linting and unit testing. +--> Discussions around usage of Zaqar. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> No updates, still focusing on Public TLS by default and Secret Management. +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Keith Schincke suggested this weekly fact: you can observe owl's eyeball through their ear: https://www.livescience.com/61673-owl-eye-seen-through-ear.html Thank you all for reading and stay tuned! 
-- 
Your fellow reporter, Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com  Wed May  2 05:05:13 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 1 May 2018 22:05:13 -0700
Subject: Re: [openstack-dev] [Openstack-operators] The Forum Schedule is
 now live
In-Reply-To: <5AE87728.1020804@openstack.org>
References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org>
 <5AE74CF2.9010804@openstack.org> <5AE87728.1020804@openstack.org>
Message-ID: 

On Tue, May 1, 2018 at 7:18 AM, Jimmy McArthur wrote:

> Apologies for the delay, Emilien! I should be adding it today, but it's
> definitely yours.

Could we change the title of the slot so that it's actually a TripleO
Project Update session?

It would have been great to have the onboarding session, but I guess we
also have 2 other sessions where we'll have occasions to meet:
TripleO Ops and User feedback
TripleO and Ansible integration

If it's still possible to have an onboarding session, awesome; otherwise
it's OK, I think we'll deal with it.

Thanks,
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gdubreui at redhat.com  Wed May  2 05:09:28 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Wed, 2 May 2018 15:09:28 +1000
Subject: Re: [openstack-dev] The Forum Schedule is now live
In-Reply-To: <5AE34A02.8020802@openstack.org>
References: <5AE34A02.8020802@openstack.org>
Message-ID: <3c7d1e0c-5f22-7f27-c424-d51ac350ab9b@redhat.com>

Hi Jimmy,

Do you have an update about the API SIG slot?

Thanks,
Gilles

On 28/04/18 02:04, Jimmy McArthur wrote:
> Hello all -
>
> Please take a look here for the posted Forum schedule:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224
> You should also see it update on your Summit App.
>
> Thank you and see you in Vancouver!
> Jimmy
>
>
> __________________________________________________________________________
>
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jichenjc at cn.ibm.com  Wed May  2 05:55:16 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 2 May 2018 13:55:16 +0800
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive
 format
In-Reply-To: 
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com><2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com><35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com>
Message-ID: 

Thanks for sharing this info, it makes sense to leave a -2 here. I will
keep modifying the follow-on patches and get more reviews.

thanks

Best Regards!
Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
Beijing 100193, PRC

From: Dan Smith
To: "Chen CH Ji"
Cc: "OpenStack Development Mailing List \(not for usage questions \)"
Date: 04/30/2018 10:55 PM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

> According to requirements and comments, now we opened the CI runs with
> run_validation = True And according to [1] below, for example, [2]
> need the ssh validation passed the test
>
> And there are a couple of comments need some enhancement on the logs
> of CI such as format and legacy incorrect links of logs etc the newest
> logs sample can be found [3] (take n-cpu as example and those logs are
> with _white.html)
>
> Also, the blueprint [4] requested by previous discussion post here
> again for reference
>
> please let us know whether the procedure -2 can be removed in order to
> proceed . thanks for your help

The CI log format issues look fixed to me and validation is turned on
for the stuff supported, which is what was keeping it out of the runway.

I still plan to leave the -2 on there until the next few patches have
agreement, just so we don't land an empty shell driver before we are
sure we're going to land spawn/destroy, etc. That's pretty normal
procedure and I'll be around to remove it when appropriate.

--Dan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From sangho at opennetworking.org  Wed May  2 06:10:00 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 2 May 2018 14:10:00 +0800
Subject: [openstack-dev] [neutron] How to specify OpenStack version for
 Zuul tests?
Message-ID: 

Hello,

Is there any way to specify the OpenStack version for Zuul tests?
Currently, the networking-onos project is very outdated, and I need to
create a stable version of the code for each OpenStack version
(Ocata/Pike/Queens). Each version of OpenStack has different
requirements, in particular the neutron libs, and it seems that Zuul is
using the master version of OpenStack for testing my code for Ocata.

Thank you,

Sangho

From gdubreui at redhat.com  Wed May  2 07:40:54 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Wed, 2 May 2018 17:40:54 +1000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To: 
References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com>
Message-ID: <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com>

I fixed the GraphQL typo (my mistake) in $subject to help with future ML
searches.

Please see inline too.

On 02/05/18 07:37, Flint WALRUS wrote:
> Ok, here are my two cents regarding GraphQL integration within
> Openstack and some thoughts around this topic.
>
> 1°/- Openstack SDK should still exist and should be in my humble
> opinion a critical focus as it allow following benefits for large and
> medium companies :
>
> • It provide a common and clean structure for Openstack developments
> and should be used either by projects or tools willing to integrate
> Openstack as it will then create some sort of standard.
>
> For instance, here in my job we have A LOT (More than 10 000 peoples
> working within around 130 teams) of teams developing over Openstack
> using the SDK as a common shared base layout.
> That allow for teams to easily share and co-develop on projects. Those
> teams are spread around the world and so need to have clean guidelines
> as it avoid them reinventing the wheel, they're not stuck with someone
> else obscure code created by another persons on the other side of the
> world or within a different timezone.
> Additionally it streamline our support and debug processes.

I'm assuming you're talking about the Python SDK (Shade), which would
make sense because it's the "lingua franca" of all projects.

Nevertheless, for any SDK/language, if GraphQL is adopted then it is
likely to replace the REST SDK in the long run. GraphQL is a DSL that
removes the need for an SDK: the SDK gets replaced by a GraphQL client
library.

Basically the change, not a rewrite, is inevitable. But I insist on "the
long run" part: initially both run in parallel, one wrapping the other,
then progressively the REST content moves across to GraphQL.

> • We should get a common consensus before all projects start to
> implement it.

This is going to be raised during the API SIG weekly meeting later this
week. API developers (at least one) from every project are strongly
encouraged to participate. I suppose it makes sense for the API SIG to
be the place to discuss it, at least initially.

> This point is for me the most important one as it will fix flaws we
> get currently with the rest APIs development within Openstack.
>
> First it will avoid a fresh developer to be confused by too many
> options. Honestly, I know we are open etc, but this point really need
> to be addressed as it is the main issue that I face with Openstack
> advocacy since many years now.
>
> Having too many options even if explained within the documentation
> daunt a lot of people to quickly give a hand with projects.
>
> For instance I have a workmate that is currently working on an
> internal tool which ask me how should he implement its project REST
> interfaces.
>
> I told him TO NOT use WSME and to stick with what have been done by a
> major project. Unfortunately he choose to copy what have been done by
> Octavia which is actually using... WSME...
>
> GraphQL gives us the opportunity and ability to fix Openstack
> development inconsistencies by providing and enforcing a clean
> guideline regarding which library should be used and in which way.
>
> That would also have the side effect to easy the entry level for a new
> Openstack developer.

I couldn't agree more!

> • New architecture opportunities.
>
> For sure that will bring new architecture opportunities, but the
> broker thing is not a good idea as each project should be able to be
> autonomous.
>
> I personally don't like centralized services as it bring SPOF.
>
> Let's take the AMQP example. For now most of Openstack deployments use
> a RabbitMQ or broker like system.
> Even if each (well at least major vanilla projects) services can (and
> should) use ZeroMQ.
> I do myself use RabbitMQ but my last weeks were so much
> debugging/investigation hell that we now plan to have a serious
> benchmark and test of ZMQ.
>
> One thing that I would love to see with GraphQL is a better
> distributed and traceable model.

Exactly, and the term broker I used is far from ideal; I meant it in the
context of a broker pattern providing a distributed API service. GraphQL
has "stitching" capabilities allowing requests to be forwarded to other
GraphQL services, kind of a proxy, ideally with such a service being
distributed itself.
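To give a feel for it, a single stitched query against such a proxy could
look something like this (the schema and field names here are purely
hypothetical, just to illustrate the delegation):

   {
     server(name: "web-1") {         # resolved by a Nova-side schema
       status
       ports {                       # delegated to Neutron by stitching
         ipAddress
         securityGroups { name }
       }
     }
   }

One round trip, no over-fetching, and each service keeps full ownership
of its own schema.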
The idea behind is a GraphQL proxy offering a single point of entry for OpenStack entire stack and of course leaving complete autonomy to the all services. https://blog.graph.cool/graphql-schema-stitching-explained-schema-delegation-4c6caf468405 > Anyway, I’m glad someone started this discussion as I feel it is a > really important topic that would highly help Openstack on more than > just interfacing topics. > Le mar. 1 mai 2018 à 05:00, Gilles Dubreuil > a écrit : > > > > On 01/05/18 11:31, Flint WALRUS wrote: >> Yes, that’s was indeed the sens of my point. > > I was just enforcing it, no worries! ;) > > >> >> Openstack have to provide both endpoints type for a while for >> backward compatibility in order to smooth the transition. >> >> For instance, that would be a good idea to contact postman >> devteam once GraphQL will start to be integrated as it will allow >> a lot of ops to keep their day to day tools by just having to >> convert their existing collections of handful requests. > > Shouldn't we have a common consensus before any project start > pushing its own GraphQL wheel? > > Also I wonder how GraphQL could open new architecture avenues for > OpenStack. > For example, would that make sense to also have a GraphQL broker > linking OpenStack services? > > > >> >> Or alternatively to provide a tool with similar features at least. >> Le mar. 1 mai 2018 à 03:18, Gilles Dubreuil > > a écrit : >> >> >> >> On 30/04/18 20:16, Flint WALRUS wrote: >>> I would very much second that question! Indeed it have been >>> one of my own wondering since many times. >>> >>> Of course GraphQL is not intended to replace REST as is and >>> have to live in parallel >> >> Effectively a standard initial architecture is to have >> GraphQL sitting aside (in parallel) and wrapping REST and >> along the way develop GrapgQL Schema. >> >> It's seems too early to tell but GraphQL being the next step >> in API evolution it might ultimately replace REST. >> >> >>> but it would likely and highly accelerate all requests >>> within heavily loaded environments >> >> +1 >> >> >>> . >>> >>> So +1 for this question. >>> Le lun. 30 avr. 2018 à 05:53, Gilles Dubreuil >>> > a écrit : >>> >>> Hi, >>> >>> Remember Boston's Summit presentation [1] about GraphQL >>> [2] and how it >>> addresses REST limitations. >>> I wonder if any project has been thinking about using >>> GraphQL. I haven't >>> find any mention or pointers about it. >>> >>> GraphQL takes a complete different approach compared to >>> REST. So we can >>> finally forget about REST API Description languages >>> (OpenAPI/Swagger/WSDL/WADL/JSON-API/ETC) and HATEOS (the >>> hypermedia >>> approach which doesn't describe how to use it). >>> >>> So, once passed the point where 'REST vs GraphQL' is >>> like comparing SQL >>> and no-SQL DBMS and therefore have different >>> applications, there are no >>> doubt the complexity of most OpenStack projects are good >>> candidates for >>> GraphQL. >>> >>> Besides topics such as efficiency, decoupling, no >>> version management >>> need there many other powerful features such as API >>> Schema out of the >>> box and better automation down that track. >>> >>> It looks like the dream of a conduit between API >>> services and consumers >>> might have finally come true so we could move-on an >>> worry about other >>> things. >>> >>> So has anyone already starting looking into it? 
>>>
>>> [1] https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql
>>> [2] http://graphql.org
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From neil at tigera.io Wed May 2 07:43:00 2018
From: neil at tigera.io (Neil Jerram)
Date: Wed, 02 May 2018 07:43:00 +0000
Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun
In-Reply-To:
References: <0512CBBECA36994BAA14C7FEDE986CA6042947EB@BGSMSX102.gar.corp.intel.com>
Message-ID:

+1 This is beautifully clear and helpful. Thank you!

Neil

On Wed, 2 May 2018, 02:13 Shuai Zhao, wrote:
> Thanks Hongbin,
>
> The article is really great!
>
> On Mon, Apr 30, 2018 at 2:40 PM, Kumari, Madhuri wrote:
>> Thank you Hongbin. The article is very helpful.
>>
>> Regards,
>> Madhuri
>>
>> *From:* Hongbin Lu [mailto:hongbin034 at gmail.com]
>> *Sent:* Sunday, April 29, 2018 5:16 AM
>> *To:* OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
>> *Subject:* [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun
>>
>> Hi folks,
>>
>> FYI. I wrote a blog post about a comparison between AWS Fargate and OpenStack Zun. It mainly covers the following:
>>
>> * The basic concepts of OpenStack Zun and AWS Fargate
>> * The Kubernetes integration plan
>>
>> Here is the link: https://www.linkedin.com/pulse/aws-fargate-openstack-zun-comparing-serverless-container-hongbin-lu/
>>
>> Best regards,
>> Hongbin
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zigo at debian.org Wed May 2 08:15:31 2018
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 2 May 2018 10:15:31 +0200
Subject: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104)
Message-ID: <5589ecb4-49ef-8f59-33ee-8fbe510e572d@debian.org>

Hi there!

For a month I've been knocking my head against the wall trying to get uwsgi working with all of the OpenStack API uwsgi applications. Indeed, when an OpenStack component (like, for example, nova-compute) was talking to uwsgi, it was receiving a 104 error (ie: connection reset by peer) before getting an answer.

What was disturbing was that doing the same request with curl worked perfectly. Even more disturbing, it looked like I was having the issue nearly always in virtualbox, but not always on real hardware, where it sometimes worked.

Anyway, I finally figured out that adding:

--rem-header Content-Length

fixed everything. I was able to spawn instances in virtualbox.
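If anyone wants to reproduce the same workaround on the application side rather than on the uwsgi command line, a tiny WSGI wrapper can do it; this is only an illustrative sketch (not code any OpenStack project ships), but it makes it easy to confirm that the Content-Length response header really is the trigger:

```python
# Illustrative WSGI middleware: drop whatever Content-Length the
# application set and let the server work out the response framing.
class StripContentLength(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def _start_response(status, headers, exc_info=None):
            headers = [(name, value) for (name, value) in headers
                       if name.lower() != 'content-length']
            return start_response(status, headers, exc_info)
        return self.app(environ, _start_response)

# e.g. application = StripContentLength(application) in the WSGI script
```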
This, however, looks like a workaround rather than a fix, and I wonder if there's a real issue somewhere that needs to be fixed in a better way, maybe in openstackclient or some other component...

Thoughts anyone?

Cheers,

Thomas Goirand (zigo)

From cdent+os at anticdent.org Wed May 2 08:25:56 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 2 May 2018 09:25:56 +0100 (BST)
Subject: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104)
In-Reply-To: <5589ecb4-49ef-8f59-33ee-8fbe510e572d@debian.org>
References: <5589ecb4-49ef-8f59-33ee-8fbe510e572d@debian.org>
Message-ID:

On Wed, 2 May 2018, Thomas Goirand wrote:

> What was disturbing was that doing the same request with curl worked perfectly. Even more disturbing, it looked like I was having the issue nearly always in virtualbox, but not always on real hardware, where it sometimes worked.

What was making the request in the first place? It fails in X, but works in curl. What is X?

> Anyway, I finally figured out that adding:
>
> --rem-header Content-Length

You added this arg to what? In both cases do you mean openstackclient?

> This, however, looks like a workaround rather than a fix, and I wonder if there's a real issue somewhere that needs to be fixed in a better way, maybe in openstackclient or some other component...

Yeah, it sounds like something could be setting a bad value for the content length header and uwsgi is timing out while trying to read that much data (meaning, it is believing the content-length header) but there isn't anything actually there.

Another option is that there are buffer size problems in the uwsgi configuration, but it's hard to speculate because it is not clear what requests and tools you're actually talking about here.

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From joshua.hesketh at gmail.com Wed May 2 08:46:43 2018
From: joshua.hesketh at gmail.com (Joshua Hesketh)
Date: Wed, 2 May 2018 18:46:43 +1000
Subject: [openstack-dev] Overriding project-templates in Zuul
In-Reply-To:
References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net>
Message-ID:

> I think in actuality, both operations would end up as intersections:
>
> ================ ======== ======= =======
> Matcher          Template Project Result
> ================ ======== ======= =======
> files            AB       BC      B
> irrelevant-files AB       BC      B
> ================ ======== ======= =======
>
> So with the "combine" method, it's always possible to further restrict where the job runs, but never to expand it.

Ignoring the 'files' above, in the example of 'irrelevant-files' haven't you just combined the results to expand when it runs? I.e., A and C are /not/ excluded and therefore the job will run when there are changes to A or C?

I would expect the table to be something like:

================ ======== ======= =======
Matcher          Template Project Result
================ ======== ======= =======
files            AB       BC      B
irrelevant-files AB       BC      ABC
================ ======== ======= =======

> > So a job with "files: tests/" and "irrelevant-files: docs/" would do whatever it is that happens when you specify both.
>
> In this case, I'm pretty sure that would mean it reduces to just "files: tests/", but I've never claimed to understand irrelevant-files and I won't start now.

Yes, I think you are right that this would reduce to that. However, what about the use case of:

    files: tests/*
    irrelevant-files: tests/docs/*

I could see a use case where both of those would be helpful. Yes, you could describe that as one regex, but to the end user the above may be expected to work. Unless we make the two options mutually exclusive I feel like this is a feature we should support. (That said, it's likely a separate feature/functionality from what is being described now.)
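As a side note, here is how I picture those two matchers being evaluated together against a change. A rough Python sketch of my reading only, not Zuul's actual implementation: the job runs if some changed file matches 'files' and at least one changed file falls outside 'irrelevant-files'.

```python
# Illustrative sketch of the files/irrelevant-files semantics under
# discussion; this is not Zuul's real matcher code.
import re

def should_run(changed_files, files=None, irrelevant_files=None):
    run = True
    if files is not None:
        # At least one changed file must match a 'files' pattern.
        run = any(re.match(p, f) for p in files for f in changed_files)
    if run and irrelevant_files is not None:
        # At least one changed file must fall outside every
        # 'irrelevant-files' pattern, or the change is irrelevant.
        run = any(not any(re.match(p, f) for p in irrelevant_files)
                  for f in changed_files)
    return run

# With files=['tests/.*'] and irrelevant_files=['tests/docs/.*']:
print(should_run(['tests/foo'], ['tests/.*'], ['tests/docs/.*']))       # True
print(should_run(['tests/docs/foo'], ['tests/.*'], ['tests/docs/.*']))  # False
```

Under that reading, a change touching only tests/docs/* is filtered out while any other change under tests/ still triggers the job.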
Anyway, I feel like Proposal #2 is more how I would expect the system to behave.

I can see an argument for combining the results (and feel like you could evaluate that at the end, after combining the branch-matching variants) to give something like:

================ ======== ======= =======
Matcher          Template Project Result
================ ======== ======= =======
files            AB       BC      ABC
irrelevant-files AB       BC      ABC
================ ======== ======= =======

However, that gives the user no way to remove a previously listed option. Thus overwriting may be the better solution (i.e. proposal #2 as written) unless we want to explore the option of allowing a syntax that says "extend" or "overwrite".

Yours in hoping that made sense,
Josh

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gael.therond at gmail.com Wed May 2 08:46:37 2018
From: gael.therond at gmail.com (Flint WALRUS)
Date: Wed, 02 May 2018 08:46:37 +0000
Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
In-Reply-To: <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com>
References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com>
Message-ID:

Hi Gilles, folks,

Nice to read such answers; I'm really thrilled by what could come out of this discussion. One last thing regarding the SDK and broker parts of the discussion.

GraphQL and SDK:

Obviously, as you noticed, I was focused on the python-openstacksdk part of things, even if this applies to the autonomous OpenStack4j Java SDK or any other SDK for your favorite language.

I agree with you: GraphQL being a DSL, it should in the long run (or maybe not so long, depending on the adoption rate ;-) ) replace the REST part of the SDK. However, I think the client libraries (at least on the Python side of things) should be enforced by the OpenStack foundation/devs, as it would avoid devs from a project/tool joining the big tent using a different library of their own and so creating the fragmentation and pitfalls already mentioned earlier in my previous message.

For example, if I let our devs use their own client library for both GraphQL and worker logic, I will end up with at least a dozen different libraries across teams, and it will be a nightmare to debug, investigate, maintain, etc.

For sure, as this is a personal example, some could argue that we should enforce this choice at the company level and not at the solution level, but if everyone talks the same language it's easier to share information, build consensus around a project, and ease the development process by having a clear and consistent path (providing a common cookiecutter for all new projects). It would also give us the ability to manage a complete project with the OpenStack client tool, such as:

```openstack brick init```

Here I chose the "brick" term as a keyword in order to avoid namespace collisions, as "project" and "service" are already used on the ops side of things.
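To be clear about what I mean by a single enforced client layer, here is a purely hypothetical sketch (none of these module or class names exist anywhere today) of the kind of thin wrapper every project could import instead of choosing its own GraphQL client:

```python
# Hypothetical shared client layer ("one blessed library") that every
# OpenStack project would import rather than picking its own GraphQL client.
import requests


class OpenStackGraphQLClient(object):
    """Minimal sketch: one transport, one auth style, one error shape."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint          # assumed stitched gateway URL
        self.session = requests.Session()
        self.session.headers['X-Auth-Token'] = token

    def execute(self, query, variables=None):
        resp = self.session.post(
            self.endpoint,
            json={'query': query, 'variables': variables or {}})
        resp.raise_for_status()
        payload = resp.json()
        if payload.get('errors'):
            raise RuntimeError(payload['errors'])
        return payload['data']
```

A common cookiecutter could then ship exactly this module, so a new project never has to make (or mis-make, as in the WSME anecdote quoted below) that choice in the first place.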
GraphQL broker:

OK, I see what you mean and I honestly love the idea, as it's an elegant way to split responsibility while being able to scale and efficiently distribute requests.

I think that's the implicit idea behind swift-proxy and how most companies achieve horizontal scaling, with HAProxy as a loadbalancer in front of classic OpenStack WSGI endpoints. As a builtin feature of GraphQL, it would allow a way better service discovery and routing architecture.

Kind regards,
G.

Le mer. 2 mai 2018 à 09:41, Gilles Dubreuil a écrit :

> I fixed the GraphQL typo (my mistake) in $subject to help with future ML searches.
>
> Please see inline too.
>
> On 02/05/18 07:37, Flint WALRUS wrote:
>
> Ok, here are my two cents regarding GraphQL integration within Openstack and some thoughts around this topic.
>
> 1°/- Openstack SDK should still exist and should be in my humble opinion a critical focus as it allow following benefits for large and medium companies :
>
> • It provide a common and clean structure for Openstack developments and should be used either by projects or tools willing to integrate Openstack as it will then create some sort of standard.
>
> For instance, here in my job we have A LOT (More than 10 000 peoples working within around 130 teams) of teams developing over Openstack using the SDK as a common shared base layout. That allow for teams to easily share and co-develop on projects. Those teams are spread around the world and so need to have clean guidelines as it avoid them reinventing the wheel, they're not stuck with someone else obscure code created by another persons on the other side of the world or within a different timezone. Additionally it streamline our support and debug processes.
>
> I'm assuming you're talking about the Python SDK (Shade) which would make sense because it's the "lingua franca" of all projects.
>
> Nevertheless, for any SDKs/Languages, if adopted then GraphQL is likely to replace its REST SDK on the long run. GraphQL is a DSL bypassing a SDK need which get replaced with GraphQL client library. Basically the change, not a rewrite, is inevitable. But I insist on "the long run" part, initially both in parallel one wrapping the other, then progressively the REST content moving across to GraphQL.
>
> • We should get a common consensus before all projects start to implement it.
>
> This is going to be raised during the API SIG weekly meeting later this week. API developers (at least one) from every project are strongly welcomed to participate. I suppose it makes sense for the API SIG to be the place to discuss it, at least initially.
>
> This point is for me the most important one as it will fix flaws we get currently with the rest APIs development within Openstack.
>
> First it will avoid a fresh developer to be confused by too many options. Honestly, I know we are open etc, but this point really need to be addressed as it is the main issue that I face with Openstack advocacy since many years now.
>
> Having too many options even if explained within the documentation daunt a lot of people to quickly give a hand with projects.
>
> For instance I have a workmate that is currently working on an internal tool which ask me how should he implement its project REST interfaces.
>
> I told him TO NOT use WSME and to stick with what have been done by a major project. Unfortunately he choose to copy what have been done by Octavia which is actually using... WSME...
> > GraphQL gives us the opportunity and ability to fix Openstack development > inconsistencies by providing and enforcing a clean guideline regarding > which library should be used and in which way. > > That would also have the side effect to easy the entry level for a new > Openstack developer. > > > I couldn't agree more! > > > • New architecture opportunities. > > For sure that will bring new architecture opportunities, but the broker > thing is not a good idea as each project should be able to be autonomous. > > I personally don’t like centralized services as it bring SPOF. > > Let’s take the AMQP example. For now most of Openstack deployments use a > RabbitMQ or broker like system. > Even if each (well at least major vanilla projects) services can (and > should) use ZeroMQ. > I do myself use RabbitMQ but my last weeks were so much > debugging/investigation hell that we now plan to have a serious benchmark > and test of ZMQ. > > One thing that I would love to see with GraphQL is a better distributed > and traceable model. > > > Exactly and the term broker I used is far from ideal, I meant it in the > context of a broker pattern providing distributed API service. GraphQL has > "stiching" capabilities allowing to forward request to diverse GraphQL > service, kind of a proxy, ideally such service to be distributed itself. > > The idea behind is a GraphQL proxy offering a single point of entry for > OpenStack entire stack and of course leaving complete autonomy to the all > services. > > > https://blog.graph.cool/graphql-schema-stitching-explained-schema-delegation-4c6caf468405 > > Anyway, I’m glad someone started this discussion as I feel it is a really > important topic that would highly help Openstack on more than just > interfacing topics. > Le mar. 1 mai 2018 à 05:00, Gilles Dubreuil a > écrit : > >> >> >> On 01/05/18 11:31, Flint WALRUS wrote: >> >> Yes, that’s was indeed the sens of my point. >> >> >> I was just enforcing it, no worries! ;) >> >> >> >> Openstack have to provide both endpoints type for a while for backward >> compatibility in order to smooth the transition. >> >> For instance, that would be a good idea to contact postman devteam once >> GraphQL will start to be integrated as it will allow a lot of ops to keep >> their day to day tools by just having to convert their existing collections >> of handful requests. >> >> >> Shouldn't we have a common consensus before any project start pushing its >> own GraphQL wheel? >> >> Also I wonder how GraphQL could open new architecture avenues for >> OpenStack. >> For example, would that make sense to also have a GraphQL broker linking >> OpenStack services? >> >> >> >> >> Or alternatively to provide a tool with similar features at least. >> Le mar. 1 mai 2018 à 03:18, Gilles Dubreuil a >> écrit : >> >>> >>> >>> On 30/04/18 20:16, Flint WALRUS wrote: >>> >>> I would very much second that question! Indeed it have been one of my >>> own wondering since many times. >>> >>> Of course GraphQL is not intended to replace REST as is and have to live >>> in parallel >>> >>> >>> Effectively a standard initial architecture is to have GraphQL sitting >>> aside (in parallel) and wrapping REST and along the way develop GrapgQL >>> Schema. >>> >>> It's seems too early to tell but GraphQL being the next step in API >>> evolution it might ultimately replace REST. >>> >>> >>> but it would likely and highly accelerate all requests within heavily >>> loaded environments >>> >>> >>> +1 >>> >>> >>> . >>> >>> So +1 for this question. >>> Le lun. 
30 avr. 2018 à 05:53, Gilles Dubreuil a >>> écrit : >>> >>>> Hi, >>>> >>>> Remember Boston's Summit presentation [1] about GraphQL [2] and how it >>>> addresses REST limitations. >>>> I wonder if any project has been thinking about using GraphQL. I >>>> haven't >>>> find any mention or pointers about it. >>>> >>>> GraphQL takes a complete different approach compared to REST. So we can >>>> finally forget about REST API Description languages >>>> (OpenAPI/Swagger/WSDL/WADL/JSON-API/ETC) and HATEOS (the hypermedia >>>> approach which doesn't describe how to use it). >>>> >>>> So, once passed the point where 'REST vs GraphQL' is like comparing SQL >>>> and no-SQL DBMS and therefore have different applications, there are no >>>> doubt the complexity of most OpenStack projects are good candidates for >>>> GraphQL. >>>> >>>> Besides topics such as efficiency, decoupling, no version management >>>> need there many other powerful features such as API Schema out of the >>>> box and better automation down that track. >>>> >>>> It looks like the dream of a conduit between API services and consumers >>>> might have finally come true so we could move-on an worry about other >>>> things. >>>> >>>> So has anyone already starting looking into it? >>>> >>>> [1] >>>> >>>> https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql >>>> [2] http://graphql.org >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Wed May 2 09:13:33 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 02 May 2018 09:13:33 +0000 Subject: [openstack-dev] [cyborg]No Team Meeting this Wed Message-ID: Hi team, Since I'm traveling to KubeCon this week, let's cancel the weekly meeting today . You are still more than welcome to raise questions or just chat on our irc channel :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Wed May 2 11:47:15 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 2 May 2018 07:47:15 -0400 Subject: [openstack-dev] [neutron] How to specify OpenStack version for Zuul tests? In-Reply-To: References: Message-ID: On Wed, May 2, 2018 at 2:10 AM, Sangho Shin wrote: > Hello, > > Is there any way to specify the OpenStack version for Zuul test? > > Currently, neteworking-onos project is very outdated, and I need to create > a stable version for the codes for each OpenStack version > (Ocata/Pike/Queens). > Each version of OpenStack has different requirements in particular neutron > libs, and it seems that Zuul is using the master version of OpenStack for > testing my codes for Ocata. > I think the "override-checkout" variable is what you're looking for. The ironic-tempest-plugin project runs tests on multiple stable branches, that config is here[0]. The parent jobs referred to in that file are here[1]. [0] https://git.openstack.org/cgit/openstack/ironic-tempest-plugin/tree/zuul.d/stable-jobs.yaml [1] https://git.openstack.org/cgit/openstack/ironic/tree/playbooks/legacy // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Wed May 2 12:19:24 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 02 May 2018 07:19:24 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <5AE74CF2.9010804@openstack.org> <5AE87728.1020804@openstack.org> Message-ID: <5AE9ACCC.4010200@openstack.org> Emilien Macchi wrote: > Could we change the title of the slot and actually be a TripleO > Project Update session? > It would have been great to have the onboarding session but I guess we > also have 2 other sessions where we'll have occasions to meet: > TripleO Ops and User feedback and TripleO and Ansible integration > > If it's possible to still have an onboarding session, awesome > otherwise it's ok I think we'll deal with it. No problem, we have both on the schedule. I moved the Project Update to 11-11:20 so you can have a few minutes before the Onboarding starts at 11:50. https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=TripleO Let me know if I can assist further. Thanks! Jimmy From jimmy at openstack.org Wed May 2 12:25:48 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 02 May 2018 07:25:48 -0500 Subject: [openstack-dev] The Forum Schedule is now live In-Reply-To: <3c7d1e0c-5f22-7f27-c424-d51ac350ab9b@redhat.com> References: <5AE34A02.8020802@openstack.org> <3c7d1e0c-5f22-7f27-c424-d51ac350ab9b@redhat.com> Message-ID: <5AE9AE4C.7050209@openstack.org> Gilles, Just responded to the ZenDesk ticket :) Cheers, Jimmy > Gilles Dubreuil > May 2, 2018 at 12:09 AM > Hi Jimmy, > > Do you have an update about the API SIG slot? > > Thanks, > Gilles > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Wed May 2 12:44:46 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Wed, 2 May 2018 22:44:46 +1000 Subject: [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE9AE4C.7050209@openstack.org> References: <5AE34A02.8020802@openstack.org> <3c7d1e0c-5f22-7f27-c424-d51ac350ab9b@redhat.com> <5AE9AE4C.7050209@openstack.org> Message-ID: Jimmy, Fantastic! Thank you. Cheers, Gilles On 02/05/18 22:25, Jimmy McArthur wrote: > Gilles, > > Just responded to the ZenDesk ticket :) > > Cheers, > Jimmy > >> Gilles Dubreuil >> May 2, 2018 at 12:09 AM >> Hi Jimmy, >> >> Do you have an update about the API SIG slot? 
>> >> Thanks, >> Gilles >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 27, 2018 at 11:04 AM >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. >> >> Thank you and see you in Vancouver! >> Jimmy >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Wed May 2 12:53:12 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 2 May 2018 05:53:12 -0700 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE9ACCC.4010200@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <5AE74CF2.9010804@openstack.org> <5AE87728.1020804@openstack.org> <5AE9ACCC.4010200@openstack.org> Message-ID: On Wed, May 2, 2018 at 5:19 AM, Jimmy McArthur wrote: > > No problem, we have both on the schedule. I moved the Project Update to > 11-11:20 so you can have a few minutes before the Onboarding starts at > 11:50. > > https://www.openstack.org/summit/vancouver-2018/summit-sched > ule/global-search?t=TripleO > > Let me know if I can assist further. > Everything looks excellent to me now. Thanks for your help! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From bodenvmw at gmail.com Wed May 2 13:16:32 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Wed, 2 May 2018 07:16:32 -0600 Subject: [openstack-dev] [tc][stackalytics][neutron] neutron-lib not showing as TC-approved project on stackalytics Message-ID: <6d9cc441-106e-6ded-3594-4edad989968d@gmail.com> Back in 2016 we tagged neutron-lib as a "tc-approved-release" and as a result neutron-lib commits/reviews showed up on stackalytics under TC-approved Project Types. However as of recent that's seemed to have changed and neutron-lib commits/reviews are no longer showing up [1] even though it appears to still be tagged [2] IIUC. Is neutron-lib not showing up as a TC-approved project in stackalytics intentional? If so can some please refer me as to why. If not how do we get stackalytics to pick it up again? 
Thanks

[1] http://stackalytics.com/?release=rocky&project_type=tc:approved-release&metric=commits
[2] https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2065

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mnaser at vexxhost.com Wed May 2 13:21:47 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 2 May 2018 09:21:47 -0400
Subject: [openstack-dev] [tc][stackalytics][neutron] neutron-lib not showing as TC-approved project on stackalytics
In-Reply-To: <6d9cc441-106e-6ded-3594-4edad989968d@gmail.com>
References: <6d9cc441-106e-6ded-3594-4edad989968d@gmail.com>
Message-ID:

On Wed, May 2, 2018 at 9:16 AM, Boden Russell wrote:
> Back in 2016 we tagged neutron-lib as a "tc-approved-release" and as a result neutron-lib commits/reviews showed up on stackalytics under TC-approved Project Types.
>
> However as of recent that's seemed to have changed and neutron-lib commits/reviews are no longer showing up [1] even though it appears to still be tagged [2] IIUC.
>
> Is neutron-lib not showing up as a TC-approved project in stackalytics intentional? If so can some please refer me as to why. If not how do we get stackalytics to pick it up again?

I think at the moment Stackalytics is not in an entirely consistent state; you'll notice in the header: "The data is being loaded now and is not complete"

I am going to guess that the data just hasn't been fully loaded yet, so it would probably be good to check back once it's fully loaded and that header has disappeared.

> Thanks
>
> [1] http://stackalytics.com/?release=rocky&project_type=tc:approved-release&metric=commits
> [2] https://github.com/openstack/governance/blob/master/reference/projects.yaml#L2065
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mriedemos at gmail.com Wed May 2 13:40:15 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 2 May 2018 08:40:15 -0500
Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild
In-Reply-To: <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com>
References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com>
Message-ID: <433b6f52-d6b3-767c-efb8-d9cb3e03598f@gmail.com>

On 4/23/2018 4:43 PM, Jay Pipes wrote:
> How about just having the conductor call GET /resource_providers?in_tree=&required=, see if there is a result, and if not, don't even call the scheduler at all (because conductor would already know there would be a NoValidHost returned)?

This makes filtering on image properties required, which is something I was pushing back on because the ImagePropertiesFilter today, by design of all scheduler filters, is configurable and optional, which is why I wanted to add the filtering logic for image-defined required traits into the ImagePropertiesFilter itself.
-- Thanks, Matt From mriedemos at gmail.com Wed May 2 13:46:51 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 08:46:51 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: On 4/23/2018 4:51 PM, Arvind N wrote: > For #1, we can make the explanation very clear that we rejected the > request because the original traits specified in the original image and > the new traits specified in the new image do not match and hence rebuild > is not supported. We don't reject rebuild requests today where you rebuild with a new image as long as that new image passes the scheduler filters for the host on which the instance is already running. I don't see why we'd just immediately fail in the API because the new image has required traits, when we have no idea, from the nova-api service, whether or not those image-defined required traits are going to match the current host or not. That's just adding technical debt to rebuild, like we have for rebuilding a volume-backed instance with a new image (you can't do it today because it wasn't thought about early enough in the design process). > > For #2, > > Other Cons: > > 1. None of the filters currently make other API requests and my > understanding is we want to avoid reintroducing such a pattern. But > definitely workable solution. For a rebuild-specific request (which we can determine already), I'm OK with this - we're already not calling GET /allocation_candidates in this case, so if people are worried about performance, it's just a trade of one REST API call for another. > 2. If the user disables the image properties filter, then traits based > filtering will not be run in rebuild case The user doesn't disable the filter, the operator does, and likely for good reason. I don't see a problem with this. > > For #3, > > Even though it handles the nested provider, there is a potential issue. > > Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), > another one with some kind of offload feature(VF2).(Described by alex) > > Initial instance launch happens with VF:1 allocated, rebuild launches > with modified request with traits=HW_NIC_OFFLOAD_X, so basically we want > the instance to be allocated VF2. > > But the original allocation happens against VF1 and since in rebuild the > original allocations are not changed, we have wrong allocations. I don't know what to say about this. We shouldn't have any quantitative resource allocation changes as a result of a rebuild. This actually sounds like a case for option #4 with using GET /allocation_candidates and then being able to filter out if rebuliding the instance with the new image with new required traits but on the same host would result in new allocation requests, and if so, we should fail - but we can (only?) determine that via the response from GET /allocation_candidates. 
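For illustration, the conductor-side pre-flight being discussed could look roughly like this. It is a sketch, not actual nova code: the endpoint and token plumbing are assumed, and the query relies on a placement microversion that supports both the in_tree and required parameters on GET /resource_providers:

```python
# Illustrative pre-flight check: ask placement whether the instance's
# current provider tree can satisfy the new image's required traits, and
# short-circuit with NoValidHost before calling the scheduler if it can't.
import requests

PLACEMENT = 'http://placement.example.com'        # assumed endpoint
HEADERS = {
    'X-Auth-Token': '<service-token>',            # assumed auth plumbing
    'OpenStack-API-Version': 'placement 1.18',    # in_tree + required params
}

def host_can_satisfy(root_rp_uuid, required_traits):
    resp = requests.get(
        PLACEMENT + '/resource_providers',
        params={'in_tree': root_rp_uuid,
                'required': ','.join(required_traits)},
        headers=HEADERS)
    resp.raise_for_status()
    return bool(resp.json()['resource_providers'])
```

An empty resource_providers list for the instance's tree would mean the new image's required traits can't be met on the current host, without ever involving the scheduler.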
-- Thanks, Matt From mriedemos at gmail.com Wed May 2 13:49:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 08:49:40 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <1524558311.25291.2@smtp.office365.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <1524558311.25291.2@smtp.office365.com> Message-ID: <10228347-d8d2-ca5f-16e2-75aa07b01088@gmail.com> On 4/24/2018 3:25 AM, Balázs Gibizer wrote: > The algorithm Eric provided in a previous mail do the filtering for the > RPs that are part of the instance allocation so that sounds good to me. Yeah I've been wondering if that solves this VF case. > I think we should not try to adjust allocations during a rebuild. > Changing the allocation would mean it is not a rebuild any more but a > resize. Agree. -- Thanks, Matt From pkovar at redhat.com Wed May 2 13:56:36 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 2 May 2018 15:56:36 +0200 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <1525183287-sup-1728@lrrr.local> References: <1525183287-sup-1728@lrrr.local> Message-ID: <20180502155636.d678b1cf02dc20ab14813e19@redhat.com> On Tue, 01 May 2018 10:08:23 -0400 Doug Hellmann wrote: > The TC has had an item on our backlog for a while (a year?) to > document "constellations" of OpenStack components to make it easier > for deployers and users to understand which parts they need to have > the features they want [1]. > > John Garbutt has started writing the first such document [2], but > as we talked about the content we agreed the TC governance repository > is not the best home for it, so I have proposed creating a new > repository [3]. > > In order to set up the publishing jobs for that repo so the content > goes to docs.openstack.org, we need to settle the ownership of the > repository. > > I think it makes sense for the documentation team to "own" it, but > I also think it makes sense for it to have its own review team > because it's a bit different from the rest of the docs and we may > be able to recruit folks to help who might not want to commit to > being core reviewers for all of the documentation repositories. The > TC members would also like to be reviewers, to get things going. > > So, is the documentation team willing to add the new "constellations" > repository under their umbrella? Fine with me and thanks for bringing this up! >From the top of my head, this will also require updating the doc contrib guide and the docs review dashboard. https://docs.openstack.org/doc-contrib-guide/team-structure.html https://is.gd/openstackdocsdashboard Thanks, pk From mriedemos at gmail.com Wed May 2 13:57:22 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 08:57:22 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> Message-ID: <841f164e-0fae-ebc5-7954-b5b343e4eed4@gmail.com> On 4/24/2018 8:26 AM, Sylvain Bauza wrote: > We also have pre-flight checks for move operations like live and cold > migrations, and I'd really like to keep all the conditionals in the > conductor, because it knows better than the scheduler which operation is > asked. 
I'm not sure what "pre-flight checks" we have for cold migration. The conductor migrate task asks the scheduler for a host and then casts to the destination compute to start the migration. The conductor live migration task does do some checking on the source and dest computes before proceeding, I agree with you there. > I'm not really happy with adding more in the scheduler about "yeah, it's a rebuild, so please do something exceptional" Agree that building more special rebuild logic into the scheduler isn't ideal and hopefully we could resolve this in conductor if possible, despite the fact that ImagePropertiesFilter is optional (although I'm pretty sure everyone enables it). -- Thanks, Matt From pkovar at redhat.com Wed May 2 13:59:21 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 2 May 2018 15:59:21 +0200 Subject: [openstack-dev] [docs] Documentation meeting canceled Message-ID: <20180502155921.b36e2a08b182a898b105ab93@redhat.com> Hi all, Apologies but have to go offline now so canceling today's docs meeting. If you want to talk to the docs team, join #openstack-doc. Thanks, pk From mriedemos at gmail.com Wed May 2 14:07:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 09:07:02 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> Message-ID: <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> On 5/1/2018 5:26 PM, Arvind N wrote: > In cases of rebuilding of an instance using a different image where the > image traits have changed between the original launch and the rebuild, > is it reasonable to ask to just re-launch a new instance with the new image? > > The argument for this approach is that given that the requirements have > changed, we want the scheduler to pick and allocate the appropriate host > for the instance. We don't know if the requirements have changed with the new image until we check them. Here is another option: What if the API compares the original image required traits against the new image required traits, and if the new image has required traits which weren't in the original image, then (punt) fail in the API? Then you would at least have a chance to rebuild with a new image that has required traits as long as those required traits are less than or equal to the originally validated traits for the host on which the instance is currently running. > > The approach above also gives you consistent results vs the other > approaches where the rebuild may or may not succeed depending on how the > original allocation of resources went. > Consistently frustrating, I agree. :) Because as a user, I can rebuild with some images (that don't have required traits) and can't rebuild with other images (that do have required traits). I see no difference with this and being able to rebuild (with a new image) some instances (image-backed) and not others (volume-backed). Given that, I expect if we punt on this, someone will just come along asking for the support later. Could be a couple of years from now when everyone has moved on and it then becomes someone else's problem. > For example(from Alex Xu) ,if you launched an instance on a host which > has two SRIOV nic. One is normal SRIOV nic(A), another one with some > kind of offload feature(B). 
> > So, the original request is: resources=SRIOV_VF:1 The instance gets a VF > from the normal SRIOV nic(A). > > But with a new image, the new request is: resources=SRIOV_VF:1 > traits=HW_NIC_OFFLOAD_XX > > With all the solutions discussed in the thread, a rebuild request like > above may or may not succeed depending on whether during the initial > launch whether nic A or nic B was allocated. > > Remember that in rebuild new allocation don't happen, we have to reuse > the existing allocations. > > Given the above background, there seems to be 2 competing options. > > 1. Fail in the API saying you can't rebuild with a new image with new > required traits. > > 2. Look at the current allocations for the instance and try to match the > new requirement from the image with the allocations. > > With #1, we get consistent results in regards to how rebuilds are > treated when the image traits changed. > > With #2, the rebuild may or may not succeed, depending on how well the > original allocations match up with the new requirements. > > #2 will also need to need to account for handling preferred traits or > granular resource traits if we decide to implement them for images at > some point... Option 10: Don't support image-defined traits at all. I know that won't happen though. At this point I'm exhausted with this entire issue and conversation and will probably bow out and need someone else to step in with different perspective, like melwitt or dansmith. All of the solutions are bad in their own way, either because they add technical debt and poor user experience, or because they make rebuild more complicated and harder to maintain for the developers. -- Thanks, Matt From lbragstad at gmail.com Wed May 2 14:17:44 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 2 May 2018 09:17:44 -0500 Subject: [openstack-dev] [keystone] [policy] no policy meeting today Message-ID: <34c541f7-1100-c1d2-0491-adfa142e6753@gmail.com> Hi all, I'm going to cancel the policy meeting today since attendance has been waning the last month or two and there are no items on the agenda. We should discuss whether or not we want to continue using this meeting. At this point, most of the policy work is in helping projects consume outcomes from those meetings.The meeting was originally proposed to help design a better policy system across OpenStack and figure out how to move towards it. If we feel we've come up with a direction that achieves that, I'm happy to summarize things and drop the bi-weekly meeting. Otherwise, if there are use cases we need to tackle next, we can continue holding the meeting. Thoughts? Lance -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From corvus at inaugust.com Wed May 2 14:21:40 2018 From: corvus at inaugust.com (James E. 
Blair)
Date: Wed, 02 May 2018 07:21:40 -0700
Subject: [openstack-dev] Overriding project-templates in Zuul
In-Reply-To: (Joshua Hesketh's message of "Wed, 2 May 2018 18:46:43 +1000")
References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net>
Message-ID: <87r2munnnv.fsf@meyer.lemoncheese.net>

Joshua Hesketh writes:

>> I think in actuality, both operations would end up as intersections:
>>
>> ================ ======== ======= =======
>> Matcher          Template Project Result
>> ================ ======== ======= =======
>> files            AB       BC      B
>> irrelevant-files AB       BC      B
>> ================ ======== ======= =======
>>
>> So with the "combine" method, it's always possible to further restrict where the job runs, but never to expand it.
>
> Ignoring the 'files' above, in the example of 'irrelevant-files' haven't you just combined the results to expand when it runs? ie, A and C are /not/ excluded and therefore the job will run when there are changes to A or C?
>
> I would expect the table to be something like:
>
> ================ ======== ======= =======
> Matcher          Template Project Result
> ================ ======== ======= =======
> files            AB       BC      B
> irrelevant-files AB       BC      ABC
> ================ ======== ======= =======

Sure, we'll go with that. :)

>> > So a job with "files: tests/" and "irrelevant-files: docs/" would do whatever it is that happens when you specify both.
>>
>> In this case, I'm pretty sure that would mean it reduces to just "files: tests/", but I've never claimed to understand irrelevant-files and I won't start now.
>
> Yes, I think you are right that this would reduce to that. However, what about the use case of:
> files: tests/*
> irrelevant-files: tests/docs/*
>
> I could see a use case where both of those would be helpful. Yes you could describe that as one regex but to the end user the above may be expected to work. Unless we make the two options mutually exclusive I feel like this is a feature we should support. (That said, it's likely a separate feature/functionality than what is being described now).

Today, that means: run the job if a file in tests/ is changed AND any file outside of tests/docs/* is changed.

A change to tests/foo matches the irrelevant-files matcher, and also the files matcher, so it runs. A change to tests/docs/foo matches the files matcher but not the irrelevant-files matcher, so it doesn't run. I really hope I got that right.

Anyway, that is an example of something that's possible to express with both. I lumped in the idea of pairing files/irrelevant-files with Proposal 2 because I thought that being able to override them is key, and switching from one to the other was part of that, and, to be honest, I don't think people should ever combine them because it's hard enough to deal with one, but maybe that's too much of an implicit behavior change, and instead we should separate that out and consider it as its own change later. I believe a user could still stop the matchers by saying "files: .*" and "irrelevant-files: ^$" in the project-local variant.

Let's revise Proposal #2 to omit that:

Proposal 2: Files and irrelevant-files are treated as overwriteable attributes and evaluated after branch-matching variants are combined.

* Files and irrelevant-files are overwritten, so the last value encountered when combining all the matching variants (looking only at branches) wins.
* It's possible to both reduce and expand the scope of jobs, but the user may need to manually copy values from a parent or other variant in order to do so. * It will no longer be possible to alter a job attribute by adding a variant with only a files matcher -- in all cases files and irrelevant-files are used solely to determine whether the job is run, not to determine whether to apply a variant. > Anyway, I feel like Proposal #2 is more how I would expect the system to > behave. > > I can see an argument for combining the results (and feel like you could > evaulate that at the end after combining the branch-matching variants) to > give something like: > ================ ======== ======= ======= > Matcher Template Project Result > ================ ======== ======= ======= > files AB BC ABC > irrelevant-files AB BC ABC > ================ ======== ======= ======= > > However, that gives the user no way to remove a previously listed option. > Thus overwriting may be the better solution (ie proposal #2 as written) > unless we want to explore the option of allowing a syntax that says > "extend" or "overwrite". > > Yours in hoping that made sense, > Josh As much as anything with irrelevant-files does, yes. :) -Jim From pabelanger at redhat.com Wed May 2 14:24:52 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 2 May 2018 10:24:52 -0400 Subject: [openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018 In-Reply-To: <20180425142658.GA7028@localhost.localdomain> References: <20180410184829.GA16085@localhost.localdomain> <20180419154912.GA13701@localhost.localdomain> <20180425142658.GA7028@localhost.localdomain> Message-ID: <20180502142452.GA9614@localhost.localdomain> Hello from Infra. Today is the day for scheduled maintenance of gerrit, we'll be allocating 2 hours for the outage but don't expect it to take that long. During this time you will not be able to access gerrit. If you have any questions, or would like to follow along, please join us in #openstack-infra. --- It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. The new IP addresses will be as follows: IPv4 -> 104.130.246.32 IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229 They will replace these current production IP addresses: IPv4 -> 104.130.246.91 IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525 We understand that some users may be running from egress-filtered networks with port 29418/tcp explicitly allowed to the current review.openstack.org IP addresses, and so are providing this information as far in advance as we can to allow them time to update their firewalls accordingly. Note that some users dealing with egress filtering may find it easier to switch their local configuration to use Gerrit's REST API via HTTPS instead, and the current release of git-review has support for that workflow as well. http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html We will follow up with final confirmation in subsequent announcements. 
Thanks, Paul From hrybacki at redhat.com Wed May 2 14:34:37 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Wed, 2 May 2018 10:34:37 -0400 Subject: [openstack-dev] [keystone] [policy] no policy meeting today In-Reply-To: <34c541f7-1100-c1d2-0491-adfa142e6753@gmail.com> References: <34c541f7-1100-c1d2-0491-adfa142e6753@gmail.com> Message-ID: Perhaps this meeting would be a good opportunity to get some broader discussion on our default roles spec we have proposed[1]. [1] - https://review.openstack.org/#/c/523973/8/specs/define-default-roles.rst /R Harry On Wed, May 2, 2018 at 10:17 AM, Lance Bragstad wrote: > Hi all, > > I'm going to cancel the policy meeting today since attendance has been > waning the last month or two and there are no items on the agenda. > > We should discuss whether or not we want to continue using this meeting. > At this point, most of the policy work is in helping projects consume > outcomes from those meetings.The meeting was originally proposed to help > design a better policy system across OpenStack and figure out how to > move towards it. If we feel we've come up with a direction that achieves > that, I'm happy to summarize things and drop the bi-weekly meeting. > Otherwise, if there are use cases we need to tackle next, we can > continue holding the meeting. > > Thoughts? > > Lance > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jimmy at openstack.org Wed May 2 14:46:59 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 02 May 2018 09:46:59 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <5AE75A13.4030606@openstack.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> <5AE74E05.90405@openstack.org> <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> <5AE75A13.4030606@openstack.org> Message-ID: <5AE9CF63.2020503@openstack.org> Just wanted to follow up on this. trystack.openstack.org is now correctly redirecting to the same place as trystack.org. Thanks, Jimmy > Jimmy McArthur > April 30, 2018 at 1:01 PM > OK - got it :) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jeremy Stanley > April 30, 2018 at 12:29 PM > [...] > > I was thrown by the fact that DNS currently has > trystack.openstack.org as a CNAME alias for trystack.org, but > reviewing logs on static.openstack.org it seems it may have > previously pointed there (was receiving traffic up until around > 13:15 UTC today) so if you want to just glom that onto the current > trystack.org redirect that may make the most sense and we can move > forward tearing down the old infrastructure for it. 
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 12:10 PM > Yeah... my only concern is that if traffic is actually getting there, > a redirect to the same place trystack.org is going might be helpful. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jeremy Stanley > April 30, 2018 at 12:02 PM > On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: > [...] > [...] > > Since I don't think the trystack.o.o site ever found its way fully > into production, it may make more sense for us to simply delete the > records for it from DNS. Someone else probably knows more about the > prior state of it than I though. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 11:07 AM > > >> Jeremy Stanley >> April 30, 2018 at 10:12 AM >> [...] >> >> Yes, before the TryStack effort was closed down, there had been a >> plan for trystack.org to redirect to a trystack.openstack.org site >> hosted in the community infrastructure. > When we talked to trystack we agreed to redirect trystack.org to > https://openstack.org/software/start since that presents alternative > options for people to "try openstack". My suggestion would be to > redirect trystack.openstack.org to the same spot, but certainly open > to other suggestions :) >> At this point I expect we >> can just rip out the section for it from >> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp >> as DNS appears to no longer be pointed there. >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 9:34 AM >> I'm working on redirecting trystack.openstack.org to >> openstack.org/software/start. We have redirects in place for >> trystack.org, but didn't realize trystack.openstack.org as a thing as >> well. >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Paul Belanger >> April 30, 2018 at 9:23 AM >> The code is hosted by openstack-infra[1], if somebody would like to >> propose a >> patch with the new information. 
>> >> [1] http://git.openstack.org/cgit/openstack-infra/trystack-site >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jens Harbott >> April 30, 2018 at 4:37 AM >> >> Seems it would be great if https://trystack.openstack.org/ would be >> updated with this information, according to comments in #openstack >> users are still landing on that page and try to get a stack there in >> vain. >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy Mcarthur >> March 26, 2018 at 5:51 PM >> Hi everyone, >> >> We recently made the tough decision, in conjunction with the >> dedicated volunteers that run TryStack, to end the service as of >> March 29, 2018. For those of you that used it, thank you for being >> part of the TryStack community. >> >> The good news is that you can find more resources to try OpenStack at >> http://www.openstack.org/start, including the Passport Program >> , where you can test on any >> participating public cloud. If you are looking to test different >> tools or application stacks with OpenStack clouds, you should check >> out Open Lab . >> >> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, >> and the many other volunteers who have managed this valuable service >> for the last several years! Your contribution to OpenStack was >> noticed and appreciated by many in the community. >> >> Cheers, >> Jimmy >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed May 2 15:14:07 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 2 May 2018 17:14:07 +0200 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling Message-ID: Hello everyone, Now that we are all part-time, I'd like to toy with a new idea, proposed in the past by Jesse, to rotate the duties with people who are involved in OSA, or want to get involved more (it's not restricted to core developers!). One of the first duties to be handled this way could be the weekly meeting. Handling the meeting is not that hard, it just takes time to prepare, and to facilitate. I think everyone should step into this, not only core developers, but core developers are now expected to run the meetings when their turn comes. What are the actions to take: - Prepare the triage. Generate the list of the bugs for the week. - Ping people with the triage links around 1h before the weekly meeting. 
It would give them time to get prepared for the meeting, possibly updating the agenda, and to read the current bugs - Ping people at the beginning of the meeting - Chair the meeting: The structure of the meeting is now always the same, a recap of the week, and handling the bug triage. - After the meeting we would ask who is volunteering to run the next meeting, and if none, a meeting chair will be selected amongst core contributors at random. Thank you for your understanding. Jean-Philippe Evrard (evrardjp) From mnaser at vexxhost.com Wed May 2 15:26:51 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 2 May 2018 11:26:51 -0400 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: On Wed, May 2, 2018 at 11:14 AM, Jean-Philippe Evrard wrote: > Hello everyone, > > Now that we are all part-time, I'd like to toy with a new idea, > proposed in the past by Jesse, to rotate the duties with people who > are involved in OSA, or want to get involved more (it's not restricted > to core developers!). > > One of the first duties to be handled this way could be the weekly meeting. +1 I think that's something that we can share amongst us as a responsibility and take turns doing. > Handling the meeting is not that hard, it just takes time to prepare, > and to facilitate. > > I think everyone should step into this, not only core developers, but > core developers are now expected to run the meetings when their turn > comes. > > > What are the actions to take: > - Prepare the triage. Generate the list of the bugs for the week. > - Ping people with the triage links around 1h before the weekly > meeting. It would give them time to get prepared for the meeting, > possibly updating the agenda, and to read the current bugs > - Ping people at the beginning of the meeting > - Chair the meeting: The structure of the meeting is now always > the same, a recap of the week, and handling the bug triage. > - After the meeting we would ask who is volunteering to run the next > meeting, and if none, a meeting chair will be selected amongst core > contributors at random. > > Thank you for your understanding. 
>>> >>> In order to set up the publishing jobs for that repo so the content >>> goes to docs.openstack.org, we need to settle the ownership of the >>> repository. >>> >>> I think it makes sense for the documentation team to "own" it, but >>> I also think it makes sense for it to have its own review team >>> because it's a bit different from the rest of the docs and we may >>> be able to recruit folks to help who might not want to commit to >>> being core reviewers for all of the documentation repositories. The >>> TC members would also like to be reviewers, to get things going. >>> >>> So, is the documentation team willing to add the new "constellations" >>> repository under their umbrella? Or should we keep it as a TC-owned >>> repository for now? >> >> I'm fine having it as parts of the docs team. The docs PTL should be >> part of the review team for sure, >> >> Andreas > > Yeah, I wasn't really clear there: I intend to set up the documentation > and TC teams as members of the new team, so that all members of both > groups can be reviewers of the new repository. +1. What would be the reviewer criteria? Majority vote for adding new constellations and 2x +2 for changes maybe? Or just 2x +2 for everything? Just to clarify, since the TC reviews usually operate pretty differently to other code reviews. thanks, ZB From cdwilde at gmail.com Wed May 2 15:54:10 2018 From: cdwilde at gmail.com (David Wilde) Date: Wed, 02 May 2018 15:54:10 +0000 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: I am definitely +1 on this, I think it's a great idea. Thanks, Dave Wilde (d34dh0r53) On Wed, May 2, 2018 at 10:27 AM Mohammed Naser wrote: > On Wed, May 2, 2018 at 11:14 AM, Jean-Philippe Evrard > wrote: > > Hello everyone, > > > > Now that we are all part-time, I'd like to toy with a new idea, > > proposed in the past by Jesse, to rotate the duties with people who > > are involved in OSA, or want to get involved more (it's not restricted > > to core developers!). > > > > One of the first duties to be handled this way could be the weekly > meeting. > > +1 > > I think that's something that we can share amongst us as a responsibility > and > take turns doing. > > > Handling the meeting is not that hard, it just takes time to prepare, > > and to facilitate. > > > > I think everyone should step into this, not only core developers, but > > core developers are now expected to run the meetings when their turn > > comes. > > > > > > What are the actions to take: > > - Prepare the triage. Generate the list of the bugs for the week. > > - Ping people with the triage links around 1h before the weekly > > meeting. It would give them time to get prepared for meeting, > > eventually updating the agenda, and read the current bugs > > - Ping people at the beginning of the meeting > > - Chair the meeting: The structure of the meeting is now always > > the same, a recap of the week, and handling the bug triage. > > - After the meeting we would ask who is volunteer to run next > > meeting, and if none, a meeting char will be selected amongst core > > contributors at random. > > > > Thank you for your understanding. 
> > > > Jean-Philippe Evrard (evrardjp) > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Wed May 2 16:09:02 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 2 May 2018 11:09:02 -0500 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: +1, leading meetings is a great way to get folks involved in the Community and gives them some 'ownership' within the project. Amy (spotz) On Wed, May 2, 2018 at 10:14 AM, Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello everyone, > > Now that we are all part-time, I'd like to toy with a new idea, > proposed in the past by Jesse, to rotate the duties with people who > are involved in OSA, or want to get involved more (it's not restricted > to core developers!). > > One of the first duties to be handled this way could be the weekly meeting. > > Handling the meeting is not that hard, it just takes time to prepare, > and to facilitate. > > I think everyone should step into this, not only core developers, but > core developers are now expected to run the meetings when their turn > comes. > > > What are the actions to take: > - Prepare the triage. Generate the list of the bugs for the week. > - Ping people with the triage links around 1h before the weekly > meeting. It would give them time to get prepared for meeting, > eventually updating the agenda, and read the current bugs > - Ping people at the beginning of the meeting > - Chair the meeting: The structure of the meeting is now always > the same, a recap of the week, and handling the bug triage. > - After the meeting we would ask who is volunteer to run next > meeting, and if none, a meeting char will be selected amongst core > contributors at random. > > Thank you for your understanding. > > Jean-Philippe Evrard (evrardjp) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From arvindn05 at gmail.com Wed May 2 16:16:23 2018 From: arvindn05 at gmail.com (Arvind N) Date: Wed, 2 May 2018 09:16:23 -0700 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> Message-ID: > What if the API compares the original image required traits against the new image required traits, and if the new image has required traits which weren't in the original image, then (punt) fail in the API? Then you would at least have a chance > to rebuild with a new image that has required traits as long as those required traits are less than or equal to the originally validated traits for the host on which the instance is currently running. This is what I was proposing with #1, sorry if it was unclear. I will make it more explicit. 1. Reject the rebuild request, indicating that rebuilding with a new image with **different** required traits compared to the original request is not supported. If the new image has the same or a reduced set of traits as the old image, then the request will be passed through to the conductor etc. Pseudo code: > if not set(new_image.traits_required).issubset( > set(original_image.traits_required)): > raise exception On Wed, May 2, 2018 at 7:07 AM, Matt Riedemann wrote: > On 5/1/2018 5:26 PM, Arvind N wrote: >> In cases of rebuilding of an instance using a different image where the >> image traits have changed between the original launch and the rebuild, is >> it reasonable to ask to just re-launch a new instance with the new image? >> >> The argument for this approach is that given that the requirements have >> changed, we want the scheduler to pick and allocate the appropriate host >> for the instance. >> > > We don't know if the requirements have changed with the new image until we > check them. > > Here is another option: > > What if the API compares the original image required traits against the > new image required traits, and if the new image has required traits which > weren't in the original image, then (punt) fail in the API? Then you would > at least have a chance to rebuild with a new image that has required traits > as long as those required traits are less than or equal to the originally > validated traits for the host on which the instance is currently running. > > >> The approach above also gives you consistent results vs the other >> approaches where the rebuild may or may not succeed depending on how the >> original allocation of resources went. >> >> > Consistently frustrating, I agree. :) Because as a user, I can rebuild > with some images (that don't have required traits) and can't rebuild with > other images (that do have required traits). > > I see no difference with this and being able to rebuild (with a new image) > some instances (image-backed) and not others (volume-backed). Given that, I > expect if we punt on this, someone will just come along asking for the > support later. Could be a couple of years from now when everyone has moved > on and it then becomes someone else's problem. > > For example (from Alex Xu), if you launched an instance on a host which has >> two SRIOV NICs. One is a normal SRIOV NIC (A), the other has some kind of >> offload feature (B). 
>> So, the original request is: resources=SRIOV_VF:1 The instance gets a VF >> from the normal SRIOV NIC (A). >> >> But with a new image, the new request is: resources=SRIOV_VF:1 >> traits=HW_NIC_OFFLOAD_XX >> >> With all the solutions discussed in the thread, a rebuild request like >> above may or may not succeed depending on whether NIC A or NIC B was >> allocated during the initial launch. >> >> Remember that in rebuild new allocations don't happen; we have to reuse >> the existing allocations. >> >> Given the above background, there seem to be 2 competing options. >> >> 1. Fail in the API saying you can't rebuild with a new image with new >> required traits. >> >> 2. Look at the current allocations for the instance and try to match the >> new requirement from the image with the allocations. >> >> With #1, we get consistent results with regard to how rebuilds are treated >> when the image traits changed. >> >> With #2, the rebuild may or may not succeed, depending on how well the >> original allocations match up with the new requirements. >> >> #2 will also need to account for handling preferred traits or >> granular resource traits if we decide to implement them for images at some >> point... >> > > Option 10: Don't support image-defined traits at all. I know that won't > happen though. > > At this point I'm exhausted with this entire issue and conversation and > will probably bow out and need someone else to step in with a different > perspective, like melwitt or dansmith. > > All of the solutions are bad in their own way, either because they add > technical debt and poor user experience, or because they make rebuild more > complicated and harder to maintain for the developers. > > -- > > Thanks, > > Matt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Arvind N -------------- next part -------------- An HTML attachment was scrubbed... URL: From shyambiradarsggsit at gmail.com Wed May 2 16:23:59 2018 From: shyambiradarsggsit at gmail.com (Shyam Biradar) Date: Wed, 2 May 2018 21:53:59 +0530 Subject: [openstack-dev] Third party module commits to TripleO/Newton branch Message-ID: Hi, I am working on TrilioVault deployment integration with TripleO. This integration will contain changes to the TripleO heat templates repo and the TripleO puppet module, as shown in the attached document. We are targeting this integration for the OpenStack Newton release first. I just wanted to know if we are allowed to commit new changes which are not related to any core components to the Newton branch of the tripleo heat templates repo and the openstack tripleo repo. Thanks & Regards, Shyam Biradar, Email: shyambiradarsggsit at gmail.com, Contact: +91 8600266938. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Trilio_RedhatDirector_Integration.pdf Type: application/pdf Size: 83391 bytes Desc: not available URL: From mriedemos at gmail.com Wed May 2 16:25:25 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 11:25:25 -0500 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal Message-ID: The baremetal scheduling options were deprecated in Pike [1] and the ironic_host_manager was deprecated in Queens [2] and is now being removed [3]. Deployments must use resource classes now for baremetal scheduling. [4] The large host subset size value is also no longer needed. [5] I've gone through all of the references to "ironic_host_manager" that I could find in codesearch.o.o and updated projects accordingly [6]. Please reply ASAP to this thread and/or [3] if you have issues with this. [1] https://review.openstack.org/#/c/493052/ [2] https://review.openstack.org/#/c/521648/ [3] https://review.openstack.org/#/c/565805/ [4] https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes [5] https://review.openstack.org/565736/ [6] https://review.openstack.org/#/q/topic:exact-filters+(status:open+OR+status:merged) -- Thanks, Matt From mriedemos at gmail.com Wed May 2 16:28:24 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 11:28:24 -0500 Subject: [openstack-dev] Third party module commits to TripleO/Newton branch In-Reply-To: References: Message-ID: On 5/2/2018 11:23 AM, Shyam Biradar wrote: > I am working on TrilioVault > deployment integration with TripleO. > This integration will contain changes to TripleO heat templates repo and > tripleO puppet module as shown in attached document. > > We are targeting this integration for OpenStack Newton release first. > I just wanted to know, if we are allowed to commit new changes which are > not related to any core components to Newton branch of tripleo heat > templates repo, > openstack tripleo repo . Why wouldn't you start on master, or queens, and then move backward to the older branches? -- Thanks, Matt From mgagne at calavera.ca Wed May 2 16:40:56 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Wed, 2 May 2018 12:40:56 -0400 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: References: Message-ID: What's the state of caching_scheduler which could still be using those configs? Mathieu On Wed, May 2, 2018 at 12:25 PM, Matt Riedemann wrote: > The baremetal scheduling options were deprecated in Pike [1] and the > ironic_host_manager was deprecated in Queens [2] and is now being removed > [3]. Deployments must use resource classes now for baremetal scheduling. [4] > > The large host subset size value is also no longer needed. [5] > > I've gone through all of the references to "ironic_host_manager" that I > could find in codesearch.o.o and updated projects accordingly [6]. > > Please reply ASAP to this thread and/or [3] if you have issues with this. 
> > [1] https://review.openstack.org/#/c/493052/ > [2] https://review.openstack.org/#/c/521648/ > [3] https://review.openstack.org/#/c/565805/ > [4] > https://docs.openstack.org/ironic/latest/install/configure-nova-flavors.html#scheduling-based-on-resource-classes > [5] https://review.openstack.org/565736/ > [6] > https://review.openstack.org/#/q/topic:exact-filters+(status:open+OR+status:merged) > From mriedemos at gmail.com Wed May 2 16:49:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 11:49:46 -0500 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: References: Message-ID: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> On 5/2/2018 11:40 AM, Mathieu Gagné wrote: > What's the state of caching_scheduler which could still be using those configs? The CachingScheduler has been deprecated since Pike [1]. We discussed the CachingScheduler at the Rocky PTG in Dublin [2] and have a TODO to write a nova-manage data migration tool to create allocations in Placement for instances that were scheduled using the CachingScheduler (since Pike) which don't have their own resource allocations set in Placement (remember that starting in Pike the FilterScheduler started creating allocations in Placement rather than the ResourceTracker in nova-compute). If you're running computes that are Ocata or Newton, then the ResourceTracker in the nova-compute service should be creating the allocations in Placement for you, assuming you have the compute service configured to talk to Placement (optional in Newton, required in Ocata). [1] https://review.openstack.org/#/c/492210/ [2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement -- Thanks, Matt From mgagne at calavera.ca Wed May 2 17:00:46 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Wed, 2 May 2018 13:00:46 -0400 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> References: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> Message-ID: On Wed, May 2, 2018 at 12:49 PM, Matt Riedemann wrote: > On 5/2/2018 11:40 AM, Mathieu Gagné wrote: >> >> What's the state of caching_scheduler which could still be using those >> configs? > > > The CachingScheduler has been deprecated since Pike [1]. We discussed the > CachingScheduler at the Rocky PTG in Dublin [2] and have a TODO to write a > nova-manage data migration tool to create allocations in Placement for > instances that were scheduled using the CachingScheduler (since Pike) which > don't have their own resource allocations set in Placement (remember that > starting in Pike the FilterScheduler started creating allocations in > Placement rather than the ResourceTracker in nova-compute). > > If you're running computes that are Ocata or Newton, then the > ResourceTracker in the nova-compute service should be creating the > allocations in Placement for you, assuming you have the compute service > configured to talk to Placement (optional in Newton, required in Ocata). > > [1] https://review.openstack.org/#/c/492210/ > [2] https://etherpad.openstack.org/p/nova-ptg-rocky-placement If one can still run CachingScheduler (even if it's deprecated), I think we shouldn't remove the above options. As you can end up with a broken setup and IIUC no way to migrate to placement since migration script has yet to be written. 
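To make the missing piece concrete: a backfill along the lines Matt describes would essentially walk the instances that have no allocations and write allocations for them into placement. A rough sketch only, with hypothetical helper names (get_instances_without_allocations, compute_node_uuid and put_allocations are illustrative, not actual nova code):

    # Rough sketch of a placement allocation backfill for deployments
    # that ran the CachingScheduler. All helper names are hypothetical.
    def heal_allocations(context, placement):
        for instance in get_instances_without_allocations(context):
            # Derive the resources from the flavor; for brevity this only
            # counts the root disk, not ephemeral or swap.
            resources = {
                'VCPU': instance.flavor.vcpus,
                'MEMORY_MB': instance.flavor.memory_mb,
                'DISK_GB': instance.flavor.root_gb,
            }
            rp_uuid = compute_node_uuid(context, instance.host)
            # Writes PUT /allocations/{consumer_uuid} to the placement API.
            put_allocations(placement, instance.uuid, rp_uuid, resources)
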
-- Mathieu From jimmy.mccrory at gmail.com Wed May 2 17:06:46 2018 From: jimmy.mccrory at gmail.com (Jimmy McCrory) Date: Wed, 2 May 2018 10:06:46 -0700 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: +1 good idea On Wed, May 2, 2018 at 9:09 AM, Amy Marrich wrote: > +1, leading meetings is a great way to get folks involved in the Community > and gives them some 'ownership' within the project. > > Amy (spotz) > > On Wed, May 2, 2018 at 10:14 AM, Jean-Philippe Evrard < > jean-philippe at evrard.me> wrote: > >> Hello everyone, >> >> Now that we are all part-time, I'd like to toy with a new idea, >> proposed in the past by Jesse, to rotate the duties with people who >> are involved in OSA, or want to get involved more (it's not restricted >> to core developers!). >> >> One of the first duties to be handled this way could be the weekly >> meeting. >> >> Handling the meeting is not that hard, it just takes time to prepare, >> and to facilitate. >> >> I think everyone should step into this, not only core developers, but >> core developers are now expected to run the meetings when their turn >> comes. >> >> >> What are the actions to take: >> - Prepare the triage. Generate the list of the bugs for the week. >> - Ping people with the triage links around 1h before the weekly >> meeting. It would give them time to get prepared for meeting, >> eventually updating the agenda, and read the current bugs >> - Ping people at the beginning of the meeting >> - Chair the meeting: The structure of the meeting is now always >> the same, a recap of the week, and handling the bug triage. >> - After the meeting we would ask who is volunteer to run next >> meeting, and if none, a meeting char will be selected amongst core >> contributors at random. >> >> Thank you for your understanding. >> >> Jean-Philippe Evrard (evrardjp) >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed May 2 17:21:25 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 2 May 2018 11:21:25 -0600 Subject: [openstack-dev] [tripleo] Third party module commits to TripleO/Newton branch Message-ID: On Wed, May 2, 2018 at 10:23 AM, Shyam Biradar wrote: > Hi, > > I am working on TrilioVault deployment integration with TripleO. > This integration will contain changes to TripleO heat templates repo and > tripleO puppet module as shown in attached document. > > We are targeting this integration for OpenStack Newton release first. > I just wanted to know, if we are allowed to commit new changes which are not > related to any core components to Newton branch of tripleo heat templates > repo, > openstack tripleo repo. > No we're looking to shut down the upstream newton repos in the near future (like this month[0]). We're only keeping it open at this point for fast-forward upgrade type of issues. 
You would need to work on master and backport as far as available and downstream the rest. Thanks, -Alex [0] http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-01-14.00.log.html#l-164 > > > Thanks & Regards, > Shyam Biradar, > Email: shyambiradarsggsit at gmail.com, > Contact: +91 8600266938. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Wed May 2 17:39:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 12:39:03 -0500 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: References: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> Message-ID: <60821a79-42a4-dfa4-cc65-2fbc068f8b35@gmail.com> On 5/2/2018 12:00 PM, Mathieu Gagné wrote: > If one can still run CachingScheduler (even if it's deprecated), I > think we shouldn't remove the above options. > As you can end up with a broken setup and IIUC no way to migrate to > placement since migration script has yet to be written. You're currently on cells v1 on mitaka right? So you have some time to get this sorted out before getting to Rocky where the IronicHostManager is dropped. I know you're just one case, but I don't know how many people are really running the CachingScheduler with ironic either, so it might be rare. It would be nice to get other operator input here, like I'm guessing CERN has their cells carved up so that certain cells are only serving baremetal requests while other cells are only VMs? FWIW, I think we can also backport the data migration CLI to stable branches once we have it available so you can do your migration in let's say Queens before getting to Rocky. -- Thanks, Matt From mgagne at calavera.ca Wed May 2 17:48:06 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Wed, 2 May 2018 13:48:06 -0400 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: <60821a79-42a4-dfa4-cc65-2fbc068f8b35@gmail.com> References: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> <60821a79-42a4-dfa4-cc65-2fbc068f8b35@gmail.com> Message-ID: On Wed, May 2, 2018 at 1:39 PM, Matt Riedemann wrote: > > I know you're just one case, but I don't know how many people are really > running the CachingScheduler with ironic either, so it might be rare. It > would be nice to get other operator input here, like I'm guessing CERN has > their cells carved up so that certain cells are only serving baremetal > requests while other cells are only VMs? I found FilterScheduler to be near impossible to use with Ironic due to the huge number of hypervisors it had to handle. Using CachingScheduler made a huge difference, like day and night. > FWIW, I think we can also backport the data migration CLI to stable branches > once we have it available so you can do your migration in let's say Queens > before getting to Rocky. 
-- Mathieu From zigo at debian.org Wed May 2 18:28:28 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 2 May 2018 20:28:28 +0200 Subject: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104) In-Reply-To: References: <5589ecb4-49ef-8f59-33ee-8fbe510e572d@debian.org> Message-ID: On 05/02/2018 10:25 AM, Chris Dent wrote: > On Wed, 2 May 2018, Thomas Goirand wrote: > >> What was disturbing was that, doing the same request with curl worked >> perfectly. Even more disturbing, it looked like I was having the issue >> nearly always in virtualbox, but not always in real hardware, where it >> sometimes worked. > > What was making the request in the first place? It fails in X, but > works in curl. What is X? For example, nova-compute querying nova-placement-api. Another example: openstackclient. It happened to me trying to configure keystone when running puppet-openstack, for example, but on the command line directly as well, simply trying to add users, projects, etc. This looks to me like a general problem in all of the OpenStack WSGI applications. >> Anyway, finally, I figured out that adding: >> >> --rem-header Content-Length > > You added this arg to what? As a parameter to uwsgi, so that it removes the Content-Length header that the WSGI application sends. >> This however, looks like a workaround rather than a fix, and I wonder if >> there's a real issue somewhere that needs to be fixed in a better way, >> maybe in openstackclient or some other component... > > Yeah, it sounds like something could be setting a bad value for the > content length header and uwsgi is timing out while trying to read > that much data (meaning, it is believing the content-length header) > but there isn't anything actually there. > > Another option is that there are buffer size problems in the uwsgi > configuration but it's hard to speculate because it is not clear > what requests and tools you're actually talking about here. When attempting to google for the issue, I saw a lot of people who had fixed this problem by adding --buffer-size 65535, as uwsgi's default 4k header buffer was not enough. I also have this option set, as it seems a reasonable thing to have, but that was not enough to fix the problem. Only the --rem-header thing did. If you want to try, you can simply use the stretch-queens.debian.net repository with Glance (or simply Debian Sid), and edit /etc/glance/glance-api-wsgi.ini to change the uwsgi parameters (I've just switched Glance to uwsgi, since it now works...). I haven't checked with Glance, but since I saw the problem with nova-placement-api, cinder-api and keystone, I don't see why it wouldn't happen there. Cheers, Thomas Goirand (zigo) From doug at doughellmann.com Wed May 2 18:37:44 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 02 May 2018 14:37:44 -0400 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: <1525286227-sup-5048@lrrr.local> Excerpts from Jean-Philippe Evrard's message of 2018-05-02 17:14:07 +0200: > Hello everyone, > > Now that we are all part-time, I'd like to toy with a new idea, > proposed in the past by Jesse, to rotate the duties with people who > are involved in OSA, or want to get involved more (it's not restricted > to core developers!). > > One of the first duties to be handled this way could be the weekly meeting. > > Handling the meeting is not that hard, it just takes time to prepare, > and to facilitate. 
> > I think everyone should step into this, not only core developers, but > core developers are now expected to run the meetings when their turn > comes. > > > What are the actions to take: > - Prepare the triage. Generate the list of the bugs for the week. > - Ping people with the triage links around 1h before the weekly > meeting. It would give them time to get prepared for meeting, > eventually updating the agenda, and read the current bugs > - Ping people at the beginning of the meeting > - Chair the meeting: The structure of the meeting is now always > the same, a recap of the week, and handling the bug triage. > - After the meeting we would ask who is volunteer to run next > meeting, and if none, a meeting char will be selected amongst core > contributors at random. > > Thank you for your understanding. > > Jean-Philippe Evrard (evrardjp) > This is a great idea for sharing the load of organizing the team! Doug From doug at doughellmann.com Wed May 2 19:11:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 02 May 2018 15:11:04 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <46609486-e82b-81de-7528-ee08b0f5e991@redhat.com> References: <1525183287-sup-1728@lrrr.local> <1525206014-sup-535@lrrr.local> <46609486-e82b-81de-7528-ee08b0f5e991@redhat.com> Message-ID: <1525288213-sup-8074@lrrr.local> Excerpts from Zane Bitter's message of 2018-05-02 11:38:55 -0400: > On 01/05/18 16:21, Doug Hellmann wrote: > > Excerpts from Andreas Jaeger's message of 2018-05-01 21:51:19 +0200: > >> On 05/01/2018 04:08 PM, Doug Hellmann wrote: > >>> The TC has had an item on our backlog for a while (a year?) to > >>> document "constellations" of OpenStack components to make it easier > >>> for deployers and users to understand which parts they need to have > >>> the features they want [1]. > >>> > >>> John Garbutt has started writing the first such document [2], but > >>> as we talked about the content we agreed the TC governance repository > >>> is not the best home for it, so I have proposed creating a new > >>> repository [3]. > >>> > >>> In order to set up the publishing jobs for that repo so the content > >>> goes to docs.openstack.org, we need to settle the ownership of the > >>> repository. > >>> > >>> I think it makes sense for the documentation team to "own" it, but > >>> I also think it makes sense for it to have its own review team > >>> because it's a bit different from the rest of the docs and we may > >>> be able to recruit folks to help who might not want to commit to > >>> being core reviewers for all of the documentation repositories. The > >>> TC members would also like to be reviewers, to get things going. > >>> > >>> So, is the documentation team willing to add the new "constellations" > >>> repository under their umbrella? Or should we keep it as a TC-owned > >>> repository for now? > >> > >> I'm fine having it as parts of the docs team. The docs PTL should be > >> part of the review team for sure, > >> > >> Andreas > > > > Yeah, I wasn't really clear there: I intend to set up the documentation > > and TC teams as members of the new team, so that all members of both > > groups can be reviewers of the new repository. > > +1. What would be the reviewer criteria? Majority vote for adding new > constellations and 2x +2 for changes maybe? Or just 2x +2 for > everything? Just to clarify, since the TC reviews usually operate pretty > differently to other code reviews. 
> > thanks, > ZB > This is just documentation, so 2x +2 is what I had in mind. Doug From doug at doughellmann.com Wed May 2 19:13:23 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 02 May 2018 15:13:23 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <20180502155636.d678b1cf02dc20ab14813e19@redhat.com> References: <1525183287-sup-1728@lrrr.local> <20180502155636.d678b1cf02dc20ab14813e19@redhat.com> Message-ID: <1525288286-sup-5572@lrrr.local> Excerpts from Petr Kovar's message of 2018-05-02 15:56:36 +0200: > On Tue, 01 May 2018 10:08:23 -0400 > Doug Hellmann wrote: > > > The TC has had an item on our backlog for a while (a year?) to > > document "constellations" of OpenStack components to make it easier > > for deployers and users to understand which parts they need to have > > the features they want [1]. > > > > John Garbutt has started writing the first such document [2], but > > as we talked about the content we agreed the TC governance repository > > is not the best home for it, so I have proposed creating a new > > repository [3]. > > > > In order to set up the publishing jobs for that repo so the content > > goes to docs.openstack.org, we need to settle the ownership of the > > repository. > > > > I think it makes sense for the documentation team to "own" it, but > > I also think it makes sense for it to have its own review team > > because it's a bit different from the rest of the docs and we may > > be able to recruit folks to help who might not want to commit to > > being core reviewers for all of the documentation repositories. The > > TC members would also like to be reviewers, to get things going. > > > > So, is the documentation team willing to add the new "constellations" > > repository under their umbrella? > > Fine with me and thanks for bringing this up! > > From the top of my head, this will also require updating the doc contrib > guide and the docs review dashboard. > > https://docs.openstack.org/doc-contrib-guide/team-structure.html > https://is.gd/openstackdocsdashboard > > Thanks, > pk > Thanks, Petr. I will make a note of that and look for a volunteer to take care of it. Doug From gr at ham.ie Wed May 2 19:15:14 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 2 May 2018 20:15:14 +0100 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <1525288213-sup-8074@lrrr.local> References: <1525183287-sup-1728@lrrr.local> <1525206014-sup-535@lrrr.local> <46609486-e82b-81de-7528-ee08b0f5e991@redhat.com> <1525288213-sup-8074@lrrr.local> Message-ID: <24a33cf4-73a4-e973-0091-d89a6149ebde@ham.ie> On 02/05/18 20:11, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-05-02 11:38:55 -0400: >> On 01/05/18 16:21, Doug Hellmann wrote: >>> Excerpts from Andreas Jaeger's message of 2018-05-01 21:51:19 +0200: >>>> On 05/01/2018 04:08 PM, Doug Hellmann wrote: >>>>> The TC has had an item on our backlog for a while (a year?) to >>>>> document "constellations" of OpenStack components to make it easier >>>>> for deployers and users to understand which parts they need to have >>>>> the features they want [1]. >>>>> >>>>> John Garbutt has started writing the first such document [2], but >>>>> as we talked about the content we agreed the TC governance repository >>>>> is not the best home for it, so I have proposed creating a new >>>>> repository [3]. 
>>>>> >>>>> In order to set up the publishing jobs for that repo so the content >>>>> goes to docs.openstack.org, we need to settle the ownership of the >>>>> repository. >>>>> >>>>> I think it makes sense for the documentation team to "own" it, but >>>>> I also think it makes sense for it to have its own review team >>>>> because it's a bit different from the rest of the docs and we may >>>>> be able to recruit folks to help who might not want to commit to >>>>> being core reviewers for all of the documentation repositories. The >>>>> TC members would also like to be reviewers, to get things going. >>>>> >>>>> So, is the documentation team willing to add the new "constellations" >>>>> repository under their umbrella? Or should we keep it as a TC-owned >>>>> repository for now? >>>> >>>> I'm fine having it as parts of the docs team. The docs PTL should be >>>> part of the review team for sure, >>>> >>>> Andreas >>> >>> Yeah, I wasn't really clear there: I intend to set up the documentation >>> and TC teams as members of the new team, so that all members of both >>> groups can be reviewers of the new repository. >> >> +1. What would be the reviewer criteria? Majority vote for adding new >> constellations and 2x +2 for changes maybe? Or just 2x +2 for >> everything? Just to clarify, since the TC reviews usually operate pretty >> differently to other code reviews. >> >> thanks, >> ZB >> > > This is just documentation, so 2x +2 is what I had in mind. > > Doug > Is it just docs though? I thought the point of constellations was for a curated set of projects per use case? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From jimmy at openstack.org Wed May 2 19:44:01 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 02 May 2018 14:44:01 -0500 Subject: [openstack-dev] Zuul memory improvements In-Reply-To: <87wowo4tyz.fsf@meyer.lemoncheese.net> References: <87wowo4tyz.fsf@meyer.lemoncheese.net> Message-ID: <5AEA1501.3090809@openstack.org> Congrats on the improvements, Jim! Sounds like this is going to make a huge difference. Go Zuul! Cheers, Jimmy > James E. Blair > April 30, 2018 at 10:03 AM > Hi, > > We recently made some changes to Zuul which you may want to know about > if you interact with a large number of projects. > > Previously, each change to Zuul which updated Zuul's configuration > (e.g., a change to a project's zuul.yaml file) would consume a > significant amount of memory. If we had too many of these in the queue > at a time, the server would run out of RAM. To mitigate this, we asked > folks who regularly submit large numbers of configuration changes to > only submit a few at a time. > > We have updated Zuul so it now caches much more of its configuration, > and the cost in memory of an additional configuration change is very > small. An added bonus: they are computed more quickly as well. > > Of course, there's still a cost to every change pushed up to Gerrit -- > each one uses test nodes, for instance, so if you need to make a large > number of changes, please do consider the impact to the whole system and > other users. However, there's no longer a need to severely restrict > configuration changes as a class -- consider them as any other change. 
> > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Wed May 2 19:54:18 2018 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Wed, 2 May 2018 19:54:18 +0000 Subject: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104) In-Reply-To: References: <5589ecb4-49ef-8f59-33ee-8fbe510e572d@debian.org> Message-ID: I noticed something similar when deploying Keystone using nginx in the lab, and I'm pretty sure I fixed it by setting uwsgi_ignore_client_abort to on. http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html In addition to that flag I also have:

    client_header_buffer_size 64k;
    uwsgi_buffer_size 8k;
    uwsgi_read_timeout 600;
    uwsgi_send_timeout 600;

Best Regards, Erik Olof Gunnar Andersson -----Original Message----- From: Thomas Goirand Sent: Wednesday, May 2, 2018 11:28 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] Problems with all OpenStack APIs & uwsgi with Content-Length and connection reset by peer (ie: 104) On 05/02/2018 10:25 AM, Chris Dent wrote: > On Wed, 2 May 2018, Thomas Goirand wrote: > >> What was disturbing was that, doing the same request with curl worked >> perfectly. Even more disturbing, it looked like I was having the >> issue nearly always in virtualbox, but not always in real hardware, >> where it sometimes worked. > > What was making the request in the first place? It fails in X, but > works in curl. What is X? For example, nova-compute querying nova-placement-api. Another example: openstackclient. It happened to me trying to configure keystone when running puppet-openstack, for example, but on the command line directly as well, simply trying to add users, projects, etc. This looks to me like a general problem in all of the OpenStack WSGI applications. >> Anyway, finally, I figured out that adding: >> >> --rem-header Content-Length > > You added this arg to what? As a parameter to uwsgi, so that it removes the Content-Length header that the WSGI application sends. >> This however, looks like a workaround rather than a fix, and I wonder >> if there's a real issue somewhere that needs to be fixed in a better >> way, maybe in openstackclient or some other component... > > Yeah, it sounds like something could be setting a bad value for the > content length header and uwsgi is timing out while trying to read > that much data (meaning, it is believing the content-length header) > but there isn't anything actually there. > > Another option is that there are buffer size problems in the uwsgi > configuration but it's hard to speculate because it is not clear what > requests and tools you're actually talking about here. When attempting to google for the issue, I saw a lot of people who had fixed this problem by adding --buffer-size 65535, as uwsgi's default 4k header buffer was not enough. I also have this option set, as it seems a reasonable thing to have, but that was not enough to fix the problem. Only the --rem-header thing did. 
If you want to try, you can simply use the stretch-queens.debian.net repository with Glance (or simply Debian Sid), and edit /etc/glance/glance-api-wsgi.ini to change the uwsgi parameters (I've just switched Glance to uwsgi, since it now works...). I haven't checked with Glance, but since I saw the problem with nova-placement-api, cinder-api and keystone, I don't see why it wouldn't happen there. Cheers, Thomas Goirand (zigo) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hongbin034 at gmail.com Wed May 2 20:40:07 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 2 May 2018 16:40:07 -0400 Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team Message-ID: Hi all, I would like to announce the following change on the Zun core reviewers team: + Ji Wei Ji Wei has been working on Zun for a while. His contributions include blueprints, bug fixes, code reviews, etc. In particular, I would like to highlight that he has implemented two blueprints [1][2], both of which are not easy to implement. Based on his high-quality work in the past, I believe he will serve the core reviewer role very well. This proposal was voted on within the existing core team and was unanimously approved. Welcome to the core team, Ji Wei. [1] https://blueprints.launchpad.net/zun/+spec/glance-support-tag [2] https://blueprints.launchpad.net/zun/+spec/zun-rebuild-on-local-node Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed May 2 21:28:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 02 May 2018 17:28:40 -0400 Subject: [openstack-dev] [tc][docs] documenting openstack "constellations" In-Reply-To: <24a33cf4-73a4-e973-0091-d89a6149ebde@ham.ie> References: <1525183287-sup-1728@lrrr.local> <1525206014-sup-535@lrrr.local> <46609486-e82b-81de-7528-ee08b0f5e991@redhat.com> <1525288213-sup-8074@lrrr.local> <24a33cf4-73a4-e973-0091-d89a6149ebde@ham.ie> Message-ID: <1525295669-sup-392@lrrr.local> Excerpts from Graham Hayes's message of 2018-05-02 20:15:14 +0100: > On 02/05/18 20:11, Doug Hellmann wrote: > > Excerpts from Zane Bitter's message of 2018-05-02 11:38:55 -0400: > >> On 01/05/18 16:21, Doug Hellmann wrote: > >>> Excerpts from Andreas Jaeger's message of 2018-05-01 21:51:19 +0200: > >>>> On 05/01/2018 04:08 PM, Doug Hellmann wrote: > >>>>> The TC has had an item on our backlog for a while (a year?) to > >>>>> document "constellations" of OpenStack components to make it easier > >>>>> for deployers and users to understand which parts they need to have > >>>>> the features they want [1]. > >>>>> > >>>>> John Garbutt has started writing the first such document [2], but > >>>>> as we talked about the content we agreed the TC governance repository > >>>>> is not the best home for it, so I have proposed creating a new > >>>>> repository [3]. 
> >>>>> > >>>>> In order to set up the publishing jobs for that repo so the content > >>>>> goes to docs.openstack.org, we need to settle the ownership of the > >>>>> repository. > >>>>> > >>>>> I think it makes sense for the documentation team to "own" it, but > >>>>> I also think it makes sense for it to have its own review team > >>>>> because it's a bit different from the rest of the docs and we may > >>>>> be able to recruit folks to help who might not want to commit to > >>>>> being core reviewers for all of the documentation repositories. The > >>>>> TC members would also like to be reviewers, to get things going. > >>>>> > >>>>> So, is the documentation team willing to add the new "constellations" > >>>>> repository under their umbrella? Or should we keep it as a TC-owned > >>>>> repository for now? > >>>> > >>>> I'm fine having it as parts of the docs team. The docs PTL should be > >>>> part of the review team for sure, > >>>> > >>>> Andreas > >>> > >>> Yeah, I wasn't really clear there: I intend to set up the documentation > >>> and TC teams as members of the new team, so that all members of both > >>> groups can be reviewers of the new repository. > >> > >> +1. What would be the reviewer criteria? Majority vote for adding new > >> constellations and 2x +2 for changes maybe? Or just 2x +2 for > >> everything? Just to clarify, since the TC reviews usually operate pretty > >> differently to other code reviews. > >> > >> thanks, > >> ZB > >> > > > > This is just documentation, so 2x +2 is what I had in mind. > > > > Doug > > > > Is it just docs though? I thought the point of constellations > was for a curated set of projects per use case? The point of constellations is to help users find the components they need to achieve their goals, by giving them smaller versions of the OpenStack component map to navigate. It's the sort of thing a product management group would produce as part of the product documentation, if we had such a group. If someone disagrees strongly about the components of a particular constellation, some items can be listed as alternatives or a completely separate constellation can be created to support the different use case. Neither solution requires the TC, as an overall governing body, to make a decision for the entire community using our most strict consensus model for voting. If we set it up in a way that only the TC can make decisions about the contents of constellations, then we'll never produce a useful number of them. Doug From pabelanger at redhat.com Wed May 2 22:11:43 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 2 May 2018 18:11:43 -0400 Subject: [openstack-dev] [all] Gerrit server replacement finished In-Reply-To: <20180502142452.GA9614@localhost.localdomain> References: <20180410184829.GA16085@localhost.localdomain> <20180419154912.GA13701@localhost.localdomain> <20180425142658.GA7028@localhost.localdomain> <20180502142452.GA9614@localhost.localdomain> Message-ID: <20180502221143.GA14348@localhost.localdomain> Hello from Infra. Gerrit maintenance has concluded successfully, and Gerrit is now running happily on Ubuntu Xenial. We were able to save and restore the queues from zuul, but as always be sure to check your patches as a recheck may be required. If you have any questions or comments, please reach out to us in #openstack-infra. I'll leave the text below in case anybody missed our previous emails. --- It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. 
The new IP addresses will be as follows:

    IPv4 -> 104.130.246.32
    IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

    IPv4 -> 104.130.246.91
    IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered networks with port 29418/tcp explicitly allowed to the current review.openstack.org IP addresses, and so are providing this information as far in advance as we can to allow them time to update their firewalls accordingly. Note that some users dealing with egress filtering may find it easier to switch their local configuration to use Gerrit's REST API via HTTPS instead, and the current release of git-review has support for that workflow as well. http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html We will follow up with final confirmation in subsequent announcements. Thanks, Paul From jaypipes at gmail.com Wed May 2 22:39:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 2 May 2018 18:39:54 -0400 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> Message-ID: On 05/02/2018 10:07 AM, Matt Riedemann wrote: > On 5/1/2018 5:26 PM, Arvind N wrote: >> In cases of rebuilding of an instance using a different image where >> the image traits have changed between the original launch and the >> rebuild, is it reasonable to ask to just re-launch a new instance with >> the new image? >> >> The argument for this approach is that given that the requirements >> have changed, we want the scheduler to pick and allocate the >> appropriate host for the instance. > > We don't know if the requirements have changed with the new image until > we check them. > > Here is another option: > > What if the API compares the original image required traits against the > new image required traits, and if the new image has required traits > which weren't in the original image, then (punt) fail in the API? Then > you would at least have a chance to rebuild with a new image that has > required traits as long as those required traits are less than or equal > to the originally validated traits for the host on which the instance is > currently running. That's pretty much what I had suggested earlier, yeah. > Option 10: Don't support image-defined traits at all. I know that won't > happen though. > > At this point I'm exhausted with this entire issue and conversation and > will probably bow out and need someone else to step in with a different > perspective, like melwitt or dansmith. > > All of the solutions are bad in their own way, either because they add > technical debt and poor user experience, or because they make rebuild > more complicated and harder to maintain for the developers. I hear your frustration. And I agree all of the solutions are bad in their own way. My personal preference is to add less technical debt and go with a solution that checks if image traits have changed in nova-api and if so, simply refuse to perform a rebuild. 
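A minimal sketch of what that nova-api check could look like, assuming hypothetical helpers (get_image_properties and RebuildNotSupported are illustrative names, not actual nova internals):

    # Illustrative only: refuse a rebuild when the new image introduces
    # required traits that the original image did not have.
    def _check_rebuild_image_traits(context, instance, new_image_ref):
        def required_traits(props):
            # Image traits use properties of the form trait:<NAME>=required.
            return {k[len('trait:'):] for k, v in props.items()
                    if k.startswith('trait:') and v == 'required'}

        old_props = get_image_properties(context, instance.image_ref)
        new_props = get_image_properties(context, new_image_ref)
        added = required_traits(new_props) - required_traits(old_props)
        if added:
            raise RebuildNotSupported(
                'new image adds required traits: %s'
                % ', '.join(sorted(added)))

The subset comparison mirrors the pseudo code earlier in the thread: a rebuild is only refused when the new image introduces required traits that the original image did not already carry.
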
Best, -jay From mriedemos at gmail.com Wed May 2 22:45:37 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 17:45:37 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> Message-ID: On 5/2/2018 5:39 PM, Jay Pipes wrote: > My personal preference is to add less technical debt and go with a > solution that checks if image traits have changed in nova-api and if so, > simply refuse to perform a rebuild. So, what if when I created my server, the image I used, let's say image1, had required trait A and that fit the host. Then some external service removes (or somehow changes) trait A from the compute node resource provider (because people can and will do this, there are a few vmware specs up that rely on being able to manage traits out of band from nova), and then I rebuild my server with image2 that has required trait A. That would match the original trait A in image1 and we'd say, "yup, lgtm!" and do the rebuild even though the compute node resource provider wouldn't have trait A anymore. Having said that, it could technically happen before traits if the operator changed something on the underlying compute host which invalidated instances running on that host, but I'd think if that happened the operator would be migrating everything off the host and disabling it from scheduling before making whatever that kind of change would be, let's say they change the hypervisor or something less drastic but still image-property-invalidating. -- Thanks, Matt From mgariepy at ccs.usherbrooke.ca Wed May 2 22:46:52 2018 From: mgariepy at ccs.usherbrooke.ca (Marc Gariepy) Date: Wed, 02 May 2018 18:46:52 -0400 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: <1525286227-sup-5048@lrrr.local> Message-ID: An HTML attachment was scrubbed... URL: From arvindn05 at gmail.com Wed May 2 23:06:03 2018 From: arvindn05 at gmail.com (Arvind N) Date: Wed, 2 May 2018 16:06:03 -0700 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> Message-ID: Isn't this an existing issue with traits specified in the flavor as well? A server is created using flavor1 requiring trait A on RP1. Before the rebuild is called, the underlying RP1 can be updated to remove trait A, and when a rebuild is requested (regardless of whether the image is updated or not), we skip scheduling and allow the rebuild to go through. Now, even though flavor1 requests trait A and the underlying RP1 does not have that trait, the rebuild will succeed... I think maybe there should be some kind of report or query which runs periodically to ensure continued conformance with respect to running instances and their traits. But since traits are intended to provide hints for scheduling, this is a different problem to solve, IMO. 
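As a rough illustration, such a periodic audit could look like the following, where every helper name is hypothetical rather than an existing nova or placement call:

    # Hypothetical audit: flag instances whose required traits are no
    # longer offered by the resource provider they are running on.
    def audit_trait_conformance(context, placement):
        violations = []
        for instance in list_instances(context):
            required = required_traits_for(context, instance)
            # GET /resource_providers/{uuid}/traits on the placement API.
            offered = set(placement.get_provider_traits(
                context, instance.node_rp_uuid))
            missing = required - offered
            if missing:
                violations.append((instance.uuid, sorted(missing)))
        # Only reports; deciding what to do about violations (migrate,
        # alert, ignore) is the separate problem mentioned above.
        return violations
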
On Wed, May 2, 2018 at 3:45 PM, Matt Riedemann wrote: > On 5/2/2018 5:39 PM, Jay Pipes wrote: > >> My personal preference is to add less technical debt and go with a >> solution that checks if image traits have changed in nova-api and if so, >> simply refuse to perform a rebuild. >> > > So, what if when I created my server, the image I used, let's say image1, > had required trait A and that fit the host. > > Then some external service removes (or somehow changes) trait A from the > compute node resource provider (because people can and will do this, there > are a few vmware specs up that rely on being able to manage traits out of > band from nova), and then I rebuild my server with image2 that has required > trait A. That would match the original trait A in image1 and we'd say, > "yup, lgtm!" and do the rebuild even though the compute node resource > provider wouldn't have trait A anymore. > > Having said that, it could technically happen before traits if the > operator changed something on the underlying compute host which invalidated > instances running on that host, but I'd think if that happened the operator > would be migrating everything off the host and disabling it from scheduling > before making whatever that kind of change would be, let's say they change > the hypervisor or something less drastic but still image property > invalidating. > > -- > > Thanks, > > Matt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Arvind N -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Wed May 2 23:11:18 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 2 May 2018 16:11:18 -0700 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> Message-ID: <195398f7-00e6-93a6-a95b-4ebc9b42f2da@gmail.com> On Wed, 2 May 2018 17:45:37 -0500, Matt Riedemann wrote: > On 5/2/2018 5:39 PM, Jay Pipes wrote: >> My personal preference is to add less technical debt and go with a >> solution that checks if image traits have changed in nova-api and if so, >> simply refuse to perform a rebuild. > > So, what if when I created my server, the image I used, let's say > image1, had required trait A and that fit the host. > > Then some external service removes (or somehow changes) trait A from the > compute node resource provider (because people can and will do this, > there are a few vmware specs up that rely on being able to manage traits > out of band from nova), and then I rebuild my server with image2 that > has required trait A. That would match the original trait A in image1 > and we'd say, "yup, lgtm!" and do the rebuild even though the compute > node resource provider wouldn't have trait A anymore. 
> > Having said that, it could technically happen before traits if the > operator changed something on the underlying compute host which > invalidated instances running on that host, but I'd think if that > happened the operator would be migrating everything off the host and > disabling it from scheduling before making whatever that kind of change > would be, let's say they change the hypervisor or something less drastic > but still image property invalidating. This is a scenario I was thinking about too. In the land of software licenses, this would be analogous to removing a license from a compute host, say. The instance is already there but should we let a rebuild proceed that is going to violate the image traits currently supported by that host? Do we potentially prolong the life of that instance by letting it be re-imaged? I'm late to this thread but I finally went through the replies and my thought is, we should do a pre-flight check to verify with placement whether the image traits requested are 1) supported by the compute host the instance is residing on and 2) coincide with the already-existing allocations. Instead of making an assumption based on "last image" vs "new image" and artificially limiting a rebuild that should be valid to go ahead. I can imagine scenarios where a user is trying to do a rebuild that their cloud admin says should be perfectly valid on their hypervisor, but it's getting rejected because old image traits != new image traits. It seems like unnecessary user and admin pain. It doesn't seem correct to reject the request if the current compute host can fulfill it, and if I understood correctly, we have placement APIs we can call from the conductor to verify the image traits requested for the rebuild can be fulfilled. Is there a reason not to do that? -melanie From mriedemos at gmail.com Thu May 3 00:47:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 2 May 2018 19:47:01 -0500 Subject: [openstack-dev] [nova][ironic] ironic_host_manager and baremetal scheduler options removal In-Reply-To: <60821a79-42a4-dfa4-cc65-2fbc068f8b35@gmail.com> References: <96f7142b-8838-93f8-d8a7-46ff7010c394@gmail.com> <60821a79-42a4-dfa4-cc65-2fbc068f8b35@gmail.com> Message-ID: <356c7795-b31e-4de6-47c6-61949f8a3e95@gmail.com> On 5/2/2018 12:39 PM, Matt Riedemann wrote: > FWIW, I think we can also backport the data migration CLI to stable > branches once we have it available so you can do your migration in let's > say Queens before g FYI, here is the start on the data migration CLI: https://review.openstack.org/#/c/565886/ -- Thanks, Matt From madhuri.kumari at intel.com Thu May 3 02:37:49 2018 From: madhuri.kumari at intel.com (Kumari, Madhuri) Date: Thu, 3 May 2018 02:37:49 +0000 Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team In-Reply-To: References: Message-ID: <0512CBBECA36994BAA14C7FEDE986CA604296126@BGSMSX102.gar.corp.intel.com> Welcome to the team, Ji Wei ☺ Regards, Madhuri From: Hongbin Lu [mailto:hongbin034 at gmail.com] Sent: Thursday, May 3, 2018 2:10 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team Hi all, I would like to announce the following change on the Zun core reviewers team: + Ji Wei Ji Wei has been working on Zun for a while. His contributions include blueprints, bug fixes, code reviews, etc. In particular, I would like to highlight that he has implemented two blueprints [1][2], both of which are not easy to implement. 
Based on his high-quality work in the past, I believe he will serve the core reviewer role very well.

This proposal had been voted within the existing core team and was unanimously approved. Welcome to the core team Ji Wei.

[1] https://blueprints.launchpad.net/zun/+spec/glance-support-tag
[2] https://blueprints.launchpad.net/zun/+spec/zun-rebuild-on-local-node

Best regards,
Hongbin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aschultz at redhat.com Thu May 3 02:51:27 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Wed, 2 May 2018 20:51:27 -0600
Subject: Re: [openstack-dev] [tripleo] validating overcloud config changes on a redeploy
In-Reply-To: <1524844197.3706.24.camel@redhat.com>
References: <1524844197.3706.24.camel@redhat.com>
Message-ID:

On Fri, Apr 27, 2018 at 9:49 AM, Ade Lee wrote:
> Hi,
>
> Recently I started looking at how we implement password changes in an
> existing deployment, and found that there were issues. This made me
> wonder whether we needed a test job to confirm that password changes
> (and other config changes) are in fact executed properly.
>
> As far as I understand it, the way to do password changes is to -
> 1) Create a yaml file containing the parameters to be changed and
> their new values
> 2) call openstack overcloud deploy and append -e new_params.yaml
>
> Note that the above steps can really describe the testing of setting
> any config changes (not just passwords).
>
> Of course, if we do change passwords, we'll want to validate that the
> config files have changed, the keystone/db users have been modified, the
> mistral plan has been updated, services are still running etc.
>
> After talking with many folks, it seems there is no clear consensus
> where code to do the above tasks should live. Should it be in tripleo-
> upgrades, or in tripleo-validations or in a separate repo?
>
> Is there anyone already doing something similar?
>
> If we end up creating a role to do this, ideally it should be
> deployment tool agnostic - usable by both infrared or quickstart or
> others.
>
> What's the best way to do this?
>

So in my mind, this falls under a testing framework validation where we want to perform a set of $deployment_actions and ensure that $specific_things have been completed. For the most part we don't have anything like that in the upstream tripleo project for actions that aren't covered by tempest tests. Even tempest tests only ensure that we configured the services so they work, not that a state transition from A to B happened. Honestly I don't think tripleo-upgrades or tripleo-validations is the appropriate place for this type of check. tripleo-validations might make sense if we expected an end user to do this after performing a specific action, but I don't think there are enough of these types of actions for that to be warranted. It's more likely that we would want to come up with a deployment test suite that could be run offline where a scenario like 'change all the passwords' would be executed and verified that it functioned as expected (all the passwords were changed). Something like this might work in a periodic upstream job but it's more like a full validation suite that would most likely need to be run offline.
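For reference, the change itself is as small as the steps Ade lists above; a minimal illustration (the parameter name here is only an example, and the -e list must still include all of the environment files used for the original deployment):

    # new_params.yaml - only the parameters being changed
    parameter_defaults:
      RabbitPassword: <new-password>

    openstack overcloud deploy --templates \
      -e <original environment files> \
      -e new_params.yaml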
Thanks, -Alex > Thanks, > Ade > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kevinzs2048 at gmail.com Thu May 3 03:20:35 2018 From: kevinzs2048 at gmail.com (Shuai Zhao) Date: Thu, 3 May 2018 11:20:35 +0800 Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team In-Reply-To: References: Message-ID: +1 for Ji Wei :-) On Thu, May 3, 2018 at 4:40 AM, Hongbin Lu wrote: > Hi all, > > I would like to announce the following change on the Zun core reviewers > team: > > + Ji Wei > > Ji Wei has been working on Zun for a while. His contributions include > blueprints, bug fixes, code reviews, etc. In particular, I would like to > highlight that he has implemented two blueprints [1][2], both of which are > not easy to implement. Based on his high-quality work in the past, I > believe he will serve the core reviewer role very well. > > This proposal had been voted within the existing core team and was > unanimously approved. Welcome to the core team Ji Wei. > > [1] https://blueprints.launchpad.net/zun/+spec/glance-support-tag > [2] https://blueprints.launchpad.net/zun/+spec/zun-rebuild-on-local-node > > Best regards, > Hongbin > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From duonghq at vn.fujitsu.com Thu May 3 07:33:31 2018 From: duonghq at vn.fujitsu.com (duonghq at vn.fujitsu.com) Date: Thu, 3 May 2018 07:33:31 +0000 Subject: [openstack-dev] [Designate] Plan for OSM Message-ID: <5fef90b8b7ff4dae8c7b280fade5cfb2@G07SGEXCMSGPS03.g07.fujitsu.local> Hi Ben, >>On 04/25/2018 11:31 PM, daidv at vn.fujitsu.com wrote: >> Hi forks, >> >> We tested and completed our process with OVO migration in Queens cycle. >> Now, we can continue with OSM implementation for Designate. >> Actually, we have pushed some patches related to OSM[1] and it's ready to review. > Out of curiosity, what does OSM stand for? Based on the patches it > seems related to rolling upgrades, but a quick glance at them doesn't > make it obvious to me what's going on. Thanks. OSM stands for Online Schema Migration, which means that we can migrate database schema without downtime for service. > -Ben Best regards, Ha Quang Duong (Mr.) PODC - Fujitsu Vietnam Ltd. From duonghq at vn.fujitsu.com Thu May 3 07:51:02 2018 From: duonghq at vn.fujitsu.com (duonghq at vn.fujitsu.com) Date: Thu, 3 May 2018 07:51:02 +0000 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> +1 Sorry for my late reply, thank you for your contribution in Kolla. Regards, Duong From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] Sent: Thursday, April 26, 2018 10:31 PM To: OpenStack Development Mailing List Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member Kolla core reviewer team, It is my pleasure to nominate ​ mgoddard for kolla core team. 
​ Mark has been working both upstream and downstream with kolla and kolla-ansible for over two years, building bare metal compute clouds with ironic for HPC. He's been involved with OpenStack since 2014. He started the kayobe deployment project which complements kolla-ansible. He is also the most active non-core contributor for last 90 days[1] ​​ Consider this nomination a +1 vote from me A +1 vote indicates you are in favor of ​ mgoddard as a candidate, a -1 is a ​​ veto. Voting is open for 7 days until ​May ​4​ th, or a unanimous response is reached or a veto vote occurs. [1] http://stackalytics.com/report/contribution/kolla-group/90 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.mccrae at gmail.com Thu May 3 08:13:59 2018 From: andy.mccrae at gmail.com (Andy McCrae) Date: Thu, 3 May 2018 09:13:59 +0100 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: > > > Now that we are all part-time, I'd like to toy with a new idea, > proposed in the past by Jesse, to rotate the duties with people who > are involved in OSA, or want to get involved more (it's not restricted > to core developers!). > > One of the first duties to be handled this way could be the weekly meeting. > > Handling the meeting is not that hard, it just takes time to prepare, > and to facilitate. > > I think everyone should step into this, not only core developers, but > core developers are now expected to run the meetings when their turn > comes. > > > What are the actions to take: > - Prepare the triage. Generate the list of the bugs for the week. > - Ping people with the triage links around 1h before the weekly > meeting. It would give them time to get prepared for meeting, > eventually updating the agenda, and read the current bugs > - Ping people at the beginning of the meeting > - Chair the meeting: The structure of the meeting is now always > the same, a recap of the week, and handling the bug triage. > - After the meeting we would ask who is volunteer to run next > meeting, and if none, a meeting char will be selected amongst core > contributors at random. > > Thank you for your understanding. > > Jean-Philippe Evrard (evrardjp) > I will gladly pick up my well-used meeting chair hat. It's a great idea, I think it would help make our meetings more productive. Once you've been chair you have a different view of how the meetings work. Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike.carden at gmail.com Thu May 3 08:39:18 2018 From: mike.carden at gmail.com (Mike Carden) Date: Thu, 3 May 2018 18:39:18 +1000 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: Hi OSA peeps. I apologise in advance for what may seem like an impertinent question. And for those playing along at home, I was just getting the hang of contributing to OSA when last year my employer decided that some of us were no longer needed, and OpenStack lost quite a few full time employed contributors. So my question is... what is the health status of OSA? Is there still a core of committed contributors? I only check in on OSA code reviews rarely now, but activity seems a lot less than it was. Before you answer, imagine that I now work for a moderately large, potential consumer of OSA. Is OSA the future, or have other deployment projects made it less relevant? 
-- MC On Thu, May 3, 2018 at 6:13 PM, Andy McCrae wrote: > >> Now that we are all part-time, I'd like to toy with a new idea, >> proposed in the past by Jesse, to rotate the duties with people who >> are involved in OSA, or want to get involved more (it's not restricted >> to core developers!). >> >> One of the first duties to be handled this way could be the weekly >> meeting. >> >> Handling the meeting is not that hard, it just takes time to prepare, >> and to facilitate. >> >> I think everyone should step into this, not only core developers, but >> core developers are now expected to run the meetings when their turn >> comes. >> >> >> What are the actions to take: >> - Prepare the triage. Generate the list of the bugs for the week. >> - Ping people with the triage links around 1h before the weekly >> meeting. It would give them time to get prepared for meeting, >> eventually updating the agenda, and read the current bugs >> - Ping people at the beginning of the meeting >> - Chair the meeting: The structure of the meeting is now always >> the same, a recap of the week, and handling the bug triage. >> - After the meeting we would ask who is volunteer to run next >> meeting, and if none, a meeting char will be selected amongst core >> contributors at random. >> >> Thank you for your understanding. >> >> Jean-Philippe Evrard (evrardjp) >> > > I will gladly pick up my well-used meeting chair hat. > It's a great idea, I think it would help make our meetings more productive. > Once you've been chair you have a different view of how the meetings work. > > Andy > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihaela.balas at orange.com Thu May 3 08:51:50 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Thu, 3 May 2018 08:51:50 +0000 Subject: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout In-Reply-To: References: <11302_1524654452_5AE06174_11302_207_1_2be855e5b8174bf397106775823399bf@orange.com> Message-ID: <27191_1525337511_5AEACDA7_27191_411_1_a53fc88b781d46baad9b08e9bb30b489@orange.com> Hi Michael, I build a new amphora image with the latest patches and I reproduced two different bugs that I see in my environment. One of them is similar to the one initially described in this thread. I opened two stories as you advised: https://storyboard.openstack.org/#!/story/2001960 https://storyboard.openstack.org/#!/story/2001955 Meanwhile, can you provide some recommendation of values for the following parameters (maybe in relation with number of workers, cores, computes etc)? 
[health_manager] failover_threads status_update_threads [haproxy_amphora] build_rate_limit build_active_retries [controller_worker] workers amp_active_retries amp_active_wait_sec [task_flow] max_workers Thank you for your help, Mihaela Balas -----Original Message----- From: Michael Johnson [mailto:johnsomor at gmail.com] Sent: Friday, April 27, 2018 8:24 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout Hi Mihaela, I am sorry to hear you are having trouble with the queens release of Octavia. It is true that a lot of work has gone into the failover capability, specifically working around a python threading issue and making it more resistant to certain neutron failure situations (missing ports, etc.). I know of one open bug against the failover flows, https://storyboard.openstack.org/#!/story/2001481, "failover breaks in Active/Standby mode if both amphroae are down". Unfortunately the log snippet above does not give me enough information about the problem to help with this issue. From the snippet it looks like the failovers were initiated, but the controllers are unable to reach the amphora-agent on the replacement amphora. It will continue those retry attempts, but eventually will fail the amphora into ERROR if it doesn't succeed. One thought I have is if you created you amphora image in the last two weeks, you may have built an amphora using the master branch of octavia, which had a bug that impacted active/standby images. This was introduced working around the new pip 10 issues. That patch has been fixed: https://review.openstack.org/#/c/564371/ If neither of these situations match your environment, please open a story (https://storyboard.openstack.org/#!/dashboard/stories) for us and include the health manager logs from the point you delete the amphora up until it starts these connection attempts. We will dig through those logs to see what the issue might be. Michael (johnsom) On Wed, Apr 25, 2018 at 4:07 AM, wrote: > Hello, > > > > I am testing Octavia Queens and I see that the failover behavior is > very much different than the one in Ocata (this is the version we are > currently running in production). > > One example of such behavior is: > > > > I create 4 load balancers and after the creation is successful, I shut > off all the 8 amphoras. Sometimes, even the health-manager agent does > not reach the amphoras, they are not deleted and re-created. The logs > look like shown below even when the heartbeat timeout is long passed. > Sometimes the amphoras are deleted and re-created. Sometimes, they > are partially re-created – part of them remain in shut off. > > Heartbeat_timeout is set to 60 seconds. > > > > > > > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-339b54a7-ab0c-422a-832f-a444cd710497 - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. 
Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries > exceeded with url: > /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octav > iasrv2.orange.com.pem (Caused by > NewConnectionError(' object at 0x7f559862c710>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries > exceeded with url: > /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8 > -9d73-2397e281712c/haproxy (Caused by > NewConnectionError(' object at 0x7f8a0de95e10>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-10febb10-85ea-4082-9df7-daa48894b004 - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries > exceeded with url: > /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b > -b93f-1da3957a5b71/haproxy (Caused by > NewConnectionError(' object at 0x7f5598491c90>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-339b54a7-ab0c-422a-832f-a444cd710497 - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries > exceeded with url: > /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octav > iasrv2.orange.com.pem (Caused by > NewConnectionError(' object at 0x7f5598520790>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries > exceeded with url: > /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8 > -9d73-2397e281712c/haproxy (Caused by > NewConnectionError(' object at 0x7f8a0de953d0>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 > WARNING octavia.amphorae.drivers.haproxy.rest_api_driver > [req-10febb10-85ea-4082-9df7-daa48894b004 - > a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. 
Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries > exceeded with url: > /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b > -b93f-1da3957a5b71/haproxy (Caused by > NewConnectionError(' object at 0x7f55984e2050>: Failed to establish a new connection: > [Errno 113] No route to host',)) > > > > Thank you, > > Mihaela Balas > > ______________________________________________________________________ > ___________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations > confidentielles ou privilegiees et ne doivent donc pas etre diffuses, > exploites ou copies sans autorisation. Si vous avez recu ce message > par erreur, veuillez le signaler a l'expediteur et le detruire ainsi > que les pieces jointes. Les messages electroniques etant susceptibles > d'alteration, Orange decline toute responsabilite si ce message a ete > altere, deforme ou falsifie. Merci. > > This message and its attachments may contain confidential or > privileged information that may be protected by law; they should not > be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and > delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have > been modified, changed or falsified. > Thank you. > > > ______________________________________________________________________ > ____ OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. From m.andre at redhat.com Thu May 3 08:58:03 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Thu, 3 May 2018 10:58:03 +0200 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> References: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> Message-ID: +1 On Thu, May 3, 2018 at 9:51 AM, duonghq at vn.fujitsu.com wrote: > +1 > > > > Sorry for my late reply, thank you for your contribution in Kolla. 
> > > > Regards, > > Duong > > > > From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] > Sent: Thursday, April 26, 2018 10:31 PM > To: OpenStack Development Mailing List > Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard > (mgoddard) as kolla core member > > > > Kolla core reviewer team, > > > > It is my pleasure to nominate > > mgoddard for kolla core team. > > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project which complements kolla-ansible. He is > also the most active non-core contributor for last 90 days[1] > > Consider this nomination a +1 vote from me > > A +1 vote indicates you are in favor of > > mgoddard as a candidate, a -1 > is a > > veto. Voting is open for 7 days until > > May > > > > 4 > > th, or a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Jesse.Pretorius at rackspace.co.uk Thu May 3 09:12:50 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Thu, 3 May 2018 09:12:50 +0000 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: Hi Mike – please see my responses in-line. From: Mike Carden Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, May 3, 2018 at 9:42 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling * So my question is... what is the health status of OSA? Is there still a core of committed contributors? I only check in on OSA code reviews rarely now, but activity seems a lot less than it was. I’m not sure how to answer this, really. OSA is used by a variety of organizations, and contributed to by a number of organizations. The health of the project depends on the contributions of those who consume it. A quick review of Stackalytics shows that OSA has multiple contributing stakeholders, which means that the risk of project failure is managed. See http://stackalytics.com/?metric=commits&module=openstack-ansible / http://stackalytics.com/?metric=person-day&module=openstack-ansible for Rocky stats, and you can look back in history using the ‘Release’ drop-down box on the left. * Before you answer, imagine that I now work for a moderately large, potential consumer of OSA. OK, you’d be in good company if you check the contributing companies in Stackalytics. * Is OSA the future, or have other deployment projects made it less relevant? OSA is one of many deployment projects. Each has their own style. Pick the one that suits your needs best. ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. 
Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pratapagoutham at gmail.com Thu May 3 09:47:59 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Thu, 3 May 2018 15:17:59 +0530 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> References: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> Message-ID: +1 for `mgoddard` On Thu, May 3, 2018 at 1:21 PM, duonghq at vn.fujitsu.com < duonghq at vn.fujitsu.com> wrote: > +1 > > > > Sorry for my late reply, thank you for your contribution in Kolla. > > > > Regards, > > Duong > > > > *From:* Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] > *Sent:* Thursday, April 26, 2018 10:31 PM > *To:* OpenStack Development Mailing List openstack.org> > *Subject:* [openstack-dev] [kolla][vote]Core nomination for Mark Goddard > (mgoddard) as kolla core member > > > > Kolla core reviewer team, > > It is my pleasure to nominate > > ​ > > mgoddard for kolla core team. > > ​ > > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project which complements kolla-ansible. He is > also the most active non-core contributor for last 90 days[1] > > ​​ > > Consider this nomination a +1 vote from me > > A +1 vote indicates you are in favor of > > ​ > > mgoddard as a candidate, a -1 > is a > > ​​ > > veto. Voting is open for 7 days until > > ​May > > > > ​4​ > > th, or a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu May 3 09:57:40 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 3 May 2018 11:57:40 +0200 Subject: [openstack-dev] Is there any way to recheck only one job? In-Reply-To: References: Message-ID: <7E0EDC94-0089-49B2-87CD-B9AB23139ED9@redhat.com> Thanks for help. > Wiadomość napisana przez Jens Harbott w dniu 30.04.2018, o godz. 10:41: > > 2018-04-30 7:12 GMT+00:00 Slawomir Kaplonski : >> Hi, >> >> I wonder if there is any way to recheck only one type of job instead of rechecking everything. >> For example sometimes I have to debug some random failure in specific job type, like „neutron-fullstack” and I want to collect some additional data or test something. 
So in such case I push some „Do not merge” patch and waits for job result - but I really don’t care about e.g. pep8 or UT results so would be good is I could run (recheck) only job which I want. That could safe some resources for other jobs and speed up my tests a little as I could be able to recheck only my job faster :) >> >> Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be consider as a new feature to add? What do You think about it? > > This is intentionally not implemented as it could be used to trick > patches leading to unstable behaviour into passing too easily, hiding > possible issues. > > As an alternative, you could include a change to .zuul.yaml into your > test patch, removing all jobs except the one you are interested in. > This would still run the jobs defined in project-config, but may be > good enough for your scenario. I did exactly that currently and it’s exactly what I expected. Thanks :) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski skaplons at redhat.com From sean.mcginnis at gmx.com Thu May 3 12:24:24 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 3 May 2018 07:24:24 -0500 Subject: [openstack-dev] [release] Release countdown for week R-16, May 7-11 Message-ID: <20180503122424.GA32414@smcginnis-mbp.local> Just what you've been waiting for, our regular release countdown email. Development Focus ----------------- With Rocky-1 passed, teams should be focusing on feature development and release goals. The Forum [1] schedule [2] is set and hopefully teams are preparing for some good discussions in Vancouver. This is a great opportunity for getting feedback on existing and planned features and bringing that feedback to the teams. [1] https://wiki.openstack.org/wiki/Forum [2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Forum General Information ------------------- We have cycle-with-intermediary projects without a release yet. Please take a look and see if these are ready to do a release for this cycle yet. It's best to "release early, release often". automaton blazar-nova castellan ceilometermiddleware cliff debtcollector glance-store heat-translator ironic-lib kuryr ldappool oslo.context ovsdbapp pycadf python-aodhclient python-barbicanclient python-blazarclient python-brick-cinderclient-ext python-cinderclient python-cloudkittyclient python-congressclient python-cyborgclient python-designateclient python-ironic-inspector-client python-karborclient python-magnumclient python-masakariclient python-muranoclient python-octaviaclient python-pankoclient python-qinlingclient python-searchlightclient python-senlinclient python-solumclient python-swiftclient python-tricircleclient python-vitrageclient python-zaqarclient requestsexceptions stevedore sushy taskflow tosca-parser Some of these have significant changes merged, while some may just have things like requirements updates or zuul job changes. Please take a look at what has merged since the last release and see if it would be a good time to do another release. Also make sure to do regular stable releases, assuming there are bugfixes ready to be made available. 
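If you are unsure whether a repository has release-worthy changes, one quick way to list what has merged since the last tag (illustrative, run from an up-to-date checkout):

    git fetch origin
    git log --oneline "$(git describe --tags --abbrev=0 origin/master)"..origin/master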
Upcoming Deadlines & Dates -------------------------- Forum at OpenStack Summit in Vancouver: May 21-24 Rocky-2 Milestone: June 7 -- Sean McGinnis (smcginnis) From mjturek at linux.vnet.ibm.com Thu May 3 12:43:31 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Thu, 3 May 2018 08:43:31 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: <4c165699-6602-0528-200a-8a69481b39c5@redhat.com> References: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> <6667f21d-81f4-1f61-a365-43aea343c0e9@redhat.com> <4c165699-6602-0528-200a-8a69481b39c5@redhat.com> Message-ID: Thanks Dmitry! We'll be meeting on Dmitry's bluejeans line very soon. Hope to see everyone there! -Mike On 4/30/18 12:00 PM, Dmitry Tantsur wrote: > I've created a bluejeans channel for this meeting: > https://bluejeans.com/309964257. I may be late for it, but I've set it > up to be usable even without me. > > On 04/30/2018 02:39 PM, Michael Turek wrote: >> Just tried this and seems like Firefox does still require a browser >> plugin. >> >> Julia, could we use your bluejeans line again? >> >> Thanks! >> Mike Turek >> >> >> On 4/30/18 7:33 AM, Dmitry Tantsur wrote: >>> Hi, >>> >>> On 04/29/2018 10:17 PM, Michael Turek wrote: >>>> Awesome! If everyone doesn't mind the short notice, we'll have it >>>> again this Thursday @ 1:00 PM to 3:00 PM UTC. >>> >>> ++ >>> >>>> >>>> I can provide video conferencing through hangouts here >>>> https://goo.gl/xSKBS4 >>>> Let's give that a shot this time! >>> >>> Note that the last time I checked Hangouts video messaging required >>> a proprietary browser plugin (and hence did not work in Firefox). >>> Using it may exclude people not accepting proprietary software >>> and/or avoiding using Chromium. >>> >>>> >>>> We can adjust times, tooling, and regular agenda over the next >>>> couple meetings and see where we settle. If anyone has any >>>> questions or suggestions, don't hesitate to reach out to me! >>>> >>>> Thanks, >>>> Mike Turek >>>> >>>> >>>> On 4/25/18 12:11 PM, Julia Kreger wrote: >>>>> On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek >>>>> wrote: >>>>> >>>>>> What does everyone think about having Bug Day the first Thursday >>>>>> of every >>>>>> month? >>>>> All for it! 
>>>>> >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From m.andre at redhat.com Thu May 3 13:01:59 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Thu, 3 May 2018 15:01:59 +0200 Subject: [openstack-dev] Is there any way to recheck only one job? In-Reply-To: References: Message-ID: On Mon, Apr 30, 2018 at 10:41 AM, Jens Harbott wrote: > 2018-04-30 7:12 GMT+00:00 Slawomir Kaplonski : >> Hi, >> >> I wonder if there is any way to recheck only one type of job instead of rechecking everything. >> For example sometimes I have to debug some random failure in specific job type, like „neutron-fullstack” and I want to collect some additional data or test something. So in such case I push some „Do not merge” patch and waits for job result - but I really don’t care about e.g. pep8 or UT results so would be good is I could run (recheck) only job which I want. That could safe some resources for other jobs and speed up my tests a little as I could be able to recheck only my job faster :) >> >> Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be consider as a new feature to add? What do You think about it? > > This is intentionally not implemented as it could be used to trick > patches leading to unstable behaviour into passing too easily, hiding > possible issues. Perhaps for these type of patches aimed at gathering data in CI, we could make it easier for developers to selectively trigger jobs while still retaining the "all voting jobs must pass in the same run" policy in place. I'm thinking maybe a specially formatted line in the commit message could do the trick: Trigger-Job: neutron-fullstack Even better if we can automatically put a Workflow -1 on the patches that contains a job triggering marker to prevent them from accidentally merging, and indicate to reviewers they can skip these patches. It's not uncommon to see such DNM patches, so I imagine we can save quite a lot of CI resource by implementing a system like that. 
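A do-not-merge patch using such a marker might then look like this (the Trigger-Job line is only the syntax proposed above; nothing parses it today):

    Do not merge: debug neutron-fullstack failure

    Collect additional logs for the random fullstack failure.

    Trigger-Job: neutron-fullstack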
And devs will be happier too because it can also be tricky at times to find what triggers a given job. Martin From aschultz at redhat.com Thu May 3 13:51:21 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 3 May 2018 07:51:21 -0600 Subject: [openstack-dev] [tripleo] [heat-templates] Deprecated environment files In-Reply-To: References: Message-ID: On Thu, Apr 26, 2018 at 6:08 AM, Waleed Musa wrote: > Hi guys, > > > I'm wondering, what is the plan of having these environments/*.yaml and > enviroments/services-baremetal/*.yaml. > > It seems that it's deprecated files, Please advice here. > The services-baremetal were to allow for an end user to continue to use the service on baremetal during the deprecation process. Additionally it's when we switched over to containers by default. For new services, I would recommend not creating the services-baremetal/*.yaml file. If you have to update an existing service, please also update the baremetal equivalent at least for this cycle. We can probably start removing services-baremetal/* in Stein. Thanks, -Alex > > Regards > > Waleed Mousa > > SW Engineer at Mellanox > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From skaplons at redhat.com Thu May 3 13:54:38 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Thu, 3 May 2018 15:54:38 +0200 Subject: [openstack-dev] Is there any way to recheck only one job? In-Reply-To: References: Message-ID: <72FFD1D9-6B29-4C1A-82CB-B7C3468B39F6@redhat.com> Hi, > Wiadomość napisana przez Martin André w dniu 03.05.2018, o godz. 15:01: > > On Mon, Apr 30, 2018 at 10:41 AM, Jens Harbott wrote: >> 2018-04-30 7:12 GMT+00:00 Slawomir Kaplonski : >>> Hi, >>> >>> I wonder if there is any way to recheck only one type of job instead of rechecking everything. >>> For example sometimes I have to debug some random failure in specific job type, like „neutron-fullstack” and I want to collect some additional data or test something. So in such case I push some „Do not merge” patch and waits for job result - but I really don’t care about e.g. pep8 or UT results so would be good is I could run (recheck) only job which I want. That could safe some resources for other jobs and speed up my tests a little as I could be able to recheck only my job faster :) >>> >>> Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be consider as a new feature to add? What do You think about it? >> >> This is intentionally not implemented as it could be used to trick >> patches leading to unstable behaviour into passing too easily, hiding >> possible issues. > > Perhaps for these type of patches aimed at gathering data in CI, we > could make it easier for developers to selectively trigger jobs while > still retaining the "all voting jobs must pass in the same run" policy > in place. > > I'm thinking maybe a specially formatted line in the commit message > could do the trick: > > Trigger-Job: neutron-fullstack Yes, IMO it would be great to have something like that available :) > > Even better if we can automatically put a Workflow -1 on the patches > that contains a job triggering marker to prevent them from > accidentally merging, and indicate to reviewers they can skip these > patches. 
> It's not uncommon to see such DNM patches, so I imagine we can save
> quite a lot of CI resource by implementing a system like that. And
> devs will be happier too because it can also be tricky at times to
> find what triggers a given job.

That was my initial thought when I wrote the email about it :) The solution proposed by Jens is (almost) fine for me as it allows me to skip many tests, but there is a bunch of jobs defined in zuul directly (like openstack-tox-py27 or tempest-full) which are still running for my DNM patch.

>
> Martin
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

—
Best regards
Slawek Kaplonski
skaplons at redhat.com

From johfulto at redhat.com Thu May 3 14:29:04 2018
From: johfulto at redhat.com (John Fulton)
Date: Thu, 3 May 2018 10:29:04 -0400
Subject: [openstack-dev] [TripleO] container-to-container-upgrades CI job and tripleo-common versions
Message-ID:

We hit a bug [1] in CI job container-to-container-upgrades because a workflow that was needed only for Pike and Queens was removed [2] as clean up for the migration to external_deploy_tasks.

As we need to support an n undercloud deploying an n-1 overcloud and then upgrading it to an n overcloud, the CI job deploys with Queens THT and master tripleo-common. I take this to be by design as per this support requirement.

An implication of this is that we need to keep tripleo-common backwards compatible for the n-1 release and thus we couldn't delete this workflow until Stein.

An alternative is to require that tripleo-common be of the same version as tripleo-heat-templates.

Recommendations?
> > John > > PS: for the sake of getting CI I think we should restore the workflow > for now [3] > > [1] https://bugs.launchpad.net/tripleo/+bug/1768116 > [2] https://review.openstack.org/#/c/563047 > [3] https://review.openstack.org/#/c/565580 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Thu May 3 15:58:28 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 3 May 2018 09:58:28 -0600 Subject: [openstack-dev] [tripleo] Retirement of tripleo-incubator Message-ID: We haven't used tripleo-incubator in some time and it is no longer maintained. We are planning on officially retiring it ASAP[0]. We had previously said we would do it for the pike py35 goals[1] but we never got around to removing it. Efforts have begun to officially retire it. Please let us know if there are any issues. Thanks, -Alex [0] https://review.openstack.org/#/q/topic:bug/1768590+(status:open+OR+status:merged) [1] http://git.openstack.org/cgit/openstack/governance/tree/goals/pike/python35.rst#n868 From ed at leafe.com Thu May 3 16:57:32 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 3 May 2018 11:57:32 -0500 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> Message-ID: <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: > >> • We should get a common consensus before all projects start to implement it. > > This is going to be raised during the API SIG weekly meeting later this week. > API developers (at least one) from every project are strongly welcomed to participate. > I suppose it makes sense for the API SIG to be the place to discuss it, at least initially. It was indeed discussed, and we think that it would be a worthwhile experiment. But it would be a difficult, if not impossible, proposal to have adopted OpenStack-wide without some data to back it up. So what we thought would be a good starting point would be to have a group of individuals interested in GraphQL form an informal team and proceed to wrap one OpenStack API as a proof-of-concept. Monty Taylor suggested Neutron as an excellent candidate, as its API exposes things at an individual table level, requiring the client to join that information to get the answers they need. Once that is done, we could examine the results, and use them as the basis for proceeding with something more comprehensive. Does that sound like a good approach to (all of) you? -- Ed Leafe From ed at leafe.com Thu May 3 17:02:51 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 3 May 2018 12:02:51 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <03EC1155-F773-4521-8D5F-8B94CDBCFA98@leafe.com> Greetings OpenStack community, A well-attended meeting today. I'm pleased to report that a good time was had by all. The discussion centered primarily on the email to the dev list [7] from Gilles Dubreuil regarding the possibility of using GraphQL [8] as a wrapper/replacement for the OpenStack APIs/SDKs. None of us are familiar with GraphQL in more than a cursory way, so we thought it would be best to propose starting with a limited proof-of-concept test. 
This would involve a team of people interested in GraphQL to work separately to wrap a single OpenStack service, and then, based on the results, either make it something we embrace OpenStack-wide, or chalk up as another interesting experiment. mordred suggested that they use Neutron as the test case, as their API requires a lot of client-side work for many typical queries. edleafe agreed to respond on the mailing list with these thoughts. We also agreed that edleafe needs to be more mindful of his phrasing in emails. Due to the late interest of Gilles and others, we decided to resurrect our BoF session at the upcoming Vancouver Forum. If you will be there, and you have an interest in anything API- or SDK-related, please plan on joining us! As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129987.html [8] http://graphql.org/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From jaypipes at gmail.com Thu May 3 17:50:48 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 3 May 2018 13:50:48 -0400 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? 
In-Reply-To: <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> Message-ID: On 05/03/2018 12:57 PM, Ed Leafe wrote: > On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: >> >>> • We should get a common consensus before all projects start to implement it. >> >> This is going to be raised during the API SIG weekly meeting later this week. >> API developers (at least one) from every project are strongly welcomed to participate. >> I suppose it makes sense for the API SIG to be the place to discuss it, at least initially. > > It was indeed discussed, and we think that it would be a worthwhile experiment. But it would be a difficult, if not impossible, proposal to have adopted OpenStack-wide without some data to back it up. So what we thought would be a good starting point would be to have a group of individuals interested in GraphQL form an informal team and proceed to wrap one OpenStack API as a proof-of-concept. Monty Taylor suggested Neutron as an excellent candidate, as its API exposes things at an individual table level, requiring the client to join that information to get the answers they need. > > Once that is done, we could examine the results, and use them as the basis for proceeding with something more comprehensive. Does that sound like a good approach to (all of) you? Did anyone bring up the differences between control plane APIs and data APIs and the applicability of GraphQL to the latter and not the former? For example, a control plane API to reboot a server instance looks like this: POST /servers/{uuid}/action { "reboot" : { "type" : "HARD" } } how does that map to GraphQL? Via GraphQL's "mutations" [0]? That doesn't really work since the server object isn't being mutated. I mean, the state of the server will *eventually* be mutated when the reboot action starts kicking in (the above is an async operation returning a 202 Accepted). But the act of hitting POST /servers/{uuid}/action doesn't actually mutate the server's state. This is just one example of where GraphQL doesn't necessarily map well to control plane APIs that happen to be built on top of REST/HTTP [1] Bottom line for me would be what is the perceivable benefit that all of our users would receive given the (very costly) overhaul of our APIs that would likely be required. Best, -jay [0] http://graphql.org/learn/queries/#mutations [1] One could argue (and I have in the past) that POST /servers/{uuid}/action isn't a RESTful interface at all... From ed at leafe.com Thu May 3 17:55:29 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 3 May 2018 12:55:29 -0500 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> Message-ID: <5DDC91A8-72F8-450E-80D3-D78F6DAC3458@leafe.com> On May 3, 2018, at 12:50 PM, Jay Pipes wrote: > > Bottom line for me would be what is the perceivable benefit that all of our users would receive given the (very costly) overhaul of our APIs that would likely be required. That was our thinking: no one would agree to such an effort without first demonstrating some tangible results. Hence the idea for an experiment with just a single service, done by those interested in seeing it happen. 
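As for your reboot example: one way the PoC could model it is as a mutation that returns an acknowledgement of the accepted request rather than the (eventually) mutated server, mirroring the 202 semantics. Here is a rough sketch using the graphene Python library; every name below is invented for illustration, not a proposal for what a wrapper would actually expose:

import uuid

import graphene


def submit_reboot(server_id, reboot_type):
    # Stand-in for the real work: this would proxy to the existing
    # REST call (POST /servers/{uuid}/action) and return immediately,
    # just like the 202 Accepted response does today.
    return str(uuid.uuid4())


class RebootServer(graphene.Mutation):
    class Arguments:
        id = graphene.ID(required=True)
        type = graphene.String(default_value='SOFT')

    # What the caller gets back: not the mutated server, only an
    # acknowledgement that the asynchronous action was accepted.
    accepted = graphene.Boolean()
    request_id = graphene.String()

    def mutate(self, info, id, type):
        return RebootServer(accepted=True,
                            request_id=submit_reboot(id, type))


class Mutation(graphene.ObjectType):
    reboot_server = RebootServer.Field()

Whether that mapping is any less awkward than POST /servers/{uuid}/action is exactly the sort of thing the experiment should tell us.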
If GraphQL can do what they imagine it could do, then they would be able to demonstrate the benefit that you (and everyone else) would want to see. -- Ed Leafe
From gael.therond at gmail.com Thu May 3 17:55:49 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 03 May 2018 17:55:49 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> Message-ID: It seems to be a fair way to do it. I do second the Neutron API as a good candidate. I’ll be happy to give a hand. @jay I’ve already summed up my points above, but I could definitely provide better examples if needed. I’m operating and dealing with a large (really) OpenStack platform and GraphQL would have a tremendous performance impact for sure. But you’re right, proof has to be made. Le jeu. 3 mai 2018 à 18:57, Ed Leafe a écrit : > On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: > > >> • We should get a common consensus before all projects start to > implement it. > > > > This is going to be raised during the API SIG weekly meeting later this > week. > > API developers (at least one) from every project are strongly welcomed > to participate. > > I suppose it makes sense for the API SIG to be the place to discuss it, > at least initially. > > It was indeed discussed, and we think that it would be a worthwhile > experiment. But it would be a difficult, if not impossible, proposal to > have adopted OpenStack-wide without some data to back it up. So what we > thought would be a good starting point would be to have a group of > individuals interested in GraphQL form an informal team and proceed to wrap > one OpenStack API as a proof-of-concept. Monty Taylor suggested Neutron as > an excellent candidate, as its API exposes things at an individual table > level, requiring the client to join that information to get the answers > they need. > > Once that is done, we could examine the results, and use them as the basis > for proceeding with something more comprehensive. Does that sound like a > good approach to (all of) you? > > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From gael.therond at gmail.com Thu May 3 17:56:20 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 03 May 2018 17:56:20 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> Message-ID: Exactly ! Le jeu. 3 mai 2018 à 19:55, Flint WALRUS a écrit : > It seems to be a fair way to do it. I do second the Neutron API as a good > candidate. > > I’ll be happy to give a hand. > > @jay I’ve already summed up my points above, but I could definitely provide > better examples if needed. > > I’m operating and dealing with a large (really) OpenStack platform and > GraphQL would have a tremendous performance impact for sure. But you’re > right, proof has to be made. > Le jeu.
3 mai 2018 à 18:57, Ed Leafe a écrit : > >> On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: >> > >> >> • We should get a common consensus before all projects start to >> implement it. >> > >> > This is going to be raised during the API SIG weekly meeting later this >> week. >> > API developers (at least one) from every project are strongly welcomed >> to participate. >> > I suppose it makes sense for the API SIG to be the place to discuss it, >> at least initially. >> >> It was indeed discussed, and we think that it would be a worthwhile >> experiment. But it would be a difficult, if not impossible, proposal to >> have adopted OpenStack-wide without some data to back it up. So what we >> thought would be a good starting point would be to have a group of >> individuals interested in GraphQL form an informal team and proceed to wrap >> one OpenStack API as a proof-of-concept. Monty Taylor suggested Neutron as >> an excellent candidate, as its API exposes things at an individual table >> level, requiring the client to join that information to get the answers >> they need. >> >> Once that is done, we could examine the results, and use them as the >> basis for proceeding with something more comprehensive. Does that sound >> like a good approach to (all of) you? >> >> -- Ed Leafe >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Thu May 3 19:34:25 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Thu, 3 May 2018 19:34:25 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> k8s does that I think by separating desired state from actual state and working to bring the two inline. the same could (maybe even should) be done to openstack. But your right, that is not a small amount of work. Even without using GraphQL, Making the api more declarative anyway, has advantages. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Thursday, May 03, 2018 10:50 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [api] REST limitations and GraghQL inception? On 05/03/2018 12:57 PM, Ed Leafe wrote: > On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: >> >>> • We should get a common consensus before all projects start to implement it. >> >> This is going to be raised during the API SIG weekly meeting later this week. >> API developers (at least one) from every project are strongly welcomed to participate. >> I suppose it makes sense for the API SIG to be the place to discuss it, at least initially. > > It was indeed discussed, and we think that it would be a worthwhile experiment. But it would be a difficult, if not impossible, proposal to have adopted OpenStack-wide without some data to back it up. So what we thought would be a good starting point would be to have a group of individuals interested in GraphQL form an informal team and proceed to wrap one OpenStack API as a proof-of-concept. 
Monty Taylor suggested Neutron as an excellent candidate, as its API exposes things at an individual table level, requiring the client to join that information to get the answers they need. > > Once that is done, we could examine the results, and use them as the basis for proceeding with something more comprehensive. Does that sound like a good approach to (all of) you? Did anyone bring up the differences between control plane APIs and data APIs and the applicability of GraphQL to the latter and not the former? For example, a control plane API to reboot a server instance looks like this: POST /servers/{uuid}/action { "reboot" : { "type" : "HARD" } } how does that map to GraphQL? Via GraphQL's "mutations" [0]? That doesn't really work since the server object isn't being mutated. I mean, the state of the server will *eventually* be mutated when the reboot action starts kicking in (the above is an async operation returning a 202 Accepted). But the act of hitting POST /servers/{uuid}/action doesn't actually mutate the server's state. This is just one example of where GraphQL doesn't necessarily map well to control plane APIs that happen to be built on top of REST/HTTP [1] Bottom line for me would be what is the perceivable benefit that all of our users would receive given the (very costly) overhaul of our APIs that would likely be required. Best, -jay [0] http://graphql.org/learn/queries/#mutations [1] One could argue (and I have in the past) that POST /servers/{uuid}/action isn't a RESTful interface at all... __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ekcs.openstack at gmail.com Thu May 3 19:49:38 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 03 May 2018 12:49:38 -0700 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications Message-ID: Question to the projects which send or consume webhook notifications (telemetry, monasca, senlin, vitrage, etc.), what are your supported/preferred authentication mechanisms? Bearer token (e.g. Keystone)? Signing? Any pointers to past discussions on the topic? My interest here is having Congress consume and send webhook notifications. I know some people are working on adding the keystone auth option to Monasca's webhook framework. If there is a project that already does it, it could be a very helpful reference. Thanks very much! From dms at danplanet.com Thu May 3 20:26:24 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 03 May 2018 13:26:24 -0700 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <195398f7-00e6-93a6-a95b-4ebc9b42f2da@gmail.com> (melanie witt's message of "Wed, 2 May 2018 16:11:18 -0700") References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> <195398f7-00e6-93a6-a95b-4ebc9b42f2da@gmail.com> Message-ID: > I'm late to this thread but I finally went through the replies and my > thought is, we should do a pre-flight check to verify with placement > whether the image traits requested are 1) supported by the compute > host the instance is residing on and 2) coincide with the > already-existing allocations. 
Instead of making an assumption based on > "last image" vs "new image" and artificially limiting a rebuild that > should be valid to go ahead. I can imagine scenarios where a user is > trying to do a rebuild that their cloud admin says should be perfectly > valid on their hypervisor, but it's getting rejected because old image > traits != new image traits. It seems like unnecessary user and admin > pain. Yeah, I think we have to do this. > It doesn't seem correct to reject the request if the current compute > host can fulfill it, and if I understood correctly, we have placement > APIs we can call from the conductor to verify the image traits > requested for the rebuild can be fulfilled. Is there a reason not to > do that? Well, it's a little itcky in that it makes a random part of conductor a bit like the scheduler in its understanding of and iteraction with placement. I don't love it, but I think it's what we have to do. Trying to do the trait math with what was used before, or conservatively rejecting the request and being potentially wrong about that is not reasonable, IMHO. --Dan From mriedemos at gmail.com Thu May 3 20:29:21 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 3 May 2018 15:29:21 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> <195398f7-00e6-93a6-a95b-4ebc9b42f2da@gmail.com> Message-ID: On 5/3/2018 3:26 PM, Dan Smith wrote: > Well, it's a little itcky in that it makes a random part of conductor a > bit like the scheduler in its understanding of and iteraction with > placement. I don't love it, but I think it's what we have to do. Trying > to do the trait math with what was used before, or conservatively > rejecting the request and being potentially wrong about that is not > reasonable, IMHO. The upside to doing the check in conductor is we have a specific code flow for rebuild in conductor and we should be able to just put a private method off to the side for this validation scenario. That's preferable to baking more rebuild logic into the scheduler. It also means we are always going to do this validation regardless of whether or not the ImagePropertiesFilter is enabled, but that (1) seems OK and (2) no one probably ever disables the ImagePropertiesFilter anyway. -- Thanks, Matt From openstack at fried.cc Thu May 3 21:40:05 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 3 May 2018 16:40:05 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> <30e8e58b-a2f0-df83-49ba-d4d7a9aeddf3@gmail.com> <195398f7-00e6-93a6-a95b-4ebc9b42f2da@gmail.com> Message-ID: >> verify with placement >> whether the image traits requested are 1) supported by the compute >> host the instance is residing on and 2) coincide with the >> already-existing allocations. Note that #2 is a subset of #1. The only potential advantage of including #1 is efficiency: We can do #1 in one API call and bail early if it fails; but if it passes, we have to do #2 anyway, which is multiple steps. 
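To illustrate (hand-wavy sketch, not real nova code; assume 'placement' is a keystoneauth1-style adapter pointed at the placement service, and both function names are made up):

def host_supports_traits(placement, rp_uuid, required):
    # Check #1: a single call. Does the instance's compute node
    # provider expose every trait the new image requires?
    resp = placement.get(
        '/resource_providers/%s/traits' % rp_uuid,
        headers={'OpenStack-API-Version': 'placement 1.6'})
    return set(required) <= set(resp.json()['traits'])

def allocations_support_traits(placement, consumer_uuid, required):
    # Check #2: multiple steps. Walk the instance's existing
    # allocations and confirm the allocated providers collectively
    # satisfy the required traits.
    resp = placement.get('/allocations/%s' % consumer_uuid)
    satisfied = set()
    for rp_uuid in resp.json()['allocations']:
        resp = placement.get(
            '/resource_providers/%s/traits' % rp_uuid,
            headers={'OpenStack-API-Version': 'placement 1.6'})
        satisfied |= set(required) & set(resp.json()['traits'])
    return satisfied == set(required)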
So would we rather save one step in the "good path" or potentially N-1 steps in the failure case? IMO the cost of the additional dev/test to implement #1 is higher than that of the potential extra API calls. (TL;DR: just implement #2.) -efried From mikal at stillhq.com Thu May 3 23:04:09 2018 From: mikal at stillhq.com (Michael Still) Date: Fri, 4 May 2018 09:04:09 +1000 Subject: [openstack-dev] [nova] A short guide to adding privsep'ed methods in Nova Message-ID: I was asked yesterday for a guide on how to write new escalated methods with oslo privsep, so I wrote up a blog post about it this morning. It might be useful to others here. http://www.madebymikal.com/how-to-make-a-privileged-call-with-oslo-privsep/ I intend to write up how to add privsep to a new project as well, but I'll do that separately later. Michael -- Did this email leave you hoping to cause me pain? Good news! Sponsor me in city2surf 2018 and I promise to suffer greatly. http://www.madebymikal.com/city2surf-2018/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri May 4 02:19:04 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 4 May 2018 12:19:04 +1000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> Message-ID: +1 for a PoC On 04/05/18 03:56, Flint WALRUS wrote: > Exactly ! > Le jeu. 3 mai 2018 à 19:55, Flint WALRUS > a écrit : > > It seems to be a fair way to do it. I do second the Neutron API as > a good candidate. > > I’ll be happy to give a hand. > > @jay I’ve already sum my points upper, but I could definitely have > better exemples if needed. > > I’m operating and dealing with a large (really) Openstack platform > and GraphQL would have tremendous performances impacts for sure. > But you’re right proof have to be made. > Le jeu. 3 mai 2018 à 18:57, Ed Leafe > a écrit : > > On May 2, 2018, at 2:40 AM, Gilles Dubreuil > > wrote: > > > >> • We should get a common consensus before all projects > start to implement it. > > > > This is going to be raised during the API SIG weekly meeting > later this week. > > API developers (at least one) from every project are > strongly welcomed to participate. > > I suppose it makes sense for the API SIG to be the place to > discuss it, at least initially. > > It was indeed discussed, and we think that it would be a > worthwhile experiment. But it would be a difficult, if not > impossible, proposal to have adopted OpenStack-wide without > some data to back it up. So what we thought would be a good > starting point would be to have a group of individuals > interested in GraphQL form an informal team and proceed to > wrap one OpenStack API as a proof-of-concept. Monty Taylor > suggested Neutron as an excellent candidate, as its API > exposes things at an individual table level, requiring the > client to join that information to get the answers they need. > > Once that is done, we could examine the results, and use them > as the basis for proceeding with something more comprehensive. > Does that sound like a good approach to (all of) you? 
> > -- Ed Leafe > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gdubreui at redhat.com Fri May 4 03:20:27 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 4 May 2018 13:20:27 +1000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> Message-ID: <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> On 04/05/18 05:34, Fox, Kevin M wrote: > k8s does that I think by separating desired state from actual state and working to bring the two inline. the same could (maybe even should) be done to openstack. But your right, that is not a small amount of work. K8s makes perfect sense to follow declarative approach. That said a mutation following control plane API action semantic could be very similar: mutation rebootServer { Server(id: ) { reboot: { type: "HARD" } } } "rebootServer" being an alias to name the request. > Even without using GraphQL, Making the api more declarative anyway, has advantages. +1 > Thanks, > Kevin > ________________________________________ > From: Jay Pipes [jaypipes at gmail.com] > Sent: Thursday, May 03, 2018 10:50 AM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [api] REST limitations and GraghQL inception? > > On 05/03/2018 12:57 PM, Ed Leafe wrote: >> On May 2, 2018, at 2:40 AM, Gilles Dubreuil wrote: >>>> • We should get a common consensus before all projects start to implement it. >>> This is going to be raised during the API SIG weekly meeting later this week. >>> API developers (at least one) from every project are strongly welcomed to participate. >>> I suppose it makes sense for the API SIG to be the place to discuss it, at least initially. >> It was indeed discussed, and we think that it would be a worthwhile experiment. But it would be a difficult, if not impossible, proposal to have adopted OpenStack-wide without some data to back it up. So what we thought would be a good starting point would be to have a group of individuals interested in GraphQL form an informal team and proceed to wrap one OpenStack API as a proof-of-concept. Monty Taylor suggested Neutron as an excellent candidate, as its API exposes things at an individual table level, requiring the client to join that information to get the answers they need. >> >> Once that is done, we could examine the results, and use them as the basis for proceeding with something more comprehensive. Does that sound like a good approach to (all of) you? > Did anyone bring up the differences between control plane APIs and data > APIs and the applicability of GraphQL to the latter and not the former? 
> > For example, a control plane API to reboot a server instance looks like > this: > > POST /servers/{uuid}/action > { > "reboot" : { > "type" : "HARD" > } > } > > how does that map to GraphQL? Via GraphQL's "mutations" [0]? That > doesn't really work since the server object isn't being mutated. I mean, > the state of the server will *eventually* be mutated when the reboot > action starts kicking in (the above is an async operation returning a > 202 Accepted). But the act of hitting POST /servers/{uuid}/action > doesn't actually mutate the server's state. > > This is just one example of where GraphQL doesn't necessarily map well > to control plane APIs that happen to be built on top of REST/HTTP [1] > > Bottom line for me would be what is the perceivable benefit that all of > our users would receive given the (very costly) overhaul of our APIs > that would likely be required. > > Best, > -jay > > [0] http://graphql.org/learn/queries/#mutations > [1] One could argue (and I have in the past) that POST > /servers/{uuid}/action isn't a RESTful interface at all... > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 From gdubreui at redhat.com Fri May 4 04:41:18 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 4 May 2018 14:41:18 +1000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> Message-ID: <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> Actually Mutations fields are only data to be displayed, if needed, by the response. The data changes comes with the parameters. So the correct mutation syntax is: mutation rebootServer {   updateServer(id: ) {     reboot(type: "HARD")   } } Also the latter example would be a "data API" equivalent using CRUD function like "updateServer" And the following example would be a "plane API" equivalent approach with an action function: mutation hardReboot {   rebootServer(id: , type: "HARD") } Sorry for the initial confusion but I think this is important because GraphQL schema helps clarify data and the operations. On 04/05/18 13:20, Gilles Dubreuil wrote: > > On 04/05/18 05:34, Fox, Kevin M wrote: >> k8s does that I think by separating desired state from actual state >> and working to bring the two inline. the same could (maybe even >> should) be done to openstack. But your right, that is not a small >> amount of work. > > K8s makes perfect sense to follow declarative approach. 
> > That said a mutation following control plane API action semantic could > be very similar: > > mutation rebootServer { >   Server(id: ) { >     reboot: { >       type: "HARD" >     } >   } > } > > > "rebootServer" being an alias to name the request. > > >> Even without using GraphQL, Making the api more declarative anyway, >> has advantages. > > +1 > >> Thanks, >> Kevin >> ________________________________________ >> From: Jay Pipes [jaypipes at gmail.com] >> Sent: Thursday, May 03, 2018 10:50 AM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [api] REST limitations and GraghQL >> inception? >> >> On 05/03/2018 12:57 PM, Ed Leafe wrote: >>> On May 2, 2018, at 2:40 AM, Gilles Dubreuil >>> wrote: >>>>> • We should get a common consensus before all projects start to >>>>> implement it. >>>> This is going to be raised during the API SIG weekly meeting later >>>> this week. >>>> API developers (at least one) from every project are strongly >>>> welcomed to participate. >>>> I suppose it makes sense for the API SIG to be the place to discuss >>>> it, at least initially. >>> It was indeed discussed, and we think that it would be a worthwhile >>> experiment. But it would be a difficult, if not impossible, proposal >>> to have adopted OpenStack-wide without some data to back it up. So >>> what we thought would be a good starting point would be to have a >>> group of individuals interested in GraphQL form an informal team and >>> proceed to wrap one OpenStack API as a proof-of-concept. Monty >>> Taylor suggested Neutron as an excellent candidate, as its API >>> exposes things at an individual table level, requiring the client to >>> join that information to get the answers they need. >>> >>> Once that is done, we could examine the results, and use them as the >>> basis for proceeding with something more comprehensive. Does that >>> sound like a good approach to (all of) you? >> Did anyone bring up the differences between control plane APIs and data >> APIs and the applicability of GraphQL to the latter and not the former? >> >> For example, a control plane API to reboot a server instance looks like >> this: >> >> POST /servers/{uuid}/action >> { >>       "reboot" : { >>           "type" : "HARD" >>       } >> } >> >> how does that map to GraphQL? Via GraphQL's "mutations" [0]? That >> doesn't really work since the server object isn't being mutated. I mean, >> the state of the server will *eventually* be mutated when the reboot >> action starts kicking in (the above is an async operation returning a >> 202 Accepted). But the act of hitting POST /servers/{uuid}/action >> doesn't actually mutate the server's state. >> >> This is just one example of where GraphQL doesn't necessarily map well >> to control plane APIs that happen to be built on top of REST/HTTP [1] >> >> Bottom line for me would be what is the perceivable benefit that all of >> our users would receive given the (very costly) overhaul of our APIs >> that would likely be required. >> >> Best, >> -jay >> >> [0] http://graphql.org/learn/queries/#mutations >> [1] One could argue (and I have in the past) that POST >> /servers/{uuid}/action isn't a RESTful interface at all... 
>> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219
From linghucongsong at 163.com Fri May 4 08:13:41 2018 From: linghucongsong at 163.com (linghucongsong) Date: Fri, 4 May 2018 16:13:41 +0800 (CST) Subject: [openstack-dev] Does the openstack ci vms start each time clear up enough? Message-ID: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> Hi all! Recently we met a strange problem in our CI. Look at this link: https://review.openstack.org/#/c/532097/ We can pass the CI the first time, but when the gate job starts, it always fails the second time. We have rebased several times; it always passes the CI the first time and fails the second time. This has not happened before, and it makes me wonder: do the CI jobs really start from fresh new VMs each time? -------------- next part -------------- An HTML attachment was scrubbed... URL:
From witold.bedyk at est.fujitsu.com Fri May 4 08:53:58 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 4 May 2018 08:53:58 +0000 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry][vitrage] authenticated webhook notifications Message-ID: <30d0d82237b24a0099c28c856b2ee6b4@R01UKEXCASM126.r01.fujitsu.local> Hi Eric, In Monasca use cases sending the token in the request header should be enough, I guess. I'm adding the references to the HipChat [1] and Slack APIs [2] as well as two old blueprints [3, 4]. [1] https://developer.atlassian.com/server/hipchat/about-the-hipchat-rest-api/ [2] https://api.slack.com/web#authentication [3] https://blueprints.launchpad.net/monasca/+spec/webhook-api-support [4] https://blueprints.launchpad.net/monasca/+spec/secure-notification-params Greetings Witek P.S. Adding Vitrage to the tags list. > -----Original Message----- > From: Eric K > Sent: Donnerstag, 3. Mai 2018 21:50 > To: OpenStack Development Mailing List (not for usage questions) > > Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] > authenticated webhook notifications > > Question to the projects which send or consume webhook notifications > (telemetry, monasca, senlin, vitrage, etc.), what are your > supported/preferred authentication mechanisms? Bearer token (e.g. > Keystone)? Signing? > > Any pointers to past discussions on the topic? My interest here is having > Congress consume and send webhook notifications. > > I know some people are working on adding the keystone auth option to > Monasca's webhook framework. If there is a project that already does it, it > could be a very helpful reference.
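To make the header option concrete, here is a minimal sketch of what such an authenticated notification POST could look like on the sender side (purely illustrative; the payload shape and helper name are made up, and X-Auth-Token is just the conventional Keystone header, not the actual Monasca plugin code):

import requests

def send_webhook(url, payload, keystone_token):
    # Pass a pre-fetched Keystone token in the request header, the same
    # pattern as the HipChat/Slack bearer tokens referenced above. The
    # receiving service would then validate the token against Keystone.
    resp = requests.post(url,
                         json=payload,
                         headers={'X-Auth-Token': keystone_token},
                         timeout=5)
    resp.raise_for_status()
    return resp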
From e0ne at e0ne.info Fri May 4 09:35:58 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Fri, 4 May 2018 12:35:58 +0300 Subject: [openstack-dev] [horizon][nova][cinder] Horizon support for multiattach volumes In-Reply-To: <6b8df777-e176-a028-b03a-a04319af3e40@gmail.com> References: <6b8df777-e176-a028-b03a-a04319af3e40@gmail.com> Message-ID: Matt, thank you for all your efforts! I'm going to work on Horizon blueprint soon to get it implemented in Rocky Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Apr 25, 2018 at 4:57 PM, Matt Riedemann wrote: > I wanted to advertise the need for some help in adding multiattach volume > support to Horizon. There is a blueprint tracking the changes [1]. I > started the ball rolling with [2] but there is more work to do, listed in > the work items section of the blueprint. > > [2] was I think my first real code contribution to Horizon and it wasn't > terrible (thanks for Akihiro's patience), so I'm sure others could easily > jump in here and slice this up if we have people looking for something to > hack on. > > [1] https://blueprints.launchpad.net/horizon/+spec/multi-attach-volume > [2] https://review.openstack.org/#/c/547856/ > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From therve at redhat.com Fri May 4 09:36:34 2018 From: therve at redhat.com (Thomas Herve) Date: Fri, 4 May 2018 11:36:34 +0200 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications In-Reply-To: References: Message-ID: On Thu, May 3, 2018 at 9:49 PM, Eric K wrote: > Question to the projects which send or consume webhook notifications > (telemetry, monasca, senlin, vitrage, etc.), what are your > supported/preferred authentication mechanisms? Bearer token (e.g. > Keystone)? Signing? > > Any pointers to past discussions on the topic? My interest here is having > Congress consume and send webhook notifications. > > I know some people are working on adding the keystone auth option to > Monasca's webhook framework. If there is a project that already does it, > it could be a very helpful reference. Hi, I'll add a few that you didn't mention which consume such webhooks. * Heat has been using EC2 signatures basically since forever. It creates EC2 credentials for a Keystone user, and signs URL that way. * Zaqar has signed URLs (https://developer.openstack.org/api-ref/message/#pre-signed-queue) which allows sharing queues without authentication. * Swift temp URLs (https://docs.openstack.org/swift/latest/middleware.html#tempurl) is a good mechanism to share information as well. I'd say application credentials would make those operations a bit nicer, but they are not completely there yet. Everybody not reinventing its own wheel would be nice too :). -- Thomas From sean.mcginnis at gmx.com Fri May 4 09:37:46 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 4 May 2018 04:37:46 -0500 Subject: [openstack-dev] Does the openstack ci vms start each time clear up enough? 
In-Reply-To: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> References: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> Message-ID: <20180504093745.GA38938@smcginnis-mbp.local> On Fri, May 04, 2018 at 04:13:41PM +0800, linghucongsong wrote: > > Hi all! > > Recently we met a strange problem in our CI. Look at this link: https://review.openstack.org/#/c/532097/ > > We can pass the CI the first time, but when the gate job starts, it always fails the second time. > > We have rebased several times; it always passes the CI the first time and fails the second time. > > This has not happened before, and it makes me wonder: do the CI jobs really start from fresh new VMs each time? A new VM is spun up for each test run, so I don't believe this is an issue with stale artifacts on the host. I would guess this is more likely some sort of race condition, and you just happen to be hitting it 50% of the time. You can probably keep rechecking the patch until you get lucky. But it would be dangerous to do that without knowing the source of the failure. You will just most likely keep running into this in future patches then. The failure looks very odd, with the test expecting a status to be returned but instead getting a ServerDetail object: http://logs.openstack.org/97/532097/16/gate/legacy-tricircle-dsvm-multiregion/ad547d5/job-output.txt.gz#_2018-05-04_03_57_05_399493
From sahid.ferdjaoui at redhat.com Fri May 4 10:02:54 2018 From: sahid.ferdjaoui at redhat.com (Sahid Orentino Ferdjaoui) Date: Fri, 4 May 2018 12:02:54 +0200 Subject: [openstack-dev] [neutron][nova] live-migration broken after update of OVS/DPDK Message-ID: <20180504100254.GA7072@redhat> We have an issue with live-migration if operators update OVS from a version that does not support dpdkvhostuserclient to a version that supports it, basically from OVS 2.6 to OVS 2.7 or later. The problem is that, for the libvirt driver, all existing instances that use vhu interfaces in server mode (OVS 2.6) won't be able to live-migrate anymore. That is because Neutron looks at the OVS capabilities to select which vhu mode to use [0]. Meaning that during the live-migration the port details are going to be updated, but Nova, and in particular the libvirt driver, does not update the guest's domain XML to reflect the changes. - We can fix Neutron by making it always use the same vhu mode if the ports already exist. - We can enhance Nova, and in particular the libvirt driver, to update the guest's domain XML during live-migration. The benefit is that the instances get updated for free to use vhu in client mode, which is much better, but it's probably not so trivial to implement. - We can avoid fixing it, meaning that operators will have to update their instances to use vhu client mode themselves, e.g. via snapshot/rebuild. Then live-migration will be possible. [0] https://git.openstack.org/cgit/openstack/neutron/tree/neutron/plugins/ml2/drivers/openvswitch/mech_driver/mech_openvswitch.py#n94
From mnaser at vexxhost.com Fri May 4 12:23:55 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 4 May 2018 08:23:55 -0400 Subject: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core In-Reply-To: References: Message-ID: Hi everyone, Due to active cores having no objections, I have officially added Tobias to our cores. Welcome, Tobias!
:) Thanks, Mohammed On Fri, Apr 27, 2018 at 5:13 PM, Alex Schultz wrote: > +1 > > On Fri, Apr 27, 2018 at 11:41 AM, Emilien Macchi wrote: >> +1, thanks Tobias for your contributions! >> >> On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory wrote: >>> >>> +1 >>> >>> On Fri, Apr 27, 2018, 12:15 Mohammed Naser wrote: >>>> >>>> Hi everyone, >>>> >>>> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack >>>> team as they've been putting great reviews over the past few months >>>> and they have directly contributed in resolving all the Ubuntu >>>> deployment issues and helped us bring Ubuntu support back and make the >>>> jobs voting again. >>>> >>>> Thank you, >>>> Mohammed >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jpena at redhat.com Fri May 4 13:11:07 2018 From: jpena at redhat.com (Javier Pena) Date: Fri, 4 May 2018 09:11:07 -0400 (EDT) Subject: [openstack-dev] [tripleo] Upcoming changes to DLRN might affect TripleO In-Reply-To: <994842813.21988613.1525439193811.JavaMail.zimbra@redhat.com> Message-ID: <1106109202.21989452.1525439467605.JavaMail.zimbra@redhat.com> Hi, We are working on some changes to DLRN, which will improve its flexibility and ability to build packages using different backends [1]. While we try to make these changes backwards-compatible, we have detected an issue with existing CI jobs for TripleO. Both tripleo-ci and oooq-extras change the build_rpm.sh script from DLRN, and that change will no longer be required (and actually the sed command that tries that will fail). I have proposed patches to both tripleo-ci and oooq-extras [2], which allow compatibility with the current and future DLRN. Would it be possible to get some reviews on these patches? We need that functionality in DLRN and want to avoid breaking the TripleO gate. Thanks, Javier [1] - https://softwarefactory-project.io/r/#/q/status:open+project:DLRN+branch:master+topic:modular-build-drivers [2] - https://review.openstack.org/#/q/topic:dlrn-modular-build-drivers From gael.therond at gmail.com Fri May 4 13:16:08 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 04 May 2018 13:16:08 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? 
In-Reply-To: <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> Message-ID: As clarified by Gilles and Kevin, we absolutely can get GraphQL to cover both the control plane API and the data APIs. OK, how do we start to work on that? What’s the next step? Which server library do we want to use? I personally use graphene with Python, as it is the library listed by the official GraphQL website. I don’t even know whether another library is available, actually. Are we OK to try to use Neutron as the PoC service? Le ven. 4 mai 2018 à 06:41, Gilles Dubreuil a écrit : > Actually Mutations fields are only data to be displayed, if needed, by > the response. > The data changes comes with the parameters. > So the correct mutation syntax is: > > mutation rebootServer { > updateServer(id: ) { > reboot(type: "HARD") > } > } > > Also the latter example would be a "data API" equivalent using CRUD > function like "updateServer" > > And the following example would be a "plane API" equivalent approach > with an action function: > > mutation hardReboot { > rebootServer(id: , type: "HARD") > } > > Sorry for the initial confusion but I think this is important because > GraphQL schema helps clarify data and the operations. > > > On 04/05/18 13:20, Gilles Dubreuil wrote: > > > > On 04/05/18 05:34, Fox, Kevin M wrote: > >> k8s does that I think by separating desired state from actual state > >> and working to bring the two inline. the same could (maybe even > >> should) be done to openstack. But your right, that is not a small > >> amount of work. > > > > K8s makes perfect sense to follow declarative approach. > > > > That said a mutation following control plane API action semantic could > > be very similar: > > > > mutation rebootServer { > > Server(id: ) { > > reboot: { > > type: "HARD" > > } > > } > > } > > > > > > "rebootServer" being an alias to name the request. > > > > > >> Even without using GraphQL, Making the api more declarative anyway, > >> has advantages. > > > > +1 > > > >> Thanks, > >> Kevin > >> ________________________________________ > >> From: Jay Pipes [jaypipes at gmail.com] > >> Sent: Thursday, May 03, 2018 10:50 AM > >> To: openstack-dev at lists.openstack.org > >> Subject: Re: [openstack-dev] [api] REST limitations and GraghQL > >> inception? > >> > >> On 05/03/2018 12:57 PM, Ed Leafe wrote: > >>> On May 2, 2018, at 2:40 AM, Gilles Dubreuil > >>> wrote: > >>>>> • We should get a common consensus before all projects start to > >>>>> implement it. > >>>> This is going to be raised during the API SIG weekly meeting later > >>>> this week. > >>>> API developers (at least one) from every project are strongly > >>>> welcomed to participate. > >>>> I suppose it makes sense for the API SIG to be the place to discuss > >>>> it, at least initially. > >>> It was indeed discussed, and we think that it would be a worthwhile > >>> experiment. But it would be a difficult, if not impossible, proposal > >>> to have adopted OpenStack-wide without some data to back it up. So > >>> what we thought would be a good starting point would be to have a > >>> group of individuals interested in GraphQL form an informal team and > >>> proceed to wrap one OpenStack API as a proof-of-concept.
Monty > >>> Taylor suggested Neutron as an excellent candidate, as its API > >>> exposes things at an individual table level, requiring the client to > >>> join that information to get the answers they need. > >>> > >>> Once that is done, we could examine the results, and use them as the > >>> basis for proceeding with something more comprehensive. Does that > >>> sound like a good approach to (all of) you? > >> Did anyone bring up the differences between control plane APIs and data > >> APIs and the applicability of GraphQL to the latter and not the former? > >> > >> For example, a control plane API to reboot a server instance looks like > >> this: > >> > >> POST /servers/{uuid}/action > >> { > >> "reboot" : { > >> "type" : "HARD" > >> } > >> } > >> > >> how does that map to GraphQL? Via GraphQL's "mutations" [0]? That > >> doesn't really work since the server object isn't being mutated. I mean, > >> the state of the server will *eventually* be mutated when the reboot > >> action starts kicking in (the above is an async operation returning a > >> 202 Accepted). But the act of hitting POST /servers/{uuid}/action > >> doesn't actually mutate the server's state. > >> > >> This is just one example of where GraphQL doesn't necessarily map well > >> to control plane APIs that happen to be built on top of REST/HTTP [1] > >> > >> Bottom line for me would be what is the perceivable benefit that all of > >> our users would receive given the (very costly) overhaul of our APIs > >> that would likely be required. > >> > >> Best, > >> -jay > >> > >> [0] http://graphql.org/learn/queries/#mutations > >> [1] One could argue (and I have in the past) that POST > >> /servers/{uuid}/action isn't a RESTful interface at all... > >> > >> > __________________________________________________________________________ > >> > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > __________________________________________________________________________ > >> > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri May 4 13:46:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 4 May 2018 08:46:57 -0500 Subject: [openstack-dev] [nova] reboot a rescued instance? Message-ID: <6137a9e6-4607-378b-5c17-7d1ccc49bc0e@gmail.com> For full details on this, see the IRC conversation [1]. tl;dr: the nova compute manager and xen virt driver assume that you can reboot a rescued instance [2] but the API does not allow that [3] and as far as I can tell, it never has. I can only assume that Rackspace had an out of tree change to the API to allow rebooting a rescued instance. 
I don't know why that wouldn't have been upstreamed, but the upstream API doesn't allow it. I'm also not aware of anything internal to nova that reboots an instance in a rescued state. So the question now is, should we add rescue to the possible states to reboot an instance in the API? Or just rollback this essentially dead code in the compute manager and xen virt driver? I don't know if any other virt drivers will support rebooting a rescued instance. [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-05-03.log.html#t2018-05-03T18:49:58 [2] https://review.openstack.org/#/q/topic:bug/1170237+(status:open+OR+status:merged [3] https://github.com/openstack/nova/blob/4b0d0ea9f18139d58103a520a6a4e9119e19a4de/nova/compute/vm_states.py#L69-L72 -- Thanks, Matt
From chris.friesen at windriver.com Fri May 4 15:42:47 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 4 May 2018 09:42:47 -0600 Subject: [openstack-dev] [nova] reboot a rescued instance? In-Reply-To: <6137a9e6-4607-378b-5c17-7d1ccc49bc0e@gmail.com> References: <6137a9e6-4607-378b-5c17-7d1ccc49bc0e@gmail.com> Message-ID: <5AEC7F77.4070101@windriver.com> On 05/04/2018 07:50 AM, Matt Riedemann wrote: > For full details on this, see the IRC conversation [1]. > > tl;dr: the nova compute manager and xen virt driver assume that you can reboot a > rescued instance [2] but the API does not allow that [3] and as far as I can > tell, it never has. > > I can only assume that Rackspace had an out of tree change to the API to allow > rebooting a rescued instance. I don't know why that wouldn't have been > upstreamed, but the upstream API doesn't allow it. I'm also not aware of > anything internal to nova that reboots an instance in a rescued state. > > So the question now is, should we add rescue to the possible states to reboot an > instance in the API? Or just rollback this essentially dead code in the compute > manager and xen virt driver? I don't know if any other virt drivers will support > rebooting a rescued instance.
Not sure where the more recent equivalent is, but the mitaka user guide[1] has this: "Pause, suspend, and stop operations are not allowed when an instance is running in rescue mode, as triggering these actions causes the loss of the original instance state, and makes it impossible to unrescue the instance." Would the same logic apply to reboot since it's basically stop/start? Chris [1] https://docs.openstack.org/mitaka/user-guide/cli_reboot_an_instance.html From cboylan at sapwetik.org Fri May 4 15:52:29 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 04 May 2018 08:52:29 -0700 Subject: [openstack-dev] Does the openstack ci vms start each time clear up enough? In-Reply-To: <20180504093745.GA38938@smcginnis-mbp.local> References: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> <20180504093745.GA38938@smcginnis-mbp.local> Message-ID: <1525449149.2793610.1361025224.52597443@webmail.messagingengine.com> On Fri, May 4, 2018, at 2:37 AM, Sean McGinnis wrote: > On Fri, May 04, 2018 at 04:13:41PM +0800, linghucongsong wrote: > > > > Hi all! > > > > Recently we meet a strange problem in our ci. look this link: https://review.openstack.org/#/c/532097/ > > > > we can pass the ci in the first time, but when we begin to start the gate job, it will always failed in the second time. > > > > we have rebased several times, it alway pass the ci in the first time and failed in the second time. > > > > This have not happen before and make me to guess is it really we start the ci from the new fresh vms each time? > > A new VM is spun up for each test run, so I don't believe this is an issue with > stale artifacts on the host. I would guess this is more likely some sort of > race condition, and you just happen to be hitting it 50% of the time. Additionally you can check the job logs to see while these two jobs did run against the same cloud provider they did so in different regions on hosts with completely different IP addresses. The inventory files [0][1] are where I would start if you suspect oddness of this sort. Reading them I don't see anything to indicate the nodes were reused. [0] http://logs.openstack.org/97/532097/16/check/legacy-tricircle-dsvm-multiregion/c9b3d29/zuul-info/inventory.yaml [1] http://logs.openstack.org/97/532097/16/gate/legacy-tricircle-dsvm-multiregion/ad547d5/zuul-info/inventory.yaml Clark From vladislav.belogrudov at oracle.com Fri May 4 16:58:39 2018 From: vladislav.belogrudov at oracle.com (vladislav.belogrudov at oracle.com) Date: Fri, 4 May 2018 19:58:39 +0300 Subject: [openstack-dev] [kolla-ansible] Configure OpenStack services to use Rabbit HA queues Message-ID: Hi, is there a reason we don't configure services for rabbitmq ha queues like it is suggested in [0] ? Rabbitmq itself has ha policy 'on' via one of its templates. Thanks, Vladislav Belogrudov [0] https://docs.openstack.org/ha-guide/shared-messaging.html#rabbitmq-services From gerard.damm at wipro.com Fri May 4 18:34:36 2018 From: gerard.damm at wipro.com (gerard.damm at wipro.com) Date: Fri, 4 May 2018 18:34:36 +0000 Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client Message-ID: Hi everybody, As a bit of a novice, I'm trying to use OpenStack SDK 0.13 in an OPNFV/ONAP project (Auto). I'm able to use the compute and network proxies, but have problems with the identity proxy, so I can't create projects and users. With network, I can create a network, a router, router interfaces, but can't add a gateway to a router. Also, deleting a router fails. 
With compute, I can't create flavors, and not sure if there is a "create_image" method ? Specific issues are listed below with more details. Any pointers (configuration, installation, usage, ...) and URLs to examples and documentation would be welcome. For documentation, I've been looking mostly at: https://docs.openstack.org/openstacksdk/latest/user/proxies/network.html https://docs.openstack.org/openstacksdk/latest/user/proxies/compute.html https://docs.openstack.org/openstacksdk/latest/user/proxies/identity_v3.html Thanks in advance, Gerard For all code, import statement and Connection creation is as follows (constants defined before): import openstack conn = openstack.connect(cloud=OPENSTACK_CLOUD_NAME, region_name=OPENSTACK_REGION_NAME) 1) problem adding a gateway (external network) to a router: not sure how to build a dictionary body (couldn't find examples online) tried this: network_dict_body = {'network_id': public_network.id} and this (from looking at a router printout): network_dict_body = { 'external_fixed_ips': [{'subnet_id' : public_subnet.id}], 'network_id': public_network.id } in both cases, tried this command: conn.network.add_gateway_to_router(onap_router,network_dict_body) getting the error: Exception: add_gateway_to_router() takes 2 positional arguments but 3 were given printing the router gave this: openstack.network.v2.router.Router(distributed=False, tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, created_at=2018-05-01T01:16:08Z, external_gateway_info=None, status=ACTIVE, availability_zone_hints=[], ha=False, tags=[], description=Router created for ONAP, admin_state_up=True, revision=1, flavor_id=None, id=b923fba5-5027-47b6-b679-29c331ac1aba, updated_at=2018-05-01T01:16:08Z, routes=[], name=onap_router, availability_zones=[]) 2) problem deleting routers: onap_router = conn.network.find_router(ONAP_ROUTER_NAME) conn.network.delete_router(onap_router.id) (same if conn.network.delete_router(onap_router)) getting the error: Exception: 'NoneType' object has no attribute '_body' printing the router that had been created gave this: openstack.network.v2.router.Router(description=Router created for ONAP, status=ACTIVE, routes=[], updated_at=2018-05-01T01:16:11Z, ha=False, id=b923fba5-5027-47b6-b679-29c331ac1aba, external_gateway_info=None, admin_state_up=True, availability_zone_hints=[], tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, name=onap_router, availability_zones=['nova'], tags=[], revision=3, distributed=False, flavor_id=None, created_at=2018-05-01T01:16:08Z) 3) problem reaching the identity service: (although I can reach compute and network services, and although there are users and projects in the Openstack instance: "admin" and "service" projects, "ceilometer", "nova", etc. 
(and "admin") users) print("\nList Users:") i=1 for user in conn.identity.users(): print('User',str(i),'\n',user,'\n') i+=1 getting the error: List Users: Exception: NotFoundException: 404 print("\nList Projects:") i=1 for project in conn.identity.projects(): print('Project',str(i),'\n',project,'\n') i+=1 also getting an error, but not the same as users: List Projects: Exception: 'Proxy' object has no attribute 'projects' if trying to create a project: onap_project = conn.identity.find_project(ONAP_TENANT_NAME) if onap_project != None: print('ONAP project/tenant already exists') else: print('Creating ONAP project/tenant...') onap_project = conn.identity.create_project( name = ONAP_TENANT_NAME, description = ONAP_TENANT_DESC, is_enabled = True) getting the error: Exception: 'Proxy' object has no attribute 'find_project' 4) problem creating flavors: tiny_flavor = conn.compute.find_flavor("m1.tiny") if tiny_flavor != None: print('m1.tiny Flavor already exists') else: print('Creating m1.tiny Flavor...') tiny_flavor = conn.compute.create_flavor( name = 'm1.tiny', vcpus = 1, disk = 1, ram = 512, ephemeral = 0, #swap = 0, #rxtx_factor = 1.0, is_public = True) (by the way, maybe swap and rxtx are not supposed to be set ?) getting the error: Exception: 'NoneType' object has no attribute '_body' 5) how to create images ? there is a compute proxy method: find_image() but it looks like there is no create_image() ? say you wanted to add this image: Ubuntu Server 16.04 LTS (Xenial Xerus) https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img how would you do it ? openstacksdk version 0.13.0 is installed: $ pip3 list Package Version ------------------- --------- appdirs 1.4.3 certifi 2018.4.16 chardet 3.0.4 command-not-found 0.3 decorator 4.3.0 deprecation 2.0.2 dogpile.cache 0.6.5 idna 2.6 iso8601 0.1.12 jmespath 0.9.3 jsonpatch 1.23 jsonpointer 2.0 keystoneauth1 3.5.0 language-selector 0.1 munch 2.3.1 netifaces 0.10.6 openstacksdk 0.13.0 os-service-types 1.2.0 packaging 17.1 pbr 4.0.2 pip 10.0.1 pycurl 7.43.0 pygobject 3.20.0 pyparsing 2.2.0 python-apt 1.1.0b1 python-debian 0.1.27 python-systemd 231 PyYAML 3.12 requests 2.18.4 requestsexceptions 1.4.0 setuptools 20.7.0 six 1.10.0 ssh-import-id 5.5 stevedore 1.28.0 ufw 0.35 unattended-upgrades 0.1 urllib3 1.22 wheel 0.29.0 $ pip3 check No broken requirements found. The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email. www.wipro.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mordred at inaugust.com Fri May 4 19:15:43 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 4 May 2018 14:15:43 -0500 Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client In-Reply-To: References: Message-ID: On 05/04/2018 01:34 PM, gerard.damm at wipro.com wrote: > Hi everybody, > > As a bit of a novice, I'm trying to use OpenStack SDK 0.13 in an > OPNFV/ONAP project (Auto). Yay! Welcome. 
> I'm able to use the compute and network proxies, but have problems with > the identity proxy, > > so I can't create projects and users. > > With network, I can create a network, a router, router interfaces, but > can't add a gateway to a router. Also, deleting a router fails. > > With compute, I can't create flavors, and not sure if there is a > "create_image" method ? > > Specific issues are listed below with more details. > > Any pointers (configuration, installation, usage, ...) and URLs to > examples and documentation would be welcome. > > For documentation, I've been looking mostly at: > > https://docs.openstack.org/openstacksdk/latest/user/proxies/network.html > > https://docs.openstack.org/openstacksdk/latest/user/proxies/compute.html > > https://docs.openstack.org/openstacksdk/latest/user/proxies/identity_v3.html Yes, that's all good documentation to follow. > > Thanks in advance, > > Gerard > > For all code, import statement and Connection creation is as follows > (constants defined before): > > import openstack > > conn = openstack.connect(cloud=OPENSTACK_CLOUD_NAME, > region_name=OPENSTACK_REGION_NAME) > > 1) problem adding a gateway (external network) to a router: > > not sure how to build a dictionary body (couldn't find examples online) > > tried this: > > network_dict_body = {'network_id': public_network.id} > > and this (from looking at a router printout): > > network_dict_body = { > >     'external_fixed_ips': [{'subnet_id' : public_subnet.id}], > >     'network_id': public_network.id > > } > > in both cases, tried this command: > > conn.network.add_gateway_to_router(onap_router,network_dict_body) > > getting the error: > > Exception: add_gateway_to_router() takes 2 > positional arguments but 3 were given The signature for add_gateway_to_router looks like this: def add_gateway_to_router(self, router, **body): the ** indicate that it's looking for the body as keyword arguments. Change your code to: conn.network.add_gateway_to_router(onap_router, **network_dict_body) and it should work. You could also, should you feel like it, do: conn.network.add_gateway_to_router( onap_router, external_fixed_ips=[{'subnet_id' : public_subnet.id}], network_id=public_network.id ) which is basically what the ** in **network_dict_body is doing. > printing the router gave this: > > openstack.network.v2.router.Router(distributed=False, > tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, > created_at=2018-05-01T01:16:08Z, external_gateway_info=None, > status=ACTIVE, availability_zone_hints=[], ha=False, tags=[], > description=Router created for ONAP, admin_state_up=True, revision=1, > flavor_id=None, id=b923fba5-5027-47b6-b679-29c331ac1aba, > updated_at=2018-05-01T01:16:08Z, routes=[], name=onap_router, > availability_zones=[]) > > 2) problem deleting routers: > > onap_router = conn.network.find_router(ONAP_ROUTER_NAME) > > conn.network.delete_router(onap_router.id) > > (same if conn.network.delete_router(onap_router)) > > getting the error: > > Exception: 'NoneType' object has no attribute > '_body' I'm not sure yet what's causing this - it's the same issue you're having below with flavors - I'm looking in to it. Do you have tracebacks for the exception? 
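In case it helps, here is a quick way to capture one -- plain Python, nothing SDK-specific, reusing the conn and ONAP_ROUTER_NAME names from your script:

    import traceback

    try:
        onap_router = conn.network.find_router(ONAP_ROUTER_NAME)
        # 'is not None' never invokes the resource's __eq__/__ne__ hooks,
        # unlike '!= None', so it may also sidestep the error itself
        if onap_router is not None:
            conn.network.delete_router(onap_router.id)
    except Exception:
        traceback.print_exc()  # prints the full stack, SDK frames included

If the 'is not None' variant happens to make the error go away, that would be a strong hint the problem is in our comparison code rather than in delete_router itself.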
> printing the router that had been created gave this: > > openstack.network.v2.router.Router(description=Router created for ONAP, > status=ACTIVE, routes=[], updated_at=2018-05-01T01:16:11Z, ha=False, > id=b923fba5-5027-47b6-b679-29c331ac1aba, external_gateway_info=None, > admin_state_up=True, availability_zone_hints=[], > tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, name=onap_router, > availability_zones=['nova'], tags=[], revision=3, distributed=False, > flavor_id=None, created_at=2018-05-01T01:16:08Z) > > 3) problem reaching the identity service: There is an underlying bug/deficiency in the code that's next on my list to fix, but I'm waiting to land a patch to keystoneauth first. For now, add identity_api_version=3 to your openstack.connect line and it should work. Also - sorry about that - that's a terrible experience and is definitely not the way it should/will work. > (although I can reach compute and network services, and although there > are users and projects in the Openstack instance: "admin" and "service" > projects, "ceilometer", "nova", etc. (and "admin") users) > >         print("\nList Users:") > >         i=1 > >         for user in conn.identity.users(): > >             print('User',str(i),'\n',user,'\n') > >             i+=1 > > getting the error: > > List Users: > > Exception: > NotFoundException: 404 > >         print("\nList Projects:") > >         i=1 > >         for project in conn.identity.projects(): > >             print('Project',str(i),'\n',project,'\n') > >             i+=1 > > also getting an error, but not the same as users: > > List Projects: > > Exception: 'Proxy' object has no attribute > 'projects' > > if trying to create a project: > >         onap_project = conn.identity.find_project(ONAP_TENANT_NAME) > >         if onap_project != None: > >             print('ONAP project/tenant already exists') > >         else: > >             print('Creating ONAP project/tenant...') > >             onap_project = conn.identity.create_project( > >                 name = ONAP_TENANT_NAME, > >                 description = ONAP_TENANT_DESC, > >                 is_enabled = True) > > getting the error: > > Exception: 'Proxy' object has no attribute > 'find_project' > > 4) problem creating flavors: > >         tiny_flavor = conn.compute.find_flavor("m1.tiny") > >         if tiny_flavor != None: > >             print('m1.tiny Flavor already exists') > >        else: > >             print('Creating m1.tiny Flavor...') > >             tiny_flavor = conn.compute.create_flavor( > >                 name = 'm1.tiny', > >                 vcpus = 1, > >                 disk = 1, > >                 ram = 512, > >                 ephemeral = 0, > >                 #swap = 0, > >                 #rxtx_factor = 1.0, > >                 is_public = True) > > (by the way, maybe swap and rxtx are not supposed to be set ?) > > getting the error: > > Exception: 'NoneType' object has no attribute > '_body' Same thing - not sure what's up yet , do you have a traceback? > 5) how to create images ? > > there is a compute proxy method: find_image() > > but it looks like there is no create_image() ? That is correct. Image methods are all on the image proxy (and we should actually get rid of that compute.find_image() method before we release a 1.0) ... HOWEVER ... 
> say you wanted to add this image: Ubuntu Server 16.04 LTS (Xenial Xerus) > > https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img I *highly* recommend using the shade layer for image creation, because image creation is exceptionally complicated and all the logic to deal with it is there. We haven't finished combining them, but it'll work for you in its current form. https://docs.openstack.org/openstacksdk/latest/user/connection.html#openstack.connection.Connection.create_image You'll need to download the image file locally - as of right now there is no support for directly importing from a URL on the glance side. Once you do: conn.create_image( 'Ubuntu Server 16.04 LTS (Xenial Xerus)', filename='xenial-server-cloudimg-amd64-disk1.img') should do the trick - although you might have to add a parameter or two. I'll keep looking into the other two errors and see what I can find, but hopefully this will help you for now. > how would you do it ? > > openstacksdk version 0.13.0 is installed: > > $ pip3 list > > Package             Version > > ------------------- --------- > > appdirs             1.4.3 > > certifi             2018.4.16 > > chardet             3.0.4 > > command-not-found   0.3 > > decorator           4.3.0 > > deprecation         2.0.2 > > dogpile.cache       0.6.5 > > idna                2.6 > > iso8601             0.1.12 > > jmespath            0.9.3 > > jsonpatch           1.23 > > jsonpointer         2.0 > > keystoneauth1       3.5.0 > > language-selector   0.1 > > munch               2.3.1 > > netifaces           0.10.6 > > openstacksdk        0.13.0 > > os-service-types    1.2.0 > > packaging           17.1 > > pbr                 4.0.2 > > pip                 10.0.1 > > pycurl              7.43.0 > > pygobject           3.20.0 > > pyparsing           2.2.0 > > python-apt          1.1.0b1 > > python-debian       0.1.27 > > python-systemd      231 > > PyYAML              3.12 > > requests            2.18.4 > > requestsexceptions  1.4.0 > > setuptools          20.7.0 > > six                 1.10.0 > > ssh-import-id       5.5 > > stevedore           1.28.0 > > ufw                 0.35 > > unattended-upgrades 0.1 > > urllib3             1.22 > > wheel               0.29.0 > > $ pip3 check > > No broken requirements found. > > The information contained in this electronic message and any attachments > to this message are intended for the exclusive use of the addressee(s) > and may contain proprietary, confidential or privileged information. If > you are not the intended recipient, you should not disseminate, > distribute or copy this e-mail. Please notify the sender immediately and > destroy all copies of this message and any attachments. WARNING: > Computer viruses can be transmitted via email. The recipient should > check this email and any attachments for the presence of viruses. The > company accepts no liability for any damage caused by any virus > transmitted by this email.
www.wipro.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dmsimard at redhat.com Fri May 4 19:21:31 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 4 May 2018 15:21:31 -0400 Subject: [openstack-dev] [all] Recent failures to use ARA or generate reports in the gate In-Reply-To: References: Message-ID: Hi, I forgot to follow up but this was resolved last Monday April 30th when Flask released 0.12.4. Please let me know if you see anything out of the ordinary. Thanks ! David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Fri, Apr 27, 2018 at 11:41 AM, David Moreau Simard wrote: > Hi, > > I was made aware today that new installations of ARA were not working > or failing to generate reports in a variety of gate jobs with a stack > trace that ends with: > AttributeError: 'Blueprint' object has no attribute 'json_encoder' > > The root cause was identified to be a new release of Flask, 0.12.3, > which shipped broken packages to PyPi [1]. > This should be fixed momentarily once upstream ships a fixed 0.12.4 package. > > In the meantime, we're going to merge a requirements.txt update to > blacklist 0.12.3 but it won't be effective until we cut a new release > of ARA which we hope to be able to do sometime next week. > > I'll take the opportunity to remind users of ARA that we're > transitioning away from statically generated reports [3] and you > should do that too if you haven't already. > > [1]: https://github.com/pallets/flask/issues/2728 > [2]: https://github.com/openstack/requirements/blob/a5537a6f4b9cc477067949e1f9136415ac216f21/upper-constraints.txt#L480 > [3]: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128902.html > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] From dmsimard at redhat.com Fri May 4 19:22:26 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 4 May 2018 15:22:26 -0400 Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports In-Reply-To: References: Message-ID: Hi, It took longer than anticipated but the necessary changes have landed in both ARA and logs.openstack.org. The reports should now be faster and more reliable. Please let me know if you see anything out of the ordinary. Thanks ! David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Wed, Apr 4, 2018 at 6:16 PM, David Moreau Simard wrote: > Hi, > > You might have noticed that the performance (and reliability) of the > new reports aren't up to par. > If you see failures in loading content, a refresh will usually fix the issue. > > We have different fixes to improve the performance and the reliability > of the reports and we hope > to be able to land them soon. > > In the meantime, please let us know if there is any report that > appears to be particularly > problematic. > > Thanks ! > > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] > > > On Thu, Mar 29, 2018 at 6:14 PM, David Moreau Simard > wrote: >> Hi, >> >> By default, all jobs currently benefit from the generation of a static >> ARA report located in the "ara" directory at the root of the log >> directory.
>> Due to scalability concerns, these reports were only generated when a >> job failed and were not available on successful runs. >> >> I'm happy to announce that you can expect ARA reports to be available >> for every job from now on -- including the successful ones ! >> >> You'll notice a subtle but important change: the report directory will >> henceforth be named "ara-report" instead of "ara". >> >> Instead of generating and saving an HTML report, we'll now only save >> the ARA database in the "ara-report" directory. >> This is a special directory from the perspective of the >> logs.openstack.org server and ARA databases located in such >> directories will be loaded dynamically by a WSGI middleware. >> >> You don't need to do anything to benefit from this change -- it will >> be pushed to all jobs that inherit from the base job by default. >> >> However, if you happen to be using a "nested" installation of ARA and >> Ansible (i.e., OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this >> means that you can also leverage this feature. >> In order to do that, you'll want to create an "ara-report" directory >> and copy your ARA database inside before your logs are collected and >> uploaded. >> >> To help you visualize: >> /ara-report <-- This is the default Zuul report >> /logs/ara <-- This wouldn't be loaded dynamically >> /logs/ara-report <-- This would be loaded dynamically >> /logs/some/directory/ara-report <-- This would be loaded dynamically >> >> For more details on this feature of ARA, you can refer to the documentation [1]. >> >> Let me know if you have any questions ! >> >> [1]: https://ara.readthedocs.io/en/latest/advanced.html >> >> David Moreau Simard >> Senior Software Engineer | OpenStack RDO >> >> dmsimard = [irc, github, twitter] From dougal at redhat.com Fri May 4 19:34:24 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 4 May 2018 20:34:24 +0100 Subject: [openstack-dev] [mistral] Skipping Office Hours on Monday Message-ID: Hey folks, I forgot to say - I won't be around on Monday for office hours. Feel free to carry on without me. It should be less formal than a meeting, so anyone can chat - just send some messages to #openstack-mistral so folks know you are around for it. The etherpad has a pinglist you can use to remind regulars. https://etherpad.openstack.org/p/mistral-office-hours Cheers, Dougal -------------- next part -------------- An HTML attachment was scrubbed... URL: From hrybacki at redhat.com Fri May 4 19:55:43 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Fri, 4 May 2018 15:55:43 -0400 Subject: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles Message-ID: Greetings All, After a discussion in #openstack-tc[1] earlier today, the Keystone team is adjusting its approach to proposing default roles[2]. Subsequently, I have ported the current default roles specification from openstack-specs[3] to keystone-specs[2]. The original review has been in a pretty stable state for a few weeks. As such, I propose we allow the new spec an exception to the original Rocky-m1 proposal freeze date. I invite more discussion around default roles, and our proposed approach. The Keystone team has a forum session[4] dedicated to this topic at 1135 on day one of the Vancouver Summit. Everyone should feel welcome and encouraged to attend -- we hope that this work will lead to an OpenStack Community Goal in a not-so-distant release.
[1] - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-04.log.html#t2018-05-04T14:40:36 [2] - https://review.openstack.org/#/c/566377/ [3] - https://review.openstack.org/#/c/523973/ [4] - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles /R Harry Rybacki From lbragstad at gmail.com Fri May 4 20:16:09 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 4 May 2018 15:16:09 -0500 Subject: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles In-Reply-To: References: Message-ID: <28cb94f9-2cff-16b0-5f93-ca9780f2b7b4@gmail.com> On 05/04/2018 02:55 PM, Harry Rybacki wrote: > Greetings All, > > After a discussion in #openstack-tc[1] earlier today, the Keystone > team is adjusting its approach in proposing default roles[2]. > Subsequently, I have ported the current default roles specification > from openstack-specs[3] to keystone-specs[2]. > > The original review has been in a pretty stable state for a few weeks. > As such, I propose we allow the new spec an exception to the original > Rocky-m1 proposal freeze date. I don't have an issue with this, especially since we talked about it heavily at the PTG. We also had people familiar with keystone +1 the openstack-spec prior to keystone's proposal freeze. I'm OK granting an exception here if other keystone contributors don't object. > > I invite more discussion around default roles, and our proposed > approach. The Keystone team has a forum session[4] dedicated to this > topic at 1135 on day one of the Vancouver Summit. Everyone should feel > welcome and encouraged to attend -- we hope that this work will lead > to an OpenStack Community Goal in a not-so-distant release. I think scoping this down to be keystone-specific is a smart move. It allows us to focus on building a solid template for other projects to learn from. I was pleasantly surprised to hear people in -tc suggest this as a candidate for a community goal in Stein or T. Also, big thanks to jroll, dhellmann, ttx, zaneb, smcginnis, johnsom, and mnaser for taking time to work through this with us. > [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-04.log.html#t2018-05-04T14:40:36 > [2] - https://review.openstack.org/#/c/566377/ > [3] - https://review.openstack.org/#/c/523973/ > [4] - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles > > > /R > > Harry Rybacki > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From johnsomor at gmail.com Fri May 4 21:27:53 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 4 May 2018 14:27:53 -0700 Subject: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout In-Reply-To: <27191_1525337511_5AEACDA7_27191_411_1_a53fc88b781d46baad9b08e9bb30b489@orange.com> References: <11302_1524654452_5AE06174_11302_207_1_2be855e5b8174bf397106775823399bf@orange.com> <27191_1525337511_5AEACDA7_27191_411_1_a53fc88b781d46baad9b08e9bb30b489@orange.com> Message-ID: I have commented on both of those stories. Thank you for submitting them. As for the values, this is hard as those settings depend on a lot of factors. The default values are targeted towards developers and likely need to be adjusted for production. We have not yet put together our deployment guide where we would cover this type of tuning. Sigh, so much to do and not enough team members. Here are some comments I can give on those settings: [health_manager] failover_threads - This is the maximum number of parallel failovers each instance (process) of the octavia-healthmanager can process at the same time. Beyond this number they queue until a thread becomes available. If your cloud is fairly stable and you have few health managers, this can be a reasonably low number. Consider the maximum number of amphora you would have on a single compute host should it fail. Also take into account the CPU power available on the health manager host. status_update_threads - This is the maximum number of health heartbeat messages each instance (process) of the octavia-healthmanager can process at the same time. The more octavia-healthmanagers you have, the lower this can be. The upper limit on this is related to how fast your database is processing the updates. Should this number be too low, the health manager will start logging warnings that you need more health managers. [haproxy_amphora] build_rate_limit build_active_retries These two settings are only used if build rate limiting is enabled (not by default). This would be set if your Nova infrastructure cannot handle the rate of instance builds Octavia is asking of it. This will prioritize instance builds based on the need and will limit the rate of instance builds Octavia asks Nova for. The only impact to the Octavia controllers is increased memory utilization if there are a large number of builds being queued waiting for Nova. You missed these two: connection_max_retries connection_retry_interval These values are typically adjusted in production environments as they are tuned for exceedingly slow development systems (virtualbox, etc.) where booting instances can take up to twenty minutes. This is the time after Nova declares the instance "ACTIVE" and when the kernel finishes booting in the instance and the amphora agent is running. The default is to wait 25 minutes. In production you would expect to drop this number significantly. On a typical cloud this should take less than thirty seconds, but you should give it some buffer in case a host is especially busy. Again this depends on the performance of your cloud. [controller_worker] workers - This is the number of worker threads pulling user requests from the oslo messaging queue for each instance of the octavia-worker process.
This number would be tuned depending on the number of worker controllers you have in your cloud and the rate of user requests (create, update, delete) that need to be serviced by a worker. GET calls do not require a worker. This will also be limited by the controller host CPU and RAM capacities. amp_active_retries amp_active_wait_sec Both of these values depend on the performance of your Nova environment. This is how many times and how often we check Nova to see if a requested instance has become "ACTIVE". Unless your Nova environment is unusually slow, you should not need to change these values. [task_flow] max_workers - This value limits the parallelism inside the TaskFlow flows used by the controllers. Currently there is little reason to adjust this value as the degrees of parallelism in our flows are not higher than this value. However, when we release Active-Active load balancers this value will control the number of parallel amphora builds up to the build limit above. Michael On Thu, May 3, 2018 at 1:51 AM, wrote: > Hi Michael, > > I built a new amphora image with the latest patches and I reproduced two different bugs that I see in my environment. One of them is similar to the one initially described in this thread. I opened two stories as you advised: > > https://storyboard.openstack.org/#!/story/2001960 > https://storyboard.openstack.org/#!/story/2001955 > > Meanwhile, can you provide some recommendation of values for the following parameters (maybe in relation with the number of workers, cores, computes etc)? > > [health_manager] > failover_threads > status_update_threads > > [haproxy_amphora] > build_rate_limit > build_active_retries > > [controller_worker] > workers > amp_active_retries > amp_active_wait_sec > > [task_flow] > max_workers > > Thank you for your help, > Mihaela Balas > > -----Original Message----- > From: Michael Johnson [mailto:johnsomor at gmail.com] > Sent: Friday, April 27, 2018 8:24 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout > > Hi Mihaela, > > I am sorry to hear you are having trouble with the queens release of Octavia. It is true that a lot of work has gone into the failover capability, specifically working around a python threading issue and making it more resistant to certain neutron failure situations (missing ports, etc.). > > I know of one open bug against the failover flows, https://storyboard.openstack.org/#!/story/2001481, "failover breaks in Active/Standby mode if both amphorae are down". > > Unfortunately the log snippet above does not give me enough information about the problem to help with this issue. From the snippet it looks like the failovers were initiated, but the controllers are unable to reach the amphora-agent on the replacement amphora. It will continue those retry attempts, but eventually will fail the amphora into ERROR if it doesn't succeed. > > One thought I have is if you created your amphora image in the last two weeks, you may have built an amphora using the master branch of octavia, which had a bug that impacted active/standby images. This was introduced working around the new pip 10 issues.
> That patch has been > fixed: https://review.openstack.org/#/c/564371/ > > If neither of these situations match your environment, please open a story (https://storyboard.openstack.org/#!/dashboard/stories) for us and include the health manager logs from the point you delete the amphora up until it starts these connection attempts. We will dig through those logs to see what the issue might be. > > Michael (johnsom) > > On Wed, Apr 25, 2018 at 4:07 AM, wrote: >> Hello, >> >> >> >> I am testing Octavia Queens and I see that the failover behavior is >> very much different than the one in Ocata (this is the version we are >> currently running in production). >> >> One example of such behavior is: >> >> >> >> I create 4 load balancers and after the creation is successful, I shut >> off all the 8 amphoras. Sometimes, even the health-manager agent does >> not reach the amphoras, they are not deleted and re-created. The logs >> look like shown below even when the heartbeat timeout is long passed. >> Sometimes the amphoras are deleted and re-created. Sometimes, they >> are partially re-created – part of them remain in shut off. >> >> Heartbeat_timeout is set to 60 seconds. >> >> >> >> >> >> >> >> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-339b54a7-ab0c-422a-832f-a444cd710497 - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance. Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octav >> iasrv2.orange.com.pem (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f559862c710>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance. Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8 >> -9d73-2397e281712c/haproxy (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f8a0de95e10>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-10febb10-85ea-4082-9df7-daa48894b004 - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance.
Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b >> -b93f-1da3957a5b71/haproxy (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f5598491c90>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-339b54a7-ab0c-422a-832f-a444cd710497 - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance. Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octav >> iasrv2.orange.com.pem (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f5598520790>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance. Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8 >> -9d73-2397e281712c/haproxy (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f8a0de953d0>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 >> WARNING octavia.amphorae.drivers.haproxy.rest_api_driver >> [req-10febb10-85ea-4082-9df7-daa48894b004 - >> a5f15235c0714365b98a50a11ec956e7 >> - - -] Could not connect to instance. Retrying.: ConnectionError: >> HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries >> exceeded with url: >> /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b >> -b93f-1da3957a5b71/haproxy (Caused by >> NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection >> object at 0x7f55984e2050>: Failed to establish a new connection: >> [Errno 113] No route to host',)) >> >> >> >> Thank you, >> >> Mihaela Balas >> ______________________________________________________________________ >> ___________________________________________________ >> >> Ce message et ses pieces jointes peuvent contenir des informations >> confidentielles ou privilegiees et ne doivent donc pas etre diffuses, >> exploites ou copies sans autorisation. Si vous avez recu ce message >> par erreur, veuillez le signaler a l'expediteur et le detruire ainsi >> que les pieces jointes. Les messages electroniques etant susceptibles >> d'alteration, Orange decline toute responsabilite si ce message a ete >> altere, deforme ou falsifie. Merci. >> >> This message and its attachments may contain confidential or >> privileged information that may be protected by law; they should not >> be distributed, used or copied without authorisation. >> If you have received this email in error, please notify the sender and >> delete this message and its attachments. >> As emails may be altered, Orange is not liable for messages that have >> been modified, changed or falsified. >> Thank you.
>> >> >> ______________________________________________________________________ >> ____ OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > _________________________________________________________________________________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, > Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. > > This message and its attachments may contain confidential or privileged information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. > Thank you. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Fri May 4 21:33:19 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 4 May 2018 16:33:19 -0500 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 30 April 2018 Message-ID: <269ad854-06f4-efcc-5dcd-9c8fc91a96d5@gmail.com> # Keystone Team Update - Week of 30 April 2018 ## News Most of this week was spent firming up specification details. There were a couple good discussions on unified limits and the default role work we're targeting for the release. ## Open Specs Dashboard: https://tinyurl.com/yagxuyfr We had some really good discussions [0] regarding unified limits and hierarchical enforcement models. We actually have the behaviors of a specific enforcement model written down [1][2] and ready for review. CERN was able to review it and gave us some positive feedback on the approach. Please have a look if this interests you. We've also decided to refocus the default roles work after getting some help from jroll and the TC earlier today [3]. Harry re-proposed the openstack-spec [4] to the keystone-spec [5] repository and summarized the discussion on the mailing list [6]. 
[0] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-01.log.html#t2018-05-01T14:56:46 [1] https://review.openstack.org/#/c/540803/ [2] https://review.openstack.org/#/c/565412/ [3] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-04.log.html#t2018-05-04T14:23:09 [4] https://review.openstack.org/#/c/523973/ [5] https://review.openstack.org/#/c/566377/ [6] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130207.html ## Recently Merged Changes Dashboard: https://tinyurl.com/yagxuyfr We merged 17 changes this week. ## Changes that need Attention Dashboard: https://tinyurl.com/yagxuyfr There are 60 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. If you have time to review, it would be greatly appreciated. ## Bugs This week we opened 4 new bugs and fixed 1. We have 14 patches in review that are mergeable and close a bug. There are 23 that are bug related and don't have any negative feedback. Search query: https://tinyurl.com/yag837l2 ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html Our next deadline is June 8th, which is specification freeze. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gerard.damm at wipro.com Fri May 4 22:25:59 2018 From: gerard.damm at wipro.com (gerard.damm at wipro.com) Date: Fri, 4 May 2018 22:25:59 +0000 Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client In-Reply-To: References: Message-ID: Many thanks for the welcome ;) And many thanks for the speedy and very useful response ! Details below. Best regards, Gerard -------------------------------------------------------------------- For add_gateway_to_router(): So I tried this: external_network = conn.network.find_network(EXTERNAL_NETWORK_NAME) network_dict_body = {'network_id' : external_network.id} conn.network.add_gateway_to_router(onap_router, **network_dict_body) ==> no errors, but the router is not updated (no gateway is set) (external_gateway_info is still None) (same with conn.network.add_gateway_to_router(onap_router, network_id=external_network.id) ) Is the body parameter for add_gateway_to_router() expected to correspond to a Network ? (from a router point of view, a "gateway" is an external network) Should the network's subnet(s) be also specified in the dictionary ? Maybe only if certain specific subnets are desired for the gateway role. Otherwise, the default would apply: there is usually only 1 subnet, and that's the one to be used. So network_id would be enough to specify a gateway used in a standard way. Maybe more details about what is expected in this body dictionary should be documented in the add_gateway_to_router() section? In Horizon, when selecting a router, and selecting "Set Gateway", the user is only asked to pick an external network from a dropdown list. Then, a router interface is implicitly created, with an IP@ picked from the subnet of that network.
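(One more thing I plan to try for the gateway -- untested so far, so treat it as a guess: bypass add_gateway_to_router() entirely and set external_gateway_info directly through update_router(), which I believe is closer to what Horizon ends up doing:

    external_network = conn.network.find_network(EXTERNAL_NETWORK_NAME)
    # untested guess: a plain router update carrying the gateway info
    onap_router = conn.network.update_router(
        onap_router,
        external_gateway_info={'network_id': external_network.id})
    print(onap_router.external_gateway_info)

If that works, it would at least confirm that the router update path is fine and that the problem is specific to add_gateway_to_router().)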
-------------------------------------------------------------------- For router deletion: it looks like it's the "!= None" test on the returned object that has an issue onap_router = conn.network.find_router(ONAP_ROUTER_NAME) if onap_router != None: print('Deleting ONAP router...') conn.network.delete_router(onap_router.id) else: print('No ONAP router found...') I added traceback printouts in the code. printing the router before trying to delete it: onap_router: openstack.network.v2.router.Router(updated_at=2018-05-04T21:07:23Z, description=Router created for ONAP, status=ACTIVE, ha=False, name=ONAP_router, created_at=2018-05-04T21:07:20Z, tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, availability_zone_hints=[], admin_state_up=True, availability_zones=['nova'], tags=[], revision=3, routes=[], id=675abd14-096a-4b28-b764-31ca7098913b, external_gateway_info=None, distributed=False, flavor_id=None) *** Exception: 'NoneType' object has no attribute '_body' *** traceback.print_tb(): File "auto_script_config_openstack_for_onap.py", line 141, in delete_all_ONAP if onap_router != None: File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__ return all([self._body.attributes == comparand._body.attributes, *** traceback.print_exception(): Traceback (most recent call last): File "auto_script_config_openstack_for_onap.py", line 141, in delete_all_ONAP if onap_router != None: File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__ return all([self._body.attributes == comparand._body.attributes, AttributeError: 'NoneType' object has no attribute '_body' -------------------------------------------------------------------- For identity_api_version=3 : yes, that worked ! Could that identity_api_version parameter also/instead be specified in the clouds.yaml file ? -------------------------------------------------------------------- Here's the traceback info for the flavor error, also on the "!= None" test : *** Exception: 'NoneType' object has no attribute '_body' *** traceback.print_tb(): File "auto_script_config_openstack_for_onap.py", line 537, in configure_all_ONAP if tiny_flavor != None: File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__ return all([self._body.attributes == comparand._body.attributes, *** traceback.print_exception(): Traceback (most recent call last): File "auto_script_config_openstack_for_onap.py", line 537, in configure_all_ONAP if tiny_flavor != None: File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__ return all([self._body.attributes == comparand._body.attributes, AttributeError: 'NoneType' object has no attribute '_body' -------------------------------------------------------------------- For the image creation: ah, OK, indeed, there is an image proxy (even 2: v1, v2), and maybe the compute / image operations are redundant (or maybe not, for convenience) ? and yes, it worked ! There was no need for additional parameters. The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. 
The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email. www.wipro.com From gdubreui at redhat.com Fri May 4 23:18:39 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Sat, 5 May 2018 09:18:39 +1000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> Message-ID: <2eaa29bc-025d-1dfb-dca0-b8a9a376dc1f@redhat.com> Right, let's announce the Proof of Concept project as of Neutron, invite anyone interested and start it. There is an API SIG BoF at Vancouver, where we will announce it too. And for everyone who can attend, to be welcome to discuss it: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session Yeah, Graphene is the only one listed by GraphQL organization for Python: http://graphql.org/code/#python. I think we should take this discussion on the coming project thread. Thank you everyone and see you there. Cheers, Gilles On 04/05/18 23:16, Flint WALRUS wrote: > As clarify by Gilles and Kevin we absolutely can  get GraphQL with the > control plan API and the workers api. > > Ok, how do start to work on that? What’s the next step? > > Which server library do we want to use? > I personally use graphene with python as it is the library listed by > the official GraphQL website. I don’t even know if there is another > library available indeed. > > Are we ok to try to use neutron as a PoC service? > > Le ven. 4 mai 2018 à 06:41, Gilles Dubreuil > a écrit : > > Actually Mutations fields are only data to be displayed, if > needed, by > the response. > The data changes comes with the parameters. > So the correct mutation syntax is: > > mutation rebootServer { >    updateServer(id: ) { >      reboot(type: "HARD") >    } > } > > Also the latter example would be a "data API" equivalent using CRUD > function like "updateServer" > > And the following example would be a "plane API" equivalent approach > with an action function: > > mutation hardReboot { >    rebootServer(id: , type: "HARD") > } > > Sorry for the initial confusion but I think this is important because > GraphQL schema helps clarify data and the operations. > > > On 04/05/18 13:20, Gilles Dubreuil wrote: > > > > On 04/05/18 05:34, Fox, Kevin M wrote: > >> k8s does that I think by separating desired state from actual > state > >> and working to bring the two inline. the same could (maybe even > >> should) be done to openstack. But your right, that is not a small > >> amount of work. > > > > K8s makes perfect sense to follow declarative approach. > > > > That said a mutation following control plane API action semantic > could > > be very similar: > > > > mutation rebootServer { > >   Server(id: ) { > >     reboot: { > >       type: "HARD" > >     } > >   } > > } > > > > > > "rebootServer" being an alias to name the request. > > > > > >> Even without using GraphQL, Making the api more declarative > anyway, > >> has advantages. 
> > > > +1 > > > >> Thanks, > >> Kevin > >> ________________________________________ > >> From: Jay Pipes [jaypipes at gmail.com ] > >> Sent: Thursday, May 03, 2018 10:50 AM > >> To: openstack-dev at lists.openstack.org > > >> Subject: Re: [openstack-dev] [api] REST limitations and GraghQL > >> inception? > >> > >> On 05/03/2018 12:57 PM, Ed Leafe wrote: > >>> On May 2, 2018, at 2:40 AM, Gilles Dubreuil > > > >>> wrote: > >>>>> • We should get a common consensus before all projects start to > >>>>> implement it. > >>>> This is going to be raised during the API SIG weekly meeting > later > >>>> this week. > >>>> API developers (at least one) from every project are strongly > >>>> welcomed to participate. > >>>> I suppose it makes sense for the API SIG to be the place to > discuss > >>>> it, at least initially. > >>> It was indeed discussed, and we think that it would be a > worthwhile > >>> experiment. But it would be a difficult, if not impossible, > proposal > >>> to have adopted OpenStack-wide without some data to back it > up. So > >>> what we thought would be a good starting point would be to have a > >>> group of individuals interested in GraphQL form an informal > team and > >>> proceed to wrap one OpenStack API as a proof-of-concept. Monty > >>> Taylor suggested Neutron as an excellent candidate, as its API > >>> exposes things at an individual table level, requiring the > client to > >>> join that information to get the answers they need. > >>> > >>> Once that is done, we could examine the results, and use them > as the > >>> basis for proceeding with something more comprehensive. Does that > >>> sound like a good approach to (all of) you? > >> Did anyone bring up the differences between control plane APIs > and data > >> APIs and the applicability of GraphQL to the latter and not the > former? > >> > >> For example, a control plane API to reboot a server instance > looks like > >> this: > >> > >> POST /servers/{uuid}/action > >> { > >>       "reboot" : { > >>           "type" : "HARD" > >>       } > >> } > >> > >> how does that map to GraphQL? Via GraphQL's "mutations" [0]? That > >> doesn't really work since the server object isn't being > mutated. I mean, > >> the state of the server will *eventually* be mutated when the > reboot > >> action starts kicking in (the above is an async operation > returning a > >> 202 Accepted). But the act of hitting POST /servers/{uuid}/action > >> doesn't actually mutate the server's state. > >> > >> This is just one example of where GraphQL doesn't necessarily > map well > >> to control plane APIs that happen to be built on top of > REST/HTTP [1] > >> > >> Bottom line for me would be what is the perceivable benefit > that all of > >> our users would receive given the (very costly) overhaul of our > APIs > >> that would likely be required. > >> > >> Best, > >> -jay > >> > >> [0] http://graphql.org/learn/queries/#mutations > >> [1] One could argue (and I have in the past) that POST > >> /servers/{uuid}/action isn't a RESTful interface at all... 
> >> > >> > __________________________________________________________________________ > > >> > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > __________________________________________________________________________ > > >> > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Fri May 4 23:42:59 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Fri, 04 May 2018 23:42:59 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: <2eaa29bc-025d-1dfb-dca0-b8a9a376dc1f@redhat.com> References: <9e8ab8c2-1025-c1c0-7f02-080cc8ae8fc1@redhat.com> <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> <2eaa29bc-025d-1dfb-dca0-b8a9a376dc1f@redhat.com> Message-ID: I will not attend the vancouver summit but I’ll try to attend the berlin one as it’s closer to me. However I’ll be happy to join the conversation and give a hand, especially if you need an operational point of view as our Openstack usage is constantly growing within an heterogeneous environment ranging from a grizzly cluster (deprecating it this year) to a shiny Queens one on multiple geographic area. I think our setup gives us a really good point of view of what are the Openstack PITA and what operators are expecting the foundation to do with such challenges. Le sam. 5 mai 2018 à 01:18, Gilles Dubreuil a écrit : > Right, let's announce the Proof of Concept project as of Neutron, invite > anyone interested and start it. > > There is an API SIG BoF at Vancouver, where we will announce it too. And > for everyone who can attend, to be welcome to discuss it: > > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session > > Yeah, Graphene is the only one listed by GraphQL organization for Python: > http://graphql.org/code/#python. > > I think we should take this discussion on the coming project thread. > > Thank you everyone and see you there. > > Cheers, > Gilles > > On 04/05/18 23:16, Flint WALRUS wrote: > > As clarify by Gilles and Kevin we absolutely can get GraphQL with the > control plan API and the workers api. > > Ok, how do start to work on that? What’s the next step? > > Which server library do we want to use? 
> I personally use graphene with python as it is the library listed by the > official GraphQL website. I don’t even know if there is another library > available indeed. > > Are we ok to try to use neutron as a PoC service? > > Le ven. 4 mai 2018 à 06:41, Gilles Dubreuil a > écrit : > >> Actually Mutations fields are only data to be displayed, if needed, by >> the response. >> The data changes comes with the parameters. >> So the correct mutation syntax is: >> >> mutation rebootServer { >> updateServer(id: ) { >> reboot(type: "HARD") >> } >> } >> >> Also the latter example would be a "data API" equivalent using CRUD >> function like "updateServer" >> >> And the following example would be a "plane API" equivalent approach >> with an action function: >> >> mutation hardReboot { >> rebootServer(id: , type: "HARD") >> } >> >> Sorry for the initial confusion but I think this is important because >> GraphQL schema helps clarify data and the operations. >> >> >> On 04/05/18 13:20, Gilles Dubreuil wrote: >> > >> > On 04/05/18 05:34, Fox, Kevin M wrote: >> >> k8s does that I think by separating desired state from actual state >> >> and working to bring the two inline. the same could (maybe even >> >> should) be done to openstack. But your right, that is not a small >> >> amount of work. >> > >> > K8s makes perfect sense to follow declarative approach. >> > >> > That said a mutation following control plane API action semantic could >> > be very similar: >> > >> > mutation rebootServer { >> > Server(id: ) { >> > reboot: { >> > type: "HARD" >> > } >> > } >> > } >> > >> > >> > "rebootServer" being an alias to name the request. >> > >> > >> >> Even without using GraphQL, Making the api more declarative anyway, >> >> has advantages. >> > >> > +1 >> > >> >> Thanks, >> >> Kevin >> >> ________________________________________ >> >> From: Jay Pipes [jaypipes at gmail.com] >> >> Sent: Thursday, May 03, 2018 10:50 AM >> >> To: openstack-dev at lists.openstack.org >> >> Subject: Re: [openstack-dev] [api] REST limitations and GraghQL >> >> inception? >> >> >> >> On 05/03/2018 12:57 PM, Ed Leafe wrote: >> >>> On May 2, 2018, at 2:40 AM, Gilles Dubreuil >> >>> wrote: >> >>>>> • We should get a common consensus before all projects start to >> >>>>> implement it. >> >>>> This is going to be raised during the API SIG weekly meeting later >> >>>> this week. >> >>>> API developers (at least one) from every project are strongly >> >>>> welcomed to participate. >> >>>> I suppose it makes sense for the API SIG to be the place to discuss >> >>>> it, at least initially. >> >>> It was indeed discussed, and we think that it would be a worthwhile >> >>> experiment. But it would be a difficult, if not impossible, proposal >> >>> to have adopted OpenStack-wide without some data to back it up. So >> >>> what we thought would be a good starting point would be to have a >> >>> group of individuals interested in GraphQL form an informal team and >> >>> proceed to wrap one OpenStack API as a proof-of-concept. Monty >> >>> Taylor suggested Neutron as an excellent candidate, as its API >> >>> exposes things at an individual table level, requiring the client to >> >>> join that information to get the answers they need. >> >>> >> >>> Once that is done, we could examine the results, and use them as the >> >>> basis for proceeding with something more comprehensive. Does that >> >>> sound like a good approach to (all of) you? 
>> >> Did anyone bring up the differences between control plane APIs and data
>> >> APIs and the applicability of GraphQL to the latter and not the former?
>> >>
>> >> For example, a control plane API to reboot a server instance looks like
>> >> this:
>> >>
>> >> POST /servers/{uuid}/action
>> >> {
>> >>     "reboot" : {
>> >>         "type" : "HARD"
>> >>     }
>> >> }
>> >>
>> >> how does that map to GraphQL? Via GraphQL's "mutations" [0]? That
>> >> doesn't really work since the server object isn't being mutated. I mean,
>> >> the state of the server will *eventually* be mutated when the reboot
>> >> action starts kicking in (the above is an async operation returning a
>> >> 202 Accepted). But the act of hitting POST /servers/{uuid}/action
>> >> doesn't actually mutate the server's state.
>> >>
>> >> This is just one example of where GraphQL doesn't necessarily map well
>> >> to control plane APIs that happen to be built on top of REST/HTTP [1]
>> >>
>> >> Bottom line for me would be what is the perceivable benefit that all of
>> >> our users would receive given the (very costly) overhaul of our APIs
>> >> that would likely be required.
>> >>
>> >> Best,
>> >> -jay
>> >>
>> >> [0] http://graphql.org/learn/queries/#mutations
>> >> [1] One could argue (and I have in the past) that POST
>> >> /servers/{uuid}/action isn't a RESTful interface at all...
>> >>
>> >> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >>
>> >> __________________________________________________________________________
>> >> OpenStack Development Mailing List (not for usage questions)
>> >> Unsubscribe:
>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> --
>> Gilles Dubreuil
>> Senior Software Engineer - Red Hat - Openstack DFG Integration
>> Email: gilles at redhat.com
>> GitHub/IRC: gildub
>> Mobile: +61 400 894 219
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Gilles Dubreuil
Senior Software Engineer - Red Hat - Openstack DFG Integration
Email: gilles at redhat.com
GitHub/IRC: gildub
Mobile: +61 400 894 219

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ekcs.openstack at gmail.com  Fri May  4 23:53:00 2018
From: ekcs.openstack at gmail.com (Eric K)
Date: Fri, 04 May 2018 16:53:00 -0700
Subject: Re: [openstack-dev] [keystone][monasca][congress][senlin][telemetry]
	authenticated webhook notifications
In-Reply-To:
References:
Message-ID:

Thanks a lot Witold and Thomas!

So it doesn't seem that anyone is currently using a keystone token to
authenticate webhooks? Is it simply because most of the use cases have
involved services which do not use keystone? Or is it unsuitable for
another reason?
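(For context, the signed-URL schemes mentioned in the reply below all boil
down to something like this minimal sketch: an HMAC over the URL plus an
expiry, using a shared secret. The helper name here is made up for
illustration; it is not how any particular project implements it.)

import hashlib
import hmac
import time
from urllib.parse import urlencode

def presign_webhook(url, secret, lifetime=3600):
    # Embed an expiry and an HMAC over (url, expiry) so the receiver can
    # verify the call offline, without needing a Keystone token.
    expires = int(time.time()) + lifetime
    payload = '%s\n%d' % (url, expires)
    sig = hmac.new(secret.encode(), payload.encode(),
                   hashlib.sha256).hexdigest()
    return '%s?%s' % (url, urlencode({'expires': expires,
                                      'signature': sig}))

# The receiver recomputes the HMAC with the same secret, compares it with
# hmac.compare_digest(), and rejects the call once 'expires' has passed.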
On 5/4/18, 2:36 AM, "Thomas Herve" <therve at redhat.com> wrote:

>On Thu, May 3, 2018 at 9:49 PM, Eric K <ekcs.openstack at gmail.com> wrote:
>> Question to the projects which send or consume webhook notifications
>> (telemetry, monasca, senlin, vitrage, etc.), what are your
>> supported/preferred authentication mechanisms? Bearer token (e.g.
>> Keystone)? Signing?
>>
>> Any pointers to past discussions on the topic? My interest here is
>>having
>> Congress consume and send webhook notifications.
>>
>> I know some people are working on adding the keystone auth option to
>> Monasca's webhook framework. If there is a project that already does it,
>> it could be a very helpful reference.
>
>Hi,
>
>I'll add a few that you didn't mention which consume such webhooks.
>
> * Heat has been using EC2 signatures basically since forever. It
>creates EC2 credentials for a Keystone user, and signs URL that way.
> * Zaqar has signed URLs
>(https://developer.openstack.org/api-ref/message/#pre-signed-queue)
>which allows sharing queues without authentication.
> * Swift temp URLs
>(https://docs.openstack.org/swift/latest/middleware.html#tempurl) is a
>good mechanism to share information as well.
>
>I'd say application credentials would make those operations a bit
>nicer, but they are not completely there yet. Everybody not
>reinventing its own wheel would be nice too :).
>
>--
>Thomas
>
>__________________________________________________________________________
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhang.lei.fly at gmail.com  Sat May  5 00:03:14 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Sat, 5 May 2018 08:03:14 +0800
Subject: Re: [openstack-dev] [kolla-ansible] Configure OpenStack services
	to use Rabbit HA queues
In-Reply-To:
References:
Message-ID:

Hi Vladislav,

I guess you are talking about the rabbit_ha_queues option. It is already
marked as deprecated [0].

    cfg.BoolOpt('rabbit_ha_queues',
                default=False,
                deprecated_group='DEFAULT',
                help='Try to use HA queues in RabbitMQ (x-ha-policy: all). '
                'If you change this option, you must wipe the RabbitMQ '
                'database. In RabbitMQ 3.0, queue mirroring is no longer '
                'controlled by the x-ha-policy argument when declaring a '
                'queue. If you just want to make sure that all queues (except '
                'those with auto-generated names) are mirrored across all '
                'nodes, run: '
                """\"rabbitmqctl set_policy HA '^(?!amq\.).*' """
                """'{"ha-mode": "all"}' \""""),

In kolla, we configure a global ha-mode policy through rabbitmq's
definitions.json file, please check [1].

[0] https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L165-L176
[1] https://github.com/openstack/kolla-ansible/blob/d2d9c6622888416ad2e748706fd237f8588e993a/ansible/roles/rabbitmq/templates/definitions.json.j2#L20

On Sat, May 5, 2018 at 12:58 AM, <vladislav.belogrudov at oracle.com> wrote:

> Hi,
>
> is there a reason we don't configure services for rabbitmq ha queues like
> it is suggested in [0] ?
> Rabbitmq itself has ha policy 'on' via one of its templates.
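(For reference, the policy such a definitions file ends up applying is
roughly equivalent to the fragment below. This is an illustrative sketch,
not the exact kolla template, and it matches the rabbitmqctl set_policy
command suggested in the deprecation notice above.)

{
    "policies": [
        {
            "vhost": "/",
            "name": "ha-all",
            "pattern": "^(?!amq\\.).*",
            "apply-to": "all",
            "definition": {"ha-mode": "all"},
            "priority": 0
        }
    ]
}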
> > Thanks,
> > Vladislav Belogrudov
> >
> > [0] https://docs.openstack.org/ha-guide/shared-messaging.html#rabbitmq-services
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rico.lin.guanyu at gmail.com  Sat May  5 04:02:42 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Sat, 5 May 2018 12:02:42 +0800
Subject: [openstack-dev] [Openstack-operators][heat][all] Heat now migrated
	to StoryBoard!!
Message-ID:

Dear all Heat members and friends

As you might be aware, OpenStack projects are scheduled to migrate ([5])
from Launchpad to StoryBoard [1].
For those who would like to know where to file a bug/blueprint, here are
some heads-ups for you.

*What's StoryBoard?*
StoryBoard is a cross-project task-tracker. It contains a number of
``projects``, and each project contains a number of ``stories``, which you
can think of as issues or blueprints. Each story contains one or multiple
``tasks`` (tasks break a story down into the pieces of work needed to
resolve/implement it). To learn more about StoryBoard or how to make a
good story, you can reference [6].

*How to file a bug?*
This is actually simple: use your current Ubuntu One ID to log in to
StoryBoard. Then find the corresponding project in [2] and create a story
under it with a description of your issue. We should try to create tasks
that patches in Gerrit can reference.

*How to work on a spec (blueprint)?*
File a story like you used to file a blueprint, and create tasks for your
plan. You might also want to create a task for adding a spec (in the
heat-specs repo) if your blueprint needs a document to explain it.
The current blueprint page is still open, so if you would like to create a
story from a BP, you can still get the information from there. From now on
we will work with a task-driven workflow, so a BP is not much different
from a bug in StoryBoard (which is simply a story with many tasks).

*Where should I put my story?*
We migrated all Heat sub-projects to StoryBoard to try to keep the impact
on whatever you're doing as small as possible. However, if you plan to
create a new story, *please create it under the heat project [4]* and tag
it with what it might affect (like python-heatclient, heat-dashboard,
heat-agents). We do hope to let users focus their stories in one place so
all stories will get better attention and project maintainers don't need
to search separate places to find them.

*How to connect from Gerrit to StoryBoard?*
We used the following keys to reference Launchpad:
Closes-Bug: #######
Partial-Bug: #######
Related-Bug: #######

Now in StoryBoard, you can use the following keys:
Task: ######
Story: ######
You can find more info in [3], and an example commit message is shown
below.

*What do I need to do for my existing bugs/BPs?*
Your bugs are automatically migrated to StoryBoard; however, the
references in your patches were not, so you need to change your commit
messages to replace the old Launchpad links with the new StoryBoard links.

*Do we still need Launchpad after all this migration is done?*
As planned, we won't need Launchpad for Heat anymore once we are done with
migrating, and we will forbid new bugs/BPs being filed in Launchpad. Also,
please try to provide as much information as possible in new stories.
Hopefully, we can make everyone happy.
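As an example, a commit message footer referencing a story and one of its
tasks could look like this (the summary line and numbers are placeholders):

    Fix resource list pagination

    Task: 12345
    Story: 2001234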
For bugs newly created during/after the migration, don't worry: we will
disallow creating new bugs/BPs in Launchpad and do a second migration pass,
so we won't miss yours.

[1] https://storyboard.openstack.org/
[2] https://storyboard.openstack.org/#!/project_group/82
[3] https://docs.openstack.org/infra/manual/developers.html#development-workflow
[4] https://storyboard.openstack.org/#!/project/989
[5] https://docs.openstack.org/infra/storyboard/migration.html
[6] https://docs.openstack.org/infra/storyboard/gui/tasks_stories_tags.html#what-is-a-story

--
May The Force of OpenStack Be With You,

*Rico Lin*irc: ricolin

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rico.lin.guanyu at gmail.com  Sat May  5 04:15:00 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Sat, 5 May 2018 12:15:00 +0800
Subject: [openstack-dev] [Openstack-operators][heat][all] Heat now migrated
	to StoryBoard!!
In-Reply-To:
References:
Message-ID:

looping heat-dashboard team

2018-05-05 12:02 GMT+08:00 Rico Lin <rico.lin.guanyu at gmail.com>:

> Dear all Heat members and friends
>
> As you might award, OpenStack projects are scheduled to migrating ([5])
> from Launchpad to StoryBoard [1].
> For whom who like to know where to file a bug/blueprint, here are some
> heads up for you.
>
> *What's StoryBoard?*
> StoryBoard is a cross-project task-tracker, contains numbers of
> ``project``, each project contains numbers of ``story`` which you can think
> it as an issue or blueprint. Within each story, contains one or multiple
> ``task`` (task separate stories into the tasks to resolve/implement). To
> learn more about StoryBoard or how to make a good story, you can reference
> [6].
>
> *How to file a bug?*
> This is actually simple, use your current ubuntu-one id to access to
> storyboard. Then find the corresponding project in [2] and create a story
> to it with a description of your issue. We should try to create tasks which
> to reference with patches in Gerrit.
>
> *How to work on a spec (blueprint)?*
> File a story like you used to file a Blueprint. Create tasks for your
> plan. Also you might want to create a task for adding spec( in heat-spec
> repo) if your blueprint needs documents to explain.
> I still leave current blueprint page open, so if you like to create a
> story from BP, you can still get information. Right now we will start work
> as task-driven workflow, so BPs should act no big difference with a bug in
> StoryBoard (which is a story with many tasks).
>
> *Where should I put my story?*
> We migrate all heat sub-projects to StoryBoard to try to keep the impact
> to whatever you're doing as small as possible. However, if you plan to
> create a new story, *please create it under heat project [4]* and tag it
> with what it might affect with (like python-heatclint, heat-dashboard,
> heat-agents). We do hope to let users focus their stories in one place so
> all stories will get better attention and project maintainers don't need to
> go around separate places to find it.
>
> *How to connect from Gerrit to StoryBoard?*
> We usually use following key to reference Launchpad
> Closes-Bug: #######
> Partial-Bug: #######
> Related-Bug: #######
>
> Now in StoryBoard, you can use following key.
> Task: ######
> Story: ######
> you can find more info in [3].
>
> *What I need to do for my exists bug/bps?*
> Your bug is automatically migrated to StoryBoard, however, the reference
> in your patches ware not, so you need to change your commit message to
> replace the old link to launchpad to new links to StoryBoard.
> > *Do we still need Launchpad after all this migration are done?* > As the plan, we won't need Launchpad for heat anymore once we have done > with migrating. Will forbid new bugs/bps filed in Launchpad. Also, try to > provide new information as many as possible. Hopefully, we can make > everyone happy. For those newly created bugs during/after migration, don't > worry we will disallow further create new bugs/bps and do a second migrate > so we won't missed yours. > > [1] https://storyboard.openstack.org/ > [2] https://storyboard.openstack.org/#!/project_group/82 > [3] https://docs.openstack.org/infra/manual/developers. > html#development-workflow > [4] https://storyboard.openstack.org/#!/project/989 > [5] https://docs.openstack.org/infra/storyboard/migration.html > [6] https://docs.openstack.org/infra/storyboard/gui/ > tasks_stories_tags.html#what-is-a-story > > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladislav.belogrudov at oracle.com Sat May 5 08:40:08 2018 From: vladislav.belogrudov at oracle.com (vladislav.belogrudov at oracle.com) Date: Sat, 5 May 2018 11:40:08 +0300 Subject: [openstack-dev] [kolla-ansible] Configure OpenStack services to use Rabbit HA queues In-Reply-To: References: Message-ID: <2d1ebb1b-fc03-1fb4-cd80-0068739724eb@oracle.com> thanks Jeffrey On 05/05/2018 03:03 AM, Jeffrey Zhang wrote: > Hi vladispay, > > I guess you are talking rabbit_ha_queues options. It is already marked > as deprecated[0]. > >     cfg.BoolOpt('rabbit_ha_queues', >                 default=False, > deprecated_group='DEFAULT', >                 help='Try to use HA queues in RabbitMQ (x-ha-policy: > all). ' >                 'If you change this option, you must wipe the RabbitMQ ' >                 'database. In RabbitMQ 3.0, queue mirroring is no longer ' >                 'controlled by the x-ha-policy argument when declaring a ' >                 'queue. If you just want to make sure that all queues > (except ' >                 'those with auto-generated names) are mirrored across > all ' >                 'nodes, run: ' >                 """\"rabbitmqctl set_policy HA '^(?!amq\.).*' """ >                 """'{"ha-mode": "all"}' \""""), > > In kolla, we configure a global ha-mode policy through its > definition.json file in rabbitmq, please check[1] > > [0] > https://github.com/openstack/oslo.messaging/blob/master/oslo_messaging/_drivers/impl_rabbit.py#L165-L176 > > [1] > https://github.com/openstack/kolla-ansible/blob/d2d9c6622888416ad2e748706fd237f8588e993a/ansible/roles/rabbitmq/templates/definitions.json.j2#L20 > > > On Sat, May 5, 2018 at 12:58 AM, > wrote: > > Hi, > > is there a reason we don't configure services for rabbitmq ha > queues like it is suggested in [0] ? > Rabbitmq itself has ha policy 'on' via one of its templates. 
> > Thanks,
> > Vladislav Belogrudov
> >
> > [0] https://docs.openstack.org/ha-guide/shared-messaging.html#rabbitmq-services
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gdubreui at redhat.com  Sat May  5 09:15:49 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Sat, 5 May 2018 19:15:49 +1000
Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept
Message-ID:

Hello,

Few of us recently discussed [1] how GraphQL [2], the next evolution from
REST, could transform OpenStack APIs for the better.
Effectively, we believe OpenStack APIs provide perfect use cases for the
GraphQL DSL approach, bringing, among other advantages, better performance
and stability and easier development and consumption, and, with the
GraphQL Schema, automation capabilities never achieved before.

The API SIG suggested to start an API GraphQL Proof of Concept (PoC) to
demonstrate the capabilities before eventually extending GraphQL to other
projects.
Neutron has been selected for the PoC because of its specific data model.

So if you are interested, please join us.
For those who can make it, we'll also discuss this during the API SIG BoF
at the OpenStack Summit in Vancouver [3].

To learn more about GraphQL, check out howtographql.com [4].

So let's get started...


[1] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html
[2] http://graphql.org/
[3] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session
[4] https://www.howtographql.com/

Regards,
Gilles

From gdubreui at redhat.com  Sat May  5 13:53:06 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Sat, 5 May 2018 23:53:06 +1000
Subject: Re: [openstack-dev] [api] REST limitations and GraghQL inception?
In-Reply-To:
References:
Message-ID: <82bd1d33-1293-9369-caeb-d12eb2e1e085@redhat.com>

On 05/05/18 09:42, Flint WALRUS wrote:
> I will not attend the Vancouver summit but I'll try to attend the
> Berlin one as it's closer to me.

No worries, I hope "networking" at Vancouver will allow us to grab good
support and rocket the momentum :). Unfortunately I'm not sure I can make
it to Berlin, time-wise and distance-wise.

> However I'll be happy to join the conversation and give a hand,
> especially if you need an operational point of view, as our OpenStack
> usage is constantly growing within a heterogeneous environment ranging
> from a Grizzly cluster (deprecating it this year) to a shiny Queens one
> across multiple geographic areas.
> I think our setup gives us a really good point of view of what the
> OpenStack PITAs are and what operators are expecting the foundation to
> do with such challenges.

Flint, I think that's an invaluable experience. Thank you for bringing it
in; what you've expressed is very important too.

I believe there are real needs to be addressed. The viewpoint of consumers
has been lacking, and the API SIG exists to take it into consideration,
but we need more people involved.

It seems to be the ransom of success: a critical mass of supporters is now
needed to be able to get any requirement accepted, especially when such
requirements touch project-wide components (the APIs) living inside the
entropy of the cloud's structural complexity. This is why there is no
doubt that GraphQL's data model simplification can bring only good.

From my side, I'm not a core; I've just been consuming OpenStack APIs for
SDKs for the last 2 years, and I feel we're stalling. So I'm more than
happy to help and get more involved, but we're going to need believers
among the Neutron core and other API core members.

Thanks,
Gilles

> On Sat, May 5, 2018 at 01:18, Gilles Dubreuil <gdubreui at redhat.com>
> wrote:
>
> Right, let's announce the Proof of Concept project around Neutron,
> invite anyone interested, and start it.
>
> There is an API SIG BoF at Vancouver, where we will announce it
> too. Everyone who can attend is welcome to discuss it:
> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session
>
> Yeah, Graphene is the only one listed by the GraphQL organization for
> Python: http://graphql.org/code/#python.
>
> I think we should take this discussion to the coming project thread.
>
> Thank you everyone and see you there.
>
> Cheers,
> Gilles
>
> On 04/05/18 23:16, Flint WALRUS wrote:
>> As clarified by Gilles and Kevin, we absolutely can get GraphQL
>> with the control plane API and the workers API.
>>
>> OK, how do we start to work on that? What's the next step?
>>
>> Which server library do we want to use?
>> I personally use graphene with python as it is the library listed
>> by the official GraphQL website. I don't even know if there is
>> another library available indeed.
>>
>> Are we ok to try to use neutron as a PoC service?
>>
>> On Fri, May 4, 2018 at 06:41, Gilles Dubreuil
>> <gdubreui at redhat.com> wrote:
>>
>> Actually Mutations fields are only data to be displayed, if
>> needed, by
>> the response.
>> The data changes comes with the parameters.
>> So the correct mutation syntax is:
>>
>> mutation rebootServer {
>>    updateServer(id: ) {
>>      reboot(type: "HARD")
>>    }
>> }
>>
>> Also the latter example would be a "data API" equivalent
>> using CRUD
>> function like "updateServer"
>>
>> And the following example would be a "plane API" equivalent
>> approach
>> with an action function:
>>
>> mutation hardReboot {
>>    rebootServer(id: , type: "HARD")
>> }
>>
>> Sorry for the initial confusion but I think this is important
>> because
>> GraphQL schema helps clarify data and the operations.
>>
>>
>> On 04/05/18 13:20, Gilles Dubreuil wrote:
>> >
>> > On 04/05/18 05:34, Fox, Kevin M wrote:
>> >> k8s does that I think by separating desired state from
>> actual state
>> >> and working to bring the two inline. the same could (maybe
>> even
>> >> should) be done to openstack. But your right, that is not
>> a small
>> >> amount of work.
>> >
>> > K8s makes perfect sense to follow declarative approach.
>> > >> > That said a mutation following control plane API action >> semantic could >> > be very similar: >> > >> > mutation rebootServer { >> >   Server(id: ) { >> >     reboot: { >> >       type: "HARD" >> >     } >> >   } >> > } >> > >> > >> > "rebootServer" being an alias to name the request. >> > >> > >> >> Even without using GraphQL, Making the api more >> declarative anyway, >> >> has advantages. >> > >> > +1 >> > >> >> Thanks, >> >> Kevin >> >> ________________________________________ >> >> From: Jay Pipes [jaypipes at gmail.com >> ] >> >> Sent: Thursday, May 03, 2018 10:50 AM >> >> To: openstack-dev at lists.openstack.org >> >> >> Subject: Re: [openstack-dev] [api] REST limitations and >> GraghQL >> >> inception? >> >> >> >> On 05/03/2018 12:57 PM, Ed Leafe wrote: >> >>> On May 2, 2018, at 2:40 AM, Gilles Dubreuil >> > >> >>> wrote: >> >>>>> • We should get a common consensus before all projects >> start to >> >>>>> implement it. >> >>>> This is going to be raised during the API SIG weekly >> meeting later >> >>>> this week. >> >>>> API developers (at least one) from every project are >> strongly >> >>>> welcomed to participate. >> >>>> I suppose it makes sense for the API SIG to be the place >> to discuss >> >>>> it, at least initially. >> >>> It was indeed discussed, and we think that it would be a >> worthwhile >> >>> experiment. But it would be a difficult, if not >> impossible, proposal >> >>> to have adopted OpenStack-wide without some data to back >> it up. So >> >>> what we thought would be a good starting point would be >> to have a >> >>> group of individuals interested in GraphQL form an >> informal team and >> >>> proceed to wrap one OpenStack API as a proof-of-concept. >> Monty >> >>> Taylor suggested Neutron as an excellent candidate, as >> its API >> >>> exposes things at an individual table level, requiring >> the client to >> >>> join that information to get the answers they need. >> >>> >> >>> Once that is done, we could examine the results, and use >> them as the >> >>> basis for proceeding with something more comprehensive. >> Does that >> >>> sound like a good approach to (all of) you? >> >> Did anyone bring up the differences between control plane >> APIs and data >> >> APIs and the applicability of GraphQL to the latter and >> not the former? >> >> >> >> For example, a control plane API to reboot a server >> instance looks like >> >> this: >> >> >> >> POST /servers/{uuid}/action >> >> { >> >>       "reboot" : { >> >>           "type" : "HARD" >> >>       } >> >> } >> >> >> >> how does that map to GraphQL? Via GraphQL's "mutations" >> [0]? That >> >> doesn't really work since the server object isn't being >> mutated. I mean, >> >> the state of the server will *eventually* be mutated when >> the reboot >> >> action starts kicking in (the above is an async operation >> returning a >> >> 202 Accepted). But the act of hitting POST >> /servers/{uuid}/action >> >> doesn't actually mutate the server's state. >> >> >> >> This is just one example of where GraphQL doesn't >> necessarily map well >> >> to control plane APIs that happen to be built on top of >> REST/HTTP [1] >> >> >> >> Bottom line for me would be what is the perceivable >> benefit that all of >> >> our users would receive given the (very costly) overhaul >> of our APIs >> >> that would likely be required. 
>> >> >> >> Best, >> >> -jay >> >> >> >> [0] http://graphql.org/learn/queries/#mutations >> >> [1] One could argue (and I have in the past) that POST >> >> /servers/{uuid}/action isn't a RESTful interface at all... >> >> >> >> >> __________________________________________________________________________ >> >> >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> __________________________________________________________________________ >> >> >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> -- >> Gilles Dubreuil >> Senior Software Engineer - Red Hat - Openstack DFG Integration >> Email: gilles at redhat.com >> GitHub/IRC: gildub >> Mobile: +61 400 894 219 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email:gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat May 5 15:19:37 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 5 May 2018 15:19:37 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? In-Reply-To: References: <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com> <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com> <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov> <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com> <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com> <2eaa29bc-025d-1dfb-dca0-b8a9a376dc1f@redhat.com> Message-ID: <20180505151936.l6mhrrfkmczrpjxf@yuggoth.org> On 2018-05-04 23:42:59 +0000 (+0000), Flint WALRUS wrote: [...] > what operators are expecting the foundation to do with such > challenges. [...] If by "the foundation" you mean the OpenStack Foundation then this isn't really their remit. You need invested members of the community at large to join you in taking up this challenge (as you've correctly noted elsewhere). While the foundation and other leadership bodies may occasionally find successful ways to steer the project as a whole, the community is made up of individual entities (contributors and in many cases the organizations who employ them) who have their own goals and set their own priorities. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gael.therond at gmail.com Sat May 5 15:24:52 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Sat, 05 May 2018 15:24:52 +0000 Subject: [openstack-dev] [api] REST limitations and GraghQL inception? 
In-Reply-To: <20180505151936.l6mhrrfkmczrpjxf@yuggoth.org>
References: <9196a914-df90-41cd-01af-4f1c49a5d1aa@redhat.com>
 <741F69B8-03E0-44A5-9255-EABAAACC0CB5@leafe.com>
 <1A3C52DFCD06494D8528644858247BF01C0B9781@EX10MBOX03.pnnl.gov>
 <66e2285a-916c-a685-ab89-c2b6dd0900ed@redhat.com>
 <2f71b413-d96c-7be4-9fc2-f2e04923e676@redhat.com>
 <2eaa29bc-025d-1dfb-dca0-b8a9a376dc1f@redhat.com>
 <20180505151936.l6mhrrfkmczrpjxf@yuggoth.org>
Message-ID:

Yeah, when I said foundation I was talking about the community.

@Gilles, count me in if you need someone to work with.
On Sat, May 5, 2018 at 17:20, Jeremy Stanley <fungi at yuggoth.org> wrote:

> On 2018-05-04 23:42:59 +0000 (+0000), Flint WALRUS wrote:
> [...]
> > what operators are expecting the foundation to do with such
> > challenges.
> [...]
>
> If by "the foundation" you mean the OpenStack Foundation then this
> isn't really their remit. You need invested members of the community
> at large to join you in taking up this challenge (as you've
> correctly noted elsewhere). While the foundation and other
> leadership bodies may occasionally find successful ways to steer the
> project as a whole, the community is made up of individual entities
> (contributors and in many cases the organizations who employ them)
> who have their own goals and set their own priorities.
> --
> Jeremy Stanley
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amotoki at gmail.com  Sat May  5 16:04:46 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Sat, 05 May 2018 16:04:46 +0000
Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept
In-Reply-To:
References:
Message-ID:

Hi,

I am happy to see the effort to explore a new API mechanism.
I would like to see good progress, and to help the effort as API liaison
from the neutron team.

> Neutron has been selected for the PoC because of its specific data model

On the other hand, I am not sure this is the right reason to choose
'neutron'. I would like to note that "its specific data model" is not what
makes the progress of API versioning slowest in the OpenStack community. I
believe this is worth recognizing, as I would not like the effort to be
blocked by neutron-specific reasons.
The most complicated point in the neutron API is that the neutron API
layer allows neutron plugins to declare which features are supported.
Thanks, Akihiro 2018年5月5日(土) 18:16 Gilles Dubreuil : > Hello, > > Few of us recently discussed [1] how GraphQL [2], the next evolution > from REST, could transform OpenStack APIs for the better. > Effectively we believe OpenStack APIs provide perfect use cases for > GraphQL DSL approach, to bring among other advantages, better > performance and stability, easier developments and consumption, and with > GraphQL Schema provide automation capabilities never achieved before. > > The API SIG suggested to start an API GraphQL Proof of Concept (PoC) to > demonstrate the capabilities before eventually extend GraphQL to other > projects. > Neutron has been selected for the PoC because of its specific data model. > > So if you are interested, please join us. > For those who can make it, we'll also discuss this during the SIG API > BoF at OpenStack Summit at Vancouver [3] > > To learn more about GraphQL, check-out howtographql.com [4]. > > So let's get started... > > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html > [2] http://graphql.org/ > [3] > > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session > [4] https://www.howtographql.com/ > > Regards, > Gilles > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Sat May 5 16:23:06 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Sat, 05 May 2018 16:23:06 +0000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: Hi Akihiro, Thanks a lot for this insight on how neutron behave. We would love to get support and backing from the neutron team in order to be able to get the best PoC possible. Someone suggested neutron as a good choice because of it simple database model. As GraphQL can manage your behavior of an extension declaring its own schemes I don’t think it would take that much time to implement it. @Gilles, if I goes to the berlin summitt I could definitely do the networking and relationship work needed to get support on our PoC from different teams members. This would help to spread the world multiple time and don’t have a long time before someone come to talk about this subject as what happens with the 2015 talk of the Facebook speaker. Le sam. 5 mai 2018 à 18:05, Akihiro Motoki a écrit : > Hi, > > I am happy to see the effort to explore a new API mechanism. > I would like to see good progress and help effort as API liaison from the > neutron team. > > > Neutron has been selected for the PoC because of its specific data model > > On the other hand, I am not sure this is the right reason to choose > 'neutron' only from this reason. I would like to note "its specific data > model" is not the reason that makes the progress of API versioning slowest > in the OpenStack community. I believe it is worth recognized as I would > like not to block the effort due to the neutron-specific reasons. > The most complicated point in the neutron API is that the neutron API > layer allows neutron plugins to declare which features are supported. 
The > neutron API is a collection of API extensions defined in the neutron-lib > repo and each neutron plugin can declare which subset(s) of the neutron > APIs are supported. (For more detail, you can check how the neutron API > extension mechanism is implemented). It is not defined only by the neutron > API layer. We need to communicate which API features are supported by > communicating enabled service plugins. > > I am afraid that most efforts to explore a new mechanism in neutron will > be spent to address the above points which is not directly related to > GraphQL itself. > Of course, it would be great if you overcome long-standing complicated > topics as part of GraphQL effort :) > > I am happy to help the effort and understand how the neutron API is > defined. > > Thanks, > Akihiro > > > 2018年5月5日(土) 18:16 Gilles Dubreuil : > >> Hello, >> >> Few of us recently discussed [1] how GraphQL [2], the next evolution >> from REST, could transform OpenStack APIs for the better. >> Effectively we believe OpenStack APIs provide perfect use cases for >> GraphQL DSL approach, to bring among other advantages, better >> performance and stability, easier developments and consumption, and with >> GraphQL Schema provide automation capabilities never achieved before. >> >> The API SIG suggested to start an API GraphQL Proof of Concept (PoC) to >> demonstrate the capabilities before eventually extend GraphQL to other >> projects. >> Neutron has been selected for the PoC because of its specific data model. >> >> So if you are interested, please join us. >> For those who can make it, we'll also discuss this during the SIG API >> BoF at OpenStack Summit at Vancouver [3] >> >> To learn more about GraphQL, check-out howtographql.com [4]. >> >> So let's get started... >> >> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html >> [2] http://graphql.org/ >> [3] >> >> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session >> [4] https://www.howtographql.com/ >> >> Regards, >> Gilles >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luckyvega.g at gmail.com Sun May 6 10:20:47 2018 From: luckyvega.g at gmail.com (Vega Cai) Date: Sun, 06 May 2018 10:20:47 +0000 Subject: [openstack-dev] Does the openstack ci vms start each time clear up enough? In-Reply-To: <1525449149.2793610.1361025224.52597443@webmail.messagingengine.com> References: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> <20180504093745.GA38938@smcginnis-mbp.local> <1525449149.2793610.1361025224.52597443@webmail.messagingengine.com> Message-ID: To test whether it's our new patch that causes the problem, I submitted a dummy patch[1] to trigger CI and the CI failed again. Checking the log of nova scheduler, it's very strange that the scheduling starts with 0 host at the beginning. 
May 06 09:40:34.358585 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_service.periodic_task [None req-008ee30a-47a1-40a2-bf64-cb0f1719806e None None] Running periodic task SchedulerManager._run_periodic_tasks {{(pid=23795) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}}May 06 09:41:23.968029 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.manager [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] *Starting to schedule for instances: [u'8b227e85-8959-4e07-be3d-1bc094c115c1']* {{(pid=23795) select_destinations /opt/stack/new/nova/nova/scheduler/manager.py:118}}May 06 09:41:23.969293 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}}May 06 09:41:23.975304 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.006s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}May 06 09:41:24.276470 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "6e118c71-9008-4694-8aee-faa607944c5f" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}}May 06 09:41:24.279331 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "6e118c71-9008-4694-8aee-faa607944c5f" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}}May 06 09:41:24.302854 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_db.sqlalchemy.engines [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION {{(pid=23795) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:308}}May 06 09:41:24.321713 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] *Starting with 0 host(s)* {{(pid=23795) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70}}May 06 09:41:24.322136 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: INFO nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filter RetryFilter returned 0 hostsMay 06 09:41:24.322614 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. 
Filter results: [('RetryFilter', None)] {{(pid=23795) get_filtered_objects /opt/stack/new/nova/nova/filters.py:129}}May 06 09:41:24.323029 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: INFO nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. Filter results: ['RetryFilter: (start: 0, end: 0)']May 06 09:41:24.323419 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtered [] {{(pid=23795) _get_sorted_hosts /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:404}}May 06 09:41:24.323861 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] There are 0 hosts available but 1 instances requested to build. {{(pid=23795) _ensure_sufficient_hosts /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:278}}May 06 09:41:26.358317 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_service.periodic_task [None req-008ee30a-47a1-40a2-bf64-cb0f1719806e None None] Running periodic task SchedulerManager._run_periodic_tasks {{(pid=23794) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}} I copy the log between two periodic task log records to show one whole scheduling process. Zhiyuan On Fri, 4 May 2018 at 23:52 Clark Boylan wrote: > > > On Fri, May 4, 2018, at 2:37 AM, Sean McGinnis wrote: > > On Fri, May 04, 2018 at 04:13:41PM +0800, linghucongsong wrote: > > > > > > Hi all! > > > > > > Recently we meet a strange problem in our ci. look this link: > https://review.openstack.org/#/c/532097/ > > > > > > we can pass the ci in the first time, but when we begin to start the > gate job, it will always failed in the second time. > > > > > > we have rebased several times, it alway pass the ci in the first time > and failed in the second time. > > > > > > This have not happen before and make me to guess is it really we > start the ci from the new fresh vms each time? > > > > A new VM is spun up for each test run, so I don't believe this is an > issue with > > stale artifacts on the host. I would guess this is more likely some sort > of > > race condition, and you just happen to be hitting it 50% of the time. > > Additionally you can check the job logs to see while these two jobs did > run against the same cloud provider they did so in different regions on > hosts with completely different IP addresses. The inventory files [0][1] > are where I would start if you suspect oddness of this sort. Reading them I > don't see anything to indicate the nodes were reused. > > [0] > http://logs.openstack.org/97/532097/16/check/legacy-tricircle-dsvm-multiregion/c9b3d29/zuul-info/inventory.yaml > [1] > http://logs.openstack.org/97/532097/16/gate/legacy-tricircle-dsvm-multiregion/ad547d5/zuul-info/inventory.yaml > > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- BR Zhiyuan -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From gdubreui at redhat.com  Sun May  6 13:01:42 2018
From: gdubreui at redhat.com (Gilles Dubreuil)
Date: Sun, 6 May 2018 23:01:42 +1000
Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept
In-Reply-To:
References:
Message-ID:

Akihiro, thank you for your precious help!

Regarding the choice of Neutron as the PoC, I'm sorry for not providing
many details when I said "because of its specific data model"; effectively
the original mention was "its API exposes things at an individual table
level, requiring the client to join that information to get the answers
they need" (an illustrative query is shown below).

I realize now such a description probably applies to many OpenStack APIs.
So I'm not sure what was the reason for choosing Neutron.

I suppose Nova is also a good candidate because its API is quite complex
too, in a different way, and it needs to expose the data API and the
control API plane as we discussed.

After all, Neutron is maybe not the best candidate, but it seems good
enough. And as Flint says, the extension mechanism shouldn't be an issue.

So if someone believes there is a better candidate for the PoC, please
speak now.

Thanks,
Gilles

PS: Flint, thank you for offering to be the advocate for Berlin. That's
great!

On 06/05/18 02:23, Flint WALRUS wrote:
> Hi Akihiro,
>
> Thanks a lot for this insight on how neutron behaves.
>
> We would love to get support and backing from the neutron team in
> order to be able to get the best PoC possible.
>
> Someone suggested neutron as a good choice because of its simple
> database model. As GraphQL can handle the behavior of an extension
> declaring its own schemas, I don't think it would take that much time
> to implement it.
>
> @Gilles, if I go to the Berlin summit I could definitely do the
> networking and relationship work needed to get support for our PoC from
> different team members. This would help to spread the word multiple
> times, so we don't wait a long time before someone comes to talk about
> this subject again, as happened with the 2015 talk of the Facebook
> speaker.
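(To illustrate the table-level point above: with a GraphQL schema a
consumer could fetch networks together with their subnets and ports in a
single request, instead of joining several REST list calls client-side.
The shape below is purely hypothetical, not an implemented Neutron
schema.)

query {
  networks {
    id
    name
    subnets { id cidr }
    ports { id macAddress }
  }
}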
> Of course, it would be great if you overcome long-standing > complicated topics as part of GraphQL effort :) > > I am happy to help the effort and understand how the neutron API > is defined. > > Thanks, > Akihiro > > > 2018年5月5日(土) 18:16 Gilles Dubreuil >: > > Hello, > > Few of us recently discussed [1] how GraphQL [2], the next > evolution > from REST, could transform OpenStack APIs for the better. > Effectively we believe OpenStack APIs provide perfect use > cases for > GraphQL DSL approach, to bring among other advantages, better > performance and stability, easier developments and > consumption, and with > GraphQL Schema provide automation capabilities never achieved > before. > > The API SIG suggested to start an API GraphQL Proof of Concept > (PoC) to > demonstrate the capabilities before eventually extend GraphQL > to other > projects. > Neutron has been selected for the PoC because of its specific > data model. > > So if you are interested, please join us. > For those who can make it, we'll also discuss this during the > SIG API > BoF at OpenStack Summit at Vancouver [3] > > To learn more about GraphQL, check-out howtographql.com > [4]. > > So let's get started... > > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html > [2] http://graphql.org/ > [3] > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session > [4] https://www.howtographql.com/ > > Regards, > Gilles > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 -------------- next part -------------- An HTML attachment was scrubbed... URL: From therve at redhat.com Sun May 6 16:30:20 2018 From: therve at redhat.com (Thomas Herve) Date: Sun, 6 May 2018 18:30:20 +0200 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications In-Reply-To: References: Message-ID: On Sat, May 5, 2018 at 1:53 AM, Eric K wrote: > Thanks a lot Witold and Thomas! > > So it doesn't seem that someone is currently using a keystone token to > authenticate web hook? Is is simply because most of the use cases had > involved services which do not use keystone? > > Or is it unsuitable for another reason? It's fairly impractical for webhooks because 1) Tokens expire fairly quickly. 2) You can't store all the data in the URL, so you need to store the token and the URL separately. -- Thomas From shu.mutow at gmail.com Mon May 7 07:05:26 2018 From: shu.mutow at gmail.com (Shu M.) Date: Mon, 7 May 2018 16:05:26 +0900 Subject: [openstack-dev] [Zun] Announce change of Zun core reviewer team In-Reply-To: References: Message-ID: +1. Welcome!! 
Cheers, Shu 2018-05-03 12:20 GMT+09:00 Shuai Zhao : > +1 for Ji Wei :-) > > On Thu, May 3, 2018 at 4:40 AM, Hongbin Lu wrote: > >> Hi all, >> >> I would like to announce the following change on the Zun core reviewers >> team: >> >> + Ji Wei >> >> Ji Wei has been working on Zun for a while. His contributions include >> blueprints, bug fixes, code reviews, etc. In particular, I would like to >> highlight that he has implemented two blueprints [1][2], both of which are >> not easy to implement. Based on his high-quality work in the past, I >> believe he will serve the core reviewer role very well. >> >> This proposal had been voted within the existing core team and was >> unanimously approved. Welcome to the core team Ji Wei. >> >> [1] https://blueprints.launchpad.net/zun/+spec/glance-support-tag >> [2] https://blueprints.launchpad.net/zun/+spec/zun-rebuild-on-local-node >> >> Best regards, >> Hongbin >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yamamoto at midokura.com Mon May 7 07:22:10 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Mon, 7 May 2018 16:22:10 +0900 Subject: [openstack-dev] [tap-as-a-service] publish on pypi In-Reply-To: <1522366030.3775323.1320870552.6204320D@webmail.messagingengine.com> References: <1522366030.3775323.1320870552.6204320D@webmail.messagingengine.com> Message-ID: thank you. done. https://pypi.org/project/tap-as-a-service/ On Fri, Mar 30, 2018 at 8:27 AM, Clark Boylan wrote: > On Wed, Mar 28, 2018, at 7:59 AM, Takashi Yamamoto wrote: >> hi, >> >> i'm thinking about publishing the latest release of tap-as-a-service on pypi. >> background: https://review.openstack.org/#/c/555788/ >> iirc, the naming (tap-as-a-service vs neutron-taas) was one of concerns >> when we talked about this topic last time. (long time ago. my memory is dim.) >> do you have any ideas or suggestions? >> probably i'll just use "tap-as-a-service" unless anyone has strong opinions. >> because: >> - it's the name we use the most frequently >> - we are not neutron (yet?) > > http://git.openstack.org/cgit/openstack/tap-as-a-service/tree/setup.cfg#n2 shows that tap-as-a-service is the existing package name so probably a good one to go with as anyone that already has it installed from source should have pip do the right thing when talking to pypi. 
> > Clark
> >
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From niweichen at chinamobile.com Mon May 7 09:55:53 2018
From: niweichen at chinamobile.com (倪蔚辰)
Date: Mon, 7 May 2018 17:55:53 +0800
Subject: [openstack-dev] [nova] Extra feature of vCPU allocation on demands
Message-ID: <006901d3e5e9$96844d70$c38ce850$@chinamobile.com>

Hi, all

I would like to propose a blueprint (not proposed yet), which is related to openstack nova. I hope to get some comments by explaining my idea through this e-mail. Please contact me if anyone has any comment.

Background

Under current OpenStack, vCPUs assigned to each VM can be configured as dedicated or shared. In some scenarios, such as deploying a Radio Access Network VNF, the VM is required to have dedicated vCPUs to ensure performance. However, in that case, each VM also has a vCPU doing Guest OS housekeeping. Usually, this vCPU does not require high performance and does not reach a high percentage of dedicated vCPU utilization, so some vCPU resources are wasted.

Proposed feature

I hope to add an extra feature to flavor extra specs. It specifies how many dedicated vCPUs and how many shared vCPUs are needed for the VM. When a VM requires vCPUs, OpenStack allocates them on demand. In the background scenario, this idea can save many dedicated vCPUs that would otherwise be spent on Guest OS housekeeping. The scenario stated above is only one use case for the feature; it potentially allows users more flexible VM designs that save CPU resources.

Thanks.

Weichen
e-mail: niweichen at chinamobile.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rico.lin.guanyu at gmail.com Mon May 7 10:27:48 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Mon, 7 May 2018 18:27:48 +0800
Subject: [openstack-dev] [Openstack-operators][heat][all] Heat now migrated to StoryBoard!!
In-Reply-To: References: Message-ID:

Hi all,

I added more information to this guideline in [1]. Please take a look at [1] to see what has been updated; I will likely keep updating that etherpad as new Q&A items or issues are found. We will keep trying to make this process as painless for you as possible, so please bear with us for now, and sorry for any inconvenience.

[1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info

2018-05-05 12:15 GMT+08:00 Rico Lin:
> looping heat-dashboard team
>
> 2018-05-05 12:02 GMT+08:00 Rico Lin:
>> Dear all Heat members and friends
>>
>> As you might be aware, OpenStack projects are scheduled to migrate ([5])
>> from Launchpad to StoryBoard [1].
>> For those who would like to know where to file a bug/blueprint, here are
>> some heads-ups for you.
>>
>> *What's StoryBoard?*
>> StoryBoard is a cross-project task-tracker. It contains a number of
>> ``project`` entries, and each project contains a number of ``story``
>> entries, which you can think of as issues or blueprints. Each story in
>> turn contains one or multiple ``task`` entries (tasks separate a story
>> into the pieces to resolve/implement). To learn more about StoryBoard or
>> how to make a good story, you can reference [6].
>>
>> *How to file a bug?*
>> This is actually simple: use your current ubuntu-one id to access
>> StoryBoard. Then find the corresponding project in [2] and create a story
>> in it with a description of your issue.
>> We should try to create tasks that can be referenced from patches in
>> Gerrit.
>>
>> *How to work on a spec (blueprint)?*
>> File a story like you used to file a Blueprint. Create tasks for your
>> plan. You might also want to create a task for adding a spec (in the
>> heat-specs repo) if your blueprint needs documents to explain it.
>> I still leave the current blueprint page open, so if you would like to
>> create a story from a BP, you can still get the information. From now on
>> we will work with a task-driven workflow, so BPs behave no differently
>> from a bug in StoryBoard (which is a story with many tasks).
>>
>> *Where should I put my story?*
>> We migrated all heat sub-projects to StoryBoard to try to keep the impact
>> on whatever you're doing as small as possible. However, if you plan to
>> create a new story, *please create it under the heat project [4]* and tag
>> it with what it might affect (like python-heatclient, heat-dashboard,
>> heat-agents). We do hope to let users focus their stories in one place so
>> all stories will get better attention and project maintainers don't need
>> to go around separate places to find them.
>>
>> *How to connect from Gerrit to StoryBoard?*
>> We usually use the following keys to reference Launchpad:
>> Closes-Bug: #######
>> Partial-Bug: #######
>> Related-Bug: #######
>> Now in StoryBoard, you can use the following keys:
>> Task: ######
>> Story: ######
>> You can find more info in [3].
>>
>> *What do I need to do for my existing bugs/BPs?*
>> Your bug is automatically migrated to StoryBoard; however, the references
>> in your patches were not, so you need to change your commit message to
>> replace the old Launchpad link with the new StoryBoard links.
>>
>> *Do we still need Launchpad after all this migration is done?*
>> As planned, we won't need Launchpad for heat anymore once we are done
>> with migrating, and will forbid new bugs/BPs being filed in Launchpad.
>> Also, we will try to provide as much new information as possible.
>> Hopefully, we can make everyone happy. For those bugs newly created
>> during/after migration, don't worry: once we disallow creating new
>> bugs/BPs we will do a second migration, so we won't miss yours.
>>
>> [1] https://storyboard.openstack.org/
>> [2] https://storyboard.openstack.org/#!/project_group/82
>> [3] https://docs.openstack.org/infra/manual/developers.html#development-workflow
>> [4] https://storyboard.openstack.org/#!/project/989
>> [5] https://docs.openstack.org/infra/storyboard/migration.html
>> [6] https://docs.openstack.org/infra/storyboard/gui/tasks_stories_tags.html#what-is-a-story
>>
>> --
>> May The Force of OpenStack Be With You,
>> *Rico Lin* irc: ricolin
>
> --
> May The Force of OpenStack Be With You,
> *Rico Lin* irc: ricolin

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From andrea.frittoli at gmail.com Mon May 7 10:43:57 2018
From: andrea.frittoli at gmail.com (Andrea Frittoli)
Date: Mon, 07 May 2018 10:43:57 +0000
Subject: [openstack-dev] [tempest] Proposing Felipe Monteiro for Tempest core
In-Reply-To: References: Message-ID:

+1!!

On Sat, 28 Apr 2018, 11:29 am Ghanshyam Mann, wrote:
> Hi Tempest Team,
>
> I would like to propose Felipe Monteiro (irc: felipemonteiro) to Tempest
> core.
>
> Felipe has been an active contributor to Tempest since the Pike cycle.
> He has been doing a lot of reviews and commits since then, filling the
> gaps on the service clients side and their testing, and in a lot of
> other areas.
> He has demonstrated good quality and feedback in his reviews.
>
> He has a good understanding of the Tempest source code and the project's
> missions & goals. IMO his efforts are highly valuable and it will be
> great to have him on the team.
>
> As per usual practice, please vote +1 or -1 to the nomination. I will
> keep this nomination open for a week or until everyone has voted.
>
> Felipe's reviews and commits -
> https://review.openstack.org/#/q/reviewer:felipe.monteiro at att.com+project:openstack/tempest
> https://review.openstack.org/#/q/owner:felipe.monteiro at att.com+project:openstack/tempest
>
> -gmann
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jean-philippe at evrard.me Mon May 7 10:59:00 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 7 May 2018 12:59:00 +0200
Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling
In-Reply-To: References: Message-ID:

I think Jesse summarized things elegantly. Here is an analogy for you, to complement the answer with more background.

Let me compare our state with a new company, as this is a notion many people can relate to.

Initially we were a startup. Only a few people were working on OSA at its creation, and they were busy doing everything. But then we grew, and we continue to grow. At some point, those few people doing everything didn't (or don't) have the time to do everything anymore (because of that growth). So OSA, like any business, needs to learn how to grow bigger.

One of the ways is to distribute work as much as we can to the most appropriate people. We started doing that by distributing core duties for roles. We are now adding the meeting organisation. I have a few other ideas where shared ownership will help the project mature and allow more growth in the future, but one step at a time :)

Best regards,
Jean-Philippe Evrard (evrardjp)

From jean-philippe at evrard.me Mon May 7 11:36:34 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Mon, 7 May 2018 13:36:34 +0200
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To: <1525125561-sup-8369@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <1525125561-sup-8369@lrrr.local>
Message-ID:

We've been juggling with python3, ansible, and multiple distros for a while now. That dance hasn't been fruitful: many hidden issues, either due to ansible modules, our own modules, or upgrade issues.

I've recently decided to simplify the python2/3 story.

Queens and all the stable branches will be python2 only (python3 will not be used anymore, to simplify the code).

For Rocky, we plan to use the distribution packages for the python stack as much as possible, if they are recent enough for our source installs. Ubuntu 16.04 will have python2, SUSE has python2, and CentOS has no appropriate package, so we are pip installing things (and using python2).

So... if people work on Ubuntu 18.04 support, we could try a python3-only system. Nobody is working on it right now. Same for CentOS: because there is no usage of distro packages, we could use Software Collections and have CentOS as a python3-only system. Sadly, nobody has the cycles to do it now.
I am expecting we'll be using/seeing a lot more python3 in the future with S, and wish for a python3 only "S" release. But this solely depends on the work done in R to make sure 18.04 is ready, as this would be our example, and "stepping stone" towards both python2 and python3. I am not sure this answers your question, as this is more gray than a black or white answer. But I am hoping we'll stabilize python3 this cycle, for ubuntu 18.04 at least, and other distros as a stretch goal. Best regards, Jean-Philippe Evrard (evrardjp) From doug at doughellmann.com Mon May 7 13:12:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 07 May 2018 09:12:43 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <1525125561-sup-8369@lrrr.local> Message-ID: <1525698625-sup-4275@lrrr.local> Excerpts from Jean-Philippe Evrard's message of 2018-05-07 13:36:34 +0200: > We've been juggling with python3, ansible and multiple distros for a while now. > That dance hasn't been fruitful: many hidden issues, either due to > ansible modules, or our own modules, or upgrade issues. > > I've recently decided to simplify the python2/3 story. > > Queens and all the stable branches will be python2 only (python3 will > not be used anymore, to simplify the code) > > For Rocky, we plan to use as much as possible the distribution > packages for the python stack, if it's recent enough for our source > installs. > Ubuntu 16.04 will have python2, SUSE has python2, CentOS has no > appropriate package, so we are pip installing things (and using > python2). > So... If people work on Ubuntu 18.04 support, we could try a python3 > only system. Nobody worked on it right now. > Same for CentOS, because there is no usage of packages, we could use > Software Collections and have centos as a python3 only system. Sadly, > nobody has the cycles to do it now. > > I am expecting we'll be using/seeing a lot more python3 in the future > with S, and wish for a python3 only "S" release. > But this solely depends on the work done in R to make sure 18.04 is > ready, as this would be our example, and "stepping stone" towards both > python2 and python3. > > I am not sure this answers your question, as this is more gray than a > black or white answer. > But I am hoping we'll stabilize python3 this cycle, for ubuntu 18.04 > at least, and other distros as a stretch goal. > > Best regards, > Jean-Philippe Evrard (evrardjp) > I think your answer does help. It sounds like, unsurprisingly, you are depending on work upstream in two different directions: You need the OpenStack contributor community to ensure the code works on the platform using Python 3, and then you need the OS vendors to provide appropriate packages using Python 3. Doug From doug at doughellmann.com Mon May 7 13:52:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 07 May 2018 09:52:16 -0400 Subject: [openstack-dev] [tc][goals] tracking status of old goals for new projects Message-ID: <1525700930-sup-9125@lrrr.local> There is a patch to update the Python 3.5 goal for Kolla [1]. While I'm glad to see the work happening, the change adds a new deliverable to an old goal, and it isn’t clear whether we want to use that approach for tracking goal work indefinitely. I see a few options. 1. We could update the existing document. 2. We could set up stories in storyboard like we are doing for newer goals. 3. 
We could do nothing to record the work related to the goal. I like option 2, because it means we will be consistent with future tracking data and we end up with fewer changes in the governance repo (which was the reason for moving to storyboard in the first place). What do others think? Doug [1] https://review.openstack.org/#/c/557863/ From fungi at yuggoth.org Mon May 7 14:06:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 7 May 2018 14:06:35 +0000 Subject: [openstack-dev] [tc][goals] tracking status of old goals for new projects In-Reply-To: <1525700930-sup-9125@lrrr.local> References: <1525700930-sup-9125@lrrr.local> Message-ID: <20180507140634.2jwmnef47te2wjii@yuggoth.org> On 2018-05-07 09:52:16 -0400 (-0400), Doug Hellmann wrote: [...] > 3. We could do nothing to record the work related to the goal. [...] For situations like 557863 I think I'd prefer either the status quo (the kolla deliverable _was_ already listed so it could make sense to update it in that document) or option #3 (the cycle for that goal is already well in the past, and certainly adding new deliverables like kolla-kubernetes to a past goal sets unrealistic expectations for future goals regardless of where we track them). I really do, though, think we should simply accept that these goals don't always (or even usually?) reach 100% coverage and that at some point we need to be able to consider better means of keeping track of, e.g., which deliverables work on which Python versions. The goals process is excellent for reaching critical mass on such efforts, but should not be considered a source of long-term support documentation. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon May 7 14:13:26 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 7 May 2018 09:13:26 -0500 Subject: [openstack-dev] [TripleO] TripleO deployment on OVB on public cloud Message-ID: Hi, I wanted to make everyone aware that I recorded a demo of running an OVB deployment on a public cloud. As I say in the video, this is significant because in the past OVB has largely been limited to running in our special-purpose CI clouds, which have distinctly finite capacity and are generally not as available for developer access. Anybody can use a public cloud though, and it opens up a bunch more potential capacity. Anyway, the demo and a more detailed writeup are available here: http://blog.nemebean.com/content/openstack-virtual-baremetal-public-cloud It's running on Vexxhost because that's the first public cloud I found that had all the features necessary to run OVB. If anyone knows of others that meet the requirements I'd certainly be interested to hear about it. Hopefully this can help with the developer hardware constraints that we seem to be constantly fighting in TripleO. Let me know if you have any feedback. Thanks. -Ben From melwittt at gmail.com Mon May 7 14:16:30 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 7 May 2018 07:16:30 -0700 Subject: [openstack-dev] [nova] review runway status Message-ID: Howdy everyone, This is just a brief status about the blueprints currently occupying review runways [0] and an ask for the nova-core team to give these reviews priority for their code review focus. 
* XenAPI: Support a new image handler for non-FS based SRs https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement (jianghuaw_) [END DATE: 2018-05-11] series starting at https://review.openstack.org/497201 * Add z/VM driver https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky (jichen) [END DATE: 2018-05-15] spec amendment https://review.openstack.org/562154 and implementation series starting at https://review.openstack.org/523387 * Local disk serial numbers https://blueprints.launchpad.net/nova/+spec/local-disk-serial-numbers (mdbooth) [END DATE: 2018-05-16] series starting at https://review.openstack.org/526346 Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-runways-rocky From e0ne at e0ne.info Mon May 7 14:28:48 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 7 May 2018 17:28:48 +0300 Subject: [openstack-dev] [horizon] No meeting this week Message-ID: Hi team, Due to the Victory Day, I won't be able to chair the meeting this week. After a short conversation in the IRC, we decided to skip this meeting. Feel free to add your topic to https://wiki.openstack.org/wiki/Meetings/Horizon and see you next week! Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Mon May 7 14:40:31 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 7 May 2018 09:40:31 -0500 Subject: [openstack-dev] [nova] Extra feature of vCPU allocation on demands In-Reply-To: <006901d3e5e9$96844d70$c38ce850$@chinamobile.com> References: <006901d3e5e9$96844d70$c38ce850$@chinamobile.com> Message-ID: <945bf321-519f-ec98-adfe-6c30b62d289c@fried.cc> I will be interested to watch this develop. In PowerVM we already have shared vs. dedicated processors [1] along with concepts like capped vs. uncapped, min/max proc units, weights, etc. But obviously it's all heavily customized to be PowerVM-specific. If these concepts made their way into mainstream Nova, we could hopefully adapt to use them and remove some tech debt. [1] https://github.com/openstack/nova/blob/master/nova/virt/powervm/vm.py#L372-L401 On 05/07/2018 04:55 AM, 倪蔚辰 wrote: > Hi, all > > I would like to propose a blueprint (not proposed yet), which is related > to openstack nova. I hope to have some comments by explaining my idea > through this e-mail. Please contact me if anyone has any comment. > >   > > Background > > Under current OpenStack, vCPUs assigned to each VM can be configured as > dedicated or shared. In some scenarios, such as deploying Radio Access > Network VNF, the VM is required to have dedicated vCPUs to insure the > performance. However, in that case, each VM has a vCPU to do Guest OS > housekeeping. Usually, this vCPU is not a high performance required vCPU > and do not take high percentage of dedicated vCPU utilization. There is > some vCPU resources waste. > >   > > Proposed feature > > I hope to add an extra feature to flavor extra specs. It refers to how > many dedicated vCPUs and how many shared vCPUs are needed for the VM. > When VM requires vCPU, OpenStack allocates vCPUs on demands. In the > background scenario, this idea can save many dedicated vCPUs which take > Guest OS housekeeping. And the scenario stated above is only one use > case for the feature. This feature potentially allows user to have more > flexible VM design to save CPU resource. > >   > > Thanks. 
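To make the quoted proposal concrete, one hedged illustration: the spec linked in Jay Pipes's reply just below proposes tracking dedicated and shared CPUs as separate resource classes, under which a mixed flavor for the use case above might look roughly like this (a sketch of a possible direction, not an existing Nova interface):

    # Purely hypothetical flavor extra specs: three dedicated (pinned) vCPUs
    # plus one shared vCPU for Guest OS housekeeping, written in the
    # PCPU/VCPU resource-class style under discussion -- Nova does not
    # support this today.
    flavor_extra_specs = {
        "resources:PCPU": "3",  # dedicated vCPUs
        "resources:VCPU": "1",  # shared vCPU
    }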
> >   > > Weichen > > e-mail: niweichen at chinamobile.com > >   > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jaypipes at gmail.com Mon May 7 14:51:11 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 7 May 2018 10:51:11 -0400 Subject: [openstack-dev] [nova] Extra feature of vCPU allocation on demands In-Reply-To: <006901d3e5e9$96844d70$c38ce850$@chinamobile.com> References: <006901d3e5e9$96844d70$c38ce850$@chinamobile.com> Message-ID: On 05/07/2018 05:55 AM, 倪蔚辰 wrote: > Hi, all > > I would like to propose a blueprint (not proposed yet), which is related > to openstack nova. I hope to have some comments by explaining my idea > through this e-mail. Please contact me if anyone has any comment. > > Background > > Under current OpenStack, vCPUs assigned to each VM can be configured as > dedicated or shared. In some scenarios, such as deploying Radio Access > Network VNF, the VM is required to have dedicated vCPUs to insure the > performance. However, in that case, each VM has a vCPU to do Guest OS > housekeeping. Usually, this vCPU is not a high performance required vCPU > and do not take high percentage of dedicated vCPU utilization. There is > some vCPU resources waste. > > Proposed feature > > I hope to add an extra feature to flavor extra specs. It refers to how > many dedicated vCPUs and how many shared vCPUs are needed for the VM. > When VM requires vCPU, OpenStack allocates vCPUs on demands. In the > background scenario, this idea can save many dedicated vCPUs which take > Guest OS housekeeping. And the scenario stated above is only one use > case for the feature. This feature potentially allows user to have more > flexible VM design to save CPU resource. Please see here: https://review.openstack.org/#/c/555081/ Best, -jay From doug at doughellmann.com Mon May 7 14:53:04 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 07 May 2018 10:53:04 -0400 Subject: [openstack-dev] [tc] Technical Committee Status update, 7 May Message-ID: <1525704301-sup-8668@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recent Activity == There is a patch to update the Python 3.5 goal for Kolla [1]. The change adds a new deliverable to the old goal, and it isn't clear whether we want to do that. TC members, please share your opinion in the openstack-dev thread [2]. [1] https://review.openstack.org/557863 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130236.html I expanded the discussion of dropping py27 support from the original governance patch [3] by starting a mailing list thread to discuss a more detailed deprecation process and timeline [4]. There will also be a forum session covering this topic in Vancouver [5]. [3] https://review.openstack.org/561922 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129866.html [5] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21741/python-2-deprecation-timeline John Garbutt started drafting a first constellation to describe scientific computing workload use cases [6]. 
After some discussion on that patch and in IRC, I proposed that we create a separate repository to hold the constellation documents [7]. That repo now exists, with the documentation team and TC as members of the core review team. There is a proposal to add it to the documentation team's repo list [8]. * We could use volunteers to help finish making that repository ready to publish properly and to document the constellations sub-team in the documentation contributor's guide. Please contact me directly if you can help. [6] https://review.openstack.org/565466 [7] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130068.html [8] https://review.openstack.org/565877 The change to retire the kolla-kubernetes project has reached a majority and will be approved on 10 May [9]. [9] https://review.openstack.org/#/c/565385/ The Adjutant project application [10] is still under review, and the only votes registered are opposed. I anticipate having the topic of how we review project applications as one of several items we discuss during the TC retrospective session as the summit [11]. [10] https://review.openstack.org/553643 [11] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective After a bit of discussion in IRC with Lance Bragstad and Harry Rybacki, we agreed to try having the Keystone team manage the implementation of a new spec for defining default roles across all OpenStack services, rather than trying to review the openstack-specs repository and review process. See [12] for details. [12] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130207.html == TC member actions/focus/discussions for the coming week(s) == I have added the two items raised by TC members raised to the draft agenda for the joint Board/TC/UC meeting to be held in Vancouver (see the wiki page [13] under "Strategic Discussions" and "Next steps for fixing bylaws typo"). Please keep in mind that the time allocations and content of the meeting are still subject to change. [13] https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting We will also hold a retrospective for the TC as a team on Monday at the Forum. Please be prepared to discuss things you think are going well, things you think we need to change, items from our backlog that you would like to work on, etc. [11] I need to revise the patch to update the expectations for goal champions based on existing feedback. [14] [14] https://review.openstack.org/564060 We have several items on our backlog that need owners. TC members, please review the storyboard list [15] and consider taking on one of the tasks that we agreed we would do. [15] https://storyboard.openstack.org/#!/project/923 == Contacting the TC == The Technical Committee uses a series of weekly "office hour" time slots for synchronous communication. We hope that by having several such times scheduled, we will have more opportunities to engage with members of the community from different timezones. Office hour times in #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays If you have something you would like the TC to discuss, you can add it to our office hour conversation starter etherpad at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Many of us also run IRC bouncers which stay in #openstack-tc most of the time, so please do not feel that you need to wait for an office hour time to pose a question or offer a suggestion. 
You can use the string "tc-members" to alert the members to your question. If you expect your topic to require significant discussion or to need input from members of the community other than the TC, please start a mailing list discussion on openstack-dev at lists.openstack.org and use the subject tag "[tc]" to bring it to the attention of TC members. From michel at redhat.com Mon May 7 14:54:02 2018 From: michel at redhat.com (Michel Peterson) Date: Mon, 7 May 2018 17:54:02 +0300 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1524254623-sup-3036@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <1523463552-sup-1950@lrrr.local> <1524254623-sup-3036@lrrr.local> Message-ID: On Fri, Apr 20, 2018 at 11:06 PM, Doug Hellmann wrote: > > We have made great progress on this but we do still have quite a > few of these patches that have not been approved. Many are failing > test jobs and will need a little attention ( the failing requirements > jobs are real problems and rechecking will not fix them). If you > have time to help, please jump in and take over a patch and get it > working. > > https://review.openstack.org/#/q/status:open+topic:uncap-eventlet > > I did a script to fix those and I've submitted patches. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin.lu at huawei.com Mon May 7 17:25:09 2018 From: hongbin.lu at huawei.com (Hongbin Lu) Date: Mon, 7 May 2018 17:25:09 +0000 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: <0957CD8F4B55C0418161614FEC580D6B2F8EF9E3@YYZEML701-CHM.china.huawei.com> Hi all, Below is the bug deputy report for last week (Apr 30 - May 6). * https://bugs.launchpad.net/neutron/+bug/1767811 Confirmed. In DVR scenario, VM capture unexpected packet that should not be sent to the VM. The bug reporter self-assigned this bug. * https://bugs.launchpad.net/neutron/+bug/1767829 Confirmed. Fullstack security group tests fails often on gate. Slawek Kaplonski is looking at it. * https://bugs.launchpad.net/neutron/+bug/1768209 Confirmed. Some tempest tests don't check the 'l3-ha' extension when using the 'ha' attribute of routers. Dongcan Ye took this bug. * https://bugs.launchpad.net/neutron/+bug/1768690 New. This is a RFE. Ironic team requested to add support for re-generating MAC address when updating a port. Neutron Driver team is looking at it. * https://bugs.launchpad.net/neutron/+bug/1768952 In Progress. Neutron returned 500 error on listing availability zones with filters. Hongbin Lu is working on a fix. * https://bugs.launchpad.net/neutron/+bug/1768990 In Progress. External bridge is not re-configured if it is removed and re-created. Slawek Kaplonski is working on it. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From myoung at redhat.com Mon May 7 17:35:10 2018 From: myoung at redhat.com (Matt Young) Date: Mon, 7 May 2018 13:35:10 -0400 Subject: [openstack-dev] [tripleo] CI & Tempest squad planning summary: Sprint 13 Message-ID: Greetings, The TripleO CI & Tempest squads have begun work on Sprint 13. Like most of our sprints these are three weeks long and are planned on a Thursday or Friday (depending on squad) and have a retrospective on Wednesday. Sprint 13 runs from 2018-05-03 thru 2018-05-23. More information regarding our process is available in the tripleo-specs repository [1]. Ongoing meeting notes and other detail are always available in the Squad Etherpads [2,3]. 
This sprint the CI squad is working on the Upgrades epic, and the Tempest squad is refactoring python-tempestconf to in part enable the upstream refstack group. ## Ruck / Rover: * Matt Young (myoung) and Sagi Shnaidman (sshnaidm) * https://review.rdoproject.org/etherpad/p/ruckrover-sprint13 ## CI Squad * Put in place voting update jobs ( https://review.openstack.org/#/q/topic:gate_update) * Add additional check/gate jobs to gate changes made this sprint. * Refine the design for how we model releases in CI, taking into account feedback from a collaborative design session with the Upgrades team (etherpad http://ow.ly/da5L30jSeo8). Epic: https://trello.com/c/cuKevn28/728-sprint-13-upgrades-goals Tasks: http://ow.ly/yeIf30jScyj ## Tempest Squad * Refactor pythyon-tempestconf tempest config by dynamically discovering resources In Scope: Keystone, Nova, Glance, Neutron, Cinder, Swift. The following are specifically NOT in scope for Sprint 13 They are tentatively planned for future sprints: Heat, Ironic, ec2api, Zaquar, Mistral, Manila, Octavia, Horizon, Ceilometer. Epic: https://trello.com/c/efqE5XMr/734-sprint-13-refactor-python-tempestconf Tasks: http://ow.ly/YOXh30jScEw For any questions please find us in #tripleo Thanks, Matt [1] https://github.com/openstack/tripleo-specs/blob/master/specs/policy/ci-team-structure.rst [2] https://etherpad.openstack.org/p/tripleo-ci-squad-meeting [3] https://etherpad.openstack.org/p/tripleo-tempest-squad-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon May 7 17:53:13 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 7 May 2018 13:53:13 -0400 Subject: [openstack-dev] Ironic Status Updates Message-ID: Greetings everyone, For some time, Rama Yeleswarapu (rama_y) has been graciously sending out a weekly update for Ironic to the mailing list. We found out late last week that this contributor would be unable to continue to do so. While discussing this and searching for a volunteer, we came up with two important questions that we determined should have some answers. The first being. Is anyone finding a weekly email useful for Ironic? The second being: If it is useful, would there be an alternative format that may be even more useful? Largely it is presently a copy/paste of our general purpose whiteboard. Perhaps a stream of consciousness or commentary to convey the delta from week to week? -Julia From ildiko.vancsa at gmail.com Mon May 7 19:01:44 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 8 May 2018 00:31:44 +0530 Subject: [openstack-dev] [os-upstream-institute] Meeting reminder Message-ID: Hi Training Team, It’s a friendly reminder that our next meeting is in an hour, 2000 UTC, on #openstack-meeting-3. Agenda is here: https://etherpad.openstack.org/p/openstack-upstream-institute-meetings See you in a bit! Thanks, Ildikó (IRC: ildikov) From mnaser at vexxhost.com Mon May 7 19:59:35 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 7 May 2018 15:59:35 -0400 Subject: [openstack-dev] [tc][goals] tracking status of old goals for new projects In-Reply-To: <1525700930-sup-9125@lrrr.local> References: <1525700930-sup-9125@lrrr.local> Message-ID: On Mon, May 7, 2018 at 9:52 AM, Doug Hellmann wrote: > There is a patch to update the Python 3.5 goal for Kolla [1]. 
> While I'm glad to see the work happening, the change adds a new deliverable
> to an old goal, and it isn't clear whether we want to use that
> approach for tracking goal work indefinitely. I see a few options.
>
> 1. We could update the existing document.
>
> 2. We could set up stories in storyboard like we are doing for newer
> goals.

I think that this is the way to go moving forward. That will encourage projects to still hit these goals and track them in some way. It also makes those changes much quicker for projects to update, as they don't have to go through the entire governance merge process.

> 3. We could do nothing to record the work related to the goal.
>
> I like option 2, because it means we will be consistent with future
> tracking data and we end up with fewer changes in the governance repo
> (which was the reason for moving to storyboard in the first place).
>
> What do others think?
>
> Doug
>
> [1] https://review.openstack.org/#/c/557863/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From gerard.damm at wipro.com Mon May 7 20:02:36 2018
From: gerard.damm at wipro.com (gerard.damm at wipro.com)
Date: Mon, 7 May 2018 20:02:36 +0000
Subject: [openstack-dev] [sdk] identity proxy: users, groups, projects, membership, roles
Message-ID:

Hi,

I'm wondering if there are (or should be) operations in the Connection.identity proxy to manage user and group membership in projects, and to manage user and group role assignments (there are role management methods in the Project class, but I couldn't find them in the identity proxy).

The 8 operations could be something like this:
add|remove user|group to|from project (i.e., manage the 'Member' role for projects)
assign|unassign role to|from user|group (any role)

Project operations:
Connection.identity.add_user_to_project(user, project, )
    user: instance or id
    project: project or id
Connection.identity.remove_user_from_project(user, project, )
Connection.identity.add_group_to_project(group, project, )
    group: instance or id
Connection.identity.remove_group_from_project(group, project, )

Role operations:
Connection.identity.assign_role_to_user(role, user, )
Connection.identity.unassign_role_from_user(role, user, )
Connection.identity.assign_role_to_group(role, group, )
Connection.identity.unassign_role_from_group(role, group, )

Cheers,
Gerard
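Worth noting: part of this surface already exists in the identity proxy as project-scoped role-assignment calls (assuming a reasonably recent openstacksdk), even though the membership-style helpers sketched above do not. For example:

    # Sketch, assuming a recent openstacksdk: grant and revoke the 'Member'
    # role on a project -- the closest existing analogue of the proposed
    # add_user_to_project/remove_user_from_project helpers.
    import openstack

    conn = openstack.connect(cloud='mycloud')  # 'mycloud' is a hypothetical clouds.yaml entry

    project = conn.identity.find_project('demo')  # 'demo', 'alice' and the
    user = conn.identity.find_user('alice')       # role name are example values
    role = conn.identity.find_role('Member')

    conn.identity.assign_project_role_to_user(project, user, role)
    # ... and later:
    conn.identity.unassign_project_role_from_user(project, user, role)

Equivalent group-scoped calls (assign_project_role_to_group / unassign_project_role_from_group) exist as well, so the proposal above is mostly about adding membership-flavored convenience wrappers on top of them.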
From alee at redhat.com Mon May 7 20:09:06 2018
From: alee at redhat.com (Ade Lee)
Date: Mon, 07 May 2018 16:09:06 -0400
Subject: [openstack-dev] [barbican] meeting cancelled for 5/7/2018
Message-ID: <1525723746.18118.12.camel@redhat.com>

Hi all,

I have a conflict for this week's meeting. Therefore we will cancel for this week and reconvene next week.

Thanks,
Ade

From doug at doughellmann.com Mon May 7 20:26:36 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 07 May 2018 16:26:36 -0400
Subject: [openstack-dev] [tc][goals] tracking status of old goals for new projects
In-Reply-To: <20180507140634.2jwmnef47te2wjii@yuggoth.org> References: <1525700930-sup-9125@lrrr.local> <20180507140634.2jwmnef47te2wjii@yuggoth.org>
Message-ID: <1525724632-sup-2612@lrrr.local>

Excerpts from Jeremy Stanley's message of 2018-05-07 14:06:35 +0000:
> On 2018-05-07 09:52:16 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > 3. We could do nothing to record the work related to the goal.
> [...]
>
> For situations like 557863 I think I'd prefer either the status quo
> (the kolla deliverable _was_ already listed so it could make
> sense to update it in that document) or option #3 (the cycle for
> that goal is already well in the past, and certainly adding new
> deliverables like kolla-kubernetes to a past goal sets unrealistic
> expectations for future goals regardless of where we track them).
>
> I really do, though, think we should simply accept that these goals
> don't always (or even usually?) reach 100% coverage and that at some
> point we need to be able to consider better means of keeping track
> of, e.g., which deliverables work on which Python versions. The
> goals process is excellent for reaching critical mass on such
> efforts, but should not be considered a source of long-term support
> documentation.

Right, it's that latter part I think we didn't really consider when we started the whole process. I hope storyboard will make it easier to address those cases in the future, because anyone can add a task and mark it completed without triggering a bunch of review work for TC members, which is the main objection I have to continuing to update the documents in the governance repo.

Doug

From doug at doughellmann.com Mon May 7 20:34:41 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 07 May 2018 16:34:41 -0400
Subject: [openstack-dev] Ironic Status Updates
In-Reply-To: References: Message-ID: <1525725113-sup-1502@lrrr.local>

Excerpts from Julia Kreger's message of 2018-05-07 13:53:13 -0400:
> Greetings everyone,
>
> For some time, Rama Yeleswarapu (rama_y) has been graciously sending
> out a weekly update for Ironic to the mailing list. We found out late
> last week that this contributor would be unable to continue to do so.
> While discussing this and searching for a volunteer, we came up with
> two important questions that we determined should have some answers.
>
> The first being: Is anyone finding a weekly email useful for Ironic?
>
> The second being: If it is useful, would there be an alternative
> format that may be even more useful? Largely it is presently a
> copy/paste of our general purpose whiteboard. Perhaps a stream of
> consciousness or commentary to convey the delta from week to week?
>
> -Julia

As a consumer of team updates from outside of the team, I do find them valuable.
I think having a regular email update like that is a good communication pattern we've started to establish with several teams, and I'm going to ask the TC to help find ways to make those updates more visible for folks who want to stay informed but can't spend the time it takes to read all of the messages on the mailing list (blogs, RSS, twitter, etc.).

So, I hope the Ironic team can find a volunteer (or several to share the work?) to step in and continue with summaries in some form.

Doug

From doug at doughellmann.com Mon May 7 20:50:26 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 07 May 2018 16:50:26 -0400
Subject: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To: References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <1523463552-sup-1950@lrrr.local> <1524254623-sup-3036@lrrr.local>
Message-ID: <1525726215-sup-5823@lrrr.local>

Excerpts from Michel Peterson's message of 2018-05-07 17:54:02 +0300:
> On Fri, Apr 20, 2018 at 11:06 PM, Doug Hellmann wrote:
>
> > We have made great progress on this but we do still have quite a
> > few of these patches that have not been approved. Many are failing
> > test jobs and will need a little attention (the failing requirements
> > jobs are real problems and rechecking will not fix them). If you
> > have time to help, please jump in and take over a patch and get it
> > working.
> >
> > https://review.openstack.org/#/q/status:open+topic:uncap-eventlet
>
> I did a script to fix those and I've submitted patches.

Thanks!

Doug

From gerard.damm at wipro.com Mon May 7 20:58:53 2018
From: gerard.damm at wipro.com (gerard.damm at wipro.com)
Date: Mon, 7 May 2018 20:58:53 +0000
Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client
In-Reply-To: References: Message-ID:

more details about router deletion:

I tried replacing the "!=None" test with a try statement:

    try:
        onap_router = conn.network.find_router(ONAP_ROUTER_NAME)
        conn.network.delete_router(onap_router.id)

(router print-out just before deleting):
openstack.network.v2.router.Router(revision=3, distributed=False, status=ACTIVE, tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, created_at=2018-05-07T20:37:56Z, name=ONAP_router, admin_state_up=True, tags=[], updated_at=2018-05-07T20:37:59Z, ha=False, id=9fc30b97-3942-4444-856a-69c9e2368e02, availability_zone_hints=[], flavor_id=None, description=Router created for ONAP, availability_zones=['nova'], routes=[], external_gateway_info=None)

*** traceback.print_exception():
Traceback (most recent call last):
  File "auto_script_config_openstack_for_onap.py", line 158, in delete_all_ONAP
    conn.network.delete_router(onap_router.id)
  File "/usr/local/lib/python3.5/dist-packages/openstack/network/v2/_proxy.py", line 2255, in delete_router
    self._delete(_router.Router, router, ignore_missing=ignore_missing)
  File "/usr/local/lib/python3.5/dist-packages/openstack/proxy.py", line 41, in check
    return method(self, expected, actual, *args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/openstack/proxy.py", line 146, in _delete
    value=value,
  File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 847, in delete
    self._translate_response(response, has_body=False, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 666, in _translate_response
    exceptions.raise_from_response(response, error_message=error_message)
  File "/usr/local/lib/python3.5/dist-packages/openstack/exceptions.py", line 212, in raise_from_response
    http_status=http_status, request_id=request_id
openstack.exceptions.ConflictException: Unable to delete Router for 9fc30b97-3942-4444-856a-69c9e2368e02

(note: deleting the router from either the openstack CLI or from the Horizon GUI works fine)
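One common cause of a 409 on router delete: since the printout above shows external_gateway_info=None and routes=[], the conflict most likely comes from internal interface ports still attached to the router at the moment of the call (Neutron refuses to delete a router that still has interfaces; that could also explain why a later delete from the CLI or Horizon succeeds, if something else cleaned them up in the meantime). A minimal sketch of a cleanup that detaches interfaces first, assuming a recent openstacksdk and the ONAP_ROUTER_NAME constant from the original script:

    # Sketch only: remove router interfaces before deleting the router.
    router = conn.network.find_router(ONAP_ROUTER_NAME)
    if router is not None:
        # Router interface ports carry device_owner 'network:router_interface'.
        for port in conn.network.ports(device_id=router.id,
                                       device_owner='network:router_interface'):
            for fixed_ip in port.fixed_ips:
                conn.network.remove_interface_from_router(
                    router, subnet_id=fixed_ip['subnet_id'])
        conn.network.delete_router(router.id)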
From me at not.mn Mon May 7 21:23:10 2018
From: me at not.mn (John Dickinson)
Date: Mon, 07 May 2018 14:23:10 -0700
Subject: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To: <1525726215-sup-5823@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <1523463552-sup-1950@lrrr.local> <1524254623-sup-3036@lrrr.local> <1525726215-sup-5823@lrrr.local>
Message-ID: <48B55C20-76F4-42EE-BFF9-1519F9F93937@not.mn>

I've discovered that eventlet 0.23.0 (released April 6) does break things for Swift. I'm not sure about other projects yet.

https://bugs.launchpad.net/swift/+bug/1769749

--John

On 7 May 2018, at 13:50, Doug Hellmann wrote:
> Excerpts from Michel Peterson's message of 2018-05-07 17:54:02 +0300:
>> On Fri, Apr 20, 2018 at 11:06 PM, Doug Hellmann wrote:
>>> We have made great progress on this but we do still have quite a
>>> few of these patches that have not been approved. Many are failing
>>> test jobs and will need a little attention (the failing requirements
>>> jobs are real problems and rechecking will not fix them). If you
>>> have time to help, please jump in and take over a patch and get it
>>> working.
>>>
>>> https://review.openstack.org/#/q/status:open+topic:uncap-eventlet
>>
>> I did a script to fix those and I've submitted patches.
>
> Thanks!
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ramamani.yeleswarapu at intel.com Mon May 7 23:47:10 2018
From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani)
Date: Mon, 7 May 2018 23:47:10 +0000
Subject: [openstack-dev] [ironic] this week's priorities and subteam reports
Message-ID:

Hi,

We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.
This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Bios interface support - BIOS Settings: Add BIOSInterface : https://review.openstack.org/507793 - Needs update - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 - Hardware type cleanup - https://review.openstack.org/#/q/topic:api-jobs to unblock api CI test cleanup - https://review.openstack.org/#/q/status:open+topic:hw-types - Blocked pending api job cleanup - Python-ironicclient things - Wire in header microversion into client negotiation - https://review.openstack.org/#/c/558027/ - Accept a version on set_provision_state - https://review.openstack.org/#/c/557850/ 1x+2 - Remaining Rescue patches - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) Needs Revision - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ Needs Revision - Bug Fixes - Fixing ironic-inspector dnsmasq filter behavior https://review.openstack.org/#/c/566407/ - Revert virtualbmc SOL support due to leaking file descriptors https://review.openstack.org/566646 - House Keeping: - CoreOS needs to be updated for IPA - https://review.openstack.org/#/c/566094/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: None irmc: None - a few works are work in progress xclarity: Fix XClarity parameters discrepancy: https://review.openstack.org/#/c/561405/ Needs Revision Subproject priorities --------------------- bifrost: ironic-inspector (or its client): (dtantsur) bug fixes for the PXE filters: Blacklist unknown hosts https://review.openstack.org/#/c/566407/ Correct tear down on SIGTERM https://review.openstack.org/#/c/563335/ networking-baremetal: networking-generic-switch: sushy and the redfish driver: (dtantsur) do not run functional (API) tests in the CI: sushy https://review.openstack.org/#/c/566577/ 1x+2 sushy-tools https://review.openstack.org/#/c/566578/ 1x+2 Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats. - initial version (much fewer features): https://github.com/dtantsur/ironic-bug-report - Stats (new version, diff with 30 Apr 2018): - Total bugs: 275 (-8) - of them untriaged: 236 (-20) - Total RFEs: 231 (-7) - of them untriaged: 26 (-1) - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. 
- prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ MERGED - Backport to stable/queens proposed Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - spec for deployment steps framework has merged: http://specs.openstack.org/openstack/ironic-specs/specs/approved/deployment-steps-framework.html - status as of 7 May 2018: - waiting for code from rloo, no timeframe yet BIOS config framework(zshi, yolanda, mgoddard, hshiina) ------------------------------------------------------- - status as of 30 April 2018: - Spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/generic-bios-config.html - List of ordered patches: - BIOS Settings: Add DB model: https://review.openstack.org/511162 MERGED - Add bios_interface db field https://review.openstack.org/528609 MERGED - BIOS Settings: Add DB API: https://review.openstack.org/511402 MERGED - BIOS Settings: Add RPC object https://review.openstack.org/511714 MERGED - Add BIOSInterface to base driver class https://review.openstack.org/507793 - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001795 - (may 7) spec has good feedback, one issue to resolve, should be able to land this week - https://review.openstack.org/#/c/559420/ needs update Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001745 - status as of 7 May 2018: - Dublin PTG consensus was to start with small architectural building blocks. - list of cases from the Denver PTG - see in the story - nothing new this week Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console - status as of 7 May 2018: - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ needs review Neutron event processing (vdrok) -------------------------------- - status as of 7 May 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code rewrite done, should be able to test it this week and get on review, spec update coming afterwards Goals ===== Make nova flexible with ironic API versions (TheJulia) ------------------------------------------------------ Status as of 7 MAY 2018: We've started heading down the path of wiring in os_ironic_api_verison arguments into pertinent calls Python-ironicclient: - https://review.openstack.org/#/c/558027/ - https://review.openstack.org/#/c/557850/ 1x+2 - https://review.openstack.org/#/c/566029/ Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of Apr 30th. - Done with moving data. 
- dtantsur to rewrite the bug dashboard - in progress: https://github.com/dtantsur/ironic-bug-report - suggestions welcome

Management interface refactoring (etingof, dtantsur)
----------------------------------------------------
- Status as of May 7th:
  - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ needs revision (review feedback is being addressed)

Getting clean steps (rloo, TheJulia)
------------------------------------
- Status as of May 7th 2018:
  - spec: https://review.openstack.org/#/c/507910/ - needs review

Project vision (jroll, TheJulia)
--------------------------------
- Status as of April 16:
  - jroll still trying to find time to collect enough thoughts for an email

Stretch Goals
=============
NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in place

Classic driver removal (formerly Classic drivers deprecation) (dtantsur)
------------------------------------------------------------------------
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
- status as of 26 Mar 2018:
  - switch documentation to hardware types:
    - api-ref examples: TODO
    - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO
      - or should we kill it with fire in favour of the docs?
  - ironic-inspector:
    - documentation: https://review.openstack.org/#/c/545285/ MERGED
      - backport: https://review.openstack.org/#/c/554586/
    - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED
    - change the default discovery driver: https://review.openstack.org/#/c/550464/
  - migration of CI to hardware types
    - IPA: https://review.openstack.org/553431 MERGED
    - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED
    - python-ironicclient: https://review.openstack.org/552543 MERGED
    - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED
    - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED
  - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html
  - bug needs to be fixed: "periodic tasks of non-classic driver Interfaces aren't run" https://storyboard.openstack.org/#!/story/2001884 MERGED/FIXED

Redfish OOB inspection (etingof, deray, stendulker)
---------------------------------------------------
- sushy Storage API -- https://review.openstack.org/#/c/563051/1

Before Rocky
============

CI refactoring and missing test coverage
----------------------------------------
- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
  - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/
  - localboot with partitioned image patches:
    - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required
    - when the previous patches are merged, TODO (vsaienko):
      - Upload tinycore partitioned image to tarballs.openstack.org
      - Switch ironic to use the tinyipa partitioned image by default
- Missing test coverage (all)
  - portgroups and attach/detach tempest tests: https://review.openstack.org/382476
  - adoption: https://review.openstack.org/#/c/344975/
    - should probably be changed to use standalone tests
  - root device hints: TODO
  - node take over
  - resource classes integration tests: https://review.openstack.org/#/c/443628/
  - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Queens High Priorities
======================

Routed network support (sambetts, vsaienk0, bfournie,
hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe. - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is minor enhancement to improve operation. Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) Posted https://review.openstack.org/554673 - code patch: https://review.openstack.org/#/c/416487/ Needs revision - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal) ------------------------------------------------- - Next TODO is to convert jobs on master, to proper ansible. 
NOT a high priority though.
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job: https://review.openstack.org/#/c/520167/

OpenStack Priorities
====================

Python 3.5 compatibility (Nisha, Ankit)
---------------------------------------
- Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
  - this includes all projects, not only ironic
  - please tag all reviews with topic "goal-python35"
- TODO: submit the python3 job for IPA
  - for ironic and ironic-inspector the job is enabled by disabling swift, as swift is still lacking py3.5 support.
  - anupn to update the python3 job to build tinyipa with python3
- (anupn): Talked with swift folks and there is a bug opened upstream https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them.
  - Right now the patch passes all gate jobs except the agent_* drivers.
- (TheJulia) It seems we might not have py3 compatibility with swift until the T cycle.

Deploying with Apache and WSGI in CI (pas-ha, vsaienk0)
-------------------------------------------------------
- ironic is mostly finished
  - (pas-ha) needs to be rewritten for uWSGI, patches on review:
    - https://review.openstack.org/#/c/507067
- inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218
  - delayed as the HA work seems to take a different direction
  - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done?

Subprojects
===========

Inspector (dtantsur)
--------------------
- trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202
- follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up)

Bifrost (TheJulia)
------------------
- It also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work.
  - TheJulia will try to look at this this week.

Drivers:
--------

Cisco UCS (sambetts) Last updated 2018/02/05
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Cisco CIMC driver CI back up and working on every patch
- Cisco UCSM driver CI in development
- Patches for updating the UCS python SDKs are in the works and should be posted soon

.........

Until next week,
--rama

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

From mikal at stillhq.com Tue May 8 02:05:02 2018
From: mikal at stillhq.com (Michael Still)
Date: Tue, 8 May 2018 12:05:02 +1000
Subject: [openstack-dev] [All] A worked example of adding privsep to an OpenStack project
Message-ID: 

Hi,

further to last week's example of how to add a new privsep'ed call in Nova, I thought I'd write up how to add privsep to a new OpenStack project. I've used Cinder in this worked example, but it really applies to any project which wants to do things with escalated permissions.

The write up is here -- http://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/

Michael

--
Did this email leave you hoping to cause me pain? Good news!
Sponsor me in city2surf 2018 and I promise to suffer greatly.
http://www.madebymikal.com/city2surf-2018/
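(A condensed sketch of the pattern the write-up walks through, for readers who do not click through. The package, config section, and function names here are illustrative, not Cinder's actual ones; see the post above for the real worked example.)

    # minimal oslo.privsep setup: one privileged context plus one entrypoint
    import os

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # names are illustrative; each project defines its own context
    sys_admin_pctxt = priv_context.PrivContext(
        'mypkg',
        cfg_section='mypkg_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @sys_admin_pctxt.entrypoint
    def delete_device(path):
        # this body runs inside the privileged daemon, so keep it small
        # and validate the input before acting on it
        os.unlink(path)

Callers then invoke delete_device() like any normal function, and oslo.privsep transparently ships the call to the privileged helper over a unix socket.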
From glongwave at gmail.com Tue May 8 02:50:17 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Tue, 8 May 2018 10:50:17 +0800
Subject: Re: [openstack-dev] [All] A worked example of adding privsep to an OpenStack project
In-Reply-To: 
References: 
Message-ID: 

Thanks Michael, that's very useful.

2018-05-08 10:05 GMT+08:00 Michael Still :

> Hi,
>
> further to last week's example of how to add a new privsep'ed call in
> Nova, I thought I'd write up how to add privsep to a new OpenStack project.
> I've used Cinder in this worked example, but it really applies to any
> project which wants to do things with escalated permissions.
>
> The write up is here -- http://www.madebymikal.com/adding-oslo-privsep-to-a-new-project-a-worked-example/
>
> Michael
>
> --
> Did this email leave you hoping to cause me pain? Good news!
> Sponsor me in city2surf 2018 and I promise to suffer greatly.
> http://www.madebymikal.com/city2surf-2018/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
ChangBo Guo(gcb)
Community Director @EasyStack

From zhipengh512 at gmail.com Tue May 8 07:27:55 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Tue, 8 May 2018 15:27:55 +0800
Subject: [openstack-dev] [cyborg]spec review day (May 9th)
Message-ID: 

Hi team,

Let's make use of the team meeting on Wed to kick off a whole day of concentrated review of the critical Rocky specs [0] and try to get them done as much as possible. We start with the meeting, and folks in the US and Europe can carry on till the end of the day, when Asian devs can come in again :)

Initial agenda for the Wed team meeting:
- Promote Sundar as new core reviewer
- KubeCon feedback
- Bugs and Issues
- Spec Review Day kickstart

[0] https://etherpad.openstack.org/p/cyborg-rocky-spec-day

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From zigo at debian.org Tue May 8 08:44:20 2018
From: zigo at debian.org (Thomas Goirand)
Date: Tue, 8 May 2018 10:44:20 +0200
Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
Message-ID: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org>

Hi,

It has been decided that, in Debian, we'll switch to Django 2.0 after Buster is released. Buster is to be frozen next February. This means that we have roughly one more year before Django 1.x goes away. Hopefully, Horizon will be ready for it, right?

Hoping this helps,
Cheers,

Thomas Goirand (zigo)

From balazs.gibizer at ericsson.com Tue May 8 09:16:49 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 8 May 2018 11:16:49 +0200
Subject: [openstack-dev] [nova] Notification update week 19
Message-ID: <1525771009.5489.0@smtp.office365.com>

Hi,

After a bit of silence here is the latest notification status.
Bugs
----
[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending sometimes hits the keystone API to get glance endpoints
A fix has been proposed and has many +1s: https://review.openstack.org/#/c/564528/

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit when notifications are sent during live migration
We need to go through the live migration codepath and make sure that the different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth usage db query in notifications when the virt driver does not support collecting such data

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields
No progress. We still need to understand how this problem happens to find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-------------------------------------
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists notification - lost the +2 due to a merge conflict
* https://review.openstack.org/#/c/410297/ Transform missing delete notifications - many +1s, needs core review

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work: https://review.openstack.org/#/c/526251/ - No progress. I've pinged the author but no response.

Add the user id and project id of the user who initiated the instance action to the notification
-------------------------------------------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work: https://review.openstack.org/#/c/536243/ - No progress. I've pinged the author but no response.

Sending full traceback in versioned notifications
-------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch: https://review.openstack.org/#/c/564092/

Add versioned notifications for removing a member from a server group
----------------------------------------------------------------------
The specless bp: https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see basic problems with the overall bp. See Matt's mail on the ML: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html

Add notification support for trusted_certs
------------------------------------------
This is part of the bp nova-validate-certificates implementation series to extend some of the instance notifications: https://review.openstack.org/#/c/563269
I have to re-review the patch as it seems Brianna updated it based on my suggestions.
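(Aside, for readers new to these payloads: a minimal sketch of the oslo.versionedobjects style the versioned notifications above are built on. Nova's real payload classes live under nova/notifications/objects/ and add schema-population helpers on top of this; the class and field names below are illustrative only.)

    # illustrative payload object, not nova's actual class
    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields

    class ExampleActionPayload(base.VersionedObject):
        # bump VERSION whenever the fields change
        VERSION = '1.0'
        fields = {
            'uuid': fields.UUIDField(),
            # the action-initiator blueprint above adds fields like these
            'action_initiator_user': fields.StringField(nullable=True),
            'action_initiator_project': fields.StringField(nullable=True),
        }

    # note: assigning a malformed value to a UUIDField currently only
    # emits a warning rather than raising (see the map_instances mail below)
    payload = ExampleActionPayload(uuid='not-a-uuid')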
Introduce Pending VM state
--------------------------
The spec https://review.openstack.org/#/c/554212 proposes to introduce a new notification along with the new state. I have to give a detailed review of this proposal.

Weekly meeting
--------------
The next meeting will be held on the 8th of May on #openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180508T170000

Cheers,
gibi

From balazs.gibizer at ericsson.com Tue May 8 09:49:50 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Tue, 8 May 2018 11:49:50 +0200
Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db
Message-ID: <1525772990.5489.1@smtp.office365.com>

Hi,

The oslo UUIDField emits a warning if the string used as a field value does not pass the validation of the uuid.UUID(str(value)) call [3]. All the offending places are fixed in nova except the nova-manage cell_v2 map_instances call [1][2]. That call uses markers in the DB that are not valid UUIDs. If we could fix this last offender then we could merge the patch [4] that changes this warning to an exception in the nova tests, to avoid such future rule violations.

However, I'm not sure it is easy to fix. Replacing 'INSTANCE_MIGRATION_MARKER' at [1] with '00000000-0000-0000-0000-000000000000' might work, but I don't know what to do with instance_uuid.replace(' ', '-') [2] to make it a valid UUID. Also, if there is an unfinished mapping in the deployment and the marker is then changed in the code, that leads to inconsistencies.

I'm open to any suggestions.

Cheers,
gibi

[1] https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1168
[2] https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1180
[3] https://github.com/openstack/oslo.versionedobjects/blob/29e643e4a93333866b33965b68fc8dfb8acf30fa/oslo_versionedobjects/fields.py#L359
[4] https://review.openstack.org/#/c/540386

From linghucongsong at 163.com Tue May 8 09:56:14 2018
From: linghucongsong at 163.com (linghucongsong)
Date: Tue, 8 May 2018 17:56:14 +0800 (CST)
Subject: [openstack-dev] [infra][ci]Does the openstack ci vms start each time clear up enough?
In-Reply-To: 
References: <74160d3e.f22c.1632a36c1aa.Coremail.linghucongsong@163.com> <20180504093745.GA38938@smcginnis-mbp.local> <1525449149.2793610.1361025224.52597443@webmail.messagingengine.com>
Message-ID: 

hi cboylan. Thanks for the reply!

I have rechecked several times, but the second run always fails while the first one always passes. Could the reason be what luckyvega wrote in the email below?

At 2018-05-06 18:20:47, "Vega Cai" wrote:

To test whether it's our new patch that causes the problem, I submitted a dummy patch[1] to trigger CI and the CI failed again. Checking the log of nova scheduler, it is very strange that the scheduling starts with 0 hosts at the beginning.
May 06 09:40:34.358585 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_service.periodic_task [None req-008ee30a-47a1-40a2-bf64-cb0f1719806e None None] Running periodic task SchedulerManager._run_periodic_tasks {{(pid=23795) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}} May 06 09:41:23.968029 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.manager [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Starting to schedule for instances: [u'8b227e85-8959-4e07-be3d-1bc094c115c1'] {{(pid=23795) select_destinations /opt/stack/new/nova/nova/scheduler/manager.py:118}} May 06 09:41:23.969293 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" acquired by "nova.scheduler.client.report._create_client" :: waited 0.000s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}} May 06 09:41:23.975304 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "placement_client" released by "nova.scheduler.client.report._create_client" :: held 0.006s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}} May 06 09:41:24.276470 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "6e118c71-9008-4694-8aee-faa607944c5f" acquired by "nova.context.get_or_set_cached_cell_and_set_connections" :: waited 0.000s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:273}} May 06 09:41:24.279331 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_concurrency.lockutils [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Lock "6e118c71-9008-4694-8aee-faa607944c5f" released by "nova.context.get_or_set_cached_cell_and_set_connections" :: held 0.003s {{(pid=23795) inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285}} May 06 09:41:24.302854 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_db.sqlalchemy.engines [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION {{(pid=23795) _check_effective_sql_mode /usr/local/lib/python2.7/dist-packages/oslo_db/sqlalchemy/engines.py:308}} May 06 09:41:24.321713 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Starting with 0 host(s) {{(pid=23795) get_filtered_objects /opt/stack/new/nova/nova/filters.py:70}} May 06 09:41:24.322136 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: INFO nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filter RetryFilter returned 0 hosts May 06 09:41:24.322614 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. 
Filter results: [('RetryFilter', None)] {{(pid=23795) get_filtered_objects /opt/stack/new/nova/nova/filters.py:129}} May 06 09:41:24.323029 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: INFO nova.filters [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtering removed all hosts for the request with instance ID '8b227e85-8959-4e07-be3d-1bc094c115c1'. Filter results: ['RetryFilter: (start: 0, end: 0)'] May 06 09:41:24.323419 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] Filtered [] {{(pid=23795) _get_sorted_hosts /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:404}} May 06 09:41:24.323861 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG nova.scheduler.filter_scheduler [None req-c67986fa-2e3b-45b7-96dd-196704945b95 admin admin] There are 0 hosts available but 1 instances requested to build. {{(pid=23795) _ensure_sufficient_hosts /opt/stack/new/nova/nova/scheduler/filter_scheduler.py:278}} May 06 09:41:26.358317 ubuntu-xenial-inap-mtl01-0003885152 nova-scheduler[21962]: DEBUG oslo_service.periodic_task [None req-008ee30a-47a1-40a2-bf64-cb0f1719806e None None] Running periodic task SchedulerManager._run_periodic_tasks {{(pid=23794) run_periodic_tasks /usr/local/lib/python2.7/dist-packages/oslo_service/periodic_task.py:215}} I copy the log between two periodic task log records to show one whole scheduling process. Zhiyuan On Fri, 4 May 2018 at 23:52 Clark Boylan wrote: On Fri, May 4, 2018, at 2:37 AM, Sean McGinnis wrote: > On Fri, May 04, 2018 at 04:13:41PM +0800, linghucongsong wrote: > > > > Hi all! > > > > Recently we meet a strange problem in our ci. look this link: https://review.openstack.org/#/c/532097/ > > > > we can pass the ci in the first time, but when we begin to start the gate job, it will always failed in the second time. > > > > we have rebased several times, it alway pass the ci in the first time and failed in the second time. > > > > This have not happen before and make me to guess is it really we start the ci from the new fresh vms each time? > > A new VM is spun up for each test run, so I don't believe this is an issue with > stale artifacts on the host. I would guess this is more likely some sort of > race condition, and you just happen to be hitting it 50% of the time. Additionally you can check the job logs to see while these two jobs did run against the same cloud provider they did so in different regions on hosts with completely different IP addresses. The inventory files [0][1] are where I would start if you suspect oddness of this sort. Reading them I don't see anything to indicate the nodes were reused. [0] http://logs.openstack.org/97/532097/16/check/legacy-tricircle-dsvm-multiregion/c9b3d29/zuul-info/inventory.yaml [1] http://logs.openstack.org/97/532097/16/gate/legacy-tricircle-dsvm-multiregion/ad547d5/zuul-info/inventory.yaml Clark __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- BR Zhiyuan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhang.lei.fly at gmail.com Tue May 8 10:13:23 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Tue, 8 May 2018 18:13:23 +0800 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> Message-ID: Time is up. And welcome mgoddard to the team :D On Thu, May 3, 2018 at 5:47 PM, Goutham Pratapa wrote: > +1 for `mgoddard` > > On Thu, May 3, 2018 at 1:21 PM, duonghq at vn.fujitsu.com < > duonghq at vn.fujitsu.com> wrote: > >> +1 >> >> >> >> Sorry for my late reply, thank you for your contribution in Kolla. >> >> >> >> Regards, >> >> Duong >> >> >> >> *From:* Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] >> *Sent:* Thursday, April 26, 2018 10:31 PM >> *To:* OpenStack Development Mailing List > .org> >> *Subject:* [openstack-dev] [kolla][vote]Core nomination for Mark Goddard >> (mgoddard) as kolla core member >> >> >> >> Kolla core reviewer team, >> >> It is my pleasure to nominate >> >> ​ >> >> mgoddard for kolla core team. >> >> ​ >> >> Mark has been working both upstream and downstream with kolla and >> kolla-ansible for over two years, building bare metal compute clouds with >> ironic for HPC. He's been involved with OpenStack since 2014. He started >> the kayobe deployment project which complements kolla-ansible. He is >> also the most active non-core contributor for last 90 days[1] >> >> ​​ >> >> Consider this nomination a +1 vote from me >> >> A +1 vote indicates you are in favor of >> >> ​ >> >> mgoddard as a candidate, a -1 >> is a >> >> ​​ >> >> veto. Voting is open for 7 days until >> >> ​May >> >> >> >> ​4​ >> >> th, or a unanimous >> response is reached or a veto vote occurs. >> >> [1] http://stackalytics.com/report/contribution/kolla-group/90 >> >> >> >> -- >> >> Regards, >> >> Jeffrey Zhang >> >> Blog: http://xcodest.me >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Cheers !!! > Goutham Pratapa > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Tue May 8 10:22:27 2018 From: mark at stackhpc.com (Mark Goddard) Date: Tue, 8 May 2018 11:22:27 +0100 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: <3c755cbfc76e4eff93335560daac96a7@G07SGEXCMSGPS03.g07.fujitsu.local> Message-ID: Thanks everyone for putting your trust in me! On 8 May 2018 at 11:13, Jeffrey Zhang wrote: > Time is up. And welcome mgoddard to the team :D > > On Thu, May 3, 2018 at 5:47 PM, Goutham Pratapa > wrote: > >> +1 for `mgoddard` >> >> On Thu, May 3, 2018 at 1:21 PM, duonghq at vn.fujitsu.com < >> duonghq at vn.fujitsu.com> wrote: >> >>> +1 >>> >>> >>> >>> Sorry for my late reply, thank you for your contribution in Kolla. 
>>>
>>> Regards,
>>> Duong
>>>
>>> *From:* Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com]
>>> *Sent:* Thursday, April 26, 2018 10:31 PM
>>> *To:* OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
>>> *Subject:* [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member
>>>
>>> Kolla core reviewer team,
>>>
>>> It is my pleasure to nominate mgoddard for the kolla core team.
>>>
>>> Mark has been working both upstream and downstream with kolla and
>>> kolla-ansible for over two years, building bare metal compute clouds with
>>> ironic for HPC. He's been involved with OpenStack since 2014. He started
>>> the kayobe deployment project which complements kolla-ansible. He is
>>> also the most active non-core contributor for the last 90 days[1]
>>>
>>> Consider this nomination a +1 vote from me.
>>>
>>> A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
>>> is a veto. Voting is open for 7 days until May 4th, or until a unanimous
>>> response is reached or a veto vote occurs.
>>>
>>> [1] http://stackalytics.com/report/contribution/kolla-group/90
>>>
>>> --
>>> Regards,
>>> Jeffrey Zhang
>>> Blog: http://xcodest.me
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> --
>> Cheers !!!
>> Goutham Pratapa
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From hariprasanth.l at msystechnologies.com Tue May 8 11:13:02 2018
From: hariprasanth.l at msystechnologies.com (Hari Prasanth Loganathan)
Date: Tue, 8 May 2018 16:43:02 +0530
Subject: [openstack-dev] Create a Volume type using OpenStack
Message-ID: 

Hi Team,

1) I am able to list all the projects using the OpenStack REST API:

http://{IP_ADDRESS}:5000/v3/auth/projects/

But as per the documentation of the /v3/ APIs in OpenStack (https://developer.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes), I need APIs to:

i) list all the Volume types in OpenStack
ii) create the Volume types in OpenStack

I am able to create them via the CLI:

Create Volume Type
openstack volume type create ${poolName}
cinder type-key "${poolName}" set storagetype:pool=${poolName} volume_backend_name=rbd-${poolName}

I need to perform the same using the API. Please help me with this.

Thanks,
Hari
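(Before the replies: a rough sketch of what those two calls look like against the block-storage v3 API, for orientation only. The host, project ID, token, and pool name are placeholders, and the api-ref links later in this thread are the authoritative reference for the parameters.)

    import requests

    # placeholders: real values come from keystone and your cinder endpoint
    cinder = 'http://CONTROLLER:8776/v3/PROJECT_ID'
    headers = {'X-Auth-Token': 'TOKEN'}

    # i) list all volume types: GET /v3/{project_id}/types
    types = requests.get(cinder + '/types', headers=headers).json()

    # ii) create a volume type with extra specs in one call, roughly the
    # API equivalent of 'openstack volume type create' followed by
    # 'cinder type-key ... set ...': POST /v3/{project_id}/types
    body = {'volume_type': {
        'name': 'poolName',
        'extra_specs': {'storagetype:pool': 'poolName',
                        'volume_backend_name': 'rbd-poolName'}}}
    resp = requests.post(cinder + '/types', json=body, headers=headers)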
From duncan.thomas at gmail.com Tue May 8 11:18:36 2018
From: duncan.thomas at gmail.com (Duncan Thomas)
Date: Tue, 8 May 2018 12:18:36 +0100
Subject: [openstack-dev] Create a Volume type using OpenStack
In-Reply-To: 
References: 
Message-ID: 

If you're using the cinder CLI (aka python-cinderclient) and you run with --debug, you can see the REST calls used. I would assume the unified openstack CLI client has a similar mode.

On 8 May 2018 at 12:13, Hari Prasanth Loganathan wrote:
> Hi Team,
>
> 1) I am able to list all the projects using the OpenStack REST API:
>
> http://{IP_ADDRESS}:5000/v3/auth/projects/
>
> But as per the documentation of the /v3/ APIs in OpenStack
> (https://developer.openstack.org/api-ref/block-storage/v3/index.html#volumes-volumes),
> I need APIs to:
>
> i) list all the Volume types in OpenStack
> ii) create the Volume types in OpenStack
>
> I am able to create them via the CLI:
>
> Create Volume Type
> openstack volume type create ${poolName}
> cinder type-key "${poolName}" set storagetype:pool=${poolName} volume_backend_name=rbd-${poolName}
>
> I need to perform the same using the API. Please help me with this.
>
> Thanks,
> Hari
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Duncan Thomas

From bob.ball at citrix.com Tue May 8 12:41:54 2018
From: bob.ball at citrix.com (Bob Ball)
Date: Tue, 8 May 2018 12:41:54 +0000
Subject: [openstack-dev] [nova] reboot a rescued instance?
In-Reply-To: <716ed1b2-74c5-db73-fc4f-cf5ff10edd4d@gmail.com>
References: <716ed1b2-74c5-db73-fc4f-cf5ff10edd4d@gmail.com>
Message-ID: <4395197a23f647da88d2becf06a67d7d@AMSPEX02CL01.citrite.net>

Hi Matt,

My understanding is that this is being used by Rackspace. AFAIK the change isn't upstream because there was no sensible way to permit reboot of a rescued instance for XenAPI users but prevent it for other drivers.

I'd be hesitant to permit reboot-from-rescue for all drivers, as I'm not sure the drivers would have consistent (or perhaps working!) behaviours. Is there a way to enable this when using XenAPI?

Bob

-----Original Message-----
From: Matt Riedemann [mailto:mriedemos at gmail.com]
Sent: 04 May 2018 14:50
To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [nova] reboot a rescued instance?

For full details on this, see the IRC conversation [1].

tl;dr: the nova compute manager and xen virt driver assume that you can reboot a rescued instance [2] but the API does not allow that [3] and as far as I can tell, it never has.

I can only assume that Rackspace had an out of tree change to the API to allow rebooting a rescued instance. I don't know why that wouldn't have been upstreamed, but the upstream API doesn't allow it. I'm also not aware of anything internal to nova that reboots an instance in a rescued state.

So the question now is, should we add rescue to the possible states to reboot an instance in the API? Or just roll back this essentially dead code in the compute manager and xen virt driver? I don't know if any other virt drivers will support rebooting a rescued instance.
[1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-05-03.log.html#t2018-05-03T18:49:58 [2] https://review.openstack.org/#/q/topic:bug/1170237+(status:open+OR+status:merged [3] https://github.com/openstack/nova/blob/4b0d0ea9f18139d58103a520a6a4e9119e19a4de/nova/compute/vm_states.py#L69-L72 -- Thanks, Matt __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From marcin.juszkiewicz at linaro.org Tue May 8 12:48:34 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Tue, 8 May 2018 14:48:34 +0200 Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install In-Reply-To: References: Message-ID: W dniu 18.04.2018 o 11:02, Michel Peterson pisze: > How can we fix this? Any update on it? Would like to get rid of current workarounds. From mriedemos at gmail.com Tue May 8 12:53:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 8 May 2018 07:53:49 -0500 Subject: [openstack-dev] [nova] reboot a rescued instance? In-Reply-To: <4395197a23f647da88d2becf06a67d7d@AMSPEX02CL01.citrite.net> References: <716ed1b2-74c5-db73-fc4f-cf5ff10edd4d@gmail.com> <4395197a23f647da88d2becf06a67d7d@AMSPEX02CL01.citrite.net> Message-ID: On 5/8/2018 7:41 AM, Bob Ball wrote: > I'd be hesitant to permit reboot-from-rescue for all drivers as I'm not sure the drivers would have consistent (or perhaps working!) behaviours? Is there a way to enable this when using XenAPI? Off the top of my head the virt driver could report a capability for this which gets modeled in placement as a standard trait on the compute node resource provider. Then the API could check for that trait from Placement and fail if it's not found. -- Thanks, Matt From sean.mcginnis at gmx.com Tue May 8 13:19:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 8 May 2018 08:19:21 -0500 Subject: [openstack-dev] Create a Volume type using OpenStack In-Reply-To: References: Message-ID: <20180508131920.GA1692@sm-xps> On Tue, May 08, 2018 at 12:18:36PM +0100, Duncan Thomas wrote: > If you're using the cinder CLI (aka python-cinderclient) then if you > run with --debug, then you can see the REST calls used. > > > > > I need API's to > > i) list all the Volume types in the OpenStack > > ii) I need API's to create the Volume types in the OpenStack > > Hi Hari, The volume type API calls our in a different section (Volume types vs Volumes): https://developer.openstack.org/api-ref/block-storage/v3/index.html#volume-types-types So I believe you are looking for: i) https://developer.openstack.org/api-ref/block-storage/v3/index.html#list-all-volume-types and ii) https://developer.openstack.org/api-ref/block-storage/v3/index.html#create-a-volume-type Sean From aschultz at redhat.com Tue May 8 15:07:33 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 8 May 2018 09:07:33 -0600 Subject: [openstack-dev] [tripleo] The Weekly Owl - 20th Edition Message-ID: Welcome to the twentieth edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. 
Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130090.html

+---------------------------------+
| General announcements |
+---------------------------------+
+--> Further discussions about Storyboard migration will be coming to the ML this week.
+--> We have 4 more weeks until milestone 2! Check out the schedule: https://releases.openstack.org/rocky/schedule.html

+------------------------------+
| Continuous Integration |
+------------------------------+
+--> Ruck is myoung and Rover is sshnaidm. Please let them know about any new CI issue.
+--> Master promotion is 0 days, Queens is 1 day, Pike is 3 days and Ocata is 2 days. Kudos folks!
+--> Upcoming DLRN changes may impact CI, see http://lists.openstack.org/pipermail/openstack-dev/2018-May/130195.html
+--> Still working on the libvirt based multinode reproducer, see https://goo.gl/DYCnkx
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-------------+
| Upgrades |
+-------------+
+--> Continued progress on ffwd upgrades as well as cleaning up upgrade/update jobs.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+
+--> Continued efforts to align instack-undercloud & the containerized undercloud
+--> all-in-one work beginning to extract the deployment framework/tooling from the containerized undercloud
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+----------------------+
| config-download |
+----------------------+
+--> Progress on the OpenStack operations Ansible role: https://github.com/samdoran/ansible-role-openstack-operations
+--> Working on the Skydive transition to external tasks
+--> Working on improving performance when deploying Ceph with Ansible.
+--> client/api/workflow for a "play deployment failures list" equivalent to "stack failures list"
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--------------+
| Integration |
+--------------+
+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+
+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+
+--> Custom validations
+--> Fixing node health validations
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+
+--> Continued work on neutron sidecar containers
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+
+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-----------+
| Security |
+-----------+
+--> Patches for public TLS by default are up: https://review.openstack.org/#/q/topic:public-tls-default+status:open
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+------------+
| Owl fact |
+------------+
Burrowing owls migrate to the Rocky Mountain Arsenal National Wildlife Refuge (near Denver, CO) every summer and raise their young in abandoned prairie dog burrows.
https://www.fws.gov/nwrs/threecolumn.aspx?id=2147510941 From zbitter at redhat.com Tue May 8 15:09:21 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 8 May 2018 11:09:21 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> Message-ID: On 30/04/18 17:16, Ben Nemec wrote: >> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: >>> 1. Fix oslo.service functional tests -- the Oslo team needs help >>>     maintaining this library. Alternatively, we could move all >>>     services to use cotyledon (https://pypi.org/project/cotyledon/). I submitted a patch that fixes the py35 gate (which was broken due to changes between CPython 3.4 and 3.5), so once that merges we can flip the gate back to voting: https://review.openstack.org/566714 > For everyone's awareness, we discussed this in the Oslo meeting today > and our first step is to see how many, if any, services are actually > relying on the oslo.service functionality that doesn't work in Python 3 > today.  From there we will come up with a plan for how to move forward. > > https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. These tests are currently skipped in both oslo_service and nova. (Equivalent tests were removed from Neutron and Manila on the principle that they're now oslo_service's responsibility.) This appears to be a series of long-standing bugs in eventlet: Python 3.5 failure mode: https://github.com/eventlet/eventlet/issues/308 https://github.com/eventlet/eventlet/issues/189 Python 3.4 failure mode: https://github.com/eventlet/eventlet/issues/476 https://github.com/eventlet/eventlet/issues/145 There are also more problems coming down the pipeline in Python 3.6: https://github.com/eventlet/eventlet/issues/371 That one is resolved in eventlet 0.21, but we have that blocked by upper-constraints: http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 Given that the code in question relates solely to standalone WSGI servers with SSL and everything should have already migrated to Apache, and that the upstream is clearly overworked and unlikely to merge fixes any time soon (plus we would have to deal with the fallout of moving the upper constraint), I agree that it would be preferable if we could just ditch this functionality. cheers, Zane. From gr at ham.ie Tue May 8 15:28:46 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 8 May 2018 16:28:46 +0100 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> Message-ID: <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> On 08/05/18 16:09, Zane Bitter wrote: > On 30/04/18 17:16, Ben Nemec wrote: >>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: >>>> 1. Fix oslo.service functional tests -- the Oslo team needs help >>>>     maintaining this library. Alternatively, we could move all >>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). 
> > I submitted a patch that fixes the py35 gate (which was broken due to > changes between CPython 3.4 and 3.5), so once that merges we can flip > the gate back to voting: > > https://review.openstack.org/566714 > >> For everyone's awareness, we discussed this in the Oslo meeting today >> and our first step is to see how many, if any, services are actually >> relying on the oslo.service functionality that doesn't work in Python >> 3 today.  From there we will come up with a plan for how to move forward. >> >> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > > These tests are currently skipped in both oslo_service and nova. > (Equivalent tests were removed from Neutron and Manila on the principle > that they're now oslo_service's responsibility.) > > This appears to be a series of long-standing bugs in eventlet: > > Python 3.5 failure mode: > https://github.com/eventlet/eventlet/issues/308 > https://github.com/eventlet/eventlet/issues/189 > > Python 3.4 failure mode: > https://github.com/eventlet/eventlet/issues/476 > https://github.com/eventlet/eventlet/issues/145 > > There are also more problems coming down the pipeline in Python 3.6: > > https://github.com/eventlet/eventlet/issues/371 > > That one is resolved in eventlet 0.21, but we have that blocked by > upper-constraints: > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 > > > Given that the code in question relates solely to standalone WSGI > servers with SSL and everything should have already migrated to Apache, > and that the upstream is clearly overworked and unlikely to merge fixes > any time soon (plus we would have to deal with the fallout of moving the > upper constraint), I agree that it would be preferable if we could just > ditch this functionality. There are a few projects that have not migrated, and some that have issues running in non standalone WSGI mode (due, ironically to eventlet) We should probably get people to run these projects behind an reverse proxy, and terminate SSL there, but right now we don't have that documented. > cheers, > Zane. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Tue May 8 15:53:06 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 11:53:06 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> Message-ID: <1525794769-sup-717@lrrr.local> Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > On 08/05/18 16:09, Zane Bitter wrote: > > On 30/04/18 17:16, Ben Nemec wrote: > >>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > >>>> 1. Fix oslo.service functional tests -- the Oslo team needs help > >>>>     maintaining this library. Alternatively, we could move all > >>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). 
> > > > I submitted a patch that fixes the py35 gate (which was broken due to > > changes between CPython 3.4 and 3.5), so once that merges we can flip > > the gate back to voting: > > > > https://review.openstack.org/566714 > > > >> For everyone's awareness, we discussed this in the Oslo meeting today > >> and our first step is to see how many, if any, services are actually > >> relying on the oslo.service functionality that doesn't work in Python > >> 3 today.  From there we will come up with a plan for how to move forward. > >> > >> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > > > > These tests are currently skipped in both oslo_service and nova. > > (Equivalent tests were removed from Neutron and Manila on the principle > > that they're now oslo_service's responsibility.) > > > > This appears to be a series of long-standing bugs in eventlet: > > > > Python 3.5 failure mode: > > https://github.com/eventlet/eventlet/issues/308 > > https://github.com/eventlet/eventlet/issues/189 > > > > Python 3.4 failure mode: > > https://github.com/eventlet/eventlet/issues/476 > > https://github.com/eventlet/eventlet/issues/145 > > > > There are also more problems coming down the pipeline in Python 3.6: > > > > https://github.com/eventlet/eventlet/issues/371 > > > > That one is resolved in eventlet 0.21, but we have that blocked by > > upper-constraints: > > http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 > > > > > > Given that the code in question relates solely to standalone WSGI > > servers with SSL and everything should have already migrated to Apache, > > and that the upstream is clearly overworked and unlikely to merge fixes > > any time soon (plus we would have to deal with the fallout of moving the > > upper constraint), I agree that it would be preferable if we could just > > ditch this functionality. > > There are a few projects that have not migrated, and some that have > issues running in non standalone WSGI mode (due, ironically to eventlet) > > We should probably get people to run these projects behind an reverse > proxy, and terminate SSL there, but right now we don't have that > documented. Do you know which projects? Doug From gr at ham.ie Tue May 8 16:01:36 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 8 May 2018 17:01:36 +0100 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <1525794769-sup-717@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> Message-ID: <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> On 08/05/18 16:53, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: >> On 08/05/18 16:09, Zane Bitter wrote: >>> On 30/04/18 17:16, Ben Nemec wrote: >>>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: >>>>>> 1. Fix oslo.service functional tests -- the Oslo team needs help >>>>>>     maintaining this library. Alternatively, we could move all >>>>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). 
>>> >>> I submitted a patch that fixes the py35 gate (which was broken due to >>> changes between CPython 3.4 and 3.5), so once that merges we can flip >>> the gate back to voting: >>> >>> https://review.openstack.org/566714 >>> >>>> For everyone's awareness, we discussed this in the Oslo meeting today >>>> and our first step is to see how many, if any, services are actually >>>> relying on the oslo.service functionality that doesn't work in Python >>>> 3 today.  From there we will come up with a plan for how to move forward. >>>> >>>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. >>> >>> These tests are currently skipped in both oslo_service and nova. >>> (Equivalent tests were removed from Neutron and Manila on the principle >>> that they're now oslo_service's responsibility.) >>> >>> This appears to be a series of long-standing bugs in eventlet: >>> >>> Python 3.5 failure mode: >>> https://github.com/eventlet/eventlet/issues/308 >>> https://github.com/eventlet/eventlet/issues/189 >>> >>> Python 3.4 failure mode: >>> https://github.com/eventlet/eventlet/issues/476 >>> https://github.com/eventlet/eventlet/issues/145 >>> >>> There are also more problems coming down the pipeline in Python 3.6: >>> >>> https://github.com/eventlet/eventlet/issues/371 >>> >>> That one is resolved in eventlet 0.21, but we have that blocked by >>> upper-constraints: >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 >>> >>> >>> Given that the code in question relates solely to standalone WSGI >>> servers with SSL and everything should have already migrated to Apache, >>> and that the upstream is clearly overworked and unlikely to merge fixes >>> any time soon (plus we would have to deal with the fallout of moving the >>> upper constraint), I agree that it would be preferable if we could just >>> ditch this functionality. >> >> There are a few projects that have not migrated, and some that have >> issues running in non standalone WSGI mode (due, ironically to eventlet) >> >> We should probably get people to run these projects behind an reverse >> proxy, and terminate SSL there, but right now we don't have that >> documented. > > Do you know which projects? I know of 2: Designate - mainly due to the major lack of resources available during the uwsgi goal period, and the level of work needed to unravel our tooling to support it. Glance - Has issues with image upload + uwsgi + eventlet [1] I am sure there are probably others, but I know of these 2. [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1 > > Doug > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From sundar.nadathur at intel.com Tue May 8 16:04:39 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Tue, 8 May 2018 09:04:39 -0700 Subject: [openstack-dev] [cyborg] [glance] [nova] Cyborg/Nova spec for os-acc is out Message-ID: Hi all,     The Cyborg compute node specification has been published: https://review.openstack.org/#/c/566798/ . Please review it. 
The main factors defined in this spec are: * The behavior with respect to accelerators when various Compute API [1] operations are applied. E.g. On a reboot/pause/suspend, the assigned accelerators are left intact. But, on a stop or shelve, they are detached. * The APIs for the newly proposed os-acc library. This is structured along the same lines as os-vif usage [2]. Changes are needed in Nova compute to invoke os-acc APIs on specific instance-related events. * Interactions of Cyborg with Glance in the compute node. The plan is to use Glance properties. No changes are needed in Glance. References: [1] https://developer.openstack.org/api-guide/compute/server_concepts.html [2] https://docs.openstack.org/os-vif/queens/user/usage.html Thanks & Regards, Sundar From doug at doughellmann.com Tue May 8 16:18:40 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 12:18:40 -0400 Subject: [openstack-dev] [tc] Technical Committee Status update, 7 May In-Reply-To: <1525704301-sup-8668@lrrr.local> References: <1525704301-sup-8668@lrrr.local> Message-ID: <1525796254-sup-1238@lrrr.local> Excerpts from Doug Hellmann's message of 2018-05-07 10:53:04 -0400: [snip] > The Adjutant project application [10] is still under review, and > the only votes registered are opposed. I anticipate having the topic > of how we review project applications as one of several items we > discuss during the TC retrospective session as the summit [11]. > > [10] https://review.openstack.org/553643 > [11] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective There is also a session dedicated to the Adjutant application scheduled for Thursday. Sorry for the oversight. https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21752/adjutant-official-project-status Doug From mtreinish at kortar.org Tue May 8 16:22:56 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 8 May 2018 12:22:56 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> Message-ID: <20180508162256.GA11443@zeong> On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote: > On 08/05/18 16:53, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > >> On 08/05/18 16:09, Zane Bitter wrote: > >>> On 30/04/18 17:16, Ben Nemec wrote: > >>>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > >>>>>> 1. Fix oslo.service functional tests -- the Oslo team needs help > >>>>>>     maintaining this library. Alternatively, we could move all > >>>>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). > >>> > >>> I submitted a patch that fixes the py35 gate (which was broken due to > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip > >>> the gate back to voting: > >>> > >>> https://review.openstack.org/566714 > >>> > >>>> For everyone's awareness, we discussed this in the Oslo meeting today > >>>> and our first step is to see how many, if any, services are actually > >>>> relying on the oslo.service functionality that doesn't work in Python > >>>> 3 today.  From there we will come up with a plan for how to move forward. > >>>> > >>>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. 
> >>>
> >>> These tests are currently skipped in both oslo_service and nova.
> >>> (Equivalent tests were removed from Neutron and Manila on the principle
> >>> that they're now oslo_service's responsibility.)
> >>>
> >>> This appears to be a series of long-standing bugs in eventlet:
> >>>
> >>> Python 3.5 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/308
> >>> https://github.com/eventlet/eventlet/issues/189
> >>>
> >>> Python 3.4 failure mode:
> >>> https://github.com/eventlet/eventlet/issues/476
> >>> https://github.com/eventlet/eventlet/issues/145
> >>>
> >>> There are also more problems coming down the pipeline in Python 3.6:
> >>>
> >>> https://github.com/eventlet/eventlet/issues/371
> >>>
> >>> That one is resolved in eventlet 0.21, but we have that blocked by
> >>> upper-constraints:
> >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135
> >>>
> >>> Given that the code in question relates solely to standalone WSGI
> >>> servers with SSL and everything should have already migrated to Apache,
> >>> and that the upstream is clearly overworked and unlikely to merge fixes
> >>> any time soon (plus we would have to deal with the fallout of moving the
> >>> upper constraint), I agree that it would be preferable if we could just
> >>> ditch this functionality.
> >>
> >> There are a few projects that have not migrated, and some that have
> >> issues running in non standalone WSGI mode (due, ironically to eventlet)
> >>
> >> We should probably get people to run these projects behind an reverse
> >> proxy, and terminate SSL there, but right now we don't have that
> >> documented.
> >
> > Do you know which projects?
>
> I know of 2:
>
> Designate - mainly due to the major lack of resources available during
> the uwsgi goal period, and the level of work needed to unravel our
> tooling to support it.
>
> Glance - Has issues with image upload + uwsgi + eventlet [1]

This actually is a bit misleading. Glance works fine with image upload
and uwsgi. That's the only configuration of glance in a wsgi app that
works, because of chunked transfer encoding not being in the WSGI
protocol. [2] uwsgi provides an alternate interface to read chunked
requests, which enables this to work. If you look at the bugs linked off
that release note about image upload you'll see they're all fixed.

The issues glance has with running in a wsgi app are related to its use
of async tasks via taskflow (which includes the tasks api and image
import stuff). This shouldn't be hard to fix, and I've had patches up to
address these for months:

https://review.openstack.org/#/c/531498/
https://review.openstack.org/#/c/549743/

Part of the issue is that there is no API-driven testing for these async
API functions, or any documented way to test them, which is why I marked
the 2nd one WIP: I have no method to test it, and I have asked several
times for a test case or some other method to validate these APIs
without getting an answer.

In fact people are running glance under uwsgi in production already
because it makes a lot of things easier and the current issues don't
affect most users.

-Matt Treinish

> I am sure there are probably others, but I know of these 2.
>
> [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1

[2] There are a few other ways, as some other wsgi servers have grafted
on support for chunked transfer encoding. But most wsgi servers have not
implemented a method.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Tue May 8 16:28:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 12:28:27 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> Message-ID: <1525796357-sup-3313@lrrr.local> Excerpts from Graham Hayes's message of 2018-05-08 17:01:36 +0100: > On 08/05/18 16:53, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > >> On 08/05/18 16:09, Zane Bitter wrote: > >>> On 30/04/18 17:16, Ben Nemec wrote: > >>>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > >>>>>> 1. Fix oslo.service functional tests -- the Oslo team needs help > >>>>>>     maintaining this library. Alternatively, we could move all > >>>>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). > >>> > >>> I submitted a patch that fixes the py35 gate (which was broken due to > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip > >>> the gate back to voting: > >>> > >>> https://review.openstack.org/566714 > >>> > >>>> For everyone's awareness, we discussed this in the Oslo meeting today > >>>> and our first step is to see how many, if any, services are actually > >>>> relying on the oslo.service functionality that doesn't work in Python > >>>> 3 today.  From there we will come up with a plan for how to move forward. > >>>> > >>>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > >>> > >>> These tests are currently skipped in both oslo_service and nova. > >>> (Equivalent tests were removed from Neutron and Manila on the principle > >>> that they're now oslo_service's responsibility.) > >>> > >>> This appears to be a series of long-standing bugs in eventlet: > >>> > >>> Python 3.5 failure mode: > >>> https://github.com/eventlet/eventlet/issues/308 > >>> https://github.com/eventlet/eventlet/issues/189 > >>> > >>> Python 3.4 failure mode: > >>> https://github.com/eventlet/eventlet/issues/476 > >>> https://github.com/eventlet/eventlet/issues/145 > >>> > >>> There are also more problems coming down the pipeline in Python 3.6: > >>> > >>> https://github.com/eventlet/eventlet/issues/371 > >>> > >>> That one is resolved in eventlet 0.21, but we have that blocked by > >>> upper-constraints: > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 > >>> > >>> > >>> Given that the code in question relates solely to standalone WSGI > >>> servers with SSL and everything should have already migrated to Apache, > >>> and that the upstream is clearly overworked and unlikely to merge fixes > >>> any time soon (plus we would have to deal with the fallout of moving the > >>> upper constraint), I agree that it would be preferable if we could just > >>> ditch this functionality. > >> > >> There are a few projects that have not migrated, and some that have > >> issues running in non standalone WSGI mode (due, ironically to eventlet) > >> > >> We should probably get people to run these projects behind an reverse > >> proxy, and terminate SSL there, but right now we don't have that > >> documented. > > > > Do you know which projects? 
>
> I know of 2:
>
> Designate - mainly due to the major lack of resources available during
> the uwsgi goal period, and the level of work needed to unravel our
> tooling to support it.
>
> Glance - Has issues with image upload + uwsgi + eventlet [1]
>
> I am sure there are probably others, but I know of these 2.
>
> [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1

OK, so we need to put these things on the red flags list for moving to
Python 3. I've updated the status for oslo.service, designate, and
glance in https://wiki.openstack.org/wiki/Python3 to reflect that.

Doug

From juliaashleykreger at gmail.com  Tue May  8 16:43:07 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Tue, 8 May 2018 12:43:07 -0400
Subject: [openstack-dev] [release][ironic] hacking 1.1.0 released and ironic CI gates failing pep8
Message-ID:

About two hours ago, we started seeing Ironic CI jobs failing pep8
with new errors[1]. For some of our repositories, it just seems to be
a couple of lines that need to be fixed. On ironic itself, supporting
this might have us dead in the water for a while to fix the code in
accordance with what hacking is now expecting.

That being said, dtantsur and dhellmann have the perception that new
checks are supposed to be opt-in only, yet this new hacking appears to
have W605 and W606 enabled by default, as indicated by discussion in
#openstack-release[2].

Please advise; it seems like the release team ought to revert the
breaking changes and cut a new release as soon as possible.

-Julia

[1]: http://logs.openstack.org/87/557687/4/check/openstack-tox-pep8/75380de/job-output.txt.gz#_2018-05-08_14_46_47_179606
[2]: http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2018-05-08.log.html#t2018-05-08T16:30:22

From lbragstad at gmail.com  Tue May  8 17:15:40 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Tue, 8 May 2018 12:15:40 -0500
Subject: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles
In-Reply-To: <28cb94f9-2cff-16b0-5f93-ca9780f2b7b4@gmail.com>
References: <28cb94f9-2cff-16b0-5f93-ca9780f2b7b4@gmail.com>
Message-ID: <5072e9f9-60e7-dbe9-5a5b-9a74e81f61dc@gmail.com>

This was discussed in today's meeting and it was pretty clear that we
should still do this for Rocky [0]. Updating this thread to include
documentation of the discussion. Thanks, Harry.

[0] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-15

On 05/04/2018 03:16 PM, Lance Bragstad wrote:
>
> On 05/04/2018 02:55 PM, Harry Rybacki wrote:
>> Greetings All,
>>
>> After a discussion in #openstack-tc[1] earlier today, the Keystone
>> team is adjusting its approach in proposing default roles[2].
>> Subsequently, I have ported the current default roles specification
>> from openstack-specs[3] to keystone-specs[2].
>>
>> The original review has been in a pretty stable state for a few weeks.
>> As such, I propose we allow the new spec an exception to the original
>> Rocky-m1 proposal freeze date.
> I don't have an issue with this, especially since we talked about it
> heavily at the PTG. We also had people familiar with keystone +1 the
> openstack-spec prior to keystone's proposal freeze. I'm OK granting an
> exception here if other keystone contributors don't object.
>
>> I invite more discussion around default roles, and our proposed
>> approach. The Keystone team has a forum session[4] dedicated to this
>> topic at 1135 on day one of the Vancouver Summit.
Everyone should feel >> welcome and encouraged to attend -- we hope that this work will lead >> to an OpenStack Community Goal in a not-so-distant release. > I think scoping this down to be keystone-specific is a smart move. It allows us to focus on building a solid template for other projects to learn from. I was pleasantly surprised to hear people in -tc suggest this as a candidate for a community goal in Stein or T. > > Also, big thanks to jroll, dhellmann, ttx, zaneb, smcginnis, johnsom, and mnaser for taking time to work through this with us. > >> [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-04.log.html#t2018-05-04T14:40:36 >> [2] - https://review.openstack.org/#/c/566377/ >> [3] - https://review.openstack.org/#/c/523973/ >> [4] - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles >> >> >> /R >> >> Harry Rybacki >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Tue May 8 17:23:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 13:23:42 -0400 Subject: [openstack-dev] [qa][release][ironic][requirements] hacking 1.1.0 released and ironic CI gates failing pep8 In-Reply-To: References: Message-ID: <1525800079-sup-7992@lrrr.local> (I added the [qa] topic tag for the QA team, since they own hacking, and [requirements] for that team since I have a question about capping.) Excerpts from Julia Kreger's message of 2018-05-08 12:43:07 -0400: > About two hours ago, we started seeing Ironic CI jobs failing pep8 > with new errors[1]. For some of our repositories, it just seems to be > a couple of lines that need to be fixed. On ironic itself, supporting > this might have us dead in the water for a while to fix the code in > accordance with what hacking is now expecting. > > That being said, dtantsur and dhellmann have the perception that new > checks are supposed to be opt-in only, yet this new hacking appears to > have at W605 and W606 enabled by default as indicated by discussion in > #openstack-release[2]. > > Please advise, it seems like the release team ought to revert the > breaking changes and cut a new release as soon as possible. > > -Julia > > [1]: http://logs.openstack.org/87/557687/4/check/openstack-tox-pep8/75380de/job-output.txt.gz#_2018-05-08_14_46_47_179606 > [2]: http://eavesdrop.openstack.org/irclogs/%23openstack-release/%23openstack-release.2018-05-08.log.html#t2018-05-08T16:30:22 > As discussed in #openstack-release, those checks are pulled in via pycodestyle rather than hacking itself, and pycodestyle doesn't have an opt-in policy. Hacking is in the blacklist for requirements management, so teams *ought* to be able to cap it, if I understand correctly. So I suggest at least starting with a patch to test that. 
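For anyone trying that, the minimal form of such a patch is just an
upper bound on hacking in the project's test-requirements.txt. A sketch
(the bounds below are illustrative; each project should pin to whatever
release its tree currently passes with):

    # test-requirements.txt (sketch only; bounds are illustrative)
    # hold below 1.1.0 until the new W605/W606 findings are fixed
    hacking>=1.0.0,<1.1.0

Once the code passes the new checks, the cap can be raised again in a
separate, self-testing change.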
Doug From doug at doughellmann.com Tue May 8 17:34:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 13:34:11 -0400 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <20180508162256.GA11443@zeong> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> Message-ID: <1525800729-sup-4338@lrrr.local> (added [glance] subject tag) Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400: > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote: > > On 08/05/18 16:53, Doug Hellmann wrote: > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > > >> On 08/05/18 16:09, Zane Bitter wrote: > > >>> On 30/04/18 17:16, Ben Nemec wrote: > > >>>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > > >>>>>> 1. Fix oslo.service functional tests -- the Oslo team needs help > > >>>>>>     maintaining this library. Alternatively, we could move all > > >>>>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). > > >>> > > >>> I submitted a patch that fixes the py35 gate (which was broken due to > > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip > > >>> the gate back to voting: > > >>> > > >>> https://review.openstack.org/566714 > > >>> > > >>>> For everyone's awareness, we discussed this in the Oslo meeting today > > >>>> and our first step is to see how many, if any, services are actually > > >>>> relying on the oslo.service functionality that doesn't work in Python > > >>>> 3 today.  From there we will come up with a plan for how to move forward. > > >>>> > > >>>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > > >>> > > >>> These tests are currently skipped in both oslo_service and nova. > > >>> (Equivalent tests were removed from Neutron and Manila on the principle > > >>> that they're now oslo_service's responsibility.) > > >>> > > >>> This appears to be a series of long-standing bugs in eventlet: > > >>> > > >>> Python 3.5 failure mode: > > >>> https://github.com/eventlet/eventlet/issues/308 > > >>> https://github.com/eventlet/eventlet/issues/189 > > >>> > > >>> Python 3.4 failure mode: > > >>> https://github.com/eventlet/eventlet/issues/476 > > >>> https://github.com/eventlet/eventlet/issues/145 > > >>> > > >>> There are also more problems coming down the pipeline in Python 3.6: > > >>> > > >>> https://github.com/eventlet/eventlet/issues/371 > > >>> > > >>> That one is resolved in eventlet 0.21, but we have that blocked by > > >>> upper-constraints: > > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 > > >>> > > >>> > > >>> Given that the code in question relates solely to standalone WSGI > > >>> servers with SSL and everything should have already migrated to Apache, > > >>> and that the upstream is clearly overworked and unlikely to merge fixes > > >>> any time soon (plus we would have to deal with the fallout of moving the > > >>> upper constraint), I agree that it would be preferable if we could just > > >>> ditch this functionality. 
> > >> > > >> There are a few projects that have not migrated, and some that have > > >> issues running in non standalone WSGI mode (due, ironically to eventlet) > > >> > > >> We should probably get people to run these projects behind an reverse > > >> proxy, and terminate SSL there, but right now we don't have that > > >> documented. > > > > > > Do you know which projects? > > > > I know of 2: > > > > Designate - mainly due to the major lack of resources available during > > the uwsgi goal period, and the level of work needed to unravel our > > tooling to support it. > > > > Glance - Has issues with image upload + uwsgi + eventlet [1] > > This actually is a bit misleading. Glance works fine with image upload and uwsgi. > That's the only configuration of glance in a wsgi app that works because > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides > an alternate interface to read chunked requests which enables this to work. > If you look at the bugs linked off that release note about image upload > you'll see they're all fixed. Is this documented somewhere? > > The issues glance has with running in a wsgi app are related to it's use of > async tasks via taskflow. (which includes the tasks api and image import stuff) > This shouldn't be hard to fix, and I've had patches up to address these for > months: > > https://review.openstack.org/#/c/531498/ > https://review.openstack.org/#/c/549743/ > > Part of the issue is that there is no api driven testing for these async api > functions or any documented way to test them. Which is why I marked the 2nd > one WIP, since I have no method to test it and after asking several times > for a test case or some other method to validate these APIs without an answer. It would be helpful if some of this detail made its way into the glance section of https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > In fact people are running glance under uwsgi in production already because it > makes a lot of things easier and the current issues don't effect most users. That's good to know! > > -Matt Treinish > > > > > I am sure there are probably others, but I know of these 2. > > > > [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1 > > > > [2] There are a few other ways, as some other wsgi servers have grafted on > support for chunked transfer encoding. But, most wsgi servers have not > implemented a method. 
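To make the uwsgi side of the chunked transfer encoding point concrete:
the knob lives entirely in the server config, not in the WSGI
application. A minimal sketch of a glance-api uwsgi config might look
like the following; the address, process count, and wsgi script path
are illustrative assumptions, while http-chunked-input is the uwsgi
HTTP router option that lets the application read chunked request
bodies:

    [uwsgi]
    # speak HTTP on this address via uwsgi's HTTP router (address illustrative)
    http = 127.0.0.1:60999
    # path to glance's wsgi script; the location varies by install (assumption)
    wsgi-file = /usr/local/bin/glance-wsgi-api
    # enable reading chunked request bodies, needed for image upload
    http-chunked-input = true
    processes = 2
    enable-threads = true

A reverse proxy terminating TLS can still sit in front of that address;
the point is that mod_wsgi has no equivalent option, which is why it
can't serve this code path.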
From mtreinish at kortar.org Tue May 8 17:55:43 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 8 May 2018 13:55:43 -0400 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <1525800729-sup-4338@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> Message-ID: <20180508175543.GB11443@zeong> On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote: > > (added [glance] subject tag) > > Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400: > > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote: > > > On 08/05/18 16:53, Doug Hellmann wrote: > > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > > > >> On 08/05/18 16:09, Zane Bitter wrote: > > > >>> On 30/04/18 17:16, Ben Nemec wrote: > > > >>>>> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > > > >>>>>> 1. Fix oslo.service functional tests -- the Oslo team needs help > > > >>>>>>     maintaining this library. Alternatively, we could move all > > > >>>>>>     services to use cotyledon (https://pypi.org/project/cotyledon/). > > > >>> > > > >>> I submitted a patch that fixes the py35 gate (which was broken due to > > > >>> changes between CPython 3.4 and 3.5), so once that merges we can flip > > > >>> the gate back to voting: > > > >>> > > > >>> https://review.openstack.org/566714 > > > >>> > > > >>>> For everyone's awareness, we discussed this in the Oslo meeting today > > > >>>> and our first step is to see how many, if any, services are actually > > > >>>> relying on the oslo.service functionality that doesn't work in Python > > > >>>> 3 today.  From there we will come up with a plan for how to move forward. > > > >>>> > > > >>>> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > > > >>> > > > >>> These tests are currently skipped in both oslo_service and nova. > > > >>> (Equivalent tests were removed from Neutron and Manila on the principle > > > >>> that they're now oslo_service's responsibility.) > > > >>> > > > >>> This appears to be a series of long-standing bugs in eventlet: > > > >>> > > > >>> Python 3.5 failure mode: > > > >>> https://github.com/eventlet/eventlet/issues/308 > > > >>> https://github.com/eventlet/eventlet/issues/189 > > > >>> > > > >>> Python 3.4 failure mode: > > > >>> https://github.com/eventlet/eventlet/issues/476 > > > >>> https://github.com/eventlet/eventlet/issues/145 > > > >>> > > > >>> There are also more problems coming down the pipeline in Python 3.6: > > > >>> > > > >>> https://github.com/eventlet/eventlet/issues/371 > > > >>> > > > >>> That one is resolved in eventlet 0.21, but we have that blocked by > > > >>> upper-constraints: > > > >>> http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt#n135 > > > >>> > > > >>> > > > >>> Given that the code in question relates solely to standalone WSGI > > > >>> servers with SSL and everything should have already migrated to Apache, > > > >>> and that the upstream is clearly overworked and unlikely to merge fixes > > > >>> any time soon (plus we would have to deal with the fallout of moving the > > > >>> upper constraint), I agree that it would be preferable if we could just > > > >>> ditch this functionality. 
> > > >> > > > >> There are a few projects that have not migrated, and some that have > > > >> issues running in non standalone WSGI mode (due, ironically to eventlet) > > > >> > > > >> We should probably get people to run these projects behind an reverse > > > >> proxy, and terminate SSL there, but right now we don't have that > > > >> documented. > > > > > > > > Do you know which projects? > > > > > > I know of 2: > > > > > > Designate - mainly due to the major lack of resources available during > > > the uwsgi goal period, and the level of work needed to unravel our > > > tooling to support it. > > > > > > Glance - Has issues with image upload + uwsgi + eventlet [1] > > > > This actually is a bit misleading. Glance works fine with image upload and uwsgi. > > That's the only configuration of glance in a wsgi app that works because > > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides > > an alternate interface to read chunked requests which enables this to work. > > If you look at the bugs linked off that release note about image upload > > you'll see they're all fixed. > > Is this documented somewhere? The wsgi limitation or the glance usage? I wrote up a doc about running under apache when I added the uwsgi chunked transfer encoding support to glance about running glance under apache here: https://docs.openstack.org/glance/latest/admin/apache-httpd.html Which includes how you have to configure things to get it working and a section on why mod_wsgi doesn't work. > > > > > The issues glance has with running in a wsgi app are related to it's use of > > async tasks via taskflow. (which includes the tasks api and image import stuff) > > This shouldn't be hard to fix, and I've had patches up to address these for > > months: > > > > https://review.openstack.org/#/c/531498/ > > https://review.openstack.org/#/c/549743/ > > > > Part of the issue is that there is no api driven testing for these async api > > functions or any documented way to test them. Which is why I marked the 2nd > > one WIP, since I have no method to test it and after asking several times > > for a test case or some other method to validate these APIs without an answer. > > It would be helpful if some of this detail made its way into the glance > section of https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects It really doesn't have anything to do with Python 3 though since the bug with glance's taskflow usage is on both py2 and py3. In fact we're already running glance under uwsgi in the gate with python 3 today for the dsvm py3 jobs. The reason these bugs haven't come up there is because there is no test coverage for any of these async APIs. But I can add it to the wiki later today. > > > > > In fact people are running glance under uwsgi in production already because it > > makes a lot of things easier and the current issues don't effect most users. > > That's good to know! > > > > > -Matt Treinish > > > > > > > > I am sure there are probably others, but I know of these 2. > > > > > > [1] https://docs.openstack.org/releasenotes/glance/unreleased.html#b1 > > > > > > > [2] There are a few other ways, as some other wsgi servers have grafted on > > support for chunked transfer encoding. But, most wsgi servers have not > > implemented a method. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Tue May 8 19:02:05 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 15:02:05 -0400 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <20180508175543.GB11443@zeong> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> Message-ID: <1525805985-sup-7865@lrrr.local> Excerpts from Matthew Treinish's message of 2018-05-08 13:55:43 -0400: > On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote: > > > > (added [glance] subject tag) > > > > Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400: > > > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote: > > > > On 08/05/18 16:53, Doug Hellmann wrote: > > > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: [snip] > > > > Glance - Has issues with image upload + uwsgi + eventlet [1] > > > > > > This actually is a bit misleading. Glance works fine with image upload and uwsgi. > > > That's the only configuration of glance in a wsgi app that works because > > > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides > > > an alternate interface to read chunked requests which enables this to work. > > > If you look at the bugs linked off that release note about image upload > > > you'll see they're all fixed. > > > > Is this documented somewhere? > > The wsgi limitation or the glance usage? I wrote up a doc about running under > apache when I added the uwsgi chunked transfer encoding support to glance about > running glance under apache here: > > https://docs.openstack.org/glance/latest/admin/apache-httpd.html > > Which includes how you have to configure things to get it working and a section > on why mod_wsgi doesn't work. I meant the glance usage so it sounds like you've covered the docs for that. Thanks! > > > The issues glance has with running in a wsgi app are related to it's use of > > > async tasks via taskflow. (which includes the tasks api and image import stuff) > > > This shouldn't be hard to fix, and I've had patches up to address these for > > > months: > > > > > > https://review.openstack.org/#/c/531498/ > > > https://review.openstack.org/#/c/549743/ > > > > > > Part of the issue is that there is no api driven testing for these async api > > > functions or any documented way to test them. Which is why I marked the 2nd > > > one WIP, since I have no method to test it and after asking several times > > > for a test case or some other method to validate these APIs without an answer. 
> > > > It would be helpful if some of this detail made its way into the glance > > section of https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > It really doesn't have anything to do with Python 3 though since the bug with > glance's taskflow usage is on both py2 and py3. In fact we're already running > glance under uwsgi in the gate with python 3 today for the dsvm py3 jobs. The > reason these bugs haven't come up there is because there is no test coverage > for any of these async APIs. But I can add it to the wiki later today. Will it block us from moving glance to python 3 if we drop the WSGI code from oslo.service so that the only way to deploy is behind some other WSGI server? Doug From mtreinish at kortar.org Tue May 8 19:16:40 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 8 May 2018 15:16:40 -0400 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <1525805985-sup-7865@lrrr.local> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <1525805985-sup-7865@lrrr.local> Message-ID: <20180508191640.GA16227@sinanju.localdomain> On Tue, May 08, 2018 at 03:02:05PM -0400, Doug Hellmann wrote: > Excerpts from Matthew Treinish's message of 2018-05-08 13:55:43 -0400: > > On Tue, May 08, 2018 at 01:34:11PM -0400, Doug Hellmann wrote: > > > > > > (added [glance] subject tag) > > > > > > Excerpts from Matthew Treinish's message of 2018-05-08 12:22:56 -0400: > > > > On Tue, May 08, 2018 at 05:01:36PM +0100, Graham Hayes wrote: > > > > > On 08/05/18 16:53, Doug Hellmann wrote: > > > > > > Excerpts from Graham Hayes's message of 2018-05-08 16:28:46 +0100: > > [snip] > > > > > > Glance - Has issues with image upload + uwsgi + eventlet [1] > > > > > > > > This actually is a bit misleading. Glance works fine with image upload and uwsgi. > > > > That's the only configuration of glance in a wsgi app that works because > > > > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides > > > > an alternate interface to read chunked requests which enables this to work. > > > > If you look at the bugs linked off that release note about image upload > > > > you'll see they're all fixed. > > > > > > Is this documented somewhere? > > > > The wsgi limitation or the glance usage? I wrote up a doc about running under > > apache when I added the uwsgi chunked transfer encoding support to glance about > > running glance under apache here: > > > > https://docs.openstack.org/glance/latest/admin/apache-httpd.html > > > > Which includes how you have to configure things to get it working and a section > > on why mod_wsgi doesn't work. > > I meant the glance usage so it sounds like you've covered the docs > for that. Thanks! > > > > > The issues glance has with running in a wsgi app are related to it's use of > > > > async tasks via taskflow. (which includes the tasks api and image import stuff) > > > > This shouldn't be hard to fix, and I've had patches up to address these for > > > > months: > > > > > > > > https://review.openstack.org/#/c/531498/ > > > > https://review.openstack.org/#/c/549743/ > > > > > > > > Part of the issue is that there is no api driven testing for these async api > > > > functions or any documented way to test them. 
Which is why I marked the 2nd > > > > one WIP, since I have no method to test it and after asking several times > > > > for a test case or some other method to validate these APIs without an answer. > > > > > > It would be helpful if some of this detail made its way into the glance > > > section of https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > > > It really doesn't have anything to do with Python 3 though since the bug with > > glance's taskflow usage is on both py2 and py3. In fact we're already running > > glance under uwsgi in the gate with python 3 today for the dsvm py3 jobs. The > > reason these bugs haven't come up there is because there is no test coverage > > for any of these async APIs. But I can add it to the wiki later today. > > Will it block us from moving glance to python 3 if we drop the WSGI > code from oslo.service so that the only way to deploy is behind > some other WSGI server? > It shouldn't be a blocker, the wsgi entrypoint just uses paste to expose the wsgi app directly: https://github.com/openstack/glance/blob/master/glance/common/wsgi_app.py#L59-L67 oslo.service doesn't come into play in that code path. So it won't block the deploying with uwsgi model. The bugs addressed by the 2 patches I referenced above will still be present though. Although, I don't think glance uses oslo.service even in the case where it's using the standalone eventlet server. It looks like it launches eventlet.wsgi directly: https://github.com/openstack/glance/blob/master/glance/common/wsgi.py and I don't see oslo.service in the requirements file either: https://github.com/openstack/glance/blob/master/requirements.txt -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ekcs.openstack at gmail.com Tue May 8 19:29:45 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 08 May 2018 12:29:45 -0700 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications In-Reply-To: References: Message-ID: Thanks, Thomas! I see the point that it is impractical to configure a service with a fixed keystone token to use in webhook notifications because they expire fairly quickly. I'm thinking about the situation where the sending service can obtain tokens directly from keystone. In that case I'm guessing the main reason it hasn't been done that way is because it does not generalize to most other services that don't connect to keystone? On 5/6/18, 9:30 AM, "Thomas Herve" wrote: >On Sat, May 5, 2018 at 1:53 AM, Eric K wrote: >> Thanks a lot Witold and Thomas! >> >> So it doesn't seem that someone is currently using a keystone token to >> authenticate web hook? Is is simply because most of the use cases had >> involved services which do not use keystone? >> >> Or is it unsuitable for another reason? > >It's fairly impractical for webhooks because > >1) Tokens expire fairly quickly. >2) You can't store all the data in the URL, so you need to store the >token and the URL separately. 
> >-- >Thomas > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Tue May 8 19:31:41 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 8 May 2018 15:31:41 -0400 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <20180508191640.GA16227@sinanju.localdomain> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <1525805985-sup-7865@lrrr.local> <20180508191640.GA16227@sinanju.localdomain> Message-ID: <557b34e5-f27f-6975-07fe-85f6b6c707d7@redhat.com> On 08/05/18 15:16, Matthew Treinish wrote: > Although, I don't think glance uses oslo.service even in the case where it's > using the standalone eventlet server. It looks like it launches eventlet.wsgi > directly: > > https://github.com/openstack/glance/blob/master/glance/common/wsgi.py > > and I don't see oslo.service in the requirements file either: > > https://github.com/openstack/glance/blob/master/requirements.txt It would probably independently suffer from https://bugs.launchpad.net/manila/+bug/1482633 in Python 3 then. IIUC the code started in oslo incubator but projects like neutron and manila converted to use the oslo.service version. There may be other copies of it still floating around... - ZB From ekcs.openstack at gmail.com Tue May 8 19:47:27 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 08 May 2018 12:47:27 -0700 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications In-Reply-To: References: Message-ID: To clarify, one of the reasons I'd like to accept webhook notifications authenticated with keystone tokens is that I don't want the access to expire, but of course it's poor practice to use a signed URL that never expires. Eric On 5/8/18, 12:29 PM, "Eric K" wrote: >Thanks, Thomas! > >I see the point that it is impractical to configure a service with a fixed >keystone token to use in webhook notifications because they expire fairly >quickly. > >I'm thinking about the situation where the sending service can obtain >tokens directly from keystone. In that case I'm guessing the main reason >it hasn't been done that way is because it does not generalize to most >other services that don't connect to keystone? > >On 5/6/18, 9:30 AM, "Thomas Herve" wrote: > >>On Sat, May 5, 2018 at 1:53 AM, Eric K wrote: >>> Thanks a lot Witold and Thomas! >>> >>> So it doesn't seem that someone is currently using a keystone token to >>> authenticate web hook? Is is simply because most of the use cases had >>> involved services which do not use keystone? >>> >>> Or is it unsuitable for another reason? >> >>It's fairly impractical for webhooks because >> >>1) Tokens expire fairly quickly. >>2) You can't store all the data in the URL, so you need to store the >>token and the URL separately. 
>>
>>--
>>Thomas
>>
>>_________________________________________________________________________
>>_
>>OpenStack Development Mailing List (not for usage questions)
>>Unsubscribe:
>>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>

From zbitter at redhat.com  Tue May  8 20:20:07 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Tue, 8 May 2018 16:20:07 -0400
Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications
In-Reply-To:
References:
Message-ID:

On 03/05/18 15:49, Eric K wrote:
> Question to the projects which send or consume webhook notifications
> (telemetry, monasca, senlin, vitrage, etc.), what are your
> supported/preferred authentication mechanisms? Bearer token (e.g.
> Keystone)? Signing?

Signed URLs and regular Keystone auth are both options, and both are
used in various places, as Thomas said. Whenever you can avoid
implementing your own signed URL scheme, though, it's better that you
don't implement one: security-sensitive things like authentication
should be implemented as few times as possible. Eventually we should be
able to mostly eliminate the need for signed URLs with
http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/capabilities-app-creds.html
but we're not there yet.

If the caller is something that is basically trusted, then you should
prefer regular keystone auth. If you need to make sure that the caller
can only use that one API, signed URLs are still the only game in town
for now (but we hope this is very temporary).

> Any pointers to past discussions on the topic? My interest here is having
> Congress consume and send webhook notifications.

Please don't. Webhooks are a security nightmare. They can be used to
enlist the OpenStack infrastructure in mounting attacks on other random
sites, or to attack the OpenStack operators themselves if everything is
not properly secured. Ideally there should be only one place in
OpenStack that can send webhooks, so that there's only one thing for
operators to secure. (IMHO since that thing will need to keep a queue
of pending webhooks to send, the logical place would be Zaqar
notifications.) Obviously that's not the case today - we already send
webhooks from Aodh, Mistral, Zaqar and others. But at least we can
avoid adding more.

> I know some people are working on adding the keystone auth option to
> Monasca's webhook framework. If there is a project that already does it,
> it could be a very helpful reference.

There's a sort of convention whereby, if you supply a webhook URL with
the scheme trust+https://, the service creates a keystone trust and
uses that to get keystone tokens, which are then used to authenticate
the webhook request. Aodh and Zaqar at least follow this convention.
The trust part is an important point that you're overlooking: (from
your other message)

> I'm thinking about the situation where the sending service can obtain
> tokens directly from keystone.

If you haven't stored the user's password then you cannot, in fact,
obtain more tokens from keystone. You only have the one they gave you
with the initial request, and that will soon expire. So you have to
create a trust (which doesn't expire) and store the trust ID, which you
can then use in combination with the service token to get additional
user tokens when required.

Don't do that though.
Just create a Zaqar queue, store a pre-signed URL that allows you to post to it, and set up a Zaqar notification for the webhook URL you want to hit (which can be a trust+https:// URL). Avoid being the next project to reinvent the wheel :) cheers, Zane. From doug at doughellmann.com Tue May 8 20:22:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 08 May 2018 16:22:00 -0400 Subject: [openstack-dev] [tc] [nova] [octavia] [ironic] [keystone] [policy] Spec. Freeze Exception - Default Roles In-Reply-To: <28cb94f9-2cff-16b0-5f93-ca9780f2b7b4@gmail.com> References: <28cb94f9-2cff-16b0-5f93-ca9780f2b7b4@gmail.com> Message-ID: <1525810667-sup-3796@lrrr.local> Excerpts from Lance Bragstad's message of 2018-05-04 15:16:09 -0500: > > On 05/04/2018 02:55 PM, Harry Rybacki wrote: > > Greetings All, > > > > After a discussion in #openstack-tc[1] earlier today, the Keystone > > team is adjusting its approach in proposing default roles[2]. > > Subsequently, I have ported the current default roles specification > > from openstack-specs[3] to keystone-specs[2]. > > > > The original review has been in a pretty stable state for a few weeks. > > As such, I propose we allow the new spec an exception to the original > > Rocky-m1 proposal freeze date. > > I don't have an issue with this, especially since we talked about it heavily at the PTG. We also had people familiar with keystone +1 the openstack-spec prior to keystone's proposal freeze. I'm OK granting an exception here if other keystone contributors don't object. > > > > > I invite more discussion around default roles, and our proposed > > approach. The Keystone team has a forum session[4] dedicated to this > > topic at 1135 on day one of the Vancouver Summit. Everyone should feel > > welcome and encouraged to attend -- we hope that this work will lead > > to an OpenStack Community Goal in a not-so-distant release. > > I think scoping this down to be keystone-specific is a smart move. It allows us to focus on building a solid template for other projects to learn from. I was pleasantly surprised to hear people in -tc suggest this as a candidate for a community goal in Stein or T. > > Also, big thanks to jroll, dhellmann, ttx, zaneb, smcginnis, johnsom, and mnaser for taking time to work through this with us. This is a good opportunity for us to experiment with simplifying a community process. We've seen repeatedly that big initiatives like this work best when the team behind them is committed to the specific initiative. Rather than trying to assemble a new team of uncertain membership or interest to review all "global" specs like this, I like the idea of the keystone team continuing to drive the work on this change while seeking input from the rest of the community. We still need to have consensus about the plan, which we can do via the normal mailing list threads and review of the spec. And then when we have one or two projects done as an example, we can review what else we might need before taking it on as a goal. Doug From openstack at nemebean.com Tue May 8 20:54:53 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 8 May 2018 15:54:53 -0500 Subject: [openstack-dev] [oslo] Project Update Topics Message-ID: Hi, This was discussed in the meeting this week too, but I wanted to send it to the list as well for a little more visibility. We've started an etherpad at https://etherpad.openstack.org/p/oslo-project-update-rocky to collect any topics that folks want included in the Oslo project update session in Vancouver. 
We're less than two weeks out from Summit so please don't wait if you have something to add. Thanks. -Ben From ekcs.openstack at gmail.com Tue May 8 21:13:48 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Tue, 08 May 2018 14:13:48 -0700 Subject: [openstack-dev] [keystone][monasca][congress][senlin][telemetry] authenticated webhook notifications In-Reply-To: References: Message-ID: Thank you, Zane for the discussion. Point taken about sending webhook notifications. Primarily I want Congress to consume webhook notifications from the openstack services which already send them (monasca, vitrage, etc.). Most of them do not currently support sending appropriate keystone tokens with the notifications, but some are open to doing it. The aodh and zaqar references are exactly what I was hoping to find. I couldn't find a reference to it in aodh docs or much on google, so many thanks for the pointer! Eric On 5/8/18, 1:20 PM, "Zane Bitter" wrote: >If the caller is something that is basically trusted, then you should >prefer regular keystone auth. If you need to make sure that the caller >can only use that one API, signed URLs are still the only game in town >for now (but we hope this is very temporary). > >> I know some people are working on adding the keystone auth option to >> Monasca's webhook framework. If there is a project that already does it, >> it could be a very helpful reference. > >There's a sort of convention that where you supply a webhook URL with a >scheme trust+https:// then the service creates a keystone trust and uses >that to get keystone tokens which are then used to authenticate the >webhook request. Aodh and Zaqar at least follow this convention. The >trust part is an important point that you're overlooking: (from your >other message) From myoung at redhat.com Wed May 9 02:24:59 2018 From: myoung at redhat.com (Matt Young) Date: Tue, 8 May 2018 22:24:59 -0400 Subject: [openstack-dev] =?utf-8?q?=5Btripleo=5D_CI_Squads=E2=80=99_Sprint?= =?utf-8?q?_12_Summary=3A_libvirt-reproducer=2C_python-tempestconf?= Message-ID: Greetings, The TripleO squads for CI and Tempest have just completed Sprint 12. The following is a summary of activities during this sprint. Details on our team structure can be found in the spec [1]. --- # Sprint 12 Epic (CI): Libvirt Reproducer * Epic Card: https://trello.com/c/JEGLSVh6/51-reproduce-ci-jobs-with-libvirt * Tasks: http://ow.ly/O1vZ30jTSc3 "Allow developers to reproduce a multinode CI job on a bare metal host using libvirt" "Enable the same workflows used in upstream CI / reproducer using libvirt instead of OVB as the provisioning mechanism" The CI Squad prototyped, designed, and implemented new functionality for our CI reproducer. “Reproducers” are scripts generated by each CI job that allow the job/test to be recreated. These are useful to both CI team members when investigating failures, as well as developers creating failures with the intent to iteratively debug and/or fix issues. Prior to this sprint, the reproducer scripts supported reproduction of upstream CI jobs using OVB, typically on RDO Cloud. This sprint we extended this capability to support reproduction of jobs in libvirt. This work was done for a few reasons: * (short term) enable the team to work on upgrades and other CI team tasks more efficiently by mitigating recurring RDO Cloud infrastructure issues. This was the primary motivator for doing this work at this time. 
* (mid-longer term) enhance / enable iterative workflows such as THT
development, debugging deployment scenarios, etc. Snapshots in
particular have proven quite useful. As we look towards a future with a
viable single-node deployment capability, libvirt has clear benefits
for common developer scenarios.

It is expected that further iteration and refinement of this initial
implementation will be required before the tripleo-ci team is able to
support this broadly. What we've done works as designed. While we
welcome folks to explore, please note that we are not announcing a
supported libvirt reproducer meant for use outside the tripleo-ci team
at this time. We expect some degree of change, and have a number of
RFEs resulting from our testing, as well as documentation patches that
we're iterating on. That said, we think it's really cool, it works well
in its current form, and we are optimistic about its future.

## We did the following (CI):

* Add support to the reproducer script [2,3] generated by CI to enable libvirt.
* Basic snapshot create/restore [4] capability.
* Tested Scenarios: featureset 3 (UC idem), 10 (multinode containers),
37 (min OC + minor update). See sprint cards for details.
* 14-18 RFEs identified as part of testing for future work
http://ow.ly/J2u830jTSLG

---

# Sprint 12 Epic (Tempest):

* Epic Card: https://trello.com/c/ifIYQsxs/75-sprint-12-undercloud-tempest
* Tasks: http://ow.ly/GGvc30jTSfV

"Run tempest on undercloud by using containerized and packaged tempest"
"Complete work items carried from sprint 11 or another side work going on."

## We did the following (Tempest):

* Create tripleo-ci jobs that run containerized tempest on all stable branches.
* Create documentation for configuring and running tempest using
containerized tempest on the UC @tripleo.org, and blog posts. [5,6,7]
* Run certification tests via a new Jenkins job using an ansible role [8]
* Refactor the validate-tempest CI role for UC and containers

---

# Ruck and Rover

Each sprint two of the team members assume the roles of Ruck and Rover
(each for half of the sprint).

* Ruck is responsible for monitoring the CI, checking for failures,
opening bugs, and participating in meetings; this person is your focal
point for any CI issues.
* Rover is responsible for working on these bugs and fixing problems so
that the rest of the team can stay focused on the sprint.

For more information about our structure, check [1]

## Ruck & Rover (Sprint 12), Etherpad [9,10]:

* Quique Llorente (quiquell)
* Gabriele Cerami (panda)

A few notable issues where substantial time was spent were:

1767099 periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset030-master
vxlan tunnel fails randomly
1758899 reproducer-quickstart.sh building wrong gating package.
1767343 gate tripleo-ci-centos-7-containers-multinode fails to update packages in cron container 1762351 periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload is timeout Depends on https://bugzilla.redhat.com/show_bug.cgi?id=1565179 1766873 quickstart on ovb doesn't yield a deployment 1767049 Error during test discovery : 'must specify exactly one of host or intercept' Depends on https://bugzilla.redhat.com/show_bug.cgi?id=1434385 1767076 Creating pingtest_sack fails: Failed to schedule instances: NoValidHost_Remote: No valid host was found 1763634 devmode.sh --ovb fails to deploy overcloud 1765680 Incorrect branch used for not gated tripleo-upgrade repo If you have any questions and/or suggestions, please contact us in #oooq or #tripleo Thanks, Matt tq: https://github.com/openstack/tripleo-quickstart tqe: https://github.com/openstack/tripleo-quickstart-extras [1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html [2] {{tq}}/roles/libvirt/setup/overcloud/tasks/libvirt_nodepool.yml [3] {{tqe}}/roles/create-reproducer-script/templates/reproducer-quickstart.sh.j2#L50 [4] {{tqe}}/roles/snapshot-libvirt [5] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/openstack_integration_test_suite_guide [6] https://blogs.rdoproject.org/2018/05/running-tempest-tests-against-a-tripleo-undercloud [7] https://blogs.rdoproject.org/2018/05/consuming-kolla-tempest-container-image-for-running-tempest-tests [8] https://github.com/redhat-cip/ansible-role-openstack-certification [9] https://review.rdoproject.org/etherpad/p/ruckrover-sprint12 [10] https://etherpad.openstack.org/p/rover-030518 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Wed May 9 02:37:53 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 9 May 2018 11:37:53 +0900 Subject: [openstack-dev] [qa][release][ironic][requirements] hacking 1.1.0 released and ironic CI gates failing pep8 In-Reply-To: <1525800079-sup-7992@lrrr.local> References: <1525800079-sup-7992@lrrr.local> Message-ID: On Wed, May 9, 2018 at 2:23 AM, Doug Hellmann wrote: > (I added the [qa] topic tag for the QA team, since they own hacking, and > [requirements] for that team since I have a question about capping.) > > Excerpts from Julia Kreger's message of 2018-05-08 12:43:07 -0400: >> About two hours ago, we started seeing Ironic CI jobs failing pep8 >> with new errors[1]. For some of our repositories, it just seems to be >> a couple of lines that need to be fixed. On ironic itself, supporting >> this might have us dead in the water for a while to fix the code in >> accordance with what hacking is now expecting. >> >> That being said, dtantsur and dhellmann have the perception that new >> checks are supposed to be opt-in only, yet this new hacking appears to >> have at W605 and W606 enabled by default as indicated by discussion in >> #openstack-release[2]. >> >> Please advise, it seems like the release team ought to revert the >> breaking changes and cut a new release as soon as possible. 
> As discussed in #openstack-release, those checks are pulled in via
> pycodestyle rather than hacking itself, and pycodestyle doesn't have an
> opt-in policy.
>
> Hacking is in the blacklist for requirements management, so teams
> *ought* to be able to cap it, if I understand correctly. So I suggest at
> least starting with a patch to test that.
>
> Doug

Sorry for the inconvenience, but I agree with Doug on capping hacking on the project side. Keeping hacking in the blacklist, and never in the g-r sync list, was meant to avoid exactly this kind of situation: the compatible hacking version and its cap are maintained on the project side only, according to each project's own source code. Almost all projects have capped the hacking version in their test-requirements.txt and update it as and when the project team wants, once the code passes the new rules. For example [1].
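An illustrative cap of that form (the bounds here are hypothetical; the exact range depends on which hacking release a given project has verified its code against, 1.1.0 being the release at issue in this thread):

    # test-requirements.txt
    hacking>=1.0.0,<1.1.0  # pin below the release that pulled in the newer pycodestyle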
It is difficult for the QA team or the release team to verify that a hacking release will not break anyone, even when hacking itself adds no new rules (this time the failures are caused by pycodestyle). To avoid such failures in the future, I am in favor of capping hacking on the project side, as the majority of projects already do. I searched manually and found the repos below, which do not cap hacking [2]. I am giving capping the version on those repos a try; project teams can decide and merge the patches to fix the current gate as well as to avoid such situations in the future.

- https://review.openstack.org/#/q/topic:cap-hacking+(status:open+OR+status:merged)

FYI: W503 was raising failures even before this hacking release; that was fixed in Tempest a month ago. [3]

..1 https://review.openstack.org/#/c/397486/
..2 http://codesearch.openstack.org/?q=hacking&i=nope&files=test-requirements.txt&repos=

1. openstack/fuel-ccp-installer
2. openstack/fuel-ccp-tests
3. openstack/ironic
4. openstack/ironic-inspector
5. openstack/ironic-lib
6. openstack/ironic-python-agent
7. openstack/python-ironic-inspector-client
8. openstack/python-ironicclient
9. openstack/kolla-ansible
10. openstack/monasca-analytics
11. openstack/networking-generic-switch
12. openstack/patrole
13. openstack/pyghmi
14. openstack/rally
15. openstack/rally-openstack
16. openstack-infra/storyboard
17. openstack/sushy

..3 https://review.openstack.org/#/c/560360/

-gmann

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From a.chadin at servionica.ru Wed May 9 07:44:30 2018
From: a.chadin at servionica.ru (Alexander Chadin)
Date: Wed, 9 May 2018 07:44:30 +0000
Subject: [openstack-dev] [watcher] meeting today
Message-ID: 

Watcher team,

We have a meeting today on the #openstack-meeting-alt channel at 08:00 UTC.

____

Alex

From kobayashi.hiroaki at lab.ntt.co.jp Wed May 9 07:54:01 2018
From: kobayashi.hiroaki at lab.ntt.co.jp (Hiroaki Kobayashi)
Date: Wed, 9 May 2018 16:54:01 +0900
Subject: [openstack-dev] [blazar][placement][nova] Placement support in Blazar
Message-ID: 

Hello,

As we discussed at past summits and PTGs, Blazar plans to use the Placement API to modernize its implementation. Based on the ideas teams shared, the Blazar team has discussed possible approaches and listed them down here:

We want to reach a consensus on the basic approach with the Placement team before we start the detailed design, because it highly depends on the Placement roadmap. It would be a great help if the Placement team could look at the etherpad and give us feedback!

Thanks,
Hiroaki

From sangho at opennetworking.org Wed May 9 08:00:11 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 9 May 2018 16:00:11 +0800
Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors
Message-ID: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org>

Hello,

I am getting the following unit test errors in the Zuul tests; see below.
The errors occur only in the pike version; in stable/ocata I do not see them.
(If you can give me any clue, it would be very helpful.)

BTW, with nosetests there is no error. However, with tox -e py27 I am getting different errors, like below. It seems the tests are somehow using a different version of the neutron library: the actual neutron is installed under /opt/stack/neutron, and it has the correct python files, such as the callbacks and driver api modules that the errors below complain about.

So, I would like to know how to specify the correct neutron location in tox tests.

Thank you,

Sangho

tox -e py27 errors.

---------------------------------

=========================
Failures during discovery
=========================
--- import errors ---
Failed to import test module: networking_onos.tests.unit.extensions.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in <module>
    import networking_onos.extensions.securitygroup as onos_sg_driver
  File "networking_onos/extensions/securitygroup.py", line 21, in <module>
    from networking_onos.extensions import callback
  File "networking_onos/extensions/callback.py", line 15, in <module>
    from neutron.callbacks import events
ImportError: No module named callbacks

Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver
Traceback (most recent call last):
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path
    module = self._get_module_from_name(name)
  File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name
    __import__(name)
  File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in <module>
    from neutron.plugins.ml2 import driver_api as api
ImportError: cannot import name driver_api

Zuul errors.
--------------------------- Traceback (most recent call last): 2018-05-09 05:12:30.077594 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context 2018-05-09 05:12:30.077653 | ubuntu-xenial | context) 2018-05-09 05:12:30.077964 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py", line 470, in do_execute 2018-05-09 05:12:30.078065 | ubuntu-xenial | cursor.execute(statement, parameters) 2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type. 2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details. 2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last): 2018-05-09 05:12:30.078683 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py", line 98, in resource 2018-05-09 05:12:30.078791 | ubuntu-xenial | result = method(request=request, **args) 2018-05-09 05:12:30.079085 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py", line 615, in update 2018-05-09 05:12:30.079202 | ubuntu-xenial | return self._update(request, id, body, **kwargs) 2018-05-09 05:12:30.079480 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py", line 93, in wrapped 2018-05-09 05:12:30.079574 | ubuntu-xenial | setattr(e, '_RETRY_EXCEEDED', True) 2018-05-09 05:12:30.079870 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-09 05:12:30.079941 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.080242 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-09 05:12:30.080350 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.080629 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py", line 89, in wrapped 2018-05-09 05:12:30.080706 | ubuntu-xenial | return f(*args, **kwargs) 2018-05-09 05:12:30.080985 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py", line 150, in wrapper 2018-05-09 05:12:30.081064 | ubuntu-xenial | ectxt.value = e.inner_exc 2018-05-09 05:12:30.081363 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-09 05:12:30.081433 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.081733 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-09 05:12:30.081849 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.082131 | ubuntu-xenial | File 
"/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py", line 138, in wrapper 2018-05-09 05:12:30.082208 | ubuntu-xenial | return f(*args, **kwargs) 2018-05-09 05:12:30.082489 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py", line 128, in wrapped 2018-05-09 05:12:30.082620 | ubuntu-xenial | LOG.debug("Retry wrapper got retriable exception: %s", e) 2018-05-09 05:12:30.082931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-09 05:12:30.083006 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.083306 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-09 05:12:30.083415 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.083696 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py", line 124, in wrapped 2018-05-09 05:12:30.083786 | ubuntu-xenial | return f(*dup_args, **dup_kwargs) 2018-05-09 05:12:30.084081 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py", line 676, in _update 2018-05-09 05:12:30.084161 | ubuntu-xenial | original=orig_object_copy) 2018-05-09 05:12:30.084466 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py", line 53, in notify 2018-05-09 05:12:30.084611 | ubuntu-xenial | _get_callback_manager().notify(resource, event, trigger, **kwargs) 2018-05-09 05:12:30.084932 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py", line 105, in _wrapped 2018-05-09 05:12:30.085026 | ubuntu-xenial | raise db_exc.RetryRequest(e) 2018-05-09 05:12:30.085319 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ 2018-05-09 05:12:30.085387 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.085687 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise 2018-05-09 05:12:30.085796 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.086098 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py", line 100, in _wrapped 2018-05-09 05:12:30.086192 | ubuntu-xenial | return function(*args, **kwargs) 2018-05-09 05:12:30.086499 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py", line 152, in notify 2018-05-09 05:12:30.086613 | ubuntu-xenial | raise exceptions.CallbackFailure(errors=errors) 2018-05-09 05:12:30.094917 | ubuntu-xenial | CallbackFailure: Callback neutron.notifiers.nova.Notifier._send_nova_notification-115311 failed with "(sqlite3.InterfaceError) Error binding 
parameter 0 - probably unsupported type. [SQL: u'SELECT ports.project_id AS ports_project_id, ports.id AS ports_id, ports.name AS ports_name, ports.network_id AS ports_network_id, ports.mac_address AS ports_mac_address, ports.admin_state_up AS ports_admin_state_up, ports.status AS ports_status, ports.device_id AS ports_device_id, ports.device_owner AS ports_device_owner, ports.ip_allocation AS ports_ip_allocation, ports.standard_attr_id AS ports_standard_attr_id, standardattributes_1.id AS standardattributes_1_id, standardattributes_1.resource_type AS standardattributes_1_resource_type, standardattributes_1.description AS standardattributes_1_description, standardattributes_1.revision_number AS standardattributes_1_revision_number, standardattributes_1.created_at AS standardattributes_1_created_at, standardattributes_1.updated_at AS standardattributes_1_updated_at, securitygroupportbindings_1.port_id AS securitygroupportbindings_1_port_id, securitygroupportbindings_1.security_group_id AS securitygroupportbindings_1_security_group_id, portbindingports_1.port_id AS portbindingports_1_port_id, portbindingports_1.host AS portbindingports_1_host, portdataplanestatuses_1.port_id AS portdataplanestatuses_1_port_id, portdataplanestatuses_1.data_plane_status AS portdataplanestatuses_1_data_plane_status, portsecuritybindings_1.port_id AS portsecuritybindings_1_port_id, portsecuritybindings_1.port_security_enabled AS portsecuritybindings_1_port_security_enabled, ml2_port_bindings_1.port_id AS ml2_port_bindings_1_port_id, ml2_port_bindings_1.host AS ml2_port_bindings_1_host, ml2_port_bindings_1.vnic_type AS ml2_port_bindings_1_vnic_type, ml2_port_bindings_1.profile AS ml2_port_bindings_1_profile, ml2_port_bindings_1.vif_type AS ml2_port_bindings_1_vif_type, ml2_port_bindings_1.vif_details AS ml2_port_bindings_1_vif_details, ml2_port_bindings_1.status AS ml2_port_bindings_1_status, portdnses_1.port_id AS portdnses_1_port_id, portdnses_1.current_dns_name AS portdnses_1_current_dns_name, portdnses_1.current_dns_domain AS portdnses_1_current_dns_domain, portdnses_1.previous_dns_name AS portdnses_1_previous_dns_name, portdnses_1.previous_dns_domain AS portdnses_1_previous_dns_domain, portdnses_1.dns_name AS portdnses_1_dns_name, portdnses_1.dns_domain AS portdnses_1_dns_domain, qos_port_policy_bindings_1.policy_id AS qos_port_policy_bindings_1_policy_id, qos_port_policy_bindings_1.port_id AS qos_port_policy_bindings_1_port_id, standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, standardattributes_2.description AS standardattributes_2_description, standardattributes_2.revision_number AS standardattributes_2_revision_number, standardattributes_2.created_at AS standardattributes_2_created_at, standardattributes_2.updated_at AS standardattributes_2_updated_at, trunks_1.project_id AS trunks_1_project_id, trunks_1.id AS trunks_1_id, trunks_1.admin_state_up AS trunks_1_admin_state_up, trunks_1.name AS trunks_1_name, trunks_1.port_id AS trunks_1_port_id, trunks_1.status AS trunks_1_status, trunks_1.standard_attr_id AS trunks_1_standard_attr_id, subports_1.port_id AS subports_1_port_id, subports_1.trunk_id AS subports_1_trunk_id, subports_1.segmentation_type AS subports_1_segmentation_type, subports_1.segmentation_id AS subports_1_segmentation_id \nFROM ports LEFT OUTER JOIN standardattributes AS standardattributes_1 ON standardattributes_1.id = ports.standard_attr_id LEFT OUTER JOIN securitygroupportbindings AS securitygroupportbindings_1 ON 
ports.id = securitygroupportbindings_1.port_id LEFT OUTER JOIN portbindingports AS portbindingports_1 ON ports.id = portbindingports_1.port_id LEFT OUTER JOIN portdataplanestatuses AS portdataplanestatuses_1 ON ports.id = portdataplanestatuses_1.port_id LEFT OUTER JOIN portsecuritybindings AS portsecuritybindings_1 ON ports.id = portsecuritybindings_1.port_id LEFT OUTER JOIN ml2_port_bindings AS ml2_port_bindings_1 ON ports.id = ml2_port_bindings_1.port_id LEFT OUTER JOIN portdnses AS portdnses_1 ON ports.id = portdnses_1.port_id LEFT OUTER JOIN qos_port_policy_bindings AS qos_port_policy_bindings_1 ON ports.id = qos_port_policy_bindings_1.port_id LEFT OUTER JOIN trunks AS trunks_1 ON ports.id = trunks_1.port_id LEFT OUTER JOIN standardattributes AS standardattributes_2 ON standardattributes_2.id = trunks_1.standard_attr_id LEFT OUTER JOIN subports AS subports_1 ON ports.id = subports_1.port_id \nWHERE ports.id = ?'] [parameters: (,)]" 2018-05-09 05:12:30.097463 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip [1.435310s] ... FAILED 2018-05-09 05:12:30.097519 | ubuntu-xenial | 2018-05-09 05:12:30.097608 | ubuntu-xenial | Captured traceback: 2018-05-09 05:12:30.097702 | ubuntu-xenial | ~~~~~~~~~~~~~~~~~~~ 2018-05-09 05:12:30.097838 | ubuntu-xenial | Traceback (most recent call last): 2018-05-09 05:12:30.098230 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py", line 118, in func 2018-05-09 05:12:30.098369 | ubuntu-xenial | return f(self, *args, **kwargs) 2018-05-09 05:12:30.098642 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 166, in test_update_floating_ip 2018-05-09 05:12:30.098858 | ubuntu-xenial | resp = self._test_send_msg(floating_ip_request, 'put', url) 2018-05-09 05:12:30.099090 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 96, in _test_send_msg 2018-05-09 05:12:30.099261 | ubuntu-xenial | resp = self.api.put(url, self.serialize(dict_info)) 2018-05-09 05:12:30.099597 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 395, in put 2018-05-09 05:12:30.099712 | ubuntu-xenial | content_type=content_type, 2018-05-09 05:12:30.100056 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 747, in _gen_request 2018-05-09 05:12:30.100164 | ubuntu-xenial | expect_errors=expect_errors) 2018-05-09 05:12:30.100486 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 643, in do_request 2018-05-09 05:12:30.100603 | ubuntu-xenial | self._check_status(status, res) 2018-05-09 05:12:30.100931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py", line 675, in _check_status 2018-05-09 05:12:30.101002 | ubuntu-xenial | res) 2018-05-09 05:12:30.101354 | ubuntu-xenial | webtest.app.AppError: Bad response: 500 Internal Server Error (not 200 OK or 3xx redirect for http://localhost/floatingips/7464aaf0-27ea-448a-97df-51732f9e0e25.json) 2018-05-09 05:12:30.101685 | ubuntu-xenial | '{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "detail": "", "type": 
"HTTPInternalServerError"}}' 2018-05-09 05:12:30.101735 | ubuntu-xenial | 2018-05-09 05:12:30.102007 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_port_postcommit [0.004284s] ... ok -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Wed May 9 08:04:03 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Wed, 9 May 2018 10:04:03 +0200 Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors In-Reply-To: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> Message-ID: <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com> neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to update the networking-onos code and fix all imports there and push the changes... --- Andreas Scheuring (andreas_s) On 9. May 2018, at 10:00, Sangho Shin wrote: Hello, I am getting the following unit test error in Zuul test. See below. The error is caused only in the pike version, and in stable/ocata version, I do not have the error. ( If you can give me any clue, it would be very helpful ) BTW, in nosetests, there is no error. However, in tox -e py27 tests, I am getting different errors like below. Actually, it is caused because the tests are using different version of neutron library somehow. Actual neutron is installed in /opt/stack/neutron path, and it has correct python files such as callbacks and driver api, which are complained below. So, I would like to know how to specify the correct neutron location in tox tests. Thank you, Sangho tox -e py27 errors. --------------------------------- ========================= Failures during discovery ========================= --- import errors --- Failed to import test module: networking_onos.tests.unit.extensions.test_driver Traceback (most recent call last): File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in import networking_onos.extensions.securitygroup as onos_sg_driver File "networking_onos/extensions/securitygroup.py", line 21, in from networking_onos.extensions import callback File "networking_onos/extensions/callback.py", line 15, in from neutron.callbacks import events ImportError: No module named callbacks Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver Traceback (most recent call last): File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in from neutron.plugins.ml2 import driver_api as api ImportError: cannot import name driver_api Zuul errors. 
---
Andreas Scheuring (andreas_s)

On 9. May 2018, at 10:00, Sangho Shin wrote:

Hello,

I am getting the following unit test errors in the Zuul tests; see below.
The errors occur only in the pike version; in stable/ocata I do not see them.

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sangho at opennetworking.org Wed May 9 08:40:57 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 9 May 2018 16:40:57 +0800
Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors
In-Reply-To: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org>
References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org>
Message-ID: 

Hello,

I just manually installed neutron into the .tox folder (I am not sure whether this is the correct way to fix the problem) and ran the tox tests again. And... all tests passed, as below. But I am not sure why the Zuul tests fail as in my first email. :-(

Thank you,

Sangho

Tox test success log from my local environment:

-----------------------------------------------
ubuntu at sangho-sona-pike-1:~/networking-onos$ tox -epy27 -vv
removing /home/ubuntu/networking-onos/.tox/log
using tox.ini: /home/ubuntu/networking-onos/tox.ini
using tox-2.3.1 from /usr/lib/python3/dist-packages/tox/__init__.py
skipping sdist step
py27 start: getenv /home/ubuntu/networking-onos/.tox/py27
py27 reusing: /home/ubuntu/networking-onos/.tox/py27
py27 finish: getenv after 0.09 seconds
py27 start: developpkg /home/ubuntu/networking-onos
/home/ubuntu/networking-onos$ /home/ubuntu/networking-onos/.tox/py27/bin/python /home/ubuntu/networking-onos/setup.py --name
py27 develop-inst-nodeps: /home/ubuntu/networking-onos
setting PATH=/home/ubuntu/networking-onos/.tox/py27/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/home/ubuntu/networking-onos$ /home/ubuntu/networking-onos/tools/tox_install.sh https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt --no-deps -e /home/ubuntu/networking-onos >/home/ubuntu/networking-onos/.tox/py27/log/py27-4.log
py27 finish: developpkg after 10.28 seconds
py27 start: envreport
setting PATH=/home/ubuntu/networking-onos/.tox/py27/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
/home/ubuntu/networking-onos$ /home/ubuntu/networking-onos/.tox/py27/bin/pip freeze >/home/ubuntu/networking-onos/.tox/py27/log/py27-5.log
py27 installed:
alabaster==0.7.10,alembic==0.9.9,amqp==2.2.2,appdirs==1.4.3,asn1crypto==0.24.0,Babel==2.5.3,bcrypt==3.1.4,beautifulsoup4==4.6.0,cachetools==2.0.1,certifi==2018.4.16,cffi==1.11.5,chardet==3.0.4,cliff==2.11.0,cmd2==0.8.5,contextlib2==0.5.5,coverage==4.5.1,cryptography==2.2.2,debtcollector==1.19.0,decorator==4.3.0,deprecation==2.0.2,doc8==0.8.0,docutils==0.14,dogpile.cache==0.6.5,dulwich==0.19.2,enum-compat==0.0.2,enum34==1.1.6,eventlet==0.20.0,extras==1.0.0,fasteners==0.14.1,fixtures==3.0.0,flake8==2.5.5,funcsigs==1.0.2,functools32==3.2.3.post2,future==0.16.0,futures==3.2.0,futurist==1.7.0,greenlet==0.4.13,hacking==0.12.0,httplib2==0.11.3,idna==2.6,imagesize==1.0.0,ipaddress==1.0.22,iso8601==0.1.12,Jinja2==2.10,jmespath==0.9.3,jsonpatch==1.23,jsonpointer==2.0,jsonschema==2.6.0,keystoneauth1==3.5.0,keystonemiddleware==5.0.0,kombu==4.1.0,linecache2==1.0.0,logutils==0.3.5,Mako==1.0.7,MarkupSafe==1.0,mccabe==0.2.1,mock==2.0.0,monotonic==1.4,mox3==0.25.0,msgpack==0.5.6,munch==2.3.1,netaddr==0.7.19,netifaces==0.10.6,-e git+ssh://sanghoshin at review.openstack.org:29418/openstack/networking-onos at 678eaaf9c917b7037a426eaadecc252a07fdd47b#egg=networking_onos,-e git+https://git.openstack.org/openstack/networking-sfc at 379fcd5cfcb7a71e7dbbe969da0255bc3ff09a33#egg=networking_sfc,-e git+ssh://sanghoshin at review.openstack.org:29418/openstack/networking-onos at 678eaaf9c917b7037a426eaadecc252a07fdd47b#egg=neutron&subdirectory=.tox/py27/lib/python2.7/site-packages,neutron-lib==1.14.0,openstacksdk==0.13.0,os-client-config==1.31.1,os-service-types==1.2.0,os-testr==1.0.0,os-xenapi==0.3.3,osc-lib==1.10.0,oslo.cache==1.30.1,oslo.concurrency==3.27.0,oslo.config==6.2.0,oslo.context==2.20.0,oslo.db==4.38.0,oslo.i18n==3.20.0,oslo.log==3.38.1,oslo.messaging==6.2.0,oslo.middleware==3.35.0,oslo.policy==1.35.0,oslo.privsep==1.29.0,oslo.reports==1.28.0,oslo.rootwrap==5.14.0,oslo.serialization==2.25.0,oslo.service==1.31.1,oslo.utils==3.36.1,oslo.versionedobjects==1.33.1,oslosphinx==4.18.0,oslotest==3.4.2,osprofiler==2.0.0,ovs==2.9.0,ovsdbapp==0.10.0,packaging==17.1,paramiko==2.4.1,Paste==2.0.3,PasteDeploy==1.5.2,pbr==4.0.2,pecan==1.3.2,pep8==1.5.7,pkg-resources==0.0.0,prettytable==0.7.2,psutil==5.4.5,pyasn1==0.4.2,pycadf==2.7.0,pycparser==2.18,pyflakes==0.8.1,Pygments==2.2.0,pyinotify==0.9.6,PyNaCl==1.2.1,pyparsing==2.2.0,pyperclip==1.6.0,pyroute2==0.5.0,python-dateutil==2.7.2,python-designateclient==2.9.0,python-editor==1.0.3,python-keystoneclient==3.16.0,python-mimeparse==1.6.0,python-neutronclient==6.8.0,python-novaclient==10.2.0,python-subunit==1.3.0,pytz==2018.4,PyYAML==3.12,reno==2.9.1,repoze.lru==0.7,requests==2.18.4,requestsexceptions==1.4.0,restructuredtext-lint==1.1.3,rfc3986==1.1.0,Routes==2.4.1,ryu==4.24,simplejson==3.14.0,singledispatch==3.4.0.3,six==1.11.0,snowballstemmer==1.2.1,Sphinx==1.6.5,sphinxcontrib-websupport==1.0.1,SQLAlchemy==1.2.7,sqlalchemy-migrate==0.11.0,sqlparse==0.2.4,statsd==3.2.2,stestr==2.0.0,stevedore==1.28.0,subprocess32==3.2.7,Tempita==0.5.2,tenacity==4.11.0,testrepository==0.0.20,testresources==2.0.1,testscenarios==0.5.0,testtools==2.3.0,tinyrpc==0.8,traceback2==1.4.0,typing==3.6.4,unicodecsv==0.14.1,unittest2==1.1.0,urllib3==1.22,vine==1.1.4,voluptuous==0.11.1,waitress==1.1.0,wcwidth==0.1.7,weakrefmethod==1.0.3,WebOb==1.7.4,WebTest==2.0.29,wrapt==1.10.11 py27 finish: envreport after 0.76 seconds py27 start: runtests py27 runtests: PYTHONHASHSEED='2231557252' py27 runtests: commands[0] | /home/ubuntu/networking-onos/tools/ostestr_compat_shim.sh setting 
PATH=/home/ubuntu/networking-onos/.tox/py27/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin /home/ubuntu/networking-onos$ /home/ubuntu/networking-onos/tools/ostestr_compat_shim.sh /home/ubuntu/networking-onos/.tox/py27/local/lib/python2.7/site-packages/os_testr/ostestr.py:120: UserWarning: No .stestr.conf file found in the CWD. Please create one to to replace the .testr.conf. You can find a script to do this in the stestr git repository. warnings.warn(msg) {2} networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_create_security_group_rule_postcommit [0.006574s] ... ok {2} networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_delete_security_group_postcommit [0.001862s] ... ok {0} networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_create_security_group_postcommit [0.004377s] ... ok {0} networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_delete_security_group_rule_postcommit [0.001005s] ... ok {0} networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_update_security_group_postcommit [0.001361s] ... ok /home/ubuntu/networking-onos/.tox/py27/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. return pkg_resources.EntryPoint.parse("x=" + s).load(False) /home/ubuntu/networking-onos/.tox/py27/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. return pkg_resources.EntryPoint.parse("x=" + s).load(False) {0} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_add_router_interface [0.912765s] ... ok /home/ubuntu/networking-onos/.tox/py27/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. return pkg_resources.EntryPoint.parse("x=" + s).load(False) Deprecated: The quota driver neutron.quota.ConfDriver is deprecated as of Liberty. neutron.db.quota.driver.DbQuotaDriver should be used in its place {2} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_create_router [0.972778s] ... ok {1} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_delete_router [0.876546s] ... ok /home/ubuntu/networking-onos/.tox/py27/local/lib/python2.7/site-packages/paste/deploy/loadwsgi.py:22: DeprecationWarning: Parameters to load are deprecated. Call .resolve and .require separately. return pkg_resources.EntryPoint.parse("x=" + s).load(False) Deprecated: The quota driver neutron.quota.ConfDriver is deprecated as of Liberty. neutron.db.quota.driver.DbQuotaDriver should be used in its place {3} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_create_floating_ip [1.287437s] ... ok {0} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip [0.651223s] ... ok {0} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_delete_network_postcommit [0.001129s] ... ok {2} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_router [0.645290s] ... ok {2} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_port_postcommit [0.001539s] ... 
ok
{2} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_delete_subnet_postcommit [0.001001s] ... ok
{1} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_remove_router_interface [0.642295s] ... ok
{1} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_bind_port [0.004079s] ... ok
{1} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_subnet_postcommit [0.001475s] ... ok
{1} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_update_port_postcommit [0.001447s] ... ok
{1} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_update_subnet_postcommit [0.001580s] ... ok
{3} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_delete_floating_ip [0.480901s] ... ok
{3} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_check_segment [0.001506s] ... ok
{3} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_network_postcommit [0.001746s] ... ok
{3} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_delete_port_postcommit [0.001640s] ... ok
{3} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_update_network_postcommit [0.001948s] ... ok

======
Totals
======
Ran: 24 tests in 3.0000 sec.
 - Passed: 24
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 6.5035 sec.

==============
Worker Balance
==============
 - Worker 0 (6 tests) => 0:00:01.573183
 - Worker 1 (6 tests) => 0:00:01.528750
 - Worker 2 (6 tests) => 0:00:01.630468
 - Worker 3 (6 tests) => 0:00:01.776258

Slowest Tests:

Test id                                                                                                                  Runtime (s)
---------------------------------------------------------------------------------------------------------------------- -----------
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_create_floating_ip                          1.287
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_create_router                               0.973
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_add_router_interface                        0.913
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_delete_router                               0.877
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip                          0.651
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_router                               0.645
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_remove_router_interface                     0.642
networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_delete_floating_ip                          0.481
networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_create_security_group_rule_postcommit  0.007
networking_onos.tests.unit.extensions.test_driver.ONOSSecurityGroupTestCase.test_create_security_group_postcommit       0.004

py27 finish: runtests after 5.94 seconds
______________________________________________________________________________________ summary ______________________________________________________________________________________
  py27: commands succeeded
  congratulations :)

> On May 9, 2018, at 4:00 PM, Sangho Shin wrote:
>
> Hello,
>
> I am getting the following unit test errors in the Zuul tests; see below.
> The errors occur only in the pike version, and in stable/ocata I do not see them.
> ( If you can give me any clue, it would be very helpful ) > > BTW, in nosetests, there is no error. > However, in tox -e py27 tests, I am getting different errors like below. Actually, it is caused because the tests are using different version of neutron library somehow. Actual neutron is installed in /opt/stack/neutron path, and it has correct python files such as callbacks and driver api, which are complained below. > > So, I would like to know how to specify the correct neutron location in tox tests. > > Thank you, > > Sangho > > > tox -e py27 errors. > > --------------------------------- > > > ========================= > Failures during discovery > ========================= > --- import errors --- > Failed to import test module: networking_onos.tests.unit.extensions.test_driver > Traceback (most recent call last): > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path > module = self._get_module_from_name(name) > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name > __import__(name) > File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in > import networking_onos.extensions.securitygroup as onos_sg_driver > File "networking_onos/extensions/securitygroup.py", line 21, in > from networking_onos.extensions import callback > File "networking_onos/extensions/callback.py", line 15, in > from neutron.callbacks import events > ImportError: No module named callbacks > > Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver > Traceback (most recent call last): > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path > module = self._get_module_from_name(name) > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name > __import__(name) > File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in > from neutron.plugins.ml2 import driver_api as api > ImportError: cannot import name driver_api > > > > > > > Zuul errors. > > --------------------------- > > Traceback (most recent call last): > 2018-05-09 05:12:30.077594 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py ", line 1182, in _execute_context > 2018-05-09 05:12:30.077653 | ubuntu-xenial | context) > 2018-05-09 05:12:30.077964 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py ", line 470, in do_execute > 2018-05-09 05:12:30.078065 | ubuntu-xenial | cursor.execute(statement, parameters) > 2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type. > 2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details. 
> 2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last): > 2018-05-09 05:12:30.078683 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py ", line 98, in resource > 2018-05-09 05:12:30.078791 | ubuntu-xenial | result = method(request=request, **args) > 2018-05-09 05:12:30.079085 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 615, in update > 2018-05-09 05:12:30.079202 | ubuntu-xenial | return self._update(request, id, body, **kwargs) > 2018-05-09 05:12:30.079480 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 93, in wrapped > 2018-05-09 05:12:30.079574 | ubuntu-xenial | setattr(e, '_RETRY_EXCEEDED', True) > 2018-05-09 05:12:30.079870 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.079941 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.080242 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.080350 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.080629 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 89, in wrapped > 2018-05-09 05:12:30.080706 | ubuntu-xenial | return f(*args, **kwargs) > 2018-05-09 05:12:30.080985 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 150, in wrapper > 2018-05-09 05:12:30.081064 | ubuntu-xenial | ectxt.value = e.inner_exc > 2018-05-09 05:12:30.081363 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.081433 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.081733 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.081849 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.082131 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 138, in wrapper > 2018-05-09 05:12:30.082208 | ubuntu-xenial | return f(*args, **kwargs) > 2018-05-09 05:12:30.082489 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 128, in wrapped > 2018-05-09 05:12:30.082620 | ubuntu-xenial | LOG.debug("Retry wrapper got retriable exception: %s", e) > 2018-05-09 05:12:30.082931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.083006 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.083306 | ubuntu-xenial | File 
"/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.083415 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.083696 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 124, in wrapped > 2018-05-09 05:12:30.083786 | ubuntu-xenial | return f(*dup_args, **dup_kwargs) > 2018-05-09 05:12:30.084081 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 676, in _update > 2018-05-09 05:12:30.084161 | ubuntu-xenial | original=orig_object_copy) > 2018-05-09 05:12:30.084466 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py ", line 53, in notify > 2018-05-09 05:12:30.084611 | ubuntu-xenial | _get_callback_manager().notify(resource, event, trigger, **kwargs) > 2018-05-09 05:12:30.084932 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 105, in _wrapped > 2018-05-09 05:12:30.085026 | ubuntu-xenial | raise db_exc.RetryRequest(e) > 2018-05-09 05:12:30.085319 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.085387 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.085687 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.085796 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.086098 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 100, in _wrapped > 2018-05-09 05:12:30.086192 | ubuntu-xenial | return function(*args, **kwargs) > 2018-05-09 05:12:30.086499 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py ", line 152, in notify > 2018-05-09 05:12:30.086613 | ubuntu-xenial | raise exceptions.CallbackFailure(errors=errors) > 2018-05-09 05:12:30.094917 | ubuntu-xenial | CallbackFailure: Callback neutron.notifiers.nova.Notifier._send_nova_notification-115311 failed with "(sqlite3.InterfaceError) Error binding parameter 0 - probably unsupported type. 
[SQL: u'SELECT ports.project_id AS ports_project_id, ports.id AS ports_id, ports.name AS ports_name, ports.network_id AS ports_network_id, ports.mac_address AS ports_mac_address, ports.admin_state_up AS ports_admin_state_up, ports.status AS ports_status, ports.device_id AS ports_device_id, ports.device_owner AS ports_device_owner, ports.ip_allocation AS ports_ip_allocation, ports.standard_attr_id AS ports_standard_attr_id, standardattributes_1.id AS standardattributes_1_id, standardattributes_1.resource_type AS standardattributes_1_resource_type, standardattributes_1.description AS standardattributes_1_description, standardattributes_1.revision_number AS standardattributes_1_revision_number, standardattributes_1.created_at AS standardattributes_1_created_at, standardattributes_1.updated_at AS standardattributes_1_updated_at, securitygroupportbindings_1.port_id AS securitygroupportbindings_1_port_id, securitygroupportbindings_1.security_group_id AS securitygroupportbindings_1_security_group_id, portbindingports_1.port_id AS portbindingports_1_port_id, portbindingports_1.host AS portbindingports_1_host, portdataplanestatuses_1.port_id AS portdataplanestatuses_1_port_id, portdataplanestatuses_1.data_plane_status AS portdataplanestatuses_1_data_plane_status, portsecuritybindings_1.port_id AS portsecuritybindings_1_port_id, portsecuritybindings_1.port_security_enabled AS portsecuritybindings_1_port_security_enabled, ml2_port_bindings_1.port_id AS ml2_port_bindings_1_port_id, ml2_port_bindings_1.host AS ml2_port_bindings_1_host, ml2_port_bindings_1.vnic_type AS ml2_port_bindings_1_vnic_type, ml2_port_bindings_1.profile AS ml2_port_bindings_1_profile, ml2_port_bindings_1.vif_type AS ml2_port_bindings_1_vif_type, ml2_port_bindings_1.vif_details AS ml2_port_bindings_1_vif_details, ml2_port_bindings_1.status AS ml2_port_bindings_1_status, portdnses_1.port_id AS portdnses_1_port_id, portdnses_1.current_dns_name AS portdnses_1_current_dns_name, portdnses_1.current_dns_domain AS portdnses_1_current_dns_domain, portdnses_1.previous_dns_name AS portdnses_1_previous_dns_name, portdnses_1.previous_dns_domain AS portdnses_1_previous_dns_domain, portdnses_1.dns_name AS portdnses_1_dns_name, portdnses_1.dns_domain AS portdnses_1_dns_domain, qos_port_policy_bindings_1.policy_id AS qos_port_policy_bindings_1_policy_id, qos_port_policy_bindings_1.port_id AS qos_port_policy_bindings_1_port_id, standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, standardattributes_2.description AS standardattributes_2_description, standardattributes_2.revision_number AS standardattributes_2_revision_number, standardattributes_2.created_at AS standardattributes_2_created_at, standardattributes_2.updated_at AS standardattributes_2_updated_at, trunks_1.project_id AS trunks_1_project_id, trunks_1.id AS trunks_1_id, trunks_1.admin_state_up AS trunks_1_admin_state_up, trunks_1.name AS trunks_1_name, trunks_1.port_id AS trunks_1_port_id, trunks_1.status AS trunks_1_status, trunks_1.standard_attr_id AS trunks_1_standard_attr_id, subports_1.port_id AS subports_1_port_id, subports_1.trunk_id AS subports_1_trunk_id, subports_1.segmentation_type AS subports_1_segmentation_type, subports_1.segmentation_id AS subports_1_segmentation_id \nFROM ports LEFT OUTER JOIN standardattributes AS standardattributes_1 ON standardattributes_1.id = ports.standard_attr_id LEFT OUTER JOIN securitygroupportbindings AS securitygroupportbindings_1 ON ports.id = 
securitygroupportbindings_1.port_id LEFT OUTER JOIN portbindingports AS portbindingports_1 ON ports.id = portbindingports_1.port_id LEFT OUTER JOIN portdataplanestatuses AS portdataplanestatuses_1 ON ports.id = portdataplanestatuses_1.port_id LEFT OUTER JOIN portsecuritybindings AS portsecuritybindings_1 ON ports.id = portsecuritybindings_1.port_id LEFT OUTER JOIN ml2_port_bindings AS ml2_port_bindings_1 ON ports.id = ml2_port_bindings_1.port_id LEFT OUTER JOIN portdnses AS portdnses_1 ON ports.id = portdnses_1.port_id LEFT OUTER JOIN qos_port_policy_bindings AS qos_port_policy_bindings_1 ON ports.id = qos_port_policy_bindings_1.port_id LEFT OUTER JOIN trunks AS trunks_1 ON ports.id = trunks_1.port_id LEFT OUTER JOIN standardattributes AS standardattributes_2 ON standardattributes_2.id = trunks_1.standard_attr_id LEFT OUTER JOIN subports AS subports_1 ON ports.id = subports_1.port_id \nWHERE ports.id = ?'] [parameters: (,)]" > 2018-05-09 05:12:30.097463 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip [1.435310s] ... FAILED > 2018-05-09 05:12:30.097519 | ubuntu-xenial | > 2018-05-09 05:12:30.097608 | ubuntu-xenial | Captured traceback: > 2018-05-09 05:12:30.097702 | ubuntu-xenial | ~~~~~~~~~~~~~~~~~~~ > 2018-05-09 05:12:30.097838 | ubuntu-xenial | Traceback (most recent call last): > 2018-05-09 05:12:30.098230 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py ", line 118, in func > 2018-05-09 05:12:30.098369 | ubuntu-xenial | return f(self, *args, **kwargs) > 2018-05-09 05:12:30.098642 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 166, in test_update_floating_ip > 2018-05-09 05:12:30.098858 | ubuntu-xenial | resp = self._test_send_msg(floating_ip_request, 'put', url) > 2018-05-09 05:12:30.099090 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 96, in _test_send_msg > 2018-05-09 05:12:30.099261 | ubuntu-xenial | resp = self.api.put(url, self.serialize(dict_info)) > 2018-05-09 05:12:30.099597 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 395, in put > 2018-05-09 05:12:30.099712 | ubuntu-xenial | content_type=content_type, > 2018-05-09 05:12:30.100056 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 747, in _gen_request > 2018-05-09 05:12:30.100164 | ubuntu-xenial | expect_errors=expect_errors) > 2018-05-09 05:12:30.100486 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 643, in do_request > 2018-05-09 05:12:30.100603 | ubuntu-xenial | self._check_status(status, res) > 2018-05-09 05:12:30.100931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 675, in _check_status > 2018-05-09 05:12:30.101002 | ubuntu-xenial | res) > 2018-05-09 05:12:30.101354 | ubuntu-xenial | webtest.app.AppError: Bad response: 500 Internal Server Error (not 200 OK or 3xx redirect for http://localhost/floatingips/7464aaf0-27ea-448a-97df-51732f9e0e25.json ) > 2018-05-09 05:12:30.101685 | ubuntu-xenial | '{"NeutronError": {"message": "Request Failed: internal server error while processing 
your request.", "detail": "", "type": "HTTPInternalServerError"}}' > 2018-05-09 05:12:30.101735 | ubuntu-xenial | > 2018-05-09 05:12:30.102007 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_port_postcommit [0.004284s] ... ok > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Wed May 9 08:47:08 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 9 May 2018 10:47:08 +0200 Subject: [openstack-dev] =?utf-8?q?=5Btripleo=5D_CI_Squads=E2=80=99_Sprint?= =?utf-8?q?_12_Summary=3A_libvirt-reproducer=2C_python-tempestconf?= In-Reply-To: References: Message-ID: <47010e18-32d1-a0ac-c6a2-f93ad14a4e28@redhat.com> On 5/9/18 4:24 AM, Matt Young wrote: > Greetings, > > The TripleO squads for CI and Tempest have just completed Sprint 12. > The following is a summary of activities during this sprint.   Details > on our team structure can be found in the spec [1]. > > --- > > # Sprint 12 Epic (CI): Libvirt Reproducer > > * Epic Card: https://trello.com/c/JEGLSVh6/51-reproduce-ci-jobs-with-libvirt > * Tasks: http://ow.ly/O1vZ30jTSc3 > > "Allow developers to reproduce a multinode CI job on a bare metal host > using libvirt" > "Enable the same workflows used in upstream CI / reproducer using > libvirt instead of OVB as the provisioning mechanism" > > The CI Squad prototyped, designed, and implemented new functionality for > our CI reproducer.   “Reproducers” are scripts generated by each CI job > that allow the job/test to be recreated.  These are useful to both CI > team members when investigating failures, as well as developers creating > failures with the intent to iteratively debug and/or fix issues.  Prior > to this sprint, the reproducer scripts supported reproduction of > upstream CI jobs using OVB, typically on RDO Cloud.  This sprint we > extended this capability to support reproduction of jobs in libvirt. > > This work was done for a few reasons: > > * (short term) enable the team to work on upgrades and other CI team > tasks more efficiently by mitigating recurring RDO Cloud infrastructure > issues.  This was the primary motivator for doing this work at this time. > * (mid-longer term) enhance / enable iterative workflows such as THT > development, debugging deployment scenarios, etc.  Snapshots in > particular have proven quite useful.  As we look towards a future with a > viable single-node deployment capability, libvirt has clear benefits for > common developer scenarios. Thank you for that, a really cool feature for tripleo development! > > It is expected that further iteration and refinement of this initial > implementation will be required before the tripleo-ci team is able to > support this broadly.  What we’ve done works as designed.  While we > welcome folks to explore, please note that we are not announcing a > supported libvirt reproducer meant for use outside the tripleo-ci team > at this time.  We expect some degree of change, and have a number of > RFE’s resulting from our testing as well as documentation patches that > we’re iterating on. > > That said, we think it’s really cool, works well in its current form, > and are optimistic about its future. > > ## We did the following (CI): > > * Add support to the reproducer script [2,3] generated by CI to enable > libvirt. > * Basic snapshot create/restore [4] capability. > * Tested Scenarios: featureset 3 (UC idem), 10 (multinode containers), > 37 (min OC + minor update).  See sprint cards for details. 
> * 14-18 RFE’s identified as part of testing for future work
> http://ow.ly/J2u830jTSLG
>
> ---
>
> # Sprint 12 Epic (Tempest):
>
> * Epic Card: https://trello.com/c/ifIYQsxs/75-sprint-12-undercloud-tempest
> * Tasks: http://ow.ly/GGvc30jTSfV
>
> “Run tempest on the undercloud by using containerized and packaged tempest”
> “Complete work items carried over from sprint 11, plus other side work going on.”
>
> ## We did the following (Tempest):
>
> * Create tripleo-ci jobs that run containerized tempest on all stable
> branches.
> * Create documentation for configuring and running tempest using
> containerized tempest on the UC, at tripleo.org, and blog posts. [5,6,7]
> * Run certification tests via a new Jenkins job using an ansible role [8]
> * Refactor the validate-tempest CI role for the UC and containers
>
> ---
>
> # Ruck and Rover
>
> Each sprint, two of the team members assume the roles of Ruck and Rover
> (each for half of the sprint).
>
> * Ruck is responsible for monitoring the CI, checking for failures,
> opening bugs, and participating in meetings; this is your focal point for
> any CI issues.
> * Rover is responsible for working on these bugs and fixing problems while
> the rest of the team stays focused on the sprint. For more information
> about our structure, check [1]
>
> ## Ruck & Rover (Sprint 12), Etherpad [9,10]:
>
> * Quique Llorente (quiquell)
> * Gabriele Cerami (panda)
>
> A few notable issues where substantial time was spent were:
>
> 1767099 periodic-tripleo-ci-centos-7-multinode-1ctlr-featureset030-master
> vxlan tunnel fails randomly
> 1758899 reproducer-quickstart.sh building wrong gating package.
> 1767343 gate tripleo-ci-centos-7-containers-multinode fails to update
> packages in cron container
> 1762351 periodic-tripleo-ci-centos-7-ovb-1ctlr_1comp-featureset002-queens-upload
> times out. Depends on https://bugzilla.redhat.com/show_bug.cgi?id=1565179
> 1766873 quickstart on ovb doesn't yield a deployment
> 1767049 Error during test discovery: 'must specify exactly one of host
> or intercept' Depends on https://bugzilla.redhat.com/show_bug.cgi?id=1434385
> 1767076 Creating pingtest_sack fails: Failed to schedule instances:
> NoValidHost_Remote: No valid host was found
> 1763634 devmode.sh --ovb fails to deploy overcloud
> 1765680 Incorrect branch used for the not-gated tripleo-upgrade repo
>
> If you have any questions and/or suggestions, please contact us in #oooq
> or #tripleo
>
> Thanks,
>
> Matt
>
>
> tq: https://github.com/openstack/tripleo-quickstart
> tqe: https://github.com/openstack/tripleo-quickstart-extras
>
> [1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html
> [2] {{tq}}/roles/libvirt/setup/overcloud/tasks/libvirt_nodepool.yml
> [3] {{tqe}}/roles/create-reproducer-script/templates/reproducer-quickstart.sh.j2#L50
> [4] {{tqe}}/roles/snapshot-libvirt
> [5] https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/12/html-single/openstack_integration_test_suite_guide
> [6] https://blogs.rdoproject.org/2018/05/running-tempest-tests-against-a-tripleo-undercloud
> [7] https://blogs.rdoproject.org/2018/05/consuming-kolla-tempest-container-image-for-running-tempest-tests
> [8] https://github.com/redhat-cip/ansible-role-openstack-certification
> [9] https://review.rdoproject.org/etherpad/p/ruckrover-sprint12
> [10] https://etherpad.openstack.org/p/rover-030518
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From scheuran at linux.vnet.ibm.com Wed May 9 09:15:55 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Wed, 9 May 2018 11:15:55 +0200
Subject: [openstack-dev] [nova][third-party-ci]zKVM (s390x) CI broken
Message-ID: <39562EAF-538B-4C58-90FA-339885A0EE1E@linux.vnet.ibm.com>

Hi all, the nova CI for zKVM is currently broken - need to investigate. It seems like the vnc console gets configured for some reason (vnc is not supported on s390x)...

---
Andreas Scheuring (andreas_s)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scheuran at linux.vnet.ibm.com Wed May 9 11:16:45 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Wed, 9 May 2018 13:16:45 +0200
Subject: Re: [openstack-dev] [nova][third-party-ci]zKVM (s390x) CI broken
In-Reply-To: <39562EAF-538B-4C58-90FA-339885A0EE1E@linux.vnet.ibm.com>
References: <39562EAF-538B-4C58-90FA-339885A0EE1E@linux.vnet.ibm.com>
Message-ID: <751F9D3E-DEE2-4B43-B6EB-F5A0D9B400CB@linux.vnet.ibm.com>

The root cause seems to be bug [1]. It’s related to nova cells v2 configuration in devstack. Stephen Finucane already promised to have a look later today (thx!!!). I'll keep the CI running for now...

[1] https://bugs.launchpad.net/devstack/+bug/1770143

---
Andreas Scheuring (andreas_s)

On 9. May 2018, at 11:15, Andreas Scheuring wrote:

Hi all, the nova CI for zKVM is currently broken - need to investigate. It seems like the vnc console gets configured for some reason (vnc is not supported on s390x)...

---
Andreas Scheuring (andreas_s)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cdent+os at anticdent.org Wed May 9 12:42:02 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 9 May 2018 13:42:02 +0100 (BST)
Subject: [openstack-dev] [nova] [placement] placement extraction session at forum
Message-ID:

I've started an etherpad related to the Vancouver Forum session on extracting placement from nova. It's mostly just an outline for now but is evolving:

https://etherpad.openstack.org/p/YVR-placement-extraction

If we can get some real information in there before the session we are much more likely to have a productive session. Please feel free to add any notes or questions you have there. Or on this thread if you prefer.

The (potentially overly-optimistic) hope is that we can complete any preparatory work before the end of Rocky and then do the extraction in Stein. If we are willing to accept (please, let's) some form of control plane downtime, data migration issues can be vastly eased. Getting agreement on how that might work is one of the goals of the session.

Your input is very appreciated.
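To make the downtime option concrete, here is one rough sketch (the table list, copy order, connection URLs and SQLAlchemy 1.x idioms below are illustrative assumptions, not a plan): stop everything that writes to placement, copy the placement tables out of the nova_api database into a new placement database, then repoint the placement service:

    import sqlalchemy as sa

    # Illustrative sketch only: assumes SQLAlchemy 1.x, placeholder
    # credentials/hosts, and an incomplete table list. Run during the
    # maintenance window, with all placement writers stopped.
    src = sa.create_engine('mysql+pymysql://nova:secret@dbhost/nova_api')
    dst = sa.create_engine('mysql+pymysql://placement:secret@dbhost/placement')

    tables = ['resource_classes', 'resource_providers', 'inventories',
              'allocations', 'traits', 'resource_provider_traits']

    meta = sa.MetaData()
    with src.connect() as s, dst.connect() as d:
        for name in tables:
            t = sa.Table(name, meta, autoload=True, autoload_with=src)
            t.create(dst, checkfirst=True)   # clone the table definition
            rows = [dict(r) for r in s.execute(t.select())]
            if rows:
                d.execute(t.insert(), rows)  # bulk copy the data

Foreign keys, grants and endpoint updates are glossed over here; whether something like this (or an equivalent mysqldump of the same tables) is acceptable is exactly what needs agreement at the session.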
--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From cdent+os at anticdent.org Wed May 9 12:56:58 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 9 May 2018 13:56:58 +0100 (BST)
Subject: [openstack-dev] [cinder] [placement] cinder + placement forum session etherpad
Message-ID:

I've started an etherpad for the forum session in Vancouver devoted to discussing the possibility of tracking and allocating resources in Cinder using the Placement service. This is not a done deal. Instead the session is to discuss if it could work and how to make it happen if it seems like a good idea.

The etherpad is at

https://etherpad.openstack.org/p/YVR-cinder-placement

but there's not a great deal there yet. Notably there's no description of how scheduling and resource tracking currently works in Cinder because I have no experience with that.

This session is mostly for exploring and sharing information so the value of the etherpad may mostly be in the notes we take at the session, but anything we write in advance will help keep things a bit more structured and focused.

If this is a topic of interest for you please add some notes to the etherpad, or if you prefer, here.

Thanks.

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From a.settle at outlook.com Wed May 9 13:13:44 2018
From: a.settle at outlook.com (Alexandra Settle)
Date: Wed, 9 May 2018 13:13:44 +0000
Subject: [openstack-dev] [docs][openstack-ansible]
Message-ID: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com>

Hi all,

It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams – and take a step back from the community.

The last year has taken me in a completely different direction to what I expected, and try as I might I just don’t have the time to be even a part-time member of this great community :(

Although I’m moving on, and learning new things, nothing can beat the memories of SnowpenStack and Denver’s super awesome trains.

I know this isn’t some acceptance speech at the Oscars – but I just want to thank the Foundation and everyone who donates to the travel program. Without you guys, I wouldn’t have been a part of the community as much as I have been and met all your lovely faces.

I have had such a great time being a part of something as exciting and new as OpenStack, and I hope to continue to lurk in the background of IRC like a total weirdo. I hope to perform some super shit karaoke with you all in another part of the world :) (who knows, maybe I’ll just tag along to PTG’s as a social outing… how cool am I?!)

I’d also like to thank Mugsie for this sweet shot which is the perfect summary of my time with the OpenStack community. Read into this what you will:

[cid:image001.jpg at 01D3E79F.EFDEF8E0]

Don’t be a stranger,

Alex

IRC: asettle
Twitter: dewsday
Email: a.settle at outlook.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 144068 bytes
Desc: image001.jpg
URL:

From a.settle at outlook.com Wed May 9 13:22:04 2018
From: a.settle at outlook.com (Alexandra Settle)
Date: Wed, 9 May 2018 13:22:04 +0000
Subject: [openstack-dev] FW: [docs][openstack-ansible] Stepping down from core
Message-ID: <442FA6C1-282B-44B8-AA29-0B3BD87427C5@outlook.com>

Man I’m so smart I sent a Dear John letter to the ML and forgot the subject header.

SMOOTH MOVE.
From: Alexandra Settle
Date: Wednesday, May 9, 2018 at 2:13 PM
To: "OpenStack Development Mailing List (not for usage questions)"
Cc: Petr Kovar, Jean-Philippe Evrard
Subject: [openstack-dev][docs][openstack-ansible]

Hi all,

It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams – and take a step back from the community.

The last year has taken me in a completely different direction to what I expected, and try as I might I just don’t have the time to be even a part-time member of this great community :(

Although I’m moving on, and learning new things, nothing can beat the memories of SnowpenStack and Denver’s super awesome trains.

I know this isn’t some acceptance speech at the Oscars – but I just want to thank the Foundation and everyone who donates to the travel program. Without you guys, I wouldn’t have been a part of the community as much as I have been and met all your lovely faces.

I have had such a great time being a part of something as exciting and new as OpenStack, and I hope to continue to lurk in the background of IRC like a total weirdo. I hope to perform some super shit karaoke with you all in another part of the world :) (who knows, maybe I’ll just tag along to PTG’s as a social outing… how cool am I?!)

I’d also like to thank Mugsie for this sweet shot which is the perfect summary of my time with the OpenStack community. Read into this what you will:

[cid:image001.jpg at 01D3E79F.EFDEF8E0]

Don’t be a stranger,

Alex

IRC: asettle
Twitter: dewsday
Email: a.settle at outlook.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 144069 bytes
Desc: image001.jpg
URL:

From lebre.adrien at free.fr Wed May 9 13:50:45 2018
From: lebre.adrien at free.fr (lebre.adrien at free.fr)
Date: Wed, 9 May 2018 15:50:45 +0200 (CEST)
Subject: [openstack-dev] [FEMDC] IRC meeting postponed to next Wednesday
In-Reply-To: <1415866368.99080875.1525873695891.JavaMail.root@zimbra29-e5.priv.proxad.net>
Message-ID: <295656139.99088970.1525873845270.JavaMail.root@zimbra29-e5.priv.proxad.net>

Dear all,

Neither Paul-Andre nor I can chair the meeting today, so we propose to postpone it for one week. The agenda will be delivered soon, but you can expect the next meeting to focus on preparation for the Vancouver summit (presentations, F2F meetings...).

Best regards,
ad_ri3n_

From mnaser at vexxhost.com Wed May 9 13:58:32 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 9 May 2018 09:58:32 -0400
Subject: Re: [openstack-dev] FW: [docs][openstack-ansible] Stepping down from core
In-Reply-To: <442FA6C1-282B-44B8-AA29-0B3BD87427C5@outlook.com>
References: <442FA6C1-282B-44B8-AA29-0B3BD87427C5@outlook.com>
Message-ID:

Hey Alex,

Thank you for all your contribution and leadership of the docs project! Hope we can see you around! :)

Take care,
Mohammed

On Wed, May 9, 2018 at 9:22 AM, Alexandra Settle wrote:
> Man I’m so smart I sent a Dear John letter to the ML and forgot the
> subject header.
>
> SMOOTH MOVE.
> > > > *From: *Alexandra Settle > *Date: *Wednesday, May 9, 2018 at 2:13 PM > *To: *"OpenStack Development Mailing List (not for usage questions)" < > OpenStack-dev at lists.openstack.org> > *Cc: *Petr Kovar , Jean-Philippe Evrard < > jean-philippe at evrard.me> > *Subject: *[openstack-dev][docs][openstack-ansible] > > > > Hi all, > > > > It is with a super heavy heart I have to say that I need to step down as > core from the OpenStack-Ansible and Documentation teams – and take a step > back from the community. > > > > The last year has taken me in a completely different direction to what I > expected, and try as I might I just don’t have the time to be even a > part-time member of this great community :( > > > > Although I’m moving on, and learning new things, nothing can beat the > memories of SnowpenStack and Denver’s super awesome trains. > > > > I know this isn’t some acceptance speech at the Oscars – but I just want > to thank the Foundation and everyone who donates to the travel program. > Without you guys, I wouldn’t have been a part of the community as much as I > have been and met all your lovely faces. > > > > I have had such a great time being a part of something as exciting and new > as OpenStack, and I hope to continue to lurk in the background of IRC like > a total weirdo. I hope to perform some super shit karaoke with you all in > another part of the world :) (who knows, maybe I’ll just tag along to PTG’s > as a social outing… how cool am I?!) > > > > I’d also like to thank Mugsie for this sweet shot which is the perfect > summary of my time with the OpenStack community. Read into this what you > will: > > > > [image: cid:image001.jpg at 01D3E79F.EFDEF8E0] > > > > Don’t be a stranger, > > > > Alex > > > > IRC: asettle > > Twitter: dewsday > > Email: a.settle at outlook.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 144069 bytes Desc: not available URL: From Jesse.Pretorius at rackspace.co.uk Wed May 9 14:11:07 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Wed, 9 May 2018 14:11:07 +0000 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: From: Alexandra Settle Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, May 9, 2018 at 2:18 PM To: "OpenStack Development Mailing List (not for usage questions)" Subject: [openstack-dev] [docs][openstack-ansible] * It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams – and take a step back from the community. It was fantastic, and even entertaining, working with you! Your value to OpenStack-Ansible’s documentation improvement was unparalleled – I thank you for your support during that time and after when you did a great job shaking things up as the docs PTL. ☺ I wish you the best for your next journey and hope it’s filled with more laughter and good times. 
________________________________
Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mnaser at vexxhost.com Wed May 9 14:15:15 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Wed, 9 May 2018 10:15:15 -0400
Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client
In-Reply-To:
References:
Message-ID:

Hey Gerard,

Replies in-line!

Thanks,
Mohammed

On Fri, May 4, 2018 at 6:25 PM, gerard.damm at wipro.com wrote:
> Many thanks for the welcome ;)
>
> And many thanks for the speedy and very useful response !
>
> Details below.
>
> Best regards,
> Gerard
>
> --------------------------------------------------------------------
> For add_gateway_to_router():
>
> So I tried this:
> external_network = conn.network.find_network(EXTERNAL_NETWORK_NAME)
> network_dict_body = {'network_id' : external_network.id}
> conn.network.add_gateway_to_router(onap_router, **network_dict_body)
>
> ==> no errors, but the router is not updated (no gateway is set)
> (external_gateway_info is still None)
>
> (same with conn.network.add_gateway_to_router(onap_router, network_id=external_network.id) )
>
> Is the body parameter for add_gateway_to_router() expected to correspond to a Network ?
> (from a router point of view, a "gateway" is an external network)
>
> Should the network's subnet(s) also be specified in the dictionary ? Maybe only
> if certain specific subnets are desired for the gateway role. Otherwise,
> the default would apply: there is usually only 1 subnet, and that's the one
> to be used. So network_id would be enough to specify a gateway used in a standard way.
>
> Maybe more details about what is expected in this body dictionary should be documented
> in the add_gateway_to_router() section?
>
> In Horizon, when selecting a router, and selecting "Set Gateway", the user is only
> asked to pick an external network from a dropdown list. Then, a router interface is
> implicitly created, with an IP address picked from the subnet of that network.

Have you reviewed the API reference regarding this? OpenStack SDK really just pushes the data straight through to the API in this case.

> --------------------------------------------------------------------
> For router deletion: it looks like it's the "!= None" test on the returned object that has an issue
>
> onap_router = conn.network.find_router(ONAP_ROUTER_NAME)
> if onap_router != None:
> print('Deleting ONAP router...')
> conn.network.delete_router(onap_router.id)
> else:
> print('No ONAP router found...')
>
> I added traceback printouts in the code.
>
> printing the router before trying to delete it:
> onap_router:
> openstack.network.v2.router.Router(updated_at=2018-05-04T21:07:23Z, description=Router created for ONAP, status=ACTIVE, ha=False, name=ONAP_router, created_at=2018-05-04T21:07:20Z, tenant_id=03aa47d3bcfd48199e0470b1c86a7f5b, availability_zone_hints=[], admin_state_up=True, availability_zones=['nova'], tags=[], revision=3, routes=[], id=675abd14-096a-4b28-b764-31ca7098913b, external_gateway_info=None, distributed=False, flavor_id=None)
>
> *** Exception: 'NoneType' object has no attribute '_body'
> *** traceback.print_tb():
> File "auto_script_config_openstack_for_onap.py", line 141, in delete_all_ONAP
> if onap_router != None:
> File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__
> return all([self._body.attributes == comparand._body.attributes,
> *** traceback.print_exception():
> Traceback (most recent call last):
> File "auto_script_config_openstack_for_onap.py", line 141, in delete_all_ONAP
> if onap_router != None:
> File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__
> return all([self._body.attributes == comparand._body.attributes,
> AttributeError: 'NoneType' object has no attribute '_body'

This looks like a bug to me, I've pushed up a fix here which you can follow:

https://review.openstack.org/567230

> --------------------------------------------------------------------
> For identity_api_version=3 :
>
> yes, that worked !
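As an aside on the "!= None" tracebacks above: until the fix in https://review.openstack.org/567230 merges, an identity check sidesteps the problem entirely, because "is not None" never invokes Resource.__eq__. A minimal sketch, reusing the names from the script above:

    onap_router = conn.network.find_router(ONAP_ROUTER_NAME)
    if onap_router is not None:  # identity test; __eq__/__ne__ is never called
        print('Deleting ONAP router...')
        conn.network.delete_router(onap_router.id)
    else:
        print('No ONAP router found...')

The same workaround applies to the flavor check further down.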
> Could that identity_api_version parameter also/instead be specified in the clouds.yaml file ?

Yes, you can do that, your file would look something like this:

https://github.com/openstack/openstack-ansible-openstack_openrc/blob/master/templates/clouds.yaml.j2

This is a templated file but you can see how you can add "identity_api_version" in there (note: it's not at the same level as "auth")

> --------------------------------------------------------------------
> Here's the traceback info for the flavor error, also on the "!= None" test:
>
> *** Exception: 'NoneType' object has no attribute '_body'
> *** traceback.print_tb():
> File "auto_script_config_openstack_for_onap.py", line 537, in configure_all_ONAP
> if tiny_flavor != None:
> File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__
> return all([self._body.attributes == comparand._body.attributes,
> *** traceback.print_exception():
> Traceback (most recent call last):
> File "auto_script_config_openstack_for_onap.py", line 537, in configure_all_ONAP
> if tiny_flavor != None:
> File "/usr/local/lib/python3.5/dist-packages/openstack/resource.py", line 358, in __eq__
> return all([self._body.attributes == comparand._body.attributes,
> AttributeError: 'NoneType' object has no attribute '_body'

Same bug as the other I believe, here: https://review.openstack.org/567230

> --------------------------------------------------------------------
> For the image creation:
>
> ah, OK, indeed, there is an image proxy (even 2: v1, v2),
> and maybe the compute / image operations are redundant (or maybe not, for convenience) ?
>
> and yes, it worked ! There was no need for additional parameters.
>
> The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email. www.wipro.com
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From amy at demarco.com Wed May 9 14:43:08 2018
From: amy at demarco.com (Amy Marrich)
Date: Wed, 9 May 2018 09:43:08 -0500
Subject: Re: [openstack-dev] [docs][openstack-ansible]
In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com>
References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com>
Message-ID:

We'll miss you!! And not just for your awesome docs and organizational skills!

Amy (spotz)

On Wed, May 9, 2018 at 8:13 AM, Alexandra Settle wrote:
> Hi all,
>
> It is with a super heavy heart I have to say that I need to step down as
> core from the OpenStack-Ansible and Documentation teams – and take a step
> back from the community.
> > > > The last year has taken me in a completely different direction to what I > expected, and try as I might I just don’t have the time to be even a > part-time member of this great community :( > > > > Although I’m moving on, and learning new things, nothing can beat the > memories of SnowpenStack and Denver’s super awesome trains. > > > > I know this isn’t some acceptance speech at the Oscars – but I just want > to thank the Foundation and everyone who donates to the travel program. > Without you guys, I wouldn’t have been a part of the community as much as I > have been and met all your lovely faces. > > > > I have had such a great time being a part of something as exciting and new > as OpenStack, and I hope to continue to lurk in the background of IRC like > a total weirdo. I hope to perform some super shit karaoke with you all in > another part of the world :) (who knows, maybe I’ll just tag along to PTG’s > as a social outing… how cool am I?!) > > > > I’d also like to thank Mugsie for this sweet shot which is the perfect > summary of my time with the OpenStack community. Read into this what you > will: > > > > > > Don’t be a stranger, > > > > Alex > > > > IRC: asettle > > Twitter: dewsday > > Email: a.settle at outlook.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 144068 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed May 9 14:48:11 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 9 May 2018 09:48:11 -0500 Subject: [openstack-dev] Vancouver Forum Etherpad List Message-ID: <20180509144811.GA16802@sm-xps> We are now less than two weeks away from the next Summit/Forum in Vancouver. Hopefully teams are able to spend some time preparing for their Forum sessions to make them productive. I have updated the Forum wiki page to start collecting links to session etherpads: https://wiki.openstack.org/wiki/Forum/Vancouver2018 Please update this page with your etherpads as they are ready to make this one easy place to go to for all sessions. I have started populating some sessions so there is a start, but there are many that still need to be filled in. Looking forward to another week in Vancouver. Thanks! Sean From mchandras at suse.de Wed May 9 14:51:19 2018 From: mchandras at suse.de (Markos Chandras) Date: Wed, 9 May 2018 15:51:19 +0100 Subject: [openstack-dev] [openstack-ansible] Implement rotations for meetings handling In-Reply-To: References: Message-ID: On 03/05/18 09:13, Andy McCrae wrote: > > > I will gladly pick up my well-used meeting chair hat. > It's a great idea, I think it would help make our meetings more productive. > Once you've been chair you have a different view of how the meetings work. > > Andy I too think this is a great idea +1 -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 
5, D-90409, Nürnberg From mchandras at suse.de Wed May 9 14:57:40 2018 From: mchandras at suse.de (Markos Chandras) Date: Wed, 9 May 2018 15:57:40 +0100 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: On 09/05/18 14:13, Alexandra Settle wrote: > Hi all, > >   > > It is with a super heavy heart I have to say that I need to step down as > core from the OpenStack-Ansible and Documentation teams – and take a > step back from the community. I am sorry to see you go Alex. Thank you for all your work in the OSA world :) I hope you have fun on your next adventure! -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg From aj at suse.com Wed May 9 15:10:36 2018 From: aj at suse.com (Andreas Jaeger) Date: Wed, 9 May 2018 17:10:36 +0200 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: Thanks for your leadership through troubled time! Hope our ways cross again, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From prometheanfire at gentoo.org Wed May 9 15:22:44 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 9 May 2018 10:22:44 -0500 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above Message-ID: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw tempurls. That is, there was no official support, but it still worked. In 3.2.0 (specifically the linked commit(s)) tempurls were validated to require /v1/account/container/object, which does not work with radosgw as it expects /v1/container/object. This means that radosgw tempurls fail to work, which further means that radosgw will stop working for things like ironic. I can see the point that swiftclient should not care about ceph not fully implementing the swift spec and not supporting the radosgw url syntax, but it seems like a step back. If this is not fixed then things like ironic will not work with radosgw for Ocata and above (as that's when this change was made). We'd need to wait for either ceph to fix this and support the account part of the url (probably just dropping it) or have people fork python-swiftclient to 'fix' it. I'm not sure what the right answer is... https://github.com/openstack/python-swiftclient/commit/4c955751d340a8f71a2eebdb3c58d90b36874a66 https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185 https://bugs.launchpad.net/ironic/+bug/1747384 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jimmy at openstack.org Wed May 9 15:24:02 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 09 May 2018 10:24:02 -0500 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: <5AF31292.7090702@openstack.org> :( Man. That bums me out. Thanks for everything you've done for the community, Alex! Don't be a stranger!!!! Alexandra Settle wrote: > > Hi all, > > It is with a super heavy heart I have to say that I need to step down > as core from the OpenStack-Ansible and Documentation teams – and take > a step back from the community. > > The last year has taken me in a completely different direction to what > I expected, and try as I might I just don’t have the time to be even a > part-time member of this great community :( > > Although I’m moving on, and learning new things, nothing can beat the > memories of SnowpenStack and Denver’s super awesome trains. > > I know this isn’t some acceptance speech at the Oscars – but I just > want to thank the Foundation and everyone who donates to the travel > program. Without you guys, I wouldn’t have been a part of the > community as much as I have been and met all your lovely faces. > > I have had such a great time being a part of something as exciting and > new as OpenStack, and I hope to continue to lurk in the background of > IRC like a total weirdo. I hope to perform some super shit karaoke > with you all in another part of the world :) (who knows, maybe I’ll > just tag along to PTG’s as a social outing… how cool am I?!) > > I’d also like to thank Mugsie for this sweet shot which is the perfect > summary of my time with the OpenStack community. Read into this what > you will: > > Don’t be a stranger, > > Alex > > IRC: asettle > > Twitter: dewsday > > Email: a.settle at outlook.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.jpg Type: image/jpeg Size: 237236 bytes Desc: not available URL: From jungleboyj at gmail.com Wed May 9 15:26:22 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 9 May 2018 10:26:22 -0500 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: Alex, It has been a pleasure working with you!  Sorry to see you go but know your contributions and leadership will not be forgotten! Best of luck in your future adventures and I hope our paths cross again in the future. Best wishes, Jay On 5/9/2018 8:13 AM, Alexandra Settle wrote: > > Hi all, > > It is with a super heavy heart I have to say that I need to step down > as core from the OpenStack-Ansible and Documentation teams – and take > a step back from the community. 
> > The last year has taken me in a completely different direction to what > I expected, and try as I might I just don’t have the time to be even a > part-time member of this great community :( > > Although I’m moving on, and learning new things, nothing can beat the > memories of  SnowpenStack and Denver’s super awesome trains. > > I know this isn’t some acceptance speech at the Oscars – but I just > want to thank the Foundation and everyone who donates to the travel > program. Without you guys, I wouldn’t have been a part of the > community as much as I have been and met all your lovely faces. > > I have had such a great time being a part of something as exciting and > new as OpenStack, and I hope to continue to lurk in the background of > IRC like a total weirdo. I hope to perform some super shit karaoke > with you all in another part of the world :) (who knows, maybe I’ll > just tag along to PTG’s as a social outing… how cool am I?!) > > I’d also like to thank Mugsie for this sweet shot which is the perfect > summary of my time with the OpenStack community. Read into this what > you will: > > Don’t be a stranger, > > Alex > > IRC: asettle > > Twitter: dewsday > > Email: a.settle at outlook.com > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 144068 bytes Desc: not available URL: From cbodley at redhat.com Wed May 9 15:40:47 2018 From: cbodley at redhat.com (Casey Bodley) Date: Wed, 9 May 2018 11:40:47 -0400 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> Message-ID: <4125568e-af78-15ea-d664-749db5ce3d63@redhat.com> On 05/09/2018 11:22 AM, Matthew Thode wrote: > python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw > tempurls. That is, there was no official support, but it still worked. > > In 3.2.0 (specifically the linked commit(s)) tempurls were validated to > require /v1/account/container/object, which does not work with radosgw > as it expects /v1/container/object. This means that radosgw tempurls > fail to work, which further means that radosgw will stop working for > things like ironic. > > I can see the point that swiftclient should not care about ceph not > fully implementing the swift spec and not supporting the radosgw url > syntax, but it seems like a step back. If this is not fixed then things > like ironic will not work with radosgw for Ocata and above (as that's > when this change was made). We'd need to wait for either ceph to fix > this and support the account part of the url (probably just dropping it) > or have people fork python-swiftclient to 'fix' it. > > I'm not sure what the right answer is... 
> > https://github.com/openstack/python-swiftclient/commit/4c955751d340a8f71a2eebdb3c58d90b36874a66 > https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185 > > https://bugs.launchpad.net/ironic/+bug/1747384 > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Thanks for raising the issue. Radosgw does have a config option 'rgw_swift_account_in_url' to expect this url format, though it defaults to false and I'm not 100% sure that it applies correctly to tempurls. -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed May 9 16:00:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 09 May 2018 12:00:36 -0400 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: <1525880779-sup-8371@lrrr.local> Excerpts from Alexandra Settle's message of 2018-05-09 13:13:44 +0000: > Hi all, > > It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams – and take a step back from the community. > > The last year has taken me in a completely different direction to what I expected, and try as I might I just don’t have the time to be even a part-time member of this great community :( > > Although I’m moving on, and learning new things, nothing can beat the memories of SnowpenStack and Denver’s super awesome trains. > > I know this isn’t some acceptance speech at the Oscars – but I just want to thank the Foundation and everyone who donates to the travel program. Without you guys, I wouldn’t have been a part of the community as much as I have been and met all your lovely faces. > > I have had such a great time being a part of something as exciting and new as OpenStack, and I hope to continue to lurk in the background of IRC like a total weirdo. I hope to perform some super shit karaoke with you all in another part of the world :) (who knows, maybe I’ll just tag along to PTG’s as a social outing… how cool am I?!) > > I’d also like to thank Mugsie for this sweet shot which is the perfect summary of my time with the OpenStack community. Read into this what you will: > > [cid:image001.jpg at 01D3E79F.EFDEF8E0] > > Don’t be a stranger, > > Alex > > IRC: asettle > Twitter: dewsday > Email: a.settle at outlook.com :-( This is a sad, but not entirely surprising, notice. Thank you for everything you did to help with the migration last year. It would not have been possible to make that work without your assistance and energy. Our community is better because of your contributions. Doug From doug at doughellmann.com Wed May 9 17:12:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 09 May 2018 13:12:14 -0400 Subject: [openstack-dev] [all][forum][python3] etherpad for python 2 deprecation timeline forum session Message-ID: <1525885877-sup-5903@lrrr.local> I have created the etherpad for the forum session to discuss deprecating python 2. 
https://etherpad.openstack.org/p/YVR-python-2-deprecation-timeline I pre-populated it with some of the details from the earlier mailing list thread, but please add any additional notes you have that will help lay out this timeline. Doug From doug at doughellmann.com Wed May 9 17:14:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 09 May 2018 13:14:16 -0400 Subject: [openstack-dev] [tc][all][forum] etherpad for tc retrospective at the forum Message-ID: <1525885967-sup-1263@lrrr.local> I have created the etherpad for the TC Retrospective session planned for Thursday at the Forum. I set up the document structure, but have not added any substantive content. Please consider the questions in the etherpad before the session and either add content or be prepared to add it in the room on Thursday. https://etherpad.openstack.org/p/YVR-tc-retrospective Doug From melwittt at gmail.com Wed May 9 17:28:42 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 9 May 2018 10:28:42 -0700 Subject: [openstack-dev] [nova][third-party-ci]zKVM (s390x) CI broken In-Reply-To: <751F9D3E-DEE2-4B43-B6EB-F5A0D9B400CB@linux.vnet.ibm.com> References: <39562EAF-538B-4C58-90FA-339885A0EE1E@linux.vnet.ibm.com> <751F9D3E-DEE2-4B43-B6EB-F5A0D9B400CB@linux.vnet.ibm.com> Message-ID: <90aea7ea-9053-d572-7507-f8c527209851@gmail.com> On Wed, 9 May 2018 13:16:45 +0200, Andreas Scheuring wrote: > The root cause seems to be bug [1]. It’s related to nova cells v2 > configuration in devstack. Stephen Finucane already promised to have a > look later the day (thx!!!). I keep the CI running for now... > > [1] https://bugs.launchpad.net/devstack/+bug/1770143 Thanks for opening the bug about it. I'm going to investigate it too as it's related to my recent patch to devstack. -melanie From jim at jimrollenhagen.com Wed May 9 17:42:02 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 9 May 2018 13:42:02 -0400 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> Message-ID: On Wed, May 9, 2018 at 11:22 AM, Matthew Thode wrote: > python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw > tempurls. That is, there was no official support, but it still worked. > > In 3.2.0 (specifically the linked commit(s)) tempurls were validated to > require /v1/account/container/object, which does not work with radosgw > as it expects /v1/container/object. This means that radosgw tempurls > fail to work, which further means that radosgw will stop working for > things like ironic. > > I can see the point that swiftclient should not care about ceph not > fully implementing the swift spec and not supporting the radosgw url > syntax, but it seems like a step back. If this is not fixed then things > like ironic will not work with radosgw for Ocata and above (as that's > when this change was made). We'd need to wait for either ceph to fix > this and support the account part of the url (probably just dropping it) > or have people fork python-swiftclient to 'fix' it. > > I'm not sure what the right answer is... > Sounds like in the meantime ironic should pin python-swiftclient, yes? // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed May 9 17:56:02 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 9 May 2018 17:56:02 +0000
Subject: [openstack-dev] [docs][openstack-ansible]
In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com>
References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com>
Message-ID: <20180509175602.zbfcqowezvmku65t@yuggoth.org>

On 2018-05-09 13:13:44 +0000 (+0000), Alexandra Settle wrote:
[...]
> I have had such a great time being a part of something as exciting
> and new as OpenStack, and I hope to continue to lurk in the
> background of IRC like a total weirdo. I hope to perform some
> super shit karaoke with you all in another part of the world :)
> (who knows, maybe I’ll just tag along to PTG’s as a social outing…
> how cool am I?!)
[...]

Your sheer awesomeness has touched our community in countless ways; I really hope you do continue to hang out with the rest of us at every opportunity!
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: From prometheanfire at gentoo.org Wed May 9 19:08:15 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Wed, 9 May 2018 14:08:15 -0500
Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
In-Reply-To: 
References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org>
Message-ID: <20180509190815.bdm3dv5xaqugdea7@gentoo.org>

On 18-05-09 13:42:02, Jim Rollenhagen wrote:
> On Wed, May 9, 2018 at 11:22 AM, Matthew Thode
> wrote:
>
> > python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw
> > tempurls. That is, there was no official support, but it still worked.
> >
> > In 3.2.0 (specifically the linked commit(s)) tempurls were validated to
> > require /v1/account/container/object, which does not work with radosgw
> > as it expects /v1/container/object. This means that radosgw tempurls
> > fail to work, which further means that radosgw will stop working for
> > things like ironic.
> >
> > I can see the point that swiftclient should not care about ceph not
> > fully implementing the swift spec and not supporting the radosgw url
> > syntax, but it seems like a step back. If this is not fixed then things
> > like ironic will not work with radosgw for Ocata and above (as that's
> > when this change was made). We'd need to wait for either ceph to fix
> > this and support the account part of the url (probably just dropping it)
> > or have people fork python-swiftclient to 'fix' it.
> >
> > I'm not sure what the right answer is...
>
> Sounds like in the meantime ironic should pin python-swiftclient, yes?

That's one solution. The following three solutions are what I see as ways forward if swiftclient isn't changed.

* Proper fix would be to make ceph support the account field
* Workaround would be to specify an old swiftclient to install (3.1.0, pre-ocata)
* Workaround would be for swiftclient to be forked and 'fixed'

--
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From melwittt at gmail.com Wed May 9 19:09:45 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 9 May 2018 12:09:45 -0700 Subject: [openstack-dev] [nova][third-party-ci]zKVM (s390x) CI broken In-Reply-To: <90aea7ea-9053-d572-7507-f8c527209851@gmail.com> References: <39562EAF-538B-4C58-90FA-339885A0EE1E@linux.vnet.ibm.com> <751F9D3E-DEE2-4B43-B6EB-F5A0D9B400CB@linux.vnet.ibm.com> <90aea7ea-9053-d572-7507-f8c527209851@gmail.com> Message-ID: <0eee46fb-0507-9d76-04a7-bc0b5cf98184@gmail.com> On Wed, 9 May 2018 10:28:42 -0700, Melanie Witt wrote: > On Wed, 9 May 2018 13:16:45 +0200, Andreas Scheuring wrote: >> The root cause seems to be bug [1]. It’s related to nova cells v2 >> configuration in devstack. Stephen Finucane already promised to have a >> look later the day (thx!!!). I keep the CI running for now... >> >> [1] https://bugs.launchpad.net/devstack/+bug/1770143 > > Thanks for opening the bug about it. I'm going to investigate it too as > it's related to my recent patch to devstack. Update: I've proposed a fix for the bug at https://review.openstack.org/567298 -melanie From clay.gerrard at gmail.com Wed May 9 19:24:32 2018 From: clay.gerrard at gmail.com (Clay Gerrard) Date: Wed, 9 May 2018 12:24:32 -0700 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180509190815.bdm3dv5xaqugdea7@gentoo.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> Message-ID: On Wed, May 9, 2018 at 12:08 PM, Matthew Thode wrote: > > * Proper fix would be to make ceph support the account field > Is the 'rgw_swift_account_in_url' option not correct/sufficient? > * Workaround would be to specify an old swiftclient to install (3.1.0, > pre-ocata) > Doesn't seem great if a sysadmin wants to co-install the newer swiftclient cli > * Workaround would be to for swiftclient to be forked and 'fixed' > > Not clear to me what the "fix" would be here - just don't do validation? I'll assume the "fork threat" here is for completeness/emphasis :D Do you know if ironic works with "normal" swift tempurls or only the radosgw implementation of the swift api? -Clay -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed May 9 19:24:24 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 9 May 2018 13:24:24 -0600 Subject: [openstack-dev] [tripleo] Migration to Storyboard Message-ID: Hello tripleo folks, So we've been experimenting with migrating some squads over to storyboard[0] but this seems to be causing more issues than perhaps it's worth. Since the upstream community would like to standardize on Storyboard at some point, I would propose that we do a cut over of all the tripleo bugs/blueprints from Launchpad to Storyboard. In the irc meeting this week[1], I asked that the tripleo-ci team make sure the existing scripts that we use to monitor bugs for CI support Storyboard. I would consider this a prerequisite for the migration. I am thinking it would be beneficial to get this done before or as close to M2. Thoughts, concerns, etc? 
Thanks,
-Alex

[0] https://storyboard.openstack.org/#!/project_group/76
[1] http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42

From jim at jimrollenhagen.com Wed May 9 19:35:58 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 9 May 2018 15:35:58 -0400
Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
In-Reply-To: 
References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org>
Message-ID: 

On Wed, May 9, 2018 at 3:24 PM, Clay Gerrard wrote:
>
> On Wed, May 9, 2018 at 12:08 PM, Matthew Thode
> wrote:
>
>> * Proper fix would be to make ceph support the account field
>>
> Is the 'rgw_swift_account_in_url' option not correct/sufficient?

I guess we could just document that people need to use this.

>> * Workaround would be to specify an old swiftclient to install (3.1.0,
>> pre-ocata)
>>
> Doesn't seem great if a sysadmin wants to co-install the newer swiftclient
> cli
>
>> * Workaround would be for swiftclient to be forked and 'fixed'
>>
> Not clear to me what the "fix" would be here - just don't do validation?
> I'll assume the "fork threat" here is for completeness/emphasis :D
>
> Do you know if ironic works with "normal" swift tempurls or only the
> radosgw implementation of the swift api?

It works with both, see the link from earlier in the thread:
https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From clay.gerrard at gmail.com Wed May 9 19:58:15 2018
From: clay.gerrard at gmail.com (Clay Gerrard)
Date: Wed, 9 May 2018 12:58:15 -0700
Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
In-Reply-To: 
References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org>
Message-ID: 

On Wed, May 9, 2018 at 12:35 PM, Jim Rollenhagen wrote:
>
> It works with both, see the link from earlier in the thread:
> https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185
>

Ah! Perfect! Thanks for pointing that out (again 😅)

-Clay
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Wed May 9 20:06:01 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 09 May 2018 16:06:01 -0400
Subject: [openstack-dev] [tc] agenda for upcoming joint leadership meeting in vancouver
Message-ID: <1525896030-sup-1329@lrrr.local>

The last time we had a joint leadership meeting, several TC members expressed the desire to have more information about what was going to be covered before the meeting. The agenda for the next meeting (to be held in Vancouver) is in the wiki at https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting (I suggest "watching" that page to keep an eye on updates because it is still subject to change). The agenda items are a bit terse, so I thought I would send this email to give my understanding of what will be discussed. If you have questions or anything to add, please post a follow-up message.

Welcome to new members -- We start with introductions for new folks, since there are new members of all 3 groups attending.
This is usually just a chance to put a face with a name, but sometimes we mention employers. Because of the nature of the board, its members are usually more interested in who employs new board members than the TC or UC are about our members. Executive Team Update -- Jonathan, Lauren, Mark, and Thierry will give updates about the summit and Foundation. We usually learn things like registration numbers for the event, major themes, etc. User Committee Update -- Melvin and Matt will talk about the work the UC has been doing. I don't have any real details about that. Strategic Discussions -- I put our topic about seeking more contributors with more time to spend upstream here. It's not entirely clear this is what Alan had in mind for this section, and since I've asked for the entire hour this may change. As I've mentioned a few times in channel, I have been working on putting hard numbers together to demonstrate that individual contributors, given time to establish their credibility and actually do some work, can have a significant positive impact on the community and the direction of OpenStack. So far, I have a few cases drawn from contributions to our community goals and help-wanted list to demonstrate both success and failure to make this point. I hope that by framing it this way we can move on from the problem statement, which we have covered a few times in the past, and have a more fruitful discussion of how to convince engineering managers and other decision makers to allow developers to contribute in these ways. I will share the slide deck with everyone after the meeting, since it may change between now and then. Updating the Foundation Mission Statement -- I'm not really sure what this is, beyond what it says. Preview 2018 event strategy -- I've been told this is a presentation/discussion led by Lauren to talk about our in-person events such as the Summit/Forum and PTG. Next steps for fixing bylaws typo -- This is our second agenda item, to talk about fixing the obvious error in the appendix to the bylaws that covers selecting TC members and defines "Active Technical Contributor" (section 3.b.i of https://www.openstack.org/legal/technical-committee-member-policy/). When the bylaws were last modified, this section was changed to say "An Individual Member is an ATC who has..." meaning that a Foundation member must be an ATC. Clearly this is not the desired intent, and the text should read "An ATC is an Individual Member who has...". We need the Foundation legal folks to help us understand whether we need a membership vote to fix the wording, or if we can have the board amend it. The goal for this section of the meeting is to get someone to make that determination and then for the rest of us to settle on the next steps we will take to correct the issue. Doug From prometheanfire at gentoo.org Wed May 9 20:14:37 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 9 May 2018 15:14:37 -0500 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> Message-ID: <20180509201437.ahrneoh3ovwi4555@gentoo.org> On 18-05-09 12:24:32, Clay Gerrard wrote: > On Wed, May 9, 2018 at 12:08 PM, Matthew Thode > wrote: > > > > > * Proper fix would be to make ceph support the account field > > > > Is the 'rgw_swift_account_in_url' option not correct/sufficient? 
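(For background on why the path form matters to the client-side check at all: a tempurl signature is an HMAC over the request method, the expiry, and the exact request path, so the generator and the server must agree on the path layout. A minimal sketch, assuming Swift's documented HMAC-SHA1 tempurl scheme; the key, account, and object names are placeholders:

    import hmac
    from hashlib import sha1
    from time import time

    key = b'MYTEMPURLKEY'  # placeholder temp-url key
    method, expires = 'GET', int(time() + 3600)

    # swift proper includes the account in the path; radosgw by default does not
    swift_path = '/v1/AUTH_test/container/object'
    rgw_path = '/v1/container/object'

    def sign(path):
        # the signature covers the exact path string
        hmac_body = '%s\n%s\n%s' % (method, expires, path)
        return hmac.new(key, hmac_body.encode('utf-8'), sha1).hexdigest()

    print(sign(swift_path))
    print(sign(rgw_path))  # a different path form yields a different signature

The same key produces different signatures for the two path forms, which is why a client-side restriction on the path layout effectively decides which servers a generated tempurl can work against.)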
> I didn't see that option, I'll test and get back to you on it. > > * Workaround would be to specify an old swiftclient to install (3.1.0, > > pre-ocata) > > > > Doesn't seem great if a sysadmin wants to co-install the newer swiftclient > cli > > > > * Workaround would be to for swiftclient to be forked and 'fixed' > > > > > Not clear to me what the "fix" would be here - just don't do validation? > I'll assume the "fork threat" here is for completeness/emphasis :D > > Do you know if ironic works with "normal" swift tempurls or only the > radosgw implementation of the swift api? > -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From whayutin at redhat.com Wed May 9 20:20:02 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 09 May 2018 20:20:02 +0000 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: References: Message-ID: On Wed, May 9, 2018 at 3:25 PM Alex Schultz wrote: > Hello tripleo folks, > > So we've been experimenting with migrating some squads over to > storyboard[0] but this seems to be causing more issues than perhaps > it's worth. Since the upstream community would like to standardize on > Storyboard at some point, I would propose that we do a cut over of all > the tripleo bugs/blueprints from Launchpad to Storyboard. > > In the irc meeting this week[1], I asked that the tripleo-ci team make > sure the existing scripts that we use to monitor bugs for CI support > Storyboard. I would consider this a prerequisite for the migration. > I am thinking it would be beneficial to get this done before or as > close to M2. > > Thoughts, concerns, etc? > Just clarifying. You would like to have the tooling updated by M2, which is fine I think. However squads are not expected to change all their existing procedures by M2 correct? I'm concerned about migrating our current kanban boards to storyboard by M2. Thanks > > Thanks, > -Alex > > [0] https://storyboard.openstack.org/#!/project_group/76 > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed May 9 20:27:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 09 May 2018 16:27:43 -0400 Subject: [openstack-dev] [tc] agenda for upcoming joint leadership meeting in vancouver In-Reply-To: <1525896030-sup-1329@lrrr.local> References: <1525896030-sup-1329@lrrr.local> Message-ID: <1525897562-sup-4680@lrrr.local> Excerpts from Doug Hellmann's message of 2018-05-09 16:06:01 -0400: > > The last time we had a joint leadership meeting, several TC members > expressed the desire to have more information about what was going > to be covered before the meeting. The agenda for the next meeting > (to be held in Vancouver), is in the wiki at > https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting > (I suggest "watching" that page to keep an eye on updates because > it is still subject to change). 
The agenda items are a bit terse, > so I thought I would send this email to give my understanding of > what will be discussed. If you have questions or anything to add, > please post a follow up message. > > Welcome to new members -- We start with introductions for new > folks, since there are new members of all 3 groups attending. This > is usually just a chance to put a face with a name, but sometimes > we mention employers. Because of the nature of the board, its members > are usually more interested in who employs new board members than > the TC or UC are about our members. > > Executive Team Update -- Jonathan, Lauren, Mark, and Thierry will > give updates about the summit and Foundation. We usually learn things > like registration numbers for the event, major themes, etc. > > User Committee Update -- Melvin and Matt will talk about the work > the UC has been doing. I don't have any real details about that. > > Strategic Discussions -- I put our topic about seeking more > contributors with more time to spend upstream here. It's not entirely > clear this is what Alan had in mind for this section, and since > I've asked for the entire hour this may change. > > As I've mentioned a few times in channel, I have been working on > putting hard numbers together to demonstrate that individual > contributors, given time to establish their credibility and actually > do some work, can have a significant positive impact on the community > and the direction of OpenStack. So far, I have a few cases drawn > from contributions to our community goals and help-wanted list to > demonstrate both success and failure to make this point. I hope > that by framing it this way we can move on from the problem statement, > which we have covered a few times in the past, and have a more > fruitful discussion of how to convince engineering managers and > other decision makers to allow developers to contribute in these > ways. I will share the slide deck with everyone after the meeting, > since it may change between now and then. > > Updating the Foundation Mission Statement -- I'm not really sure > what this is, beyond what it says. > > Preview 2018 event strategy -- I've been told this is a > presentation/discussion led by Lauren to talk about our in-person > events such as the Summit/Forum and PTG. > > Next steps for fixing bylaws typo -- This is our second agenda item, > to talk about fixing the obvious error in the appendix to the bylaws > that covers selecting TC members and defines "Active Technical > Contributor" (section 3.b.i of > https://www.openstack.org/legal/technical-committee-member-policy/). > > When the bylaws were last modified, this section was changed to say > "An Individual Member is an ATC who has..." meaning that a Foundation > member must be an ATC. Clearly this is not the desired intent, and > the text should read "An ATC is an Individual Member who has...". > We need the Foundation legal folks to help us understand whether > we need a membership vote to fix the wording, or if we can have the > board amend it. The goal for this section of the meeting is to get > someone to make that determination and then for the rest of us to > settle on the next steps we will take to correct the issue. > > Doug Oops, I cut that off before the end of the agenda. The last few items are part of a formal Board of Directors meeting. 
Those are generally open except when they need to discuss something in an Executive session, but only the Board members contribute to the meeting unless someone else is specifically called on. As you can see, it looks like this time around this section of the meeting will only consist of committee updates. Doug From aschultz at redhat.com Wed May 9 20:28:35 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 9 May 2018 14:28:35 -0600 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: References: Message-ID: On Wed, May 9, 2018 at 2:20 PM, Wesley Hayutin wrote: > > > On Wed, May 9, 2018 at 3:25 PM Alex Schultz wrote: >> >> Hello tripleo folks, >> >> So we've been experimenting with migrating some squads over to >> storyboard[0] but this seems to be causing more issues than perhaps >> it's worth. Since the upstream community would like to standardize on >> Storyboard at some point, I would propose that we do a cut over of all >> the tripleo bugs/blueprints from Launchpad to Storyboard. >> >> In the irc meeting this week[1], I asked that the tripleo-ci team make >> sure the existing scripts that we use to monitor bugs for CI support >> Storyboard. I would consider this a prerequisite for the migration. >> I am thinking it would be beneficial to get this done before or as >> close to M2. >> >> Thoughts, concerns, etc? > > > Just clarifying. You would like to have the tooling updated by M2, which is > fine I think. However squads are not expected to change all their existing > procedures by M2 correct? I'm concerned about migrating our current kanban > boards to storyboard by M2. > I'm talking about tooling (irc bot/monitoring) and launchpad migration complete by m2. Any other boards can wait until squads want to move over. Thanks, -Alex > Thanks > >> >> >> Thanks, >> -Alex >> >> [0] https://storyboard.openstack.org/#!/project_group/76 >> [1] >> http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gerard.damm at wipro.com Wed May 9 20:49:05 2018 From: gerard.damm at wipro.com (gerard.damm at wipro.com) Date: Wed, 9 May 2018 20:49:05 +0000 Subject: [openstack-dev] [sdk] issues with using OpenStack SDK Python client In-Reply-To: References: Message-ID: Many thanks ! Adding a gateway to a router: it looks like it works in the router creation method, but not in the router update method. 
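A minimal sketch of the two call styles being compared in this report may help readers follow along. The connection setup and names are placeholders, and whether update_router() accepts external_gateway_info this way is an assumption drawn from the SDK's generic update path, not something confirmed in this thread:

    import openstack

    conn = openstack.connect(cloud='mycloud')  # placeholder cloud name
    external_network = conn.network.find_network('public')  # placeholder network
    gateway_info = {'network_id': external_network.id, 'enable_snat': True}

    # reported to work: set the gateway at creation time
    router = conn.network.create_router(
        name='router-with-gateway', external_gateway_info=gateway_info)

    # reported to fail: add the gateway to an existing router afterwards
    conn.network.add_gateway_to_router(router, **gateway_info)

    # possible alternative to try (assumption): update the resource directly
    conn.network.update_router(router, external_gateway_info=gateway_info)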
(i.e., passing an external_gateway_info dictionary in conn.network.create_router() worked, but doing the same with conn.network.add_gateway_to_router() did not work)

The (verbose) code snippet is:

    external_network = conn.network.find_network(EXTERNAL_NETWORK_NAME)
    onap_gateway_external_subnets = []
    for ext_subn_id in external_network.subnet_ids:
        onap_gateway_external_subnets.append({'subnet_id': ext_subn_id})
    network_dict_body = {
        'network_id': external_network.id,
        'enable_snat': True,
        'external_fixed_ips': onap_gateway_external_subnets}
    onap_router = conn.network.create_router(
        name=ONAP_ROUTER_NAME,
        description=ONAP_ROUTER_DESC,
        external_gateway_info=network_dict_body,
        is_admin_state_up=True)

I got the idea of trying it at creation time from the create_router API (external_gateway_info entry) at https://developer.openstack.org/api-ref/network/v2/#layer-3-networking (thanks for the tip!)

So there might be a difference between how network.create_router() handles this external_gateway_info, versus how network.add_gateway_to_router() handles it.

About identity_api_version in clouds.yaml: yep, that works !

About the "!=None" comparison problem: OK, thanks for the fixing effort in progress !

From cbodley at redhat.com Wed May 9 21:10:12 2018
From: cbodley at redhat.com (Casey Bodley)
Date: Wed, 9 May 2018 17:10:12 -0400
Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
In-Reply-To: <4125568e-af78-15ea-d664-749db5ce3d63@redhat.com>
References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <4125568e-af78-15ea-d664-749db5ce3d63@redhat.com>
Message-ID: 

On 05/09/2018 11:40 AM, Casey Bodley wrote:
>
> On 05/09/2018 11:22 AM, Matthew Thode wrote:
>> python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw
>> tempurls. That is, there was no official support, but it still worked.
>>
>> In 3.2.0 (specifically the linked commit(s)) tempurls were validated to
>> require /v1/account/container/object, which does not work with radosgw
>> as it expects /v1/container/object. This means that radosgw tempurls
>> fail to work, which further means that radosgw will stop working for
>> things like ironic.
>>
>> I can see the point that swiftclient should not care about ceph not
>> fully implementing the swift spec and not supporting the radosgw url
>> syntax, but it seems like a step back. If this is not fixed then things
>> like ironic will not work with radosgw for Ocata and above (as that's
>> when this change was made). We'd need to wait for either ceph to fix
>> this and support the account part of the url (probably just dropping it)
>> or have people fork python-swiftclient to 'fix' it.
>>
>> I'm not sure what the right answer is...
>> >> https://github.com/openstack/python-swiftclient/commit/4c955751d340a8f71a2eebdb3c58d90b36874a66
>> https://github.com/openstack/ironic/blob/214b694f05d200ac1e2ce6db631546f2831c01f7/ironic/common/glance_service/v2/image_service.py#L152-L185
>>
>> https://bugs.launchpad.net/ironic/+bug/1747384
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> Thanks for raising the issue. Radosgw does have a config option
> 'rgw_swift_account_in_url' to expect this url format, though it
> defaults to false and I'm not 100% sure that it applies correctly to
> tempurls.

Marcus Watts confirmed that this does work with tempurl.

From juliaashleykreger at gmail.com Wed May 9 21:30:31 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Wed, 9 May 2018 17:30:31 -0400
Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above
In-Reply-To: 
References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <4125568e-af78-15ea-d664-749db5ce3d63@redhat.com>
Message-ID: 

On Wed, May 9, 2018 at 5:10 PM, Casey Bodley wrote:
>
> On 05/09/2018 11:40 AM, Casey Bodley wrote:
>>
>> On 05/09/2018 11:22 AM, Matthew Thode wrote:
>>>
>>> python-swiftclient prior to 3.2.0 seemed to incidentally support radosgw
>>> tempurls. That is, there was no official support, but it still worked.
>>>
>>> In 3.2.0 (specifically the linked commit(s)) tempurls were validated to
>>> require /v1/account/container/object, which does not work with radosgw
>>> as it expects /v1/container/object. This means that radosgw tempurls
>>> fail to work, which further means that radosgw will stop working for
>>> things like ironic.

What is the value in the validation of the URL path as such? It seems like the client shouldn't really care about the precise format of the end-user-supplied URL as long as the server returns the expected response.

>>> I can see the point that swiftclient should not care about ceph not
>>> fully implementing the swift spec and not supporting the radosgw url
>>> syntax, but it seems like a step back. If this is not fixed then things
>>> like ironic will not work with radosgw for Ocata and above (as that's
>>> when this change was made). We'd need to wait for either ceph to fix
>>> this and support the account part of the url (probably just dropping it)
>>> or have people fork python-swiftclient to 'fix' it.
>>>
>>> I'm not sure what the right answer is...

I'm personally -1 to pinning swiftclient as that will introduce headaches if someone tries to install ironic alongside anything that expects a newer client, or vice versa. I agree it seems like a step back, which is why I'm curious about the value of having the check.

The fourth option is for ironic to abruptly drop all related code and support for radosgw temp urls, but that too would be a setback and negative for OpenStack in general.

From kennelson11 at gmail.com Wed May 9 21:53:15 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 09 May 2018 21:53:15 +0000
Subject: [openstack-dev] [First Contact] [SIG] Forum Etherpads
Message-ID: 

Hello Everyone!

I created etherpads for both of our Forum Sessions that were accepted and added them to the master list[1]. Please feel free to add discussion topics to them whether you can attend or not!
First Contact SIG Operator Inclusion:
https://etherpad.openstack.org/p/FC-SIG-Ops-Inclusion

Drafting Requirements for Organisations Contributing to Open
https://etherpad.openstack.org/p/Reqs-for-Organisations-Contributing-to-OpenStack

11 Days till Forum Fun

-Kendall Nelson (diablo_rojo)

[1] https://wiki.openstack.org/wiki/Forum/Vancouver2018#Wednesday.2C_May_23
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sangho at opennetworking.org Thu May 10 00:21:15 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Thu, 10 May 2018 08:21:15 +0800
Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors
In-Reply-To: <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com>
References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com>
Message-ID: <6438243B-6740-44EA-9BF4-2F472AD39BE1@opennetworking.org>

Andreas,

Thank you for your answer. Actually, I was able to make it use the correct neutron API in my local tox tests, and all tests passed. However, only in Zuul, I am still getting the following errors. :-(

Thank you,

Sangho

> On May 9, 2018, at 4:04 PM, Andreas Scheuring wrote:
>
> neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to update the networking-onos code and fix all imports there and push the changes...
>
> ---
> Andreas Scheuring (andreas_s)
>
> On 9. May 2018, at 10:00, Sangho Shin wrote:
>
> Hello,
>
> I am getting the following unit test error in Zuul test. See below.
> The error is caused only in the pike version; in the stable/ocata version, I do not have the error.
> (If you can give me any clue, it would be very helpful.)
>
> BTW, in nosetests, there is no error.
> However, in tox -e py27 tests, I am getting different errors like below. They are caused because the tests are somehow using a different version of the neutron library. The actual neutron is installed in the /opt/stack/neutron path, and it has the correct python files, such as callbacks and driver api, which the import errors below complain about.
>
> So, I would like to know how to specify the correct neutron location in tox tests.
>
> Thank you,
>
> Sangho
>
> tox -e py27 errors.
> > --------------------------------- > > > ========================= > Failures during discovery > ========================= > --- import errors --- > Failed to import test module: networking_onos.tests.unit.extensions.test_driver > Traceback (most recent call last): > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path > module = self._get_module_from_name(name) > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name > __import__(name) > File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in > import networking_onos.extensions.securitygroup as onos_sg_driver > File "networking_onos/extensions/securitygroup.py", line 21, in > from networking_onos.extensions import callback > File "networking_onos/extensions/callback.py", line 15, in > from neutron.callbacks import events > ImportError: No module named callbacks > > Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver > Traceback (most recent call last): > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path > module = self._get_module_from_name(name) > File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name > __import__(name) > File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in > from neutron.plugins.ml2 import driver_api as api > ImportError: cannot import name driver_api > > > > > > > Zuul errors. > > --------------------------- > > Traceback (most recent call last): > 2018-05-09 05:12:30.077594 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py ", line 1182, in _execute_context > 2018-05-09 05:12:30.077653 | ubuntu-xenial | context) > 2018-05-09 05:12:30.077964 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py ", line 470, in do_execute > 2018-05-09 05:12:30.078065 | ubuntu-xenial | cursor.execute(statement, parameters) > 2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type. > 2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details. 
> 2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last): > 2018-05-09 05:12:30.078683 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py ", line 98, in resource > 2018-05-09 05:12:30.078791 | ubuntu-xenial | result = method(request=request, **args) > 2018-05-09 05:12:30.079085 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 615, in update > 2018-05-09 05:12:30.079202 | ubuntu-xenial | return self._update(request, id, body, **kwargs) > 2018-05-09 05:12:30.079480 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 93, in wrapped > 2018-05-09 05:12:30.079574 | ubuntu-xenial | setattr(e, '_RETRY_EXCEEDED', True) > 2018-05-09 05:12:30.079870 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.079941 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.080242 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.080350 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.080629 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 89, in wrapped > 2018-05-09 05:12:30.080706 | ubuntu-xenial | return f(*args, **kwargs) > 2018-05-09 05:12:30.080985 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 150, in wrapper > 2018-05-09 05:12:30.081064 | ubuntu-xenial | ectxt.value = e.inner_exc > 2018-05-09 05:12:30.081363 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.081433 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.081733 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.081849 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.082131 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 138, in wrapper > 2018-05-09 05:12:30.082208 | ubuntu-xenial | return f(*args, **kwargs) > 2018-05-09 05:12:30.082489 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 128, in wrapped > 2018-05-09 05:12:30.082620 | ubuntu-xenial | LOG.debug("Retry wrapper got retriable exception: %s", e) > 2018-05-09 05:12:30.082931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.083006 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.083306 | ubuntu-xenial | File 
"/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.083415 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.083696 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 124, in wrapped > 2018-05-09 05:12:30.083786 | ubuntu-xenial | return f(*dup_args, **dup_kwargs) > 2018-05-09 05:12:30.084081 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 676, in _update > 2018-05-09 05:12:30.084161 | ubuntu-xenial | original=orig_object_copy) > 2018-05-09 05:12:30.084466 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py ", line 53, in notify > 2018-05-09 05:12:30.084611 | ubuntu-xenial | _get_callback_manager().notify(resource, event, trigger, **kwargs) > 2018-05-09 05:12:30.084932 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 105, in _wrapped > 2018-05-09 05:12:30.085026 | ubuntu-xenial | raise db_exc.RetryRequest(e) > 2018-05-09 05:12:30.085319 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ > 2018-05-09 05:12:30.085387 | ubuntu-xenial | self.force_reraise() > 2018-05-09 05:12:30.085687 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise > 2018-05-09 05:12:30.085796 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) > 2018-05-09 05:12:30.086098 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 100, in _wrapped > 2018-05-09 05:12:30.086192 | ubuntu-xenial | return function(*args, **kwargs) > 2018-05-09 05:12:30.086499 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py ", line 152, in notify > 2018-05-09 05:12:30.086613 | ubuntu-xenial | raise exceptions.CallbackFailure(errors=errors) > 2018-05-09 05:12:30.094917 | ubuntu-xenial | CallbackFailure: Callback neutron.notifiers.nova.Notifier._send_nova_notification-115311 failed with "(sqlite3.InterfaceError) Error binding parameter 0 - probably unsupported type. 
[SQL: u'SELECT ports.project_id AS ports_project_id, ports.id AS ports_id, ports.name AS ports_name, ports.network_id AS ports_network_id, ports.mac_address AS ports_mac_address, ports.admin_state_up AS ports_admin_state_up, ports.status AS ports_status, ports.device_id AS ports_device_id, ports.device_owner AS ports_device_owner, ports.ip_allocation AS ports_ip_allocation, ports.standard_attr_id AS ports_standard_attr_id, standardattributes_1.id AS standardattributes_1_id, standardattributes_1.resource_type AS standardattributes_1_resource_type, standardattributes_1.description AS standardattributes_1_description, standardattributes_1.revision_number AS standardattributes_1_revision_number, standardattributes_1.created_at AS standardattributes_1_created_at, standardattributes_1.updated_at AS standardattributes_1_updated_at, securitygroupportbindings_1.port_id AS securitygroupportbindings_1_port_id, securitygroupportbindings_1.security_group_id AS securitygroupportbindings_1_security_group_id, portbindingports_1.port_id AS portbindingports_1_port_id, portbindingports_1.host AS portbindingports_1_host, portdataplanestatuses_1.port_id AS portdataplanestatuses_1_port_id, portdataplanestatuses_1.data_plane_status AS portdataplanestatuses_1_data_plane_status, portsecuritybindings_1.port_id AS portsecuritybindings_1_port_id, portsecuritybindings_1.port_security_enabled AS portsecuritybindings_1_port_security_enabled, ml2_port_bindings_1.port_id AS ml2_port_bindings_1_port_id, ml2_port_bindings_1.host AS ml2_port_bindings_1_host, ml2_port_bindings_1.vnic_type AS ml2_port_bindings_1_vnic_type, ml2_port_bindings_1.profile AS ml2_port_bindings_1_profile, ml2_port_bindings_1.vif_type AS ml2_port_bindings_1_vif_type, ml2_port_bindings_1.vif_details AS ml2_port_bindings_1_vif_details, ml2_port_bindings_1.status AS ml2_port_bindings_1_status, portdnses_1.port_id AS portdnses_1_port_id, portdnses_1.current_dns_name AS portdnses_1_current_dns_name, portdnses_1.current_dns_domain AS portdnses_1_current_dns_domain, portdnses_1.previous_dns_name AS portdnses_1_previous_dns_name, portdnses_1.previous_dns_domain AS portdnses_1_previous_dns_domain, portdnses_1.dns_name AS portdnses_1_dns_name, portdnses_1.dns_domain AS portdnses_1_dns_domain, qos_port_policy_bindings_1.policy_id AS qos_port_policy_bindings_1_policy_id, qos_port_policy_bindings_1.port_id AS qos_port_policy_bindings_1_port_id, standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, standardattributes_2.description AS standardattributes_2_description, standardattributes_2.revision_number AS standardattributes_2_revision_number, standardattributes_2.created_at AS standardattributes_2_created_at, standardattributes_2.updated_at AS standardattributes_2_updated_at, trunks_1.project_id AS trunks_1_project_id, trunks_1.id AS trunks_1_id, trunks_1.admin_state_up AS trunks_1_admin_state_up, trunks_1.name AS trunks_1_name, trunks_1.port_id AS trunks_1_port_id, trunks_1.status AS trunks_1_status, trunks_1.standard_attr_id AS trunks_1_standard_attr_id, subports_1.port_id AS subports_1_port_id, subports_1.trunk_id AS subports_1_trunk_id, subports_1.segmentation_type AS subports_1_segmentation_type, subports_1.segmentation_id AS subports_1_segmentation_id \nFROM ports LEFT OUTER JOIN standardattributes AS standardattributes_1 ON standardattributes_1.id = ports.standard_attr_id LEFT OUTER JOIN securitygroupportbindings AS securitygroupportbindings_1 ON ports.id = 
securitygroupportbindings_1.port_id LEFT OUTER JOIN portbindingports AS portbindingports_1 ON ports.id = portbindingports_1.port_id LEFT OUTER JOIN portdataplanestatuses AS portdataplanestatuses_1 ON ports.id = portdataplanestatuses_1.port_id LEFT OUTER JOIN portsecuritybindings AS portsecuritybindings_1 ON ports.id = portsecuritybindings_1.port_id LEFT OUTER JOIN ml2_port_bindings AS ml2_port_bindings_1 ON ports.id = ml2_port_bindings_1.port_id LEFT OUTER JOIN portdnses AS portdnses_1 ON ports.id = portdnses_1.port_id LEFT OUTER JOIN qos_port_policy_bindings AS qos_port_policy_bindings_1 ON ports.id = qos_port_policy_bindings_1.port_id LEFT OUTER JOIN trunks AS trunks_1 ON ports.id = trunks_1.port_id LEFT OUTER JOIN standardattributes AS standardattributes_2 ON standardattributes_2.id = trunks_1.standard_attr_id LEFT OUTER JOIN subports AS subports_1 ON ports.id = subports_1.port_id \nWHERE ports.id = ?'] [parameters: (,)]" > 2018-05-09 05:12:30.097463 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip [1.435310s] ... FAILED > 2018-05-09 05:12:30.097519 | ubuntu-xenial | > 2018-05-09 05:12:30.097608 | ubuntu-xenial | Captured traceback: > 2018-05-09 05:12:30.097702 | ubuntu-xenial | ~~~~~~~~~~~~~~~~~~~ > 2018-05-09 05:12:30.097838 | ubuntu-xenial | Traceback (most recent call last): > 2018-05-09 05:12:30.098230 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py ", line 118, in func > 2018-05-09 05:12:30.098369 | ubuntu-xenial | return f(self, *args, **kwargs) > 2018-05-09 05:12:30.098642 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 166, in test_update_floating_ip > 2018-05-09 05:12:30.098858 | ubuntu-xenial | resp = self._test_send_msg(floating_ip_request, 'put', url) > 2018-05-09 05:12:30.099090 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 96, in _test_send_msg > 2018-05-09 05:12:30.099261 | ubuntu-xenial | resp = self.api.put(url, self.serialize(dict_info)) > 2018-05-09 05:12:30.099597 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 395, in put > 2018-05-09 05:12:30.099712 | ubuntu-xenial | content_type=content_type, > 2018-05-09 05:12:30.100056 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 747, in _gen_request > 2018-05-09 05:12:30.100164 | ubuntu-xenial | expect_errors=expect_errors) > 2018-05-09 05:12:30.100486 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 643, in do_request > 2018-05-09 05:12:30.100603 | ubuntu-xenial | self._check_status(status, res) > 2018-05-09 05:12:30.100931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 675, in _check_status > 2018-05-09 05:12:30.101002 | ubuntu-xenial | res) > 2018-05-09 05:12:30.101354 | ubuntu-xenial | webtest.app.AppError: Bad response: 500 Internal Server Error (not 200 OK or 3xx redirect for http://localhost/floatingips/7464aaf0-27ea-448a-97df-51732f9e0e25.json ) > 2018-05-09 05:12:30.101685 | ubuntu-xenial | '{"NeutronError": {"message": "Request Failed: internal server error while processing 
your request.", "detail": "", "type": "HTTPInternalServerError"}}' > 2018-05-09 05:12:30.101735 | ubuntu-xenial | > 2018-05-09 05:12:30.102007 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_port_postcommit [0.004284s] ... ok > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu May 10 02:43:42 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 10 May 2018 02:43:42 +0000 Subject: [openstack-dev] [tripleo] gate jobs impacted RAX yum mirror Message-ID: FYI.. https://bugs.launchpad.net/tripleo/+bug/1770298 I'm on #openstack-infra chatting w/ Ian atm. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From natsume.takashi at lab.ntt.co.jp Thu May 10 05:50:12 2018 From: natsume.takashi at lab.ntt.co.jp (Takashi Natsume) Date: Thu, 10 May 2018 14:50:12 +0900 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: <1525772990.5489.1@smtp.office365.com> References: <1525772990.5489.1@smtp.office365.com> Message-ID: The temporary fix to pass the gate jobs is to mock the 'warnings.warn' method in the test method. Then add a TODO note to fix storing non-UUID value in the 'map_instances' command of the 'CellV2Commands' class. The fundamental solution is to change the design of the 'map_instances' command. In the first place, it is not good to store non-UUID value in the UUID field. In some compute REST APIs, it returns the 'marker' parameter in their pagination. Then users can specify the 'marker' parameter in the next request. So it is one way to change the command to stop storing a 'marker' value in the InstanceMapping (instance_mappings) DB table and return (print) a 'marker' value and be able to be specifid the 'marker' value as the command line argument. On 2018/05/08 18:49, Balázs Gibizer wrote: > Hi, > > The oslo UUIDField emits a warning if the string used as a field value > does not pass the validation of the uuid.UUID(str(value)) call [3]. All > the offending places are fixed in nova except the nova-manage cell_v2 > map_instances call [1][2]. That call uses markers in the DB that are not > valid UUIDs. If we could fix this last offender then we could merge the > patch [4] that changes the this warning to an exception in the nova > tests to avoid such future rule violations. > > However I'm not sure it is easy to fix. Replacing > 'INSTANCE_MIGRATION_MARKER' at [1] to '00000000-0000-0000-0000-00000000' > might work but I don't know what to do with instance_uuid.replace(' ', > '-') [2] to make it a valid uuid. Also I think that if there is an > unfinished mapping in the deployment and then the marker is changed in > the code that leads to inconsistencies. > > I'm open to any suggestions. 
> > Cheers, > gibi > > > [1] > https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1168 > > [2] > https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1180 > > [3] > https://github.com/openstack/oslo.versionedobjects/blob/29e643e4a93333866b33965b68fc8dfb8acf30fa/oslo_versionedobjects/fields.py#L359 > > [4] https://review.openstack.org/#/c/540386 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Regards, Takashi Natsume NTT Software Innovation Center E-mail: natsume.takashi at lab.ntt.co.jp From gmann at ghanshyammann.com Thu May 10 06:02:42 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 10 May 2018 15:02:42 +0900 Subject: [openstack-dev] Overriding project-templates in Zuul In-Reply-To: <87r2munnnv.fsf@meyer.lemoncheese.net> References: <87o9i04rfa.fsf@meyer.lemoncheese.net> <87bmdzwbpz.fsf@meyer.lemoncheese.net> <87r2munnnv.fsf@meyer.lemoncheese.net> Message-ID: On Wed, May 2, 2018 at 11:21 PM, James E. Blair wrote: > Joshua Hesketh writes: > >>> >>> I think in actuality, both operations would end up as intersections: >>> >>> ================ ======== ======= ======= >>> Matcher Template Project Result >>> ================ ======== ======= ======= >>> files AB BC B >>> irrelevant-files AB BC B >>> ================ ======== ======= ======= >>> >>> So with the "combine" method, it's always possible to further restrict >>> where the job runs, but never to expand it. >> >> Ignoring the 'files' above, in the example of 'irrelevant-files' haven't >> you just combined the results to expand when it runs? ie, A and C are /not/ >> excluded and therefore the job will run when there are changes to A or C? >> >> I would expect the table to be something like: >> ================ ======== ======= ======= >> Matcher Template Project Result >> ================ ======== ======= ======= >> files AB BC B >> irrelevant-files AB BC ABC >> ================ ======== ======= ======= > > Sure, we'll go with that. :) > >>> > So a job with "files: tests/" and "irrelevant-files: docs/" would do >>> > whatever it is that happens when you specify both. >>> >>> In this case, I'm pretty sure that would mean it reduces to just "files: >>> tests/", but I've never claimed to understand irrelevant-files and I >>> won't start now. >> >> Yes, I think you are right that this would reduce to that. However, what >> about the use case of: >> files: tests/* >> irrelevant-files: tests/docs/* >> >> I could see a use case where both of those would be helpful. Yes you could >> describe that as one regex but to the end user the above may be expected to >> work. Unless we make the two options mutually exclusive I feel like this is >> a feature we should support. (That said, it's likely a separate >> feature/functionality than what is being described now). > > Today, that means: run the job if a file in tests/ is changed AND any > file outside of tests/docs/* is changed. A change to tests/foo matches > the irrelevant-files matcher, and also the files matcher, so it runs. A > change to tests/docs/foo matches the files matcher but not the > irrelevant-files matcher, so it doesn't run. I really hope I got that > right. 
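To pin down the combined files/irrelevant-files semantics just described, a small illustrative sketch; this is an editorial paraphrase in Python, not Zuul's actual matcher code, and the patterns mirror the tests/ and tests/docs/ example:

    import re

    def job_should_run(changed_files,
                       files=r'^tests/.*',
                       irrelevant=r'^tests/docs/.*'):
        # run if some changed file matches `files` AND some changed
        # file falls outside `irrelevant-files`
        matches_files = any(re.match(files, f) for f in changed_files)
        outside_irrelevant = any(
            not re.match(irrelevant, f) for f in changed_files)
        return matches_files and outside_irrelevant

    print(job_should_run(['tests/foo']))       # True: the job runs
    print(job_should_run(['tests/docs/foo']))  # False: the job is skipped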
> Anyway, that is an example of something that's possible to
> express with both.
>
> I lumped in the idea of pairing files/irrelevant-files with Proposal 2
> because I thought that being able to override them is key, and switching
> from one to the other was part of that, and, to be honest, I don't think
> people should ever combine them because it's hard enough to deal with
> one, but maybe that's too much of an implicit behavior change, and
> instead we should separate that out and consider it as its own change
> later. I believe a user could still stop the matchers by saying
> "files: .*" and "irrelevant-files: ^$" in the project-local variant.
>
> Let's revise Proposal #2 to omit that:
>
> Proposal 2: Files and irrelevant-files are treated as overwriteable
> attributes and evaluated after branch-matching variants are combined.
>
> * Files and irrelevant-files are overwritten, so the last value
>   encountered when combining all the matching variants (looking only at
>   branches) wins.
> * It's possible to both reduce and expand the scope of jobs, but the
>   user may need to manually copy values from a parent or other variant
>   in order to do so.
> * It will no longer be possible to alter a job attribute by adding a
>   variant with only a files matcher -- in all cases files and
>   irrelevant-files are used solely to determine whether the job is run,
>   not to determine whether to apply a variant.

This is a limitation of this proposal, but I am not sure how many use cases there are for this feature; I have not seen it used in jobs so far.

Yes, Proposal #2 also looks good for the nova use case [1], where jobs from the integrated-gate templates need to be controlled by the nova pipeline definition, mainly via 'irrelevant-files'. This approach has the benefit of being easy to read: one place shows that the job is controlled by the overridden values ('files', 'irrelevant-files').

-gmann

>> Anyway, I feel like Proposal #2 is more how I would expect the system to
>> behave.
>>
>> I can see an argument for combining the results (and feel like you could
>> evaluate that at the end after combining the branch-matching variants) to
>> give something like:
>> ================ ======== ======= =======
>> Matcher          Template Project Result
>> ================ ======== ======= =======
>> files            AB       BC      ABC
>> irrelevant-files AB       BC      ABC
>> ================ ======== ======= =======
>>
>> However, that gives the user no way to remove a previously listed option.
>> Thus overwriting may be the better solution (i.e. proposal #2 as written)
>> unless we want to explore the option of allowing a syntax that says
>> "extend" or "overwrite".
>>
>> Yours in hoping that made sense,
>> Josh

> As much as anything with irrelevant-files does, yes. :)
>
> -Jim

[1] https://bugs.launchpad.net/nova/+bug/1745431 , https://bugs.launchpad.net/nova/+bug/1745405

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From ianyrchoi at gmail.com Thu May 10 07:43:13 2018 From: ianyrchoi at gmail.com (Ian Y.
Choi) Date: Thu, 10 May 2018 16:43:13 +0900 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: <3975d21e-c35e-ca3a-0833-906f0c75cef6@gmail.com> Hello Alex, Alexandra Settle wrote on 5/9/2018 10:13 PM: > > I have had such a great time being a part of something as exciting and > new as OpenStack, and I hope to continue to lurk in the background of > IRC like a total weirdo. I hope to perform some super shit karaoke > with you all in another part of the world :) (who knows, maybe I'll > just tag along to PTGs as a social outing... how cool am I?!) > It was so great for me to work with you as PTL during the Pike cycle. Since the Pike PTG, thanks to lots of kind help from the Documentation team, I strongly believe that the I18n team successfully settled into Pike and later PTGs through deep collaboration with the Documentation team :) I appreciate the many kind things you have done for me - I learned a lot from your wonderful activities. I really hope that everything goes well for you and that we all meet again in even better circumstances! With many thanks, /Ian From chenxingcampus at outlook.com Thu May 10 08:40:44 2018 From: chenxingcampus at outlook.com (Chan Chason) Date: Thu, 10 May 2018 08:40:44 +0000 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: Alex, Sorry to see you go .. :-( I'd like to give you my best wishes for your next adventure, and I hope we can work together again in the near future. Don't be a stranger, Chason On 9 May 2018, at 21:13, Alexandra Settle wrote: Hi all, It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams - and take a step back from the community. The last year has taken me in a completely different direction to what I expected, and try as I might I just don't have the time to be even a part-time member of this great community :( Although I'm moving on, and learning new things, nothing can beat the memories of SnowpenStack and Denver's super awesome trains. I know this isn't some acceptance speech at the Oscars - but I just want to thank the Foundation and everyone who donates to the travel program. Without you guys, I wouldn't have been a part of the community as much as I have been and met all your lovely faces. I have had such a great time being a part of something as exciting and new as OpenStack, and I hope to continue to lurk in the background of IRC like a total weirdo. I hope to perform some super shit karaoke with you all in another part of the world :) (who knows, maybe I'll just tag along to PTGs as a social outing... how cool am I?!) I'd also like to thank Mugsie for this sweet shot which is the perfect summary of my time with the OpenStack community. Read into this what you will: Don't be a stranger, Alex IRC: asettle Twitter: dewsday Email: a.settle at outlook.com __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sc at linux.it Thu May 10 09:45:33 2018 From: sc at linux.it (Stefano Canepa) Date: Thu, 10 May 2018 10:45:33 +0100 Subject: [openstack-dev] [all][monasca] pysnmp autogenerated code Message-ID: All, I'm writing a notification plugin for monasca that forwards alarms using SNMP. I'm using pysnmp and I'm transforming my MIB into Python so it can be loaded faster in my code. The issue I have is that the utility from pysmi (a pysnmp dependency) does not generate pycodestyle-compliant code even though it is supposed to. Manually editing the autogenerated code to get it past the pep8 gate does not look like a good idea to me; have you ever faced this same problem? Is there an easy and clean solution? Any help is really appreciated. All the best Stefano -- Stefano Canepa sc at linux.it or stefano at canepa.ge.it www.stefanocanepa.it Three great virtues of a programmer: laziness, impatience, and hubris. (Larry Wall) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ietingof at redhat.com Thu May 10 09:55:34 2018 From: ietingof at redhat.com (Ilya Etingof) Date: Thu, 10 May 2018 11:55:34 +0200 Subject: [openstack-dev] [all][monasca] pysnmp autogenerated code In-Reply-To: References: Message-ID: <143a9d8c-64a1-76d8-a191-70a966ba41cb@redhat.com> Hi Stefano, The best solution would be of course to fix the pysmi code generator [1] to behave. ;-) On the other hand, if you don't include the autogenerated code in your package, the code generation would happen just once at run time - the autogenerated module would get cached on the file system and loaded from there ever after. Theoretically, not pinning the Python MIB in your package has the advantage of letting pysmi pull a newer ASN.1 MIB and turn it into Python whenever a newer MIB revision becomes available. 1. https://github.com/etingof/pysmi/blob/master/pysmi/codegen/pysnmp.py On 05/10/2018 11:45 AM, Stefano Canepa wrote: > All, > I'm writing a notification plugin for monasca that forwards alarms using > SNMP. I'm using pysnmp and I'm transforming my MIB into Python so it can > be loaded faster in my code. > > The issue I have is that the utility from pysmi (a pysnmp dependency) does > not generate pycodestyle-compliant code even though it is supposed to. Manually > editing the autogenerated code to get it past the pep8 gate does not look like a > good idea to me; have you ever faced this same problem? Is there an easy > and clean solution? > > Any help is really appreciated. > > All the best > Stefano
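[Editor's note: a minimal sketch of the runtime-compilation flow Ilya describes, assuming pysnmp's documented MIB-compiler hook; the MIB name and source path below are placeholders:]

    from pysnmp.smi import builder, compiler

    mib_builder = builder.MibBuilder()

    # Attach the pysmi-based compiler: ASN.1 MIBs found at the given
    # sources are turned into Python on first use and cached on the file
    # system (by default under ~/.pysnmp/mibs), so subsequent runs load
    # the cached module instead of recompiling.
    compiler.addMibCompiler(mib_builder,
                            sources=['file:///usr/share/snmp/mibs'])

    # The first call compiles and caches; later calls hit the cache.
    mib_builder.loadModules('MY-NOTIFICATION-MIB')

With this arrangement no generated code needs to be committed to the plugin's repository, so the pep8 gate never sees it.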
> > Will keep trying to make this process as painless for you as possible, > so please bear with us for now, and sorry for any inconvenience > > *[1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info > * > > 2018-05-05 12:15 GMT+08:00 Rico Lin : > >> looping heat-dashboard team >> >> 2018-05-05 12:02 GMT+08:00 Rico Lin : >> >>> Dear all Heat members and friends >>> >>> As you might be aware, OpenStack projects are scheduled to migrate ([5]) >>> from Launchpad to StoryBoard [1]. >>> For those who would like to know where to file a bug/blueprint, here are some >>> heads-ups for you. >>> >>> *What's StoryBoard?* >>> StoryBoard is a cross-project task-tracker. It contains a number of >>> ``project``s, and each project contains a number of ``story``s, which you can think >>> of as issues or blueprints. Each story in turn contains one or multiple >>> ``task``s (tasks break a story down into the steps needed to resolve/implement it). To >>> learn more about StoryBoard or how to write a good story, you can refer to >>> [6]. >>> >>> *How to file a bug?* >>> This is actually simple: use your current Ubuntu One id to log in to >>> StoryBoard. Then find the corresponding project in [2] and create a story >>> for it with a description of your issue. We should try to create tasks that >>> can be referenced by patches in Gerrit. >>> >>> *How to work on a spec (blueprint)?* >>> File a story like you used to file a blueprint, and create tasks for your >>> plan. You might also want to create a task for adding a spec (in the heat-specs >>> repo) if your blueprint needs documents to explain it. >>> I will leave the current blueprint page open, so if you would like to create a >>> story from a BP, you can still get the information. From now on we will work >>> with a task-driven workflow, so BPs act no differently from bugs in >>> StoryBoard (a BP is just a story with many tasks). >>> >>> *Where should I put my story?* >>> We migrated all heat sub-projects to StoryBoard to try to keep the impact >>> on whatever you're doing as small as possible. However, if you plan to >>> create a new story, *please create it under the heat project [4]* and tag >>> it with whatever it might affect (like python-heatclient, heat-dashboard, >>> heat-agents). We do hope to let users focus their stories in one place so >>> all stories get better attention and project maintainers don't need to >>> search separate places to find them. >>> >>> *How to connect from Gerrit to StoryBoard?* >>> We used to use the following keys to reference Launchpad: >>> Closes-Bug: ####### >>> Partial-Bug: ####### >>> Related-Bug: ####### >>> >>> Now in StoryBoard, you can use the following keys (see the example >>> commit message just after this email): >>> Task: ###### >>> Story: ###### >>> You can find more info in [3]. >>> >>> *What do I need to do for my existing bugs/BPs?* >>> Your bug is automatically migrated to StoryBoard; however, the references >>> in your patches were not, so you need to change your commit message to >>> replace the old link to Launchpad with a new link to StoryBoard. >>> >>> *Do we still need Launchpad after all this migration is done?* >>> As planned, we won't need Launchpad for heat anymore once we are done >>> with the migration. We will forbid filing new bugs/BPs in Launchpad, and will try to >>> provide as much new information as possible. Hopefully, we can make >>> everyone happy. For bugs newly created during/after the migration, don't >>> worry: we will disallow creating new bugs/BPs and do a second migration >>> so we won't miss yours.
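[Editor's note: to make the Gerrit keys above concrete, a commit message using them might look like the following; the story and task numbers are invented:]

    Fix stack resource ordering during delete

    Describe the change as usual here.

    Task: 23456
    Story: 2001234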
>>> >>> [1] https://storyboard.openstack.org/ >>> [2] https://storyboard.openstack.org/#!/project_group/82 >>> [3] https://docs.openstack.org/infra/manual/developers.html#development-workflow >>> [4] https://storyboard.openstack.org/#!/project/989 >>> [5] https://docs.openstack.org/infra/storyboard/migration.html >>> [6] https://docs.openstack.org/infra/storyboard/gui/tasks_stories_tags.html#what-is-a-story >>> >>> >>> >>> -- >>> May The Force of OpenStack Be With You, >>> >>> *Rico Lin* irc: ricolin >>> >>> >> >> >> -- >> May The Force of OpenStack Be With You, >> >> *Rico Lin* irc: ricolin >> >> > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin* irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From gergely.csatari at nokia.com Thu May 10 11:30:39 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Thu, 10 May 2018 11:30:39 +0000 Subject: [openstack-dev] [edge][keystone][forum]: Keystone edge brainstorming etherpad Message-ID: Hi, I've added some initial text to the Etherpad [1] of the Possible edge architectures for Keystone Forum session [2]. Please add your comments and also indicate your willingness to participate. Thanks, Gerg0 [1]: https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming [2]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone -------------- next part -------------- An HTML attachment was scrubbed... URL: From wang.yuxin at ostorage.com.cn Thu May 10 12:07:03 2018 From: wang.yuxin at ostorage.com.cn (Yuxin Wang) Date: Thu, 10 May 2018 20:07:03 +0800 Subject: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster Message-ID: <43BF82CA-5B47-495D-A164-6C8F5E882995@ostorage.com.cn> Dear all, How can containers be kept unique across a Swift cluster? I'm working on a Swift project, and our customer cares about S3 compatibility very much. I tested our Swift cluster with ceph/s3-tests and analyzed the failed cases. It turns out that many of the failures are related to unique containers/buckets, but as we know, containers are only unique within a tenant/project. After googling, I can't find much info about that. I wonder if there are approaches to accomplish such a unique-container constraint in a Swift cluster? I thought about adding a proxy app in front of the proxy-server that would maintain a mapping from each unique container to its account and check the uniqueness of every container PUT request (see the sketch after this message), but it seems impractical. Do you have any ideas on how to do this -- or on why not to do it? I'd highly appreciate any suggestions. Best regards, Yuxin
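[Editor's note: a rough sketch of the kind of proxy middleware Yuxin describes, to make the idea concrete. The class and its shared `claims` store are hypothetical; the hard part in practice is making that store consistent and highly available across the whole cluster, which is why the approach feels impractical:]

    class UniqueContainerMiddleware(object):
        """Reject container PUTs whose name is owned by another account."""

        def __init__(self, app, claims):
            self.app = app
            self.claims = claims  # container name -> owning account

        def __call__(self, environ, start_response):
            parts = environ.get('PATH_INFO', '').split('/')
            # A container PUT path looks like /v1/AUTH_account/container
            if environ.get('REQUEST_METHOD') == 'PUT' and len(parts) == 4:
                _, _version, account, container = parts
                owner = self.claims.setdefault(container, account)
                if owner != account:
                    start_response('409 Conflict',
                                   [('Content-Type', 'text/plain')])
                    return [b'Container name already in use\n']
            return self.app(environ, start_response)

From juliaashleykreger at gmail.com Thu May 10 13:01:42 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 10 May 2018 09:01:42 -0400 Subject: [openstack-dev] Ironic Status Updates In-Reply-To: <1525725113-sup-1502@lrrr.local> References: <1525725113-sup-1502@lrrr.local> Message-ID: On Mon, May 7, 2018 at 4:34 PM, Doug Hellmann wrote: > As a consumer of team updates from outside of the team, I do find > them valuable. Ditto, if I have time to read them.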
> I think having a regular email update like that is a good communication > pattern we've started to establish with several teams, and I'm going > to ask the TC to help find ways to make those updates more visible > for folks who want to stay informed but can't spend the time it > takes to read all of the messages on the mailing list (blogs, RSS, > twitter, etc.). > > So, I hope the Ironic team can find a volunteer (or several to share > the work?) to step in and continue with summaries in some form. > I suspect finding a replacement is going to be a little hard for anything that is more than a ten to fifteen minute commitment per week. Perhaps if there was some sort of unified way that might make things easier and we could then all coalesce in terms of update format, amount of pertinent information versus noise. I'm totally on-board for something easy, I'm also just not sure email has the same impact as it once did. Anyway, things to discuss in the hallway track at the summit. :) From ietingof at redhat.com Thu May 10 13:48:34 2018 From: ietingof at redhat.com (Ilya Etingof) Date: Thu, 10 May 2018 15:48:34 +0200 Subject: [openstack-dev] Ironic Status Updates In-Reply-To: References: <1525725113-sup-1502@lrrr.local> Message-ID: <3571dbce-31fb-7a75-9927-081eb8e75e61@redhat.com> On 05/10/2018 03:01 PM, Julia Kreger wrote: > On Mon, May 7, 2018 at 4:34 PM, Doug Hellmann wrote: > >> As a consumer of team updates from outside of the team, I do find >> them valuable. > > Ditto, if I have time to read them. > > >> I think having a regular email update like that is a good communication >> pattern we've started to establish with several teams, and I'm going >> to ask the TC to help find ways to make those updates more visible >> for folks who want to stay informed but can't spend the time it >> takes to read all of the messages on the mailing list (blogs, RSS, >> twitter, etc.). >> >> So, I hope the Ironic team can find a volunteer (or several to share >> the work?) to step in and continue with summaries in some form. >> > > I suspect finding a replacement is going to be a little hard for anything that > is more than a ten to fifteen minute commitment per week. Perhaps if there was > some sort of unified way that might make things easier and we could then > all coalesce in terms of update format, amount of pertinent information > versus noise. The status updates we used to have [1] were essentially weekly snapshots of the Ironic Whiteboard [2]. That probably explains why they are 1) somewhat large/noisy and 2) changing only partially. My mental diff capabilities now risk declining. ;-) While we still have the whiteboard [2], does it make sense to duplicate it in e-mail [1]? Would it work better if we'd: * Report just progress/changes in the e-mail, not the whole whiteboard (which is still one click away) and/or * Report essential discussions/resolutions occurred at the weekly IRC meeting WDYT? I can probably do something about those status reports if people find them useful in one form or the other. 1. http://lists.openstack.org/pipermail/openstack-dev/2018-May/130257.html 2. https://etherpad.openstack.org/p/IronicWhiteBoard > > I'm totally on-board for something easy, I'm also just not sure email has > the same impact as it once did. Anyway, things to discuss in the hallway track > at the summit. 
:) > From thierry at openstack.org Thu May 10 13:56:49 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 10 May 2018 15:56:49 +0200 Subject: [openstack-dev] [forum] Etherpad for "Ops/Devs: One community" session Message-ID: <0e25c5a4-ef13-f877-0114-ec2468079b03@openstack.org> Hi! I have created an etherpad for the "Ops/Devs: One community" Forum session that will happen in Vancouver on Monday at 4:20pm. https://etherpad.openstack.org/p/YVR-ops-devs-one-community If you are interested in continuing breaking up the community silos and making everyone "contributors" with various backgrounds but a single objective, please add to it and join the session ! -- Thierry Carrez (ttx) From sfinucan at redhat.com Thu May 10 14:07:20 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 10 May 2018 15:07:20 +0100 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: <1525772990.5489.1@smtp.office365.com> References: <1525772990.5489.1@smtp.office365.com> Message-ID: <1525961240.4163.1.camel@redhat.com> On Tue, 2018-05-08 at 11:49 +0200, Balázs Gibizer wrote: > Hi, > > The oslo UUIDField emits a warning if the string used as a field value > does not pass the validation of the uuid.UUID(str(value)) call [3]. All > the offending places are fixed in nova except the nova-manage cell_v2 > map_instances call [1][2]. That call uses markers in the DB that are > not valid UUIDs. If we could fix this last offender then we could merge > the patch [4] that changes the this warning to an exception in the nova > tests to avoid such future rule violations. > > However I'm not sure it is easy to fix. Replacing > 'INSTANCE_MIGRATION_MARKER' at [1] to > '00000000-0000-0000-0000-00000000' might work but I don't know what to > do with instance_uuid.replace(' ', '-') [2] to make it a valid uuid. > Also I think that if there is an unfinished mapping in the deployment > and then the marker is changed in the code that leads to > inconsistencies. > > I'm open to any suggestions. > > Cheers, > gibi This is a somewhat complicated issue. I haven't got any ideas to solve this (edleafe tried and failed) but I have submitted a patch to explain why we do this, pending a real resolution. https://review.openstack.org/567597 Stephen > > [1] > https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1168 > [2] > https://github.com/openstack/nova/blob/09af976016a83288df22ac6ed1cce1676c2294cc/nova/cmd/manage.py#L1180 > [3] > https://github.com/openstack/oslo.versionedobjects/blob/29e643e4a93333866b33965b68fc8dfb8acf30fa/oslo_versionedobjects/fields.py#L359 > [4] https://review.openstack.org/#/c/540386 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sc at linux.it Thu May 10 15:01:26 2018 From: sc at linux.it (Stefano Canepa) Date: Thu, 10 May 2018 16:01:26 +0100 Subject: [openstack-dev] [all][monasca] pysnmp autogenerated code In-Reply-To: <143a9d8c-64a1-76d8-a191-70a966ba41cb@redhat.com> References: <143a9d8c-64a1-76d8-a191-70a966ba41cb@redhat.com> Message-ID: On 10 May 2018 at 10:55, Ilya Etingof wrote: > > Hi Stefano, > > The best solution would be of course to fix pysmi code generator [1] to > behave. 
;-) > ​This is something that pysmi author already gives for granted in the Release notes. I bet you know this better then me ;-)​ > On the other hand, if you won't include the autogenerated code into your > package, the code generation would happen just once at run time - the > autogenerated module would get cached on the file system and loaded from > there ever after. > > Theoretically, not pinning Python MIB in your package has an advantage > of letting pysmi pulling newer ASN.1 MIB and turning it into Python > whenever newer MIB revision becomes available. > ​Ilya you're confusing me. Do you mean that, even if I load my MIB and all other it depends on from ASN.1, they are compiled into python byte code and cached and blah blah? ​ ​All the best Stefano -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu May 10 15:05:25 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 10 May 2018 10:05:25 -0500 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180509201437.ahrneoh3ovwi4555@gentoo.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> <20180509201437.ahrneoh3ovwi4555@gentoo.org> Message-ID: <20180510150525.qczwtx3blzmgbwgq@mthode.org> On 18-05-09 15:14:37, Matthew Thode wrote: > On 18-05-09 12:24:32, Clay Gerrard wrote: > > On Wed, May 9, 2018 at 12:08 PM, Matthew Thode > > wrote: > > > > > > > > * Proper fix would be to make ceph support the account field > > > > > > > Is the 'rgw_swift_account_in_url' option not correct/sufficient? > > > > I didn't see that option, I'll test and get back to you on it. > Confirmed that the option works. > > > * Workaround would be to specify an old swiftclient to install (3.1.0, > > > pre-ocata) > > > > > > > Doesn't seem great if a sysadmin wants to co-install the newer swiftclient > > cli > > > > > > > * Workaround would be to for swiftclient to be forked and 'fixed' > > > > > > > > Not clear to me what the "fix" would be here - just don't do validation? > > I'll assume the "fork threat" here is for completeness/emphasis :D > > > > Do you know if ironic works with "normal" swift tempurls or only the > > radosgw implementation of the swift api? > > > -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jim at jimrollenhagen.com Thu May 10 15:09:41 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 10 May 2018 11:09:41 -0400 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180510150525.qczwtx3blzmgbwgq@mthode.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> <20180509201437.ahrneoh3ovwi4555@gentoo.org> <20180510150525.qczwtx3blzmgbwgq@mthode.org> Message-ID: On Thu, May 10, 2018 at 11:05 AM, Matthew Thode wrote: > On 18-05-09 15:14:37, Matthew Thode wrote: > > On 18-05-09 12:24:32, Clay Gerrard wrote: > > > On Wed, May 9, 2018 at 12:08 PM, Matthew Thode < > prometheanfire at gentoo.org> > > > wrote: > > > > > > > > > > > * Proper fix would be to make ceph support the account field > > > > > > > > > > Is the 'rgw_swift_account_in_url' option not correct/sufficient? 
> > > > > > > I didn't see that option, I'll test and get back to you on it. > > > > Confirmed that the option works. > Awesome. Can someone submit a patch? It will need to remove the special URL building for radosgw linked earlier in the thread, and add a reference to this config option in the docs: https://git.openstack.org/cgit/openstack/ironic/tree/doc/source/admin/radosgw.rst Thanks! // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From ed at leafe.com Thu May 10 15:14:47 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 10 May 2018 10:14:47 -0500 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: References: <1525772990.5489.1@smtp.office365.com> Message-ID: On May 10, 2018, at 12:50 AM, Takashi Natsume wrote: > > So it is one way to change the command to stop storing a 'marker' value > in the InstanceMapping (instance_mappings) DB table > and return (print) a 'marker' value and be able to be specifid > the 'marker' value as the command line argument. Anything that gets rid of the awful hack used for the instance mapping code would be welcome. Storing a marker in the table is terrible, and then munging a UUID to be a not-UUID is even worse. My cleanup at least got rid of the second hack, but I would have preferred to fix the whole thing by not storing the marker in the first place. -- Ed Leafe From prometheanfire at gentoo.org Thu May 10 15:19:53 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 10 May 2018 10:19:53 -0500 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> <20180509201437.ahrneoh3ovwi4555@gentoo.org> <20180510150525.qczwtx3blzmgbwgq@mthode.org> Message-ID: <20180510151953.ydwxgtj4acvfm3cc@gentoo.org> On 18-05-10 11:09:41, Jim Rollenhagen wrote: > On Thu, May 10, 2018 at 11:05 AM, Matthew Thode > wrote: > > > On 18-05-09 15:14:37, Matthew Thode wrote: > > > On 18-05-09 12:24:32, Clay Gerrard wrote: > > > > On Wed, May 9, 2018 at 12:08 PM, Matthew Thode < > > prometheanfire at gentoo.org> > > > > wrote: > > > > > > > > > > > > > > * Proper fix would be to make ceph support the account field > > > > > > > > > > > > > Is the 'rgw_swift_account_in_url' option not correct/sufficient? > > > > > > > > > > I didn't see that option, I'll test and get back to you on it. > > > > > > > Confirmed that the option works. > > > > Awesome. Can someone submit a patch? It will need to remove the special URL > building for radosgw linked earlier in the thread, and add a reference to > this config option in the docs: > https://git.openstack.org/cgit/openstack/ironic/tree/doc/source/admin/radosgw.rst > > Thanks! > Sure, I can make a first pass at it. Given swift removing support for the 'other' method, there will be other changes too though. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From jim at jimrollenhagen.com Thu May 10 15:29:35 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Thu, 10 May 2018 11:29:35 -0400 Subject: [openstack-dev] [swift][ironic][ceph][radosgw] radosgw "support" in python-swiftclient droped for ocata and above In-Reply-To: <20180510151953.ydwxgtj4acvfm3cc@gentoo.org> References: <20180509152244.75ihypuvaxqv7cw6@gentoo.org> <20180509190815.bdm3dv5xaqugdea7@gentoo.org> <20180509201437.ahrneoh3ovwi4555@gentoo.org> <20180510150525.qczwtx3blzmgbwgq@mthode.org> <20180510151953.ydwxgtj4acvfm3cc@gentoo.org> Message-ID: On Thu, May 10, 2018 at 11:19 AM, Matthew Thode wrote: > On 18-05-10 11:09:41, Jim Rollenhagen wrote: > > > > Awesome. Can someone submit a patch? It will need to remove the special > URL > > building for radosgw linked earlier in the thread, and add a reference to > > this config option in the docs: > > https://git.openstack.org/cgit/openstack/ironic/tree/ > doc/source/admin/radosgw.rst > > > > Thanks! > > > > Sure, I can make a first pass at it. Given swift removing support for > the 'other' method, there will be other changes too though. > > Thanks! If it's close someone should be able to take it from there :) // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu May 10 15:42:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 10 May 2018 10:42:57 -0500 Subject: [openstack-dev] [release] Release countdown for week R-15, May 14-18 Message-ID: <20180510154257.GA31753@sm-xps> The moment you've all been waiting for - this week's release countdown email! Development Focus ----------------- Work on new features should be well underway. With the Forum coming up, and the Rocky-2 milestone just a few weeks away, teams should be thinking about what they can actually accomplish in this cycle. General Information ------------------- We have cycle-with-intermediary projects without a release this cycle. Please take a look and see if these are ready to do a release for this cycle yet. It's best to "release early, release often". bifrost blazar-dashboard blazar-nova castellan ceilometer ceilometermiddleware cliff cloudkitty-dashboard cloudkitty debtcollector glance-store heat-translator ironic-inspector ironic-python-agent ironic-ui ironic karbor-dashboard karbor kuryr-kubernetes kuryr-libnetwork kuryr ldappool magnum-ui magnum masakari-dashboard monasca-kibana-plugin monasca-log-api monasca-notification murano-agent networking-baremetal networking-generic-switch networking-hyperv neutron-fwaas-dashboard neutron-vpnaas-dashboard oslo.context ovsdbapp panko pycadf python-aodhclient python-barbicanclient python-blazarclient python-brick-cinderclient-ext python-cinderclient python-cloudkittyclient python-congressclient python-cyborgclient python-designateclient python-karborclient python-magnumclient python-masakariclient python-muranoclient python-octaviaclient python-pankoclient python-searchlightclient python-senlinclient python-solumclient python-swiftclient python-tricircleclient python-vitrageclient python-zaqarclient requestsexceptions requirements senlin-dashboard solum-dashboard solum stevedore swift tacker-horizon tacker taskflow tempest tricircle vitrage-dashboard vitrage zun-ui zun We had a few forced releases in Queens done by the release team, and we would really prefer not to be the ones initiating those. 
There are quite a few projects that have not responded to the mutable config [1] and mox removal [2] Rocky series goals. Just a reminder that teams should respond to these goals, even if they do not trigger any work for your specific project. [1] https://storyboard.openstack.org/#!/story/2001545 [2] https://storyboard.openstack.org/#!/story/2001546 Upcoming Deadlines & Dates -------------------------- Forum at OpenStack Summit in Vancouver: May 21-24 Rocky-2 Milestone: June 7 -- Sean McGinnis (smcginnis) From cdent+os at anticdent.org Thu May 10 16:25:32 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 10 May 2018 17:25:32 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Very quick meeting today, just edleafe and me (cdent). We reviewed last week's action items, both of which were accomplished, both related to the recent GraphQL [8] discussions [7] and preparation for a meeting of the API SIG at Forum [9]. If you're there we hope to see you. There being no recent changes to pending guidelines nor bugs, we ended the meeting early. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines None # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! 
# References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130219.html [8] http://graphql.org/ [9] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From miguel at mlavalle.com Thu May 10 17:03:36 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Thu, 10 May 2018 12:03:36 -0500 Subject: [openstack-dev] [neutron] [fwaas] Proposal for the evolution of the FWaaS API Message-ID: Hi, As discussed during the weekly FWaaS IRC meeting, there is a new proposal for the evolution of the FWaaS API here: https://docs.google.com/document/d/1lnzV6pv841pX43sM76gF3aZ7jceRH3FPbKaGpPumWgs/edit This proposal is based on the current FWaaS V2.0 API as documented here: https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/fwaas-api-2.0.html. The key additional features proposed are: 1. Firewall groups not only associate with ports but also with subnets, other firewall groups and dynamic rules. A list of excluded ports can be specified. 2. Dynamic rules make possible the association with Nova instances by security tags and VM names. 3. Source and destination address groups can be lists. 4. A re-direct action in firewall rules. 5. A priority attribute in firewall policies. 6. A default rule resource. The agreement in the meeting was for the team to help identify the areas where there are incremental features in the proposal compared to what is currently in place, plus what is already planned for implementation. A spec will be developed based on that increment. We will meet in Vancouver to continue the conversation face to face. Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Thu May 10 18:48:31 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 10 May 2018 11:48:31 -0700 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: <1525772990.5489.1@smtp.office365.com> ("Balázs Gibizer"'s message of "Tue, 8 May 2018 11:49:50 +0200") References: <1525772990.5489.1@smtp.office365.com> Message-ID: > The oslo UUIDField emits a warning if the string used as a field value > does not pass the validation of the uuid.UUID(str(value)) call > [3]. All the offending places are fixed in nova except the nova-manage > cell_v2 map_instances call [1][2]. That call uses markers in the DB > that are not valid UUIDs. No, that call uses markers in the DB that don't fit the canonical string representation of a UUID that the oslo library is looking for. There are many ways to serialize a UUID: https://en.wikipedia.org/wiki/Universally_unique_identifier#Format The 8-4-4-4-12 format is one of them (and the most popular).
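[Editor's note: a quick stdlib-only illustration of the distinction being drawn here -- oslo validates with uuid.UUID(str(value)), per [3] in the quoted message:]

    import uuid

    canonical = '3f0ae1a2-fde4-4a4f-a2e6-4f0a4c27ef44'

    # Several serializations all parse as the same UUID:
    uuid.UUID(canonical)                    # 8-4-4-4-12 form
    uuid.UUID(canonical.replace('-', ''))   # bare 32-character hex
    uuid.UUID('urn:uuid:' + canonical)      # URN form

    # But the map_instances marker, which swaps dashes for spaces to
    # dodge the unique constraint on the column, is no longer parseable,
    # so the oslo UUIDField emits its warning:
    uuid.UUID(canonical.replace('-', ' '))  # raises ValueError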
Changing the dashes to spaces does not make it not a UUID, it makes it not the same _string_ and it's done (for better or worse) in the aforementioned code to skirt the database's UUID-ignorant _string_ uniqueness constraint. > If we could fix this last offender then we could merge the patch [4] > that changes this warning to an exception in the nova tests to > avoid such future rule violations. > > However I'm not sure it is easy to fix. Replacing > 'INSTANCE_MIGRATION_MARKER' at [1] with > '00000000-0000-0000-0000-00000000' might work The project_id field on the object is not a UUIDField, nor is it 36 characters in the database schema. It can't be because project ids are not guaranteed to be UUIDs. > but I don't know what to do with instance_uuid.replace(' ', '-') [2] > to make it a valid uuid. Also I think that if there is an unfinished > mapping in the deployment and then the marker is changed in the code > that leads to inconsistencies. IMHO, it would be bad to do anything that breaks people in the middle of a mapping procedure. While I understand the desire to have fewer spurious warnings in the test runs, I feel like doing anything to impact the UX or performance of runtime code to make the unit test output cleaner is a bad idea. > I'm open to any suggestions. We already store values in this field that are not 8-4-4-4-12, and the oslo field warning is just a warning. If people feel like we need to do something, I propose we just do this: https://review.openstack.org/#/c/567669/ It is one of those "we normally wouldn't do this with object schemas, but we know this is okay" sort of situations. Personally, I'd just make the offending tests shut up about the warning and move on, but I'm also okay with the above solution if people prefer. --Dan From dms at danplanet.com Thu May 10 18:52:40 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 10 May 2018 11:52:40 -0700 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: (Takashi Natsume's message of "Thu, 10 May 2018 14:50:12 +0900") References: <1525772990.5489.1@smtp.office365.com> Message-ID: Takashi Natsume writes: > In some compute REST APIs, it returns the 'marker' parameter > in their pagination. > Then users can specify the 'marker' parameter in the next request. How is this possible? The only way we would get the marker is if we either (a) listed the mappings by project_id, using INSTANCE_MAPPING_MARKER as the query value, or (b) listed all the mappings and somehow returned those to the user. I don't think (a) is a thing, and I'm not seeing how (b) could be either. If you know of a place, please write a functional test for it and we can get it resolved.
> > > I think having a regular email update like that is a good communication > > pattern we've started to establish with several teams, and I'm going > > to ask the TC to help find ways to make those updates more visible > > for folks who want to stay informed but can't spend the time it > > takes to read all of the messages on the mailing list (blogs, RSS, > > twitter, etc.). > > > > So, I hope the Ironic team can find a volunteer (or several to share > > the work?) to step in and continue with summaries in some form. > > > > I suspect finding a replacement is going to be a little hard for anything that > is more than a ten to fifteen minute commitment per week. Perhaps if there was > some sort of unified way that might make things easier and we could then > all coalesce in terms of update format, amount of pertinent information > versus noise. > > I'm totally on-board for something easy, I'm also just not sure email has > the same impact as it once did. Anyway, things to discuss in the hallway track > at the summit. :) > Changing the primary outlet for status updates to some other medium increases the friction for writing them a slight bit, but it makes following up with questions or more detail significantly harder, since the person doing the follow up may not have access to post on the blog or other outlet. So, I have been trying to get people to standardize on the mailing list as the main form of communication, and then use other forms for highlighting important threads for folks who can't read the whole list (even I don't really do that any more). I have some ideas about how to highlight threads outside of the mailing list using a blog and twitter that need more time to bake before I have something ready to experiment with. Maybe we can talk about it in Vancouver, if this is an area that interests you? Doug From doug at doughellmann.com Thu May 10 19:18:55 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 10 May 2018 15:18:55 -0400 Subject: [openstack-dev] Ironic Status Updates In-Reply-To: <3571dbce-31fb-7a75-9927-081eb8e75e61@redhat.com> References: <1525725113-sup-1502@lrrr.local> <3571dbce-31fb-7a75-9927-081eb8e75e61@redhat.com> Message-ID: <1525979869-sup-3108@lrrr.local> Excerpts from Ilya Etingof's message of 2018-05-10 15:48:34 +0200: > On 05/10/2018 03:01 PM, Julia Kreger wrote: > > On Mon, May 7, 2018 at 4:34 PM, Doug Hellmann wrote: > > > >> As a consumer of team updates from outside of the team, I do find > >> them valuable. > > > > Ditto, if I have time to read them. > > > > > >> I think having a regular email update like that is a good communication > >> pattern we've started to establish with several teams, and I'm going > >> to ask the TC to help find ways to make those updates more visible > >> for folks who want to stay informed but can't spend the time it > >> takes to read all of the messages on the mailing list (blogs, RSS, > >> twitter, etc.). > >> > >> So, I hope the Ironic team can find a volunteer (or several to share > >> the work?) to step in and continue with summaries in some form. > >> > > > > I suspect finding a replacement is going to be a little hard for anything that > > is more than a ten to fifteen minute commitment per week. Perhaps if there was > > some sort of unified way that might make things easier and we could then > > all coalesce in terms of update format, amount of pertinent information > > versus noise. > > The status updates we used to have [1] were essentially weekly snapshots > of the Ironic Whiteboard [2]. 
That probably explains why they are 1) > somewhat large/noisy and 2) changing only partially. My mental diff > capabilities now risk declining. ;-) > > While we still have the whiteboard [2], does it make sense to duplicate > it in e-mail [1]? > > Would it work better if we'd: > > * Report just progress/changes in the e-mail, not the whole whiteboard > (which is still one click away) > > and/or > > * Report essential discussions/resolutions occurred at the weekly IRC > meeting > > WDYT? Both of those things seem like they would be valuable. If we view these summaries as a way to help people find and understand the important discussions, actually writing them should become a bit easier. Doug > > I can probably do something about those status reports if people find > them useful in one form or the other. > > > 1. http://lists.openstack.org/pipermail/openstack-dev/2018-May/130257.html > 2. https://etherpad.openstack.org/p/IronicWhiteBoard > > > > > I'm totally on-board for something easy, I'm also just not sure email has > > the same impact as it once did. Anyway, things to discuss in the hallway track > > at the summit. :) > > > From mriedemos at gmail.com Thu May 10 19:42:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 10 May 2018 14:42:14 -0500 Subject: [openstack-dev] [nova] os-fping API removal Message-ID: <822f81cd-3c1b-ce97-6650-e9c6a8f698cd@gmail.com> We agreed to drop nova-network in Rocky [1] and I've started what I think is a template for dropping nova-network-only REST APIs [2] by doing the os-fping API first. This one was pretty easy and follows the pattern established by removing os-certificates and os-cloudpipe in Pike. I'll follow up with a short contributor doc on the things to remember when doing this, sort of like the "adding a new microversion" but in reverse. I don't plan on doing these all myself, but would like to make sure it all goes in a series, so if you would like to work on this, please sign up in the etherpad I've started [3]. [1] https://etherpad.openstack.org/p/nova-ptg-rocky [2] https://review.openstack.org/#/c/567682/ [3] https://etherpad.openstack.org/p/nova-network-removal-rocky -- Thanks, Matt From dms at danplanet.com Thu May 10 20:18:36 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 10 May 2018 13:18:36 -0700 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: (Dan Smith's message of "Thu, 10 May 2018 11:52:40 -0700") References: <1525772990.5489.1@smtp.office365.com> Message-ID: > Takashi Natsume writes: > >> In some compute REST APIs, it returns the 'marker' parameter >> in their pagination. >> Then users can specify the 'marker' parameter in the next request. I read this as you saying there was some way that the in-band marker mapping could be leaked to the user via the REST API. However, if you meant to just offer up the REST API's pagination as an example that we could follow in the nova-manage CLI, requiring users to provide the marker each time, then ignore this part: > How is this possible? The only way we would get the marker is if we > either (a) listed the mappings by project_id, using > INSTANCE_MAPPING_MARKER as the query value, or (b) listed all the > mappings and somehow returned those to the user. > > I don't think (a) is a thing, and I'm not seeing how (b) could be > either. If you know of a place, please write a functional test for it > and we can get it resolves. 
In my proposed patch, I added a filter to > ensure that this doesn't show up in the get_by_cell_id() query, but > again, I'm not sure how this would ever be exposed to a user. > > https://review.openstack.org/#/c/567669/1/nova/objects/instance_mapping.py at 173 As I said in my reply to gibi, I don't think making the user keep track of the marker is a very nice UX for a management CLI, nor is it as convenient for something like puppet to run as it has to parse the (grossly verbose) output each time to extract that marker. --Dan From zbitter at redhat.com Thu May 10 20:38:37 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 10 May 2018 16:38:37 -0400 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> Message-ID: On 17/04/18 05:24, Thierry Carrez wrote: > Hi everyone, > > As you know the Technical Committee (the governance body representing > contributors producing OpenStack software) meets with other OpenStack > governance bodies (Board of Directors and User Committee) on the Sunday > before every Summit, and Vancouver will be no exception. > > At the TC retrospective Forum session in Sydney we decided we should > more broadly ask our constituency for topics they would like us to cover > in that discussion. > > Once the current election cycle is over and the new TC chair is picked, > we'll come up with a proposed agenda and submit it to the Chairman of > the Board for consideration. > > So... Is there any specific topic you think we should cover in that > meeting ? There's one topic I've been thinking about that I think would be valuable to discuss with the Board and the UC. I don't know if we still have time to add stuff to the agenda for Vancouver, but if not then consider this my advance submission for Denver. OpenStack was bootstrapped using a very powerful positive feedback loop: in (very) broad-brush terms it started with a minimum viable product; users for whom that was enough to entice them tried it out and offered suggestions; vendors who wanted to sell to those users (as well as the users themselves) implemented the suggestions; both groups joined the Foundation, which marketed OpenStack to folks with similar needs. Obviously that is a good thing, but it also comes with the danger of getting trapped in a local maximum. Users for whom the product has not yet met the threshold of minimum viability are generally not going to show up, and their needs are no match for the feedback loop set up with the users who _have_ shown up. (Specifically, we are arguably only just now approaching the minimum viability point for the types of cloud-aware applications that are routinely written against the APIs of the big 3 proprietary clouds.) How can we avoid (or get out of) the local maximum trap and ensure that OpenStack will meet the needs of all the users we want to serve, not just those whose needs are similar to those of the users we already have? Discuss. thanks, Zane. 
From mriedemos at gmail.com Thu May 10 20:45:45 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 10 May 2018 15:45:45 -0500 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> Message-ID: <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> On 5/10/2018 3:38 PM, Zane Bitter wrote: > How can we avoid (or get out of) the local maximum trap and ensure that > OpenStack will meet the needs of all the users we want to serve, not > just those whose needs are similar to those of the users we already have? The phrase "jack of all trades, master of none" comes to mind here. Wasn't the explosion of big tent projects at least an indication of people trying to make OpenStack all things to all people and failing most of the time? -- Thanks, Matt From melwittt at gmail.com Thu May 10 20:48:39 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 10 May 2018 13:48:39 -0700 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: References: <1525772990.5489.1@smtp.office365.com> Message-ID: <15884f0a-a896-38b9-a8a7-6e8701e75b46@gmail.com> On Thu, 10 May 2018 11:48:31 -0700, Dan Smith wrote: > We already store values in this field that are not 8-4-4-4-12, and the > oslo field warning is just a warning. If people feel like we need to do > something, I propose we just do this: > > https://review.openstack.org/#/c/567669/ > > It is one of those "we normally wouldn't do this with object schemas, > but we know this is okay" sort of situations. I'm in favor of this "solution" because, as you mentioned earlier, project_id/user_id aren't supposed to be restricted to UUID-only or 36 characters anyway -- they come from the identity service and could be any string. We've been good about keeping with String(255) in the database schema for project_id/user_id originating from the identity service. And, I noticed Instance.project_id is a StringField too [1]. Really, IMHO we should be consistent with this field type among the various objects for project_id/user_id. Best, -melanie [1] https://github.com/openstack/nova/blob/e35e8d7/nova/objects/instance.py#L121 From doug at doughellmann.com Thu May 10 20:52:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 10 May 2018 16:52:09 -0400 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> Message-ID: <1525985480-sup-5377@lrrr.local> Excerpts from Zane Bitter's message of 2018-05-10 16:38:37 -0400: > On 17/04/18 05:24, Thierry Carrez wrote: > > Hi everyone, > > > > As you know the Technical Committee (the governance body representing > > contributors producing OpenStack software) meets with other OpenStack > > governance bodies (Board of Directors and User Committee) on the Sunday > > before every Summit, and Vancouver will be no exception. > > > > At the TC retrospective Forum session in Sydney we decided we should > > more broadly ask our constituency for topics they would like us to cover > > in that discussion. > > > > Once the current election cycle is over and the new TC chair is picked, > > we'll come up with a proposed agenda and submit it to the Chairman of > > the Board for consideration. > > > > So... Is there any specific topic you think we should cover in that > > meeting ? 
> > There's one topic I've been thinking about that I think would be > valuable to discuss with the Board and the UC. I don't know if we still > have time to add stuff to the agenda for Vancouver, but if not then > consider this my advance submission for Denver. > > OpenStack was bootstrapped using a very powerful positive feedback loop: > in (very) broad-brush terms it started with a minimum viable product; > users for whom that was enough to entice them tried it out and offered > suggestions; vendors who wanted to sell to those users (as well as the > users themselves) implemented the suggestions; both groups joined the > Foundation, which marketed OpenStack to folks with similar needs. > > Obviously that is a good thing, but it also comes with the danger of > getting trapped in a local maximum. Users for whom the product has not > yet met the threshold of minimum viability are generally not going to > show up, and their needs are no match for the feedback loop set up with > the users who _have_ shown up. (Specifically, we are arguably only just > now approaching the minimum viability point for the types of cloud-aware > applications that are routinely written against the APIs of the big 3 > proprietary clouds.) > > How can we avoid (or get out of) the local maximum trap and ensure that > OpenStack will meet the needs of all the users we want to serve, not > just those whose needs are similar to those of the users we already have? > > Discuss. > > thanks, > Zane. > This does feel like an excellent topic for one of these strategic discussion sessions, but I think the agenda is already full for this particular meeting. Maybe we can discuss it within the TC between now and Denver so we have a good way to frame the question and discussion at that meeting? Doug From zbitter at redhat.com Fri May 11 00:12:29 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 10 May 2018 20:12:29 -0400 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> Message-ID: On 10/05/18 16:45, Matt Riedemann wrote: > On 5/10/2018 3:38 PM, Zane Bitter wrote: >> How can we avoid (or get out of) the local maximum trap and ensure >> that OpenStack will meet the needs of all the users we want to serve, >> not just those whose needs are similar to those of the users we >> already have? > > The phrase "jack of all trades, master of none" comes to mind here. Stipulating the constraint that you can't please everybody, how do you ensure that you're meeting the needs of the users who are most important to the long-term sustainability of the project, and not just the ones who were easiest to bootstrap? It's the same question. > Wasn't the explosion of big tent projects at least an indication of > people trying to make OpenStack all things to all people and failing > most of the time? Well, just to take one example, we built a lot of things that you might characterise as application services, but not until Queens did we have a mechanism for applications to authenticate to those services without giving them the user's LDAP password. (Thanks Colleen!!!) Now, to the extent that those projects can be said to have "mostly fail[ed]", was it because we: * Tried to do too much? * Tried to do too little? * Tried to do the wrong things? Opinions differ. 
- ZB From zbitter at redhat.com Fri May 11 00:17:42 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 10 May 2018 20:17:42 -0400 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <1525985480-sup-5377@lrrr.local> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <1525985480-sup-5377@lrrr.local> Message-ID: <9fe69c40-e7fc-52eb-e81d-779f1f81e09d@redhat.com> On 10/05/18 16:52, Doug Hellmann wrote: > Excerpts from Zane Bitter's message of 2018-05-10 16:38:37 -0400: >> On 17/04/18 05:24, Thierry Carrez wrote: >>> Hi everyone, >>> >>> As you know the Technical Committee (the governance body representing >>> contributors producing OpenStack software) meets with other OpenStack >>> governance bodies (Board of Directors and User Committee) on the Sunday >>> before every Summit, and Vancouver will be no exception. >>> >>> At the TC retrospective Forum session in Sydney we decided we should >>> more broadly ask our constituency for topics they would like us to cover >>> in that discussion. >>> >>> Once the current election cycle is over and the new TC chair is picked, >>> we'll come up with a proposed agenda and submit it to the Chairman of >>> the Board for consideration. >>> >>> So... Is there any specific topic you think we should cover in that >>> meeting ? >> >> There's one topic I've been thinking about that I think would be >> valuable to discuss with the Board and the UC. I don't know if we still >> have time to add stuff to the agenda for Vancouver, but if not then >> consider this my advance submission for Denver. >> >> OpenStack was bootstrapped using a very powerful positive feedback loop: >> in (very) broad-brush terms it started with a minimum viable product; >> users for whom that was enough to entice them tried it out and offered >> suggestions; vendors who wanted to sell to those users (as well as the >> users themselves) implemented the suggestions; both groups joined the >> Foundation, which marketed OpenStack to folks with similar needs. >> >> Obviously that is a good thing, but it also comes with the danger of >> getting trapped in a local maximum. Users for whom the product has not >> yet met the threshold of minimum viability are generally not going to >> show up, and their needs are no match for the feedback loop set up with >> the users who _have_ shown up. (Specifically, we are arguably only just >> now approaching the minimum viability point for the types of cloud-aware >> applications that are routinely written against the APIs of the big 3 >> proprietary clouds.) >> >> How can we avoid (or get out of) the local maximum trap and ensure that >> OpenStack will meet the needs of all the users we want to serve, not >> just those whose needs are similar to those of the users we already have? >> >> Discuss. >> >> thanks, >> Zane. >> > > This does feel like an excellent topic for one of these strategic > discussion sessions, but I think the agenda is already full for this > particular meeting. Maybe we can discuss it within the TC between now > and Denver so we have a good way to frame the question and discussion at > that meeting? +1. As usual I've been struggling to find a good balance between being too abstract and too specific, so I'd appreciate help in framing the question to avoid sending the discussion directly into the weeds. cdent already had a good suggestion on IRC. cheers, Zane. 
From bzhaojyathousandy at gmail.com Fri May 11 01:15:53 2018
From: bzhaojyathousandy at gmail.com (bo zhaobo)
Date: Fri, 11 May 2018 09:15:53 +0800
Subject: [openstack-dev] [neutron] [fwaas] Proposal for the evolution of the FWaaS API
In-Reply-To: References: Message-ID: 
This proposal looks more flexible for network traffic security. The current FW V2 supports 2 security levels for a single Neutron port: one is the security group, the other is the firewall group, but this proposal looks like it supports more. Also, the firewall deployer/dispatcher needs some network knowledge to configure the specific fw rules, so it's necessary to provide a good user experience, like security tags or something similar.

2018-05-11 1:03 GMT+08:00 Miguel Lavalle :

> Hi,
>
> As discussed during the weekly FWaaS IRC meeting, there is a new proposal
> for the evolution of the FWaaS API here: https://docs.google.com/
> document/d/1lnzV6pv841pX43sM76gF3aZ7jceRH3FPbKaGpPumWgs/edit
>
> This proposal is based on the current FWaaS V2.0 API as documented here:
> https://specs.openstack.org/openstack/neutron-specs/specs/
> mitaka/fwaas-api-2.0.html. The key additional features proposed are:
>
> 1. Firewall groups not only associate with ports but also with
> subnets, other firewall groups and dynamic rules. A list of excluded ports
> can be specified
> 2. Dynamic rules make possible the association with Nova instances by
> security tags and VM names
> 3. Source and destination address groups can be lists
> 4. A re-direct action in firewall rules
> 5. Priority attribute in firewall policies
> 6. A default rule resource
>
> The agreement in the meeting was for the team to help identify the areas
> where there are incremental features in the proposal compared to what is
> currently in place, plus what is already planned for implementation. A
> spec will be developed based on that increment. We will meet in Vancouver
> to continue the conversation face to face.
>
> Best regards
>
> Miguel
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From zbitter at redhat.com Fri May 11 01:17:32 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Thu, 10 May 2018 21:17:32 -0400
Subject: [openstack-dev] [tc] [all] TC Report 18-17
In-Reply-To: References: Message-ID: 
On 24/04/18 08:35, Chris Dent wrote:
>
> The main TC-related activity over the past week has been the
> [elections](https://governance.openstack.org/election/) currently in
> progress. A quiet campaigning period burst into late activity with a
> series of four questions posted in email by Doug Hellmann:
>
> * [campaign question related to new
> projects](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html)
> * [How "active" should the TC
> be?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129658.html)
> * [How should we handle projects with overlapping feature
> sets?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html)
> * [How can we make contributing to OpenStack
> easier?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129664.html)
>
> I feel we should be working with these sorts of questions and
> conversations more frequently. There are many good ideas and
> observations in the threads.

In the interests of keeping this (very productive) discussion going, perhaps the folks who were elected in October might also like to provide answers to the questions now that the campaign is over, in the copious spare time I know y'all have between now and Summit ;)

cheers, Zane.

From ietingof at redhat.com Fri May 11 07:26:09 2018
From: ietingof at redhat.com (Ilya Etingof)
Date: Fri, 11 May 2018 09:26:09 +0200
Subject: [openstack-dev] [all][monasca] pysnmp autogenerated code
In-Reply-To: References: <143a9d8c-64a1-76d8-a191-70a966ba41cb@redhat.com>
Message-ID: <4058bf05-efe3-5cb7-9dae-07ec1048a4ec@redhat.com>
On 05/10/2018 05:01 PM, Stefano Canepa wrote:
>
> On 10 May 2018 at 10:55, Ilya Etingof > wrote:
>
>     Hi Stefano,
>
>     The best solution would be of course to fix the pysmi code generator [1] to
>     behave. ;-)
>
> This is something that the pysmi author already takes for granted in the
> Release notes. I bet you know this better than me ;-)
>
>     On the other hand, if you don't include the autogenerated code in your
>     package, the code generation would happen just once at run time - the
>     autogenerated module would get cached on the file system and loaded from
>     there ever after.
>
>     Theoretically, not pinning the Python MIB in your package has the advantage
>     of letting pysmi pull a newer ASN.1 MIB and turn it into Python
>     whenever a newer MIB revision becomes available.
>
> Ilya, you're confusing me. Do you mean that, even if I load my MIB and
> all the others it depends on from ASN.1, they are compiled into python byte
> code and cached and blah blah?

The workflow is this:

* pysnmp wants to load a MY-MIB by name (e.g. evaluate the contents of MY-MIB.py, turn it into Python objects and link them up to its in-memory MIB tree)
* pysnmp searches for MY-MIB.py[co] in its search path
* if pysnmp is successful, we are done
* if pysnmp does not find MY-MIB.py[co] and the pysmi package is present, pysnmp calls pysmi with the MY-MIB name on input
* pysmi tries to find MY-MIB (e.g. in ASN.1 form) in its search path (possibly including remote locations), compile it into MY-MIB.py[co] and cache it somewhere within the pysnmp search path
* if pysmi is successful, pysnmp starts over loading MY-MIB.py[co]

Hope this is helpful. ;)

From thierry at openstack.org Fri May 11 07:31:51 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 11 May 2018 09:31:51 +0200
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org>
Message-ID: 
Zane Bitter wrote:
> [...]
> How can we avoid (or get out of) the local maximum trap and ensure that
> OpenStack will meet the needs of all the users we want to serve, not
> just those whose needs are similar to those of the users we already have?

It's a good question, and a topic I raised a couple of years ago. Back then we had (and we arguably still have) a critical mass of medium-sized private clouds, which makes most contributions gravitate to that middle area of the potential usage spectrum. But for the success of OpenStack we need the two extremes to be served: the "giant public cloud" use case (because we all need that giant public cloud to burst infinite capacity to in hybrid scenarios), but also the "lab deployment" use case, because that's a great on-boarding tool. Currently it's still too complex to use OpenStack at those two ends of the use case spectrum. How do we solve that?
We can't rely on natural open collaboration dynamics ("show up and be the change you want to see in the world") -- that one will continue to feed the medium use case. We can continue to wait for proponents of the "small deployment" or the "massive public cloud" to suddenly invest hundreds of FTEs to cover their use case. Or we can be aware of the local maximum trap, go a bit out of our ways to serve both ends of the spectrum, and realize that it puts us in a lot better place. -- Thierry Carrez (ttx) From sc at linux.it Fri May 11 07:51:35 2018 From: sc at linux.it (Stefano Canepa) Date: Fri, 11 May 2018 08:51:35 +0100 Subject: [openstack-dev] [all][monasca] pysnmp autogenerated code In-Reply-To: <4058bf05-efe3-5cb7-9dae-07ec1048a4ec@redhat.com> References: <143a9d8c-64a1-76d8-a191-70a966ba41cb@redhat.com> <4058bf05-efe3-5cb7-9dae-07ec1048a4ec@redhat.com> Message-ID: On 11 May 2018 at 08:26, Ilya Etingof wrote: > On 05/10/2018 05:01 PM, Stefano Canepa wrote: > > > > On 10 May 2018 at 10:55, Ilya Etingof > > wrote: > > > > > > Hi Stefano, > > > > The best solution would be of course to fix pysmi code generator [1] > to > > behave. ;-) > > > > > > ​This is something that pysmi author already gives for granted in the > > Release notes. > > I bet you know this better then me ;-)​ > > > > > > > > On the other hand, if you won't include the autogenerated code into > your > > package, the code generation would happen just once at run time - the > > autogenerated module would get cached on the file system and loaded > from > > there ever after. > > > > Theoretically, not pinning Python MIB in your package has an > advantage > > of letting pysmi pulling newer ASN.1 MIB and turning it into Python > > whenever newer MIB revision becomes available. > > > > > > ​Ilya you're confusing me. Do you mean that, even if I load my MIB and > > all other it depends on from ASN.1, they are compiled into python byte > > code and cached and blah blah? ​ > > The workflow is this: > > * pysnmp wants to load a MY-MIB by name (e.g. evaluate the contents of > MY-MIB.py, turn it into Python objects and link them up to its in-memory > MIB tree) > * pysnmp searches for MY-MIB.py[co] in its search path > * if pysnmp is successful, we are done > * if pysnmp does not find MY-MIB.py[co] and pysmi package is present, > pysnmp calls pysmi with MY-MIB name on input > * pysmi tries to find MY-MIB (e.g. in ASN.1 form) in its search path > (possibly including remote locations), compile it into MY-MIB.py[co] and > cache it somewhere within pysnmp search path > * if pysmi is successful, pysnmp starts over loading MY-MIB.py[co] > > Hope this is helpful. ;) > Super helpful. Copy&pasted this into my notebook for future reference. All the best Stefano ​ -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Fri May 11 09:08:13 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Fri, 11 May 2018 11:08:13 +0200 Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors In-Reply-To: <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com> References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com> Message-ID: So what you need to do first is to make a patch for networking-onos that does ONLY the following replace all occurrences of * neutron.callbacks by neutron_lib.callbacks * neutron.plugins.ml2.driver_api by neutron_lib.plugins.ml2.api Push this patch for review. 
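For illustration, the change is purely mechanical; a minimal sketch of the before/after imports (only the import locations move — the names stay the same in neutron-lib):

    # before: imports that moved out of the neutron tree
    from neutron.callbacks import events
    from neutron.plugins.ml2 import driver_api as api

    # after: the neutron-lib homes for the same code
    from neutron_lib.callbacks import events
    from neutron_lib.plugins.ml2 import api

(The same applies to registry and resources under neutron.callbacks, if the driver imports them.)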
After that tests should succeed again in the check queue - merge it. Then you can put your new great custom code on top of this patch. --- Andreas Scheuring (andreas_s) On 9. May 2018, at 10:04, Andreas Scheuring wrote: neutron.plugins.ml2.driver_api got moved to neutron-lib. You probably need to update the networking-onos code and fix all imports there and push the changes... --- Andreas Scheuring (andreas_s) On 9. May 2018, at 10:00, Sangho Shin > wrote: Hello, I am getting the following unit test error in Zuul test. See below. The error is caused only in the pike version, and in stable/ocata version, I do not have the error. ( If you can give me any clue, it would be very helpful ) BTW, in nosetests, there is no error. However, in tox -e py27 tests, I am getting different errors like below. Actually, it is caused because the tests are using different version of neutron library somehow. Actual neutron is installed in /opt/stack/neutron path, and it has correct python files such as callbacks and driver api, which are complained below. So, I would like to know how to specify the correct neutron location in tox tests. Thank you, Sangho tox -e py27 errors. --------------------------------- ========================= Failures during discovery ========================= --- import errors --- Failed to import test module: networking_onos.tests.unit.extensions.test_driver Traceback (most recent call last): File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "networking_onos/tests/unit/extensions/test_driver.py", line 25, in import networking_onos.extensions.securitygroup as onos_sg_driver File "networking_onos/extensions/securitygroup.py", line 21, in from networking_onos.extensions import callback File "networking_onos/extensions/callback.py", line 15, in from neutron.callbacks import events ImportError: No module named callbacks Failed to import test module: networking_onos.tests.unit.plugins.ml2.test_driver Traceback (most recent call last): File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 456, in _find_test_path module = self._get_module_from_name(name) File "/opt/stack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/unittest2/loader.py", line 395, in _get_module_from_name __import__(name) File "networking_onos/tests/unit/plugins/ml2/test_driver.py", line 24, in from neutron.plugins.ml2 import driver_api as api ImportError: cannot import name driver_api Zuul errors. --------------------------- Traceback (most recent call last): 2018-05-09 05:12:30.077594 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/base.py ", line 1182, in _execute_context 2018-05-09 05:12:30.077653 | ubuntu-xenial | context) 2018-05-09 05:12:30.077964 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/sqlalchemy/engine/default.py ", line 470, in do_execute 2018-05-09 05:12:30.078065 | ubuntu-xenial | cursor.execute(statement, parameters) 2018-05-09 05:12:30.078210 | ubuntu-xenial | InterfaceError: Error binding parameter 0 - probably unsupported type. 
2018-05-09 05:12:30.078282 | ubuntu-xenial | update failed: No details. 2018-05-09 05:12:30.078367 | ubuntu-xenial | Traceback (most recent call last): 2018-05-09 05:12:30.078683 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/resource.py ", line 98, in resource 2018-05-09 05:12:30.078791 | ubuntu-xenial | result = method(request=request, **args) 2018-05-09 05:12:30.079085 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 615, in update 2018-05-09 05:12:30.079202 | ubuntu-xenial | return self._update(request, id, body, **kwargs) 2018-05-09 05:12:30.079480 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 93, in wrapped 2018-05-09 05:12:30.079574 | ubuntu-xenial | setattr(e, '_RETRY_EXCEEDED', True) 2018-05-09 05:12:30.079870 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ 2018-05-09 05:12:30.079941 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.080242 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise 2018-05-09 05:12:30.080350 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.080629 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 89, in wrapped 2018-05-09 05:12:30.080706 | ubuntu-xenial | return f(*args, **kwargs) 2018-05-09 05:12:30.080985 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 150, in wrapper 2018-05-09 05:12:30.081064 | ubuntu-xenial | ectxt.value = e.inner_exc 2018-05-09 05:12:30.081363 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ 2018-05-09 05:12:30.081433 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.081733 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise 2018-05-09 05:12:30.081849 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.082131 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_db/api.py ", line 138, in wrapper 2018-05-09 05:12:30.082208 | ubuntu-xenial | return f(*args, **kwargs) 2018-05-09 05:12:30.082489 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 128, in wrapped 2018-05-09 05:12:30.082620 | ubuntu-xenial | LOG.debug("Retry wrapper got retriable exception: %s", e) 2018-05-09 05:12:30.082931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ 2018-05-09 05:12:30.083006 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.083306 | ubuntu-xenial | File 
"/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise 2018-05-09 05:12:30.083415 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.083696 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/db/api.py ", line 124, in wrapped 2018-05-09 05:12:30.083786 | ubuntu-xenial | return f(*dup_args, **dup_kwargs) 2018-05-09 05:12:30.084081 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/api/v2/base.py ", line 676, in _update 2018-05-09 05:12:30.084161 | ubuntu-xenial | original=orig_object_copy) 2018-05-09 05:12:30.084466 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/registry.py ", line 53, in notify 2018-05-09 05:12:30.084611 | ubuntu-xenial | _get_callback_manager().notify(resource, event, trigger, **kwargs) 2018-05-09 05:12:30.084932 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 105, in _wrapped 2018-05-09 05:12:30.085026 | ubuntu-xenial | raise db_exc.RetryRequest(e) 2018-05-09 05:12:30.085319 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 220, in __exit__ 2018-05-09 05:12:30.085387 | ubuntu-xenial | self.force_reraise() 2018-05-09 05:12:30.085687 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/oslo_utils/excutils.py ", line 196, in force_reraise 2018-05-09 05:12:30.085796 | ubuntu-xenial | six.reraise(self.type_, self.value, self.tb) 2018-05-09 05:12:30.086098 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/db/utils.py ", line 100, in _wrapped 2018-05-09 05:12:30.086192 | ubuntu-xenial | return function(*args, **kwargs) 2018-05-09 05:12:30.086499 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron_lib/callbacks/manager.py ", line 152, in notify 2018-05-09 05:12:30.086613 | ubuntu-xenial | raise exceptions.CallbackFailure(errors=errors) 2018-05-09 05:12:30.094917 | ubuntu-xenial | CallbackFailure: Callback neutron.notifiers.nova.Notifier._send_nova_notification-115311 failed with "(sqlite3.InterfaceError) Error binding parameter 0 - probably unsupported type. 
[SQL: u'SELECT ports.project_id AS ports_project_id, ports.id AS ports_id, ports.name AS ports_name, ports.network_id AS ports_network_id, ports.mac_address AS ports_mac_address, ports.admin_state_up AS ports_admin_state_up, ports.status AS ports_status, ports.device_id AS ports_device_id, ports.device_owner AS ports_device_owner, ports.ip_allocation AS ports_ip_allocation, ports.standard_attr_id AS ports_standard_attr_id, standardattributes_1.id AS standardattributes_1_id, standardattributes_1.resource_type AS standardattributes_1_resource_type, standardattributes_1.description AS standardattributes_1_description, standardattributes_1.revision_number AS standardattributes_1_revision_number, standardattributes_1.created_at AS standardattributes_1_created_at, standardattributes_1.updated_at AS standardattributes_1_updated_at, securitygroupportbindings_1.port_id AS securitygroupportbindings_1_port_id, securitygroupportbindings_1.security_group_id AS securitygroupportbindings_1_security_group_id, portbindingports_1.port_id AS portbindingports_1_port_id, portbindingports_1.host AS portbindingports_1_host, portdataplanestatuses_1.port_id AS portdataplanestatuses_1_port_id, portdataplanestatuses_1.data_plane_status AS portdataplanestatuses_1_data_plane_status, portsecuritybindings_1.port_id AS portsecuritybindings_1_port_id, portsecuritybindings_1.port_security_enabled AS portsecuritybindings_1_port_security_enabled, ml2_port_bindings_1.port_id AS ml2_port_bindings_1_port_id, ml2_port_bindings_1.host AS ml2_port_bindings_1_host, ml2_port_bindings_1.vnic_type AS ml2_port_bindings_1_vnic_type, ml2_port_bindings_1.profile AS ml2_port_bindings_1_profile, ml2_port_bindings_1.vif_type AS ml2_port_bindings_1_vif_type, ml2_port_bindings_1.vif_details AS ml2_port_bindings_1_vif_details, ml2_port_bindings_1.status AS ml2_port_bindings_1_status, portdnses_1.port_id AS portdnses_1_port_id, portdnses_1.current_dns_name AS portdnses_1_current_dns_name, portdnses_1.current_dns_domain AS portdnses_1_current_dns_domain, portdnses_1.previous_dns_name AS portdnses_1_previous_dns_name, portdnses_1.previous_dns_domain AS portdnses_1_previous_dns_domain, portdnses_1.dns_name AS portdnses_1_dns_name, portdnses_1.dns_domain AS portdnses_1_dns_domain, qos_port_policy_bindings_1.policy_id AS qos_port_policy_bindings_1_policy_id, qos_port_policy_bindings_1.port_id AS qos_port_policy_bindings_1_port_id, standardattributes_2.id AS standardattributes_2_id, standardattributes_2.resource_type AS standardattributes_2_resource_type, standardattributes_2.description AS standardattributes_2_description, standardattributes_2.revision_number AS standardattributes_2_revision_number, standardattributes_2.created_at AS standardattributes_2_created_at, standardattributes_2.updated_at AS standardattributes_2_updated_at, trunks_1.project_id AS trunks_1_project_id, trunks_1.id AS trunks_1_id, trunks_1.admin_state_up AS trunks_1_admin_state_up, trunks_1.name AS trunks_1_name, trunks_1.port_id AS trunks_1_port_id, trunks_1.status AS trunks_1_status, trunks_1.standard_attr_id AS trunks_1_standard_attr_id, subports_1.port_id AS subports_1_port_id, subports_1.trunk_id AS subports_1_trunk_id, subports_1.segmentation_type AS subports_1_segmentation_type, subports_1.segmentation_id AS subports_1_segmentation_id \nFROM ports LEFT OUTER JOIN standardattributes AS standardattributes_1 ON standardattributes_1.id = ports.standard_attr_id LEFT OUTER JOIN securitygroupportbindings AS securitygroupportbindings_1 ON ports.id = 
securitygroupportbindings_1.port_id LEFT OUTER JOIN portbindingports AS portbindingports_1 ON ports.id = portbindingports_1.port_id LEFT OUTER JOIN portdataplanestatuses AS portdataplanestatuses_1 ON ports.id = portdataplanestatuses_1.port_id LEFT OUTER JOIN portsecuritybindings AS portsecuritybindings_1 ON ports.id = portsecuritybindings_1.port_id LEFT OUTER JOIN ml2_port_bindings AS ml2_port_bindings_1 ON ports.id = ml2_port_bindings_1.port_id LEFT OUTER JOIN portdnses AS portdnses_1 ON ports.id = portdnses_1.port_id LEFT OUTER JOIN qos_port_policy_bindings AS qos_port_policy_bindings_1 ON ports.id = qos_port_policy_bindings_1.port_id LEFT OUTER JOIN trunks AS trunks_1 ON ports.id = trunks_1.port_id LEFT OUTER JOIN standardattributes AS standardattributes_2 ON standardattributes_2.id = trunks_1.standard_attr_id LEFT OUTER JOIN subports AS subports_1 ON ports.id = subports_1.port_id \nWHERE ports.id = ?'] [parameters: (,)]" 2018-05-09 05:12:30.097463 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.l3.test_driver.ONOSL3PluginTestCase.test_update_floating_ip [1.435310s] ... FAILED 2018-05-09 05:12:30.097519 | ubuntu-xenial | 2018-05-09 05:12:30.097608 | ubuntu-xenial | Captured traceback: 2018-05-09 05:12:30.097702 | ubuntu-xenial | ~~~~~~~~~~~~~~~~~~~ 2018-05-09 05:12:30.097838 | ubuntu-xenial | Traceback (most recent call last): 2018-05-09 05:12:30.098230 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/neutron/tests/base.py ", line 118, in func 2018-05-09 05:12:30.098369 | ubuntu-xenial | return f(self, *args, **kwargs) 2018-05-09 05:12:30.098642 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 166, in test_update_floating_ip 2018-05-09 05:12:30.098858 | ubuntu-xenial | resp = self._test_send_msg(floating_ip_request, 'put', url) 2018-05-09 05:12:30.099090 | ubuntu-xenial | File "networking_onos/tests/unit/plugins/l3/test_driver.py", line 96, in _test_send_msg 2018-05-09 05:12:30.099261 | ubuntu-xenial | resp = self.api.put(url, self.serialize(dict_info)) 2018-05-09 05:12:30.099597 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 395, in put 2018-05-09 05:12:30.099712 | ubuntu-xenial | content_type=content_type, 2018-05-09 05:12:30.100056 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 747, in _gen_request 2018-05-09 05:12:30.100164 | ubuntu-xenial | expect_errors=expect_errors) 2018-05-09 05:12:30.100486 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 643, in do_request 2018-05-09 05:12:30.100603 | ubuntu-xenial | self._check_status(status, res) 2018-05-09 05:12:30.100931 | ubuntu-xenial | File "/home/zuul/src/git.openstack.org/openstack/networking-onos/.tox/py27/local/lib/python2.7/site-packages/webtest/app.py ", line 675, in _check_status 2018-05-09 05:12:30.101002 | ubuntu-xenial | res) 2018-05-09 05:12:30.101354 | ubuntu-xenial | webtest.app.AppError: Bad response: 500 Internal Server Error (not 200 OK or 3xx redirect for http://localhost/floatingips/7464aaf0-27ea-448a-97df-51732f9e0e25.json ) 2018-05-09 05:12:30.101685 | ubuntu-xenial | '{"NeutronError": {"message": "Request Failed: internal server error while processing your request.", "detail": "", "type": 
"HTTPInternalServerError"}}' 2018-05-09 05:12:30.101735 | ubuntu-xenial | 2018-05-09 05:12:30.102007 | ubuntu-xenial | {7} networking_onos.tests.unit.plugins.ml2.test_driver.ONOSMechanismDriverTestCase.test_create_port_postcommit [0.004284s] ... ok __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Fri May 11 09:35:34 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Fri, 11 May 2018 10:35:34 +0100 Subject: [openstack-dev] [kolla] Building Kolla containers with 3rd party vendor drivers In-Reply-To: <48550F8B-C186-4C3F-8803-C792B15BB754@cisco.com> References: <48550F8B-C186-4C3F-8803-C792B15BB754@cisco.com> Message-ID: <49543b70-78ef-2e99-9435-1be2b34e01e3@oracle.com> Hi Sandhya, Thanks for starting this thread. I've moved it to the mailing list so the discussion can be available to anyone else who is interested, I hope you don't mind. If your requirement is to have third party plugins (such as Cisco) that are not available on tarballs.openstack.org, available in Kolla, then this is already possible. Using the Cisco case as an example, you would simply need to submit the following patch to https://github.com/openstack/kolla/blob/master/kolla/common/config.py """ 'neutron-server-plugin-networking-cisco': { 'type': 'git', 'location': ('https://github.com/openstack/networking-cisco')}, """ This will then include that plugin as part of the future neutron-server builds. If the requirement is to have Kolla publish a neutron-server container with *only* the Cisco plugin, then this is where it gets a little more tricky. Sure, we can go the route that's proposed in your patch, but we end up then maintaining a massive number of neutron-server containers, one per plugin. It also does not address then the issue of what people want to do when they want a combination or mix of plugins together. So right now I feel Kolla takes a middle ground, where we publish a neutron-server container with a variety of common plugins. If operators have specific requirements, they should create their own config file and build their own images, which we expect any serious production setup to be doing anyway. -Paul On 10/05/18 18:12, Sandhya Dasu (sadasu) wrote: > Yes, I think there is some misunderstanding on what I am trying to accomplish here. > > I am utilizing existing Kolla constructs to prove that they work for 3rd party out of tree vendor drivers too. > At this point, anything that a 3rd party vendor driver does (the way they build their containers, where they publish it and how they generate config) is completely out of scope of Kolla. > > I want to use the spec as a place to articulate and discuss best practices and figure out what part of supporting 3rd party vendor drivers can stay within the Kolla tree and what should be out. > I have witnessed many discussions on this topic but they only take away I get is “there are ways to do it but it can’t be part of Kolla”. > > Using the existing kolla constructs of template-override, plugin-archive and config-dir, let us say the 3rd party vendor builds a container. > OpenStack TC does not want these containers to be part of tarballs.openstack.org. Kolla publishes its containers to DockerHub under the Kolla project. 
> If these 3rd party vendor drivers publish to Dockerhub they will have to publish under a different project. So, an OpenStack installation that needs these drivers will have to pull images from 2 or more Dokerhub projects?! > > Or do you prefer if the OpenStack operator build their own images using the out-of-tree Dockerfile for that vendor? > > Again, should the config changes to support these drivers be part of the kolla-ansible repo or should they be out-of-tree? > > It is hard to have this type of discussion on IRC so I started this email thread. > > Thanks, > Sandhya > > On 5/10/18, 5:59 AM, "Paul Bourke (pbourke) (Code Review)" wrote: > > Paul Bourke (pbourke) has posted comments on this change. ( https://review.openstack.org/567278 ) > > Change subject: Building Kolla containers with 3rd party vendor drivers > ...................................................................... > > > Patch Set 2: Code-Review-1 > > Hi Sandhya, after reading the spec most of my thoughts echo Eduardo's. I'm wondering if there's some misunderstanding on how the current plugin functionality works? Feels free to ping me on irc I'd be happy to discuss further - maybe there's still some element of what's there that's not working for your use case. > > -- > To view, visit https://review.openstack.org/567278 > To unsubscribe, visit https://review.openstack.org/settings > > Gerrit-MessageType: comment > Gerrit-Change-Id: I681d6a7b38b6cafe7ebe88a1a1f2d53943e1aab2 > Gerrit-PatchSet: 2 > Gerrit-Project: openstack/kolla > Gerrit-Branch: master > Gerrit-Owner: Sandhya Dasu > Gerrit-Reviewer: Duong Ha-Quang > Gerrit-Reviewer: Eduardo Gonzalez > Gerrit-Reviewer: Paul Bourke (pbourke) > Gerrit-Reviewer: Zuul > Gerrit-HasComments: No > > From neil at tigera.io Fri May 11 10:19:23 2018 From: neil at tigera.io (Neil Jerram) Date: Fri, 11 May 2018 11:19:23 +0100 Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors In-Reply-To: References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com> Message-ID: On Fri, May 11, 2018 at 10:09 AM Andreas Scheuring < scheuran at linux.vnet.ibm.com> wrote: > So what you need to do first is to make a patch for networking-onos that > does ONLY the following > > > replace all occurrences of > > * neutron.callbacks by neutron_lib.callbacks > * neutron.plugins.ml2.driver_api by neutron_lib.plugins.ml2.api > FYI here's what networking-calico has for the second of these points: try: from neutron_lib.plugins.ml2 import api except ImportError: # Neutron code prior to a2c36d7e (10th November 2017). from neutron.plugins.ml2 import driver_api as api ( http://git.openstack.org/cgit/openstack/networking-calico/tree/networking_calico/plugins/ml2/drivers/calico/mech_calico.py#n49 ) However, we do it like this because we want the master networking-calico code to work with many past Neutron releases, and I understand that that is not a common approach; so for networking-onos you may only want the "from neutron_lib.plugins.ml2 import api" line. Regards - Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri May 11 11:47:38 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 11 May 2018 12:47:38 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-19 Message-ID: HTML: https://anticdent.org/placement-update-18-19.html This is placement update 18-19. 18 was skipped because I was on holiday. 
With this issue I'm going to start cross-posting to my blog to increase exposure, double up the archiving, and get the content showing up on the OpenStack [Planet](http://planet.openstack.org/). One upshot of this change is that the content will now be formatted more fully as Markdown. I'll be travelling next week so there won't be one of these for weeks 20 or 21, unless someone else feels like it. # Most Important We're continuing to hope that granular and nested resource providers will be fully merged by Summit (a bit more than a week from now). Not clear if this will happen as last I checked it seemed we have multiple threads of changes in progress, many of which will merge conflict with one another. But then again, I may be out of date, it's been difficult to find all those threads while trying to catch up this week. If you're going to be at summit there are (at least) two placement-related forum sessions: * * Please add to those etherpads if you have thoughts. Also a summit, Ed and Eric will be presenting [Placement, Present and Future, in Nova and Beyond](https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20813/placement-present-and-future-in-nova-and-beyond). # What's Changed Granular requests can now be made to GET /allocation_candidates (meaning resourcesN and requiredN are now accepted). A bug with the safe_connect handler masking real problems has been fixed. The spec for [Network Bandwidth Resource Provider](https://review.openstack.org/#/c/502306/) has finally merged after a lot of thinking and discussion. The spec for [Return resources of entire trees in Placement](https://review.openstack.org/#/c/559466/) has merged. This allows the inclusion of resource providers which are not providing inventory, but are part of the current tree, in the provider summaries of a /allocation_candidates response. There are some new specs (see the end of the specs list, below) which extend required traits handling to be able to say "I need at least one of these traits". # Bugs * Placement related [bugs not yet in progress](https://goo.gl/TgiPXb): 16, -1 on two weeks ago * [In progress placement bugs](https://goo.gl/vzGGDQ) 10, +2 two weeks ago # Specs Total two weeks ago: 11. Now: 13 * VMware: place instances on resource pool (using update_provider_tree) * Proposes NUMA topology with RPs * Account for host agg allocation ratio in placement * Support default allocation ratios * Spec on preemptible servers * Proposes Multiple GPU types * Standardize CPU resource tracking * Propose counting quota usage from placement * Add history behind nullable project_id and user_id * update add-consumer-generation to focus on API * Placement: any traits in allocation_candidate query * Placement: support mixing required traits with any traits * [WIP] Support Placement in Cinder # Main Themes ## Nested providers in allocation candidates Unfortunately I'm really unclear on what the current state of this is. If someone else can give a quick overview that would be excellent. There's code in progress at both of the following topics, some of it is old and in merge conflict: * * ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. * This is still in progress but is still on its attention break. ## Consumer Generations This allows multiple agents to "safely" update allocations for a single consumer. 
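A rough sketch of the shape this takes on the wire (field names follow the in-progress spec, so treat them as provisional): allocation writes echo back the generation the caller last read, and a mismatch is rejected:

    PUT /allocations/{consumer_uuid}
    {
        "allocations": {
            "{resource_provider_uuid}": {
                "resources": {"VCPU": 2, "MEMORY_MB": 1024}
            }
        },
        "project_id": "{project_id}",
        "user_id": "{user_id}",
        "consumer_generation": 5
    }

On a conflict the server returns 409 and the caller must re-read the allocations and retry with the new generation.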
The code is in progress: * There's also a related change for ensuring [consumer records](https://review.openstack.org/#/c/567678/). ## Granular Ways and means of addressing granular requests when dealing with nested resource providers. Granular in this sense is grouping resource classes and traits together in their own lumps as required. Topic is: * The sole active change in that now is work in progress to get it working from the Nova side. # Extraction I've created patches that adjust devstack and zuul config to use the separate placement database connection. * [devstack](https://review.openstack.org/#/c/564180/) * [zuul](https://review.openstack.org/#/c/564067/) * [db connection](https://review.openstack.org/#/c/362766/) All of these things could merge without requiring any action by users. Instead they allow people to use different connections, but don't require it. Jay has made a first pass at an [os-resource-classes](https://github.com/jaypipes/os-resource-classes/) which I thought was potentially more heavyweight than required, but other people should have a look too. As mentioned above there will be a [forum session](https://etherpad.openstack.org/p/YVR-placement-extraction) about extraction. In the meantime, some of the low hanging fruit on extraction is duplicating and extracting to their own files the various fixtures and base test classes that are required by both the functional and unit tests. And making them not import from the nova hierarchy. # Other 17 entries two weeks ago. 19 now. Some of the older items in this list are not getting much attention. That's a shame. The list is ordered the way it is on purpose. * Purge comp_node and res_prvdr records during deletion of cells/hosts * A huge pile of improvements to osc-placement * General policy sample file for placement * Get resource provider by uuid or name (osc-placement) * placement: Make API history doc more consistent * Handle agg generation conflict in report client * Add unit test for non-placement resize * cover migration cases with functional tests * Bug fixes for sharing resource providers * return resoruces of entire trees in placement * sharing disk in libvirt * Move refresh time from report client to prov tree * PCPU resource class * Add nova-manage placement heal_allocations CLI * Add random sleep between retry calls to placement * rework how we pass candidate request information * add root parent NULL online migration * add resource_requests field to RequestSpec * member_of related cleanups # End Please followup with the many things I've missed. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From juliaashleykreger at gmail.com Fri May 11 11:50:10 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 11 May 2018 07:50:10 -0400 Subject: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team? Message-ID: Greetings folks, Is there any objection if we re-add Jim to the ironic-stable-maint team? He was a member prior to his brief departure and I think it would be good to have another set of hands that can approve the changes as three doesn't seem like quite enough when everyone is busy. If there are no objections, I'll re-add him next week. -Julia From dtantsur at redhat.com Fri May 11 12:20:10 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 11 May 2018 14:20:10 +0200 Subject: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team? 
In-Reply-To: References: Message-ID: <9ff7365d-03b7-2530-f315-8f6478bcf264@redhat.com> Hi, Funny, I was just about to ask you about it :) Jim is a former PTL, so I cannot see why we wouldn't add him to the stable team. On 05/11/2018 01:50 PM, Julia Kreger wrote: > Greetings folks, > > Is there any objection if we re-add Jim to the ironic-stable-maint > team? He was a member prior to his brief departure and I think it > would be good to have another set of hands that can approve the > changes as three doesn't seem like quite enough when everyone is busy. > > If there are no objections, I'll re-add him next week. I don't remember if we actually can add people to these teams or it has to be done by the main stable team. > > -Julia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From juliaashleykreger at gmail.com Fri May 11 12:37:43 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Fri, 11 May 2018 08:37:43 -0400 Subject: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team? In-Reply-To: <9ff7365d-03b7-2530-f315-8f6478bcf264@redhat.com> References: <9ff7365d-03b7-2530-f315-8f6478bcf264@redhat.com> Message-ID: On Fri, May 11, 2018 at 8:20 AM, Dmitry Tantsur wrote: > Hi, [trim] >> If there are no objections, I'll re-add him next week. > > > I don't remember if we actually can add people to these teams or it has to > be done by the main stable team. > I'm fairly sure I'm the person who deleted him from the group in the first place :( As such, I think I has the magical powers... maybe ;) From jaypipes at gmail.com Fri May 11 13:10:17 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 11 May 2018 09:10:17 -0400 Subject: [openstack-dev] [placement] low hanging bug for a new contributor Message-ID: <4425e94b-4579-db4a-5ffc-22db3a221855@gmail.com> Hi Stackers, Here's a small bug that would be ideal for a new contributor to pick up: https://bugs.launchpad.net/nova/+bug/1770636 Come find us on #openstack-placement on Freenode if you'd like to pick it up and run with it. Best, -jay From mriedemos at gmail.com Fri May 11 13:45:39 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 11 May 2018 08:45:39 -0500 Subject: [openstack-dev] Should we add a tempest-slow job? Message-ID: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> The tempest-full job used to run API and scenario tests concurrently, and if you go back far enough I think it also ran slow tests. Sometime in the last year or so, the full job was changed to run the scenario tests in serial and exclude the slow tests altogether. So the API tests run concurrently first, and then the scenario tests run in serial. During that change, some other tests were identified as 'slow' and marked as such, meaning they don't get run in the normal tempest-full job. There are some valuable scenario tests marked as slow, however, like the only encrypted volume testing we have in tempest is marked slow so it doesn't get run on every change for at least nova. There is only one job that can be run against nova changes which runs the slow tests but it's in the experimental queue so people forget to run it. As a test, I've proposed a nova-slow job [1] which only runs the slow tests and only the compute API and scenario tests. 
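For reference, the shape of such a job definition is roughly the following (a sketch — the parent, variables and regex here are illustrative, not a copy of the actual patch):

    - job:
        name: nova-slow
        parent: devstack-tempest
        description: Run only tests tagged slow, limited to compute API and scenario tests.
        vars:
          tox_envlist: slow-serial
          tempest_test_regex: ^tempest\.(api\.compute|scenario)

In other words, a normal devstack/tempest job that just narrows the test selection to the slow-tagged subset.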
Since there are currently no compute API tests marked as slow, it's really just running slow scenario tests. Results show it runs 37 tests in about 37 minutes [2]. The overall job runtime was 1 hour and 9 minutes, which is on average less than the tempest-full job. The nova-slow job is also running scenarios that nova patches don't actually care about, like the neutron IPv6 scenario tests.

My question is, should we make this a generic tempest-slow job which can be run either in the integrated-gate or at least in nova/neutron/cinder consistently (I'm not sure if there are slow tests for just keystone or glance)? I don't know if the other projects already have something like this that they gate on. If so, a nova-specific job for nova changes is fine for me.

[1] https://review.openstack.org/#/c/567697/
[2] http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138

--

Thanks,

Matt

From opensrloo at gmail.com Fri May 11 14:31:45 2018
From: opensrloo at gmail.com (Ruby Loo)
Date: Fri, 11 May 2018 10:31:45 -0400
Subject: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team?
In-Reply-To: References: Message-ID: 
On Fri, May 11, 2018 at 7:50 AM, Julia Kreger wrote:

> Greetings folks,
>
> Is there any objection if we re-add Jim to the ironic-stable-maint
> team? He was a member prior to his brief departure and I think it
> would be good to have another set of hands that can approve the
> changes as three doesn't seem like quite enough when everyone is busy.
>
Glad you brought it up cuz I wanted to re-add him to this, when we re-added him back as an ironic core :) Thanks for making it happen,
--ruby

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mtreinish at kortar.org Fri May 11 14:32:08 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Fri, 11 May 2018 10:32:08 -0400
Subject: [openstack-dev] Should we add a tempest-slow job?
In-Reply-To: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com>
References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com>
Message-ID: <20180511143208.GA6859@zeong>
On Fri, May 11, 2018 at 08:45:39AM -0500, Matt Riedemann wrote:
> The tempest-full job used to run API and scenario tests concurrently, and if
> you go back far enough I think it also ran slow tests.

Well, it's a bit more subtle than that. Skipping slow tests was added right before we introduced parallel execution to tempest ~5 years ago:

https://github.com/openstack/tempest/commit/68a8060b24abd6b6bf99c4f9296bf418a8349a2d

Note those are in separate testr jobs which we migrated to the full job a bit later in that cycle. The full job back then ran using nose and ran things serially. But back then we didn't actually have any tests tagged as slow. It was more of a future-proofing thing because we were planning to add a bunch of really slow heat tests we didn't want to run on every commit to each project. The slow tags were first added for heat tests which came later in the havana cycle.

>
> Sometime in the last year or so, the full job was changed to run the
> scenario tests in serial and exclude the slow tests altogether. So the API
> tests run concurrently first, and then the scenario tests run in serial.
> During that change, some other tests were identified as 'slow' and marked as
> such, meaning they don't get run in the normal tempest-full job.
It was changed in: https://github.com/openstack/tempest/commit/49505df20f3dc578506e479c2afa4a4f02e464bf > > There are some valuable scenario tests marked as slow, however, like the > only encrypted volume testing we have in tempest is marked slow so it > doesn't get run on every change for at least nova. > > There is only one job that can be run against nova changes which runs the > slow tests but it's in the experimental queue so people forget to run it. > > As a test, I've proposed a nova-slow job [1] which only runs the slow tests > and only the compute API and scenario tests. Since there currently no > compute API tests marked as slow, it's really just running slow scenario > tests. Results show it runs 37 tests in about 37 minutes [2]. The overall > job runtime was 1 hour and 9 minutes, which is on average less than the > tempest-full job. The nova-slow job is also running scenarios that nova > patches don't actually care about, like the neutron IPv6 scenario tests. > > My question is, should we make this a generic tempest-slow job which can be > run either in the integrated-gate or at least in nova/neutron/cinder > consistently (I'm not sure if there are slow tests for just keystone or > glance)? I don't know if the other projects already have something like this > that they gate on. If so, a nova-specific job for nova changes is fine for > me. So there used to be an experimental queue tempest-all job which ran everything in tempest, including the slow tests. I can't find it in the .zuul.yaml in the tempest repo, so my assumption is that got dropped during the v3 migration. I'm fine with adding a general purpose job for just running the slow tests to the integrated gate if we think there is enough value from that. It's mostly just a question of weighing the potential value from the increased coverage vs the increased resource consumption for adding yet another job to the integrated gate. Personally, I'm fine with that tradeoff. -Matt Treinish > > [1] https://review.openstack.org/#/c/567697/ > [2] http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138 > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From lbragstad at gmail.com Fri May 11 14:47:49 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 11 May 2018 09:47:49 -0500 Subject: [openstack-dev] [edge][keystone][forum]: Keystone edge brainstorming etherpad In-Reply-To: References: Message-ID: <157a5d3a-66ff-a838-fcc2-283c1fc92583@gmail.com> On 05/10/2018 06:30 AM, Csatari, Gergely (Nokia - HU/Budapest) wrote: > > Hi, > >   > > I’ve added some initial text to the Etherpad [1 > ] of > the Possible edge architectures for Keystone Forum session [2 > ]. > > Awesome, I added some of my initial thoughts, too. A very similar thread was brought up in Syndey, and more recently in Dublin, so a lot of those discussions are still fresh in my mind. >   > > Please add your comments and also indicate your willingness to > participate. > The keystone project update is scheduled for Monday [0], which gives us a good opportunity to advertise other important keystone-related sessions. I've added your forum session to it. Thanks for proposing this. I'm looking forward to it. 
[0] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21584/keystone-project-update >   > > Thanks, > > Gerg0 > >   > > [1]: https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming > > [2]: > https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone > >   > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From gergely.csatari at nokia.com Fri May 11 15:10:12 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Fri, 11 May 2018 15:10:12 +0000 Subject: [openstack-dev] [edge][keystone][forum]: Keystone edge brainstorming etherpad In-Reply-To: <157a5d3a-66ff-a838-fcc2-283c1fc92583@gmail.com> References: <157a5d3a-66ff-a838-fcc2-283c1fc92583@gmail.com> Message-ID: Hi, Thanks for your comments I've added some reactions. Also thanks for the advertisement. Br, Gerg0 From: Lance Bragstad [mailto:lbragstad at gmail.com] Sent: Friday, May 11, 2018 4:48 PM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [edge][keystone][forum]: Keystone edge brainstorming etherpad On 05/10/2018 06:30 AM, Csatari, Gergely (Nokia - HU/Budapest) wrote: Hi, I've added some initial text to the Etherpad [1] of the Possible edge architectures for Keystone Forum session [2]. Awesome, I added some of my initial thoughts, too. A very similar thread was brought up in Syndey, and more recently in Dublin, so a lot of those discussions are still fresh in my mind. Please add your comments and also indicate your willingness to participate. The keystone project update is scheduled for Monday [0], which gives us a good opportunity to advertise other important keystone-related sessions. I've added your forum session to it. Thanks for proposing this. I'm looking forward to it. [0] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21584/keystone-project-update Thanks, Gerg0 [1]: https://etherpad.openstack.org/p/YVR-edge-keystone-brainstorming [2]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Fri May 11 15:14:33 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Sat, 12 May 2018 00:14:33 +0900 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> Message-ID: Hi zigo and horizon plugin maintainers, Horizon itself already supports Django 2.0 and horizon unit test covers Django 2.0 with Python 3.5. A question to all is whether we change the upper bound of Django from <2.0 to <2.1. 
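For concreteness, that change would look roughly like this in the requirements lists (a sketch — the exact lower bounds and environment markers would follow whatever global-requirements already uses):

    # today
    Django>=1.11,<2.0
    # proposed
    Django>=1.11,<2.0;python_version=='2.7'
    Django>=1.11,<2.1;python_version>='3.5'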
My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. (Note that Django 1.11 will continue to be used for the python 2.7 environment.)

There are several points we should consider:
- If we change it in global-requirements.txt, it means Django 2.0 will be used for the python 3.5 environment.
- Quite a few horizon plugins still do not support Django 2.0, so bumping the upper bound to <2.1 will break their py35 tests.
- From my experience of Django 2.0 support in some plugins, the required changes are relatively simple, like [1] (and the sketch above).

I created an etherpad page to track Django 2.0 support in horizon plugins.
https://etherpad.openstack.org/p/django20-support

I proposed Django 2.0 support patches to several projects which I think are major.
# Do not blame me if I don't cover your project :)

Thoughts?

Thanks,
Akihiro

[1] https://review.openstack.org/#/c/566476/

On Tue, May 8, 2018 at 17:45, Thomas Goirand wrote:
> Hi,
>
> It has been decided that, in Debian, we'll switch to Django 2.0 after
> Buster is released. Buster is to be frozen next February. This
> means that we have roughly one more year before Django 1.x goes away.
>
> Hopefully, Horizon will be ready for it, right?
>
> Hoping this helps,
> Cheers,
>
> Thomas Goirand (zigo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaypipes at gmail.com Fri May 11 15:46:05 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 11 May 2018 11:46:05 -0400
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To:
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com>
Message-ID:

On 05/10/2018 08:12 PM, Zane Bitter wrote:
> On 10/05/18 16:45, Matt Riedemann wrote:
>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>> How can we avoid (or get out of) the local maximum trap and ensure
>>> that OpenStack will meet the needs of all the users we want to serve,
>>> not just those whose needs are similar to those of the users we
>>> already have?
>>
>> The phrase "jack of all trades, master of none" comes to mind here.
>
> Stipulating the constraint that you can't please everybody, how do you
> ensure that you're meeting the needs of the users who are most important
> to the long-term sustainability of the project, and not just the ones
> who were easiest to bootstrap?

Who gets to decide who the users are "that are most important to the long-term sustainability of the project"?

Assuming there is a single definition of what "the project" actually is...
Best,
-jay

From zbitter at redhat.com Fri May 11 16:21:29 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Fri, 11 May 2018 12:21:29 -0400
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To:
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com>
Message-ID: <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com>

On 11/05/18 11:46, Jay Pipes wrote:
> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>> On 10/05/18 16:45, Matt Riedemann wrote:
>>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>>> How can we avoid (or get out of) the local maximum trap and ensure
>>>> that OpenStack will meet the needs of all the users we want to serve,
>>>> not just those whose needs are similar to those of the users we
>>>> already have?
>>>
>>> The phrase "jack of all trades, master of none" comes to mind here.
>>
>> Stipulating the constraint that you can't please everybody, how do you
>> ensure that you're meeting the needs of the users who are most important
>> to the long-term sustainability of the project, and not just the ones
>> who were easiest to bootstrap?
>
> Who gets to decide who the users are "that are most important to the
> long-term sustainability of the project"?

The thing I'm hoping to convince people of here is that the question is interesting independently of how you define that.

- ZB

From jaypipes at gmail.com Fri May 11 16:31:07 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 11 May 2018 12:31:07 -0400
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com>
Message-ID:

On 05/11/2018 12:21 PM, Zane Bitter wrote:
> On 11/05/18 11:46, Jay Pipes wrote:
>> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>>> On 10/05/18 16:45, Matt Riedemann wrote:
>>>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>>>> How can we avoid (or get out of) the local maximum trap and ensure
>>>>> that OpenStack will meet the needs of all the users we want to
>>>>> serve, not just those whose needs are similar to those of the users
>>>>> we already have?
>>>>
>>>> The phrase "jack of all trades, master of none" comes to mind here.
>>>
>>> Stipulating the constraint that you can't please everybody, how do
>>> you ensure that you're meeting the needs of the users who are most
>>> important to the long-term sustainability of the project, and not
>>> just the ones who were easiest to bootstrap?
>>
>> Who gets to decide who the users are "that are most important to the
>> long-term sustainability of the project"?
>
> The thing I'm hoping to convince people of here is that the question is
> interesting independently of how you define that.

Agreed. The question is interesting regardless, but how seriously people take the answers to the question will depend on how much they agree with the people that decide who the "important users" are.
Best,
-jay

From prometheanfire at gentoo.org Fri May 11 16:37:20 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Fri, 11 May 2018 11:37:20 -0500
Subject: [openstack-dev] [requirements][glare][mogan][solum][compute-hyperv][kingbird][searchlight][swauth][networking-powervm][rpm-packaging][os-win] uncapping eventlet
Message-ID: <20180511163720.65pubhbubdununex@gentoo.org>

Please review your particular uncapping patch (it looks like all but rpm-packaging are passing the gate). We'd like to move on to a newer eventlet for rocky.

https://review.openstack.org/#/q/topic:uncap-eventlet+status:open

--
Matthew Thode (prometheanfire)

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From sadasu at cisco.com Fri May 11 17:02:44 2018
From: sadasu at cisco.com (Sandhya Dasu (sadasu))
Date: Fri, 11 May 2018 17:02:44 +0000
Subject: [openstack-dev] [kolla] Building Kolla containers with 3rd party vendor drivers
In-Reply-To: <49543b70-78ef-2e99-9435-1be2b34e01e3@oracle.com>
References: <48550F8B-C186-4C3F-8803-C792B15BB754@cisco.com> <49543b70-78ef-2e99-9435-1be2b34e01e3@oracle.com>
Message-ID: <59EA437A-08BA-484F-9198-377EAD3A14EE@cisco.com>

Hi Paul,

I am happy to use the changes you proposed to https://github.com/openstack/kolla/blob/master/kolla/common/config.py. I was under the impression that this was disallowed for drivers that weren't considered "reference drivers". If that is no longer the case, I am happy to go this route and abandon the approach I took in my diffs in: https://review.openstack.org/#/c/552119/.

I agree with the reasoning that Kolla cannot possibly maintain a large number of neutron-server containers, one per plugin.

To support operators that want to build their own images, I was hoping that we could come up with a mechanism by which the 3rd party driver owners provide the code (template-override.j2 or Dockerfile.j2, as the case may be) to build their containers. This code can definitely live out-of-tree with the drivers themselves. Optionally, we could have them reside in-tree in Kolla in a separate directory, say "additional drivers".

Kolla will not be responsible for building a container per driver or for building a huge (neutron-server) container containing all interested drivers. Operators that need one or more of these "additional drivers" will be provided with documentation on how the code in the "additional drivers" path can be used to build their own containers. This documentation will also detail how to combine more than one 3rd party driver into their own container.

I would like the community's input on what approach best aligns with Kolla's and the larger OpenStack community's goals.

Thanks,
Sandhya

On 5/11/18, 5:35 AM, "Paul Bourke" wrote:

Hi Sandhya,

Thanks for starting this thread. I've moved it to the mailing list so the discussion can be available to anyone else who is interested, I hope you don't mind.

If your requirement is to make third party plugins (such as Cisco) that are not available on tarballs.openstack.org available in Kolla, then this is already possible. Using the Cisco case as an example, you would simply need to submit the following patch to https://github.com/openstack/kolla/blob/master/kolla/common/config.py

"""
'neutron-server-plugin-networking-cisco': {
    'type': 'git',
    'location': ('https://github.com/openstack/networking-cisco')},
"""

This will then include that plugin as part of the future neutron-server builds.
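Slightly expanded, that entry sits alongside the other plugin entries in the SOURCES dict in that file. A rough sketch for context (illustrative only; the 'reference' key is an assumption here, used the way other git-type sources pin a branch or tag):

```python
# kolla/common/config.py (illustrative excerpt, not a complete diff)
SOURCES = {
    # ... existing image/source entries ...
    'neutron-server-plugin-networking-cisco': {
        'type': 'git',
        'location': ('https://github.com/openstack/networking-cisco'),
        # Assumption: pin the branch or tag you want built into the image
        'reference': 'master',
    },
}
```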
If the requirement is to have Kolla publish a neutron-server container with *only* the Cisco plugin, then this is where it gets a little more tricky. Sure, we can go the route that's proposed in your patch, but we end up then maintaining a massive number of neutron-server containers, one per plugin. It also does not address the issue of what people should do when they want a combination or mix of plugins together.

So right now I feel Kolla takes a middle ground, where we publish a neutron-server container with a variety of common plugins. If operators have specific requirements, they should create their own config file and build their own images, which we expect any serious production setup to be doing anyway.

-Paul

On 10/05/18 18:12, Sandhya Dasu (sadasu) wrote:
> Yes, I think there is some misunderstanding on what I am trying to accomplish here.
>
> I am utilizing existing Kolla constructs to prove that they work for 3rd party out of tree vendor drivers too.
> At this point, anything that a 3rd party vendor driver does (the way they build their containers, where they publish it and how they generate config) is completely out of scope of Kolla.
>
> I want to use the spec as a place to articulate and discuss best practices and figure out what part of supporting 3rd party vendor drivers can stay within the Kolla tree and what should be out.
> I have witnessed many discussions on this topic, but the only takeaway I get is "there are ways to do it but it can't be part of Kolla".
>
> Using the existing kolla constructs of template-override, plugin-archive and config-dir, let us say the 3rd party vendor builds a container.
> OpenStack TC does not want these containers to be part of tarballs.openstack.org. Kolla publishes its containers to DockerHub under the Kolla project.
> If these 3rd party vendor drivers publish to DockerHub they will have to publish under a different project. So, an OpenStack installation that needs these drivers will have to pull images from 2 or more DockerHub projects?!
>
> Or would you prefer that OpenStack operators build their own images using the out-of-tree Dockerfile for that vendor?
>
> Again, should the config changes to support these drivers be part of the kolla-ansible repo or should they be out-of-tree?
>
> It is hard to have this type of discussion on IRC so I started this email thread.
>
> Thanks,
> Sandhya
>
> On 5/10/18, 5:59 AM, "Paul Bourke (pbourke) (Code Review)" wrote:
>
> Paul Bourke (pbourke) has posted comments on this change. ( https://review.openstack.org/567278 )
>
> Change subject: Building Kolla containers with 3rd party vendor drivers
> ......................................................................
>
> Patch Set 2: Code-Review-1
>
> Hi Sandhya, after reading the spec most of my thoughts echo Eduardo's. I'm wondering if there's some misunderstanding on how the current plugin functionality works? Feel free to ping me on irc, I'd be happy to discuss further - maybe there's still some element of what's there that's not working for your use case.
> --
> To view, visit https://review.openstack.org/567278
> To unsubscribe, visit https://review.openstack.org/settings
>
> Gerrit-MessageType: comment
> Gerrit-Change-Id: I681d6a7b38b6cafe7ebe88a1a1f2d53943e1aab2
> Gerrit-PatchSet: 2
> Gerrit-Project: openstack/kolla
> Gerrit-Branch: master
> Gerrit-Owner: Sandhya Dasu
> Gerrit-Reviewer: Duong Ha-Quang
> Gerrit-Reviewer: Eduardo Gonzalez
> Gerrit-Reviewer: Paul Bourke (pbourke)
> Gerrit-Reviewer: Zuul
> Gerrit-HasComments: No

From doug at doughellmann.com Fri May 11 18:04:38 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 11 May 2018 14:04:38 -0400
Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
In-Reply-To:
References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org>
Message-ID: <1526061568-sup-5500@lrrr.local>

Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> Hi zigo and horizon plugin maintainers,
>
> Horizon itself already supports Django 2.0, and the horizon unit tests cover
> Django 2.0 with Python 3.5.
>
> A question to all is whether we change the upper bound of Django from <2.0
> to <2.1.
> My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> (Note that Django 1.11 will continue to be used for the python 2.7 environment.)

Do we need to cap it at all? We've been trying to express our dependencies without caps and rely on the constraints list to test using a common version because this offers the most flexibility as we move to newer versions over time.

> There are several points we should consider:
> - If we change it in global-requirements.txt, it means Django 2.0 will be
> used for the python 3.5 environment.
> - Quite a few horizon plugins still do not support Django 2.0, so
> bumping the upper bound to <2.1 will break their py35 tests.
> - From my experience of Django 2.0 support in some plugins, the required
> changes are relatively simple, like [1].
>
> I created an etherpad page to track Django 2.0 support in horizon plugins.
> https://etherpad.openstack.org/p/django20-support
>
> I proposed Django 2.0 support patches to several projects which I think are
> major.
> # Do not blame me if I don't cover your project :)
>
> Thoughts?

It seems like a good goal for the horizon-plugin author community to bring those projects up to date by supporting a current version of Django (and any other dependencies), especially as we discuss the impending switch over to python-3-first and then python-3-only. If this is an area where teams need help, updating that etherpad with notes and requests for assistance will help us split up the work.

Doug

> Thanks,
> Akihiro
>
> [1] https://review.openstack.org/#/c/566476/
>
> On Tue, May 8, 2018 at 17:45, Thomas Goirand wrote:
> > Hi,
> >
> > It has been decided that, in Debian, we'll switch to Django 2.0 after
> > Buster is released. Buster is to be frozen next February. This
> > means that we have roughly one more year before Django 1.x goes away.
> >
> > Hopefully, Horizon will be ready for it, right?
> > Hoping this helps,
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From Kevin.Fox at pnnl.gov Fri May 11 19:00:55 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Fri, 11 May 2018 19:00:55 +0000
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To:
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com>,
Message-ID: <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>

Who are your users, what do they need, are you meeting those needs, and what can you do to better things?

If that can't be answered, how do you know if you are making progress or staying relevant?

Lines of code committed is not a metric of real progress. Number of reviews isn't. Feature addition metrics aren't necessarily meaningful if the features are not relevant. Developer community size is not really a metric of progress either. (Not a bad thing, it just doesn't guarantee progress if devs are going in different directions.)

If you can't answer them, how do you separate things like "devs are leaving because the project is mature" from "the overall project is really broken and folks are just leaving"?

Part of the disconnect to me has been that these questions have been left up to the projects by and large. But users don't use the projects. Users use OpenStack. Or, moving forward, they at least use a Constellation. But Constellation is still just a documentation construct, not really a first class entity.

Currently the isolation between the Projects and the thing that the users actually use, the Constellation, allows user needs to easily slip through the cracks. Because "Project X: we agree that is a problem, but it's project Y's problem. Project Y: we agree that is a problem, but it's project X's problem." No, seriously, it's OpenStack's problem. Most of the major issues I've hit in my many years of using OpenStack were in that category. And there wasn't a good forum for addressing them.

A related effect of the isolation is also that the projects don't work on the commons, nor look around much at what others are doing, either within OpenStack or outside. They solve problems at the project level and say, look, I've solved it, but don't look at what happens when all the projects do that independently and push more work to the users. The end result of this lack of Leadership is more work for the users compared to competitors.

IMO, OpenStack really needs some Leadership at a higher level. It seems to be lacking some things:
1. A group that performs... lacking a good word.... reconnaissance? How is OpenStack faring in the world? How is the world changing and how must OpenStack change to continue to be relevant? If you don't know you have a problem you can't correct it.
2. A group that decides some difficult political things, like who the users are. Maybe at a per constellation level. This does not mean rejecting use cases from "non-users", just helping the projects sort out priorities.
3. A group that decides on a general direction for OpenStack's technical solutions, encourages building up the commons, helps break down the project communication walls, and picks homes for features when it takes too long for a user need to be met. (Users really don't care which OpenStack project does what feature. They just know that they are suffering, that things don't get addressed in a timely manner, and will maybe consider looking outside of OpenStack for a solution.)

The current governance structure is focused on hoping the individual projects will look at the big picture and adjust to it, and commit the relevant common code to the commons rather than one-offing a solution, and discussing solutions between projects to gain consensus. But that's generally not happening. The projects have a narrow view of the world and just wanna make progress on their code. I get that. The other bits are hard. Guidance to the projects on how they are, or are not, fitting would help them make better choices and better code.

The focus so much on projects has made us lose sight of why they exist: to serve the Users. Users don't use projects as OpenStack has defined them though. And we can't even really define what a user is. This is a big problem.

Anyway, more Leadership please! Ready..... GO! :)

Thanks,
Kevin

________________________________________
From: Jay Pipes [jaypipes at gmail.com]
Sent: Friday, May 11, 2018 9:31 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver

On 05/11/2018 12:21 PM, Zane Bitter wrote:
> On 11/05/18 11:46, Jay Pipes wrote:
>> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>>> On 10/05/18 16:45, Matt Riedemann wrote:
>>>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>>>> How can we avoid (or get out of) the local maximum trap and ensure
>>>>> that OpenStack will meet the needs of all the users we want to
>>>>> serve, not just those whose needs are similar to those of the users
>>>>> we already have?
>>>>
>>>> The phrase "jack of all trades, master of none" comes to mind here.
>>>
>>> Stipulating the constraint that you can't please everybody, how do
>>> you ensure that you're meeting the needs of the users who are most
>>> important to the long-term sustainability of the project, and not
>>> just the ones who were easiest to bootstrap?
>>
>> Who gets to decide who the users are "that are most important to the
>> long-term sustainability of the project"?
>
> The thing I'm hoping to convince people of here is that the question is
> interesting independently of how you define that.

Agreed. The question is interesting regardless, but how seriously people take the answers to the question will depend on how much they agree with the people that decide who the "important users" are.
Best,
-jay

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From colleen at gazlene.net Fri May 11 19:03:17 2018
From: colleen at gazlene.net (Colleen Murphy)
Date: Fri, 11 May 2018 21:03:17 +0200
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 7 May 2018
Message-ID: <1526065397.2860540.1369075056.1C099091@webmail.messagingengine.com>

# Keystone Team Update - Week of 7 May 2018

## News

### Patrole in CI

With all the work that has been happening around fixing policy, it would be good to have better policy validation in CI[1]. However, there are some concerns that using Patrole in a voting gate job will lock us in to unwanted behavior. We agreed to start setting up the framework but to keep the jobs nonvoting until 968696[2] is fully fixed.

[1] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-51
[2] https://bugs.launchpad.net/keystone/+bug/968696

### Multi-Site Keystone

Keystone has never been able to provide straightforward guidance on implementing multi-region/multi-site clouds. We discussed an implementation proposal to "stretch" over existing clouds[3] with a combination of Galera syncing and orchestration around keystone-manage commands. A proof of concept already exists[4] and a spec will be forthcoming. We had also discussed[5] tying this into the default roles spec[6] by perhaps assigning static, non-UUID IDs to the new default roles in order to gain uniformity across distinct sites, but migrating existing clouds would be a challenge and we would need to come up with a solution for domain-specific roles.

[3] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-05-08-16.00.log.html#l-156
[4] https://github.com/zzzeek/stretch_cluster/tree/standard_tripleo_version
[5] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-05-07.log.html#t2018-05-07T17:23:29
[6] https://review.openstack.org/566377

## Open Specs

Search query: https://bit.ly/2G8Ai5q

As discussed last week, the default roles spec has been reproposed to keystone-specs[7]. We also need to prioritize reviews of the unified limits specs[8][9]. The remaining specs are likely to be deferred until next cycle.

[7] https://review.openstack.org/566377
[8] https://review.openstack.org/540803
[9] https://review.openstack.org/565412

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 19 changes this week. Among these were patches to enhance service discovery in keystoneauth using service-types-authority.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 43 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.

## Bugs

Launchpad report generator: https://github.com/lbragstad/launchpad-toolkit

This week we opened 5 new bugs and closed 4.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

We have about four weeks to get our current spec proposals in shape to be merged, and six weeks to start seeing implementation proposals for those specs.
## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter

Dashboard generated using gerrit-dash-creator and https://gist.github.com/lbragstad/9b0477289177743d1ebfc276d1697b67

From jimmy at openstack.org Fri May 11 19:28:30 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Fri, 11 May 2018 14:28:30 -0500
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com>, <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
Message-ID: <5AF5EEDE.5080002@openstack.org>

Fox, Kevin M wrote:
> Who are your users, what do they need, are you meeting those needs, and what can you do to better things?
>
> IMO, OpenStack really needs some Leadership at a higher level. It seems to be lacking some things:
> 1. A group that performs... lacking a good word.... reconnaissance? How is OpenStack faring in the world? How is the world changing and how must OpenStack change to continue to be relevant? If you don't know you have a problem you can't correct it.
> 2. A group that decides some difficult political things, like who the users are. Maybe at a per constellation level. This does not mean rejecting use cases from "non-users", just helping the projects sort out priorities.
> 3. A group that decides on a general direction for OpenStack's technical solutions, encourages building up the commons, helps break down the project communication walls, and picks homes for features when it takes too long for a user need to be met (users really don't care which OpenStack project does what feature. They just know that they are suffering, things don't get addressed in a timely manner, and will maybe consider looking outside of OpenStack for a solution).

This is a big reason we're excited that the Ops & Users Meetup is co-locating at the next PTG. Some of the breakdown is getting actionable items from Ops Meetups and UC back to the devs in time for the next development cycle.

> The current governance structure is focused on hoping the individual projects will look at the big picture and adjust to it, and commit the relevant common code to the commons rather than one-offing a solution and discussing solutions between projects to gain consensus. But that's generally not happening. The projects have a narrow view of the world and just wanna make progress on their code. I get that. The other bits are hard. Guidance to the projects on how they are, or are not, fitting would help them make better choices and better code.

Keep in mind, UC also has governance :) I think it's really important to start looking to the UC to help craft the big picture and be part of the conversation. This serves the purpose of getting Ops & Devs working together towards a better OpenStack. It also helps broaden the perspective of everyone involved in the project, from all sides.

> The focus so much on projects has made us lose sight of why they exist: to serve the Users. Users don't use projects as OpenStack has defined them though. And we can't even really define what a user is. This is a big problem.
>
> Anyway, more Leadership please! Ready..... GO! :)
>
> Thanks,
> Kevin
>
> ________________________________________
> From: Jay Pipes [jaypipes at gmail.com]
> Sent: Friday, May 11, 2018 9:31 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
>
> On 05/11/2018 12:21 PM, Zane Bitter wrote:
>> On 11/05/18 11:46, Jay Pipes wrote:
>>> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>>>> On 10/05/18 16:45, Matt Riedemann wrote:
>>>>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>>>>> How can we avoid (or get out of) the local maximum trap and ensure
>>>>>> that OpenStack will meet the needs of all the users we want to
>>>>>> serve, not just those whose needs are similar to those of the users
>>>>>> we already have?
>>>>> The phrase "jack of all trades, master of none" comes to mind here.
>>>> Stipulating the constraint that you can't please everybody, how do
>>>> you ensure that you're meeting the needs of the users who are most
>>>> important to the long-term sustainability of the project, and not
>>>> just the ones who were easiest to bootstrap?
>>> Who gets to decide who the users are "that are most important to the
>>> long-term sustainability of the project"?
>> The thing I'm hoping to convince people of here is that the question is
>> interesting independently of how you define that.
>
> Agreed. The question is interesting regardless, but how seriously people
> take the answers to the question will depend on how much they agree with
> the people that decide who the "important users" are.
>
> Best,
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lbragstad at gmail.com Fri May 11 20:48:06 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 11 May 2018 15:48:06 -0500
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
Message-ID:

On 05/11/2018 02:00 PM, Fox, Kevin M wrote:
> Who are your users, what do they need, are you meeting those needs, and what can you do to better things?
>
> If that can't be answered, how do you know if you are making progress or staying relevant?
>
> Lines of code committed is not a metric of real progress.
> Number of reviews isn't.
> Feature addition metrics aren't necessarily meaningful if the features are not relevant.
> Developer community size is not really a metric of progress either. (Not a bad thing, it just doesn't guarantee progress if devs are going in different directions.)
>
> If you can't answer them, how do you separate things like "devs are leaving because the project is mature" from "the overall project is really broken and folks are just leaving"?
> Part of the disconnect to me has been that these questions have been left up to the projects by and large. But users don't use the projects. Users use OpenStack. Or, moving forward, they at least use a Constellation. But Constellation is still just a documentation construct, not really a first class entity.
>
> Currently the isolation between the Projects and the thing that the users actually use, the Constellation, allows user needs to easily slip through the cracks. Because "Project X: we agree that is a problem, but it's project Y's problem. Project Y: we agree that is a problem, but it's project X's problem." No, seriously, it's OpenStack's problem. Most of the major issues I've hit in my many years of using OpenStack were in that category. And there wasn't a good forum for addressing them.

I can think of a couple good example problems that probably fall into the category you've described. But, I wouldn't say it was solely because two or more projects were convinced the problem exists and it wasn't their responsibility (IMO, that at least seems like a broad generalization of the root of why cross-project issues take a long time). For example, the push for default roles surfaced in 2015 as an OpenStack-wide specification, but lost steam when we realized just how terrible the migration path would be for users. Eventually, a solution for that migration issue made its way into the commons (oslo.policy) and enabled a Queens community goal. I think the leadership established through community goals makes this kind of work possible, even if it does take a while.

> A related effect of the isolation is also that the projects don't work on the commons, nor look around much at what others are doing, either within OpenStack or outside. They solve problems at the project level and say, look, I've solved it, but don't look at what happens when all the projects do that independently and push more work to the users. The end result of this lack of Leadership is more work for the users compared to competitors.
>
> IMO, OpenStack really needs some Leadership at a higher level. It seems to be lacking some things:
> 1. A group that performs... lacking a good word.... reconnaissance? How is OpenStack faring in the world? How is the world changing and how must OpenStack change to continue to be relevant? If you don't know you have a problem you can't correct it.
> 2. A group that decides some difficult political things, like who the users are. Maybe at a per constellation level. This does not mean rejecting use cases from "non-users", just helping the projects sort out priorities.
> 3. A group that decides on a general direction for OpenStack's technical solutions, encourages building up the commons, helps break down the project communication walls, and picks homes for features when it takes too long for a user need to be met (users really don't care which OpenStack project does what feature. They just know that they are suffering, things don't get addressed in a timely manner, and will maybe consider looking outside of OpenStack for a solution).

This sounds like the group of people who propose, review, and implement community goals.

> The current governance structure is focused on hoping the individual projects will look at the big picture and adjust to it, and commit the relevant common code to the commons rather than one-offing a solution and discussing solutions between projects to gain consensus. But that's generally not happening.
> The projects have a narrow view of the world and just wanna make progress on their code. I get that. The other bits are hard. Guidance to the projects on how they are, or are not, fitting would help them make better choices and better code.
>
> The focus so much on projects has made us lose sight of why they exist: to serve the Users. Users don't use projects as OpenStack has defined them though. And we can't even really define what a user is. This is a big problem.
>
> Anyway, more Leadership please! Ready..... GO! :)
>
> Thanks,
> Kevin
>
> ________________________________________
> From: Jay Pipes [jaypipes at gmail.com]
> Sent: Friday, May 11, 2018 9:31 AM
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
>
> On 05/11/2018 12:21 PM, Zane Bitter wrote:
>> On 11/05/18 11:46, Jay Pipes wrote:
>>> On 05/10/2018 08:12 PM, Zane Bitter wrote:
>>>> On 10/05/18 16:45, Matt Riedemann wrote:
>>>>> On 5/10/2018 3:38 PM, Zane Bitter wrote:
>>>>>> How can we avoid (or get out of) the local maximum trap and ensure
>>>>>> that OpenStack will meet the needs of all the users we want to
>>>>>> serve, not just those whose needs are similar to those of the users
>>>>>> we already have?
>>>>> The phrase "jack of all trades, master of none" comes to mind here.
>>>> Stipulating the constraint that you can't please everybody, how do
>>>> you ensure that you're meeting the needs of the users who are most
>>>> important to the long-term sustainability of the project, and not
>>>> just the ones who were easiest to bootstrap?
>>> Who gets to decide who the users are "that are most important to the
>>> long-term sustainability of the project"?
>> The thing I'm hoping to convince people of here is that the question is
>> interesting independently of how you define that.
>
> Agreed. The question is interesting regardless, but how seriously people
> take the answers to the question will depend on how much they agree with
> the people that decide who the "important users" are.
>
> Best,
> -jay
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From gusev at selectel.ru Fri May 11 20:37:06 2018
From: gusev at selectel.ru (Vlad Gusev)
Date: Fri, 11 May 2018 23:37:06 +0300
Subject: [openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens
Message-ID:

Hello.

We faced a bug in keystoneauth which didn't exist before Queens.

In our OpenStack deployments we use urls like http://controller:5000/v3 for internal and admin endpoints and urls like https://api.example.org/identity/v3 for public endpoints. We set the option public_endpoint in the [default] section of keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For example, for keystone it is 'public_endpoint=https://api.example.org/identity/'.
Since keystoneauth 3.2.0, or commit https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c, all internal client requests to the internal endpoints (for example, openstack server list from the controller node) fail with a 404 error, because keystoneauth tries to do auto discovery at http://controller:5000/v3. It gets {"href": "https://api.example.org/identity/v3/", "rel": "self"} because of the public_endpoint option, and then in the function _combine_relative_url() (keystoneauth1/discover.py:405) keystoneauth combines http://controller:5000/ with the path from the public href. So after the auto discovery attempt it goes to the wrong path http://controller:5000/identity/v3/.

Before this commit, openstackclient made the auth request to https://api.example.org/identity/v3/auth/tokens (and it worked, because in our deployment internal services and console clients can access this public url). At best, we expect openstackclient to always go to http://controller:5000/v3/.

This problem could be partially solved by explicitly passing the public --os-auth-url https://api.example.org/identity/identity/v3 to the console clients, even if we want to use internal endpoints.

I found a similar bug in launchpad, but it hasn't received any attention: https://bugs.launchpad.net/keystoneauth/+bug/1733052

What could be done with this behavior of keystoneauth auto discovery?

- Vlad

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From morgan.fainberg at gmail.com Fri May 11 21:10:07 2018
From: morgan.fainberg at gmail.com (Morgan Fainberg)
Date: Fri, 11 May 2018 14:10:07 -0700
Subject: [openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens
In-Reply-To:
References:
Message-ID:

Typically speaking, if we broke a behavior via a change in KeystoneAuth (not some behavior change in openstackclient or the way osc processes requests), we are in the wrong and we will need to go back through and fix the previous behavior. I'll spend some time going through this to verify whether this really is a KSA change bug or something else.

If it is in fact a KSA (keystoneauth) bug, we'll work to restore the previous behavior(s) as quickly as is reasonably possible.

Cheers,
--Morgan

On Fri, May 11, 2018 at 1:37 PM, Vlad Gusev wrote:
> Hello.
>
> We faced a bug in keystoneauth which didn't exist before Queens.
>
> In our OpenStack deployments we use urls like http://controller:5000/v3 for
> internal and admin endpoints and urls like
> https://api.example.org/identity/v3 for public endpoints. We set the option
> public_endpoint in the [default] section of
> keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For example,
> for keystone it is 'public_endpoint=https://api.example.org/identity/'.
>
> Since keystoneauth 3.2.0 or commit
> https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c
> all internal client requests to the internal endpoints (for example,
> openstack server list from the controller node) fail with a 404 error, because it
> tries to do auto discovery at http://controller:5000/v3. It gets
> {"href": "https://api.example.org/identity/v3/", "rel": "self"} because of
> the public_endpoint option, and then in the function _combine_relative_url()
> (keystoneauth1/discover.py:405) keystoneauth combines
> http://controller:5000/ with the path from the public href.
> So after
> the auto discovery attempt it goes to the wrong path
> http://controller:5000/identity/v3/
>
> Before this commit, openstackclient made the auth request to
> https://api.example.org/identity/v3/auth/tokens (and it worked, because in
> our deployment internal services and console clients can access this public
> url). At best, we expect openstackclient to always go to
> http://controller:5000/v3/
>
> This problem could be partially solved by explicitly passing the public
> --os-auth-url https://api.example.org/identity/identity/v3 to the console
> clients even if we want to use internal endpoints.
>
> I found a similar bug in launchpad, but it hasn't received any attention:
> https://bugs.launchpad.net/keystoneauth/+bug/1733052
>
> What could be done with this behavior of keystoneauth auto discovery?
>
> - Vlad
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mriedemos at gmail.com Fri May 11 21:13:27 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 11 May 2018 16:13:27 -0500
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
Message-ID: <8f497d4b-51a9-1172-fcd9-e7dac15776dd@gmail.com>

On 5/11/2018 2:00 PM, Fox, Kevin M wrote:
> Currently the isolation between the Projects and the thing that the users actually use, the Constellation, allows user needs to easily slip through the cracks. Because "Project X: we agree that is a problem, but it's project Y's problem. Project Y: we agree that is a problem, but it's project X's problem." No, seriously, it's OpenStack's problem. Most of the major issues I've hit in my many years of using OpenStack were in that category. And there wasn't a good forum for addressing them.

Agree, and we'll be talking about this during the volume multi-attach talk at the summit [1]. Because once we got it out the door in Queens, there was a lot of "what took so long?" feedback, and the answer to that question pulls from a lot of the stuff you're talking about in this thread, i.e. big changes are hard, big changes across multiple projects are hard, finding people to sustain the efforts for those big changes is hard, not dumping a steaming pile on the operators and users is hard (think smooth upgrades), etc. So things take time to do them correctly and even then people are not satisfied because "it took too long".

Anyway, there are hopefully some nuggets of wisdom we can share in that talk to make stuff like this smoother in the future. I know this isn't the only example (by far), it's just a recent one. Lance has some other good ones in his reply.
[1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20850/the-multi-release-multi-project-road-to-volume-multi-attach

--
Thanks,
Matt

From lbragstad at gmail.com Fri May 11 21:34:32 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 11 May 2018 16:34:32 -0500
Subject: [openstack-dev] [forum] [keystone] unified limits etherpad
Message-ID: <7cc85746-009e-4de5-0bbe-6ec8e74122e4@gmail.com>

Hi all,

I've created an etherpad for the unified limits session at the forum [0]. I've bootstrapped it with some basic context so that we can spend as much time as possible on the session goals. If you have questions, comments, or additional session goals, please feel free to add them to the etherpad.

Thanks and see you there,
Lance

[0] https://etherpad.openstack.org/p/YVR-rocky-unified-limits

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From lbragstad at gmail.com Fri May 11 21:36:56 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 11 May 2018 16:36:56 -0500
Subject: [openstack-dev] [forum] [keystone] default roles etherpad
Message-ID: <2cc782b9-bc0f-fb3a-0cec-43d5549bc4cb@gmail.com>

Hey everyone,

I've created an etherpad for the default roles discussion in Vancouver [0]. It currently contains basic context and some session goals. If you have any input or additional session goals, please don't hesitate to add to the etherpad.

Thanks and hope to see you there,
Lance

[0] https://etherpad.openstack.org/p/YVR-rocky-default-roles

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From mordred at inaugust.com Sat May 12 15:00:54 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Sat, 12 May 2018 10:00:54 -0500
Subject: [openstack-dev] [keystone] keystoneauth version auto discovery for internal endpoints in queens
In-Reply-To:
References:
Message-ID: <055e41b5-d3a5-5977-1674-e8f4506f5200@inaugust.com>

On 05/11/2018 03:37 PM, Vlad Gusev wrote:
> Hello.
>
> We faced a bug in keystoneauth which didn't exist before Queens.

Sorry about that.

> In our OpenStack deployments we use urls like http://controller:5000/v3
> for internal and admin endpoints and urls like
> https://api.example.org/identity/v3 for public endpoints.

Thank you for using suburl deployment for your public interface and not the silly ports!!!

> We set the option
> public_endpoint in the [default] section of
> keystone.conf/nova.conf/cinder.conf/glance.conf/neutron.conf. For
> example, for keystone it is
> 'public_endpoint=https://api.example.org/identity/'.
>
> Since keystoneauth 3.2.0 or commit
> https://github.com/openstack/keystoneauth/commit/8b8ff830e89923ca6862362a5d16e496a0c0093c
> all internal client requests to the internal endpoints (for example,
> openstack server list from the controller node) fail with a 404 error, because
> keystoneauth tries to do auto discovery at http://controller:5000/v3. It gets
> {"href": "https://api.example.org/identity/v3/", "rel": "self"} because
> of the public_endpoint option, and then in the function
> _combine_relative_url() (keystoneauth1/discover.py:405) keystoneauth
> combines http://controller:5000/ with the path from the public href. So
> after the auto discovery attempt it goes to the wrong path
> http://controller:5000/identity/v3/

Ok.
I'm going to argue that there are bugs on both the server side AND in keystoneauth. I believe I know how to fix the keystoneauth one - but let me describe why I think the server is broken as well, and then we can figure out how to fix that. I'm going to describe it in slightly excruciating detail, just to make sure we're all on the same page about mechanics that may be going on behind the scenes.

The user has said: "I want the internal interface of v3 of the identity service."

First, the identity service has to be found in the catalog. Looking in the catalog, we find this:

{
    "endpoints": [
        {
            "id": "4deb4d0504a044a395d4480741ba628c",
            "interface": "public",
            "region": "RegionOne",
            "url": "https://api.example.com/identity"
        },
        {
            "id": "012322eeedcd459edabb4933021112bc",
            "interface": "internal",
            "region": "RegionOne",
            "url": "http://controller:5000/v3"
        }
    ],
    "name": "keystone",
    "type": "identity"
},

We've found the entry for the 'identity' service, and looking at the endpoints we see that the internal endpoint is: http://controller:5000/v3

The next step is version discovery, because the user wants version 3 of the api. (I'm skipping possible optimizations that can be applied, on purpose.) To do version discovery, one does a GET on the endpoint found in the catalog, so GET http://controller:5000/v3. That returns:

{
    "versions": {
        "values": [
            {
                "status": "stable",
                "updated": "2016-04-04T00:00:00Z",
                "media-types": [
                    {
                        "base": "application/json",
                        "type": "application/vnd.openstack.identity-v3+json"
                    }
                ],
                "id": "v3.6",
                "links": [
                    {
                        "href": "https://api.example.com/identity/v3/",
                        "rel": "self"
                    }
                ]
            }
        ]
    }
}

Here is the server-side bug. A GET on the discovery document on the internal endpoint returned an endpoint for the public interface. That is incorrect information. GET http://controller:5000/v3 should return either:

{
    "versions": {
        "values": [
            {
                "status": "stable",
                "updated": "2016-04-04T00:00:00Z",
                "media-types": [
                    {
                        "base": "application/json",
                        "type": "application/vnd.openstack.identity-v3+json"
                    }
                ],
                "id": "v3.6",
                "links": [
                    {
                        "href": "http://controller:5000/v3/",
                        "rel": "self"
                    }
                ]
            }
        ]
    }
}

or

{
    "versions": {
        "values": [
            {
                "status": "stable",
                "updated": "2016-04-04T00:00:00Z",
                "media-types": [
                    {
                        "base": "application/json",
                        "type": "application/vnd.openstack.identity-v3+json"
                    }
                ],
                "id": "v3.6",
                "links": [
                    {
                        "href": "/v3/",
                        "rel": "self"
                    }
                ]
            }
        ]
    }
}

That's because the discovery documents are maps to what the user wants. The user needs to be able to follow them automatically.

NOW - there is also a keystoneauth bug in play here that, combined with this server-side bug, has produced the issue you have. That is in the way we do the catalog / discovery URL join.

First of all - we do the catalog / discovery URL join because of a frequently occurring deployment bug in the other direction. That is, it is an EXTREMELY common misconfiguration for the discovery url to return the internal url (this is what happens if public_url is not set). In order to deal with that, we take the url from the catalog (which we know is valid for the given interface) and do the url join you reported between it and the url from the discovery document to produce a working url. This is, as you can see, not doing the correct thing if the catalog url and the discovery url have different paths.

I believe we can fix this to be more robust and handle both deployment issues if, instead of using url joining as we are doing now, we use the logic we have elsewhere to pop project_id and version from a url and then to put them back in place.
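In rough Python, that idea might look like this (an illustrative sketch of the approach only, not the actual keystoneauth code; the real implementation would reuse the existing project_id/version helpers in discover.py):

```python
from urllib.parse import urlparse

def decompose(url, project_id=None):
    """Split url into (base_url, version_segment, project_id_segment).

    base_url keeps any path prefix that comes before the version
    segment, e.g. 'https://api.example.com/identity/v3' decomposes to
    ('https://api.example.com/identity', 'v3', None).
    """
    parts = urlparse(url)
    prefix, version, project = [], None, None
    for seg in (s for s in parts.path.split('/') if s):
        if version is None and seg.startswith('v') and seg[1:2].isdigit():
            version = seg                # e.g. 'v3' or 'v2.1'
        elif version is None:
            prefix.append(seg)           # path prefix before the version
        elif project_id and seg == project_id:
            project = seg                # cinder-style /v2/<project_id>
    base = '{}://{}'.format(parts.scheme, parts.netloc)
    if prefix:
        base += '/' + '/'.join(prefix)
    return base, version, project

def combine(catalog_url, discovery_url, project_id=None):
    # Trust scheme/host/prefix (and project_id) from the catalog entry,
    # but take the version segment from the discovery document.
    cat_base, cat_ver, cat_project = decompose(catalog_url, project_id)
    _, disc_ver, _ = decompose(discovery_url, project_id)
    version = disc_ver or cat_ver
    if cat_project:
        return '{}/{}/{}/'.format(cat_base, version, cat_project)
    return '{}/{}/'.format(cat_base, version)
```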
If we apply that here, what we'd do is the following:

catalog_url is http://controller:5000/v3/
discovery_url is https://api.example.com/identity/v3/

decompose catalog url:
  catalog_project_id is None
  catalog_version_segment is "v3"
  catalog_base_url is "http://controller:5000"

decompose discovery url:
  discovery_project_id is None
  discovery_version_segment is "v3"
  discovery_base_url is "https://api.example.com/identity"

combine catalog and discovery url for discovered versioned endpoint:
  if catalog_project_id:
    {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
  else:
    {catalog_base_url}/{discovery_version_segment}/

which would produce http://controller:5000/v3/

That may seem like a lot to go through to wind up back at the catalog url, but the process itself needs to work if the url in the catalog is not the url the user should use. For instance, if you put the unversioned endpoint in your catalog for the internal interface, the same as you do for the public:

catalog_url is http://controller:5000/
discovery_url is https://api.example.com/identity/v3/

decompose catalog url:
  catalog_project_id is None
  catalog_version_segment is None
  catalog_base_url is "http://controller:5000"

decompose discovery url:
  discovery_project_id is None
  discovery_version_segment is "v3"
  discovery_base_url is "https://api.example.com/identity"

combine catalog and discovery url for discovered versioned endpoint:
  if catalog_project_id:
    {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
  else:
    {catalog_base_url}/{discovery_version_segment}/

which would produce http://controller:5000/v3/

The process in general is also required for services like cinder that still require project_id in the urls and put that in the catalog, because otherwise the discovery endpoint is not actually usable:

catalog_url is http://cinder:5000/v2/123456/
discovery_url is https://api.example.com/block-storage/v3/

decompose catalog url:
  catalog_project_id is "123456"
  catalog_version_segment is "v2"
  catalog_base_url is "http://cinder:5000"

decompose discovery url:
  discovery_project_id is None
  discovery_version_segment is "v3"
  discovery_base_url is "https://api.example.com/block-storage"

combine catalog and discovery url for discovered versioned endpoint:
  if catalog_project_id:
    {catalog_base_url}/{discovery_version_segment}/{catalog_project_id}/
  else:
    {catalog_base_url}/{discovery_version_segment}/

which would produce http://cinder:5000/v3/123456/

In any case - if you haven't given up reading this email by now ... I believe we can fix the issue you're seeing in keystoneauth - and I'm sorry for our invalid assumption about matching paths. I do think that we should think a bit more systemically about how to have discovery documents return the correct information in the first place so that client-side hacks such as these are not needed.

> Before this commit, openstackclient made the auth request to
> https://api.example.org/identity/v3/auth/tokens (and it worked, because
> in our deployment internal services and console clients can access this
> public url). At best, we expect openstackclient to always go to
> http://controller:5000/v3/
>
> This problem could be partially solved by explicitly passing the public
> --os-auth-url https://api.example.org/identity/identity/v3 to the
> console clients even if we want to use internal endpoints.
>
> I found a similar bug in launchpad, but it hasn't received any
> attention: https://bugs.launchpad.net/keystoneauth/+bug/1733052
>
> What could be done with this behavior of keystoneauth auto discovery?
>
> - Vlad
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From whayutin at redhat.com Sat May 12 16:10:42 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Sat, 12 May 2018 10:10:42 -0600
Subject: [openstack-dev] [tripleo] tripleo upstream gate outage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To:
References:
Message-ID:

On Wed, May 9, 2018 at 10:43 PM Wesley Hayutin wrote:
> FYI.. https://bugs.launchpad.net/tripleo/+bug/1770298
>
> I'm on #openstack-infra chatting w/ Ian atm.
> Thanks

Greetings,

I wanted to update everyone on the status of the upstream tripleo check and gate jobs. There have been a series of infra related issues that caused the upstream tripleo gates to go red.

1. The first issue hit was https://bugs.launchpad.net/tripleo/+bug/1770298 which caused package install errors.
2. Shortly after #1 was resolved, CentOS released 7.5, which comes directly into the upstream repos untested and ungated. Additionally, the associated qcow2 image and container-base images were not updated at the same time as the yum repos. https://bugs.launchpad.net/tripleo/+bug/1770355
3. Related to #2, the container and bm image rpms were not in sync, causing https://bugs.launchpad.net/tripleo/+bug/1770692
4. Building the bm images was failing due to an open issue with the centos kernel; thanks to Yatin and Alfredo for https://review.rdoproject.org/r/#/c/13737/
5. To ensure the containers are updated to the latest rpms at build time, we have the following patch from Alex: https://review.openstack.org/#/c/567636/.
6. I also noticed that we are building the centos-base container in our container build jobs, however it is not pushed out to the container registries because it is not included in the tripleo-common repo. I would like to discuss this with some of the folks working on containers. If we had an updated centos-base container, I think some of these issues would have been prevented.

The above issues were resolved, and the master promotion jobs all passed. Thanks to all who were involved!

Once the promotion jobs passed and reported status to the dlrn_api, a promotion was triggered automatically to upload the promoted images, containers, and updated dlrn hash. This failed due to network latency in the tenant where the tripleo-ci infra is hosted. The issue is tracked here: https://bugs.launchpad.net/tripleo/+bug/1770860

Matt Young and myself worked well into the evening on Friday to diagnose the issue and ended up having to execute the image, container and dlrn_hash promotion outside of our tripleo-infra tenant. Thanks to Matt for his effort.

At the moment I have updated the ci status in #tripleo, and the master check and gate jobs are green in the upstream, which should unblock merging most patches. The status of stable branches and third party ci is still being investigated. Automatic promotions are blocked until the network issues in the tripleo-infra tenant are resolved. The bug is marked with alert in #tripleo.

Please see #tripleo for future status updates.

Thanks all

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com  Sun May 13 03:44:04 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Sat, 12 May 2018 20:44:04 -0700
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: 
Message-ID: 

On Sat, May 12, 2018 at 9:10 AM, Wesley Hayutin wrote:
>
> 2. Shortly after #1 was resolved, CentOS released 7.5, which comes directly into the upstream repos untested and ungated. Additionally, the associated qcow2 image and container-base images were not updated at the same time as the yum repos. https://bugs.launchpad.net/tripleo/+bug/1770355

Why do we have this situation every time the OS is upgraded to a major version? Can't we test the image before actually using it? We could have experimental jobs testing the latest image and pin gate images to a specific one.

Like we could configure infra to deploy centos 7.4 in our gate and 7.5 in experimental, so we can take our time to fix eventual problems and make the switch when we're ready, instead of dealing with fires (that usually come all together).

It would be great to make a retrospective on this thing between tripleo ci & infra folks, and see how we can improve things.
--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com  Sun May 13 04:20:55 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 13 May 2018 13:20:55 +0900
Subject: [openstack-dev] Should we add a tempest-slow job?
In-Reply-To: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com>
References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com>
Message-ID: 

On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann wrote:
> The tempest-full job used to run API and scenario tests concurrently, and if you go back far enough I think it also ran slow tests.
>
> Sometime in the last year or so, the full job was changed to run the scenario tests in serial and exclude the slow tests altogether. So the API tests run concurrently first, and then the scenario tests run in serial. During that change, some other tests were identified as 'slow' and marked as such, meaning they don't get run in the normal tempest-full job.
>
> There are some valuable scenario tests marked as slow, however, like the only encrypted volume testing we have in tempest is marked slow, so it doesn't get run on every change for at least nova.

Yes, basically the slow tests were selected based on https://ethercalc.openstack.org/nu56u2wrfb2b; there were frequent gate failures for heavy tests, mainly from ssh checks, so we tried to mark more tests as slow. I agree that some of them are not really slow, at least in today's situation.
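For anyone not familiar with how the tagging works: marking a test as slow is just a matter of the attr decorator. A minimal sketch follows - the class and test names here are made up, only the decorator usage is the real tempest pattern:

    from tempest.lib import decorators


    class VolumeEncryptionScenarioTest(object):  # stand-in for a tempest scenario base class

        @decorators.attr(type='slow')
        def test_boot_from_encrypted_volume(self):
            # Tests tagged type='slow' are excluded from the regular
            # tempest-full selection and only run in jobs that opt in
            # to the slow tests.
            pass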
> There is only one job that can be run against nova changes which runs the slow tests, but it's in the experimental queue so people forget to run it.

The tempest job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" runs those slow tests, including the migration and LVM multibackend tests. This job runs in the tempest check pipeline and in experimental (as you mentioned) on nova and cinder [3]. We marked this as n-v to check its stability, and now it is good to go as voting on tempest.

> As a test, I've proposed a nova-slow job [1] which only runs the slow tests and only the compute API and scenario tests. Since there are currently no compute API tests marked as slow, it's really just running slow scenario tests. Results show it runs 37 tests in about 37 minutes [2]. The overall job runtime was 1 hour and 9 minutes, which is on average less than the tempest-full job. The nova-slow job is also running scenarios that nova patches don't actually care about, like the neutron IPv6 scenario tests.
>
> My question is, should we make this a generic tempest-slow job which can be run either in the integrated-gate or at least in nova/neutron/cinder consistently (I'm not sure if there are slow tests for just keystone or glance)? I don't know if the other projects already have something like this that they gate on. If so, a nova-specific job for nova changes is fine for me.

+1 on the idea. As of now, the slow-marked tests are the nova, cinder and neutron scenario tests and only 2 swift API tests [4]. I agree that making a generic job in tempest is better for maintainability. We can use the existing job for that with the modifications below:
- We can migrate the "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job to zuulv3 in the tempest repo.
- We can see if we can move the migration tests out of it and use the "nova-live-migration" job (in the tempest check pipeline), which has a much better live migration env setup and is controlled by nova.
- Then it can be named something like "tempest-scenario-multinode-lvm-multibackend".
- Run this job in the nova, cinder and neutron check pipelines instead of experimental.

Another update on the slow tests is that we are trying out the possibility of taking the slow tests back into tempest-full with the new job "tempest-full-parallel" [5]. Currently this job is n-v, and if everything works fine in this new job then we can make the tempest-full job run the slow tests as it used to previously.

> [1] https://review.openstack.org/#/c/567697/
> [2] http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138

..3 http://codesearch.openstack.org/?q=legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend&i=nope&files=&repos=
..4 https://github.com/openstack/tempest/search?utf8=%E2%9C%93&q=%22type%3D%27slow%27%22&type=
..5 https://github.com/openstack/tempest/blob/9c628189e798f46de8c4b9484237f4d6dc6ade7e/.zuul.yaml#L48

-gmann

> --
> Thanks,
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From gmann at ghanshyammann.com  Sun May 13 04:24:23 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sun, 13 May 2018 13:24:23 +0900
Subject: [openstack-dev] Should we add a tempest-slow job?
In-Reply-To: <20180511143208.GA6859@zeong>
References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> <20180511143208.GA6859@zeong>
Message-ID: 

On Fri, May 11, 2018 at 11:32 PM, Matthew Treinish wrote:
> On Fri, May 11, 2018 at 08:45:39AM -0500, Matt Riedemann wrote:
>> The tempest-full job used to run API and scenario tests concurrently, and if you go back far enough I think it also ran slow tests.
>
> Well, it's a bit more subtle than that. Skipping slow tests was added right before we introduced parallel execution to tempest ~5 years ago:
>
> https://github.com/openstack/tempest/commit/68a8060b24abd6b6bf99c4f9296bf418a8349a2d
>
> Note those are in separate testr jobs which we migrated to the full job a bit later in that cycle. The full job back then ran using nose and ran things serially. But back then we didn't actually have any tests tagged as slow.
> It was more of a future-proofing thing, because we were planning to add a bunch of really slow heat tests we didn't want to run on every commit to each project. The slow tags were first added for heat tests, which came later in the havana cycle.

>> Sometime in the last year or so, the full job was changed to run the scenario tests in serial and exclude the slow tests altogether. So the API tests run concurrently first, and then the scenario tests run in serial. During that change, some other tests were identified as 'slow' and marked as such, meaning they don't get run in the normal tempest-full job.

> It was changed in:
>
> https://github.com/openstack/tempest/commit/49505df20f3dc578506e479c2afa4a4f02e464bf

>> There are some valuable scenario tests marked as slow, however, like the only encrypted volume testing we have in tempest is marked slow, so it doesn't get run on every change for at least nova.
>>
>> There is only one job that can be run against nova changes which runs the slow tests, but it's in the experimental queue so people forget to run it.
>>
>> As a test, I've proposed a nova-slow job [1] which only runs the slow tests and only the compute API and scenario tests. Since there are currently no compute API tests marked as slow, it's really just running slow scenario tests. Results show it runs 37 tests in about 37 minutes [2]. The overall job runtime was 1 hour and 9 minutes, which is on average less than the tempest-full job. The nova-slow job is also running scenarios that nova patches don't actually care about, like the neutron IPv6 scenario tests.
>>
>> My question is, should we make this a generic tempest-slow job which can be run either in the integrated-gate or at least in nova/neutron/cinder consistently (I'm not sure if there are slow tests for just keystone or glance)? I don't know if the other projects already have something like this that they gate on. If so, a nova-specific job for nova changes is fine for me.

> So there used to be an experimental queue tempest-all job which ran everything in tempest, including the slow tests. I can't find it in the .zuul.yaml in the tempest repo, so my assumption is that it got dropped during the v3 migration.

It is there, with the name "legacy-periodic-tempest-dsvm-all-master" [3]. This runs as experimental and periodic for Tempest. It is not yet migrated; I plan to migrate that into the tempest repo.

> I'm fine with adding a general-purpose job for just running the slow tests to the integrated gate if we think there is enough value from that. It's mostly just a question of weighing the potential value from the increased coverage vs the increased resource consumption for adding yet another job to the integrated gate. Personally, I'm fine with that tradeoff.
>
> -Matt Treinish
>
>> [1] https://review.openstack.org/#/c/567697/
>> [2] http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138

..3 http://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/zuul.d/zuul-legacy-jobs.yaml#n1579

-gmann

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From fungi at yuggoth.org  Sun May 13 12:34:03 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 13 May 2018 12:34:03 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: 
Message-ID: <20180513123403.pf4s56xs66hkzoqc@yuggoth.org>

On 2018-05-12 20:44:04 -0700 (-0700), Emilien Macchi wrote:
[...]
> Why do we have this situation every time the OS is upgraded to a major version? Can't we test the image before actually using it? We could have experimental jobs testing the latest image and pin gate images to a specific one.
>
> Like we could configure infra to deploy centos 7.4 in our gate and 7.5 in experimental, so we can take our time to fix eventual problems and make the switch when we're ready, instead of dealing with fires (that usually come all together).
>
> It would be great to make a retrospective on this thing between tripleo ci & infra folks, and see how we can improve things.

In the past we've trusted statements from Red Hat that you should be able to upgrade to newer point releases without experiencing backward-incompatible breakage. Right now all our related tooling is based on the assumption we made in governance that we can just treat, e.g., RHEL/CentOS 7 as a long-term stable release distribution similar to an Ubuntu LTS and not have to worry about tracking individual point releases. If this is not actually the case any longer, we should likely reevaluate our support claims.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From whayutin at redhat.com  Sun May 13 14:25:25 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Sun, 13 May 2018 08:25:25 -0600
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: 
Message-ID: 

On Sat, May 12, 2018 at 11:45 PM Emilien Macchi wrote:
> On Sat, May 12, 2018 at 9:10 AM, Wesley Hayutin wrote:
>>
>> 2. Shortly after #1 was resolved, CentOS released 7.5, which comes directly into the upstream repos untested and ungated. Additionally, the associated qcow2 image and container-base images were not updated at the same time as the yum repos. https://bugs.launchpad.net/tripleo/+bug/1770355
>
> Why do we have this situation every time the OS is upgraded to a major version? Can't we test the image before actually using it? We could have experimental jobs testing the latest image and pin gate images to a specific one.
>
> Like we could configure infra to deploy centos 7.4 in our gate and 7.5 in experimental, so we can take our time to fix eventual problems and make the switch when we're ready, instead of dealing with fires (that usually come all together).
>
> It would be great to make a retrospective on this thing between tripleo ci & infra folks, and see how we can improve things.

I agree. We need to, in coordination with the infra team, be able to pin/lock content for production check and gate jobs while also having the ability to stage new content, e.g. centos 7.5, with experimental or periodic jobs.

In this particular case the ci team did check the tripleo deployment w/ the centos 7.5 updates; however, we did not stage or test what impact the centos minor update would have on the upstream job workflow. The key issue is that the base centos image used upstream cannot be pinned by the ci team. If, say, we could pin that image, the ci team could pin the centos repos used in ci and run staging jobs on the latest centos content.

I'm glad that you also see the need for some amount of coordination here; I've been in contact with a few folks to initiate the conversation.

On an unrelated note, Sagi and I just fixed the network latency issue on our promotion server; it was related to DNS. Automatic promotions should be back online.
Thanks all.

> --
> Emilien Macchi
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Sun May 13 15:24:35 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sun, 13 May 2018 15:24:35 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: 
Message-ID: <20180513152435.x2iguepehk6fblbr@yuggoth.org>

On 2018-05-13 08:25:25 -0600 (-0600), Wesley Hayutin wrote:
[...]
> We need to, in coordination with the infra team, be able to pin/lock content for production check and gate jobs while also having the ability to stage new content, e.g. centos 7.5, with experimental or periodic jobs.
[...]

It looks like adjustments would be needed to DIB's centos-minimal element if we want to be able to pin it to specific minor releases. However, having to rotate out images in the fashion described would be a fair amount of manual effort and seems like it would violate our support expectations in governance if we end up pinning to older minor versions (for major LTS versions on the other hand, we expect to undergo this level of coordination, but they come at a much slower pace with a lot more advance warning). If we need to add controlled roll-out of CentOS minor version updates, this is really no better than Fedora from the Infra team's perspective, and we've already said we can't make stable branch testing guarantees for Fedora due to the complexity involved in using different releases for each branch and the need to support our stable branches longer than the distros are supporting the releases on which we're testing.

For example, how long would the distro maintainers have committed to supporting RHEL 7.4 after 7.5 was released? Longer than we're committing to extended maintenance on our stable/queens branches? Or would you expect projects to still continue to backport support for these minor platform bumps to all their stable branches too? And what sort of grace period should we give them before we take away the old versions? Also, how many minor versions of CentOS should we expect to end up maintaining in parallel?
(Remember, every additional image means that much extra time to build and upload to all our providers, as well as that much more storage on our builders and in our Glance quotas.)
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From prometheanfire at gentoo.org  Sun May 13 17:22:06 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Sun, 13 May 2018 12:22:06 -0500
Subject: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate
Message-ID: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>

This is a reminder to the projects called out that they are using an old, unmaintained and probably insecure library (it's been dead since 2014). Please migrate off to use the cryptography library. We'd like to drop pycrypto from requirements for rocky.

See also the bug, which has most of you cc'd already: https://bugs.launchpad.net/openstack-requirements/+bug/1749574

+-----------------+-----------------------------------------------------------------+------+-------------------------------+
| Repository      | Filename                                                        | Line | Text                          |
+-----------------+-----------------------------------------------------------------+------+-------------------------------+
| barbican        | requirements.txt                                                | 25   | pycrypto>=2.6 # Public Domain |
| daisycloud-core | code/daisy/requirements.txt                                     | 17   | pycrypto>=2.6 # Public Domain |
| freezer         | requirements.txt                                                | 21   | pycrypto>=2.6 # Public Domain |
| fuel-web        | nailgun/requirements.txt                                        | 24   | pycrypto>=2.6.1               |
| heat-cfnclient  | requirements.txt                                                | 2    | PyCrypto>=2.1.0               |
| pyghmi          | requirements.txt                                                | 1    | pycrypto>=2.6                 |
| rpm-packaging   | requirements.txt                                                | 189  | pycrypto>=2.6 # Public Domain |
| solum           | requirements.txt                                                | 24   | pycrypto>=2.6 # Public Domain |
| tatu            | requirements.txt                                                | 7    | pycrypto>=2.6.1               |
| tatu            | test-requirements.txt                                           | 7    | pycrypto>=2.6.1               |
| trove           | integration/scripts/files/requirements/fedora-requirements.txt  | 30   | pycrypto>=2.6 # Public Domain |
| trove           | integration/scripts/files/requirements/ubuntu-requirements.txt  | 29   | pycrypto>=2.6 # Public Domain |
| trove           | requirements.txt                                                | 47   | pycrypto>=2.6 # Public Domain |
+-----------------+-----------------------------------------------------------------+------+-------------------------------+
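For projects starting the migration, the change is usually mechanical. A hedged sketch of the before/after shape (this is not taken from any project's actual code, and key/nonce handling is deliberately simplified):

    import os

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    nonce = os.urandom(16)

    # Old (pycrypto, unmaintained since 2014):
    #   from Crypto.Cipher import AES
    #   from Crypto.Util import Counter
    #   cipher = AES.new(key, AES.MODE_CTR, counter=Counter.new(128))
    #   ciphertext = cipher.encrypt(b'secret')

    # New (cryptography):
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(b'secret') + encryptor.finalize()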
--
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From zigo at debian.org  Sun May 13 18:23:15 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 13 May 2018 20:23:15 +0200
Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
In-Reply-To: 
References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org>
Message-ID: <630a79f3-32fd-a8a0-fc75-e61ef3dd752f@debian.org>

On 05/11/2018 05:14 PM, Akihiro Motoki wrote:
> Hi zigo and horizon plugin maintainers,
>
> Horizon itself already supports Django 2.0, and the horizon unit tests cover Django 2.0 with Python 3.5.
>
> A question to all is whether we change the upper bound of Django from <2.0 to <2.1. My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> (Note that Django 1.11 will continue to be used for the python 2.7 environment.)

All this is nice, thanks for working on Django 2.x. But Debian Buster will be released with Django 1.11 and Python 3.6. So what I need, as far as Debian is concerned, is:

- Python 3.6 & Django 1.11 for Rocky (that's for Debian Buster).
- Python 3.6, probably even 3.7, and Django 2.0 for Stein (that's for after Buster is released).

Cheers,

Thomas Goirand (zigo)

From myoung at redhat.com  Sun May 13 20:09:46 2018
From: myoung at redhat.com (Matt Young)
Date: Sun, 13 May 2018 16:09:46 -0400
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: 
Message-ID: 

Re: resolving the network latency issue on the promotion server in the tripleo-infra tenant, that's great news!

Re: a retrospective on this class of issue, I'll reach out directly early this week to get something on the calendar for our two teams. We clearly need to brainstorm/hash out together how we can reduce the turbulence moving forward.

In addition, as a result of working these issues over the past few days, we've identified a few pieces of low-hanging (tooling) fruit that are ripe for improvements that will speed diagnosis/debug in the future. We'll capture these as RFEs and get them into our backlog.

Matt

On Sun, May 13, 2018 at 10:25 AM, Wesley Hayutin wrote:
> On Sat, May 12, 2018 at 11:45 PM Emilien Macchi wrote:
>> On Sat, May 12, 2018 at 9:10 AM, Wesley Hayutin wrote:
>>>
>>> 2. Shortly after #1 was resolved, CentOS released 7.5, which comes directly into the upstream repos untested and ungated. Additionally, the associated qcow2 image and container-base images were not updated at the same time as the yum repos. https://bugs.launchpad.net/tripleo/+bug/1770355
>>
>> Why do we have this situation every time the OS is upgraded to a major version? Can't we test the image before actually using it? We could have experimental jobs testing the latest image and pin gate images to a specific one.
>>
>> Like we could configure infra to deploy centos 7.4 in our gate and 7.5 in experimental, so we can take our time to fix eventual problems and make the switch when we're ready, instead of dealing with fires (that usually come all together).
>>
>> It would be great to make a retrospective on this thing between tripleo ci & infra folks, and see how we can improve things.
>
> I agree. We need to, in coordination with the infra team, be able to pin/lock content for production check and gate jobs while also having the ability to stage new content, e.g. centos 7.5, with experimental or periodic jobs.
> In this particular case the ci team did check the tripleo deployment w/ the centos 7.5 updates; however, we did not stage or test what impact the centos minor update would have on the upstream job workflow.
> The key issue is that the base centos image used upstream cannot be pinned by the ci team. If, say, we could pin that image, the ci team could pin the centos repos used in ci and run staging jobs on the latest centos content.
> > >> -- >> Emilien Macchi >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon May 14 02:06:28 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 14 May 2018 11:06:28 +0900 Subject: [openstack-dev] Should we add a tempest-slow job? In-Reply-To: References: <9b338d82-bbcf-f6c0-9ba0-9a402838d958@gmail.com> Message-ID: On Sun, May 13, 2018 at 1:20 PM, Ghanshyam Mann wrote: > On Fri, May 11, 2018 at 10:45 PM, Matt Riedemann wrote: >> The tempest-full job used to run API and scenario tests concurrently, and if >> you go back far enough I think it also ran slow tests. >> >> Sometime in the last year or so, the full job was changed to run the >> scenario tests in serial and exclude the slow tests altogether. So the API >> tests run concurrently first, and then the scenario tests run in serial. >> During that change, some other tests were identified as 'slow' and marked as >> such, meaning they don't get run in the normal tempest-full job. >> >> There are some valuable scenario tests marked as slow, however, like the >> only encrypted volume testing we have in tempest is marked slow so it >> doesn't get run on every change for at least nova. > > Yes, basically slow tests were selected based on > https://ethercalc.openstack.org/nu56u2wrfb2b and there were frequent > gate failure for heavy tests mainly from ssh checks so we tried to > mark more tests as slow. > I agree that some of them are not really slow at least in today situation. > >> >> There is only one job that can be run against nova changes which runs the >> slow tests but it's in the experimental queue so people forget to run it. > > Tempest job "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" > run those slow tests including migration and LVM multibackend tests. > This job runs on tempest check pipeline and experimental (as you > mentioned) on nova and cinder [3]. We marked this as n-v to check its > stability and now it is good to go as voting on tempest. > >> >> As a test, I've proposed a nova-slow job [1] which only runs the slow tests >> and only the compute API and scenario tests. Since there currently no >> compute API tests marked as slow, it's really just running slow scenario >> tests. Results show it runs 37 tests in about 37 minutes [2]. The overall >> job runtime was 1 hour and 9 minutes, which is on average less than the >> tempest-full job. The nova-slow job is also running scenarios that nova >> patches don't actually care about, like the neutron IPv6 scenario tests. >> >> My question is, should we make this a generic tempest-slow job which can be >> run either in the integrated-gate or at least in nova/neutron/cinder >> consistently (I'm not sure if there are slow tests for just keystone or >> glance)? I don't know if the other projects already have something like this >> that they gate on. If so, a nova-specific job for nova changes is fine for >> me. > > +1 on idea. As of now slow marked tests are from nova, cinder and > neutron scenario tests and 2 API swift tests only [4]. I agree that > making a generic job in tempest is better for maintainability. 
> We can use the existing job for that with the modifications below:
> - We can migrate the "legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend" job to zuulv3 in the tempest repo.
> - We can see if we can move the migration tests out of it and use the "nova-live-migration" job (in the tempest check pipeline), which has a much better live migration env setup and is controlled by nova.
> - Then it can be named something like "tempest-scenario-multinode-lvm-multibackend".
> - Run this job in the nova, cinder and neutron check pipelines instead of experimental.

Like this: https://review.openstack.org/#/q/status:open+project:openstack/tempest+branch:master+topic:scenario-tests-job

That makes the scenario job generic, running all scenario tests, including the slow tests, with concurrency 2. I made a few cleanups and moved the live migration tests out of it; those are run by the 'nova-live-migration' job. The last patch makes this job voting on the tempest side. If it looks good, we can use this to run as voting in the project-side pipelines.

-gmann

> Another update on the slow tests is that we are trying out the possibility of taking the slow tests back into tempest-full with the new job "tempest-full-parallel" [5]. Currently this job is n-v, and if everything works fine in this new job then we can make the tempest-full job run the slow tests as it used to previously.
>
>> [1] https://review.openstack.org/#/c/567697/
>> [2] http://logs.openstack.org/97/567697/1/check/nova-slow/bedfafb/job-output.txt.gz#_2018-05-10_23_46_47_588138
>
> ..3 http://codesearch.openstack.org/?q=legacy-tempest-dsvm-neutron-scenario-multinode-lvm-multibackend&i=nope&files=&repos=
> ..4 https://github.com/openstack/tempest/search?utf8=%E2%9C%93&q=%22type%3D%27slow%27%22&type=
> ..5 https://github.com/openstack/tempest/blob/9c628189e798f46de8c4b9484237f4d6dc6ade7e/.zuul.yaml#L48
>
> -gmann
>
>> --
>> Thanks,
>> Matt
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From gmann at ghanshyammann.com  Mon May 14 02:12:09 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 14 May 2018 11:12:09 +0900
Subject: [openstack-dev] [tempest] Proposing Felipe Monteiro for Tempest core
In-Reply-To: 
References: 
Message-ID: 

Thanks all for voting. We have all votes in favor of Felipe, enough to add him as core. I will add him to the core list on gerrit.

Welcome to the team, Felipe!!

-gmann

On Sat, Apr 28, 2018 at 7:27 PM, Ghanshyam Mann wrote:
> Hi Tempest Team,
>
> I would like to propose Felipe Monteiro (irc: felipemonteiro) to Tempest core.
>
> Felipe has been an active contributor to Tempest since the Pike cycle. He has been doing a lot of reviews and commits since then, filling the gaps on the service client side and their testing, and in a lot of other areas. He has demonstrated good quality feedback in his reviews.
>
> He has a good understanding of the Tempest source code and the project's missions & goals. IMO his efforts are highly valuable, and it will be great to have him on the team.
>
> As per usual practice, please vote +1 or -1 to the nomination. I will keep this nomination open for a week or until everyone has voted.
>
> Felipe's reviews and commits:
> https://review.openstack.org/#/q/reviewer:felipe.monteiro at att.com+project:openstack/tempest
> https://review.openstack.org/#/q/owner:felipe.monteiro at att.com+project:openstack/tempest
>
> -gmann

From whayutin at redhat.com  Mon May 14 02:44:25 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Sun, 13 May 2018 20:44:25 -0600
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
Message-ID: 

On Sun, May 13, 2018 at 11:25 AM Jeremy Stanley wrote:
> On 2018-05-13 08:25:25 -0600 (-0600), Wesley Hayutin wrote:
> [...]
> > We need to, in coordination with the infra team, be able to pin/lock content for production check and gate jobs while also having the ability to stage new content, e.g. centos 7.5, with experimental or periodic jobs.
> [...]
>
> It looks like adjustments would be needed to DIB's centos-minimal element if we want to be able to pin it to specific minor releases. However, having to rotate out images in the fashion described would be a fair amount of manual effort and seems like it would violate our support expectations in governance if we end up pinning to older minor versions (for major LTS versions on the other hand, we expect to undergo this level of coordination, but they come at a much slower pace with a lot more advance warning). If we need to add controlled roll-out of CentOS minor version updates, this is really no better than Fedora from the Infra team's perspective, and we've already said we can't make stable branch testing guarantees for Fedora due to the complexity involved in using different releases for each branch and the need to support our stable branches longer than the distros are supporting the releases on which we're testing.
>
> For example, how long would the distro maintainers have committed to supporting RHEL 7.4 after 7.5 was released? Longer than we're committing to extended maintenance on our stable/queens branches? Or would you expect projects to still continue to backport support for these minor platform bumps to all their stable branches too? And what sort of grace period should we give them before we take away the old versions? Also, how many minor versions of CentOS should we expect to end up maintaining in parallel? (Remember, every additional image means that much extra time to build and upload to all our providers, as well as that much more storage on our builders and in our Glance quotas.)
> --
> Jeremy Stanley

I think you may be describing a level of support that is far greater than what I was thinking. I also don't want to tax the infra team w/ n+ versions of the baseos to support.

I do think it would be helpful to, say, have a one-week change window where folks are given the opportunity to preflight check a new image and the potential impact the updated image may have on the job workflow. If I could update or create a non-voting job w/ the new image, that would provide two things.

1. The first is the heads-up: this new minor version of centos is coming into the system and you have $x days to deal with it.

2. The ability to build a few non-voting jobs w/ the new image to see what kind of impact it has on the workflow and deployments.
In this case the updated CentOS 7.5 image worked fine w/ TripleO; however, it did cause our gates to go red because:

a. when we update containers w/ zuul dependencies, all the base-os updates were pulled in and jobs timed out.
b. a kernel bug workaround with virt-customize failed to work due to the kernel packages changing (3rd party job).
c. the containers we use were not yet at CentOS 7.5 but the bm image was, causing issues w/ pacemaker.
d. there may be a few more that I am forgetting, but hopefully the point is made.

We can fix a lot of these issues, and I'm not blaming anyone: if we (tripleo) had thought of all the corner cases in our workflow, we would have been able to avoid some of them. However, it does seem like we get hit by $something every time we update a minor version of the baseos. My preference would be to have a heads-up and work through the issues rather than to go immediately red and be unable to merge patches.

I don't know if other teams get impacted in similar ways, and I understand this is a big ship and updating CentOS may work just fine for everyone else.

Thanks all for your time and effort!

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sangho at opennetworking.org  Mon May 14 03:08:39 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Mon, 14 May 2018 12:08:39 +0900
Subject: [openstack-dev] [neutron][ml2 plugin] unit test errors
In-Reply-To: 
References: <08D21635-A69C-4D77-811E-4F67ED4C61A3@opennetworking.org> <5D884907-7422-4A8F-AA94-DA1BE7E037A9@linux.vnet.ibm.com>
Message-ID: <687F849B-298A-47F9-836D-8BD2777BC335@opennetworking.org>

Andreas and Neil,

Thank you so much for your help. I was able to fix the issues thanks to your help.

Sangho

Sent from my iPhone

On May 11, 2018, at 7:19 PM, Neil Jerram wrote:
>> On Fri, May 11, 2018 at 10:09 AM Andreas Scheuring wrote:
>> So what you need to do first is to make a patch for networking-onos that does ONLY the following:
>>
>> replace all occurrences of
>>
>> * neutron.callbacks by neutron_lib.callbacks
>> * neutron.plugins.ml2.driver_api by neutron_lib.plugins.ml2.api
>
> FYI here's what networking-calico has for the second of these points:
>
>     try:
>         from neutron_lib.plugins.ml2 import api
>     except ImportError:
>         # Neutron code prior to a2c36d7e (10th November 2017).
>         from neutron.plugins.ml2 import driver_api as api
>
> (http://git.openstack.org/cgit/openstack/networking-calico/tree/networking_calico/plugins/ml2/drivers/calico/mech_calico.py#n49)
>
> However, we do it like this because we want the master networking-calico code to work with many past Neutron releases, and I understand that that is not a common approach; so for networking-onos you may only want the "from neutron_lib.plugins.ml2 import api" line.
>
> Regards - Neil
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hejianle at unitedstack.com  Mon May 14 03:23:51 2018
From: hejianle at unitedstack.com (He Jianle)
Date: Mon, 14 May 2018 11:23:51 +0800
Subject: [openstack-dev] [nova] Cannot live migrattion, because error:libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local
Message-ID: 

Hi, all

When I did live migration, I met the following error:

  result = proxy_call(self._autowrap, f, *args, **kwargs)
May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
May 14 10:33:11 nova-compute[981335]: rv = execute(f, *args, **kwargs)
May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
May 14 10:33:11 nova-compute[981335]: six.reraise(c, e, tb)
May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
May 14 10:33:11 nova-compute[981335]: rv = meth(*args, **kwargs)
May 14 10:33:11 nova-compute[981335]: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
May 14 10:33:11 nova-compute[981335]: if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
May 14 10:33:11 nova-compute[981335]: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local

Is there anyone who has a solution for this problem?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Mon May 14 03:29:45 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 14 May 2018 03:29:45 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
Message-ID: <20180514032945.f4hpxpcoyhrylius@yuggoth.org>

On 2018-05-13 20:44:25 -0600 (-0600), Wesley Hayutin wrote:
[...]
> I do think it would be helpful to, say, have a one-week change window where folks are given the opportunity to preflight check a new image and the potential impact the updated image may have on the job workflow. If I could update or create a non-voting job w/ the new image, that would provide two things.
>
> 1. The first is the heads-up: this new minor version of centos is coming into the system and you have $x days to deal with it.
>
> 2. The ability to build a few non-voting jobs w/ the new image to see what kind of impact it has on the workflow and deployments.
[...]

While I can see where you're coming from, right now even the Infra team doesn't know immediately when a new CentOS minor release starts to be used. The packages show up in the mirrors automatically and images begin to be built with them right away. There isn't a conscious "switch" which is thrown by anyone. This is essentially the same way we treat Ubuntu LTS point releases as well. If this is _not_ the way RHEL/CentOS are intended to be consumed (i.e. just upgrade to and run the latest packages available for a given major release series) then we should perhaps take a step back and reevaluate this model. For now we have some fairly deep-driven assumptions in that regard which are reflected in the Linux distributions support policy of our project testing interface as documented in OpenStack governance.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From tdecacqu at redhat.com  Mon May 14 03:50:01 2018
From: tdecacqu at redhat.com (Tristan Cacqueray)
Date: Mon, 14 May 2018 03:50:01 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To: 
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
Message-ID: <1526269464.rq3wf8tgg6.tristanC@fedora>

On May 14, 2018 2:44 am, Wesley Hayutin wrote:
[snip]
> I do think it would be helpful to, say, have a one-week change window where folks are given the opportunity to preflight check a new image and the potential impact the updated image may have on the job workflow.
[snip]

How about adding a periodic job that sets up centos-release-cr in a pre task? This should highlight issues with upcoming updates:
https://wiki.centos.org/AdditionalResources/Repositories/CR

-Tristan
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL: 

From hongbin034 at gmail.com  Mon May 14 04:30:14 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Mon, 14 May 2018 00:30:14 -0400
Subject: [openstack-dev] [Zun] Add Deepak Mourya to the core team
Message-ID: 

Hi all,

This is an announcement of the following change on the Zun core reviewers team:

+ Deepak Mourya (mourya007)

Deepak has been actively involved in Zun for several months. He has submitted several code patches to Zun, all of which are useful features or bug fixes. In particular, I would like to highlight that he has implemented the availability zone API, which is a significant contribution to the Zun feature set. Based on his significant contribution, I would like to propose him to become a core reviewer of Zun.

This proposal has been voted on within the existing core team and is unanimously approved. Welcome to the core team, Deepak.

Best regards,
Hongbin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kevinzs2048 at gmail.com  Mon May 14 06:36:54 2018
From: kevinzs2048 at gmail.com (Shuai Zhao)
Date: Mon, 14 May 2018 14:36:54 +0800
Subject: [openstack-dev] [Zun] Add Deepak Mourya to the core team
In-Reply-To: 
References: 
Message-ID: 

+1, welcome mourya007 !

On Mon, May 14, 2018 at 12:30 PM, Hongbin Lu wrote:
> Hi all,
>
> This is an announcement of the following change on the Zun core reviewers team:
>
> + Deepak Mourya (mourya007)
>
> Deepak has been actively involved in Zun for several months. He has submitted several code patches to Zun, all of which are useful features or bug fixes. In particular, I would like to highlight that he has implemented the availability zone API, which is a significant contribution to the Zun feature set. Based on his significant contribution, I would like to propose him to become a core reviewer of Zun.
>
> This proposal has been voted on within the existing core team and is unanimously approved. Welcome to the core team, Deepak.
>
> Best regards,
> Hongbin
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com  Mon May 14 08:26:24 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Mon, 14 May 2018 16:26:24 +0800
Subject: [openstack-dev] [cyborg]Nominating Sundar as new core reviewer (reminder)
Message-ID: 

Hi team,

Since the meetbot did not function properly after our long-running review party, I would like to send out this email for archiving purposes regarding Sundar's nomination to our core review team, which was discussed Wed last week.

As already stated during the team meeting last week, Sundar has been a tremendous help on two critical specs in Rocky and has conducted great inter-project discussions. He has been very active in team meetings despite the time difference (7am on the west coast). He has also contributed a lot to the k8s cyborg integration design.

It would be great to have Sundar on the core team to increase our bandwidth. Please provide any feedback about this nomination before Wed this week :)

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thierry at openstack.org  Mon May 14 09:34:17 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 14 May 2018 11:34:17 +0200
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <0b605e9d-1e7a-450e-e0e9-e1cba90a80ed@gmail.com> <31b057e5-cf5b-46e9-342a-31826b19548d@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0BEEC5@EX10MBOX03.pnnl.gov>
Message-ID: <738d0574-2ad7-b124-e5cd-0f0ec48f69d5@openstack.org>

Fox, Kevin M wrote:
> [...]
> Part of the disconnect to me has been that these questions have been left up to the projects by and large. But users don't use the projects. Users use OpenStack. Or, moving forward, they at least use a Constellation. But a Constellation is still just a documentation construct, not really a first-class entity.
>
> Currently the isolation between the Projects and the thing that the users use, the Constellation, allows user needs to easily slip through the cracks. Cause "Project X: we agree that is a problem, but it's project Y's problem. Project Y: we agree that is a problem, but it's project X's problem." No, seriously, it's OpenStack's problem. Most of the major issues I've hit in my many years of using OpenStack were in that category. And there wasn't a good forum for addressing them.
>
> A related effect of the isolation is also that the projects don't work on the commons nor look around too much at what others are doing, either within OpenStack or outside. They solve problems at the project level and say, look, I've solved it, but don't look at what happens when all the projects do that independently and push more work to the users. The end result of this lack of leadership is more work for the users compared to competitors.
> [...]

+1
But at our current stage (less resources, more users) I agree that that structure is no longer optimal. I think we need to start thinking about ways to de-emphasize project teams (organizing work around code boundaries) and organize work around goals instead (across code boundaries). A bit like work in Kubernetes is tracked at SIG level, beyond code ownership. It's not an easy change, with project teams being so integral to our culture, but it is something we should start looking into. -- Thierry Carrez (ttx) From balazs.gibizer at ericsson.com Mon May 14 09:49:56 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 14 May 2018 11:49:56 +0200 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: References: <1525772990.5489.1@smtp.office365.com> <=?utf-8?Q?=22Bal?= =?utf-8?Q?=C3=A1zs?= Gibizer"'s message of "Tue, 8 May 2018 11:49:50 +0200"> Message-ID: <1526291396.23745.0@smtp.office365.com> On Thu, May 10, 2018 at 8:48 PM, Dan Smith wrote: >> The oslo UUIDField emits a warning if the string used as a field >> value >> does not pass the validation of the uuid.UUID(str(value)) call >> [3]. All the offending places are fixed in nova except the >> nova-manage >> cell_v2 map_instances call [1][2]. That call uses markers in the DB >> that are not valid UUIDs. > > No, that call uses markers in the DB that don't fit the canonical > string > representation of a UUID that the oslo library is looking for. There > are > many ways to serialize a UUID: > > https://en.wikipedia.org/wiki/Universally_unique_identifier#Format > > The 8-4-4-4-12 format is one of them (and the most popular). Changing > the dashes to spaces does not make it not a UUID, it makes it not the > same _string_ and it's done (for better or worse) in the > aforementioned > code to skirt the database's UUID-ignorant _string_ uniqueness > constraint. You are right, this is oslo specific. I think this weakens the severity of the warning in this particular case. > >> If we could fix this last offender then we could merge the patch [4] >> that changes the this warning to an exception in the nova tests to >> avoid such future rule violations. >> >> However I'm not sure it is easy to fix. Replacing >> 'INSTANCE_MIGRATION_MARKER' at [1] to >> '00000000-0000-0000-0000-00000000' might work > > The project_id field on the object is not a UUIDField, nor is it 36 > characters in the database schema. It can't be because project ids are > not guaranteed to be UUIDs. Correct. My bad. Then this does not cause any UUID warning. > >> but I don't know what to do with instance_uuid.replace(' ', '-') [2] >> to make it a valid uuid. Also I think that if there is an unfinished >> mapping in the deployment and then the marker is changed in the code >> that leads to inconsistencies. > > IMHO, it would be bad to do anything that breaks people in the middle > of > a mapping procedure. While I understand the desire to have fewer > spurious warnings in the test runs, I feel like doing anything to > impact > the UX or performance of runtime code to make the unit test output > cleaner is a bad idea. Thanks for confirming my original bad feelings about these kind of solutions. > >> I'm open to any suggestions. > > We already store values in this field that are not 8-4-4-4-12, and the > oslo field warning is just a warning. 
If people feel like we need to > do > something, I propose we just do this: > > https://review.openstack.org/#/c/567669/ > > It is one of those "we normally wouldn't do this with object schemas, > but we know this is okay" sort of situations. > > > Personally, I'd just make the offending tests shut up about the > warning > and move on, but I'm also okay with the above solution if people > prefer. I think that was Takashi's first suggestion as well. As in this particular case the value stored in the field is still a UUID just not in the canonical format I think it is reasonable to silence the warning for these 3 tests. Thanks, gibi From amotoki at gmail.com Mon May 14 09:52:55 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 14 May 2018 18:52:55 +0900 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: <1526061568-sup-5500@lrrr.local> References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> Message-ID: 2018年5月12日(土) 3:04 Doug Hellmann : > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: > > Hi zigo and horizon plugin maintainers, > > > > Horizon itself already supports Django 2.0 and horizon unit test covers > > Django 2.0 with Python 3.5. > > > > A question to all is whether we change the upper bound of Django from > <2.0 > > to <2.1. > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. > > (Note that Django 1.11 will continue to be used for python 2.7 > environment.) > > Do we need to cap it at all? We've been trying to express our > dependencies without caps and rely on the constraints list to > test using a common version because this offers the most flexibility as > we move to newer versions over time. > The main reason we cap django version so far is that django minor version releases contain some backward incompatible changes and also drop deprecated features. A new django minor version release like 1.11 usually breaks horizon and plugins as horizon developers are not always checking django deprecations. I have a question on uncapping the django version. How can users/operators know which versions are supported? Do they need to check upper-constraints.txt? > > There are several points we should consider: > > - If we change it in global-requirements.txt, it means Django 2.0 will be > > used for python3.5 environment. > > - Not a small number of horizon plugins still do not support Django 2.0, > so > > bumping the upper bound to <2.1 will break their py35 tests. > > - From my experience of Django 2.0 support in some plugins, the required > > changes are relatively simple like [1]. > > > > I created an etherpad page to track Django 2.0 support in horizon > plugins. > > https://etherpad.openstack.org/p/django20-support > > > > I proposed Django 2.0 support patches to several projects which I think > are > > major. > > # Do not blame me if I don't cover your project :) > > > > Thought? > > It seems like a good goal for the horizon-plugin author community > to bring those projects up to date by supporting a current version > of Django (and any other dependencies), especially as we discuss > the impending switch over to python-3-first and then python-3-only. > Yes, python 3 support is an important topic. We also need to switch the default python version in mod_wsgi in DevStack environment sooner or later. > If this is an area where teams need help, updating that etherpad > with notes and requests for assistance will help us split up the > work. 
> Each team can help testing in Django 2.0 and/or python 3 support. We need to enable corresponding server projects in development environments, but it is not easy to setup all projects by horizon team. Individual projects must be more familiar with their own projects. I sent several patches, but I actually tested them by unit tests. Thanks, Akihiro > > Doug > > > > > Thanks, > > Akihiro > > > > [1] https://review.openstack.org/#/c/566476/ > > > > 2018年5月8日(火) 17:45 Thomas Goirand : > > > > > Hi, > > > > > > It has been decided that, in Debian, we'll switch to Django 2.0 after > > > Buster will be released. Buster is to be frozen next February. This > > > means that we have roughly one more year before Django 1.x goes away. > > > > > > Hopefully, Horizon will be ready for it, right? > > > > > > Hoping this helps, > > > Cheers, > > > > > > Thomas Goirand (zigo) > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon May 14 11:21:39 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 14 May 2018 13:21:39 +0200 Subject: [openstack-dev] [nova] nova-manage cell_v2 map_instances uses invalid UUID as marker in the db In-Reply-To: <1526291396.23745.0@smtp.office365.com> References: <1525772990.5489.1@smtp.office365.com> <=?utf-8?Q?=22Bal?= =?utf-8?Q?=C3=A1zs?= Gibizer"'s message of "Tue, 8 May 2018 11:49:50 +0200"> Message-ID: <1526296899.23745.1@smtp.office365.com> On Mon, May 14, 2018 at 11:49 AM, Balázs Gibizer wrote: > > > On Thu, May 10, 2018 at 8:48 PM, Dan Smith wrote: >>> >> Personally, I'd just make the offending tests shut up about the >> warning >> and move on, but I'm also okay with the above solution if people >> prefer. > > I think that was Takashi's first suggestion as well. As in this > particular case the value stored in the field is still a UUID just > not in the canonical format I think it is reasonable to silence the > warning for these 3 tests. > I proposed a patch to suppress those warnings: https://review.openstack.org/#/c/568263 Cheers, gibi From paul.bourke at oracle.com Mon May 14 11:35:25 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Mon, 14 May 2018 12:35:25 +0100 Subject: [openstack-dev] [kolla] Building Kolla containers with 3rd party vendor drivers In-Reply-To: <59EA437A-08BA-484F-9198-377EAD3A14EE@cisco.com> References: <48550F8B-C186-4C3F-8803-C792B15BB754@cisco.com> <49543b70-78ef-2e99-9435-1be2b34e01e3@oracle.com> <59EA437A-08BA-484F-9198-377EAD3A14EE@cisco.com> Message-ID: <4250d00e-747d-b723-34e5-aa5e675b009b@oracle.com> > Operators that need one or more of these “additional drivers” will be provided > with documentation on how the code in the “additional drivers” path can be > used to build their own containers. This documentation will also detail how > to combine more than one 3rd party drivers into their own container. Yes this sounds fine. 
We already have a 'contrib' directory [0], so I think this would align with what you're suggesting. -Paul [0] https://github.com/openstack/kolla/tree/master/contrib On 11/05/18 18:02, Sandhya Dasu (sadasu) wrote: > Hi Paul, > I am happy to use the changes you proposed to > https://github.com/openstack/kolla/blob/master/kolla/common/config.py > > I was under the impression that this was disallowed for drivers that weren’t > considered “reference drivers”. If that is no longer the case, I am happy to go > this route and abandon the approach I took in my diffs in: > https://review.openstack.org/#/c/552119/. > > I agree with the reasoning that Kolla cannot possibly maintain a large > number of neutron-server containers, one per plugin. > > To support operators that want to build their own images, I was hoping that > we could come up with a mechanism by which the 3rd party driver owners > provide the code (template-override.j2 or Dockerfile.j2 as the case maybe) > to build their containers. This code can definitely live out-of-tree with the > drivers themselves. > > Optionally, we could have them reside in-tree in Kolla in a separate directory, > say “additional drivers”. Kolla will not be responsible for building a container > per driver or for building a huge (neutron-server) container containing all > interested drivers. > > Operators that need one or more of these “additional drivers” will be provided > with documentation on how the code in the “additional drivers” path can be > used to build their own containers. This documentation will also detail how > to combine more than one 3rd party drivers into their own container. > > I would like the community’s input on what approach best aligns with Kolla’s > and the larger OpenStack community’s goals. > > Thanks, > Sandhya > > On 5/11/18, 5:35 AM, "Paul Bourke" wrote: > > Hi Sandhya, > > Thanks for starting this thread. I've moved it to the mailing list so > the discussion can be available to anyone else who is interested, I hope > you don't mind. > > If your requirement is to have third party plugins (such as Cisco) that > are not available on tarballs.openstack.org, available in Kolla, then > this is already possible. > > Using the Cisco case as an example, you would simply need to submit the > following patch to > https://github.com/openstack/kolla/blob/master/kolla/common/config.py > > """ > 'neutron-server-plugin-networking-cisco': { > 'type': 'git', > 'location': ('https://github.com/openstack/networking-cisco')}, > """ > > This will then include that plugin as part of the future neutron-server > builds. > > If the requirement is to have Kolla publish a neutron-server container > with *only* the Cisco plugin, then this is where it gets a little more > tricky. Sure, we can go the route that's proposed in your patch, but we > end up then maintaining a massive number of neutron-server containers, > one per plugin. It also does not address then the issue of what people > want to do when they want a combination or mix of plugins together. > > So right now I feel Kolla takes a middle ground, where we publish a > neutron-server container with a variety of common plugins. If operators > have specific requirements, they should create their own config file and > build their own images, which we expect any serious production setup to > be doing anyway. > > -Paul > > On 10/05/18 18:12, Sandhya Dasu (sadasu) wrote: > > Yes, I think there is some misunderstanding on what I am trying to accomplish here. 
> > > > I am utilizing existing Kolla constructs to prove that they work for 3rd party out of tree vendor drivers too. > > At this point, anything that a 3rd party vendor driver does (the way they build their containers, where they publish it and how they generate config) is completely out of scope of Kolla. > > > > I want to use the spec as a place to articulate and discuss best practices and figure out what part of supporting 3rd party vendor drivers can stay within the Kolla tree and what should be out. > > I have witnessed many discussions on this topic but they only take away I get is “there are ways to do it but it can’t be part of Kolla”. > > > > Using the existing kolla constructs of template-override, plugin-archive and config-dir, let us say the 3rd party vendor builds a container. > > OpenStack TC does not want these containers to be part of tarballs.openstack.org. Kolla publishes its containers to DockerHub under the Kolla project. > > If these 3rd party vendor drivers publish to Dockerhub they will have to publish under a different project. So, an OpenStack installation that needs these drivers will have to pull images from 2 or more Dokerhub projects?! > > > > Or do you prefer if the OpenStack operator build their own images using the out-of-tree Dockerfile for that vendor? > > > > Again, should the config changes to support these drivers be part of the kolla-ansible repo or should they be out-of-tree? > > > > It is hard to have this type of discussion on IRC so I started this email thread. > > > > Thanks, > > Sandhya > > > > On 5/10/18, 5:59 AM, "Paul Bourke (pbourke) (Code Review)" wrote: > > > > Paul Bourke (pbourke) has posted comments on this change. ( https://review.openstack.org/567278 ) > > > > Change subject: Building Kolla containers with 3rd party vendor drivers > > ...................................................................... > > > > > > Patch Set 2: Code-Review-1 > > > > Hi Sandhya, after reading the spec most of my thoughts echo Eduardo's. I'm wondering if there's some misunderstanding on how the current plugin functionality works? Feels free to ping me on irc I'd be happy to discuss further - maybe there's still some element of what's there that's not working for your use case. > > > > -- > > To view, visit https://review.openstack.org/567278 > > To unsubscribe, visit https://review.openstack.org/settings > > > > Gerrit-MessageType: comment > > Gerrit-Change-Id: I681d6a7b38b6cafe7ebe88a1a1f2d53943e1aab2 > > Gerrit-PatchSet: 2 > > Gerrit-Project: openstack/kolla > > Gerrit-Branch: master > > Gerrit-Owner: Sandhya Dasu > > Gerrit-Reviewer: Duong Ha-Quang > > Gerrit-Reviewer: Eduardo Gonzalez > > Gerrit-Reviewer: Paul Bourke (pbourke) > > Gerrit-Reviewer: Zuul > > Gerrit-HasComments: No > > > > > > From rleander at redhat.com Mon May 14 12:12:32 2018 From: rleander at redhat.com (Rain Leander) Date: Mon, 14 May 2018 14:12:32 +0200 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> <5AE74E05.90405@openstack.org> <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> <5AE75A13.4030606@openstack.org> <5AE9CF63.2020503@openstack.org> Message-ID: Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still denying posts / members and we're pretty much just being spammed these days. 
Let me know either way, please. Rain On Mon, May 14, 2018 at 2:06 PM, Rain Leander wrote: > Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still > denying posts / members and we're pretty much just being spammed these > days. Let me know either way, please. > > Rain > > On Wed, May 2, 2018 at 4:46 PM, Jimmy McArthur > wrote: > >> Just wanted to follow up on this. trystack.openstack.org is now >> correctly redirecting to the same place as trystack.org. >> >> Thanks, >> Jimmy >> >> Jimmy McArthur >> April 30, 2018 at 1:01 PM >> OK - got it :) >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jeremy Stanley >> April 30, 2018 at 12:29 PM >> [...] >> >> I was thrown by the fact that DNS currently has >> trystack.openstack.org as a CNAME alias for trystack.org, but >> reviewing logs on static.openstack.org it seems it may have >> previously pointed there (was receiving traffic up until around >> 13:15 UTC today) so if you want to just glom that onto the current >> trystack.org redirect that may make the most sense and we can move >> forward tearing down the old infrastructure for it. >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 12:10 PM >> Yeah... my only concern is that if traffic is actually getting there, a >> redirect to the same place trystack.org is going might be helpful. >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jeremy Stanley >> April 30, 2018 at 12:02 PM >> On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: >> [...] >> [...] >> >> Since I don't think the trystack.o.o site ever found its way fully >> into production, it may make more sense for us to simply delete the >> records for it from DNS. Someone else probably knows more about the >> prior state of it than I though. >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 11:07 AM >> >> >> Jeremy Stanley >> April 30, 2018 at 10:12 AM >> [...] >> >> Yes, before the TryStack effort was closed down, there had been a >> plan for trystack.org to redirect to a trystack.openstack.org site >> hosted in the community infrastructure. >> >> When we talked to trystack we agreed to redirect trystack.org to >> https://openstack.org/software/start since that presents alternative >> options for people to "try openstack". 
My suggestion would be to redirect >> trystack.openstack.org to the same spot, but certainly open to other >> suggestions :) >> >> At this point I expect we >> can just rip out the section for it from >> https://git.openstack.org/cgit/openstack-infra/system-config >> /tree/modules/openstack_project/manifests/static.pp >> as DNS appears to no longer be pointed there. >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 9:34 AM >> I'm working on redirecting trystack.openstack.org to >> openstack.org/software/start. We have redirects in place for >> trystack.org, but didn't realize trystack.openstack.org as a thing as >> well. >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Paul Belanger >> April 30, 2018 at 9:23 AM >> The code is hosted by openstack-infra[1], if somebody would like to >> propose a >> patch with the new information. >> >> [1] http://git.openstack.org/cgit/openstack-infra/trystack-site >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jens Harbott >> April 30, 2018 at 4:37 AM >> >> Seems it would be great if https://trystack.openstack.org/ would be >> updated with this information, according to comments in #openstack >> users are still landing on that page and try to get a stack there in >> vain. >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy Mcarthur >> March 26, 2018 at 5:51 PM >> Hi everyone, >> >> We recently made the tough decision, in conjunction with the dedicated >> volunteers that run TryStack, to end the service as of March 29, 2018. For >> those of you that used it, thank you for being part of the TryStack >> community. >> >> The good news is that you can find more resources to try OpenStack at >> http://www.openstack.org/start, including the Passport Program >> , where you can test on any >> participating public cloud. If you are looking to test different tools or >> application stacks with OpenStack clouds, you should check out Open Lab >> . >> >> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and >> the many other volunteers who have managed this valuable service for the >> last several years! Your contribution to OpenStack was noticed and >> appreciated by many in the community. 
>>>>> >> Cheers,
>> Jimmy
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> K Rain Leander
> OpenStack Community Liaison
> Open Source and Standards Team
> https://www.rdoproject.org/
> http://community.redhat.com

--
K Rain Leander
OpenStack Community Liaison
Open Source and Standards Team
https://www.rdoproject.org/
http://community.redhat.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From chris.friesen at windriver.com Mon May 14 12:39:16 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Mon, 14 May 2018 06:39:16 -0600
Subject: [openstack-dev] [nova] Cannot live migrate, because error: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local
In-Reply-To:
References:
Message-ID: <5AF98374.5050007@windriver.com>

On 05/13/2018 09:23 PM, 何健乐 wrote:
> Hi, all
> When I did live-migration, I met the following error:
>
>     result = proxy_call(self._autowrap, f, *args, **kwargs)
>   May 14 10:33:11 nova-compute[981335]: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
>   May 14 10:33:11 nova-compute[981335]: if ret == -1: raise libvirtError('virDomainMigrateToURI3() failed', dom=self)
>   May 14 10:33:11 nova-compute[981335]: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local
>
> Is there anyone that has a solution for this problem?
> Thanks

Can you run "virsh capabilities" and provide the "cpu" section for both the
source and dest compute nodes?

Can you also provide the "cpu_mode", "cpu_model", and "cpu_model_extra_flags"
options from the "libvirt" section of /etc/nova/nova.conf on both compute
nodes?

Chris
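For readers hitting the same failure, the options Chris is asking about live
in the "[libvirt]" section of /etc/nova/nova.conf. Purely as an illustrative
sketch (the model name and flag value below are placeholders that must match
what both compute nodes actually support; this is not a confirmed fix for the
reported bug), pinning a named guest CPU model instead of relying on
host-model keeps the guest CPU definition identical and migratable across
hosts:

    [libvirt]
    # Expose a fixed, named guest CPU model on every compute node instead
    # of mirroring the local host CPU, so source and destination advertise
    # the same feature set. "Haswell-noTSX" is a placeholder; use a model
    # that every host in the migration pool supports.
    cpu_mode = custom
    cpu_model = Haswell-noTSX
    # Optionally re-add individual flags known to exist on all hosts
    # (hypothetical example value).
    cpu_model_extra_flags = pcid

The error quoted above is typical of host-model or host-passthrough
configurations, where the guest definition generated on the source carries
host features (here cmt, mbm_total and mbm_local) that the destination host
or libvirt version cannot provide.
-------------- next part --------------
An HTML attachment was scrubbed...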
URL: From doug at doughellmann.com Mon May 14 12:42:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 08:42:11 -0400 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> Message-ID: <1526301210-sup-5803@lrrr.local> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: > 2018年5月12日(土) 3:04 Doug Hellmann : > > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: > > > Hi zigo and horizon plugin maintainers, > > > > > > Horizon itself already supports Django 2.0 and horizon unit test covers > > > Django 2.0 with Python 3.5. > > > > > > A question to all is whether we change the upper bound of Django from > > <2.0 > > > to <2.1. > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. > > > (Note that Django 1.11 will continue to be used for python 2.7 > > environment.) > > > > Do we need to cap it at all? We've been trying to express our > > dependencies without caps and rely on the constraints list to > > test using a common version because this offers the most flexibility as > > we move to newer versions over time. > > > > The main reason we cap django version so far is that django minor version > releases > contain some backward incompatible changes and also drop deprecated > features. > A new django minor version release like 1.11 usually breaks horizon and > plugins > as horizon developers are not always checking django deprecations. OK. Having the cap in place makes it more complicated to test upgrading, and then upgrade. Because we no longer synchronize requirements, changing openstack/requirements does not trigger the bot to propose the same change to all of the projects using the dependency. Someone will have to do that by hand in the future, as we are doing with eventlet right now (https://review.openstack.org/#/q/topic:uncap-eventlet). Without the cap, we can test the upgrade by proposing a constraint update and running the horizon (and/or plugin) unit tests. When those tests pass, we can then step forward all at once by approving the constraint change. > > I have a question on uncapping the django version. > How can users/operators know which versions are supported? > Do they need to check upper-constraints.txt? We do tell downstream consumers that the upper-constraints.txt file is the set of things we test with, and that any other combination of packages would need to be tested on their systems separately. > > > > There are several points we should consider: > > > - If we change it in global-requirements.txt, it means Django 2.0 will be > > > used for python3.5 environment. > > > - Not a small number of horizon plugins still do not support Django 2.0, > > so > > > bumping the upper bound to <2.1 will break their py35 tests. > > > - From my experience of Django 2.0 support in some plugins, the required > > > changes are relatively simple like [1]. > > > > > > I created an etherpad page to track Django 2.0 support in horizon > > plugins. > > > https://etherpad.openstack.org/p/django20-support > > > > > > I proposed Django 2.0 support patches to several projects which I think > > are > > > major. > > > # Do not blame me if I don't cover your project :) > > > > > > Thought? 
> > > > It seems like a good goal for the horizon-plugin author community > > to bring those projects up to date by supporting a current version > > of Django (and any other dependencies), especially as we discuss > > the impending switch over to python-3-first and then python-3-only. > > > > Yes, python 3 support is an important topic. > We also need to switch the default python version in mod_wsgi in DevStack > environment sooner or later. Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor the variable that tells devstack to use Python 3? > > > If this is an area where teams need help, updating that etherpad > > with notes and requests for assistance will help us split up the > > work. > > > > Each team can help testing in Django 2.0 and/or python 3 support. > We need to enable corresponding server projects in development environments, > but it is not easy to setup all projects by horizon team. Individual > projects must be > more familiar with their own projects. > I sent several patches, but I actually tested them by unit tests. > > Thanks, > Akihiro > > > > > Doug > > > > > > > > Thanks, > > > Akihiro > > > > > > [1] https://review.openstack.org/#/c/566476/ > > > > > > 2018年5月8日(火) 17:45 Thomas Goirand : > > > > > > > Hi, > > > > > > > > It has been decided that, in Debian, we'll switch to Django 2.0 after > > > > Buster will be released. Buster is to be frozen next February. This > > > > means that we have roughly one more year before Django 1.x goes away. > > > > > > > > Hopefully, Horizon will be ready for it, right? > > > > > > > > Hoping this helps, > > > > Cheers, > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From jimmy at openstack.org Mon May 14 12:49:11 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 14 May 2018 07:49:11 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> <5AE74E05.90405@openstack.org> <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> <5AE75A13.4030606@openstack.org> <5AE9CF63.2020503@openstack.org> Message-ID: <027434AE-8FC7-4C8C-B639-23DB90C6E82E@openstack.org> Absolutely. Will take care of today. Thank you all again!!! Thanks, Jimmy McArthur 512.965.4846 > On May 14, 2018, at 7:12 AM, Rain Leander wrote: > > Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still denying posts / members and we're pretty much just being spammed these days. Let me know either way, please. > > Rain > > >> On Mon, May 14, 2018 at 2:06 PM, Rain Leander wrote: >> Thanks Jimmy! Can we go ahead and archive the facebook group? I'm still denying posts / members and we're pretty much just being spammed these days. Let me know either way, please. 
>> >> Rain >> >>> On Wed, May 2, 2018 at 4:46 PM, Jimmy McArthur wrote: >>> Just wanted to follow up on this. trystack.openstack.org is now correctly redirecting to the same place as trystack.org. >>> >>> Thanks, >>> Jimmy >>> >>>> Jimmy McArthur April 30, 2018 at 1:01 PM >>>> OK - got it :) >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> Jeremy Stanley April 30, 2018 at 12:29 PM >>>> [...] >>>> >>>> I was thrown by the fact that DNS currently has >>>> trystack.openstack.org as a CNAME alias for trystack.org, but >>>> reviewing logs on static.openstack.org it seems it may have >>>> previously pointed there (was receiving traffic up until around >>>> 13:15 UTC today) so if you want to just glom that onto the current >>>> trystack.org redirect that may make the most sense and we can move >>>> forward tearing down the old infrastructure for it. >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> Jimmy McArthur April 30, 2018 at 12:10 PM >>>> Yeah... my only concern is that if traffic is actually getting there, a redirect to the same place trystack.org is going might be helpful. >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> Jeremy Stanley April 30, 2018 at 12:02 PM >>>> On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: >>>> [...] >>>> [...] >>>> >>>> Since I don't think the trystack.o.o site ever found its way fully >>>> into production, it may make more sense for us to simply delete the >>>> records for it from DNS. Someone else probably knows more about the >>>> prior state of it than I though. >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> Jimmy McArthur April 30, 2018 at 11:07 AM >>>> >>>> >>>>> Jeremy Stanley April 30, 2018 at 10:12 AM >>>>> [...] >>>>> >>>>> Yes, before the TryStack effort was closed down, there had been a >>>>> plan for trystack.org to redirect to a trystack.openstack.org site >>>>> hosted in the community infrastructure. >>>> When we talked to trystack we agreed to redirect trystack.org to https://openstack.org/software/start since that presents alternative options for people to "try openstack". My suggestion would be to redirect trystack.openstack.org to the same spot, but certainly open to other suggestions :) >>>>> At this point I expect we >>>>> can just rip out the section for it from >>>>> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp >>>>> as DNS appears to no longer be pointed there. 
>>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> Jimmy McArthur April 30, 2018 at 9:34 AM >>>>> I'm working on redirecting trystack.openstack.org to openstack.org/software/start. We have redirects in place for trystack.org, but didn't realize trystack.openstack.org as a thing as well. >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> Paul Belanger April 30, 2018 at 9:23 AM >>>>> The code is hosted by openstack-infra[1], if somebody would like to propose a >>>>> patch with the new information. >>>>> >>>>> [1] http://git.openstack.org/cgit/openstack-infra/trystack-site >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> Jens Harbott April 30, 2018 at 4:37 AM >>>>> >>>>> Seems it would be great if https://trystack.openstack.org/ would be >>>>> updated with this information, according to comments in #openstack >>>>> users are still landing on that page and try to get a stack there in >>>>> vain. >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>>> Jimmy Mcarthur March 26, 2018 at 5:51 PM >>>>> Hi everyone, >>>>> >>>>> We recently made the tough decision, in conjunction with the dedicated volunteers that run TryStack, to end the service as of March 29, 2018. For those of you that used it, thank you for being part of the TryStack community. >>>>> >>>>> The good news is that you can find more resources to try OpenStack at http://www.openstack.org/start, including the Passport Program, where you can test on any participating public cloud. If you are looking to test different tools or application stacks with OpenStack clouds, you should check out Open Lab. >>>>> >>>>> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the many other volunteers who have managed this valuable service for the last several years! Your contribution to OpenStack was noticed and appreciated by many in the community. 
>>>>> >>>>> Cheers, >>>>> Jimmy >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> -- >> K Rain Leander >> OpenStack Community Liaison >> Open Source and Standards Team >> https://www.rdoproject.org/ >> http://community.redhat.com > > > > -- > K Rain Leander > OpenStack Community Liaison > Open Source and Standards Team > https://www.rdoproject.org/ > http://community.redhat.com > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon May 14 12:52:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 08:52:08 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1522007989-sup-4653@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local> <1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local> Message-ID: <1526302110-sup-4784@lrrr.local> Excerpts from Doug Hellmann's message of 2018-03-25 16:04:11 -0400: > Excerpts from Doug Hellmann's message of 2018-03-22 16:16:06 -0400: > > Excerpts from Doug Hellmann's message of 2018-03-21 16:02:06 -0400: > > > Excerpts from Doug Hellmann's message of 2018-03-15 07:03:11 -0400: > > > > > > > > TL;DR > > > > ----- > > > > > > > > Let's stop copying exact dependency specifications into all our > > > > projects to allow them to reflect the actual versions of things > > > > they depend on. The constraints system in pip makes this change > > > > safe. We still need to maintain some level of compatibility, so the > > > > existing requirements-check job (run for changes to requirements.txt > > > > within each repo) will change a bit rather than going away completely. > > > > We can enable unit test jobs to verify the lower constraint settings > > > > at the same time that we're doing the other work. > > > > > > The new job definition is in https://review.openstack.org/555034 and I > > > have updated the oslo.config patch I mentioned before to use the new job > > > instead of one defined in the oslo.config repo (see > > > https://review.openstack.org/550603). > > > > > > I'll wait for that job patch to be reviewed and approved before I start > > > adding the job to a bunch of other repositories. > > > > > > Doug > > > > The job definition for openstack-tox-lower-constraints [1] was approved > > today (thanks AJaegar and pabelenger). 
> > > > I have started proposing the patches to add that job to the repos listed > > in openstack/requirements/projects.txt using the topic > > "requirements-stop-syncing" [2]. I hope to have the rest of those > > proposed by the end of the day tomorrow, but since they have to run in > > batches I don't know if that will be possible. > > > > The patch to remove the update proposal job is ready for review [3]. > > > > As is the patch to allow project requirements to diverge by changing the > > rules in the requirements-check job [4]. > > > > We ran into a snag with a few of the jobs for projects that rely on > > having service projects installed. There have been a couple of threads > > about that recently, but Monty has promised to start another one to > > provide all of the necessary context so we can fix the issues and move > > ahead. > > > > Doug > > > > All of the patches to define the lower-constraints test jobs have been > proposed [1], and many have already been approved and merged (thank you > for your quick reviews). > > A few of the jobs are failing because the projects depend on installing > some other service from source. We will work out what to do with those > when we solve that problem in a more general way. > > A few of the jobs failed because the dependencies were wrong. In a few > cases I was able to figure out what was wrong, but I can use some help > from project teams more familiar with the code bases to debug the > remaining failures. > > In a few cases projects didn't have python 3 unit test jobs, so I > configured the new job to use python 2. Teams should add a step to their > python 3 migration plan to update the version of python used in the new > job, when that is possible. > > I believe we are now ready to proceed with updating the > requirements-check job to relax the rules about which changes are > allowed [2]. > > Doug > > [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open > [2] https://review.openstack.org/555402 We still have about 50 open patches related to adding the lower-constraints test job. I'll keep those open until the third milestone of the Rocky development cycle, and then abandon the rest to clear my gerrit view so it is usable again. If you want to add lower-constraints tests to your project and have an open patch in the list [1], please take it over and fix the settings then approve the patch (the fix usually involves making the values in lower-constraints.txt match the values in the various requirements.txt files). If you don't want the job, please leave a comment on the patch to tell me and I will abandon it. Doug [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing+status:open From whayutin at redhat.com Mon May 14 13:07:03 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 May 2018 07:07:03 -0600 Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: <20180514032945.f4hpxpcoyhrylius@yuggoth.org> References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <20180514032945.f4hpxpcoyhrylius@yuggoth.org> Message-ID: On Sun, May 13, 2018 at 11:30 PM Jeremy Stanley wrote: > On 2018-05-13 20:44:25 -0600 (-0600), Wesley Hayutin wrote: > [...] > > I do think it would be helpful to say have a one week change > > window where folks are given the opportunity to preflight check a > > new image and the potential impact on the job workflow the updated > > image may have. 
> > If I could update or create a non-voting job w/ the new image that
> > would provide two things.
> >
> > 1. The first is the heads-up, this new minor version of centos is
> > coming into the system and you have $x days to deal with it.
> >
> > 2. The ability to build a few non-voting jobs w/ the new image to
> > see what kind of impact it has on the workflow and deployments.
> [...]
>
> While I can see where you're coming from, right now even the Infra
> team doesn't know immediately when a new CentOS minor release starts
> to be used. The packages show up in the mirrors automatically and
> images begin to be built with them right away. There isn't a
> conscious "switch" which is thrown by anyone. This is essentially
> the same way we treat Ubuntu LTS point releases as well. If this is
> _not_ the way RHEL/CentOS are intended to be consumed (i.e. just
> upgrade to and run the latest packages available for a given major
> release series) then we should perhaps take a step back and
> reevaluate this model.

I think you may be conflating the notion that ubuntu or rhel/cent can be
updated w/o any issues to applications that run atop of the distributions
with what it means to introduce a minor update into the upstream openstack
ci workflow.

If jobs could execute w/o a timeout the tripleo jobs would not have gone
red. Since we do have constraints in the upstream like timeouts and
others, we have to prepare containers, images etc to work efficiently in
the upstream. For example, if our jobs had the time to yum update the
roughly 120 containers in play in each job the tripleo jobs would have
just worked. I am not advocating for not having timeouts or constraints
on jobs, however I am saying this is an infra issue, not a distribution
or distribution support issue.

I think this is an important point to consider and I view it as mostly
unrelated to the support claims by the distribution. Does that make sense?

Thanks

> For now we have some fairly deep-driven
> assumptions in that regard which are reflected in the Linux
> distributions support policy of our project testing interface as
> documented in OpenStack governance.
> --
> Jeremy Stanley
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From amotoki at gmail.com Mon May 14 13:30:04 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Mon, 14 May 2018 22:30:04 +0900
Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
In-Reply-To: <1526301210-sup-5803@lrrr.local>
References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local>
Message-ID:

2018年5月14日(月) 21:42 Doug Hellmann :

> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900:
> > 2018年5月12日(土) 3:04 Doug Hellmann :
> >
> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900:
> > > > Hi zigo and horizon plugin maintainers,
> > > >
> > > > Horizon itself already supports Django 2.0 and horizon unit test covers
> > > > Django 2.0 with Python 3.5.
> > > >
> > > > A question to all is whether we change the upper bound of Django from <2.0
> > > > to <2.1.
> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2.
> > > > (Note that Django 1.11 will continue to be used for python 2.7 > > > environment.) > > > > > > Do we need to cap it at all? We've been trying to express our > > > dependencies without caps and rely on the constraints list to > > > test using a common version because this offers the most flexibility as > > > we move to newer versions over time. > > > > > > > The main reason we cap django version so far is that django minor version > > releases > > contain some backward incompatible changes and also drop deprecated > > features. > > A new django minor version release like 1.11 usually breaks horizon and > > plugins > > as horizon developers are not always checking django deprecations. > > OK. Having the cap in place makes it more complicated to test > upgrading, and then upgrade. Because we no longer synchronize > requirements, changing openstack/requirements does not trigger the > bot to propose the same change to all of the projects using the > dependency. Someone will have to do that by hand in the future, as we > are doing with eventlet right now > (https://review.openstack.org/#/q/topic:uncap-eventlet). > > Without the cap, we can test the upgrade by proposing a constraint > update and running the horizon (and/or plugin) unit tests. When those > tests pass, we can then step forward all at once by approving the > constraint change. > Thanks for the detail context. Honestly I am not sure which is better to cap or uncap the django version. We can try uncapping now and see what happens in the community. cross-horizon-(py27|py35) jobs of openstack/requirements checks if horizon works with a new version. it works for horizon, but perhaps it potentially break horizon plugins as it takes time to catch up with such changes. On the other hand, a version bump in upper-constraints.txt would be a good trigger for horizon plugin maintainers to sync all requirements. In addition, requirements are not synchronized automatically, so it seems not feasible to propose requirements changes per django version change. > > > > > I have a question on uncapping the django version. > > How can users/operators know which versions are supported? > > Do they need to check upper-constraints.txt? > > We do tell downstream consumers that the upper-constraints.txt file is > the set of things we test with, and that any other combination of > packages would need to be tested on their systems separately. > > > > > > > There are several points we should consider: > > > > - If we change it in global-requirements.txt, it means Django 2.0 > will be > > > > used for python3.5 environment. > > > > - Not a small number of horizon plugins still do not support Django > 2.0, > > > so > > > > bumping the upper bound to <2.1 will break their py35 tests. > > > > - From my experience of Django 2.0 support in some plugins, the > required > > > > changes are relatively simple like [1]. > > > > > > > > I created an etherpad page to track Django 2.0 support in horizon > > > plugins. > > > > https://etherpad.openstack.org/p/django20-support > > > > > > > > I proposed Django 2.0 support patches to several projects which I > think > > > are > > > > major. > > > > # Do not blame me if I don't cover your project :) > > > > > > > > Thought? > > > > > > It seems like a good goal for the horizon-plugin author community > > > to bring those projects up to date by supporting a current version > > > of Django (and any other dependencies), especially as we discuss > > > the impending switch over to python-3-first and then python-3-only. 
> > > > > > > Yes, python 3 support is an important topic. > > We also need to switch the default python version in mod_wsgi in DevStack > > environment sooner or later. > > Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor > the variable that tells devstack to use Python 3? > Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi and libapache2-mod-wsgi-py3) and as a quick look the only difference is a module specified in LoadModule apache directive. I haven't tested it yet, but it seems worth explored. Akihiro > > > > > If this is an area where teams need help, updating that etherpad > > > with notes and requests for assistance will help us split up the > > > work. > > > > > > > Each team can help testing in Django 2.0 and/or python 3 support. > > We need to enable corresponding server projects in development > environments, > > but it is not easy to setup all projects by horizon team. Individual > > projects must be > > more familiar with their own projects. > > I sent several patches, but I actually tested them by unit tests. > > > > Thanks, > > Akihiro > > > > > > > > Doug > > > > > > > > > > > Thanks, > > > > Akihiro > > > > > > > > [1] https://review.openstack.org/#/c/566476/ > > > > > > > > 2018年5月8日(火) 17:45 Thomas Goirand : > > > > > > > > > Hi, > > > > > > > > > > It has been decided that, in Debian, we'll switch to Django 2.0 > after > > > > > Buster will be released. Buster is to be frozen next February. This > > > > > means that we have roughly one more year before Django 1.x goes > away. > > > > > > > > > > Hopefully, Horizon will be ready for it, right? > > > > > > > > > > Hoping this helps, > > > > > Cheers, > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > > > __________________________________________________________________________ > > > > > OpenStack Development Mailing List (not for usage questions) > > > > > Unsubscribe: > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Mon May 14 13:35:32 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Mon, 14 May 2018 15:35:32 +0200 Subject: [openstack-dev] [docs][openstack-ansible] In-Reply-To: References: <03758D6E-D720-4F40-8AE0-1296EB280D95@outlook.com> Message-ID: Can't. use. words. Much sadness! But happiness for you and your future, at the same time :) It was a pleasure to work on your side. 
https://media.giphy.com/media/IcGkqdUmYLFGE/giphy.gif From doug at doughellmann.com Mon May 14 14:04:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 10:04:03 -0400 Subject: [openstack-dev] [tc] Technical Committee Update, 14 May Message-ID: <1526306269-sup-420@lrrr.local> This is the weekly summary of work being done by the Technical Committee members. The full list of active items is managed in the wiki: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at:https://storyboard.openstack.org/#!/project/923 == Recent Activity == Project updates: https://review.openstack.org/#/c/565877/ : governance change adding constellations repo to doc team https://review.openstack.org/#/c/565814/ : add goal-tools repo to tc list https://review.openstack.org/#/c/564830/ : add ansible-role-tripleo-keystone to governance https://review.openstack.org/#/c/565385/ : retire kolla-kubernetes https://review.openstack.org/#/c/566541/ : add os_blazar to openstack-ansible https://review.openstack.org/#/c/565538/ : remove bandit from the governance repository New topics: Zane has proposed an update to the requirements for affiliation diversity for new projects [0]. [0] https://review.openstack.org/#/c/567944/ == Ongoing Discussions == The patch to update the Python 3.5 goal for Kolla [1] adds a new deliverable to the old goal, and it isn't clear whether we want to do that. TC members, please comment in the openstack-dev thread [2]. [1] https://review.openstack.org/557863 [2] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130236.html The proposal to allow teams to drop python 2 support has not had as much discussion as I expected [3][4]. There will be a forum session covering this topic in Vancouver [5]. [3] https://review.openstack.org/561922 [4] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129866.html [5] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21741/python-2-deprecation-timeline The new repository for documenting constellations is set up and ready to receive proposals. The Adjutant project application [6] is still under review, and the only votes registered are opposed. Last week I mentioned the TC retrospective session, but left out mention of the session cdent is moderating on project "boundaries" [7] and the one mugsie is moderating on Adjutant itself [8]. Sorry for the oversight. [6] https://review.openstack.org/553643 [7] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21739/official-projects-and-the-boundary-of-what-is-openstack [8] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21752/adjutant-official-project-status == TC member actions/focus/discussions for the coming week(s) == I have added the two items raised by TC members raised to the draft agenda for the joint Board/TC/UC meeting to be held in Vancouver (see the wiki page [9] under "Strategic Discussions" and "Next steps for fixing bylaws typo"). Please keep in mind that the time allocations and content of the meeting are still subject to change. [9] https://wiki.openstack.org/wiki/Governance/Foundation/20May2018BoardMeeting We will also hold a retrospective for the TC as a team on Monday at the Forum. Please be prepared to discuss things you think are going well, things you think we need to change, items from our backlog that you would like to work on, etc. 
[10]

[10] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective

I need to revise the patch to update the expectations for goal champions
based on existing feedback. [11]

[11] https://review.openstack.org/564060

We have several items on our backlog that need owners. TC members, please
review the storyboard list [12] and consider taking on one of the tasks
that we agreed we would do.

[12] https://storyboard.openstack.org/#!/project/923

== Contacting the TC ==

The Technical Committee uses a series of weekly "office hour" time slots
for synchronous communication. We hope that by having several such times
scheduled, we will have more opportunities to engage with members of the
community from different timezones.

Office hour times in #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

If you have something you would like the TC to discuss, you can add it to
our office hour conversation starter etherpad at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Many of us also run IRC bouncers which stay in #openstack-tc most of the
time, so please do not feel that you need to wait for an office hour time
to pose a question or offer a suggestion. You can use the string
"tc-members" to alert the members to your question.

If you expect your topic to require significant discussion or to need
input from members of the community other than the TC, please start a
mailing list discussion on openstack-dev at lists.openstack.org and use
the subject tag "[tc]" to bring it to the attention of TC members.

From balazs.gibizer at ericsson.com Mon May 14 14:15:37 2018
From: balazs.gibizer at ericsson.com (Balázs Gibizer)
Date: Mon, 14 May 2018 16:15:37 +0200
Subject: [openstack-dev] [nova] Notification update week 20
Message-ID: <1526307337.23745.4@smtp.office365.com>

Hi,

Here is the latest notification subteam update.

Bugs
----

[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending
sometimes hits the keystone API to get glance endpoints
Fix needs some additional work: https://review.openstack.org/#/c/564528/

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit when
notifications are sent during live migration
We need to go through the live migration codepath and make sure that the
different live migration notifications are sent at the proper time.

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth usage
db query in notifications when the virt driver does not support
collecting such data

[Medium] https://bugs.launchpad.net/nova/+bug/1739325 Server operations
fail to complete with versioned notifications if payload contains unset
non-nullable fields
No progress. We still need to understand how this problem happens to
find the proper solution.
[Low] https://bugs.launchpad.net/nova/+bug/1487038
nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-------------------------------------
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
* https://review.openstack.org/#/c/403660 Transform instance.exists
notification - lost the +2 due to a merge conflict

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work:
https://review.openstack.org/#/c/526251/ - No progress. I've pinged the
author but no response.

Add the user id and project id of the user who initiated the instance
action to the notification
-----------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work
https://review.openstack.org/#/c/536243/ - No progress. I've pinged the
author but no response.

Sending full traceback in versioned notifications
-------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
The bp was reassigned to Kevin_Zheng and he proposed a WIP patch
https://review.openstack.org/#/c/564092/

Add versioned notifications for removing a member from a server group
---------------------------------------------------------------------
The specless bp
https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications
Based on the PoC patch https://review.openstack.org/#/c/559076/ we see
basic problems with the overall bp. See Matt's mail from the ML
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129804.html

Add notification support for trusted_certs
------------------------------------------
This is part of the bp nova-validate-certificates implementation series
to extend some of the instance notifications. The implementation looks
good to me in: https://review.openstack.org/#/c/563269

Introduce Pending VM state
--------------------------
The spec https://review.openstack.org/#/c/554212 proposes some
notification change to signal when a VM goes to PENDING state. However,
this information is already available from the versioned instance.update
notification. The discussion in the spec is ongoing.

Weekly meeting
--------------
I have to cancel this week's meeting and next week most of us will be in
Vancouver. So the next meeting will be held on 29th of May on
#openstack-meeting-4
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180529T170000

Cheers,
gibi

From fungi at yuggoth.org Mon May 14 14:35:24 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 14 May 2018 14:35:24 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror
In-Reply-To:
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
Message-ID: <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>

On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
[...]
> I think you may be conflating the notion that ubuntu or rhel/cent
> can be updated w/o any issues to applications that run atop of the
> distributions with what it means to introduce a minor update into
> the upstream openstack ci workflow.
>
> If jobs could execute w/o a timeout the tripleo jobs would not have
> gone red. Since we do have constraints in the upstream like timeouts
> and others, we have to prepare containers, images etc to work
> efficiently in the upstream. For example, if our jobs had the time
> to yum update the roughly 120 containers in play in each job the
> tripleo jobs would have just worked. I am not advocating for not
> having timeouts or constraints on jobs, however I am saying this is
> an infra issue, not a distribution or distribution support issue.
>
> I think this is an important point to consider and I view it as
> mostly unrelated to the support claims by the distribution. Does
> that make sense?
[...]

Thanks, the thread jumped straight to suggesting costly fixes
(separate images for each CentOS point release, adding an evaluation
period or acceptance testing for new point releases, et cetera)
without coming anywhere close to exploring the problem space. Is
your only concern that when your jobs started using CentOS 7.5
instead of 7.4 they took longer to run? What was the root cause? Are
you saying your jobs consume externally-produced artifacts which lag
behind CentOS package updates? Couldn't a significant burst of new
packages cause the same symptoms even without it being tied to a
minor version increase?

This _doesn't_ sound to me like a problem with how we've designed
our infrastructure, unless there are additional details you're
omitting. It sounds like a problem with how the jobs are designed
and expectations around distros slowly trickling package updates
into the series without occasional larger bursts of package deltas.
I'd like to understand more about why you upgrade packages inside
your externally-produced container images at job runtime at all,
rather than relying on the package versions baked into them. It
seems like you're arguing that the existence of lots of new package
versions which aren't already in your container images is the
problem, in which case I have trouble with the rationalization of it
being "an infra issue" insofar as it requires changes to the
services as provided by the OpenStack Infra team.

Just to be clear, we didn't "introduce a minor update into the
upstream openstack ci workflow." We continuously pull CentOS 7
packages into our package mirrors, and continuously rebuild our
centos-7 images from whatever packages the distro says are current.
Our automation doesn't know that there's a difference between
packages which were part of CentOS 7.4 and 7.5 any more than it
knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
Even if we somehow managed to pause our CentOS image updates
immediately prior to 7.5, jobs would still try to upgrade those
7.4-based images to the 7.5 packages in our mirror, right?
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From zigo at debian.org Mon May 14 14:54:20 2018
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 14 May 2018 16:54:20 +0200
Subject: [openstack-dev] [neutron] neutron-server declaring itself as up too early
Message-ID:

Hi,

It looks to me (I'm not sure yet...) that neutron-server is declaring
itself as up when it's not. As a consequence, puppet-openstack just
fails on me when it runs "neutron net-list" too early.

Could it be possible that the systemd notify is called at the wrong
place? If so, how could this be fixed?

Cheers,

Thomas Goirand (zigo)
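For anyone unfamiliar with the mechanism zigo is asking about, here is a
minimal sketch of how readiness signalling works for a notify-type service.
The unit excerpt below is illustrative only (the path and options are
assumptions, not the actual Debian packaging):

    # neutron-server.service (illustrative excerpt)
    [Service]
    # systemd does not consider the unit "active" until the daemon itself
    # reports readiness by sending READY=1 via sd_notify.
    Type=notify
    NotifyAccess=all
    ExecStart=/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf

With Type=notify, the point in the code where READY=1 is sent (oslo.service
provides this through oslo_service.systemd.notify_once()) determines when
"systemctl start neutron-server" returns. If that call happens before the
API workers are actually accepting requests, anything sequenced after the
unit (e.g. a puppet run issuing "neutron net-list") can race the service,
which would produce exactly the symptom described above.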
that neutron-server is declaring itself as up when it's not. As a
consequence, puppet-openstack just fails on me because it runs
"neutron net-list" too early, and that fails.

Could it be possible that the systemd notify is called at the wrong
place? If so, how could this be fixed?

Cheers,

Thomas Goirand (zigo)

From jistr at redhat.com  Mon May 14 15:35:41 2018
From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=)
Date: Mon, 14 May 2018 17:35:41 +0200
Subject: [openstack-dev] [tripleo] Zuul repo insertion in update/upgrade CI
Message-ID: <057713e4-a28a-3ed5-6938-6dacf9918ea2@redhat.com>

Hi,

this is mainly for CI folks and whom-it-may-concern. Recently we came
across the topic of how to enable/disable zuul repos at various places
in the CI jobs. For normal deploy jobs there's no need to customize,
but for update/upgrade jobs there is. It's not entirely
straightforward, and there's quite a variety of enable/disable spots
and combinations which can be useful.

Even though improvements in this area are not very likely to get
implemented right away, I had some thoughts on the topic so I wanted
to capture them. I put the ideas into an etherpad:

https://etherpad.openstack.org/p/tripleo-ci-zuul-repo-insertion

Feel free to put some more thoughts there or ping me on IRC with
anything related.

Thanks

Jirka

From pkovar at redhat.com  Mon May 14 15:44:18 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Mon, 14 May 2018 17:44:18 +0200
Subject: [openstack-dev] FW: [docs][openstack-ansible] Stepping down from
 core
In-Reply-To: <442FA6C1-282B-44B8-AA29-0B3BD87427C5@outlook.com>
References: <442FA6C1-282B-44B8-AA29-0B3BD87427C5@outlook.com>
Message-ID: <20180514174418.92a2fc0fab77b7627268f913@redhat.com>

Alex,

Many thanks for your community leadership, your guidance and help that
was essential during the transition period, and really for all the
effort that you have put into keeping the docs team up and running.

(Updated the perms accordingly.)

Thanks,
pk

On Wed, 9 May 2018 13:22:04 +0000
Alexandra Settle wrote:

> Man I'm so smart I sent a Dear John letter to the ML and forgot the subject header.
>
> SMOOTH MOVE.
>
> From: Alexandra Settle
> Date: Wednesday, May 9, 2018 at 2:13 PM
> To: "OpenStack Development Mailing List (not for usage questions)"
> Cc: Petr Kovar, Jean-Philippe Evrard
> Subject: [openstack-dev][docs][openstack-ansible]
>
> Hi all,
>
> It is with a super heavy heart I have to say that I need to step down as core from the OpenStack-Ansible and Documentation teams – and take a step back from the community.
>
> The last year has taken me in a completely different direction to what I expected, and try as I might I just don't have the time to be even a part-time member of this great community :(
>
> Although I'm moving on, and learning new things, nothing can beat the memories of SnowpenStack and Denver's super awesome trains.
>
> I know this isn't some acceptance speech at the Oscars – but I just want to thank the Foundation and everyone who donates to the travel program. Without you guys, I wouldn't have been a part of the community as much as I have been and met all your lovely faces.
>
> I have had such a great time being a part of something as exciting and new as OpenStack, and I hope to continue to lurk in the background of IRC like a total weirdo. I hope to perform some super shit karaoke with you all in another part of the world :) (who knows, maybe I'll just tag along to PTG's as a social outing… how cool am I?!)
> > I’d also like to thank Mugsie for this sweet shot which is the perfect summary of my time with the OpenStack community. Read into this what you will: > > [cid:image001.jpg at 01D3E79F.EFDEF8E0] > > Don’t be a stranger, > > Alex > > IRC: asettle > Twitter: dewsday > Email: a.settle at outlook.com From whayutin at redhat.com Mon May 14 15:57:17 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 May 2018 09:57:17 -0600 Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: <20180514143523.iqsxyam5rtb6jiln@yuggoth.org> References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <20180514032945.f4hpxpcoyhrylius@yuggoth.org> <20180514143523.iqsxyam5rtb6jiln@yuggoth.org> Message-ID: On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley wrote: > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote: > [...] > > I think you may be conflating the notion that ubuntu or rhel/cent > > can be updated w/o any issues to applications that run atop of the > > distributions with what it means to introduce a minor update into > > the upstream openstack ci workflow. > > > > If jobs could execute w/o a timeout the tripleo jobs would have > > not gone red. Since we do have constraints in the upstream like a > > timeouts and others we have to prepare containers, images etc to > > work efficiently in the upstream. For example, if our jobs had > > the time to yum update the roughly 120 containers in play in each > > job the tripleo jobs would have just worked. I am not advocating > > for not having timeouts or constraints on jobs, however I am > > saying this is an infra issue, not a distribution or distribution > > support issue. > > > > I think this is an important point to consider and I view it as > > mostly unrelated to the support claims by the distribution. Does > > that make sense? > [...] > > Thanks, the thread jumped straight to suggesting costly fixes > (separate images for each CentOS point release, adding an evaluation > period or acceptance testing for new point releases, et cetera) > without coming anywhere close to exploring the problem space. Is > your only concern that when your jobs started using CentOS 7.5 > instead of 7.4 they took longer to run? Yes, If they had unlimited time to run, our workflow would have everything updated to CentOS 7.5 in the job itself and I would expect everything to just work. > What was the root cause? Are > you saying your jobs consume externally-produced artifacts which lag > behind CentOS package updates? Yes, TripleO has externally produced overcloud images, and containers both of which can be yum updated but we try to ensure they are frequently recreated so the yum transaction is small. > Couldn't a significant burst of new > packages cause the same symptoms even without it being tied to a > minor version increase? > Yes, certainly this could happen outside of a minor update of the baseos. > > This _doesn't_ sound to me like a problem with how we've designed > our infrastructure, unless there are additional details you're > omitting. So the only thing out of our control is the package set on the base nodepool image. If that suddenly gets updated with too many packages, then we have to scramble to ensure the images and containers are also udpated. If there is a breaking change in the nodepool image for example [a], we have to react to and fix that as well. 
> It sounds like a problem with how the jobs are designed
> and expectations around distros slowly trickling package updates
> into the series without occasional larger bursts of package deltas.
> I'd like to understand more about why you upgrade packages inside
> your externally-produced container images at job runtime at all,
> rather than relying on the package versions baked into them.

We do that to ensure the gerrit review itself and its dependencies are
built via rpm and injected into the build. If we did not do this, the
job would not be testing the change at all. This is a result of being
a package-based deployment, for better or worse.

> It
> seems like you're arguing that the existence of lots of new package
> versions which aren't already in your container images is the
> problem, in which case I have trouble with the rationalization of it
> being "an infra issue" insofar as it requires changes to the
> services as provided by the OpenStack Infra team.
>
> Just to be clear, we didn't "introduce a minor update into the
> upstream openstack ci workflow." We continuously pull CentOS 7
> packages into our package mirrors, and continuously rebuild our
> centos-7 images from whatever packages the distro says are current.

Understood, which I think is fine and probably works for most
projects. An enhancement could be to stage the new images for, say,
one week or so. Do we need the CentOS updates immediately? Is there a
possible path that does not create a lot of work for infra, but also
provides some space for projects to prep for the consumption of the
updates?

> Our automation doesn't know that there's a difference between
> packages which were part of CentOS 7.4 and 7.5 any more than it
> knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> Even if we somehow managed to pause our CentOS image updates
> immediately prior to 7.5, jobs would still try to upgrade those
> 7.4-based images to the 7.5 packages in our mirror, right?

Understood, I suspect this will become a more widespread issue as more
projects start to use containers (not sure). It's my understanding
that there are some mechanisms in place to pin packages in the centos
nodepool image, so there have been some thoughts generally in the area
of this issue.

TripleO may be the exception to the rule here and that is fine; I'm
more interested in exploring the possibilities of delivering updates
in a staged fashion than anything. I don't have insight into what the
possibilities are, or if other projects have similar issues or
requests. Perhaps the TripleO project could share the details of our
job workflow with the community and this would make more sense.

I appreciate your time, effort and the thoughts you have shared in the
thread.

> --
> Jeremy Stanley
> [a] https://bugs.launchpad.net/tripleo/+bug/1770298
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cboylan at sapwetik.org  Mon May 14 16:08:18 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Mon, 14 May 2018 09:08:18 -0700
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: ->
 gate jobs impacted RAX yum mirror
In-Reply-To: 
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
 <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
 <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>
Message-ID: <1526314098.2004671.1371575456.0B1E9F8B@webmail.messagingengine.com>

On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote:
> On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley wrote:
>
> > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote:
> > [...]

snip

> > > This _doesn't_ sound to me like a problem with how we've designed
> > > our infrastructure, unless there are additional details you're
> > > omitting.
>
> So the only thing out of our control is the package set on the base
> nodepool image.
> If that suddenly gets updated with too many packages, then we have to
> scramble to ensure the images and containers are also updated.
> If there is a breaking change in the nodepool image for example [a], we
> have to react to and fix that as well.

Aren't the container images independent of the hosting platform (e.g.
what infra hosts)? I'm not sure I understand why the host platform
updating implies all the container images must also be updated.
> > > It sounds like a problem with how the jobs are designed
> > > and expectations around distros slowly trickling package updates
> > > into the series without occasional larger bursts of package deltas.
> > > I'd like to understand more about why you upgrade packages inside
> > > your externally-produced container images at job runtime at all,
> > > rather than relying on the package versions baked into them.
> >
> > We do that to ensure the gerrit review itself and its dependencies are
> > built via rpm and injected into the build.
> > If we did not do this the job would not be testing the change at all.
> > This is a result of being a package-based deployment for better or worse.

You'd only need to do that for the change in review, not the entire
system, right?

> snip

> > Our automation doesn't know that there's a difference between
> > packages which were part of CentOS 7.4 and 7.5 any more than it
> > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > Even if we somehow managed to pause our CentOS image updates
> > immediately prior to 7.5, jobs would still try to upgrade those
> > 7.4-based images to the 7.5 packages in our mirror, right?
> >
> > Understood, I suspect this will become a more widespread issue as
> > more projects start to use containers (not sure). It's my understanding
> > that
> > there are some mechanisms in place to pin packages in the centos nodepool
> > image so
> > there have been some thoughts generally in the area of this issue.

Again, I think we need to understand why containers would make this
worse, not better. Seems like the big feature everyone talks about
when it comes to containers is isolating packaging, whether that be
python packages so that nova and glance can use a different version of
oslo, or cohabitating software that would otherwise conflict. Why do
the packages on the host platform so strongly impact your container
package lists?

> > TripleO may be the exception to the rule here and that is fine, I'm more
> > interested in exploring
> > the possibilities of delivering updates in a staged fashion than anything.
> > I don't have insight into
> > what the possibilities are, or if other projects have similar issues or
> > requests. Perhaps the TripleO
> > project could share the details of our job workflow with the community and
> > this would make more sense.
> >
> > I appreciate your time, effort and thoughts you have shared in the thread.
> >
> > > --
> > > Jeremy Stanley
> >
> > > [a] https://bugs.launchpad.net/tripleo/+bug/1770298

I think understanding the questions above may be the important aspect
of understanding what the underlying issue is here and how we might
address it.

Clark

From sferdjao at redhat.com  Mon May 14 16:10:30 2018
From: sferdjao at redhat.com (Sahid Orentino Ferdjaoui)
Date: Mon, 14 May 2018 18:10:30 +0200
Subject: [openstack-dev] openstack-dev] [nova] Cannot live migrattion,
 because error:libvirtError: the CPU is incompatible with host CPU: Host
 CPU does not provide required features: cmt, mbm_total, mbm_local
In-Reply-To: 
References: 
Message-ID: <20180514161030.GA10015@redhat>

On Mon, May 14, 2018 at 11:23:51AM +0800, 何健乐 wrote:
> Hi, all
> When I did live-migration, I met the following error:
>
> result = proxy_call(self._autowrap, f, *args, **kwargs)
> May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 144, in proxy_call
> May 14 10:33:11 nova-compute[981335]: rv = execute(f, *args, **kwargs)
> May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 125, in execute
> May 14 10:33:11 nova-compute[981335]: six.reraise(c, e, tb)
> May 14 10:33:11 nova-compute[981335]: File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 83, in tworker
> May 14 10:33:11 nova-compute[981335]: rv = meth(*args, **kwargs)
> May 14 10:33:11 nova-compute[981335]: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1939, in migrateToURI3
> May 14 10:33:11 nova-compute[981335]: if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self)
> May 14 10:33:11 nova-compute[981335]: libvirtError: the CPU is incompatible with host CPU: Host CPU does not provide required features: cmt, mbm_total, mbm_local
>
> Is there anyone who has a solution for this problem?
>
> Thanks

This could be because you are running an older libvirt version on the
destination node which does not know anything about the cache or
memory bandwidth monitoring features from Intel. Upgrading your
libvirt version should resolve the issue.

Or you are effectively trying to live-migrate a host-model domain to a
destination node that does not support such features. To resolve it
you should update your nova.conf to use a CPU model for your guests
that will be compatible with both of your hosts. In nova.conf, under
the libvirt section:

  [libvirt]
  cpu_mode = custom
  cpu_model = Haswell

Then you should restart the nova-compute service and reboot --force
the instance so it will take the new CPU configuration into account.

s.

From openstack at nemebean.com  Mon May 14 16:10:48 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Mon, 14 May 2018 11:10:48 -0500
Subject: [openstack-dev] [oslo] No meeting next two weeks
Message-ID: <7e6b6add-670e-a042-212d-7ede1da9e959@nemebean.com>

As discussed in the meeting this week, we plan to skip the Oslo meeting
for the next two weeks. The first is during Summit, and the second is
the first full day back for many of us, so it's unlikely there will be
much new to talk about. Meetings will resume as normal after that, and
if anything comes up in the meantime we can adjust our plans if needed.

Thanks.
-Ben

From bdobreli at redhat.com  Mon May 14 16:15:04 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Mon, 14 May 2018 18:15:04 +0200
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines
 for Zuul v3 proposal
Message-ID: 

An update for your review please, folks.

> Bogdan Dobrelya writes:
>
>> Hello.
>> As Zuul documentation [0] explains, the names "check", "gate", and
>> "post" may be altered for more advanced pipelines. Is it doable to
>> introduce, for particular openstack projects, multiple check
>> stages/steps as check-1, check-2 and so on? And is it possible to make
>> the consequent steps reuse the environments that the previous steps
>> finished with?
>>
>> Narrowing down to tripleo CI scope, the problem I'd want us to solve
>> with this "virtual RFE", and using such multi-staged check pipelines,
>> is reducing (ideally, de-duplicating) some of the common steps for
>> existing CI jobs.
>
> What you're describing sounds more like a job graph within a pipeline.
> See: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
> for how to configure a job to run only after another job has completed.
> There is also a facility to pass data between such jobs.
>
> ... (skipped) ...
>
> Creating a job graph to have one job use the results of the previous job
> can make sense in a lot of cases. It doesn't always save *time*
> however.
>
> It's worth noting that in OpenStack's Zuul, we have made an explicit
> choice not to have long-running integration jobs depend on shorter pep8
> or tox jobs, and that's because we value developer time more than CPU
> time. We would rather run all of the tests and return all of the
> results so a developer can fix all of the errors as quickly as possible,
> rather than forcing an iterative workflow where they have to fix all the
> whitespace issues before the CI system will tell them which actual tests
> broke.
>
> -Jim

I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines
for undercloud deployments vs upgrades testing (and some more). Given
that those undercloud jobs have fairly low fail rates though, I think
Emilien is right in his comments and those would buy us nothing.

On the other hand, what do you think, folks, of making
tripleo-ci-centos-7-3nodes-multinode depend on
tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
failure-prone and long-running, and is non-voting. It deploys 3 nodes
in an HA fashion (see the featureset configs [3]*). And it seems to
almost never pass when containers-multinode fails - see the CI stats
page [4]. I've found only 2 cases there of the opposite situation,
where containers-multinode fails but 3nodes-multinode passes. So
cutting off those future failures via the added dependency *would* buy
us something and allow other jobs to wait less before commencing, at
the reasonable price of a somewhat extended main zuul pipeline time. I
think it makes sense, and that extended CI time will not exceed the
RDO CI execution times so much as to become a problem. WDYT?

[0] https://review.openstack.org/#/c/568275/
[1] https://review.openstack.org/#/c/568278/
[2] https://review.openstack.org/#/c/568326/
[3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
[4] http://tripleo.org/cistatus.html

* ignore column 1; it's obsolete, as all CI jobs now use config
download AFAICT...
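For reference, the shape of the change in [2] boils down to a job
dependency in the project pipeline definition, something like the
following (an abbreviated sketch based on the Zuul docs linked above,
not the verbatim patch):

    - project:
        check:
          jobs:
            - tripleo-ci-centos-7-containers-multinode
            # only start the long 3nodes job once the cheaper
            # containers job has succeeded
            - tripleo-ci-centos-7-3nodes-multinode:
                dependencies:
                  - tripleo-ci-centos-7-containers-multinode

With such a graph, a containers-multinode failure would short-circuit
the 3nodes run instead of burning nodes on a predictable failure.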
-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From juliaashleykreger at gmail.com  Mon May 14 16:17:13 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 14 May 2018 12:17:13 -0400
Subject: [openstack-dev] [ironic] Meeting week of May 21st cancelled
Message-ID: 

All,

The ironic meeting next week is cancelled as we will have some
attendees in Vancouver for the summit and forum. The next meeting will
be May 28th. I have updated the wiki[1] page accordingly.

-Julia

[1]: https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting

From melwittt at gmail.com  Mon May 14 16:32:29 2018
From: melwittt at gmail.com (melanie witt)
Date: Mon, 14 May 2018 09:32:29 -0700
Subject: [openstack-dev] [nova] review runway status
Message-ID: 

Howdy everyone,

This is just a brief status about the blueprints currently occupying
review runways [0] and an ask for the nova-core team to give these
reviews priority for their code review focus.

* Add z/VM driver
  https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky
  (jichen) [END DATE: 2018-05-15] spec amendment
  https://review.openstack.org/562154 and implementation series
  starting at https://review.openstack.org/523387

* Local disk serial numbers
  https://blueprints.launchpad.net/nova/+spec/local-disk-serial-numbers
  (mdbooth) [END DATE: 2018-05-16] series starting at
  https://review.openstack.org/526346

* PowerVM Driver (esberglu) [END DATE: 2018-05-28]
  * Snapshot https://blueprints.launchpad.net/nova/+spec/powervm-snapshot:
    https://review.openstack.org/#/c/543023/
  * DiskAdapter parent class
    https://blueprints.launchpad.net/nova/+spec/powervm-localdisk:
    https://review.openstack.org/#/c/549053/
  * Localdisk https://blueprints.launchpad.net/nova/+spec/powervm-localdisk:
    https://review.openstack.org/#/c/549300/

Cheers,
-melanie

[0] https://etherpad.openstack.org/p/nova-runways-rocky

From fungi at yuggoth.org  Mon May 14 16:37:03 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 14 May 2018 16:37:03 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: ->
 gate jobs impacted RAX yum mirror
In-Reply-To: 
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
 <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
 <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>
Message-ID: <20180514163703.65t5azirmpng6zjp@yuggoth.org>

On 2018-05-14 09:57:17 -0600 (-0600), Wesley Hayutin wrote:
> On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley wrote:
[...]
> > Couldn't a significant burst of new packages cause the same
> > symptoms even without it being tied to a minor version increase?
>
> Yes, certainly this could happen outside of a minor update of the
> baseos.

Thanks for confirming. So this is not specifically a CentOS minor
version increase issue, it's just more likely to occur at minor
version boundaries.

> So the only thing out of our control is the package set on the
> base nodepool image. If that suddenly gets updated with too many
> packages, then we have to scramble to ensure the images and
> containers are also updated.

It's still unclear to me why the packages on the test instance image
(i.e. the "container host") are related to the packages in the
container guest images at all. That would seem to be the whole point
of having containers?

> If there is a breaking change in the nodepool image for example
> [a], we have to react to and fix that as well.

I would argue that one is a terrible workaround which happened to
show its warts.
We should fix DIB's pip-and-virtualenv element rather than continue to
rely on side effects of pinning RPM versions. I've commented to that
effect on https://launchpad.net/bugs/1770298 just now.

> > It sounds like a problem with how the jobs are designed
> > and expectations around distros slowly trickling package updates
> > into the series without occasional larger bursts of package deltas.
> > I'd like to understand more about why you upgrade packages inside
> > your externally-produced container images at job runtime at all,
> > rather than relying on the package versions baked into them.
>
> We do that to ensure the gerrit review itself and its
> dependencies are built via rpm and injected into the build. If we
> did not do this the job would not be testing the change at all.
> This is a result of being a package-based deployment for better or
> worse.
[...]

Now I'll risk jumping to proposing solutions, but have you
considered building those particular packages in containers too?
That way they're built against the same package versions as will be
present in the other container images you're using rather than to
the package versions on the host, right? Seems like it would
completely sidestep the problem.

> An enhancement could be to stage the new images for, say, one week
> or so. Do we need the CentOS updates immediately? Is there a
> possible path that does not create a lot of work for infra, but
> also provides some space for projects to prep for the consumption
> of the updates?
[...]

Nodepool builds new images constantly, but at least daily. Part of
this is to prevent the delta of available packages/indices and other
files baked into those images from being more than a day or so stale
at any given point in time. The older the image, the more packages
(on average) jobs will need to download if they want to test with
latest package versions and the more strain it will put on our
mirrors and on our bandwidth quotas/donors' networks.
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From victoria at vmartinezdelacruz.com Mon May 14 16:42:02 2018 From: victoria at vmartinezdelacruz.com (=?UTF-8?Q?Victoria_Mart=C3=ADnez_de_la_Cruz?=) Date: Mon, 14 May 2018 13:42:02 -0300 Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition In-Reply-To: <557b34e5-f27f-6975-07fe-85f6b6c707d7@redhat.com> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <1525805985-sup-7865@lrrr.local> <20180508191640.GA16227@sinanju.localdomain> <557b34e5-f27f-6975-07fe-85f6b6c707d7@redhat.com> Message-ID: 2018-05-08 16:31 GMT-03:00 Zane Bitter : > On 08/05/18 15:16, Matthew Treinish wrote: > >> Although, I don't think glance uses oslo.service even in the case where >> it's >> using the standalone eventlet server. It looks like it launches >> eventlet.wsgi >> directly: >> >> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py >> >> and I don't see oslo.service in the requirements file either: >> >> https://github.com/openstack/glance/blob/master/requirements.txt >> > > It would probably independently suffer from https://bugs.launchpad.net/man > ila/+bug/1482633 in Python 3 then. IIUC the code started in oslo > incubator but projects like neutron and manila converted to use the > oslo.service version. There may be other copies of it still floating > around... > > - ZB > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Hi, Jumping in now as I'm helping with py3 support efforts in the manila side. In manila we have both support for Apache WSGI and the built-in server (which depends in eventlet). Would it be a possible workaround to rely on the Apache WSGI server while we wait for evenlet issues to be sorted out? Is there any chance the upper constraints will be updated soon-ish and this can be fixed in a newer eventlet version? This is the only change it's preventing us to be fully py3 compatible, hence it's a big deal for us. Thanks, Victoria -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon May 14 17:11:14 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 May 2018 11:11:14 -0600 Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: <1526314098.2004671.1371575456.0B1E9F8B@webmail.messagingengine.com> References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <20180514032945.f4hpxpcoyhrylius@yuggoth.org> <20180514143523.iqsxyam5rtb6jiln@yuggoth.org> <1526314098.2004671.1371575456.0B1E9F8B@webmail.messagingengine.com> Message-ID: On Mon, May 14, 2018 at 12:08 PM Clark Boylan wrote: > On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote: > > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley > wrote: > > > > > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote: > > > [...] > > snip > > > > > > > This _doesn't_ sound to me like a problem with how we've designed > > > our infrastructure, unless there are additional details you're > > > omitting. 
> > > So the only thing out of our control is the package set on the base
> > > nodepool image.
> > > If that suddenly gets updated with too many packages, then we have to
> > > scramble to ensure the images and containers are also updated.
> > > If there is a breaking change in the nodepool image for example [a], we
> > > have to react to and fix that as well.
>
> Aren't the container images independent of the hosting platform (e.g.
> what infra hosts)? I'm not sure I understand why the host platform
> updating implies all the container images must also be updated.

You make a fine point here. I think, as with anything, there are some
bits that are still being worked on. At this moment it's my
understanding that pacemaker and possibly a few other components are
not 100% containerized. I'm not an expert in the subject and my
understanding may not be correct. Until you are 100% containerized
there may still be some dependencies on the base image and an impact
from changes.

> > > It sounds like a problem with how the jobs are designed
> > > and expectations around distros slowly trickling package updates
> > > into the series without occasional larger bursts of package deltas.
> > > I'd like to understand more about why you upgrade packages inside
> > > your externally-produced container images at job runtime at all,
> > > rather than relying on the package versions baked into them.
> >
> > We do that to ensure the gerrit review itself and its dependencies are
> > built via rpm and injected into the build.
> > If we did not do this the job would not be testing the change at all.
> > This is a result of being a package-based deployment for better or
> > worse.
>
> You'd only need to do that for the change in review, not the entire
> system, right?

Correct, there is no intention of updating the entire distribution at
run time; the intent is to have as much as possible updated in the
jobs that build the containers and images. Only the rpm-built zuul
change should be included in the update; however, some zuul changes
require a CentOS base package that was not previously installed in the
container, e.g. a new python dependency introduced in a zuul change.
Previously we had not enabled any CentOS repos in the container
update, but found that was not viable 100% of the time. We have a
change to further limit the scope of the update which should help [1],
especially when facing a minor version update.

[1] https://review.openstack.org/#/c/567550/

> snip

> > Our automation doesn't know that there's a difference between
> > packages which were part of CentOS 7.4 and 7.5 any more than it
> > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3.
> > Even if we somehow managed to pause our CentOS image updates
> > immediately prior to 7.5, jobs would still try to upgrade those
> > 7.4-based images to the 7.5 packages in our mirror, right?
> >
> > Understood, I suspect this will become a more widespread issue as
> > more projects start to use containers (not sure). It's my understanding
> > that
> > there are some mechanisms in place to pin packages in the centos nodepool
> > image so
> > there have been some thoughts generally in the area of this issue.
>
> Again, I think we need to understand why containers would make this worse
> not better. Seems like the big feature everyone talks about when it comes
> to containers is isolating packaging, whether that be python packages so
> that nova and glance can use a different version of oslo, or cohabitating
> software that would otherwise conflict. Why do the packages on the host
> platform so strongly impact your container package lists?

I'll let others comment on that; however, my thought is you don't move
from A -> Z in one step, and containers do not make everything easier
immediately. Like most things, it takes a little time.

> > TripleO may be the exception to the rule here and that is fine, I'm more
> > interested in exploring
> > the possibilities of delivering updates in a staged fashion than anything.
> > I don't have insight into
> > what the possibilities are, or if other projects have similar issues or
> > requests. Perhaps the TripleO
> > project could share the details of our job workflow with the community and
> > this would make more sense.
> >
> > I appreciate your time, effort and thoughts you have shared in the thread.
>
> I think understanding the questions above may be the important aspect of
> understanding what the underlying issue is here and how we might address it.
>
> Clark

Thanks Clark, let me know if I did not get everything on your list
there. Thanks again for your time.

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com  Mon May 14 18:00:05 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Mon, 14 May 2018 12:00:05 -0600
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: ->
 gate jobs impacted RAX yum mirror
In-Reply-To: <20180514163703.65t5azirmpng6zjp@yuggoth.org>
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
 <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
 <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>
 <20180514163703.65t5azirmpng6zjp@yuggoth.org>
Message-ID: 

On Mon, May 14, 2018 at 12:37 PM Jeremy Stanley wrote:

> On 2018-05-14 09:57:17 -0600 (-0600), Wesley Hayutin wrote:
> > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley wrote:
> [...]
> > > Couldn't a significant burst of new packages cause the same
> > > symptoms even without it being tied to a minor version increase?
> >
> > Yes, certainly this could happen outside of a minor update of the
> > baseos.
>
> Thanks for confirming. So this is not specifically a CentOS minor
> version increase issue, it's just more likely to occur at minor
> version boundaries.

Correct, you got it.

> > So the only thing out of our control is the package set on the
> > base nodepool image. If that suddenly gets updated with too many
> > packages, then we have to scramble to ensure the images and
> > containers are also updated.
>
> It's still unclear to me why the packages on the test instance image
> (i.e. the "container host") are related to the packages in the
> container guest images at all. That would seem to be the whole point
> of having containers?

You are right; just note some services are not 100% containerized yet.
This doesn't happen overnight; it's a process, and we're getting there.
> > If there is a breaking change in the nodepool image for example
> > [a], we have to react to and fix that as well.
>
> I would argue that one is a terrible workaround which happened to
> show its warts. We should fix DIB's pip-and-virtualenv element
> rather than continue to rely on side effects of pinning RPM versions.
> I've commented to that effect on https://launchpad.net/bugs/1770298
> just now.

OK, thanks.

> > It sounds like a problem with how the jobs are designed
> > and expectations around distros slowly trickling package updates
> > into the series without occasional larger bursts of package deltas.
> > I'd like to understand more about why you upgrade packages inside
> > your externally-produced container images at job runtime at all,
> > rather than relying on the package versions baked into them.
>
> We do that to ensure the gerrit review itself and its
> dependencies are built via rpm and injected into the build. If we
> did not do this the job would not be testing the change at all.
> This is a result of being a package-based deployment for better or
> worse.
[...]

> Now I'll risk jumping to proposing solutions, but have you
> considered building those particular packages in containers too?
> That way they're built against the same package versions as will be
> present in the other container images you're using rather than to
> the package versions on the host, right? Seems like it would
> completely sidestep the problem.

So a little background. The containers and images used in TripleO are
rebuilt multiple times each day via periodic jobs; when they pass our
criteria, they are pushed out and used upstream. Each zuul change and
its dependencies can potentially impact a few or all of the containers
in play. We cannot rebuild all the containers due to time constraints
in each job. We have been able to mount and yum update the containers
involved with the zuul change. The latest patch to fine-tune that
process is here: https://review.openstack.org/#/c/567550/
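To make that concrete, the step amounts to something like the
following (a purely illustrative sketch: the task layout, the
'affected_containers' variable and the 'gating-repo' name are made up
here, and the real logic lives in the patch above):

    # Illustrative only: yum update just the containers a change
    # touches, instead of rebuilding all ~120 of them in the job.
    - name: update affected containers with the rpms built from the change
      shell: |
        docker run --name tmp-{{ item }} {{ item }} \
            yum -y update --disablerepo='*' --enablerepo='gating-repo'
        docker commit tmp-{{ item }} {{ item }}
        docker rm tmp-{{ item }}
      with_items: "{{ affected_containers }}"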
And of course, > nodepool would need to learn to be able to boot nodes from older > versions of an image on record which is not a feature it has right > now. > OK.. thanks for walking me through that. It totally makes sense to be concerned with updating the image to save time, bandwidth etc. It would be interesting to see if we could come up with something to protect projects from changes to the new images and maintain images with fresh updates. Project non-voting check jobs on the node-pool image creation job perhaps could be the canary in the coal mine we are seeking. Maybe we could see if that would be something that could be useful to both infra and to various OpenStack projects? > > > Understood, I suspect this will become a more widespread issue as > > more projects start to use containers ( not sure ). > > I'm still confused as to what makes this a container problem in the > general sense, rather than just a problem (leaky abstraction) with > how you've designed the job framework in which you're using them. > > > It's my understanding that there are some mechanisms in place to > > pin packages in the centos nodepool image so there has been some > > thoughts generally in the area of this issue. > [...] > > If this is a reference back to bug 1770298, as mentioned already I > think that's a mistake in diskimage-builder's stdlib which should be > corrected, not a pattern we should propagate. > Cool, good to know and thank you! > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon May 14 18:05:29 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 14 May 2018 11:05:29 -0700 Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <20180514032945.f4hpxpcoyhrylius@yuggoth.org> <20180514143523.iqsxyam5rtb6jiln@yuggoth.org> <1526314098.2004671.1371575456.0B1E9F8B@webmail.messagingengine.com> Message-ID: <1526321129.2818243.1371709056.3B52FCB9@webmail.messagingengine.com> On Mon, May 14, 2018, at 10:11 AM, Wesley Hayutin wrote: > On Mon, May 14, 2018 at 12:08 PM Clark Boylan wrote: > > > On Mon, May 14, 2018, at 8:57 AM, Wesley Hayutin wrote: > > > On Mon, May 14, 2018 at 10:36 AM Jeremy Stanley > > wrote: > > > > > > > On 2018-05-14 07:07:03 -0600 (-0600), Wesley Hayutin wrote: snip > > > > Our automation doesn't know that there's a difference between > > > > packages which were part of CentOS 7.4 and 7.5 any more than it > > > > knows that there's a difference between Ubuntu 16.04.2 and 16.04.3. > > > > Even if we somehow managed to pause our CentOS image updates > > > > immediately prior to 7.5, jobs would still try to upgrade those > > > > 7.4-based images to the 7.5 packages in our mirror, right? > > > > > > > > > > Understood, I suspect this will become a more widespread issue as > > > more projects start to use containers ( not sure ). It's my > > understanding > > > that > > > there are some mechanisms in place to pin packages in the centos nodepool > > > image so > > > there has been some thoughts generally in the area of this issue. 
> > Again, I think we need to understand why containers would make this worse
> > not better. Seems like the big feature everyone talks about when it comes
> > to containers is isolating packaging whether that be python packages so
> > that nova and glance can use a different version of oslo or cohabitating
> > software that would otherwise conflict. Why do the packages on the host
> > platform so strongly impact your container package lists?
>
> I'll let others comment on that, however my thought is you don't move from
> A -> Z in one step and containers do not make everything easier
> immediately. Like most things, it takes a little time.

If the main issue is being caught in a transition period at the same
time a minor update happens, can we treat this as a temporary state?
Rather than attempting to solve this particular case from happening
again in the future, we might be better served testing that upcoming
CentOS releases won't break tripleo due to changes in the packaging,
using the centos-release-cr repo as Tristan suggests. That should tell
you if something like pacemaker were to stop working. Note this
wouldn't require any infra-side updates; you would just have these
jobs configure the additional repo and go from there. Then, on top of
that, get through the transition period so that the containers isolate
you from these changes in the way they should. Then when 7.6 happens
you'll hopefully have identified all the broken packaging ahead of
time and worked with upstream to address those problems (which should
be important for a stable long-term-support distro), and your
containers can update at whatever pace they choose.

I don't think it would be appropriate for Infra to stage centos minor
versions, for a couple of reasons. The first is that we don't support
specific minor versions of CentOS/RHEL; we support the major version,
and if it updates and OpenStack stops working, that is CI doing its
job and providing that info. The other major concern is CentOS
specifically says "We are trying to make sure people understand they
can NOT use older minor versions and still be secure." Similarly to
how we won't support Ubuntu 12.04 because it is no longer supported,
we shouldn't support CentOS 7.4 at this point. These are no longer
secure platforms. However, I think testing using the pre-release repo
as proposed above should allow you to catch issues before updates
happen just as well as a staged minor version update would. The added
benefit of using this process is you should know as soon as possible
and not after the release has been made (helping other users of CentOS
by not releasing broken packages in the first place).

Clark

From lbragstad at gmail.com  Mon May 14 18:13:51 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Mon, 14 May 2018 13:13:51 -0500
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the
 debug option at runtime
In-Reply-To: 
References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org>
 <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com>
Message-ID: <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com>

On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
>
> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann wrote:
>
>     Both of those are good ideas.
>
> Agree. I like the socket idea a bit more as I can imagine some
> operators don't want config file changes automatically applied. Do we
> want to choose one to standardize on or allow each project (or
> operators, via config) the choice?
Just to recap, keystone would be listening for when its configuration
file changes, and reinitialize the logger if the logging settings
changed, correct? Would that suffice for the goal? We'd be explicit in
checking for logging option changes, so modifications to other
configuration options shouldn't affect anything, should they?

>     I believe adding those things to oslo.service would make them
>     available to all applications.
>
> Not necessarily - this discussion started when the Keystone team was
> discussing how to implement this, given that keystone doesn't use
> oslo.service. That said, it should be easy to implement in services
> that don't want this dependency, so +1.
>
> // jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From fungi at yuggoth.org  Mon May 14 18:56:51 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 14 May 2018 18:56:51 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: ->
 gate jobs impacted RAX yum mirror
In-Reply-To: 
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
 <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
 <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>
 <20180514163703.65t5azirmpng6zjp@yuggoth.org>
Message-ID: <20180514185651.5icjffrrtic4hhpn@yuggoth.org>

On 2018-05-14 12:00:05 -0600 (-0600), Wesley Hayutin wrote:
[...]
> Non-voting project check jobs attached to the nodepool image creation
> job could perhaps be the canary in the coal mine we are seeking.
> Maybe we could see if that would be something that could be useful to
> both infra and to various OpenStack projects?
[...]

This presumes that Nodepool image builds are Zuul jobs, which they
aren't (at least not today). Long, long ago in a CI system not so
far away, our DevStack-specific image builds were in fact CI jobs
and for a while back then we did run DevStack's "smoke" tests as an
acceptance test before putting a new image into service. At the time
we discovered that even deploying DevStack was too complex and racy
to make for a viable acceptance test. The lesson we learned is that
most of the image regressions we were concerned with preventing
required testing complex enough to be a significant regression
magnet itself (Gödel's completeness theorem at work, I expect?).

That said, the idea of turning more of Nodepool's tasks into Zuul
jobs is an interesting one worthy of lengthy discussion sometime.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From fungi at yuggoth.org  Mon May 14 19:03:41 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 14 May 2018 19:03:41 +0000
Subject: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: ->
 gate jobs impacted RAX yum mirror
In-Reply-To: <20180514185651.5icjffrrtic4hhpn@yuggoth.org>
References: <20180513152435.x2iguepehk6fblbr@yuggoth.org>
 <20180514032945.f4hpxpcoyhrylius@yuggoth.org>
 <20180514143523.iqsxyam5rtb6jiln@yuggoth.org>
 <20180514163703.65t5azirmpng6zjp@yuggoth.org>
 <20180514185651.5icjffrrtic4hhpn@yuggoth.org>
Message-ID: <20180514190341.krjmluvwcuobdq6p@yuggoth.org>

On 2018-05-14 18:56:51 +0000 (+0000), Jeremy Stanley wrote:
[...]
> Gödel's completeness theorem at work
[...]

More accurately, Gödel's first incompleteness theorem, I suppose. ;)
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From sshnaidm at redhat.com  Mon May 14 19:15:06 2018
From: sshnaidm at redhat.com (Sagi Shnaidman)
Date: Mon, 14 May 2018 22:15:06 +0300
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines
 for Zuul v3 proposal
In-Reply-To: 
References: 
Message-ID: 

Hi, Bogdan,

I like the idea of the undercloud job. Actually, if the undercloud job
fails, I'd stop all other jobs, because it doesn't make sense to run
them. Seeing the same failure in 10 jobs doesn't add too much. So
maybe adding the undercloud job as a dependency for all multinode jobs
would be a great idea. I think it's also worth checking how long it
will delay jobs. Will all jobs wait until the undercloud job has
finished? Or will they be aborted when the undercloud job fails?

However, I'm very sceptical about the multinode containers and
scenario jobs; they can fail for very different reasons, like race
conditions in the product or infra issues. Skipping some of them will
lead to more rechecks from devs trying to discover all problems in a
row, which will delay the development process significantly.

Thanks

On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya wrote:

> An update for your review please, folks.
>
> Bogdan Dobrelya writes:
>>
>>> Hello.
>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>> "post" may be altered for more advanced pipelines. Is it doable to
>>> introduce, for particular openstack projects, multiple check
>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>> the consequent steps reuse the environments that the previous steps
>>> finished with?
>>>
>>> Narrowing down to tripleo CI scope, the problem I'd want us to solve
>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>> is reducing (ideally, de-duplicating) some of the common steps for
>>> existing CI jobs.
>>>
>>
>> What you're describing sounds more like a job graph within a pipeline.
>> See: https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>> for how to configure a job to run only after another job has completed.
>> There is also a facility to pass data between such jobs.
>>
>> ... (skipped) ...
>>
>> Creating a job graph to have one job use the results of the previous job
>> can make sense in a lot of cases. It doesn't always save *time*
>> however.
>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>> choice not to have long-running integration jobs depend on shorter pep8
>> or tox jobs, and that's because we value developer time more than CPU
>> time. We would rather run all of the tests and return all of the
>> results so a developer can fix all of the errors as quickly as possible,
>> rather than forcing an iterative workflow where they have to fix all the
>> whitespace issues before the CI system will tell them which actual tests
>> broke.
>>
>> -Jim
>
> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
> undercloud deployments vs upgrades testing (and some more). Given that
> those undercloud jobs have fairly low fail rates though, I think Emilien
> is right in his comments and those would buy us nothing.
>
> On the other hand, what do you think, folks, of making
> tripleo-ci-centos-7-3nodes-multinode depend on
> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
> failure-prone and long-running, and is non-voting. It deploys 3 nodes
> in an HA fashion (see the featureset configs [3]*). And it seems to
> almost never pass when containers-multinode fails - see the CI stats
> page [4]. I've found only 2 cases there of the opposite situation,
> where containers-multinode fails but 3nodes-multinode passes. So
> cutting off those future failures via the added dependency *would* buy
> us something and allow other jobs to wait less before commencing, at
> the reasonable price of a somewhat extended main zuul pipeline time. I
> think it makes sense, and that extended CI time will not exceed the RDO
> CI execution times so much as to become a problem. WDYT?
>
> [0] https://review.openstack.org/#/c/568275/
> [1] https://review.openstack.org/#/c/568278/
> [2] https://review.openstack.org/#/c/568326/
> [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
> [4] http://tripleo.org/cistatus.html
>
> * ignore column 1; it's obsolete, as all CI jobs now use config
> download AFAICT...
>
> --
> Best regards,
> Bogdan Dobrelya,
> Irc #bogdando
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best regards
Sagi Shnaidman
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From e0ne at e0ne.info  Mon May 14 19:20:42 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Mon, 14 May 2018 22:20:42 +0300
Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0
In-Reply-To: 
References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org>
 <1526061568-sup-5500@lrrr.local>
 <1526301210-sup-5803@lrrr.local>
Message-ID: 

Hi all,

From Horizon's perspective, it would be good to support Django 1.11 as
long as we can, since it's an LTS release [2]. Django 2.0 support is
also extremely important because it's the first step toward a
python3-only environment and a step forward in supporting the next
Django 2.2 LTS release, which will be released next April.

We have to be careful not to break existing plugins and deployments by
introducing a new Django version requirement. We need to work more
closely with plugin teams to get everything ready for Django 2.0+
before we change our requirements.txt.
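To be concrete, the requirements bump being discussed amounts to
something like the following change (a hypothetical rendering for
illustration, not the actual review):

    # current upper bound; Django 1.11 stays the version used on python 2.7
    Django>=1.11,<2.0
    # proposed for Rocky-2: additionally allow Django 2.0.x
    Django>=1.11,<2.1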
I don't want to introduce any breaking changes for current plugins so we need to be sure that each plugin supports Django 2.0. It means plugins have to have voting Django 2.0 jobs on their gates at least. I'll do my best on this effort and will work with plugins teams to do as much as we can in the Rocky timeframe. [2] https://www.djangoproject.com/download/ Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki wrote: > > > 2018年5月14日(月) 21:42 Doug Hellmann : > >> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: >> > 2018年5月12日(土) 3:04 Doug Hellmann : >> > >> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: >> > > > Hi zigo and horizon plugin maintainers, >> > > > >> > > > Horizon itself already supports Django 2.0 and horizon unit test covers >> > > > Django 2.0 with Python 3.5. >> > > > >> > > > A question to all is whether we change the upper bound of Django from >> > > <2.0 >> > > > to <2.1. >> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. >> > > > (Note that Django 1.11 will continue to be used for python 2.7 >> > > environment.) >> > > >> > > Do we need to cap it at all? We've been trying to express our >> > > dependencies without caps and rely on the constraints list to >> > > test using a common version because this offers the most flexibility as >> > > we move to newer versions over time. >> > > >> > >> > The main reason we cap django version so far is that django minor version >> > releases >> > contain some backward incompatible changes and also drop deprecated >> > features. >> > A new django minor version release like 1.11 usually breaks horizon and >> > plugins >> > as horizon developers are not always checking django deprecations. >> >> OK. Having the cap in place makes it more complicated to test >> upgrading, and then upgrade. Because we no longer synchronize >> requirements, changing openstack/requirements does not trigger the >> bot to propose the same change to all of the projects using the >> dependency. Someone will have to do that by hand in the future, as we >> are doing with eventlet right now >> (https://review.openstack.org/#/q/topic:uncap-eventlet). >> >> Without the cap, we can test the upgrade by proposing a constraint >> update and running the horizon (and/or plugin) unit tests. When those >> tests pass, we can then step forward all at once by approving the >> constraint change. >> > > Thanks for the detail context. > > Honestly I am not sure which is better to cap or uncap the django version. > We can try uncapping now and see what happens in the community. > > cross-horizon-(py27|py35) jobs of openstack/requirements checks > if horizon works with a new version. it works for horizon, but perhaps it > potentially > break horizon plugins as it takes time to catch up with such changes. > On the other hand, a version bump in upper-constraints.txt would be > a good trigger for horizon plugin maintainers to sync all requirements. > > In addition, requirements are not synchronized automatically, > so it seems not feasible to propose requirements changes per django > version change. > > >> >> > >> > I have a question on uncapping the django version. >> > How can users/operators know which versions are supported? >> > Do they need to check upper-constraints.txt?
>> >> We do tell downstream consumers that the upper-constraints.txt file is >> the set of things we test with, and that any other combination of >> packages would need to be tested on their systems separately. >> >> > >> > > > There are several points we should consider: >> > > > - If we change it in global-requirements.txt, it means Django 2.0 >> will be >> > > > used for python3.5 environment. >> > > > - Not a small number of horizon plugins still do not support Django >> 2.0, >> > > so >> > > > bumping the upper bound to <2.1 will break their py35 tests. >> > > > - From my experience of Django 2.0 support in some plugins, the >> required >> > > > changes are relatively simple like [1]. >> > > > >> > > > I created an etherpad page to track Django 2.0 support in horizon >> > > plugins. >> > > > https://etherpad.openstack.org/p/django20-support >> > > > >> > > > I proposed Django 2.0 support patches to several projects which I >> think >> > > are >> > > > major. >> > > > # Do not blame me if I don't cover your project :) >> > > > >> > > > Thought? >> > > >> > > It seems like a good goal for the horizon-plugin author community >> > > to bring those projects up to date by supporting a current version >> > > of Django (and any other dependencies), especially as we discuss >> > > the impending switch over to python-3-first and then python-3-only. >> > > >> > >> > Yes, python 3 support is an important topic. >> > We also need to switch the default python version in mod_wsgi in >> DevStack >> > environment sooner or later. >> >> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor >> the variable that tells devstack to use Python 3? >> > > Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi > and libapache2-mod-wsgi-py3) and as a quick look the only difference is a > module > specified in LoadModule apache directive. > I haven't tested it yet, but it seems worth explored. > > Akihiro > > >> > >> > > If this is an area where teams need help, updating that etherpad >> > > with notes and requests for assistance will help us split up the >> > > work. >> > > >> > >> > Each team can help testing in Django 2.0 and/or python 3 support. >> > We need to enable corresponding server projects in development >> environments, >> > but it is not easy to setup all projects by horizon team. Individual >> > projects must be >> > more familiar with their own projects. >> > I sent several patches, but I actually tested them by unit tests. >> > >> > Thanks, >> > Akihiro >> > >> > > >> > > Doug >> > > >> > > > >> > > > Thanks, >> > > > Akihiro >> > > > >> > > > [1] https://review.openstack.org/#/c/566476/ >> > > > >> > > > 2018年5月8日(火) 17:45 Thomas Goirand : >> > > > >> > > > > Hi, >> > > > > >> > > > > It has been decided that, in Debian, we'll switch to Django 2.0 >> after >> > > > > Buster will be released. Buster is to be frozen next February. >> This >> > > > > means that we have roughly one more year before Django 1.x goes >> away. >> > > > > >> > > > > Hopefully, Horizon will be ready for it, right? 
>> > > > > >> > > > > Hoping this helps, >> > > > > Cheers, >> > > > > >> > > > > Thomas Goirand (zigo) >> > > > > >> > > > > >> > > ____________________________________________________________ >> ______________ >> > > > > OpenStack Development Mailing List (not for usage questions) >> > > > > Unsubscribe: >> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > >> > > >> > > ____________________________________________________________ >> ______________ >> > > OpenStack Development Mailing List (not for usage questions) >> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon May 14 19:24:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 15:24:10 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> Message-ID: <1526325202-sup-17@lrrr.local> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: > > On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: > > > > On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann > > wrote: > > > > Both of those are good ideas. > > > > > > Agree. I like the socket idea a bit more as I can imagine some > > operators don't want config file changes automatically applied. Do we > > want to choose one to standardize on or allow each project (or > > operators, via config) the choice? > > Just to recap, keystone would be listening for when it's configuration > file changes, and reinitialize the logger if the logging settings > changed, correct? Sort of. Keystone would need to do something to tell oslo.config to re-load the config files. In services that rely on oslo.service, this is handled with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so for Keystone you would want to do something similar. That is, you want to wait for an explicit notification from the operator that you should reload the config, and not just watch for the file to change. We could talk about using file modification as a trigger, but reloading is something that may need to be staged across several services in order so we chose for the first version to make the trigger explicit. Relying on watching files will also fail when the modified data is not in a file (which will be possible when we finish the driver work described in http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html). > > Would that suffice for the goal? 
We'd be explicit in checking for > logging option changes, so modifications to other configuration options > shouldn't affect anything, should they? Yes, oslo.config deals with all of that. Each configuration option has a flag saying whether or not it is mutable (defaults to False). When oslo.config is told to "mutate", it reloads the data sources and reports as warnings any config options that changed that are not mutable. For any options that are marked mutable and have been changed, it calls the "mutate hooks" that have been registered by calling ConfigOpts.register_mutate_hook(), passing some information about which options changed and what changes were made. There's a little more information in https://docs.openstack.org/oslo.config/latest/reference/mutable.html but I notice that does not cover the hooks. The one for oslo.log is in http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/log.py#n229 For the goal, however, all you need to do is set up some way to trigger the call to mutate_config_files() and then document that. > > > > I believe adding those things to oslo.service would make them > > available to all applications.  > > > > > > Not necessarily - this discussion started when the Keystone team was > > discussing how to implement this, given that keystone doesn't use > > oslo.service. That said, it should be easy to implement in services > > that don't want this dependency, so +1. > > > > // jim > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From Louie.Kwan at windriver.com Mon May 14 19:26:38 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Mon, 14 May 2018 19:26:38 +0000 Subject: [openstack-dev] [Telemetry] [ceilometer] Ceilometer-file-publisher-compression-csv-format In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E9630A864@ALA-MBC.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9630A864@ALA-MBC.corp.ad.wrs.com> Message-ID: <47EFB32CD8770A4D9590812EE28C977E96339745@ALA-MBD.corp.ad.wrs.com> Hi All, Is there a weekly meeting for Telemetry? I would like to discuss what we can do as the next step for the following review: https://review.openstack.org/#/c/562768/ Pinged in IRC a few times; please advise on the next step. Thanks. Louie ________________________________________ From: Kwan, Louie Sent: Friday, May 04, 2018 10:03 AM To: openstack-dev at lists.openstack.org; julien.danjou at enovance.com Subject: [openstack-dev] [Telemetry] [ceilometer] Ceilometer-file-publisher-compression-csv-format Reaching out to the Rocky PTL and others. What could be the next step? Thanks. Louie -----Original Message----- From: Kwan, Louie [mailto:Louie.Kwan at windriver.com] Sent: Monday, April 23, 2018 4:10 PM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [ceilometer] Ceilometer-file-publisher-compression-csv-format Submitted the following review on April 19: https://review.openstack.org/#/c/562768/ I would like to know who else could be on the reviewer list, and anything else needed for the next step. Thanks.
Louie __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From aschultz at redhat.com Mon May 14 20:06:30 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 14 May 2018 14:06:30 -0600 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID: On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya wrote: > An update for your review please folks > >> Bogdan Dobrelya writes: >> >>> Hello. >>> As Zuul documentation [0] explains, the names "check", "gate", and >>> "post" may be altered for more advanced pipelines. Is it doable to >>> introduce, for particular openstack projects, multiple check >>> stages/steps as check-1, check-2 and so on? And is it possible to make >>> the consequent steps reusing environments from the previous steps >>> finished with? >>> >>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>> with this "virtual RFE", and using such multi-staged check pipelines, >>> is reducing (ideally, de-duplicating) some of the common steps for >>> existing CI jobs. >> >> >> What you're describing sounds more like a job graph within a pipeline. >> See: >> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >> for how to configure a job to run only after another job has completed. >> There is also a facility to pass data between such jobs. >> >> ... (skipped) ... >> >> Creating a job graph to have one job use the results of the previous job >> can make sense in a lot of cases. It doesn't always save *time* >> however. >> >> It's worth noting that in OpenStack's Zuul, we have made an explicit >> choice not to have long-running integration jobs depend on shorter pep8 >> or tox jobs, and that's because we value developer time more than CPU >> time. We would rather run all of the tests and return all of the >> results so a developer can fix all of the errors as quickly as possible, >> rather than forcing an iterative workflow where they have to fix all the >> whitespace issues before the CI system will tell them which actual tests >> broke. >> >> -Jim > > > I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for > undercloud deployments vs upgrades testing (and some more). Given that those > undercloud jobs have not so high fail rates though, I think Emilien is right > in his comments and those would buy us nothing. > > From the other side, what do you think folks of making the > tripleo-ci-centos-7-3nodes-multinode depend on > tripleo-ci-centos-7-containers-multinode [2]? The former seems quite faily > and long running, and is non-voting. It deploys (see featuresets configs > [3]*) a 3 nodes in HA fashion. And it seems almost never passing, when the > containers-multinode fails - see the CI stats page [4]. I've found only a 2 > cases there for the otherwise situation, when containers-multinode fails, > but 3nodes-multinode passes. So cutting off those future failures via the > dependency added, *would* buy us something and allow other jobs to wait less > to commence, by a reasonable price of somewhat extended time of the main > zuul pipeline. I think it makes sense and that extended CI time will not > overhead the RDO CI execution times so much to become a problem. WDYT? 
> I'm not sure it makes sense to add a dependency on other deployment tests. It's going to add additional time to the CI run because the upgrade won't start until well over an hour after the rest of the jobs. The only thing I could think of where this makes more sense is to delay the deployment tests until the pep8/unit tests pass. e.g. let's not burn resources when the code is bad. There might be arguments about lack of information from a deployment when developing things but I would argue that the patch should be vetted properly first in a local environment before taking CI resources. Thanks, -Alex > [0] https://review.openstack.org/#/c/568275/ > [1] https://review.openstack.org/#/c/568278/ > [2] https://review.openstack.org/#/c/568326/ > [3] > https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html > [4] http://tripleo.org/cistatus.html > > * ignore the column 1, it's obsolete, all CI jobs now using configs download > AFAICT... > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lbragstad at gmail.com Mon May 14 20:20:42 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 14 May 2018 15:20:42 -0500 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <1526325202-sup-17@lrrr.local> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> Message-ID: <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> On 05/14/2018 02:24 PM, Doug Hellmann wrote: > Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: >> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: >>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann >> > wrote: >>> >>> Both of those are good ideas. >>> >>> >>> Agree. I like the socket idea a bit more as I can imagine some >>> operators don't want config file changes automatically applied. Do we >>> want to choose one to standardize on or allow each project (or >>> operators, via config) the choice? >> Just to recap, keystone would be listening for when it's configuration >> file changes, and reinitialize the logger if the logging settings >> changed, correct? > Sort of. > > Keystone would need to do something to tell oslo.config to re-load the > config files. In services that rely on oslo.service, this is handled > with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so > for Keystone you would want to do something similar. > > That is, you want to wait for an explicit notification from the operator > that you should reload the config, and not just watch for the file to > change. We could talk about using file modification as a trigger, but > reloading is something that may need to be staged across several > services in order so we chose for the first version to make the trigger > explicit. Relying on watching files will also fail when the modified > data is not in a file (which will be possible when we finish the driver > work described in > http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html). Hmm, these are good points. I wonder if just converting to use oslo.service would be a lower bar then? 
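For reference while we dig, a minimal sketch of the explicit trigger Doug describes might look like the following in a plain Python service. Assumptions: the hook payload layout follows the oslo.config mutable-options docs, and the logging helper is a placeholder rather than an oslo API:

    import signal

    from oslo_config import cfg

    CONF = cfg.CONF

    def _reconfigure_logging(conf):
        # Placeholder only: re-derive log levels from conf here.
        pass

    def _logging_mutate_hook(conf, fresh):
        # 'fresh' maps (group, option_name) -> (old, new) for the mutable
        # options that actually changed; react only to the ones we care about.
        if (None, 'debug') in fresh:
            _reconfigure_logging(conf)

    CONF.register_mutate_hook(_logging_mutate_hook)

    # Reload only on an explicit operator action (SIGHUP here), rather than
    # by watching the config files themselves.
    signal.signal(signal.SIGHUP, lambda sig, frame: CONF.mutate_config_files())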
> >> Would that suffice for the goal? We'd be explicit in checking for >> logging option changes, so modifications to other configuration options >> shouldn't affect anything, should they? > Yes, oslo.config deals with all of that. > > Each configuration option has a flag saying whether or not it is > mutable (defaults to False). When oslo.config is told to "mutate", > it reloads the data sources and reports as warnings any config > options that changed that are not mutable. > > For any options that are marked mutable and have been changed, it > calls the "mutate hooks" that have been registered by calling > ConfigOpts.register_mutate_hook(), passing some information about > which options changed and what changes were made. > > There's a little more information in > https://docs.openstack.org/oslo.config/latest/reference/mutable.html but > I notice that does not cover the hooks. The one for oslo.log is in > http://git.openstack.org/cgit/openstack/oslo.log/tree/oslo_log/log.py#n229 > > For the goal, however, all you need to do is set up some way to trigger > the call to mutate_config_files() and then document that. > >>> I believe adding those things to oslo.service would make them >>> available to all applications.  >>> >>> >>> Not necessarily - this discussion started when the Keystone team was >>> discussing how to implement this, given that keystone doesn't use >>> oslo.service. That said, it should be easy to implement in services >>> that don't want this dependency, so +1. >>> >>> // jim >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From zaitcev at redhat.com Mon May 14 20:43:38 2018 From: zaitcev at redhat.com (Pete Zaitcev) Date: Mon, 14 May 2018 15:43:38 -0500 Subject: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster In-Reply-To: <43BF82CA-5B47-495D-A164-6C8F5E882995@ostorage.com.cn> References: <43BF82CA-5B47-495D-A164-6C8F5E882995@ostorage.com.cn> Message-ID: <20180514154338.75775aa2@lembas.zaitcev.lan> On Thu, 10 May 2018 20:07:03 +0800 Yuxin Wang wrote: > I'm working on a swift project. Our customer cares about S3 compatibility very much. I tested our swift cluster with ceph/s3-tests and analyzed the failed cases. It turns out that lots of the failed cases are related to unique container/bucket. But as we know, containers are just unique in a tenant/project. >[...] > Do you have any ideas on how to do or maybe why not to do? I'd highly appreciate any suggestions. I don't have a recipe, but here's a thought: try making all the accounts that need the interoperability with S3 belong to the same Keystone tenant. As long as you do not give those accounts the owner role (one of those listed in operator_roles=), they will not be able to access each other's buckets (Swift containers).
Unfortunately, I think they will not be able to create any buckets either, but perhaps it's something that can be tweaked - for sure if you're willing to go far enough to make new middleware. -- Pete From me at not.mn Mon May 14 20:55:42 2018 From: me at not.mn (John Dickinson) Date: Mon, 14 May 2018 13:55:42 -0700 Subject: Re: [openstack-dev] [swift][swift3][s3] Keep containers unique among a cluster In-Reply-To: <20180514154338.75775aa2@lembas.zaitcev.lan> References: <43BF82CA-5B47-495D-A164-6C8F5E882995@ostorage.com.cn> <20180514154338.75775aa2@lembas.zaitcev.lan> Message-ID: <284608A9-0427-4294-BF54-9FEB365090D3@not.mn> On 14 May 2018, at 13:43, Pete Zaitcev wrote: > On Thu, 10 May 2018 20:07:03 +0800 > Yuxin Wang wrote: > >> I'm working on a swift project. Our customer cares about S3 >> compatibility very much. I tested our swift cluster with >> ceph/s3-tests and analyzed the failed cases. It turns out that lots >> of the failed cases are related to unique container/bucket. But as we >> know, containers are just unique in a tenant/project. >> [...] >> Do you have any ideas on how to do or maybe why not to do? I'd highly >> appreciate any suggestions. > > I don't have a recipe, but here's a thought: try making all the > accounts > that need the interoperability with S3 belong to the same Keystone > tenant. > As long as you do not give those accounts the owner role (one of those > listed in operator_roles=), they will not be able to access each > other's > buckets (Swift containers). Unfortunately, I think they will not be > able > to create any buckets either, but perhaps it's something that can be > tweaked - for sure if you're willing to go far enough to make new > middleware. > > -- Pete > Pete's idea is interesting. The upstream Swift community has talked about what it will take to support this sort of S3 compatibility, and we've got some pretty good ideas. We'd love your help to implement something. You can find us in #openstack-swift in freenode IRC. As a general overview, swift3 (which has now been integrated into Swift's repo as the "s3api" middleware) maps S3 buckets to a unique (account, container) pair in Swift. This mapping is critical because the Swift account plays a part in Swift's data placement algorithm. This allows you and me to each have an "images" container in the same Swift cluster in our respective accounts. However, AWS doesn't have an exposed "thing" that's analogous to the account. In order to fill in this missing info, we have to map the S3 bucket name to the appropriate (account, container) pair in Swift. Currently, the s3api middleware does this by encoding the account name into the auth token. This way, when you and I are each accessing our own "images" container as a bucket via the S3 API, our requests go to the right place and do the right thing. This mapping technique has a couple of significant limits. First, we can't do the mapping without the token, so unauthenticated (i.e. public) S3 API calls can never work. Second, bucket names are not unique. This second issue may or may not be a bug. In your case, it's an issue, but it may be of benefit to others. Either way, it's a difference from the way S3 works. In order to fix this, we need a new way to do the bucket->(account, container) mapping. One idea is to have a key-value registry. There may be other ways to solve this too, but it's not a trivial change. We'd welcome your help in figuring out the right solution!
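To make the registry idea slightly more concrete, here is a purely hypothetical sketch; nothing like this exists in swift today, and every name in it is made up:

    # Hypothetical external registry mapping a globally unique bucket name
    # to the Swift (account, container) pair it should resolve to.
    BUCKET_REGISTRY = {
        'images-alice': ('AUTH_alice', 'images'),
        'images-bob': ('AUTH_bob', 'images'),
    }

    def resolve_bucket(bucket_name):
        # With a registry like this, the mapping no longer depends on the
        # auth token, so unauthenticated (public) S3 calls could be routed.
        try:
            return BUCKET_REGISTRY[bucket_name]
        except KeyError:
            raise LookupError('NoSuchBucket: %s' % bucket_name)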
--John From jungleboyj at gmail.com Mon May 14 21:15:53 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Mon, 14 May 2018 16:15:53 -0500 Subject: [openstack-dev] [cinder] forum etherpads now available ... Message-ID: <2caa6f0d-2084-27f5-196e-fdecbf10d6f2@gmail.com> All, I have etherpads created for our Cinder related Forum discussions: * Tuesday, 5/22 11:00 to 11:40 - Room 221-222 - Cinder High Availability (HA) Discussion - https://etherpad.openstack.org/p/YVR18-cinder-ha-forum * Tuesday, 5/22 11:50 to 12:30 - Room 221-222 - Multi-attach Introduction and Future Direction - https://etherpad.openstack.org/p/YVR18-cinder-mutiattach-forum * Wednesday, 5/23 9:40 to 10:30 - Room 221-222 - Cinder's Documentation Discussion - https://etherpad.openstack.org/p/YVR18-cinder-documentation-forum We also have the session on using the placement service: * Monday 5/21 16:20 to 17:00 - Planning to use Placement in Cinder - https://etherpad.openstack.org/p/YVR-cinder-placement Please take some time to look at the etherpads before the forum and add your thoughts/questions for discussion. Thank you! Jay Bryant (jungleboyj) -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon May 14 21:42:20 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 May 2018 15:42:20 -0600 Subject: Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: <1526269464.rq3wf8tgg6.tristanC@fedora> References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <1526269464.rq3wf8tgg6.tristanC@fedora> Message-ID: On Sun, May 13, 2018 at 11:50 PM Tristan Cacqueray wrote: > On May 14, 2018 2:44 am, Wesley Hayutin wrote: > [snip] > > I do think it would be helpful to say have a one week change window where > > folks are given the opportunity to preflight check a new image and the > > potential impact on the job workflow the updated image may have. > [snip] > > How about adding a periodic job that setup centos-release-cr in a pre > task? This should highlight issues with up-coming updates: > https://wiki.centos.org/AdditionalResources/Repositories/CR > > -Tristan Thanks for the suggestion Tristan, going to propose using this repo at the next TripleO mtg. Thanks > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon May 14 22:04:11 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 14 May 2018 16:04:11 -0600 Subject: Re: [openstack-dev] [tripleo] Zuul repo insertion in update/upgrade CI In-Reply-To: <057713e4-a28a-3ed5-6938-6dacf9918ea2@redhat.com> References: <057713e4-a28a-3ed5-6938-6dacf9918ea2@redhat.com> Message-ID: On Mon, May 14, 2018 at 11:36 AM Jiří Stránský wrote: > Hi, > > this is mainly for CI folks and whom-it-may-concern. > > Recently we came across the topic of how to enable/disable zuul repos at > various places in the CI jobs. For normal deploy jobs there's no need to > customize, but for update/upgrade jobs there is. It's not entirely > straightforward and there's quite a variety of enable/disable spots and > combinations which can be useful.
> > Even though improvements in this area are not very likely to get > implemented right away, i had some thoughts on the topic so i wanted to > capture them. I put the ideas into an etherpad: > > https://etherpad.openstack.org/p/tripleo-ci-zuul-repo-insertion > > Feel free to put some more thoughts there or ping me on IRC with > anything related. > > > Thanks > > Jirka > > Thanks Jirka!! > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon May 14 22:46:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 18:46:32 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> Message-ID: <1526337894-sup-1085@lrrr.local> Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500: > > On 05/14/2018 02:24 PM, Doug Hellmann wrote: > > Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: > >> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: > >>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann >>> > wrote: > >>> > >>> Both of those are good ideas. > >>> > >>> > >>> Agree. I like the socket idea a bit more as I can imagine some > >>> operators don't want config file changes automatically applied. Do we > >>> want to choose one to standardize on or allow each project (or > >>> operators, via config) the choice? > >> Just to recap, keystone would be listening for when it's configuration > >> file changes, and reinitialize the logger if the logging settings > >> changed, correct? > > Sort of. > > > > Keystone would need to do something to tell oslo.config to re-load the > > config files. In services that rely on oslo.service, this is handled > > with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so > > for Keystone you would want to do something similar. > > > > That is, you want to wait for an explicit notification from the operator > > that you should reload the config, and not just watch for the file to > > change. We could talk about using file modification as a trigger, but > > reloading is something that may need to be staged across several > > services in order so we chose for the first version to make the trigger > > explicit. Relying on watching files will also fail when the modified > > data is not in a file (which will be possible when we finish the driver > > work described in > > http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html). > > Hmm, these are good points. I wonder if just converting to use > oslo.service would be a lower bar then? I thought keystone had moved away from that direction toward deploying only within Apache? I may be out of touch, or have misunderstood something, though. 
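If signals turn out to be off the table for WSGI deployments, one hypothetical alternative is to expose the reload as an explicit, operator-only trigger instead. The sketch below is an assumption only, not an existing keystone or oslo API, and real code would need authorization around it:

    from oslo_config import cfg

    CONF = cfg.CONF

    def admin_reload_config(environ, start_response):
        # Hypothetical admin-only WSGI endpoint; caller authorization is
        # deliberately elided in this sketch.
        CONF.mutate_config_files()
        start_response('202 Accepted', [('Content-Type', 'text/plain')])
        return [b'configuration reload triggered\n']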
Doug From doug at doughellmann.com Mon May 14 22:49:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 14 May 2018 18:49:45 -0400 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local> Message-ID: <1526338155-sup-1361@lrrr.local> Excerpts from Ivan Kolodyazhny's message of 2018-05-14 22:20:42 +0300: > Hi all, > > From the Horizon's perspective, it would be good to support Django 1.11 as > long as we can since it's an LTS release [2]. > Django 2.0 support is also extremely important because of it's the first > step in a python3-only environment and step forward on supporting > next Django 2.2 LTS release which will be released next April. > > We have to be careful to not break existing plugins and deployments by > introducing new Django version requirement. > We need to work more closely with plugins teams to getting everything ready > for Django 2.0+ before we change our requirements.txt. > I don't want to introduce any breaking changes for current plugins so we > need to to be sure that each plugin supports Django 2.0. It means > plugins have to have voting Django 2.0 jobs on their gates at least. I'll > do my best on this effort and will work with plugins teams to do as > much as we can in Rocky timeframe. That sounds like a good plan, thanks Ivan. Doug > > [2] https://www.djangoproject.com/download/ > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Mon, May 14, 2018 at 4:30 PM, Akihiro Motoki wrote: > > > > > > > 2018年5月14日(月) 21:42 Doug Hellmann : > > > >> Excerpts from Akihiro Motoki's message of 2018-05-14 18:52:55 +0900: > >> > 2018年5月12日(土) 3:04 Doug Hellmann : > >> > > >> > > Excerpts from Akihiro Motoki's message of 2018-05-12 00:14:33 +0900: > >> > > > Hi zigo and horizon plugin maintainers, > >> > > > > >> > > > Horizon itself already supports Django 2.0 and horizon unit test > >> covers > >> > > > Django 2.0 with Python 3.5. > >> > > > > >> > > > A question to all is whether we change the upper bound of Django > >> from > >> > > <2.0 > >> > > > to <2.1. > >> > > > My proposal is to bump the upper bound of Django to <2.1 in Rocky-2. > >> > > > (Note that Django 1.11 will continue to be used for python 2.7 > >> > > environment.) > >> > > > >> > > Do we need to cap it at all? We've been trying to express our > >> > > dependencies without caps and rely on the constraints list to > >> > > test using a common version because this offers the most flexibility > >> as > >> > > we move to newer versions over time. > >> > > > >> > > >> > The main reason we cap django version so far is that django minor > >> version > >> > releases > >> > contain some backward incompatible changes and also drop deprecated > >> > features. > >> > A new django minor version release like 1.11 usually breaks horizon and > >> > plugins > >> > as horizon developers are not always checking django deprecations. > >> > >> OK. Having the cap in place makes it more complicated to test > >> upgrading, and then upgrade. Because we no longer synchronize > >> requirements, changing openstack/requirements does not trigger the > >> bot to propose the same change to all of the projects using the > >> dependency. Someone will have to do that by hand in the future, as we > >> are doing with eventlet right now > >> (https://review.openstack.org/#/q/topic:uncap-eventlet). 
> >> > >> Without the cap, we can test the upgrade by proposing a constraint > >> update and running the horizon (and/or plugin) unit tests. When those > >> tests pass, we can then step forward all at once by approving the > >> constraint change. > >> > > > > Thanks for the detail context. > > > > Honestly I am not sure which is better to cap or uncap the django version. > > We can try uncapping now and see what happens in the community. > > > > cross-horizon-(py27|py35) jobs of openstack/requirements checks > > if horizon works with a new version. it works for horizon, but perhaps it > > potentially > > break horizon plugins as it takes time to catch up with such changes. > > On the other hand, a version bump in upper-constraints.txt would be > > a good trigger for horizon plugin maintainers to sync all requirements. > > > > In addition, requirements are not synchronized automatically, > > so it seems not feasible to propose requirements changes per django > > version change. > > > > > >> > >> > > >> > I have a question on uncapping the django version. > >> > How can users/operators know which versions are supported? > >> > Do they need to check upper-constraints.txt? > >> > >> We do tell downstream consumers that the upper-constraints.txt file is > >> the set of things we test with, and that any other combination of > >> packages would need to be tested on their systems separately. > >> > >> > > >> > > > There are several points we should consider: > >> > > > - If we change it in global-requirements.txt, it means Django 2.0 > >> will be > >> > > > used for python3.5 environment. > >> > > > - Not a small number of horizon plugins still do not support Django > >> 2.0, > >> > > so > >> > > > bumping the upper bound to <2.1 will break their py35 tests. > >> > > > - From my experience of Django 2.0 support in some plugins, the > >> required > >> > > > changes are relatively simple like [1]. > >> > > > > >> > > > I created an etherpad page to track Django 2.0 support in horizon > >> > > plugins. > >> > > > https://etherpad.openstack.org/p/django20-support > >> > > > > >> > > > I proposed Django 2.0 support patches to several projects which I > >> think > >> > > are > >> > > > major. > >> > > > # Do not blame me if I don't cover your project :) > >> > > > > >> > > > Thought? > >> > > > >> > > It seems like a good goal for the horizon-plugin author community > >> > > to bring those projects up to date by supporting a current version > >> > > of Django (and any other dependencies), especially as we discuss > >> > > the impending switch over to python-3-first and then python-3-only. > >> > > > >> > > >> > Yes, python 3 support is an important topic. > >> > We also need to switch the default python version in mod_wsgi in > >> DevStack > >> > environment sooner or later. > >> > >> Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor > >> the variable that tells devstack to use Python 3? > >> > > > > Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi > > and libapache2-mod-wsgi-py3) and as a quick look the only difference is a > > module > > specified in LoadModule apache directive. > > I haven't tested it yet, but it seems worth explored. > > > > Akihiro > > > > > >> > > >> > > If this is an area where teams need help, updating that etherpad > >> > > with notes and requests for assistance will help us split up the > >> > > work. > >> > > > >> > > >> > Each team can help testing in Django 2.0 and/or python 3 support. 
> >> > We need to enable corresponding server projects in development > >> environments, > >> > but it is not easy to setup all projects by horizon team. Individual > >> > projects must be > >> > more familiar with their own projects. > >> > I sent several patches, but I actually tested them by unit tests. > >> > > >> > Thanks, > >> > Akihiro > >> > > >> > > > >> > > Doug > >> > > > >> > > > > >> > > > Thanks, > >> > > > Akihiro > >> > > > > >> > > > [1] https://review.openstack.org/#/c/566476/ > >> > > > > >> > > > 2018年5月8日(火) 17:45 Thomas Goirand : > >> > > > > >> > > > > Hi, > >> > > > > > >> > > > > It has been decided that, in Debian, we'll switch to Django 2.0 > >> after > >> > > > > Buster will be released. Buster is to be frozen next February. > >> This > >> > > > > means that we have roughly one more year before Django 1.x goes > >> away. > >> > > > > > >> > > > > Hopefully, Horizon will be ready for it, right? > >> > > > > > >> > > > > Hoping this helps, > >> > > > > Cheers, > >> > > > > > >> > > > > Thomas Goirand (zigo) > >> > > > > > >> > > > > > >> > > ____________________________________________________________ > >> ______________ > >> > > > > OpenStack Development Mailing List (not for usage questions) > >> > > > > Unsubscribe: > >> > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > > >> > > > >> > > ____________________________________________________________ > >> ______________ > >> > > OpenStack Development Mailing List (not for usage questions) > >> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > >> unsubscribe > >> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > >> > >> ____________________________________________________________ > >> ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > >> unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > From lbragstad at gmail.com Mon May 14 23:45:49 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 14 May 2018 18:45:49 -0500 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <1526337894-sup-1085@lrrr.local> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> Message-ID: <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> On 05/14/2018 05:46 PM, Doug Hellmann wrote: > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500: >> On 05/14/2018 02:24 PM, Doug Hellmann wrote: >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: >>>> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: >>>>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann >>>> > wrote: >>>>> >>>>> Both of those are good ideas. >>>>> >>>>> >>>>> Agree. 
I like the socket idea a bit more as I can imagine some >>>>> operators don't want config file changes automatically applied. Do we >>>>> want to choose one to standardize on or allow each project (or >>>>> operators, via config) the choice? >>>> Just to recap, keystone would be listening for when it's configuration >>>> file changes, and reinitialize the logger if the logging settings >>>> changed, correct? >>> Sort of. >>> >>> Keystone would need to do something to tell oslo.config to re-load the >>> config files. In services that rely on oslo.service, this is handled >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so >>> for Keystone you would want to do something similar. >>> >>> That is, you want to wait for an explicit notification from the operator >>> that you should reload the config, and not just watch for the file to >>> change. We could talk about using file modification as a trigger, but >>> reloading is something that may need to be staged across several >>> services in order so we chose for the first version to make the trigger >>> explicit. Relying on watching files will also fail when the modified >>> data is not in a file (which will be possible when we finish the driver >>> work described in >>> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html). >> Hmm, these are good points. I wonder if just converting to use >> oslo.service would be a lower bar then? > I thought keystone had moved away from that direction toward deploying > only within Apache? I may be out of touch, or have misunderstood > something, though. Oh - never mind... For some reason I was thinking there was a way to use oslo.service and Apache. Either way, I'll do some more digging before tomorrow. I have this as a topic on keystone's meeting agenda to go through our options [0]. If we do come up with something that doesn't involve intercepting signals (specifically for the reason noted by Kristi and Jim in the mod_wsgi documentation), should the community goal be updated to include that option? Just thinking that we can't be the only service in this position. [0] https://etherpad.openstack.org/p/keystone-weekly-meeting > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Tue May 15 01:33:48 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 15 May 2018 11:33:48 +1000 Subject: [openstack-dev] [ironic][stable] Re-adding Jim Rollenhagen to ironic stable maintenance team? In-Reply-To: References: <9ff7365d-03b7-2530-f315-8f6478bcf264@redhat.com> Message-ID: <20180515013347.GB8215@thor.bakeyournoodle.com> On Fri, May 11, 2018 at 08:37:43AM -0400, Julia Kreger wrote: > On Fri, May 11, 2018 at 8:20 AM, Dmitry Tantsur wrote: > > Hi, > [trim] > >> If there are no objections, I'll re-add him next week. > > > > > > I don't remember if we actually can add people to these teams or it has to > > be done by the main stable team. > > > I'm fairly sure I'm the person who deleted him from the group in the > first place :( As such, I think I has the magical powers... 
maybe > ;) I'm not sure you do have access to do that as the group is owned by stable-main-core. That being said I've re-added Jim. Technically it's next week now :) Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hejianle at unitedstack.com Tue May 15 01:50:08 2018 From: hejianle at unitedstack.com (=?utf-8?B?5L2V5YGl5LmQ?=) Date: Tue, 15 May 2018 09:50:08 +0800 Subject: [openstack-dev] openstack-dev] [nova] Cannot live migrattion, because error:libvirtError: the CPU is incompatible with host CPU:Host CPU does not provide required features: cmt, mbm_total, mbm_local Message-ID: Hi Chris, the following is the information from my machines. virsh capabilities from source host (XML output scrubbed by the list archive; remaining fields: arch x86_64, CPU model Broadwell, vendor Intel) virsh capabilities from destination host (XML output scrubbed; remaining fields: arch x86_64, CPU model Broadwell, vendor Intel) libvirt section in nova.conf from source host: [libvirt] inject_partition=-2 inject_password=False disk_cachemodes=network=writeback cpu_mode=host-model virt_type=kvm inject_key=False images_rbd_pool=vms rbd_secret_uuid=43518166-15c4-420f-aa11-e0a681e0e459 images_type=rbd images_rbd_ceph_conf=/etc/ceph/ceph.conf hw_disk_discard=unmap rbd_user=admin live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity libvirt section in nova.conf from destination host: [libvirt] inject_partition=-2 inject_password=False disk_cachemodes=network=writeback cpu_mode=host-model virt_type=kvm inject_key=False images_rbd_pool=vms rbd_secret_uuid=43518166-15c4-420f-aa11-e0a681e0e459 images_type=rbd images_rbd_ceph_conf=/etc/ceph/ceph.conf hw_disk_discard=unmap rbd_user=admin live_migration_uri=qemu+ssh://nova_migration@%s/system?keyfile=/etc/nova/migration/identity -------------- next part -------------- An HTML attachment was scrubbed... URL: From gord at live.ca Tue May 15 01:58:00 2018 From: gord at live.ca (gordon chung) Date: Tue, 15 May 2018 01:58:00 +0000 Subject: Re: [openstack-dev] [Telemetry] [ceilometer] Ceilometer-file-publisher-compression-csv-format In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E96339745@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E9630A864@ALA-MBC.corp.ad.wrs.com> <47EFB32CD8770A4D9590812EE28C977E96339745@ALA-MBD.corp.ad.wrs.com> Message-ID: On 2018-05-14 3:26 PM, Kwan, Louie wrote: > Hi All, > > Is there a weekly meeting for Telemetry? I would like to discuss what we can do as the next step for the following review: > > https://review.openstack.org/#/c/562768/ > > Pinged in IRC a few times; please advise on the next step. > there are no meetings given sparse participation. regardless, i've added a review. cheers, -- gord From madhuri.kumari at intel.com Tue May 15 04:43:18 2018 From: madhuri.kumari at intel.com (Kumari, Madhuri) Date: Tue, 15 May 2018 04:43:18 +0000 Subject: Re: [openstack-dev] [Zun] Add Deepak Mourya to the core team In-Reply-To: References: Message-ID: <0512CBBECA36994BAA14C7FEDE986CA60429CB04@BGSMSX102.gar.corp.intel.com> Welcome to the team, Deepak! Regards, Madhuri From: Hongbin Lu [mailto:hongbin034 at gmail.com] Sent: Monday, May 14, 2018 10:00 AM To: OpenStack Development Mailing List (not for usage questions) ; deepak.mourya at nectechnologies.in Subject: [openstack-dev] [Zun] Add Deepak Mourya to the core team Hi all, This is an announcement of the following change on the Zun core reviewers team: + Deepak Mourya (mourya007) Deepak has been actively involved in Zun for several months.
He has submitted several code patches to Zun, all of which are useful features or bug fixes. In particular, I would like to highlight that he has implemented the availability zone API, which is a significant contribution to the Zun feature set. Based on his significant contributions, I would like to propose him as a core reviewer of Zun. This proposal has been voted on within the existing core team and was unanimously approved. Welcome to the core team, Deepak. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mizuno.shintaro at lab.ntt.co.jp Tue May 15 06:06:29 2018 From: mizuno.shintaro at lab.ntt.co.jp (Shintaro Mizuno) Date: Tue, 15 May 2018 15:06:29 +0900 Subject: [openstack-dev] [Forum] "DPDK/SR-IOV NFV Operational issues and way forward" session etherpad Message-ID: <0eda6a49-352d-5c04-da87-3f1ae72516ac@lab.ntt.co.jp> Hi, I have created an etherpad page for the "DPDK/SR-IOV NFV Operational issues and way forward" session at the Vancouver Forum [1]. It will take place on Wed 23, 11:50am - 12:30pm Vancouver Convention Centre West - Level Two - Room 221-222 If you are using/testing DPDK/SR-IOV for NFV workloads and are interested in discussing their pros/cons and possible next steps for NFV operators and developers, please come join the session. Please also add your comment/topic proposals to the etherpad beforehand. [1] https://etherpad.openstack.org/p/YVR-dpdk-sriov-way-forward Any input is highly appreciated. Regards, Shintaro -- Shintaro MIZUNO (水野伸太郎) NTT Software Innovation Center TEL: 0422-59-4977 E-mail: mizuno.shintaro at lab.ntt.co.jp From jichenjc at cn.ibm.com Tue May 15 06:27:12 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 15 May 2018 14:27:12 +0800 Subject: Re: [openstack-dev] [nova] review runway status In-Reply-To: References: Message-ID: Thanks for sharing. The z/VM driver spec review is marked as END DATE: 2018-05-15. Thanks to the couple of folks who helped a lot on the review; the patch sets still need more review activity, so may I apply to extend the end date for the runway? Thanks a lot Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: melanie witt To: "OpenStack Development Mailing List (not for usage questions)" Date: 05/15/2018 12:33 AM Subject: [openstack-dev] [nova] review runway status Howdy everyone, This is just a brief status about the blueprints currently occupying review runways [0] and an ask for the nova-core team to give these reviews priority for their code review focus.
* Add z/VM driver https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky (jichen) [END DATE: 2018-05-15] spec amendment https://review.openstack.org/562154 and implementation series starting at https://review.openstack.org/523387 * Local disk serial numbers https://blueprints.launchpad.net/nova/+spec/local-disk-serial-numbers (mdbooth) [END DATE: 2018-05-16] series starting at https://review.openstack.org/526346 * PowerVM Driver (esberglu) [END DATE: 2018-05-28] * Snapshot https://blueprints.launchpad.net/nova/+spec/powervm-snapshot: https://review.openstack.org/#/c/543023/ * DiskAdapter parent class https://blueprints.launchpad.net/nova/+spec/powervm-localdisk: https://review.openstack.org/#/c/549053/ * Localdisk https://blueprints.launchpad.net/nova/+spec/powervm-localdisk: https://review.openstack.org/#/c/549300/ Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-runways-rocky __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From sgolovat at redhat.com Tue May 15 07:57:57 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Tue, 15 May 2018 09:57:57 +0200 Subject: Re: [openstack-dev] [tripleo] tripleo upstream gate outtage, was: -> gate jobs impacted RAX yum mirror In-Reply-To: References: <20180513152435.x2iguepehk6fblbr@yuggoth.org> <1526269464.rq3wf8tgg6.tristanC@fedora> Message-ID: Wesley, For Ubuntu, I suggest enabling the 'proposed' repo to catch problems before a package is moved to 'updates'. On Mon, May 14, 2018 at 11:42 PM, Wesley Hayutin wrote: > > > On Sun, May 13, 2018 at 11:50 PM Tristan Cacqueray > wrote: >> >> On May 14, 2018 2:44 am, Wesley Hayutin wrote: >> [snip] >> > I do think it would be helpful to say have a one week change window >> > where >> > folks are given the opportunity to preflight check a new image and the >> > potential impact on the job workflow the updated image may have. >> [snip] >> >> How about adding a periodic job that setup centos-release-cr in a pre >> task? This should highlight issues with up-coming updates: >> https://wiki.centos.org/AdditionalResources/Repositories/CR >> >> -Tristan > > > Thanks for the suggestion Tristan, going to propose using this repo at the > next TripleO mtg.
> > Thanks > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From thierry at openstack.org Tue May 15 08:37:04 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 15 May 2018 10:37:04 +0200 Subject: Re: [openstack-dev] [tc][goals] tracking status of old goals for new projects In-Reply-To: <1525700930-sup-9125@lrrr.local> References: <1525700930-sup-9125@lrrr.local> Message-ID: <5acb8af5-3486-91ed-ab2d-7e6d0aefcf00@openstack.org> Doug Hellmann wrote: > There is a patch to update the Python 3.5 goal for Kolla [1]. While > I'm glad to see the work happening, the change adds a new deliverable > to an old goal, and it isn’t clear whether we want to use that > approach for tracking goal work indefinitely. I see a few options. > > 1. We could update the existing document. > > 2. We could set up stories in storyboard like we are doing for newer > goals. > > 3. We could do nothing to record the work related to the goal. > > I like option 2, because it means we will be consistent with future > tracking data and we end up with fewer changes in the governance repo > (which was the reason for moving to storyboard in the first place). > > What do others think? I don't have a strong opinion, small preference for (2). At the end of the cycle, the goal becomes just another story with leftover tasks. -- Thierry Carrez (ttx) From thierry at openstack.org Tue May 15 08:38:36 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 15 May 2018 10:38:36 +0200 Subject: Re: [openstack-dev] [tc] Technical Committee Update, 14 May In-Reply-To: <1526306269-sup-420@lrrr.local> References: <1526306269-sup-420@lrrr.local> Message-ID: <85876b36-3ae5-4e0a-c94f-bdf34c9f64ba@openstack.org> Doug Hellmann wrote: > We will also hold a retrospective for the TC as a team on Monday > at the Forum. Please be prepared to discuss things you think are > going well, things you think we need to change, items from our > backlog that you would like to work on, etc. [10] > > [10] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective You mean Thursday, right? -- Thierry Carrez (ttx) From bdobreli at redhat.com Tue May 15 08:43:10 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 10:43:10 +0200 Subject: Re: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID: On 5/14/18 9:15 PM, Sagi Shnaidman wrote: > Hi, Bogdan > > I like the idea with the undercloud job. Actually, if the undercloud fails, I'd > stop all other jobs, because it doesn't make sense to run them. Seeing > the same failure in 10 jobs doesn't add too much. So maybe adding > the undercloud job as a dependency for all multinode jobs would be a great idea. I like that idea, I'll add another patch in the topic then. > I think it's also worth checking how long it will delay jobs. Will all > jobs wait while the undercloud job is running? Or will they be aborted when > the undercloud job fails?
That is a good question for the openstack-infra folks developing zuul :) But we could just try it and see how it works; happily, zuul v3 allows doing that just in the scope of the proposed patches! My expectation is that all jobs get delayed (and I mean the main zuul pipeline execution time here) by the average run time of the undercloud deploy job, ~80 min, which hopefully should not be a big deal given that there is a separate RDO CI pipeline running in parallel, which normally extends the overall time anyway :) All the more so given the high chance of additional 'recheck rdo' runs we can observe these days for patches on review. I wish we could introduce inter-pipeline dependencies (zuul CI <-> RDO CI) for those as well... > > However I'm very sceptical about multinode containers and scenarios > jobs, they could fail because of very different reasons, like race > conditions in product or infra issues. Having skipping some of them will > lead to more rechecks from devs trying to discover all problems in a > row, which will delay the development process significantly. Right, I roughly estimated that the delay to the main zuul pipeline execution time might be ~2.5h, which is not good. We could live with it if it were only ~1h, like it is for the undercloud containers job dependency example. > > Thanks > > > On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya > wrote: > > An update for your review please folks > > Bogdan Dobrelya > writes: > > Hello. > As Zuul documentation [0] explains, the names "check", > "gate", and > "post" may be altered for more advanced pipelines. Is it > doable to > introduce, for particular openstack projects, multiple check > stages/steps as check-1, check-2 and so on? And is it > possible to make > the consequent steps reusing environments from the previous > steps > finished with? > > Narrowing down to tripleo CI scope, the problem I'd want we > to solve > with this "virtual RFE", and using such multi-staged check > pipelines, > is reducing (ideally, de-duplicating) some of the common > steps for > existing CI jobs. > > > What you're describing sounds more like a job graph within a > pipeline. > See: > https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies > > for how to configure a job to run only after another job has > completed. > There is also a facility to pass data between such jobs. > > ... (skipped) ... > > Creating a job graph to have one job use the results of the > previous job > can make sense in a lot of cases.  It doesn't always save *time* > however. > > It's worth noting that in OpenStack's Zuul, we have made an explicit > choice not to have long-running integration jobs depend on > shorter pep8 > or tox jobs, and that's because we value developer time more > than CPU > time.  We would rather run all of the tests and return all of the > results so a developer can fix all of the errors as quickly as > possible, > rather than forcing an iterative workflow where they have to fix > all the > whitespace issues before the CI system will tell them which > actual tests > broke. > > -Jim > > > I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines > for undercloud deployments vs upgrades testing (and some more). > Given that those undercloud jobs have not so high fail rates though, > I think Emilien is right in his comments and those would buy us nothing. > > From the other side, what do you think folks of making the > tripleo-ci-centos-7-3nodes-multinode depend on > tripleo-ci-centos-7-containers-multinode [2]? 
The former seems quite > faily and long running, and is non-voting. It deploys (see > featuresets configs [3]*) a 3 nodes in HA fashion. And it seems > almost never passing, when the containers-multinode fails - see the > CI stats page [4]. I've found only a 2 cases there for the otherwise > situation, when containers-multinode fails, but 3nodes-multinode > passes. So cutting off those future failures via the dependency > added, *would* buy us something and allow other jobs to wait less to > commence, by a reasonable price of somewhat extended time of the > main zuul pipeline. I think it makes sense and that extended CI time > will not overhead the RDO CI execution times so much to become a > problem. WDYT? > > [0] https://review.openstack.org/#/c/568275/ > > [1] https://review.openstack.org/#/c/568278/ > > [2] https://review.openstack.org/#/c/568326/ > > [3] > https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html > > [4] http://tripleo.org/cistatus.html > > * ignore the column 1, it's obsolete, all CI jobs now using configs > download AFAICT... > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Tue May 15 08:54:37 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 10:54:37 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> On 5/14/18 10:06 PM, Alex Schultz wrote: > On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya wrote: >> An update for your review please folks >> >>> Bogdan Dobrelya writes: >>> >>>> Hello. >>>> As Zuul documentation [0] explains, the names "check", "gate", and >>>> "post" may be altered for more advanced pipelines. Is it doable to >>>> introduce, for particular openstack projects, multiple check >>>> stages/steps as check-1, check-2 and so on? And is it possible to make >>>> the consequent steps reusing environments from the previous steps >>>> finished with? >>>> >>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>>> with this "virtual RFE", and using such multi-staged check pipelines, >>>> is reducing (ideally, de-duplicating) some of the common steps for >>>> existing CI jobs. >>> >>> >>> What you're describing sounds more like a job graph within a pipeline. >>> See: >>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >>> for how to configure a job to run only after another job has completed. >>> There is also a facility to pass data between such jobs. >>> >>> ... (skipped) ... >>> >>> Creating a job graph to have one job use the results of the previous job >>> can make sense in a lot of cases. It doesn't always save *time* >>> however. 
>>> >>> It's worth noting that in OpenStack's Zuul, we have made an explicit >>> choice not to have long-running integration jobs depend on shorter pep8 >>> or tox jobs, and that's because we value developer time more than CPU >>> time. We would rather run all of the tests and return all of the >>> results so a developer can fix all of the errors as quickly as possible, >>> rather than forcing an iterative workflow where they have to fix all the >>> whitespace issues before the CI system will tell them which actual tests >>> broke. >>> >>> -Jim >> >> >> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for >> undercloud deployments vs upgrades testing (and some more). Given that those >> undercloud jobs have not so high fail rates though, I think Emilien is right >> in his comments and those would buy us nothing. >> >> From the other side, what do you think folks of making the >> tripleo-ci-centos-7-3nodes-multinode depend on >> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite faily >> and long running, and is non-voting. It deploys (see featuresets configs >> [3]*) a 3 nodes in HA fashion. And it seems almost never passing, when the >> containers-multinode fails - see the CI stats page [4]. I've found only a 2 >> cases there for the otherwise situation, when containers-multinode fails, >> but 3nodes-multinode passes. So cutting off those future failures via the >> dependency added, *would* buy us something and allow other jobs to wait less >> to commence, by a reasonable price of somewhat extended time of the main >> zuul pipeline. I think it makes sense and that extended CI time will not >> overhead the RDO CI execution times so much to become a problem. WDYT? >> > > I'm not sure it makes sense to add a dependency on other deployment > tests. It's going to add additional time to the CI run because the > upgrade won't start until well over an hour after the rest of the The things are not so simple. There is also a significant time-to-wait-in-queue jobs start delay. And it takes probably even longer than the time to execute jobs. And that delay is a function of available HW resources and zuul queue length. And the proposed change affects those parameters as well, assuming jobs with failed dependencies won't run at all. So we could expect longer execution times compensated with shorter wait times! I'm not sure how to estimate that tho. You folks have all numbers and knowledge, let's use that please. > jobs. The only thing I could think of where this makes more sense is > to delay the deployment tests until the pep8/unit tests pass. e.g. > let's not burn resources when the code is bad. There might be > arguments about lack of information from a deployment when developing > things but I would argue that the patch should be vetted properly > first in a local environment before taking CI resources. I support this idea as well, though I'm sceptical about having that blessed in the end :) I'll add a patch though. > > Thanks, > -Alex > >> [0] https://review.openstack.org/#/c/568275/ >> [1] https://review.openstack.org/#/c/568278/ >> [2] https://review.openstack.org/#/c/568326/ >> [3] >> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html >> [4] http://tripleo.org/cistatus.html >> >> * ignore the column 1, it's obsolete, all CI jobs now using configs download >> AFAICT... 
>> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From zigo at debian.org Tue May 15 08:58:42 2018 From: zigo at debian.org (Thomas Goirand) Date: Tue, 15 May 2018 10:58:42 +0200 Subject: [openstack-dev] [horizon] Scheduling switch to django >= 2.0 In-Reply-To: References: <276a6199-158c-bb7d-7f7d-f04de9a52e06@debian.org> <1526061568-sup-5500@lrrr.local> <1526301210-sup-5803@lrrr.local> Message-ID: <4e301490-a661-dd9e-fc71-d9177c86d7e5@debian.org> On 05/14/2018 03:30 PM, Akihiro Motoki wrote: > Is Python 3 ever used for mod_wsgi? Does the WSGI setup code honor > the variable that tells devstack to use Python 3? > > > Ubuntu 16.04 provides py2 and py3 versions of mod_wsgi (libapache2-mod-wsgi > and libapache2-mod-wsgi-py3) and as a quick look the only difference is > a module > specified in LoadModule apache directive. > I haven't tested it yet, but it seems worth explored. > > Akihiro libapache2-mod-wsgi-py3 is what's in use in all Debian packages for OpenStack, and it works well, including for Horizon. Cheers, Thomas Goirand (zigo) From bdobreli at redhat.com Tue May 15 09:39:35 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 11:39:35 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> Message-ID: <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> Added a few more patches [0], [1] by the discussion results. PTAL folks. Wrt remaining in the topic, I'd propose to give it a try and revert it, if it proved to be worse than better. Thank you for feedback! The next step could be reusing artifacts, like DLRN repos and containers built for patches and hosted undercloud, in the consequent pipelined jobs. But I'm not sure how to even approach that. [0] https://review.openstack.org/#/c/568536/ [1] https://review.openstack.org/#/c/568543/ On 5/15/18 10:54 AM, Bogdan Dobrelya wrote: > On 5/14/18 10:06 PM, Alex Schultz wrote: >> On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya >> wrote: >>> An update for your review please folks >>> >>>> Bogdan Dobrelya writes: >>>> >>>>> Hello. >>>>> As Zuul documentation [0] explains, the names "check", "gate", and >>>>> "post"  may be altered for more advanced pipelines. Is it doable to >>>>> introduce, for particular openstack projects, multiple check >>>>> stages/steps as check-1, check-2 and so on? And is it possible to make >>>>> the consequent steps reusing environments from the previous steps >>>>> finished with? >>>>> >>>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>>>> with this "virtual RFE", and using such multi-staged check pipelines, >>>>> is reducing (ideally, de-duplicating) some of the common steps for >>>>> existing CI jobs. 
>>>> >>>> >>>> What you're describing sounds more like a job graph within a pipeline. >>>> See: >>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >>>> >>>> for how to configure a job to run only after another job has completed. >>>> There is also a facility to pass data between such jobs. >>>> >>>> ... (skipped) ... >>>> >>>> Creating a job graph to have one job use the results of the previous >>>> job >>>> can make sense in a lot of cases.  It doesn't always save *time* >>>> however. >>>> >>>> It's worth noting that in OpenStack's Zuul, we have made an explicit >>>> choice not to have long-running integration jobs depend on shorter pep8 >>>> or tox jobs, and that's because we value developer time more than CPU >>>> time.  We would rather run all of the tests and return all of the >>>> results so a developer can fix all of the errors as quickly as >>>> possible, >>>> rather than forcing an iterative workflow where they have to fix all >>>> the >>>> whitespace issues before the CI system will tell them which actual >>>> tests >>>> broke. >>>> >>>> -Jim >>> >>> >>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for >>> undercloud deployments vs upgrades testing (and some more). Given >>> that those >>> undercloud jobs have not so high fail rates though, I think Emilien >>> is right >>> in his comments and those would buy us nothing. >>> >>>  From the other side, what do you think folks of making the >>> tripleo-ci-centos-7-3nodes-multinode depend on >>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite >>> faily >>> and long running, and is non-voting. It deploys (see featuresets configs >>> [3]*) a 3 nodes in HA fashion. And it seems almost never passing, >>> when the >>> containers-multinode fails - see the CI stats page [4]. I've found >>> only a 2 >>> cases there for the otherwise situation, when containers-multinode >>> fails, >>> but 3nodes-multinode passes. So cutting off those future failures via >>> the >>> dependency added, *would* buy us something and allow other jobs to >>> wait less >>> to commence, by a reasonable price of somewhat extended time of the main >>> zuul pipeline. I think it makes sense and that extended CI time will not >>> overhead the RDO CI execution times so much to become a problem. WDYT? >>> >> >> I'm not sure it makes sense to add a dependency on other deployment >> tests. It's going to add additional time to the CI run because the >> upgrade won't start until well over an hour after the rest of the > > The things are not so simple. There is also a significant > time-to-wait-in-queue jobs start delay. And it takes probably even > longer than the time to execute jobs. And that delay is a function of > available HW resources and zuul queue length. And the proposed change > affects those parameters as well, assuming jobs with failed dependencies > won't run at all. So we could expect longer execution times compensated > with shorter wait times! I'm not sure how to estimate that tho. You > folks have all numbers and knowledge, let's use that please. > >> jobs.  The only thing I could think of where this makes more sense is >> to delay the deployment tests until the pep8/unit tests pass.  e.g. >> let's not burn resources when the code is bad. There might be >> arguments about lack of information from a deployment when developing >> things but I would argue that the patch should be vetted properly >> first in a local environment before taking CI resources. 
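(For reference, the "vet cheap tests first" ordering suggested above can be declared per-project in the check pipeline; a minimal sketch, where openstack-tox-pep8 is a standard job and the tripleo job name is only illustrative:

    - project:
        check:
          jobs:
            - openstack-tox-pep8
            - tripleo-ci-centos-7-containers-multinode:
                dependencies:
                  - openstack-tox-pep8

A deployment job declared this way is skipped outright whenever the pep8 job fails, at the price of starting only after the pep8 job has finished.)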
> > I support this idea as well, though I'm sceptical about having that > blessed in the end :) I'll add a patch though. > >> >> Thanks, >> -Alex >> >>> [0] https://review.openstack.org/#/c/568275/ >>> [1] https://review.openstack.org/#/c/568278/ >>> [2] https://review.openstack.org/#/c/568326/ >>> [3] >>> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html >>> [4] http://tripleo.org/cistatus.html >>> >>> * ignore the column 1, it's obsolete, all CI jobs now using configs download >>> AFAICT... >>> >>> -- >>> Best regards, >>> Bogdan Dobrelya, >>> Irc #bogdando >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> -- Best regards, Bogdan Dobrelya, Irc #bogdando

From sfinucan at redhat.com Tue May 15 10:44:11 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 15 May 2018 11:44:11 +0100 Subject: [openstack-dev] psycopg2 wheel packaging issues Message-ID: <1526381051.3915.11.camel@redhat.com>

I imagine most people have been seeing warnings like the one below raised by various openstack packages recently:

.tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 wheel package will be renamed from release 2.8; in order to keep installing from binary please use "pip install psycopg2-binary" instead. For details see: .

Based on this warning, I had done what seemed to be the obvious thing to do and proposed adding psycopg2-binary to the list of global requirements [1]. This would allow us to replace all references to psycopg2 with psycopg2-binary in individual projects. However, upon further investigation it seems this is not really an option since the two packages exist in the same namespace and will clobber each other. I've now abandoned this patch.

Does anyone with stronger Python packaging-fu than I have a better solution for the psycopg2 folks? There's a detailed description of why this was necessary on GitHub [2] along with some potential resolutions, none of which seem to be acceptable. If nothing better is possible, it seems we'll simply have to live with (or silence) these warnings in psycopg2 2.7.x and start installing libpq again once 2.8 is released.

Cheers, Stephen

[1] https://review.openstack.org/#/c/561924/ [2] https://github.com/psycopg/psycopg2/issues/674

From ifat.afek at nokia.com Tue May 15 11:20:23 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 15 May 2018 11:20:23 +0000 Subject: [openstack-dev] [vitrage] etherpads for Vitrage forum sessions Message-ID: <8DBE4D2A-A2E4-4426-8E00-FEC9AA4E6048@nokia.com>

Hi, I created etherpads for Vitrage forum sessions: - Advanced RCA use cases - taking Vitrage to the next level: https://etherpad.openstack.org/p/YVR-vitrage-advanced-use-cases - Vitrage RCA over K8s. Pets and Cattle - Monitor each cow? : https://etherpad.openstack.org/p/YVR-vitrage-rca-over-k8s You are welcome to comment and propose more topics for discussion. 
Thanks, Ifat

From doug at doughellmann.com Tue May 15 11:24:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 15 May 2018 07:24:00 -0400 Subject: [openstack-dev] psycopg2 wheel packaging issues In-Reply-To: <1526381051.3915.11.camel@redhat.com> References: <1526381051.3915.11.camel@redhat.com> Message-ID: <1526383059-sup-777@lrrr.local>

Excerpts from Stephen Finucane's message of 2018-05-15 11:44:11 +0100: > I imagine most people have been seeing warnings like the one below > raised by various openstack packages recently: > > .tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 > wheel package will be renamed from release 2.8; in order to keep installing from binary > please use "pip install psycopg2-binary" instead. For details see: > . > > Based on this warning, I had done what seemed to be the obvious thing > to do and proposed adding psycopg2-binary to the list of global > requirements [1]. This would allow us to replace all references to > psycopg2 with psycopg2-binary in individual projects. However, upon > further investigation it seems this is not really an option since the > two packages exist in the same namespace and will clobber each other. > I've now abandoned this patch. > > Does anyone with stronger Python packaging-fu than I have a better > solution for the psycopg2 folks? There's a detailed description of why > this was necessary on GitHub [2] along with some potential resolutions, > none of which seem to be acceptable. If nothing better is possible, it > seems we'll simply have to live with (or silence) these warnings in > psycopg2 2.7.x and start installing libpq again once 2.8 is released. > > Cheers, > Stephen > > [1] https://review.openstack.org/#/c/561924/ > [2] https://github.com/psycopg/psycopg2/issues/674 >

Bundling an SSL library seems like a particularly bad situation, but if its ABI isn't stable it may be all they can do.

Perhaps some of the folks in the community who actually use Postgresql can get involved with helping the upstream maintainers of psycopg and libpq sort things out.

In the meantime, is there any reason we can't just continue to install psycopg2 from source in our gate jobs after 2.8? If the wheel packages for psycopg2 2.7.x are bad perhaps we can come up with a way to pass --no-binary when installing it, but it's not clear if we need to. Does the bug affect us?

Doug

From bdobreli at redhat.com Tue May 15 12:07:56 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 14:07:56 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID:

Let me clarify the problem I want to solve with pipelines.

It is getting *hard* to develop things and move patches to the Happy End (merged): - Patches wait too long for CI jobs to start. It should be minutes and not hours of waiting. - If a patch fails a job w/o a good reason, the consequent recheck operation repeats waiting all over again.

How pipelines may help solve it? Pipelines only alleviate, not solve the problem of waiting. We only want to build pipelines for the main zuul check process, omitting gating and RDO CI (for now).

There are two cases to consider: - A patch succeeds all checks - A patch fails a check with dependencies

The latter cases benefit us the most, when pipelines are designed like it is proposed here. So that any jobs expected to fail, when a dependency fails, will be omitted from execution. 
This saves HW resources and zuul queue places a lot, making it available for other patches and allowing those to have CI jobs started faster (less waiting!). When we have "recheck storms", like because of some known intermittent side issue, that outcome is multiplied by the scale of the recheck storm, and delivers even better and absolutely amazing results :) The zuul queue will not grow insanely, overwhelmed by multiple clones of rechecked jobs highly likely to fail, blocking other patches that might have a chance to pass checks as they are not affected by that intermittent issue.

And for the first case, when a patch succeeds, it takes some extended time, and that is the price to pay. How much time it takes to finish in a pipeline fully depends on implementation.

The effectiveness could only be measured with numbers extracted from elastic search data, like average time to wait for a job to start, success vs fail execution time percentiles for a job, average number of rechecks, recheck storms history et al. I don't have that data and don't know how to get it. Any help with that is very appreciated and could really help to move the proposed patches forward or decline it. And we could then compare "before" and "after" as well.

I hope that explains the problem scope and the methodology to address that.

On 5/14/18 6:15 PM, Bogdan Dobrelya wrote: > An update for your review please folks > >> Bogdan Dobrelya writes: >> >>> Hello. >>> As Zuul documentation [0] explains, the names "check", "gate", and >>> "post" may be altered for more advanced pipelines. Is it doable to >>> introduce, for particular openstack projects, multiple check >>> stages/steps as check-1, check-2 and so on? And is it possible to make >>> the consequent steps reusing environments from the previous steps >>> finished with? >>> >>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>> with this "virtual RFE", and using such multi-staged check pipelines, >>> is reducing (ideally, de-duplicating) some of the common steps for >>> existing CI jobs. >> >> What you're describing sounds more like a job graph within a pipeline. >> See: >> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >> for how to configure a job to run only after another job has completed. >> There is also a facility to pass data between such jobs. >> >> ... (skipped) ... >> >> Creating a job graph to have one job use the results of the previous job >> can make sense in a lot of cases. It doesn't always save *time* >> however. >> >> It's worth noting that in OpenStack's Zuul, we have made an explicit >> choice not to have long-running integration jobs depend on shorter pep8 >> or tox jobs, and that's because we value developer time more than CPU >> time. We would rather run all of the tests and return all of the >> results so a developer can fix all of the errors as quickly as possible, >> rather than forcing an iterative workflow where they have to fix all the >> whitespace issues before the CI system will tell them which actual tests >> broke. >> >> -Jim > > I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for > undercloud deployments vs upgrades testing (and some more). Given that > those undercloud jobs have not so high fail rates though, I think > Emilien is right in his comments and those would buy us nothing. > > From the other side, what do you think folks of making the > tripleo-ci-centos-7-3nodes-multinode depend on > tripleo-ci-centos-7-containers-multinode [2]? 
The former seems quite > faily and long running, and is non-voting. It deploys (see featuresets > configs [3]*) a 3 nodes in HA fashion. And it seems almost never > passing, when the containers-multinode fails - see the CI stats page > [4]. I've found only a 2 cases there for the otherwise situation, when > containers-multinode fails, but 3nodes-multinode passes. So cutting off > those future failures via the dependency added, *would* buy us something > and allow other jobs to wait less to commence, by a reasonable price of > somewhat extended time of the main zuul pipeline. I think it makes sense > and that extended CI time will not overhead the RDO CI execution times > so much to become a problem. WDYT? > > [0] https://review.openstack.org/#/c/568275/ > [1] https://review.openstack.org/#/c/568278/ > [2] https://review.openstack.org/#/c/568326/ > [3] > https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html > > [4] http://tripleo.org/cistatus.html > > * ignore the column 1, it's obsolete, all CI jobs now using configs > download AFAICT... > -- Best regards, Bogdan Dobrelya, Irc #bogdando From gkotton at vmware.com Tue May 15 12:24:16 2018 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 15 May 2018 12:24:16 +0000 Subject: [openstack-dev] [neutron] Bug deputy update Message-ID: Hi, A few minor bugs opened and one critical one - https://bugs.launchpad.net/neutron/+bug/1771293 Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue May 15 12:30:44 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 May 2018 12:30:44 +0000 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID: <20180515123044.7dnxwpfnlttbeja4@yuggoth.org> On 2018-05-15 14:07:56 +0200 (+0200), Bogdan Dobrelya wrote: [...] > How pipelines may help solve it? > Pipelines only alleviate, not solve the problem of waiting. We only want to > build pipelines for the main zuul check process, omitting gating and RDO CI > (for now). > > Where are two cases to consider: > - A patch succeeds all checks > - A patch fails a check with dependencies > > The latter cases benefit us the most, when pipelines are designed like it is > proposed here. So that any jobs expected to fail, when a dependency fails, > will be omitted from execution. [...] Your choice of terminology is making it hard to follow this proposal. You seem to mean something other than https://zuul-ci.org/docs/zuul/user/config.html#pipeline when you use the term "pipeline" (which gets confusing very quickly for anyone familiar with Zuul configuration concepts). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From bdobreli at redhat.com Tue May 15 13:22:14 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 15:22:14 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <20180515123044.7dnxwpfnlttbeja4@yuggoth.org> References: <20180515123044.7dnxwpfnlttbeja4@yuggoth.org> Message-ID: On 5/15/18 2:30 PM, Jeremy Stanley wrote: > On 2018-05-15 14:07:56 +0200 (+0200), Bogdan Dobrelya wrote: > [...] >> How pipelines may help solve it? >> Pipelines only alleviate, not solve the problem of waiting. 
We only want to >> build pipelines for the main zuul check process, omitting gating and RDO CI >> (for now). >> >> There are two cases to consider: >> - A patch succeeds all checks >> - A patch fails a check with dependencies >> >> The latter cases benefit us the most, when pipelines are designed like it is >> proposed here. So that any jobs expected to fail, when a dependency fails, >> will be omitted from execution. > [...] > > Your choice of terminology is making it hard to follow this > proposal. You seem to mean something other than > https://zuul-ci.org/docs/zuul/user/config.html#pipeline when you use > the term "pipeline" (which gets confusing very quickly for anyone > familiar with Zuul configuration concepts). Indeed, sorry for that confusion. I mean pipelines as jobs executed in batches, ordered via defined dependencies, like gitlab pipelines [0]. And those batches can also be thought of as steps, or whatever we call that. [0] https://docs.gitlab.com/ee/ci/pipelines.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando

From sfinucan at redhat.com Tue May 15 14:00:17 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 15 May 2018 15:00:17 +0100 Subject: [openstack-dev] psycopg2 wheel packaging issues In-Reply-To: <1526383059-sup-777@lrrr.local> References: <1526381051.3915.11.camel@redhat.com> <1526383059-sup-777@lrrr.local> Message-ID: <468202d5d80fe969c6e306284df03add69a7543e.camel@redhat.com>

On Tue, 2018-05-15 at 07:24 -0400, Doug Hellmann wrote: > Excerpts from Stephen Finucane's message of 2018-05-15 11:44:11 +0100: > > I imagine most people have been seeing warnings like the one below > > raised by various openstack packages recently: > > > > .tox/py27/lib/python2.7/site-packages/psycopg2/__init__.py:144: UserWarning: The psycopg2 > > wheel package will be renamed from release 2.8; in order to keep installing from binary > > please use "pip install psycopg2-binary" instead. For details see: > > . > > > > Based on this warning, I had done what seemed to be the obvious thing > > to do and proposed adding psycopg2-binary to the list of global > > requirements [1]. This would allow us to replace all references to > > psycopg2 with psycopg2-binary in individual projects. However, upon > > further investigation it seems this is not really an option since the > > two packages exist in the same namespace and will clobber each other. > > I've now abandoned this patch. > > > > Does anyone with stronger Python packaging-fu than I have a better > > solution for the psycopg2 folks? There's a detailed description of why > > this was necessary on GitHub [2] along with some potential resolutions, > > none of which seem to be acceptable. If nothing better is possible, it > > seems we'll simply have to live with (or silence) these warnings in > > psycopg2 2.7.x and start installing libpq again once 2.8 is released. > > > > Cheers, > > Stephen > > > > [1] https://review.openstack.org/#/c/561924/ > > [2] https://github.com/psycopg/psycopg2/issues/674 > > > > Bundling an SSL library seems like a particularly bad situation, but if > its ABI isn't stable it may be all they can do. 
> > Perhaps some of the folks in the community who actually use Postgresql > can get involved with helping the upstream maintainers of psycopg and > libpq sort things out. Yes, this would be my hope. > In the meantime, is there any reason we can't just continue to > install psycopg2 from source in our gate jobs after 2.8? If the > wheel packages for psycopg2 2.7.x are bad perhaps we can come up > with a way to pass --no-binary when installing it, but it's not > clear if we need to. Does the bug affect us? The only reason we might have issues is the libpq dependency. This was required in 2.6 and will be required once again in 2.8. If this hasn't been dropped from the list of requirements then we won't see any breakages. If we do, we know where the issue lies. Stephen

From fungi at yuggoth.org Tue May 15 14:07:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 15 May 2018 14:07:57 +0000 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: <20180515123044.7dnxwpfnlttbeja4@yuggoth.org> Message-ID: <20180515140756.y64ujdzqu45i7vki@yuggoth.org>

On 2018-05-15 15:22:14 +0200 (+0200), Bogdan Dobrelya wrote: [...] > I mean pipelines as jobs executed in batches, ordered via defined > dependencies, like gitlab pipelines [0]. And those batches can > also be thought of as steps, or whatever we call that. [...] Got it. So Zuul refers to that relationship as a job dependency: https://zuul-ci.org/docs/zuul/user/config.html#attr-job.dependencies To be clearer, you might refer to this as dependent job ordering or a job dependency graph. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From corvus at inaugust.com Tue May 15 14:30:03 2018 From: corvus at inaugust.com (James E. Blair) Date: Tue, 15 May 2018 07:30:03 -0700 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> (Bogdan Dobrelya's message of "Tue, 15 May 2018 11:39:35 +0200") References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> Message-ID: <878t8luh3o.fsf@meyer.lemoncheese.net>

Bogdan Dobrelya writes: > Added a few more patches [0], [1] by the discussion results. PTAL folks. > Wrt remaining in the topic, I'd propose to give it a try and revert > it, if it proved to be worse than better. > Thank you for feedback! > > The next step could be reusing artifacts, like DLRN repos and > containers built for patches and hosted undercloud, in the consequent > pipelined jobs. But I'm not sure how to even approach that. > > [0] https://review.openstack.org/#/c/568536/ > [1] https://review.openstack.org/#/c/568543/

In order to use an artifact in a dependent job, you need to store it somewhere and retrieve it.

In the parent job, I'd recommend storing the artifact on the log server (in an "artifacts/" directory) next to the job's logs. The log server is essentially a time-limited artifact repository keyed on the zuul build UUID.

Pass the URL to the child job using the zuul_return Ansible module.

Have the child job fetch it from the log server using the URL it gets.

However, don't do that if the artifacts are very large -- more than a few MB -- we'll end up running out of space quickly. 
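(To make that hand-off concrete, a minimal sketch of both sides; the variable and path names are only illustrative, not an existing role. In the parent job's post playbook, publish the location:

    - hosts: localhost
      tasks:
        - name: Tell dependent jobs where the artifact was stored
          zuul_return:
            data:
              artifact_url: "{{ parent_log_url }}/artifacts/repo.tar.gz"

In the child job, the returned key should then show up as an Ansible variable:

    - hosts: all
      tasks:
        - name: Fetch the artifact published by the parent job
          get_url:
            url: "{{ artifact_url }}"
            dest: /tmp/repo.tar.gz

For artifacts larger than a few MB, though, the log server is not an option.)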
In that case, please volunteer some time to help the infra team set up a swift container to store these artifacts. We don't need to *run* swift -- we have clouds with swift already. We just need some help setting up accounts, secrets, and Ansible roles to use it from Zuul. -Jim From doug at doughellmann.com Tue May 15 14:33:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 15 May 2018 10:33:21 -0400 Subject: [openstack-dev] [tc] Technical Committee Update, 14 May In-Reply-To: <85876b36-3ae5-4e0a-c94f-bdf34c9f64ba@openstack.org> References: <1526306269-sup-420@lrrr.local> <85876b36-3ae5-4e0a-c94f-bdf34c9f64ba@openstack.org> Message-ID: <1526394785-sup-6419@lrrr.local> Excerpts from Thierry Carrez's message of 2018-05-15 10:38:36 +0200: > Doug Hellmann wrote: > > We will also hold a retrospective for the TC as a team on Monday > > at the Forum. Please be prepared to discuss things you think are > > going well, things you think we need to change, items from our > > backlog that you would like to work on, etc. [10] > > > > [10] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21740/tc-retrospective > > You mean Thursday, right ? > Oops, yes, Thursday. Doug From doug at doughellmann.com Tue May 15 14:46:38 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 15 May 2018 10:46:38 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> Message-ID: <1526395462-sup-8795@lrrr.local> Excerpts from Lance Bragstad's message of 2018-05-14 18:45:49 -0500: > > On 05/14/2018 05:46 PM, Doug Hellmann wrote: > > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500: > >> On 05/14/2018 02:24 PM, Doug Hellmann wrote: > >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: > >>>> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: > >>>>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann >>>>> > wrote: > >>>>> > >>>>> Both of those are good ideas. > >>>>> > >>>>> > >>>>> Agree. I like the socket idea a bit more as I can imagine some > >>>>> operators don't want config file changes automatically applied. Do we > >>>>> want to choose one to standardize on or allow each project (or > >>>>> operators, via config) the choice? > >>>> Just to recap, keystone would be listening for when it's configuration > >>>> file changes, and reinitialize the logger if the logging settings > >>>> changed, correct? > >>> Sort of. > >>> > >>> Keystone would need to do something to tell oslo.config to re-load the > >>> config files. In services that rely on oslo.service, this is handled > >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so > >>> for Keystone you would want to do something similar. > >>> > >>> That is, you want to wait for an explicit notification from the operator > >>> that you should reload the config, and not just watch for the file to > >>> change. We could talk about using file modification as a trigger, but > >>> reloading is something that may need to be staged across several > >>> services in order so we chose for the first version to make the trigger > >>> explicit. 
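(For reference, the oslo.service-style pattern described above, reduced to a minimal standalone sketch -- this is not keystone's actual code, just the shape of it:

    import signal

    from oslo_config import cfg

    CONF = cfg.CONF

    def _mutate_config(signum, frame):
        # Re-read the config files and apply changes to any options
        # registered as mutable, e.g. oslo.log's 'debug' toggle.
        CONF.mutate_config_files()

    # oslo.service wires up a SIGHUP handler along these lines; a service
    # running without it would have to register the handler itself.
    signal.signal(signal.SIGHUP, _mutate_config)

As the thread notes, intercepting signals is exactly the part that gets awkward under Apache/mod_wsgi.)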
Relying on watching files will also fail when the modified > >>> data is not in a file (which will be possible when we finish the driver > >>> work described in > >>> http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html). > >> Hmm, these are good points. I wonder if just converting to use > >> oslo.service would be a lower bar then? > > I thought keystone had moved away from that direction toward deploying > > only within Apache? I may be out of touch, or have misunderstood > > something, though. > > Oh - never mind... For some reason I was thinking there was a way to use > oslo.service and Apache. > > Either way, I'll do some more digging before tomorrow. I have this as a > topic on keystone's meeting agenda to go through our options [0]. If we > do come up with something that doesn't involve intercepting signals > (specifically for the reason noted by Kristi and Jim in the mod_wsgi > documentation), should the community goal be updated to include that > option? Just thinking that we can't be the only service in this position. I think we've left the implementation details up to the project teams, for just that reason. That said, it would be good to document how you do it (either formally or with a mailing list thread). And FWIW, if what you choose to do is monitor a file, that's fine as a trigger. I suggest not using the configuration file itself, though, for the reasons mentioned earlier. Doug PS - I wonder how Apache deals with reloading its own configuration file. Is there some sort of hook you could use? > > [0] https://etherpad.openstack.org/p/keystone-weekly-meeting > > > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Tue May 15 14:51:28 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 15 May 2018 07:51:28 -0700 Subject: [openstack-dev] [tripleo] Containerized Undercloud deep-dive Message-ID: Dan and I are organizing a deep-dive session focused on the containerized undercloud. https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud We proposed a date + list of topics but feel free to comment and ask for topics/questions. Thanks, -- Emilien & Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue May 15 14:53:32 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 15 May 2018 08:53:32 -0600 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: References: Message-ID: Bumping this up so folks can review this. It was mentioned in this week's meeting that it would be a good idea for folks to take a look at Storyboard to get familiar with it. The upstream docs have been updated[0] to point to the differences when dealing with proposed patches. Please take some time to review this and raise any concerns/issues now. Thanks, -Alex [0] https://docs.openstack.org/infra/manual/developers.html#development-workflow On Wed, May 9, 2018 at 1:24 PM, Alex Schultz wrote: > Hello tripleo folks, > > So we've been experimenting with migrating some squads over to > storyboard[0] but this seems to be causing more issues than perhaps > it's worth. 
Since the upstream community would like to standardize on > Storyboard at some point, I would propose that we do a cut over of all > the tripleo bugs/blueprints from Launchpad to Storyboard. > > In the irc meeting this week[1], I asked that the tripleo-ci team make > sure the existing scripts that we use to monitor bugs for CI support > Storyboard. I would consider this a prerequisite for the migration. > I am thinking it would be beneficial to get this done before or as > close to M2. > > Thoughts, concerns, etc? > > Thanks, > -Alex > > [0] https://storyboard.openstack.org/#!/project_group/76 > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42 From sshnaidm at redhat.com Tue May 15 15:08:01 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 15 May 2018 18:08:01 +0300 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: References: Message-ID: Bogdan, I think before final decisions we need to know exactly - what a price we need to pay? Without exact numbers it will be difficult to discuss about. I we need to wait 80 mins of undercloud-containers job to finish for starting all other jobs, it will be about 4.5 hours to wait for result (+ 4.5 hours in gate) which is too big price imho and doesn't worth an effort. What are exact numbers we are talking about? Thanks On Tue, May 15, 2018 at 3:07 PM, Bogdan Dobrelya wrote: > Let me clarify the problem I want to solve with pipelines. > > It is getting *hard* to develop things and move patches to the Happy End > (merged): > - Patches wait too long for CI jobs to start. It should be minutes and not > hours of waiting. > - If a patch fails a job w/o a good reason, the consequent recheck > operation repeat waiting all over again. > > How pipelines may help solve it? > Pipelines only alleviate, not solve the problem of waiting. We only want > to build pipelines for the main zuul check process, omitting gating and RDO > CI (for now). > > Where are two cases to consider: > - A patch succeeds all checks > - A patch fails a check with dependencies > > The latter cases benefit us the most, when pipelines are designed like it > is proposed here. So that any jobs expected to fail, when a dependency > fails, will be omitted from execution. This saves HW resources and zuul > queue places a lot, making it available for other patches and allowing > those to have CI jobs started faster (less waiting!). When we have "recheck > storms", like because of some known intermittent side issue, that outcome > is multiplied by the recheck storm um... level, and delivers even better > and absolutely amazing results :) Zuul queue will not be growing insanely > getting overwhelmed by multiple clones of the rechecked jobs highly likely > deemed to fail, and blocking other patches what might have chances to pass > checks as non-affected by that intermittent issue. > > And for the first case, when a patch succeeds, it takes some extended > time, and that is the price to pay. How much time it takes to finish in a > pipeline fully depends on implementation. > > The effectiveness could only be measured with numbers extracted from > elastic search data, like average time to wait for a job to start, success > vs fail execution time percentiles for a job, average amount of rechecks, > recheck storms history et al. I don't have that data and don't know how to > get it. 
Any help with that is very appreciated and could really help to > move the proposed patches forward or decline it. And we could then compare > "before" and "after" as well. > > I hope that explains the problem scope and the methodology to address that. > > > On 5/14/18 6:15 PM, Bogdan Dobrelya wrote: > >> An update for your review please folks >> >> Bogdan Dobrelya writes: >>> >>> Hello. >>>> As Zuul documentation [0] explains, the names "check", "gate", and >>>> "post" may be altered for more advanced pipelines. Is it doable to >>>> introduce, for particular openstack projects, multiple check >>>> stages/steps as check-1, check-2 and so on? And is it possible to make >>>> the consequent steps reusing environments from the previous steps >>>> finished with? >>>> >>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>>> with this "virtual RFE", and using such multi-staged check pipelines, >>>> is reducing (ideally, de-duplicating) some of the common steps for >>>> existing CI jobs. >>>> >>> >>> What you're describing sounds more like a job graph within a pipeline. >>> See: https://docs.openstack.org/infra/zuul/user/config.html#attr- >>> job.dependencies >>> for how to configure a job to run only after another job has completed. >>> There is also a facility to pass data between such jobs. >>> >>> ... (skipped) ... >>> >>> Creating a job graph to have one job use the results of the previous job >>> can make sense in a lot of cases. It doesn't always save *time* >>> however. >>> >>> It's worth noting that in OpenStack's Zuul, we have made an explicit >>> choice not to have long-running integration jobs depend on shorter pep8 >>> or tox jobs, and that's because we value developer time more than CPU >>> time. We would rather run all of the tests and return all of the >>> results so a developer can fix all of the errors as quickly as possible, >>> rather than forcing an iterative workflow where they have to fix all the >>> whitespace issues before the CI system will tell them which actual tests >>> broke. >>> >>> -Jim >>> >> >> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for >> undercloud deployments vs upgrades testing (and some more). Given that >> those undercloud jobs have not so high fail rates though, I think Emilien >> is right in his comments and those would buy us nothing. >> >> From the other side, what do you think folks of making the >> tripleo-ci-centos-7-3nodes-multinode depend on >> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite >> faily and long running, and is non-voting. It deploys (see featuresets >> configs [3]*) a 3 nodes in HA fashion. And it seems almost never passing, >> when the containers-multinode fails - see the CI stats page [4]. I've found >> only a 2 cases there for the otherwise situation, when containers-multinode >> fails, but 3nodes-multinode passes. So cutting off those future failures >> via the dependency added, *would* buy us something and allow other jobs to >> wait less to commence, by a reasonable price of somewhat extended time of >> the main zuul pipeline. I think it makes sense and that extended CI time >> will not overhead the RDO CI execution times so much to become a problem. >> WDYT? 
>> >> [0] https://review.openstack.org/#/c/568275/ >> [1] https://review.openstack.org/#/c/568278/ >> [2] https://review.openstack.org/#/c/568326/ >> [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html >> [4] http://tripleo.org/cistatus.html >> >> * ignore the column 1, it's obsolete, all CI jobs now using configs >> download AFAICT... >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From dimitri.pertin at inria.fr Tue May 15 15:16:22 2018 From: dimitri.pertin at inria.fr (Dimitri Pertin) Date: Tue, 15 May 2018 17:16:22 +0200 Subject: [openstack-dev] [SIG][Edge-computing][FEMDC] Wed. 16 May - FEMDC IRC Meeting 15:00 UTC In-Reply-To: <295656139.99088970.1525873845270.JavaMail.root@zimbra29-e5.priv.proxad.net> References: <295656139.99088970.1525873845270.JavaMail.root@zimbra29-e5.priv.proxad.net> Message-ID:

Dear all, Here is a gentle reminder regarding the FEMDC meeting that was postponed from last week to tomorrow, May 16th, at 15:00 UTC. As a consequence, the meeting will be held on #edge-computing-irc

This meeting will focus on the preparation of the Vancouver summit (presentations, F2F sessions, ...). You can already check and fill this pad with your wishes/ideas: https://etherpad.openstack.org/p/FEMDC_Vancouver

As usual, a draft of the agenda is available at line 550 and you are very welcome to add any item: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018

Best regards, Dimitri

-------- Forwarded Message -------- Subject: [Edge-computing] [FEMDC] IRC meeting postponed to next Wednesday Date: Wed, 9 May 2018 15:50:45 +0200 (CEST) From: lebre.adrien at free.fr To: OpenStack Development Mailing List (not for usage questions) , openstack-sigs at lists.openstack.org, edge-computing at lists.openstack.org

Dear all, Neither Paul-Andre nor I can chair the meeting today so we propose to postpone it for one week. The agenda will be delivered soon but you can consider that next meeting will focus on the preparation of the Vancouver summit (presentations, F2F meetings...). Best regards, ad_ri3n_ _______________________________________________ Edge-computing mailing list Edge-computing at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing

From bdobreli at redhat.com Tue May 15 15:31:07 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 15 May 2018 17:31:07 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <878t8luh3o.fsf@meyer.lemoncheese.net> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> Message-ID: <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com>

On 5/15/18 4:30 PM, James E. Blair wrote: > Bogdan Dobrelya writes: > >> Added a few more patches [0], [1] by the discussion results. PTAL folks. >> Wrt remaining in the topic, I'd propose to give it a try and revert >> it, if it proved to be worse than better. >> Thank you for feedback! >> >> The next step could be reusing artifacts, like DLRN repos and >> containers built for patches and hosted undercloud, in the consequent >> pipelined jobs. But I'm not sure how to even approach that. >> >> [0] https://review.openstack.org/#/c/568536/ >> [1] https://review.openstack.org/#/c/568543/ > > In order to use an artifact in a dependent job, you need to store it > somewhere and retrieve it. > > In the parent job, I'd recommend storing the artifact on the log server > (in an "artifacts/" directory) next to the job's logs. The log server > is essentially a time-limited artifact repository keyed on the zuul > build UUID. > > Pass the URL to the child job using the zuul_return Ansible module. > > Have the child job fetch it from the log server using the URL it gets. > > However, don't do that if the artifacts are very large -- more than a > few MB -- we'll end up running out of space quickly. > > In that case, please volunteer some time to help the infra team set up a > swift container to store these artifacts. We don't need to *run* > swift -- we have clouds with swift already. We just need some help > setting up accounts, secrets, and Ansible roles to use it from Zuul. 
>> >> The next step could be reusing artifacts, like DLRN repos and >> containers built for patches and hosted undercloud, in the consequent >> pipelined jobs. But I'm not sure how to even approach that. >> >> [0] https://review.openstack.org/#/c/568536/ >> [1] https://review.openstack.org/#/c/568543/ > > In order to use an artifact in a dependent job, you need to store it > somewhere and retrieve it. > > In the parent job, I'd recommend storing the artifact on the log server > (in an "artifacts/" directory) next to the job's logs. The log server > is essentially a time-limited artifact repository keyed on the zuul > build UUID. > > Pass the URL to the child job using the zuul_return Ansible module. > > Have the child job fetch it from the log server using the URL it gets. > > However, don't do that if the artifacts are very large -- more than a > few MB -- we'll end up running out of space quickly. > > In that case, please volunteer some time to help the infra team set up a > swift container to store these artifacts. We don't need to *run* > swift -- we have clouds with swift already. We just need some help > setting up accounts, secrets, and Ansible roles to use it from Zuul. Thank you, that's a good proposal! So when we have done that upstream infra swift setup for tripleo, the 1st step in the job dependency graph may be using quickstart to do something like: * check out testing depends-on things, * build repos and all tripleo docker images from these repos, * upload into a swift container, with an automatic expiration set, the de-duplicated and compressed tarball created with something like: # docker save $(docker images -q) | gzip -1 > all.tar.xz (I expect it will be something like a 2G file) * something similar for DLRN repos prolly, I'm not an expert for this part. Then those stored artifacts to be picked up by the next step in the graph, deploying undercloud and overcloud in the single step, like: * fetch the swift containers with repos and container images * docker load -i all.tar.xz * populate images into a local registry, as usual * something similar for the repos. Includes an offline yum update (we already have a compressed repo, right? profit!) * deploy UC * deploy OC, if a job wants it And if OC deployment brought into a separate step, we do not need local registries, just 'docker load -i all.tar.xz' issued for overcloud nodes should replace image prep workflows and registries, AFAICT. Not sure with the repos for that case. I wish to assist with the upstream infra swift setup for tripleo, and that plan, just need a blessing and more hands from tripleo CI squad ;) > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From cdent+os at anticdent.org Tue May 15 15:31:52 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 15 May 2018 16:31:52 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-20 Message-ID: HTML: https://anticdent.org/tc-report-18-20.html Trying to write a TC report after a gap of 3 weeks is hard enough, but when that gap involves some time off, the TC elections, and the run up to summit (next week in [Vancouver](https://www.openstack.org/summit/vancouver-2018/)) then it gets bewildering. 
>
> -Jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From cdent+os at anticdent.org  Tue May 15 15:31:52 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 15 May 2018 16:31:52 +0100 (BST)
Subject: [openstack-dev] [tc] [all] TC Report 18-20
Message-ID: 

HTML: https://anticdent.org/tc-report-18-20.html

Trying to write a TC report after a gap of 3 weeks is hard enough,
but when that gap involves some time off, the TC elections, and the
run up to summit (next week in
[Vancouver](https://www.openstack.org/summit/vancouver-2018/)) then
it gets bewildering. Rather than trying to give anything like a full
summary, I'll go for some highlights.

Be aware that since next week is summit and I'll be travelling the
week after, there will be another gap in reports.

# Elections

The elections were for seven positions. Of those, three are new to
the TC: Graham Hayes, Mohammed Naser, Zane Bitter. Having new people
is _great_. There's a growing sense that the TC needs to take a more
active role in helping adapt the culture of OpenStack to its
changing place in the world (see some of the comments below). Having
new people helps with that greatly.

Doug Hellmann has become the chair of the TC, taking the seat long
held by Thierry. This is the first time (that I'm aware of) that a
non-Foundation-staff individual has been the chair.

One of the most interesting parts of the election process was the
email threads started by Doug. There's hope that existing TC
members that were not elected in this cycle, those that have
departed, and anyone else will provide their answers to them too. An
[email
reminder](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130382.html)
exists.

# Summit

Is next week, in Vancouver. The TC has several
[Forum](https://wiki.openstack.org/wiki/Forum/Vancouver2018)
sessions planned including:

* [S release
  goals](https://etherpad.openstack.org/p/YVR-S-release-goals)
* [Project boundaries and what is
  OpenStack](https://etherpad.openstack.org/p/YVR-forum-TC-project-boundaries)
* [TC
  Retrospective](https://etherpad.openstack.org/p/YVR-tc-retrospective)
* [Cross Community
  Governance](https://etherpad.openstack.org/p/YVR-cross-osf-tech-governance)

# Corporate Foundation Contributions

There's ongoing discussion about how [to
measure](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-24.log.html#t2018-04-24T15:43:59)
upstream contribution from corporate Foundation members and what to
do if contribution seems lacking. Part of the reason this came up
was because the mode of contribution from new platinum member,
Tencent, is not clear. For a platinum member, it should be
_obvious_.

# LCOO

There's been some concern expressed about the Large Contributing
OpenStack Operators (LCOO) group and the way they operate. They use
an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
Slack, and have restricted membership. These things tend to not
align with the norms for tool usage and collaboration in OpenStack.
This topic came up in [late
April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
but is worth revisiting in Vancouver.

# Constellations

One of the things that came out in election campaigning is that
OpenStack needs to be more clear about the many ways that OpenStack
can be used, in part as a way of being more clear about what
OpenStack _is_. Constellations are one way to do this and work has
begun on one for [Scientific
Computing](https://review.openstack.org/#/c/565466/). There's some
discussion there on what a constellation is supposed to accomplish.
If you have an opinion, you should comment.

# Board Meeting

The day before summit there is a "combined leadership" meeting with
the Foundation Board, the User Committee and the Technical
Committee. Doug has posted a [review of the
agenda](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130336.html).
These meetings are open to any Foundation members and often involve
a lot of insight into the future of OpenStack. And snacks.
# Feedback, Leadership and Dictatorship of the Projects

Zane started [an email
thread](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130375.html)
about ways to replace or augment the once large and positive
feedback loop that was present in earlier days of OpenStack. That
now has the potential to trap us into what he describes as a "local
maximum". The thread eventually evolved into concerns that the
individual sub-projects in OpenStack can sometimes have too much
power and identity compared to the overarching project, leading to
isolation and difficulty getting overarching things done. There was a
bit of discussion about this [in
IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-11.log.html#t2018-05-11T19:13:02)
but the important parts are in the several messages in the thread.

Some people think that the community goals help to fill some of this
void. Others think this is not quite enough and perhaps project
teams as a point of emphasis is ["no longer
optimal"](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130436.html).

But in all this talk of change, how do we do the work if we're
already busy? What can we not do? That was a topic [Monday
morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T09:00:00).

# API Version Bumps

Also on Monday, plans [were
made](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05)
to have a session in Vancouver about how to do across-the-system
minimum API version bumps. This started in response to a meandering
thread [on
twitter](https://twitter.com/robcresswell/status/994911766776877059)
about inconsistencies in OpenStack's APIs "never" being
resolved.

# Where Now?

It's hard to make any conclusions from the election results. A
relatively small number of people voted for a relatively small
number of candidates. And there's always the sense that voting is
primarily based on name recognition where platforms and policies
have little bearing. However, if we are to take the results at face
value then it appears that at least some of the electorate wants one
or both of the following from the TC:

* Increased communication and engagement.
* Greater and more active exercising of whatever power they can
  dredge up to help lead and change the community more directly.

Do _you_ think this is true? What direction do things need to go?

I'm currently in the state of mind where it is critical that we
create and maintain the big picture information artifacts
("OpenStack is X, Y, and Z", "OpenStack is not A, B and C", "Next
year OpenStack will start being E but will stop being Z") that allow
contributors of any sort to pick amongst the (too) many
opportunities for things to do. Especially making it easier—and
socially and professionally _safer_—to say "no" to something. This
makes it cleaner and clearer to get the right things done—rather
than context switch—and to create the necessary headspace to
consider improvements rather than doing the same thing over again.
-- 
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From fungi at yuggoth.org  Tue May 15 15:40:53 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 15 May 2018 15:40:53 +0000
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com>
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com>
Message-ID: <20180515154053.5udtbk2lubtj7w5n@yuggoth.org>

On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
> # docker save $(docker images -q) | gzip -1 > all.tar.gz
> (I expect it will be something like a 2G file)
> * probably something similar for DLRN repos, I'm not an expert on this part.
>
> Those stored artifacts would then be picked up by the next step in the
> graph, deploying undercloud and overcloud in a single step, like:
> * fetch the swift containers with repos and container images
[...]

I do worry a little about network fragility here, as well as
extremely variable performance. Randomly-selected job nodes could be
shuffling those files halfway across the globe so either upload or
download (or both) will experience high round-trip latency as well
as potentially constrained throughput, packet loss,
disconnects/interruptions and so on... all the things we deal with
when trying to rely on the Internet, except magnified by the
quantity of data being transferred about.

Ultimately still worth trying, I think, but just keep in mind it may
introduce more issues than it solves.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From bdobreli at redhat.com  Tue May 15 15:54:42 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Tue, 15 May 2018 17:54:42 +0200
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: 
Message-ID: <6c41be95-ae93-70d7-fe3a-e7bce922d5c6@redhat.com>

On 5/15/18 5:08 PM, Sagi Shnaidman wrote:
> Bogdan,
>
> I think before final decisions we need to know exactly - what price do we
> need to pay? Without exact numbers it will be difficult to discuss this.
> If we need to wait 80 mins for the undercloud-containers job to finish
> before starting all other jobs, it will be about 4.5 hours to wait for a
> result (+ 4.5 hours in gate), which is too big a price imho and isn't
> worth the effort.
>
> What are the exact numbers we are talking about?

I fully agree but can't have those numbers, sorry! As I noted above,
those are definitely sitting in openstack-infra's elastic search DB,
they just need to be extracted with some assistance from folks who know
more about that!

>
> Thanks
>
>
> On Tue, May 15, 2018 at 3:07 PM, Bogdan Dobrelya wrote:
>
> Let me clarify the problem I want to solve with pipelines.
>
> It is getting *hard* to develop things and move patches to the Happy
> End (merged):
> - Patches wait too long for CI jobs to start. It should be minutes
> and not hours of waiting.
> - If a patch fails a job w/o a good reason, the subsequent recheck
> operation repeats the waiting all over again.
>
> How may pipelines help solve it?
> Pipelines only alleviate, not solve, the problem of waiting. We only
> want to build pipelines for the main zuul check process, omitting
> gating and RDO CI (for now).
>
> There are two cases to consider:
> - A patch succeeds all checks
> - A patch fails a check with dependencies
>
> The latter case benefits us the most, when pipelines are designed
> like it is proposed here. So that any jobs expected to fail, when a
> dependency fails, will be omitted from execution. This saves a lot of
> HW resources and zuul queue places, making them available for other
> patches and allowing those to have CI jobs started faster (less
> waiting!). When we have "recheck storms", like because of some known
> intermittent side issue, that outcome is multiplied by the recheck
> storm um... level, and delivers even better and absolutely amazing
> results :) Zuul queue will not be growing insanely, getting
> overwhelmed by multiple clones of the rechecked jobs highly likely
> to fail, and blocking other patches that might have a chance to pass
> checks as they are not affected by that intermittent issue.
>
> And for the first case, when a patch succeeds, it takes some
> extended time, and that is the price to pay. How much time it takes
> to finish in a pipeline fully depends on implementation.
>
> The effectiveness could only be measured with numbers extracted from
> elastic search data, like average time to wait for a job to start,
> success vs fail execution time percentiles for a job, average amount
> of rechecks, recheck storms history et al. I don't have that data
> and don't know how to get it. Any help with that is very appreciated
> and could really help to move the proposed patches forward or to
> decline them. And we could then compare "before" and "after" as well.
>
> I hope that explains the problem scope and the methodology to
> address it.
>
>
> On 5/14/18 6:15 PM, Bogdan Dobrelya wrote:
>> An update for your review please folks
>>
>>> Bogdan Dobrelya writes:
>>>
>>>> Hello.
>>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>>> "post" may be altered for more advanced pipelines. Is it doable to
>>>> introduce, for particular openstack projects, multiple check
>>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>>> the subsequent steps reuse environments that the previous steps
>>>> finished with?
>>>>
>>>> Narrowing down to tripleo CI scope, the problem I'd want us to solve
>>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>>> is reducing (ideally, de-duplicating) some of the common steps for
>>>> existing CI jobs.
>>>
>>> What you're describing sounds more like a job graph within a pipeline.
>>> See:
>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>>> for how to configure a job to run only after another job has completed.
>>> There is also a facility to pass data between such jobs.
>>>
>>> ... (skipped) ...
>>>
>>> Creating a job graph to have one job use the results of the previous job
>>> can make sense in a lot of cases.  It doesn't always save *time*
>>> however.
>>>
>>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>>> choice not to have long-running integration jobs depend on shorter pep8
>>> or tox jobs, and that's because we value developer time more than CPU
>>> time.
>>> We would rather run all of the tests and return all of the
>>> results so a developer can fix all of the errors as quickly as
>>> possible, rather than forcing an iterative workflow where they have
>>> to fix all the whitespace issues before the CI system will tell them
>>> which actual tests broke.
>>>
>>> -Jim
>>
>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines
>> for undercloud deployments vs upgrades testing (and some more).
>> Given that those undercloud jobs have not so high fail rates though,
>> I think Emilien is right in his comments and those would buy us
>> nothing.
>>
>> From the other side, what do you think, folks, of making the
>> tripleo-ci-centos-7-3nodes-multinode depend on
>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
>> failure-prone and long running, and is non-voting. It deploys (see
>> featureset configs [3]*) 3 nodes in HA fashion. And it seems to
>> almost never pass when containers-multinode fails - see the CI stats
>> page [4]. I've found only 2 cases there of the opposite situation,
>> when containers-multinode fails but 3nodes-multinode passes. So
>> cutting off those future failures via the added dependency *would*
>> buy us something and allow other jobs to wait less to commence, at
>> the reasonable price of a somewhat extended time for the main zuul
>> pipeline. I think it makes sense, and that extended CI time will not
>> exceed the RDO CI execution times so much as to become a problem.
>> WDYT?
>>
>> [0] https://review.openstack.org/#/c/568275/
>> [1] https://review.openstack.org/#/c/568278/
>> [2] https://review.openstack.org/#/c/568326/
>> [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
>> [4] http://tripleo.org/cistatus.html
>>
>> * ignore column 1, it's obsolete, all CI jobs now using configs
>> download AFAICT...
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Best regards
> Sagi Shnaidman
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Best regards,
Bogdan Dobrelya,
Irc #bogdando
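For reference, the dependency proposed in [2] above boils down to a Zuul
v3 project-pipeline entry shaped roughly like this (a sketch only - the
actual change under review is authoritative and may differ):

  # If containers-multinode fails, the 3nodes job is skipped outright
  # instead of burning nodes on a near-certain failure, which is where
  # the queue-time savings come from.
  - project:
      check:
        jobs:
          - tripleo-ci-centos-7-containers-multinode
          - tripleo-ci-centos-7-3nodes-multinode:
              dependencies:
                - tripleo-ci-centos-7-containers-multinode

The same pattern would apply to the suggestion, made later in the
thread, of gating all multinode jobs on an undercloud job.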
From gr at ham.ie  Tue May 15 16:20:00 2018
From: gr at ham.ie (Graham Hayes)
Date: Tue, 15 May 2018 17:20:00 +0100
Subject: [openstack-dev] [tc] [all] TC Report 18-20
In-Reply-To: 
References: 
Message-ID: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie>

On 15/05/18 16:31, Chris Dent wrote:
>
> HTML: https://anticdent.org/tc-report-18-20.html
>
> Trying to write a TC report after a gap of 3 weeks is hard enough,
> but when that gap involves some time off, the TC elections, and the
> run up to summit (next week in
> [Vancouver](https://www.openstack.org/summit/vancouver-2018/)) then
> it gets bewildering. Rather than trying to give anything like a full
> summary, I'll go for some highlights.
>
> Be aware that since next week is summit and I'll be travelling the
> week after, there will be another gap in reports.
>
> # Elections
>
> The elections were for seven positions. Of those, three are new to
> the TC: Graham Hayes, Mohammed Naser, Zane Bitter. Having new people
> is _great_. There's a growing sense that the TC needs to take a more
> active role in helping adapt the culture of OpenStack to its
> changing place in the world (see some of the comments below). Having
> new people helps with that greatly.
>
> Doug Hellmann has become the chair of the TC, taking the seat long
> held by Thierry. This is the first time (that I'm aware of) that a
> non-Foundation-staff individual has been the chair.
>
> One of the most interesting parts of the election process was the
> email threads started by Doug. There's hope that existing TC
> members that were not elected in this cycle, those that have
> departed, and anyone else will provide their answers to them too. An
> [email
> reminder](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130382.html)
> exists.
>
> # Summit
>
> Is next week, in Vancouver. The TC has several
> [Forum](https://wiki.openstack.org/wiki/Forum/Vancouver2018)
> sessions planned including:
>
> * [S release
>   goals](https://etherpad.openstack.org/p/YVR-S-release-goals)
> * [Project boundaries and what is
>   OpenStack](https://etherpad.openstack.org/p/YVR-forum-TC-project-boundaries)
> * [TC
>   Retrospective](https://etherpad.openstack.org/p/YVR-tc-retrospective)
> * [Cross Community
>   Governance](https://etherpad.openstack.org/p/YVR-cross-osf-tech-governance)
>
> # Corporate Foundation Contributions
>
> There's ongoing discussion about how [to
> measure](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-24.log.html#t2018-04-24T15:43:59)
> upstream contribution from corporate Foundation members and what to
> do if contribution seems lacking. Part of the reason this came up
> was because the mode of contribution from new platinum member,
> Tencent, is not clear. For a platinum member, it should be
> _obvious_.

This is a very important point. By adding a company (especially at this
level) we grant them a certain amount of our credibility. We need to be
sure that this is earned by the new member.

> # LCOO
>
> There's been some concern expressed about the Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.

> # Constellations
>
> One of the things that came out in election campaigning is that
> OpenStack needs to be more clear about the many ways that OpenStack
> can be used, in part as a way of being more clear about what
> OpenStack _is_. Constellations are one way to do this and work has
> begun on one for [Scientific
> Computing](https://review.openstack.org/#/c/565466/). There's some
> discussion there on what a constellation is supposed to accomplish.
> If you have an opinion, you should comment.
>
> # Board Meeting
>
> The day before summit there is a "combined leadership" meeting with
> the Foundation Board, the User Committee and the Technical
> Committee. Doug has posted a [review of the
> agenda](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130336.html).
>
> These meetings are open to any Foundation members and often involve
> a lot of insight into the future of OpenStack. And snacks.
>
> # Feedback, Leadership and Dictatorship of the Projects
>
> Zane started [an email
> thread](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130375.html)
> about ways to replace or augment the once large and positive
> feedback loop that was present in earlier days of OpenStack. That
> now has the potential to trap us into what he describes as a "local
> maximum". The thread eventually evolved into concerns that the
> individual sub-projects in OpenStack can sometimes have too much
> power and identity compared to the overarching project, leading to
> isolation and difficulty getting overarching things done. There was a
> bit of discussion about this [in
> IRC](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-11.log.html#t2018-05-11T19:13:02)
> but the important parts are in the several messages in the thread.
>
> Some people think that the community goals help to fill some of this
> void. Others think this is not quite enough and perhaps project
> teams as a point of emphasis is ["no longer
> optimal"](http://lists.openstack.org/pipermail/openstack-dev/2018-May/130436.html).
>
> But in all this talk of change, how do we do the work if we're
> already busy? What can we not do? That was a topic [Monday
> morning](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T09:00:00).
>
> # API Version Bumps
>
> Also on Monday, plans [were
> made](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05)
> to have a session in Vancouver about how to do across-the-system
> minimum API version bumps. This started in response to a meandering
> thread [on
> twitter](https://twitter.com/robcresswell/status/994911766776877059)
> about inconsistencies in OpenStack's APIs "never" being
> resolved.

This should be a good session, but will need serious buy-in across the
project to actually make an impact. Otherwise it will be the usual
people in a room, arguing about the usual things, waiting to restart the
debate in Denver.

> # Where Now?
>
> It's hard to make any conclusions from the election results. A
> relatively small number of people voted for a relatively small
> number of candidates. And there's always the sense that voting is
> primarily based on name recognition where platforms and policies
> have little bearing. However, if we are to take the results at face
> value then it appears that at least some of the electorate wants one
> or both of the following from the TC:
>
> * Increased communication and engagement.

I think this is true, based on conversations with people in the last
few months. These reports really help let people know what is
happening.

> * Greater and more active exercising of whatever power they can
>   dredge up to help lead and change the community more directly.

Yes, with a caveat. People want the TC to use their "power" (more
ability to influence) to get things done, but may dislike where the TC
uses that influence.

> Do _you_ think this is true? What direction do things need to go?
>
> I'm currently in the state of mind where it is critical that we
> create and maintain the big picture information artifacts
> ("OpenStack is X, Y, and Z", "OpenStack is not A, B and C", "Next
> year OpenStack will start being E but will stop being Z") that allow
> contributors of any sort to pick amongst the (too) many
> opportunities for things to do. Especially making it easier—and
> socially and professionally _safer_—to say "no" to something. This
> makes it cleaner and clearer to get the right things done—rather
> than context switch—and to create the necessary headspace to
> consider improvements rather than doing the same thing over again.

I think we definitely need to get to a point where we can say "no" to
both projects and features in those projects. "Doesn't fit with the
future vision" should be something we can and do say.

>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From zbitter at redhat.com  Tue May 15 16:25:04 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Tue, 15 May 2018 12:25:04 -0400
Subject: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate
In-Reply-To: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
Message-ID: 

On 13/05/18 13:22, Matthew Thode wrote:
> This is a reminder to the projects called out that they are using old,
> unmaintained and probably insecure libraries (it's been dead since
> 2014).  Please migrate off to use the cryptography library.  We'd like
> to drop pycrypto from requirements for rocky.
>
> See also, the bug, which has most of you cc'd already.
>
> https://bugs.launchpad.net/openstack-requirements/+bug/1749574
>
> +-----------------+-----------------------------------------------------------------+------+-------------------------------+
> | Repository      | Filename                                                        | Line | Text                          |
> +-----------------+-----------------------------------------------------------------+------+-------------------------------+
> | barbican        | requirements.txt                                                | 25   | pycrypto>=2.6 # Public Domain |
> | daisycloud-core | code/daisy/requirements.txt                                     | 17   | pycrypto>=2.6 # Public Domain |
> | freezer         | requirements.txt                                                | 21   | pycrypto>=2.6 # Public Domain |
> | fuel-web        | nailgun/requirements.txt                                        | 24   | pycrypto>=2.6.1               |
> | heat-cfnclient  | requirements.txt                                                | 2    | PyCrypto>=2.1.0               |

AFAICT heat-cfnclient isn't actually using PyCrypto, even though it's
listed in requirements.txt. The whole project is just a light wrapper
around python-boto (though this wasn't always the case IIRC), so I
suspect it's just relying on boto for all of the auth stuff.
> | pyghmi          | requirements.txt                                                | 1    | pycrypto>=2.6                 |
> | rpm-packaging   | requirements.txt                                                | 189  | pycrypto>=2.6 # Public Domain |
> | solum           | requirements.txt                                                | 24   | pycrypto>=2.6 # Public Domain |
> | tatu            | requirements.txt                                                | 7    | pycrypto>=2.6.1               |
> | tatu            | test-requirements.txt                                           | 7    | pycrypto>=2.6.1               |
> | trove           | integration/scripts/files/requirements/fedora-requirements.txt | 30   | pycrypto>=2.6 # Public Domain |
> | trove           | integration/scripts/files/requirements/ubuntu-requirements.txt | 29   | pycrypto>=2.6 # Public Domain |
> | trove           | requirements.txt                                                | 47   | pycrypto>=2.6 # Public Domain |
> +-----------------+-----------------------------------------------------------------+------+-------------------------------+
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From Tim.Bell at cern.ch  Tue May 15 16:33:23 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Tue, 15 May 2018 16:33:23 +0000
Subject: [openstack-dev] [tc] [all] TC Report 18-20
In-Reply-To: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie>
References: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie>
Message-ID: <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch>

From my memory, the LCOO was started in 2015 or 2016. The UC was started
at the end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777)
with Ryan, JC and me.

Tim

-----Original Message-----
From: Graham Hayes 
Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
Date: Tuesday, 15 May 2018 at 18:22
To: "openstack-dev at lists.openstack.org" 
Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20

......

> # LCOO
>
> There's been some concern expressed about the Large Contributing
> OpenStack Operators (LCOO) group and the way they operate. They use
> an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> Slack, and have restricted membership. These things tend to not
> align with the norms for tool usage and collaboration in OpenStack.
> This topic came up in [late
> April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> but is worth revisiting in Vancouver.

From what I understand, this group came into being before the UC was
created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
idea.

From corvus at inaugust.com  Tue May 15 16:40:28 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 15 May 2018 09:40:28 -0700
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> (Bogdan Dobrelya's message of "Tue, 15 May 2018 17:31:07 +0200")
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com>
Message-ID: <87bmdgswhv.fsf@meyer.lemoncheese.net>

Bogdan Dobrelya writes:

> * check out testing depends-on things,

(Zuul should have done this for you, but yes.)
> * build repos and all tripleo docker images from these repos,
> * upload into a swift container, with an automatic expiration set, the
> de-duplicated and compressed tarball created with something like:
> # docker save $(docker images -q) | gzip -1 > all.tar.gz
> (I expect it will be something like a 2G file)
> * probably something similar for DLRN repos, I'm not an expert on this part.
>
> Those stored artifacts would then be picked up by the next step in the
> graph, deploying undercloud and overcloud in a single step, like:
> * fetch the swift containers with repos and container images
> * docker load -i all.tar.gz
> * populate images into a local registry, as usual
> * something similar for the repos. Includes an offline yum update (we
> already have a compressed repo, right? profit!)
> * deploy UC
> * deploy OC, if a job wants it
>
> And if OC deployment is brought into a separate step, we do not need
> local registries, just 'docker load -i all.tar.gz' issued for
> overcloud nodes should replace image prep workflows and registries,
> AFAICT. Not sure about the repos in that case.
>
> I wish to assist with the upstream infra swift setup for tripleo, and
> that plan, just need a blessing and more hands from tripleo CI squad
> ;)

That sounds about right (at least the Zuul parts :).

We're also talking about making a new kind of job which can continue to
run after it's "finished" so that you could use it to do something like
host a container registry that's used by other jobs running on the
change.  We don't have that feature yet, but if we did, would you prefer
to use that instead of the intermediate swift storage?

-Jim

From gr at ham.ie  Tue May 15 16:40:32 2018
From: gr at ham.ie (Graham Hayes)
Date: Tue, 15 May 2018 17:40:32 +0100
Subject: [openstack-dev] [tc] [all] TC Report 18-20
In-Reply-To: <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch>
References: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie> <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch>
Message-ID: <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie>

On 15/05/18 17:33, Tim Bell wrote:
> From my memory, the LCOO was started in 2015 or 2016. The UC was started at the end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with Ryan, JC and me.
>
> Tim

Yeap - I misread what mrhillsman said [0].

The point still stands - I think this does need to be discussed, and the
outcome published to the list.

Any additional background on why we allowed LCOO to operate like this
would help a lot.

- Graham

0 -
http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T15:03:54

> -----Original Message-----
> From: Graham Hayes 
> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
> Date: Tuesday, 15 May 2018 at 18:22
> To: "openstack-dev at lists.openstack.org" 
> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20
>
> ......
>
> > # LCOO
> >
> > There's been some concern expressed about the Large Contributing
> > OpenStack Operators (LCOO) group and the way they operate. They use
> > an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
> > Slack, and have restricted membership. These things tend to not
> > align with the norms for tool usage and collaboration in OpenStack.
> > This topic came up in [late
> > April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
> > but is worth revisiting in Vancouver.
>
> From what I understand, this group came into being before the UC was
> created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
> idea.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: OpenPGP digital signature
URL: 

From davanum at gmail.com  Tue May 15 16:44:50 2018
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 15 May 2018 12:44:50 -0400
Subject: [openstack-dev] [tc] [all] TC Report 18-20
In-Reply-To: <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie>
References: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie> <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch> <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie>
Message-ID: 

fyi Jay tried to once -
http://lists.openstack.org/pipermail/openstack-dev/2017-February/thread.html#111511

On Tue, May 15, 2018 at 12:40 PM, Graham Hayes wrote:
> On 15/05/18 17:33, Tim Bell wrote:
>> From my memory, the LCOO was started in 2015 or 2016. The UC was started at the end of 2012, start of 2013 (https://www.openstack.org/blog/?p=3777) with Ryan, JC and me.
>>
>> Tim
>
> Yeap - I misread what mrhillsman said [0].
>
> The point still stands - I think this does need to be discussed, and the
> outcome published to the list.
>
> Any additional background on why we allowed LCOO to operate like this
> would help a lot.
>
> - Graham
>
> 0 -
> http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T15:03:54
>
>> -----Original Message-----
>> From: Graham Hayes 
>> Reply-To: "OpenStack Development Mailing List (not for usage questions)" 
>> Date: Tuesday, 15 May 2018 at 18:22
>> To: "openstack-dev at lists.openstack.org" 
>> Subject: Re: [openstack-dev] [tc] [all] TC Report 18-20
>>
>> ......
>>
>> > # LCOO
>> >
>> > There's been some concern expressed about the Large Contributing
>> > OpenStack Operators (LCOO) group and the way they operate. They use
>> > an [Atlassian Wiki](https://openstack-lcoo.atlassian.net/) and
>> > Slack, and have restricted membership. These things tend to not
>> > align with the norms for tool usage and collaboration in OpenStack.
>> > This topic came up in [late
>> > April](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-26.log.html#t2018-04-26T14:39:36)
>> > but is worth revisiting in Vancouver.
>>
>> From what I understand, this group came into being before the UC was
>> created - a joint UC/TC/LCOO sync up in Vancouver is probably a good
>> idea.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 
Davanum Srinivas :: https://twitter.com/dims

From prometheanfire at gentoo.org  Tue May 15 16:55:45 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 15 May 2018 11:55:45 -0500
Subject: [openstack-dev] [zVMCloudConnector][python-zvm-sdk][requirements] Unblock webob-1.8.1
Message-ID: <20180515165545.kahgul6qo4tb7y3p@gentoo.org>

Please unblock webob-1.8.1, you are the only library holding it back at
this point.  I don't see a way to submit code to the project so I cc'd
the project in launchpad.

https://bugs.launchpad.net/openstack-requirements/+bug/1765748

-- 
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From fungi at yuggoth.org  Tue May 15 16:56:20 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 15 May 2018 16:56:20 +0000
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <87bmdgswhv.fsf@meyer.lemoncheese.net>
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net>
Message-ID: <20180515165620.653br5e7mtzcr6r2@yuggoth.org>

On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
[...]
>> We're also talking about making a new kind of job which can continue to
>> run after it's "finished" so that you could use it to do something like
>> host a container registry that's used by other jobs running on the
>> change.  We don't have that feature yet, but if we did, would you prefer
>> to use that instead of the intermediate swift storage?

If the subsequent jobs depending on that one get nodes allocated
from the same provider, that could solve a lot of the potential
network performance risks as well.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From prometheanfire at gentoo.org  Tue May 15 17:05:39 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Tue, 15 May 2018 12:05:39 -0500
Subject: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate
In-Reply-To: 
References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org> 
Message-ID: <20180515170539.glwzuugg3ihec3x2@gentoo.org>

On 18-05-15 12:25:04, Zane Bitter wrote:
> On 13/05/18 13:22, Matthew Thode wrote:
> > This is a reminder to the projects called out that they are using old,
> > unmaintained and probably insecure libraries (it's been dead since
> > 2014).  Please migrate off to use the cryptography library.  We'd like
> > to drop pycrypto from requirements for rocky.
> > See also, the bug, which has most of you cc'd already.
> >
> > https://bugs.launchpad.net/openstack-requirements/+bug/1749574
> >
> > +-----------------+-----------------------------------------------------------------+------+-------------------------------+
> > | Repository      | Filename                                                        | Line | Text                          |
> > +-----------------+-----------------------------------------------------------------+------+-------------------------------+
> > | barbican        | requirements.txt                                                | 25   | pycrypto>=2.6 # Public Domain |
> > | daisycloud-core | code/daisy/requirements.txt                                     | 17   | pycrypto>=2.6 # Public Domain |
> > | freezer         | requirements.txt                                                | 21   | pycrypto>=2.6 # Public Domain |
> > | fuel-web        | nailgun/requirements.txt                                        | 24   | pycrypto>=2.6.1               |
> > | heat-cfnclient  | requirements.txt                                                | 2    | PyCrypto>=2.1.0               |
>
> AFAICT heat-cfnclient isn't actually using PyCrypto, even though it's
> listed in requirements.txt. The whole project is just a light wrapper
> around python-boto (though this wasn't always the case IIRC), so I
> suspect it's just relying on boto for all of the auth stuff.
>

Thanks for the notice, submitted a review to remove it.
https://review.openstack.org/568646

> > | pyghmi          | requirements.txt                                                | 1    | pycrypto>=2.6                 |
> > | rpm-packaging   | requirements.txt                                                | 189  | pycrypto>=2.6 # Public Domain |
> > | solum           | requirements.txt                                                | 24   | pycrypto>=2.6 # Public Domain |
> > | tatu            | requirements.txt                                                | 7    | pycrypto>=2.6.1               |
> > | tatu            | test-requirements.txt                                           | 7    | pycrypto>=2.6.1               |
> > | trove           | integration/scripts/files/requirements/fedora-requirements.txt | 30   | pycrypto>=2.6 # Public Domain |
> > | trove           | integration/scripts/files/requirements/ubuntu-requirements.txt | 29   | pycrypto>=2.6 # Public Domain |
> > | trove           | requirements.txt                                                | 47   | pycrypto>=2.6 # Public Domain |
> > +-----------------+-----------------------------------------------------------------+------+-------------------------------+
> >
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL: 

From corvus at inaugust.com  Tue May 15 17:28:14 2018
From: corvus at inaugust.com (James E. Blair)
Date: Tue, 15 May 2018 10:28:14 -0700
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <20180515165620.653br5e7mtzcr6r2@yuggoth.org> (Jeremy Stanley's message of "Tue, 15 May 2018 16:56:20 +0000")
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org>
Message-ID: <877eo4sua9.fsf@meyer.lemoncheese.net>

Jeremy Stanley writes:

> On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
> [...]
>> We're also talking about making a new kind of job which can continue to
>> run after it's "finished" so that you could use it to do something like
>> host a container registry that's used by other jobs running on the
>> change.  We don't have that feature yet, but if we did, would you prefer
>> to use that instead of the intermediate swift storage?
>
> If the subsequent jobs depending on that one get nodes allocated
> from the same provider, that could solve a lot of the potential
> network performance risks as well.

That's... tricky.  We're *also* looking at affinity for buildsets, and
I'm optimistic we'll end up with something there eventually, but that's
likely to be a more substantive change and probably won't happen as
soon.  I do agree it will be nice, especially for use cases like this.

-Jim

From emilien at redhat.com  Tue May 15 17:50:40 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 15 May 2018 10:50:40 -0700
Subject: [openstack-dev] [tripleo] The Weekly Owl - 21st Edition
Message-ID: 

Welcome to the twenty-first edition of a weekly update in TripleO world!
The goal is to provide a short reading (less than 5 minutes) to learn
what's new this week.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-May/130273.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Migration to Storyboard is scheduled for rocky-m2, please be aware of
its usage: https://docs.openstack.org/infra/manual/developers.html#development-workflow
+--> We have 3 more weeks until milestone 2! Check out the schedule:
https://releases.openstack.org/rocky/schedule.html

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Ruck is Matt and Rover is Sagi. Please let them know any new CI issue.
+--> centos 7.5 blockers were solved, now looking at how we can improve
centos testing and avoid gate downtime in the future
+--> Master promotion is 0 days, Queens is 6 days, Pike is 6 days and
Ocata is 6 days.
+--> Sprint themes are Upgrade CI (new jobs, forward looking release state
machine, voting jobs) and refactoring python-tempestconf for service
discovery.
+--> Discussion in progress around zuul v3 multi-staged check pipelines in
TripleO CI
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting

+-------------+
| Upgrades |
+-------------+

+--> Collaboration with CI team for upgrade jobs.
+--> Need reviews on FFU work, check the etherpad.
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+

+--> Support of image customization during upload in (good) progress.
+--> Efforts around the all-in-one installer, also in good progress.
+--> Preparing next deep dive:
https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+----------------------+
| config-download |
+----------------------+

+--> config-download status commands and workflows (need reviews)
+--> UI work still ongoing
+--> Major doc update: https://review.openstack.org/#/c/566606
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--------------+
| Integration |
+--------------+

+--> Need to add support for the NodeDataLookup parameter in the
"config-download" deployment mechanism (not started yet).
+--> Need review on https://review.openstack.org/#/c/563112/
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+

+--> Still working on Network Wizard.
+--> Finishing config-download integration
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+

+--> Custom validations spec ready for reviews:
https://review.openstack.org/#/c/393775/
+--> Mistral workflow plugin
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+

+--> Lots of reviews are needed, please check them out
+--> Workflows should now all use the tripleo.messaging.v1.send workflow
to send messages
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-----------+
| Security |
+-----------+

+--> Swift object encryption by default in the undercloud
+--> TLS by default for the overcloud
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+------------+
| Owl fact |
+------------+

Barn Owls swallow their prey whole—skin, bones, and all—and they eat up
to 1,000 mice each year.
Source: https://www.audubon.org/news/11-fun-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaosorior at gmail.com  Tue May 15 19:19:52 2018
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Tue, 15 May 2018 22:19:52 +0300
Subject: [openstack-dev] [tripleo] Encrypted swift volumes by default in the undercloud
Message-ID: 

Hello!

As part of the work from the Security Squad, we added the ability for the
containerized undercloud to encrypt the overcloud plans. This is done by
enabling Swift's encrypted volumes, which require barbican. Right now it's
turned off, but I would like to enable it by default [1]. What do you
folks think?

[1] https://review.openstack.org/#/c/567200/

BR
-- 
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From melwittt at gmail.com  Tue May 15 19:57:04 2018
From: melwittt at gmail.com (melanie witt)
Date: Tue, 15 May 2018 12:57:04 -0700
Subject: [openstack-dev] [nova] review runway status
In-Reply-To: 
References: 
Message-ID: <4c0617ea-ed16-3c2e-754c-5f222fb0dce0@gmail.com>

On Tue, 15 May 2018 14:27:12 +0800, Chen Ch Ji wrote:
> Thanks for the sharing. The z/VM driver spec review is marked as END
> DATE: 2018-05-15. Thanks, a couple folks helped a lot on the review,
> and we still need more review activity on the patch sets; can I apply
> to extend the end date for the runway?
We haven't done any extensions on end dates for blueprints in runways.
One of the main ideas of runways is to set a consistent time box for
items in runways and highlight a variety of blueprints throughout the
release cycle. We have other blueprints in the queue that are waiting
for their two week time box in a runway too.

Authors can add their blueprints back to the end of the queue if more
review time is needed and the blueprint will be added to a runway when
its turn arrives again. So please feel free to do that if more review
time is needed.

Best,
-melanie

From whayutin at redhat.com  Tue May 15 20:31:16 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 15 May 2018 14:31:16 -0600
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <877eo4sua9.fsf@meyer.lemoncheese.net>
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org> <877eo4sua9.fsf@meyer.lemoncheese.net>
Message-ID: 

On Tue, May 15, 2018 at 1:29 PM James E. Blair wrote:

> Jeremy Stanley writes:
>
> > On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
> > [...]
> >> We're also talking about making a new kind of job which can continue to
> >> run after it's "finished" so that you could use it to do something like
> >> host a container registry that's used by other jobs running on the
> >> change.  We don't have that feature yet, but if we did, would you prefer
> >> to use that instead of the intermediate swift storage?
> >
> > If the subsequent jobs depending on that one get nodes allocated
> > from the same provider, that could solve a lot of the potential
> > network performance risks as well.
>
> That's... tricky.  We're *also* looking at affinity for buildsets, and
> I'm optimistic we'll end up with something there eventually, but that's
> likely to be a more substantive change and probably won't happen as
> soon.  I do agree it will be nice, especially for use cases like this.
>
> -Jim
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

There is a lot here to unpack and discuss, but I really like the ideas
I'm seeing. Nice work Bogdan!
I've added it to the tripleo meeting agenda for next week so we can
continue socializing the idea and get feedback.
Thanks!

https://etherpad.openstack.org/p/tripleo-meeting-items
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com  Tue May 15 20:35:48 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 15 May 2018 14:35:48 -0600
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: 
Message-ID: 

On Mon, May 14, 2018 at 3:16 PM Sagi Shnaidman wrote:

> Hi, Bogdan
>
> I like the idea with the undercloud job. Actually if the undercloud job
> fails, I'd stop all other jobs, because it doesn't make sense to run
> them. Seeing the same failure in 10 jobs doesn't add too much. So maybe
> adding the undercloud job as a dependency for all multinode jobs would
> be a great idea. I think it's also worth checking how long it will
> delay jobs.
> Will all jobs wait until the undercloud job finishes? Or will they be
> aborted when the undercloud job fails?
>
> However, I'm very sceptical about the multinode containers and scenarios
> jobs, they could fail for very different reasons, like race conditions
> in the product or infra issues. Skipping some of them will lead to more
> rechecks from devs trying to discover all problems in a row, which will
> delay the development process significantly.
>
> Thanks
>

I agree on both counts w/ Sagi here.
Thanks Sagi

>
> On Mon, May 14, 2018 at 7:15 PM, Bogdan Dobrelya wrote:
>
>> An update for your review please folks
>>
>>> Bogdan Dobrelya writes:
>>>
>>>> Hello.
>>>> As Zuul documentation [0] explains, the names "check", "gate", and
>>>> "post" may be altered for more advanced pipelines. Is it doable to
>>>> introduce, for particular openstack projects, multiple check
>>>> stages/steps as check-1, check-2 and so on? And is it possible to make
>>>> the subsequent steps reuse environments that the previous steps
>>>> finished with?
>>>>
>>>> Narrowing down to tripleo CI scope, the problem I'd want us to solve
>>>> with this "virtual RFE", and using such multi-staged check pipelines,
>>>> is reducing (ideally, de-duplicating) some of the common steps for
>>>> existing CI jobs.
>>>
>>> What you're describing sounds more like a job graph within a pipeline.
>>> See:
>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies
>>> for how to configure a job to run only after another job has completed.
>>> There is also a facility to pass data between such jobs.
>>>
>>> ... (skipped) ...
>>>
>>> Creating a job graph to have one job use the results of the previous job
>>> can make sense in a lot of cases.  It doesn't always save *time*
>>> however.
>>>
>>> It's worth noting that in OpenStack's Zuul, we have made an explicit
>>> choice not to have long-running integration jobs depend on shorter pep8
>>> or tox jobs, and that's because we value developer time more than CPU
>>> time.  We would rather run all of the tests and return all of the
>>> results so a developer can fix all of the errors as quickly as possible,
>>> rather than forcing an iterative workflow where they have to fix all the
>>> whitespace issues before the CI system will tell them which actual tests
>>> broke.
>>>
>>> -Jim
>>
>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for
>> undercloud deployments vs upgrades testing (and some more). Given that
>> those undercloud jobs have not so high fail rates though, I think Emilien
>> is right in his comments and those would buy us nothing.
>>
>> From the other side, what do you think, folks, of making the
>> tripleo-ci-centos-7-3nodes-multinode depend on
>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite
>> failure-prone and long running, and is non-voting. It deploys (see
>> featureset configs [3]*) 3 nodes in HA fashion. And it seems to almost
>> never pass when containers-multinode fails - see the CI stats page [4].
>> I've found only 2 cases there of the opposite situation, when
>> containers-multinode fails but 3nodes-multinode passes. So cutting off
>> those future failures via the added dependency *would* buy us something
>> and allow other jobs to wait less to commence, at the reasonable price of
>> a somewhat extended time for the main zuul pipeline. I think it makes
>> sense, and that extended CI time will not exceed the RDO CI execution
>> times so much as to become a problem. WDYT?
>>
>> [0] https://review.openstack.org/#/c/568275/
>> [1] https://review.openstack.org/#/c/568278/
>> [2] https://review.openstack.org/#/c/568326/
>> [3] https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html
>> [4] http://tripleo.org/cistatus.html
>>
>> * ignore column 1, it's obsolete, all CI jobs now use configs download AFAICT...
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> --
> Best regards
> Sagi Shnaidman
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From whayutin at redhat.com  Tue May 15 20:52:26 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 15 May 2018 14:52:26 -0600
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <20180515154053.5udtbk2lubtj7w5n@yuggoth.org>
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <20180515154053.5udtbk2lubtj7w5n@yuggoth.org>
Message-ID: 

On Tue, May 15, 2018 at 11:42 AM Jeremy Stanley wrote:

> On 2018-05-15 17:31:07 +0200 (+0200), Bogdan Dobrelya wrote:
> [...]
> > * upload into a swift container, with an automatic expiration set, the
> > de-duplicated and compressed tarball created with something like:
> > # docker save $(docker images -q) | gzip -1 > all.tar.gz
> > (I expect it will be something like a 2G file)
> > * something similar for DLRN repos probably, I'm not an expert on this part.
> >
> > Then those stored artifacts would be picked up by the next step in the
> > graph, deploying undercloud and overcloud in a single step, like:
> > * fetch the swift containers with repos and container images
> [...]
>
> I do worry a little about network fragility here, as well as
> extremely variable performance. Randomly-selected job nodes could be
> shuffling those files halfway across the globe so either upload or
> download (or both) will experience high round-trip latency as well
> as potentially constrained throughput, packet loss,
> disconnects/interruptions and so on... all the things we deal with
> when trying to rely on the Internet, except magnified by the
> quantity of data being transferred about.
>
> Ultimately still worth trying, I think, but just keep in mind it may
> introduce more issues than it solves.
> --
> Jeremy Stanley

Question... If we were to build or update the containers that need an update (and I'm assuming the overcloud images here as well) in a parent job, would the content then sync to a swift file server at a central point for ALL the openstack providers, or would it be sync'd to each cloud?

Not to throw too much cold water on the idea, but... I wonder if the time to upload and download the containers and images would significantly reduce any advantage this process has.
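For concreteness, the upload step quoted above could look roughly like the following with python-swiftclient. This is only a sketch on my part, not actual CI tooling; the container name, object name, seven-day expiry, and environment-based credentials are all illustrative assumptions.

    # Rough sketch of the proposed "upload with automatic expiration" step.
    # Assumes python-swiftclient and OS_* credentials in the environment;
    # all names here are illustrative only.
    import os

    from swiftclient import client as swift_client

    conn = swift_client.Connection(
        authurl=os.environ['OS_AUTH_URL'],
        user=os.environ['OS_USERNAME'],
        key=os.environ['OS_PASSWORD'],
        os_options={'project_name': os.environ['OS_PROJECT_NAME']},
        auth_version='3',
    )

    conn.put_container('ci-artifacts')
    with open('all.tar.gz', 'rb') as tarball:
        # X-Delete-After makes Swift expire the object automatically,
        # so stale artifacts clean themselves up without extra jobs.
        conn.put_object(
            'ci-artifacts',
            'containers/all.tar.gz',
            contents=tarball,
            headers={'X-Delete-After': str(7 * 24 * 3600)},
        )

A consuming job would then get_object() the tarball and docker load it, which is exactly where the upload/download time questions above start to matter.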
Although centralizing the container updates and images on a per check job basis sounds attractive, I get the sense we need to be very careful and fully vet the idea. At the moment it's also an optimization (maybe) so I don't see this as a very high priority atm.

Let's bring the discussion to the tripleo meeting next week. Thanks all!

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Tue May 15 21:01:21 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 15 May 2018 21:01:21 +0000
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <20180515154053.5udtbk2lubtj7w5n@yuggoth.org>
Message-ID: <20180515210121.dpmxvauhcjdvpjw2@yuggoth.org>

On 2018-05-15 14:52:26 -0600 (-0600), Wesley Hayutin wrote:
[...]
> The content would then sync to a swift file server on a central
> point for ALL the openstack providers or it would be sync'd to
> each cloud?
[...]

We haven't previously requested that all the Infra provider donors support Swift, and even for the ones who do I don't think we can count on it being available in every region where we run jobs. I assumed that implementation would be a single (central) Swift tenant provided by one of our donors who has it, thus the reason for my performance concerns at "large" artifact sizes.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From anlin.kong at gmail.com  Tue May 15 23:12:01 2018
From: anlin.kong at gmail.com (Lingxian Kong)
Date: Wed, 16 May 2018 11:12:01 +1200
Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime
In-Reply-To: <1526395462-sup-8795@lrrr.local>
References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> <1526395462-sup-8795@lrrr.local>
Message-ID: 

Hi,

Maybe I missed the original discussion. I found that the 'mutable' configuration implementation relies on oslo.service, but is there any guide for the projects using cotyledon instead?

Cheers,
Lingxian Kong

On Wed, May 16, 2018 at 2:46 AM Doug Hellmann wrote:

> Excerpts from Lance Bragstad's message of 2018-05-14 18:45:49 -0500:
> >
> > On 05/14/2018 05:46 PM, Doug Hellmann wrote:
> > > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500:
> > >> On 05/14/2018 02:24 PM, Doug Hellmann wrote:
> > >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500:
> > >>>> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote:
> > >>>>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann <doug at doughellmann.com> wrote:
> > >>>>>
> > >>>>> Both of those are good ideas.
> > >>>>>
> > >>>>> Agree.
I like the socket idea a bit more as I can imagine some > > >>>>> operators don't want config file changes automatically applied. Do > we > > >>>>> want to choose one to standardize on or allow each project (or > > >>>>> operators, via config) the choice? > > >>>> Just to recap, keystone would be listening for when it's > configuration > > >>>> file changes, and reinitialize the logger if the logging settings > > >>>> changed, correct? > > >>> Sort of. > > >>> > > >>> Keystone would need to do something to tell oslo.config to re-load > the > > >>> config files. In services that rely on oslo.service, this is handled > > >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so > > >>> for Keystone you would want to do something similar. > > >>> > > >>> That is, you want to wait for an explicit notification from the > operator > > >>> that you should reload the config, and not just watch for the file to > > >>> change. We could talk about using file modification as a trigger, but > > >>> reloading is something that may need to be staged across several > > >>> services in order so we chose for the first version to make the > trigger > > >>> explicit. Relying on watching files will also fail when the modified > > >>> data is not in a file (which will be possible when we finish the > driver > > >>> work described in > > >>> > http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html > ). > > >> Hmm, these are good points. I wonder if just converting to use > > >> oslo.service would be a lower bar then? > > > I thought keystone had moved away from that direction toward deploying > > > only within Apache? I may be out of touch, or have misunderstood > > > something, though. > > > > Oh - never mind... For some reason I was thinking there was a way to use > > oslo.service and Apache. > > > > Either way, I'll do some more digging before tomorrow. I have this as a > > topic on keystone's meeting agenda to go through our options [0]. If we > > do come up with something that doesn't involve intercepting signals > > (specifically for the reason noted by Kristi and Jim in the mod_wsgi > > documentation), should the community goal be updated to include that > > option? Just thinking that we can't be the only service in this position. > > I think we've left the implementation details up to the project > teams, for just that reason. That said, it would be good to document > how you do it (either formally or with a mailing list thread). > > And FWIW, if what you choose to do is monitor a file, that's fine > as a trigger. I suggest not using the configuration file itself, > though, for the reasons mentioned earlier. > > Doug > > PS - I wonder how Apache deals with reloading its own configuration > file. Is there some sort of hook you could use? 
> > > [0] https://etherpad.openstack.org/p/keystone-weekly-meeting
> > >
> > > Doug
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rico.lin.guanyu at gmail.com  Wed May 16 06:18:09 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 16 May 2018 14:18:09 +0800
Subject: [openstack-dev] [Openstack-operators][heat][all] Heat now migrated to StoryBoard!!
In-Reply-To: 
References: 
Message-ID: 

Bumping this one last time.

Hi all,
As we keep adding more info to the migration guideline [1], you might like to take another look. We do hope it will make things easier for you. If not, please find me on irc or by mail.

[1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info

2018-05-10 18:42 GMT+08:00 Rico Lin :

> Hi all,
> As we keep adding more info to the migration guideline [1], you might like
> to take another look. We do hope it will make things easier for you. If
> not, please find me on irc or by mail.
>
> [1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info
>
> Here's a quick hint for you: your bug id is exactly your story id.
>
> 2018-05-07 18:27 GMT+08:00 Rico Lin :
>
>> Hi all,
>>
>> I have added more information to this guideline in [1].
>> Please take a look at [1] to see what's been updated. We will likely keep
>> updating that etherpad as new Q&A or issues are found.
>>
>> We will keep trying to make this process as painless for you as possible,
>> so please bear with us for now, and sorry for any inconvenience.
>>
>> *[1] https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info*
>>
>> 2018-05-05 12:15 GMT+08:00 Rico Lin :
>>
>>> looping heat-dashboard team
>>>
>>> 2018-05-05 12:02 GMT+08:00 Rico Lin :
>>>
>>>> Dear all Heat members and friends
>>>>
>>>> As you might be aware, OpenStack projects are scheduled to migrate ([5])
>>>> from Launchpad to StoryBoard [1].
>>>> For those who would like to know where to file a bug/blueprint, here is
>>>> a heads up for you.
>>>>
>>>> *What's StoryBoard?*
>>>> StoryBoard is a cross-project task-tracker. It contains a number of
>>>> ``projects``; each project contains a number of ``stories``, which you
>>>> can think of as issues or blueprints. Each story in turn contains one
>>>> or multiple ``tasks`` (tasks break a story down into the steps to
>>>> resolve/implement it). To learn more about StoryBoard or how to make a
>>>> good story, you can reference [6].
>>>>
>>>> *How to file a bug?*
>>>> This is actually simple: use your current ubuntu-one id to access
>>>> StoryBoard. Then find the corresponding project in [2] and create a
>>>> story in it with a description of your issue. We should try to create
>>>> tasks to reference from patches in Gerrit.
>>>>
>>>> *How to work on a spec (blueprint)?*
>>>> File a story like you used to file a Blueprint. Create tasks for your
>>>> plan.
>>>> Also, you might want to create a task for adding a spec (in the
>>>> heat-spec repo) if your blueprint needs documentation to explain it.
>>>> I will leave the current blueprint page open, so if you would like to
>>>> create a story from a BP, you can still get the information. Right now
>>>> we will start working with a task-driven workflow, so BPs should be
>>>> treated no differently than a bug in StoryBoard (which is a story with
>>>> many tasks).
>>>>
>>>> *Where should I put my story?*
>>>> We migrated all heat sub-projects to StoryBoard to try to keep the
>>>> impact on whatever you're doing as small as possible. However, if you
>>>> plan to create a new story, *please create it under the heat project
>>>> [4]* and tag it with what it might affect (like python-heatclient,
>>>> heat-dashboard, heat-agents). We do hope to let users focus their
>>>> stories in one place so all stories will get better attention and
>>>> project maintainers don't need to go around separate places to find
>>>> them.
>>>>
>>>> *How to connect from Gerrit to StoryBoard?*
>>>> We usually use the following keys to reference Launchpad:
>>>> Closes-Bug: #######
>>>> Partial-Bug: #######
>>>> Related-Bug: #######
>>>>
>>>> Now in StoryBoard, you can use the following keys:
>>>> Task: ######
>>>> Story: ######
>>>> You can find more info in [3].
>>>>
>>>> *What do I need to do for my existing bugs/BPs?*
>>>> Your bug is automatically migrated to StoryBoard; however, the
>>>> references in your patches were not, so you need to change your commit
>>>> message to replace the old Launchpad links with new StoryBoard links.
>>>>
>>>> *Do we still need Launchpad after all this migration is done?*
>>>> As planned, we won't need Launchpad for heat anymore once we are done
>>>> with migrating. We will forbid filing new bugs/BPs in Launchpad, and
>>>> will try to provide as much new information as possible. Hopefully, we
>>>> can make everyone happy. For bugs newly created during/after the
>>>> migration, don't worry: we will disallow creating new bugs/BPs and do
>>>> a second migration, so we won't miss yours.
>>>>
>>>> [1] https://storyboard.openstack.org/
>>>> [2] https://storyboard.openstack.org/#!/project_group/82
>>>> [3] https://docs.openstack.org/infra/manual/developers.html#development-workflow
>>>> [4] https://storyboard.openstack.org/#!/project/989
>>>> [5] https://docs.openstack.org/infra/storyboard/migration.html
>>>> [6] https://docs.openstack.org/infra/storyboard/gui/tasks_stories_tags.html#what-is-a-story
>>>>
>>>> --
>>>> May The Force of OpenStack Be With You,
>>>> *Rico Lin* irc: ricolin
>>>
>>> --
>>> May The Force of OpenStack Be With You,
>>> *Rico Lin* irc: ricolin
>>
>> --
>> May The Force of OpenStack Be With You,
>> *Rico Lin* irc: ricolin
>
> --
> May The Force of OpenStack Be With You,
> *Rico Lin* irc: ricolin

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jichenjc at cn.ibm.com  Wed May 16 06:34:49 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 16 May 2018 14:34:49 +0800
Subject: [openstack-dev] [zVMCloudConnector][python-zvm-sdk][requirements] Unblock webob-1.8.1
In-Reply-To: <20180515165545.kahgul6qo4tb7y3p@gentoo.org>
References: <20180515165545.kahgul6qo4tb7y3p@gentoo.org>
Message-ID: 

Thanks for the reminder. We updated the code to 1.1.1, and the current info can be found at
https://pypi.org/project/zVMCloudConnector/#description

The detailed code can be found here; you can see there is no longer a block on webob:
https://github.com/mfcloud/python-zvm-sdk/blob/master/requirements.txt#L7

Please share what additional work needs to be done, thanks.

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Matthew Thode 
To: openstack-dev at lists.openstack.org
Date: 05/16/2018 01:01 AM
Subject: [openstack-dev] [zVMCloudConnector][python-zvm-sdk][requirements] Unblock webob-1.8.1

Please unblock webob-1.8.1, you are the only library holding it back at this point. I don't see a way to submit code to the project so I cc'd the project in launchpad.

https://bugs.launchpad.net/openstack-requirements/+bug/1765748

--
Matthew Thode (prometheanfire)
[attachment "signature.asc" deleted by Chen CH Ji/China/IBM]
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL: 

From thierry at openstack.org  Wed May 16 07:39:08 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 16 May 2018 09:39:08 +0200
Subject: [openstack-dev] [tc] [all] TC Report 18-20
In-Reply-To: <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie>
References: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie> <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch> <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie>
Message-ID: 

Graham Hayes wrote:
> Any additional background on why we allowed LCOO to operate like this
> would help a lot.

We can't prevent any group of organizations from working in any way they prefer -- we can, however, deny them the right to be called an OpenStack workgroup if they fail at openly collaborating. We can raise the topic, but in the end it is a User Committee decision, since the LCOO is a User Committee-blessed working group.
Source: https://governance.openstack.org/uc/

--
Thierry Carrez (ttx)

From bdobreli at redhat.com  Wed May 16 09:31:30 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Wed, 16 May 2018 11:31:30 +0200
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org> <877eo4sua9.fsf@meyer.lemoncheese.net>
Message-ID: 

On 5/15/18 10:31 PM, Wesley Hayutin wrote:
>
> On Tue, May 15, 2018 at 1:29 PM James E. Blair wrote:
>
>     Jeremy Stanley writes:
>
>     > On 2018-05-15 09:40:28 -0700 (-0700), James E. Blair wrote:
>     > [...]
>     >> We're also talking about making a new kind of job which can continue to
>     >> run after it's "finished" so that you could use it to do something like
>     >> host a container registry that's used by other jobs running on the
>     >> change. We don't have that feature yet, but if we did, would you prefer
>     >> to use that instead of the intermediate swift storage?
>     >
>     > If the subsequent jobs depending on that one get nodes allocated
>     > from the same provider, that could solve a lot of the potential
>     > network performance risks as well.
>
>     That's... tricky. We're *also* looking at affinity for buildsets, and
>     I'm optimistic we'll end up with something there eventually, but that's
>     likely to be a more substantive change and probably won't happen as
>     soon. I do agree it will be nice, especially for use cases like this.
>
>     -Jim
>
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> There is a lot here to unpack and discuss, but I really like the ideas
> I'm seeing.
> Nice work Bogdan! I've added it to the tripleo meeting agenda for next
> week so we can continue socializing the idea and get feedback.
>
> Thanks!
>
> https://etherpad.openstack.org/p/tripleo-meeting-items

Thank you for the feedback, folks. There are a lot of technical caveats, right. I'm pretty sure though that with broader containers adoption, openstack infra will catch up eventually, so all our upstream CI jobs could benefit from affinity-based scheduling and co-located data being available for subsequent build steps.
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From jpena at redhat.com  Wed May 16 09:53:14 2018
From: jpena at redhat.com (Javier Pena)
Date: Wed, 16 May 2018 05:53:14 -0400 (EDT)
Subject: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate
In-Reply-To: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
Message-ID: <570878291.2031301.1526464394082.JavaMail.zimbra@redhat.com>

----- Original Message -----
> This is a reminder to the projects called out that they are using old,
> unmaintained and probably insecure libraries (it's been dead since
> 2014). Please migrate off to use the cryptography library. We'd like
> to drop pycrypto from requirements for rocky.
>
> See also, the bug, which has most of you cc'd already.
>
> https://bugs.launchpad.net/openstack-requirements/+bug/1749574

In the rpm-packaging case, the requirements.txt file is not actually a list of requirements for the project, but a copy of the requirements project upper-constraints.txt file (a bit outdated now).

Regards,
Javier

> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> | Repository      | Filename                                                        | Line | Text                           |
> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> | barbican        | requirements.txt                                                | 25   | pycrypto>=2.6  # Public Domain |
> | daisycloud-core | code/daisy/requirements.txt                                     | 17   | pycrypto>=2.6  # Public Domain |
> | freezer         | requirements.txt                                                | 21   | pycrypto>=2.6  # Public Domain |
> | fuel-web        | nailgun/requirements.txt                                        | 24   | pycrypto>=2.6.1                |
> | heat-cfnclient  | requirements.txt                                                | 2    | PyCrypto>=2.1.0                |
> | pyghmi          | requirements.txt                                                | 1    | pycrypto>=2.6                  |
> | rpm-packaging   | requirements.txt                                                | 189  | pycrypto>=2.6  # Public Domain |
> | solum           | requirements.txt                                                | 24   | pycrypto>=2.6  # Public Domain |
> | tatu            | requirements.txt                                                | 7    | pycrypto>=2.6.1                |
> | tatu            | test-requirements.txt                                           | 7    | pycrypto>=2.6.1                |
> | trove           | integration/scripts/files/requirements/fedora-requirements.txt | 30   | pycrypto>=2.6  # Public Domain |
> | trove           | integration/scripts/files/requirements/ubuntu-requirements.txt | 29   | pycrypto>=2.6  # Public Domain |
> | trove           | requirements.txt                                                | 47   | pycrypto>=2.6  # Public Domain |
> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
>
> --
> Matthew Thode (prometheanfire)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhipengh512 at gmail.com  Wed May 16 10:15:02 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 16 May 2018 18:15:02 +0800
Subject: [openstack-dev] [cyborg]Weekly Team Meeting 2018.05.16
Message-ID: 

Hi team,

As usual we will have our weekly meeting starting at UTC 1400 on #openstack-cyborg, with the initial agenda as follows:

1. summit prep
2. sub team report
3. critical rocky spec review
4. open patches

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From superuser151093 at gmail.com  Wed May 16 10:28:53 2018
From: superuser151093 at gmail.com (super user)
Date: Wed, 16 May 2018 19:28:53 +0900
Subject: [openstack-dev] [Designate] Plan for OSM
In-Reply-To: <5fef90b8b7ff4dae8c7b280fade5cfb2@G07SGEXCMSGPS03.g07.fujitsu.local>
References: <5fef90b8b7ff4dae8c7b280fade5cfb2@G07SGEXCMSGPS03.g07.fujitsu.local>
Message-ID: 

I thought it was Open Source MANO.

On Thu, May 3, 2018, 4:45 PM duonghq at vn.fujitsu.com wrote:

> Hi Ben,
>
> >>On 04/25/2018 11:31 PM, daidv at vn.fujitsu.com wrote:
> >> Hi folks,
> >>
> >> We tested and completed our process with the OVO migration in the Queens
> >> cycle. Now, we can continue with the OSM implementation for Designate.
> >> Actually, we have pushed some patches related to OSM [1] and it's ready
> >> to review.
>
> > Out of curiosity, what does OSM stand for? Based on the patches it
> > seems related to rolling upgrades, but a quick glance at them doesn't
> > make it obvious to me what's going on. Thanks.
>
> OSM stands for Online Schema Migration, which means that we can migrate
> the database schema without downtime for the service.
>
> > -Ben
>
> Best regards,
>
> Ha Quang Duong (Mr.)
> PODC - Fujitsu Vietnam Ltd.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Wed May 16 10:38:59 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Wed, 16 May 2018 12:38:59 +0200
Subject: [openstack-dev] [neutron] Next Neutron CI meeting canceled
Message-ID: <988B8041-69C6-4D8D-B816-AF5F86682BD3@redhat.com>

Hi,

As the summit is next week, the neutron CI meeting on 22.05 is cancelled. The next meeting will be held as normal on 29.05.

—
Slawek Kaplonski
Senior software engineer
Red Hat

From skaplons at redhat.com  Wed May 16 10:39:58 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Wed, 16 May 2018 12:39:58 +0200
Subject: [openstack-dev] [neutron] Next Neutron QoS meeting cancelled
Message-ID: 

Hi,

As the summit is next week, the neutron QoS meeting on 22.05 is cancelled.
The next meeting will be held as normal on 05.06.2018.

—
Slawek Kaplonski
Senior software engineer
Red Hat

From eumel at arcor.de  Wed May 16 10:43:06 2018
From: eumel at arcor.de (Frank Kloeker)
Date: Wed, 16 May 2018 12:43:06 +0200
Subject: [openstack-dev] [I18n] [Docs] Forum session Vancouver
Message-ID: <8d2e092118bf02c028c17151b8a34af5@arcor.de>

Good morning,

Just a quick note while packing the suitcase: we have a Docs/I18n Forum session on Monday the 21st, 13:30, directly after lunch [1]. Take the chance to discuss topics about project onboarding with translation or documentation, usage of translated documents, or tools. Or just come to say Hello :-)

Looking forward to seeing you there!

kind regards

Frank (PTL I18n)

[1] https://etherpad.openstack.org/p/docs-i18n-project-onboarding-vancouver

From dtantsur at redhat.com  Wed May 16 11:16:09 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Wed, 16 May 2018 13:16:09 +0200
Subject: [openstack-dev] [tripleo] Encrypted swift volumes by default in the undercloud
In-Reply-To: 
References: 
Message-ID: 

Hi,

On 05/15/2018 09:19 PM, Juan Antonio Osorio wrote:
> Hello!
>
> As part of the work from the Security Squad, we added the ability for the
> containerized undercloud to encrypt the overcloud plans. This is done by
> enabling Swift's encrypted volumes, which require barbican. Right now it's
> turned off, but I would like to enable it by default [1]. What do you folks
> think?

I like the idea, but I'm a bit skeptical about adding a new service to the already quite bloated undercloud. Why is barbican a hard requirement here?

>
> [1] https://review.openstack.org/#/c/567200/
>
> BR
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosorior at gmail.com
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From fungi at yuggoth.org  Wed May 16 12:17:03 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 16 May 2018 12:17:03 +0000
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org> <877eo4sua9.fsf@meyer.lemoncheese.net>
Message-ID: <20180516121703.vh2s2epm66g2wmk6@yuggoth.org>

On 2018-05-16 11:31:30 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> I'm pretty sure though that with broader containers adoption,
> openstack infra will catch up eventually, so all our upstream CI
> jobs could benefit from affinity-based scheduling and co-located
> data being available for subsequent build steps.

I still don't see what it has to do with containers. We've known these were potentially useful features long before container-oriented projects came into the picture. We simply focused on implementing other, even more generally-applicable features first.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From thierry at openstack.org  Wed May 16 12:19:29 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 16 May 2018 14:19:29 +0200
Subject: [openstack-dev] [ptg] Early bird pricing ends EOD tomorrow, act now
Message-ID: 

Friendly reminder that the "early bird" pricing for the next PTG in Denver in September is ending at 6:59 UTC Friday (which translates to Thursday, 23:59 Pacific time).

If you are concerned about the price increase and want to snatch the $199 ticket deal before the price jumps to $399, act now!

More details: https://www.eventbrite.com/e/project-teams-gathering-denver-2018-tickets-45296703660

--
Thierry Carrez (ttx)

From pkovar at redhat.com  Wed May 16 12:29:54 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 16 May 2018 14:29:54 +0200
Subject: [openstack-dev] [docs] Documentation meeting today
Message-ID: <20180516142954.6fcbefc0ef2d17248a98c385@redhat.com>

Hi all,

The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled. For more details, see the meeting page:

https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

Cheers,
pk

From bdobreli at redhat.com  Wed May 16 13:17:52 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Wed, 16 May 2018 15:17:52 +0200
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: <20180516121703.vh2s2epm66g2wmk6@yuggoth.org>
References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org> <877eo4sua9.fsf@meyer.lemoncheese.net> <20180516121703.vh2s2epm66g2wmk6@yuggoth.org>
Message-ID: 

On 5/16/18 2:17 PM, Jeremy Stanley wrote:
> On 2018-05-16 11:31:30 +0200 (+0200), Bogdan Dobrelya wrote:
> [...]
>> I'm pretty sure though that with broader containers adoption,
>> openstack infra will catch up eventually, so all our upstream CI
>> jobs could benefit from affinity-based scheduling and co-located
>> data being available for subsequent build steps.
>
> I still don't see what it has to do with containers. We've known

My understanding, and I may be totally wrong, is that unlike packages and repos (not counting OSTree [0]), containers use layers and can be exported into tarballs with built-in de-duplication. This makes the idea of tossing those tarballs around much more attractive than doing something similar with package repositories. Of course container images can be pre-built into nodepool images, just like packages, so CI users can rebuild on top with fewer changes brought into new layers, which is another nice-to-have option by the way.

[0] https://rpm-ostree.readthedocs.io/en/latest/

> these were potentially useful features long before
> container-oriented projects came into the picture. We simply focused
> on implementing other, even more generally-applicable features
> first.

Right, I think this only confirms that it *does* have something to do with containers, and priorities for containerized use cases will follow containers adoption trends. If everyone one day suddenly asks for nodepool images with the latest kolla containers injected, for example.
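To see the layer sharing concretely, here is a quick check with the docker CLI, sketched in Python with purely illustrative image names (my own toy example, not project tooling). It counts how many layer references a set of images resolve to versus how many unique layers would actually be stored, or exported once by a combined `docker save`:

    import json
    import subprocess

    def layer_digests(image):
        # Ask docker for the content-addressed layer digests of an image.
        out = subprocess.check_output(
            ['docker', 'image', 'inspect', image,
             '--format', '{{json .RootFS.Layers}}'])
        return json.loads(out)

    # Example image names only; substitute whatever the job actually built.
    images = ['centos:7', 'example/nova-api:latest', 'example/neutron-server:latest']
    refs = [layer for image in images for layer in layer_digests(image)]
    print('%d layer references, %d unique layers actually stored'
          % (len(refs), len(set(refs))))

Images built from the same base will typically share most of their layers, which is where the de-duplication win comes from.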
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From doug at doughellmann.com Wed May 16 13:55:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 16 May 2018 09:55:11 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> <1526395462-sup-8795@lrrr.local> Message-ID: <1526478761-sup-7228@lrrr.local> Excerpts from Lingxian Kong's message of 2018-05-16 11:12:01 +1200: > Hi, > > Maybe I missed the original discussion, I found the 'mutable' configuration > implementation relies on oslo.service, but is there any guide for the > projects using cotyledon instead? oslo.service implements the signal handler natively, but the feature does not rely on oslo.service. The method in oslo.config that does the work makes no assumptions about what triggers it. We did this on purpose to support projects that do not use oslo.service. I don't know enough about cotyledon to tell you how to do it, but you need to set up a signal handler so that SIGHUP invokes the mutate_config_files() method of the ConfigOpts instance being used by the application. Doug > > Cheers, > Lingxian Kong > > On Wed, May 16, 2018 at 2:46 AM Doug Hellmann wrote: > > > Excerpts from Lance Bragstad's message of 2018-05-14 18:45:49 -0500: > > > > > > On 05/14/2018 05:46 PM, Doug Hellmann wrote: > > > > Excerpts from Lance Bragstad's message of 2018-05-14 15:20:42 -0500: > > > >> On 05/14/2018 02:24 PM, Doug Hellmann wrote: > > > >>> Excerpts from Lance Bragstad's message of 2018-05-14 13:13:51 -0500: > > > >>>> On 03/19/2018 09:22 AM, Jim Rollenhagen wrote: > > > >>>>> On Sat, Mar 17, 2018 at 9:49 PM, Doug Hellmann < > > doug at doughellmann.com > > > >>>>> > wrote: > > > >>>>> > > > >>>>> Both of those are good ideas. > > > >>>>> > > > >>>>> > > > >>>>> Agree. I like the socket idea a bit more as I can imagine some > > > >>>>> operators don't want config file changes automatically applied. Do > > we > > > >>>>> want to choose one to standardize on or allow each project (or > > > >>>>> operators, via config) the choice? > > > >>>> Just to recap, keystone would be listening for when it's > > configuration > > > >>>> file changes, and reinitialize the logger if the logging settings > > > >>>> changed, correct? > > > >>> Sort of. > > > >>> > > > >>> Keystone would need to do something to tell oslo.config to re-load > > the > > > >>> config files. In services that rely on oslo.service, this is handled > > > >>> with a SIGHUP handler that calls ConfigOpts.mutate_config_files(), so > > > >>> for Keystone you would want to do something similar. > > > >>> > > > >>> That is, you want to wait for an explicit notification from the > > operator > > > >>> that you should reload the config, and not just watch for the file to > > > >>> change. 
We could talk about using file modification as a trigger, but > > > >>> reloading is something that may need to be staged across several > > > >>> services in order so we chose for the first version to make the > > trigger > > > >>> explicit. Relying on watching files will also fail when the modified > > > >>> data is not in a file (which will be possible when we finish the > > driver > > > >>> work described in > > > >>> > > http://specs.openstack.org/openstack/oslo-specs/specs/queens/oslo-config-drivers.html > > ). > > > >> Hmm, these are good points. I wonder if just converting to use > > > >> oslo.service would be a lower bar then? > > > > I thought keystone had moved away from that direction toward deploying > > > > only within Apache? I may be out of touch, or have misunderstood > > > > something, though. > > > > > > Oh - never mind... For some reason I was thinking there was a way to use > > > oslo.service and Apache. > > > > > > Either way, I'll do some more digging before tomorrow. I have this as a > > > topic on keystone's meeting agenda to go through our options [0]. If we > > > do come up with something that doesn't involve intercepting signals > > > (specifically for the reason noted by Kristi and Jim in the mod_wsgi > > > documentation), should the community goal be updated to include that > > > option? Just thinking that we can't be the only service in this position. > > > > I think we've left the implementation details up to the project > > teams, for just that reason. That said, it would be good to document > > how you do it (either formally or with a mailing list thread). > > > > And FWIW, if what you choose to do is monitor a file, that's fine > > as a trigger. I suggest not using the configuration file itself, > > though, for the reasons mentioned earlier. > > > > Doug > > > > PS - I wonder how Apache deals with reloading its own configuration > > file. Is there some sort of hook you could use? 
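For a service that does not use oslo.service, the wiring Doug describes comes down to something like this minimal sketch. The CONF object here stands in for whatever ConfigOpts instance the application already uses; registering the handler from your own service code (rather than through cotyledon itself) is an assumption on my part, not documented cotyledon guidance:

    import signal

    from oslo_config import cfg

    CONF = cfg.CONF  # placeholder for the application's ConfigOpts instance

    def _mutate_config(signum, frame):
        # Re-reads the configuration files and applies any options that
        # were registered with mutable=True (such as 'debug'), logging
        # each change that takes effect.
        CONF.mutate_config_files()

    signal.signal(signal.SIGHUP, _mutate_config)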
> > > > > > > > [0] https://etherpad.openstack.org/p/keystone-weekly-meeting > > > > > > > > > > > Doug > > > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From jim at jimrollenhagen.com Wed May 16 14:23:00 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 16 May 2018 10:23:00 -0400 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: <1526478761-sup-7228@lrrr.local> References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> <1526395462-sup-8795@lrrr.local> <1526478761-sup-7228@lrrr.local> Message-ID: On Wed, May 16, 2018 at 9:55 AM, Doug Hellmann wrote: > Excerpts from Lingxian Kong's message of 2018-05-16 11:12:01 +1200: > > Hi, > > > > Maybe I missed the original discussion, I found the 'mutable' > configuration > > implementation relies on oslo.service, but is there any guide for the > > projects using cotyledon instead? > > oslo.service implements the signal handler natively, but the feature > does not rely on oslo.service. The method in oslo.config that does the > work makes no assumptions about what triggers it. We did this on purpose > to support projects that do not use oslo.service. > > I don't know enough about cotyledon to tell you how to do it, but you > need to set up a signal handler so that SIGHUP invokes the > mutate_config_files() method of the ConfigOpts instance being used by > the application. > This was asked in another thread, see my reply :) http://lists.openstack.org/pipermail/openstack-dev/2018-March/128797.html // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From lebre.adrien at free.fr Wed May 16 15:05:30 2018 From: lebre.adrien at free.fr (lebre.adrien at free.fr) Date: Wed, 16 May 2018 17:05:30 +0200 (CEST) Subject: [openstack-dev] [Openstack-sigs] [SIG][Edge-computing][FEMDC] Wed. 16 May - FEMDC IRC Meeting 15:00 UTC In-Reply-To: Message-ID: <1910573070.31593773.1526483130740.JavaMail.root@zimbra29-e5.priv.proxad.net> s/#edge-computing-irc/#edge-computing-group Sorry for the typo. The meeting starts now. ad_ri3n_ ----- Mail original ----- > De: "Dimitri Pertin" > À: openstack-dev at lists.openstack.org > Cc: openstack-sigs at lists.openstack.org, edge-computing at lists.openstack.org > Envoyé: Mardi 15 Mai 2018 17:16:22 > Objet: [Openstack-sigs] [SIG][Edge-computing][openstack-dev][FEMDC] Wed. 16 May - FEMDC IRC Meeting 15:00 UTC > > Dear all, > > Here is a gentle reminder regarding the FEMDC meeting that was > postponed > from last week to tomorrow: May, the 16th at 15:00 UTC. 
> > As a consequence, the meeting will be held on #edge-computing-irc
> >
> > This meeting will focus on the preparation of the Vancouver summit
> > (presentations, F2F sessions, ...). You can already check and fill this
> > pad with your wishes/ideas:
> > https://etherpad.openstack.org/p/FEMDC_Vancouver
> >
> > As usual, a draft of the agenda is available at line 550 and you are
> > very welcome to add any item:
> > https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018
> >
> > Best regards,
> >
> > Dimitri
> >
> > -------- Forwarded Message --------
> > Subject: [Edge-computing] [FEMDC] IRC meeting postponed to next Wednesday
> > Date: Wed, 9 May 2018 15:50:45 +0200 (CEST)
> > From: lebre.adrien at free.fr
> > To: OpenStack Development Mailing List (not for usage questions), openstack-sigs at lists.openstack.org, edge-computing at lists.openstack.org
> >
> > Dear all,
> > Neither Paul-Andre nor I can chair the meeting today, so we propose to
> > postpone it for one week. The agenda will be delivered soon, but you can
> > consider that the next meeting will focus on the preparation of the
> > Vancouver summit (presentations, F2F meetings...).
> > Best regards, ad_ri3n_
> >
> > _______________________________________________
> > Edge-computing mailing list
> > Edge-computing at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing
> >
> > _______________________________________________
> > openstack-sigs mailing list
> > openstack-sigs at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs

From miguel at mlavalle.com  Wed May 16 15:18:29 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 16 May 2018 10:18:29 -0500
Subject: [openstack-dev] [neutron] Neutron weekly meeting canceled on May 22nd
Message-ID: 

Dear Neutron Team,

Due to the OpenStack Summit in Vancouver next week, we will cancel the team meeting scheduled on May 22nd at 1400UTC. We will resume our meetings on May 28th at 2100UTC.

Best regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Wed May 16 15:25:17 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 16 May 2018 15:25:17 +0000
Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal
In-Reply-To: 
References: <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <878t8luh3o.fsf@meyer.lemoncheese.net> <9efdab28-de9c-ad94-57c8-616d557b2205@redhat.com> <87bmdgswhv.fsf@meyer.lemoncheese.net> <20180515165620.653br5e7mtzcr6r2@yuggoth.org> <877eo4sua9.fsf@meyer.lemoncheese.net> <20180516121703.vh2s2epm66g2wmk6@yuggoth.org>
Message-ID: <20180516152517.6pf4yejuz2gdbzrw@yuggoth.org>

On 2018-05-16 15:17:52 +0200 (+0200), Bogdan Dobrelya wrote:
[...]
> My understanding, and I may be totally wrong, is that unlike
> packages and repos (not counting OSTree [0]), containers use
> layers and can be exported into tarballs with built-in
> de-duplication. This makes the idea of tossing those tarballs
> around much more attractive than doing something similar with
> package repositories.
[...]

Projects which utilize service VMs (e.g. Trove) were asking to do precisely the same things, and that had nothing to do with containers. The idea that you might build a VM image up from proposed source in one job and then fire several other jobs which used that proposed image well-predates similar requests from container-oriented projects.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From miguel at mlavalle.com  Wed May 16 15:25:45 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 16 May 2018 10:25:45 -0500
Subject: [openstack-dev] [neutron] Neutron drivers meeting canceled on May 25th
Message-ID: 

Dear Neutron Team,

Due to the OpenStack Summit in Vancouver next week, we will cancel the drivers meeting scheduled on May 25th at 1400UTC. We will resume our meetings on June 1st at 1400UTC.

Best regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From miguel at mlavalle.com  Wed May 16 15:27:49 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Wed, 16 May 2018 10:27:49 -0500
Subject: [openstack-dev] [neutron] Neutron L3 sub-team meeting canceled on May 24th
Message-ID: 

Dear Neutron Team,

Due to the OpenStack Summit in Vancouver next week, we will cancel the L3 sub-team meeting scheduled on May 24th at 1500UTC. We will resume our meetings on May 31st at 1500UTC.

Best regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From alee at redhat.com  Wed May 16 15:31:43 2018
From: alee at redhat.com (Ade Lee)
Date: Wed, 16 May 2018 11:31:43 -0400
Subject: [openstack-dev] [requirements][barbican][daisycloud][freezer][fuel][heat][pyghmi][rpm-packaging][solum][tatu][trove] pycrypto is dead and insecure, you should migrate
In-Reply-To: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
References: <20180513172206.bfaxmmp37vxkkwuc@gentoo.org>
Message-ID: <1526484703.28944.62.camel@redhat.com>

Thanks for the reminder. We replaced the pycrypto code in Barbican but forgot to remove the dependency in requirements.txt. A review has been added to do this.

https://review.openstack.org/568879

On Sun, 2018-05-13 at 12:22 -0500, Matthew Thode wrote:
> This is a reminder to the projects called out that they are using
> old, unmaintained and probably insecure libraries (it's been dead
> since 2014). Please migrate off to use the cryptography library.
> We'd like to drop pycrypto from requirements for rocky.
>
> See also, the bug, which has most of you cc'd already.
>
> https://bugs.launchpad.net/openstack-requirements/+bug/1749574
>
> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> | Repository      | Filename                                                        | Line | Text                           |
> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
> | barbican        | requirements.txt                                                | 25   | pycrypto>=2.6  # Public Domain |
> | daisycloud-core | code/daisy/requirements.txt                                     | 17   | pycrypto>=2.6  # Public Domain |
> | freezer         | requirements.txt                                                | 21   | pycrypto>=2.6  # Public Domain |
> | fuel-web        | nailgun/requirements.txt                                        | 24   | pycrypto>=2.6.1                |
> | heat-cfnclient  | requirements.txt                                                | 2    | PyCrypto>=2.1.0                |
> | pyghmi          | requirements.txt                                                | 1    | pycrypto>=2.6                  |
> | rpm-packaging   | requirements.txt                                                | 189  | pycrypto>=2.6  # Public Domain |
> | solum           | requirements.txt                                                | 24   | pycrypto>=2.6  # Public Domain |
> | tatu            | requirements.txt                                                | 7    | pycrypto>=2.6.1                |
> | tatu            | test-requirements.txt                                           | 7    | pycrypto>=2.6.1                |
> | trove           | integration/scripts/files/requirements/fedora-requirements.txt | 30   | pycrypto>=2.6  # Public Domain |
> | trove           | integration/scripts/files/requirements/ubuntu-requirements.txt | 29   | pycrypto>=2.6  # Public Domain |
> | trove           | requirements.txt                                                | 47   | pycrypto>=2.6  # Public Domain |
> +-----------------+-----------------------------------------------------------------+------+--------------------------------+
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mriedemos at gmail.com  Wed May 16 15:37:20 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 16 May 2018 10:37:20 -0500
Subject: [openstack-dev] [nova] Can we remove the monkey_patch_modules config option?
In-Reply-To: <52631e59-6b30-3994-f848-f487ed9af9d4@gmail.com>
References: <52631e59-6b30-3994-f848-f487ed9af9d4@gmail.com>
Message-ID: <0378ea70-8743-96e2-189d-93daabddc6d1@gmail.com>

On 8/31/2017 2:14 PM, Matt Riedemann wrote:
>
> The monkey_patch and monkey_patch_modules options were deprecated in
> Queens, so they wouldn't be removed until the Rocky release at the
> earliest. We don't backport changes which deprecate things.
>
> My recommendation to you is upstream your changes. If it's a new
> feature, propose a blueprint [1].
>
> Chances are someone else needs or wants the same thing, or is already
> doing something similar but without the monkey patch config options.
>
> [1] https://docs.openstack.org/nova/latest/contributor/blueprints.html

Following up on this, I have submitted a patch to remove the deprecated monkey_patch-related config options:

https://review.openstack.org/#/c/568880/

--
Thanks,

Matt

From pkovar at redhat.com  Wed May 16 15:39:14 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 16 May 2018 17:39:14 +0200
Subject: [openstack-dev] [docs] Automating documentation the tripleo way?
Message-ID: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> Hi all, In the past few years, we've seen several efforts aimed at automating procedural documentation, mostly centered around the OpenStack installation guide. This idea to automatically produce and verify installation steps or similar procedures was mentioned again at the last Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). It was brought to my attention that the tripleo team has been working on automating some of the tripleo deployment procedures, using a Bash script with included comment lines to supply some RST-formatted narrative, for example: https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 The Bash script can then be converted to RST, e.g.: https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ Source Code: https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs I really liked this approach and while I don't want to sound like selling other people's work, I'm wondering if there is still an interest among the broader OpenStack community in automating documentation like this? Thanks, pk From hongbin034 at gmail.com Wed May 16 15:45:16 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 16 May 2018 11:45:16 -0400 Subject: [openstack-dev] [Zun] Team meeting canceled on 2018-05-22 Message-ID: Hi all, Due to OpenStack Vancouver Summit, we canceled the weekly team meeting at 2018-05-22. Updated Meeting schedule can be found at https://wiki.openstack.org/wiki/Zun#Meetings . Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Wed May 16 16:24:45 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 16 May 2018 18:24:45 +0200 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation Message-ID: <20180516182445.3b9286418271e98ad9581474@redhat.com> Hi all, For OpenStack documentation contributors, we provide a basic style guide that describes most important guidelines for writing, user interface guidelines, and RST conventions: https://docs.openstack.org/doc-contrib-guide/writing-style.html https://docs.openstack.org/doc-contrib-guide/user-guidelines.html https://docs.openstack.org/doc-contrib-guide/rst-conv.html In these documents, we also refer to the printed IBM Style Guide for more information. While this is a standard resource used by many technical writers, we no longer have a group of dedicated writers working on our docs. Also, the IBM Style Guide is not available online for free, making it inaccessible to many in the community. I'd like to propose replacing the reference to the IBM Style Guide with a reference to the developerWorks editorial style guide (https://www.ibm.com/developerworks/library/styleguidelines/). This lightweight version comes from the same company and is based on the same guidelines, but most importantly, it is available for free. Any objections to this? A related change: https://review.openstack.org/#/c/562310/ I hope this change will make it much easier for our docs contributors to consult style guidelines. 
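As a toy illustration of the idea (my own sketch, not the actual collect-logs implementation linked above), turning such a commented script into RST can be as simple as treating '## ' comment lines as narrative and everything else as a code block:

    # Toy sketch of the comment-to-RST idea; not the real
    # tripleo-quickstart-extras tooling. Lines starting with '## '
    # become RST narrative, everything else becomes a code block.
    import sys

    def bash_to_rst(lines):
        rst, code = [], []
        def flush():
            if code:
                rst.append('.. code-block:: bash')
                rst.append('')
                rst.extend('   ' + c for c in code)
                rst.append('')
                del code[:]
        for raw in lines:
            line = raw.rstrip('\n')
            if line.startswith('## '):
                flush()
                rst.append(line[3:])
            elif line.strip():
                code.append(line)
        flush()
        return '\n'.join(rst)

    if __name__ == '__main__':
        print(bash_to_rst(open(sys.argv[1])))

The real role does considerably more (templating, log collection, Sphinx builds), but the core comment-to-narrative trick is about this small.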
Thanks, pk From sundar.nadathur at intel.com Wed May 16 17:01:44 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 16 May 2018 10:01:44 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas Message-ID: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> Hi,    The Cyborg quota spec [1] proposes to implement a quota (maximum usage) for accelerators on a per-project basis, to prevent one project (tenant) from over-using some resources and starving other tenants. There are separate resource classes for different accelerator types (GPUs, FPGAs, etc.), and so we can do quotas per RC. The current proposal [2] is to track the usage in Cyborg agent/driver. I am not sure that scheme will work, as I have indicated in the comments on [1]. Here is another possible way. * The operator configures the oslo.limit in keystone per-project per-resource-class (GPU, FPGA, ...). o Until this gets into Keystone, Cyborg may define its own quota table, as defined in [1]. * Cyborg implements a table to track per-project usage, as defined in [1]. * Cyborg provides a filter for the Nova scheduler, which checks whether the project making the request has exceeded its own quota. o If so, it removes all candidates, thus failing the request. o If not, it updates the per-project usage in its own DB. Since this is an out-of-tree filter, at least to start with, it should be ok to directly update the db without making REST API calls. IOW, the resource usage tracking and enforcement are done as part of the request scheduling, rather than done at the compute node. If there are better ways, or ways to avoid a filter, please LMK. [1] https://review.openstack.org/#/c/560285/ [2] https://review.openstack.org/#/c/564968/ Thanks. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 16 17:05:15 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 May 2018 17:05:15 +0000 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: <20180516182445.3b9286418271e98ad9581474@redhat.com> References: <20180516182445.3b9286418271e98ad9581474@redhat.com> Message-ID: <20180516170515.2gvyxqrnoacitndp@yuggoth.org> On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote: [...] > I'd like to propose replacing the reference to the IBM Style Guide > with a reference to the developerWorks editorial style guide > (https://www.ibm.com/developerworks/library/styleguidelines/). > This lightweight version comes from the same company and is based > on the same guidelines, but most importantly, it is available for > free. [...] I suppose replacing a style guide nobody can access with one everyone can (modulo legal concerns) is a step up. Still, are there no style guides published under an actual free/open license? If https://www.ibm.com/developerworks/community/terms/use/ is correct then even accidental creation of a derivative work might be prosecuted as copyright infringement. http://www.writethedocs.org/guide/writing/style-guides/#selecting-a-good-style-guide-for-you mentions some more aligned with our community's open ideals, such as the 18F Content Guide (public domain), SUSE Documentation Style Guide (GFDL), GNOME Documentation Style Guide (GFDL), and the Writing Style Guide and Preferred Usage for DOD Issuances (public domain). Granted adopting one of those might lead to a need to overhaul some aspects of style in existing documents, so I can understand it's not a choice to be made lightly. 
Still, we should always consider embracing open process, and that includes using guidelines which we can freely derive and republish as needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Wed May 16 17:11:24 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 16 May 2018 12:11:24 -0500 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> Message-ID: On 05/16/2018 10:39 AM, Petr Kovar wrote: > Hi all, > > In the past few years, we've seen several efforts aimed at automating > procedural documentation, mostly centered around the OpenStack > installation guide. This idea to automatically produce and verify > installation steps or similar procedures was mentioned again at the last > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). > > It was brought to my attention that the tripleo team has been working on > automating some of the tripleo deployment procedures, using a Bash script > with included comment lines to supply some RST-formatted narrative, for > example: > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > The Bash script can then be converted to RST, e.g.: > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > Source Code: > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > I really liked this approach and while I don't want to sound like selling > other people's work, I'm wondering if there is still an interest among the > broader OpenStack community in automating documentation like this? I think it's worth noting that TripleO doesn't even use the generated docs. The main reason is that we tried this back in the tripleo-incubator days and it was not the silver bullet for good docs that it appears to be on the surface. As the deployment scripts grow features and more complicated logic it becomes increasingly difficult to write inline documentation that is readable. In the end, the tripleo-incubator docs had a number of large bash snippets that referred to internal variables and such. It wasn't actually good documentation. When we moved to instack-undercloud to drive TripleO deployments we also moved to a more traditional hand-written docs repo. Both options have their benefits and drawbacks, but neither absolves the development team of their responsibility to curate the docs. IME the inline method actually makes it harder to do this because it tightly couples your code and docs in a very inflexible way. 
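For anyone who hasn't seen the pattern, the whole idea boils down to something like this minimal sketch (hypothetical and simplified, not the actual collect-logs implementation): comment lines carrying a special marker become RST narrative, and everything else becomes a literal block.

import sys

def bash_to_rst(lines):
    # Hypothetical sketch of the bash-comments-to-RST idea; the '### '
    # marker is an assumption, not what tripleo-quickstart-extras uses.
    out = []
    in_code = False
    for line in lines:
        stripped = line.rstrip('\n')
        if stripped.startswith('### '):
            if in_code:
                out.append('')  # close the open literal block
                in_code = False
            out.append(stripped[4:])  # narrative RST text
        elif stripped:
            if not in_code:
                out.extend(['', '.. code-block:: bash', ''])
                in_code = True
            out.append('   ' + stripped)  # shell command, indented into the block
    return '\n'.join(out)

if __name__ == '__main__':
    print(bash_to_rst(sys.stdin.readlines()))

The conversion itself is the easy part; the hard part, as described above, is keeping that narrative readable once the script grows real logic around it.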
/2 cents -Ben From jaypipes at gmail.com Wed May 16 17:24:24 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 16 May 2018 13:24:24 -0400 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> Message-ID: <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote: > Hi, >    The Cyborg quota spec [1] proposes to implement a quota (maximum > usage) for accelerators on a per-project basis, to prevent one project > (tenant) from over-using some resources and starving other tenants. > There are separate resource classes for different accelerator types > (GPUs, FPGAs, etc.), and so we can do quotas per RC. > > The current proposal [2] is to track the usage in Cyborg agent/driver. I > am not sure that scheme will work, as I have indicated in the comments > on [1]. Here is another possible way. > > * The operator configures the oslo.limit in keystone per-project > per-resource-class (GPU, FPGA, ...). > o Until this gets into Keystone, Cyborg may define its own quota > table, as defined in [1]. > * Cyborg implements a table to track per-project usage, as defined in [1]. Placement already stores usage information for all allocations of resources. There is already even a /usages API endpoint that you can specify a project and/or user: https://developer.openstack.org/api-ref/placement/#list-usages I see no reason not to use it. There is already actually a spec to use placement for quota usage checks in Nova here: https://review.openstack.org/#/c/509042/ Probably best to have a look at that and see if it will end up meeting your needs. > * Cyborg provides a filter for the Nova scheduler, which checks > whether the project making the request has exceeded its own quota. Quota checks happen before Nova's scheduler gets involved, so having a scheduler filter handle quota usage checking is pretty much a non-starter. I'll have a look at the patches you've proposed and comment there. Best, -jay From fungi at yuggoth.org Wed May 16 17:42:09 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 May 2018 17:42:09 +0000 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services (was: Encrypted swift volumes by default in the undercloud) In-Reply-To: References: Message-ID: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> On 2018-05-16 13:16:09 +0200 (+0200), Dmitry Tantsur wrote: > On 05/15/2018 09:19 PM, Juan Antonio Osorio wrote: > > As part of the work from the Security Squad, we added the > > ability for the containerized undercloud to encrypt the > > overcloud plans. This is done by enabling Swift's encrypted > > volumes, which require barbican. Right now it's turned off, but > > I would like to enable it by default [1]. What do you folks > > think? > > I like the idea, but I'm a bit skeptical about adding a new > service to already quite bloated undercloud. Why is barbican a > hard requirement here? [...] This exchange has given me pause to reflect on discussions we were having one year ago (leading up to and at the Forum in Boston). 
https://www.openstack.org/summit/boston-2017/summit-schedule/events/18736/key-management-developeroperatorcommunity-coordination https://etherpad.openstack.org/p/BOS-forum-key-management As a community, we're likely to continue to make imbalanced trade-offs against relevant security features if we don't move forward and declare that some sort of standardized key storage solution is a fundamental component on which OpenStack services can rely. Being able to just assume that you can encrypt volumes in Swift, even as a means to further secure a TripleO undercloud, would be a step in the right direction for security-minded deployments. Unfortunately, I'm unable to find any follow-up summary on the mailing list from the aforementioned session, but recollection from those who were present (I had a schedule conflict at that time) was that a Castellan-compatible key store would at least be a candidate for inclusion in our base services list: https://governance.openstack.org/tc/reference/base-services.html So a year has passed... where are we with this? Is it still something we want to do (I think so, do others)? What are the next steps so this doesn't come up again and again? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Wed May 16 18:40:47 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 16 May 2018 14:40:47 -0400 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> Message-ID: <1526495787-sup-1958@lrrr.local> Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: > Hi all, > > In the past few years, we've seen several efforts aimed at automating > procedural documentation, mostly centered around the OpenStack > installation guide. This idea to automatically produce and verify > installation steps or similar procedures was mentioned again at the last > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). > > It was brought to my attention that the tripleo team has been working on > automating some of the tripleo deployment procedures, using a Bash script > with included comment lines to supply some RST-formatted narrative, for > example: > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > The Bash script can then be converted to RST, e.g.: > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > Source Code: > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > I really liked this approach and while I don't want to sound like selling > other people's work, I'm wondering if there is still an interest among the > broader OpenStack community in automating documentation like this? > > Thanks, > pk > Weren't the folks doing the training-labs or training-guides taking a similar approach? IIRC, they ended up implementing what amounted to their own installer for OpenStack, and then ended up with all of the associated upgrade and testing burden. I like the idea of trying to use some automation from this, but I wonder if we'd be better off extracting data from other tools, rather than building a new one. 
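To make "extracting data" a bit more concrete, one rough sketch of that direction would be to read the task list out of an existing deployment tool, say an Ansible role, and render it as RST. This is hypothetical code; it assumes a role whose tasks are plain YAML with 'name' and 'shell' keys, which is not guaranteed for every role:

import yaml  # PyYAML

def tasks_to_rst(path):
    # Render each task's name as narrative text and its shell
    # command as an RST literal block.
    with open(path) as f:
        tasks = yaml.safe_load(f) or []
    chunks = []
    for task in tasks:
        if task.get('name'):
            chunks.append(task['name'])
        if 'shell' in task:
            chunks.extend(['', '::', ''])
            for line in str(task['shell']).splitlines():
                chunks.append('   ' + line)
            chunks.append('')
    return '\n'.join(chunks)

# The path here is illustrative only.
print(tasks_to_rst('roles/overcloud-prep-images/tasks/main.yml'))

That would keep the deployment tool as the single source of truth instead of introducing a second format for teams to maintain.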
Doug From whayutin at redhat.com Wed May 16 18:51:25 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 16 May 2018 12:51:25 -0600 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: <1526495787-sup-1958@lrrr.local> References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> <1526495787-sup-1958@lrrr.local> Message-ID: On Wed, May 16, 2018 at 2:41 PM Doug Hellmann wrote: > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: > > Hi all, > > > > In the past few years, we've seen several efforts aimed at automating > > procedural documentation, mostly centered around the OpenStack > > installation guide. This idea to automatically produce and verify > > installation steps or similar procedures was mentioned again at the last > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). > > > > It was brought to my attention that the tripleo team has been working on > > automating some of the tripleo deployment procedures, using a Bash script > > with included comment lines to supply some RST-formatted narrative, for > > example: > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > > > The Bash script can then be converted to RST, e.g.: > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > > > Source Code: > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > > > I really liked this approach and while I don't want to sound like selling > > other people's work, I'm wondering if there is still an interest among > the > > broader OpenStack community in automating documentation like this? > > > > Thanks, > > pk > > > > Weren't the folks doing the training-labs or training-guides taking a > similar approach? IIRC, they ended up implementing what amounted to > their own installer for OpenStack, and then ended up with all of the > associated upgrade and testing burden. > > I like the idea of trying to use some automation from this, but I wonder > if we'd be better off extracting data from other tools, rather than > building a new one. > > Doug > So there really isn't anything new to create, the work is done and executed on every tripleo change that runs in rdo-cloud. Instead of dismissing the idea upfront I'm more inclined to set an achievable small step to see how well it works. My thought would be to focus on the upcoming all-in-one installer and the automated doc generated with that workflow. I'd like to target publishing the all-in-one tripleo installer doc to [1] for Stein and of course a section of tripleo.org. What do you think? [1] https://docs.openstack.org/queens/deploy/ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From miguel at mlavalle.com Wed May 16 18:56:25 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 16 May 2018 13:56:25 -0500 Subject: [openstack-dev] [neutron] Team dinner in Vancouver, Tuesday 22nd 7pm Message-ID: Dear Neutron Team, Our team dinner will take place at Al Porto Ristorante, which is located half a mile from the Summit venue: 321 Water St, Vancouver, BC V6B 1B8 Phone: +1 604-683-8376 http://www.alporto.ca https://goo.gl/8q5Qy2 Our reservation is at 7pm, under my name: "Miguel Lavalle" Looking forward to seeing you all there! Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed May 16 19:00:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 16 May 2018 15:00:35 -0400 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services (was: Encrypted swift volumes by default in the undercloud) In-Reply-To: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> Message-ID: <1526497171-sup-2640@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-05-16 17:42:09 +0000: > On 2018-05-16 13:16:09 +0200 (+0200), Dmitry Tantsur wrote: > > On 05/15/2018 09:19 PM, Juan Antonio Osorio wrote: > > > As part of the work from the Security Squad, we added the > > > ability for the containerized undercloud to encrypt the > > > overcloud plans. This is done by enabling Swift's encrypted > > > volumes, which require barbican. Right now it's turned off, but > > > I would like to enable it by default [1]. What do you folks > > > think? > > > > I like the idea, but I'm a bit skeptical about adding a new > > service to already quite bloated undercloud. Why is barbican a > > hard requirement here? > [...] > > This exchange has given me pause to reflect on discussions we were > having one year ago (leading up to and at the Forum in Boston). > > https://www.openstack.org/summit/boston-2017/summit-schedule/events/18736/key-management-developeroperatorcommunity-coordination > > https://etherpad.openstack.org/p/BOS-forum-key-management > > As a community, we're likely to continue to make imbalanced > trade-offs against relevant security features if we don't move > forward and declare that some sort of standardized key storage > solution is a fundamental component on which OpenStack services can > rely. Being able to just assume that you can encrypt volumes in > Swift, even as a means to further secure a TripleO undercloud, would > be a step in the right direction for security-minded deployments. > > Unfortunately, I'm unable to find any follow-up summary on the > mailing list from the aforementioned session, but recollection from > those who were present (I had a schedule conflict at that time) was > that a Castellan-compatible key store would at least be a candidate > for inclusion in our base services list: > > https://governance.openstack.org/tc/reference/base-services.html > > So a year has passed... where are we with this? Is it still > something we want to do (I think so, do others)? What are the next > steps so this doesn't come up again and again? It seems like we should add "some form of key manager" to the base services list, shouldn't we? And then we would encourage projects to use castellan to talk to it. Unless we want to try to pick a single key manager, which feels like a much longer sort of conversation.
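The consuming code barely changes either way, since castellan hides the backend choice behind one small interface. A minimal sketch, assuming the usual oslo.config setup (the exact [key_manager] options depend on which backend an operator configures):

from castellan.common import utils as castellan_utils
from castellan import key_manager
from oslo_config import cfg

CONF = cfg.CONF

# castellan selects the backend (barbican, vault, ...) from
# configuration, so the calling project stays backend-agnostic.
manager = key_manager.API(CONF)
context = castellan_utils.credential_factory(conf=CONF)

# Create a 256-bit AES key in whatever key store is configured,
# then fetch it back by id.
key_id = manager.create_key(context, algorithm='AES', length=256)
key = manager.get(context, key_id)

So the cost of declaring "a Castellan-compatible key store" a base service falls mostly on the deployment side, not on the projects consuming it.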
Doug From doug at doughellmann.com Wed May 16 19:04:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 16 May 2018 15:04:16 -0400 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> <1526495787-sup-1958@lrrr.local> Message-ID: <1526497268-sup-3619@lrrr.local> Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600: > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann wrote: > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: > > > Hi all, > > > > > > In the past few years, we've seen several efforts aimed at automating > > > procedural documentation, mostly centered around the OpenStack > > > installation guide. This idea to automatically produce and verify > > > installation steps or similar procedures was mentioned again at the last > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). > > > > > > It was brought to my attention that the tripleo team has been working on > > > automating some of the tripleo deployment procedures, using a Bash script > > > with included comment lines to supply some RST-formatted narrative, for > > > example: > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > > > > > The Bash script can then be converted to RST, e.g.: > > > > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > > > > > Source Code: > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > > > > > I really liked this approach and while I don't want to sound like selling > > > other people's work, I'm wondering if there is still an interest among > > the > > > broader OpenStack community in automating documentation like this? > > > > > > Thanks, > > > pk > > > > > > > Weren't the folks doing the training-labs or training-guides taking a > > similar approach? IIRC, they ended up implementing what amounted to > > their own installer for OpenStack, and then ended up with all of the > > associated upgrade and testing burden. > > > > I like the idea of trying to use some automation from this, but I wonder > > if we'd be better off extracting data from other tools, rather than > > building a new one. > > > > Doug > > > > So there really isn't anything new to create, the work is done and executed > on every tripleo change that runs in rdo-cloud. It wasn't clear what Petr was hoping to get. Deploying with TripleO is only one way to deploy, so we wouldn't be able to replace the current installation guides with the results of this work. It sounds like that's not the goal, though. > > Instead of dismissing the idea upfront I'm more inclined to set an > achievable small step to see how well it works. My thought would be to > focus on the upcoming all-in-one installer and the automated doc generated > with that workflow. I'd like to target publishing the all-in-one tripleo > installer doc to [1] for Stein and of course a section of tripleo.org. As an official project, why is TripleO still publishing docs to its own site? That's not something we generally encourage. That said, publishing a new deployment guide based on this technique makes sense in general. What about Ben's comments elsewhere in the thread? Doug > > What do you think? 
> > [1] https://docs.openstack.org/queens/deploy/ > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From mriedemos at gmail.com Wed May 16 19:25:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 16 May 2018 14:25:33 -0500 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> Message-ID: On 5/16/2018 12:24 PM, Jay Pipes wrote: > Quota checks happen before Nova's scheduler gets involved, so having a > scheduler filter handle quota usage checking is pretty much a non-starter. For server resources yeah, for things like instances quota, CPU and RAM, etc. Nova does an up-front quota check for ports when creating servers (from nova-api) based on the number of servers requested vs whether or not pre-created ports were provided in the server create request. Nova does *not* do the same up-front quota check for volumes when booting from volume and nova creates the volume in the nova-compute service, which could lead to an OverQuota error from Cinder and then we abort the build [1]. So what nova does probably depends on what the API interaction is when creating a server. As far as I know, nova isn't getting passed explicit cyborg-controlled resources like neutron ports or cinder volumes, right? I thought the interaction was that the flavor would have required resource amounts for cyborg resources which the nova-scheduler will allocate for the instance, and then some magic happens in nova-compute to hook those things up (like os-brick/os-vif type stuff). Is nova going to be creating resources in cyborg like it creates ports in neutron or volumes in cinder? If not, then I don't see why nova would have anything to do with quota checks on these types of resources. Also, you should totally start with modeling your limits in keystone since that's the direction all projects should be going. I believe the usages are tracked per-project but the limits are meant to be unified in keystone for all projects. [1] https://bugs.launchpad.net/nova/+bug/1742102 -- Thanks, Matt From whayutin at redhat.com Wed May 16 19:26:46 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 16 May 2018 13:26:46 -0600 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: <1526497268-sup-3619@lrrr.local> References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> <1526495787-sup-1958@lrrr.local> <1526497268-sup-3619@lrrr.local> Message-ID: On Wed, May 16, 2018 at 3:05 PM Doug Hellmann wrote: > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600: > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann > wrote: > > > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: > > > > Hi all, > > > > > > > > In the past few years, we've seen several efforts aimed at automating > > > > procedural documentation, mostly centered around the OpenStack > > > > installation guide. This idea to automatically produce and verify > > > > installation steps or similar procedures was mentioned again at the > last > > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). 
> > > > It was brought to my attention that the tripleo team has been > working on > > > > automating some of the tripleo deployment procedures, using a Bash > script > > > > with included comment lines to supply some RST-formatted narrative, > for > > > > example: > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > > > > > > > The Bash script can then be converted to RST, e.g.: > > > > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > > > > > > > Source Code: > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > > > > > > > I really liked this approach and while I don't want to sound like > selling > > > > other people's work, I'm wondering if there is still an interest > among > > > the > > > > broader OpenStack community in automating documentation like this? > > > > > > > > Thanks, > > > > pk > > > > > > > > > > Weren't the folks doing the training-labs or training-guides taking a > > > similar approach? IIRC, they ended up implementing what amounted to > > > their own installer for OpenStack, and then ended up with all of the > > > associated upgrade and testing burden. > > > > > > I like the idea of trying to use some automation from this, but I > wonder > > > if we'd be better off extracting data from other tools, rather than > > > building a new one. > > > > > > Doug > > > > > > > So there really isn't anything new to create, the work is done and > executed > > on every tripleo change that runs in rdo-cloud. > > It wasn't clear what Petr was hoping to get. Deploying with TripleO is > only one way to deploy, so we wouldn't be able to replace the current > installation guides with the results of this work. It sounds like that's > not the goal, though. > > > > > Instead of dismissing the idea upfront I'm more inclined to set an > > achievable small step to see how well it works. My thought would be to > > focus on the upcoming all-in-one installer and the automated doc > generated > > with that workflow. I'd like to target publishing the all-in-one tripleo > > installer doc to [1] for Stein and of course a section of tripleo.org. > > As an official project, why is TripleO still publishing docs to its own > site? That's not something we generally encourage. > > That said, publishing a new deployment guide based on this technique > makes sense in general. What about Ben's comments elsewhere in the > thread? > I think Ben is referring to an older implementation and a slightly different design, but he still has some points that we would want to be mindful of. To be honest, I think this is a worthy effort to take another pass at regardless, as we've found a good combination of interested folks, and sometimes the right people make all the difference. My personal opinion is that I'm not expecting the automated doc generation to be ready to upload to a doc server after each run. I do expect it to do 95% of the work, and to help keep the doc up to date with what is executed in the latest releases of TripleO. Also, note that the doc used was a mixture of static and generated documentation, which I think worked out quite well, since it did not solely rely on what is executed in CI. So again, my thought is to create a small achievable goal and see where the collaboration takes us. Thanks > > Doug > > > > > What do you think?
> > > > [1] https://docs.openstack.org/queens/deploy/ > > > > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed May 16 19:34:26 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 16 May 2018 13:34:26 -0600 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: <1526497268-sup-3619@lrrr.local> References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> <1526495787-sup-1958@lrrr.local> <1526497268-sup-3619@lrrr.local> Message-ID: On Wed, May 16, 2018 at 1:04 PM, Doug Hellmann wrote: > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600: >> On Wed, May 16, 2018 at 2:41 PM Doug Hellmann wrote: >> >> > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: >> > > Hi all, >> > > >> > > In the past few years, we've seen several efforts aimed at automating >> > > procedural documentation, mostly centered around the OpenStack >> > > installation guide. This idea to automatically produce and verify >> > > installation steps or similar procedures was mentioned again at the last >> > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). >> > > >> > > It was brought to my attention that the tripleo team has been working on >> > > automating some of the tripleo deployment procedures, using a Bash script >> > > with included comment lines to supply some RST-formatted narrative, for >> > > example: >> > > >> > > >> > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 >> > > >> > > The Bash script can then be converted to RST, e.g.: >> > > >> > > >> > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ >> > > >> > > Source Code: >> > > >> > > >> > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs >> > > >> > > I really liked this approach and while I don't want to sound like selling >> > > other people's work, I'm wondering if there is still an interest among >> > the >> > > broader OpenStack community in automating documentation like this? >> > > >> > > Thanks, >> > > pk >> > > >> > >> > Weren't the folks doing the training-labs or training-guides taking a >> > similar approach? IIRC, they ended up implementing what amounted to >> > their own installer for OpenStack, and then ended up with all of the >> > associated upgrade and testing burden. >> > >> > I like the idea of trying to use some automation from this, but I wonder >> > if we'd be better off extracting data from other tools, rather than >> > building a new one. >> > >> > Doug >> > >> >> So there really isn't anything new to create, the work is done and executed >> on every tripleo change that runs in rdo-cloud. > > It wasn't clear what Petr was hoping to get. 
Deploying with TripleO is > only one way to deploy, so we wouldn't be able to replace the current > installation guides with the results of this work. It sounds like that's > not the goal, though. > >> >> Instead of dismissing the idea upfront I'm more inclined to set an >> achievable small step to see how well it works. My thought would be to >> focus on the upcoming all-in-one installer and the automated doc generated >> with that workflow. I'd like to target publishing the all-in-one tripleo >> installer doc to [1] for Stein and of course a section of tripleo.org. > > As an official project, why is TripleO still publishing docs to its own > site? That's not something we generally encourage. > We publish on docs.o.o. It's the same docs, just a different theme. https://docs.openstack.org/tripleo-docs/latest/install/index.html https://docs.openstack.org/tripleo-docs/latest/contributor/index.html I guess we could just change tripleo.org to redirect to the docs.o.o stuff; I'm not sure of the history behind this. I would say that you can't really find them from the main docs.o.o page unless you search, so maybe that's part of it? I'm assuming this is likely because we didn't version our docs in the past, so they don't show up. Is there a better way to ensure visibility of docs? Thanks, -Alex > That said, publishing a new deployment guide based on this technique > makes sense in general. What about Ben's comments elsewhere in the > thread? > > Doug > >> >> What do you think? >> >> [1] https://docs.openstack.org/queens/deploy/ >> >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From hrybacki at redhat.com Wed May 16 19:46:55 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Wed, 16 May 2018 15:46:55 -0400 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> <1526495787-sup-1958@lrrr.local> Message-ID: On Wed, May 16, 2018 at 2:51 PM, Wesley Hayutin wrote: > > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann wrote: >> >> Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200: >> > Hi all, >> > >> > In the past few years, we've seen several efforts aimed at automating >> > procedural documentation, mostly centered around the OpenStack >> > installation guide. This idea to automatically produce and verify >> > installation steps or similar procedures was mentioned again at the last >> > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
>> > >> > It was brought to my attention that the tripleo team has been working on >> > automating some of the tripleo deployment procedures, using a Bash >> > script >> > with included comment lines to supply some RST-formatted narrative, for >> > example: >> > >> > >> > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 >> > >> > The Bash script can then be converted to RST, e.g.: >> > >> > >> > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ >> > >> > Source Code: >> > >> > >> > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs >> > >> > I really liked this approach and while I don't want to sound like >> > selling >> > other people's work, I'm wondering if there is still an interest among >> > the >> > broader OpenStack community in automating documentation like this? >> > >> > Thanks, >> > pk >> > >> >> Weren't the folks doing the training-labs or training-guides taking a >> similar approach? IIRC, they ended up implementing what amounted to >> their own installer for OpenStack, and then ended up with all of the >> associated upgrade and testing burden. >> >> I like the idea of trying to use some automation from this, but I wonder >> if we'd be better off extracting data from other tools, rather than >> building a new one. >> >> Doug > > > So there really isn't anything new to create, the work is done and executed > on every tripleo change that runs in rdo-cloud. > > Instead of dismissing the idea upfront I'm more inclined to set an > achievable small step to see how well it works. My thought would be to > focus on the upcoming all-in-one installer and the automated doc generated > with that workflow. I'd like to target publishing the all-in-one tripleo > installer doc to [1] for Stein and of course a section of tripleo.org. > > What do you think? > Interesting idea -- discussing this a bit at Summit (for those who will be in attendance) seems like a good place to start. > [1] https://docs.openstack.org/queens/deploy/ > > >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aschultz at redhat.com Wed May 16 20:07:46 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 16 May 2018 14:07:46 -0600 Subject: [openstack-dev] [tripleo] Cancel IRC meeting for May 22, 2018 Message-ID: Since the summit is coming up, there will likely be very low attendance. We'll carry any open items until the following week. Thanks, -Alex From emilien at redhat.com Wed May 16 20:37:36 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 16 May 2018 13:37:36 -0700 Subject: [openstack-dev] [tripleo] Cancel IRC meeting for May 22, 2018 In-Reply-To: References: Message-ID: On Wed, May 16, 2018 at 1:07 PM, Alex Schultz wrote: > Since the summit is coming up, there will likely be very low > attendance. We'll carry any open items until the following week. 
No Weekly Owl as well, but be patient for the next edition, a special Summit one. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed May 16 20:45:14 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 16 May 2018 13:45:14 -0700 Subject: [openstack-dev] [octavia] Weekly IRC meeting cancelled May 23rd Message-ID: Some of the team will be attending the OpenStack summit in Vancouver, so I am cancelling the weekly IRC meeting for the 23rd. We will resume our normal schedule on the 30th. Michael From fungi at yuggoth.org Wed May 16 20:54:50 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 16 May 2018 20:54:50 +0000 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services (was: Encrypted swift volumes by default in the undercloud) In-Reply-To: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> Message-ID: <20180516205450.5we5vemmyiy2tdb3@yuggoth.org> On 2018-05-16 17:42:09 +0000 (+0000), Jeremy Stanley wrote: [...] > Unfortunately, I'm unable to find any follow-up summary on the > mailing list from the aforementioned session, but recollection from > those who were present (I had a schedule conflict at that time) was > that a Castellan-compatible key store would at least be a candidate > for inclusion in our base services list [...] As Jim Rollenhagen pointed out in #openstack-tc, I was probably thinking of the earlier Pike PTG session the Architecture WG held, summarized at: http://lists.openstack.org/pipermail/openstack-dev/2017-February/113016.html Ensuing discussion yielded that there was no good reason to rename a library even if the Oslo team was going to officially adopt it (which for castellan they subsequently did in March 2017). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From prometheanfire at gentoo.org Wed May 16 20:59:47 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 16 May 2018 15:59:47 -0500 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 Message-ID: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> Sphinx has breaking changes (yet again) and we need to figure out how to deal with it. I think the fix will be simple for affected projects, but we should probably move forward on this. The error people are getting seems to be 'Field list ends without a blank line; unexpected unindent.' I'd like to keep on 1.7.4 and have the affected projects fix the error so we can move on, but the revert has been proposed (and approved to get gate unbroken for them). https://review.openstack.org/568248 Any advice from the community is welcome. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Wed May 16 21:07:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 16 May 2018 17:07:09 -0400 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> Message-ID: <1526504809-sup-2834@lrrr.local> Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500: > Sphinx has breaking changes (yet again) and we need to figure out how to > deal with it. I think the fix will be simple for affected projects, but > we should probably move forward on this. The error people are getting > seems to be 'Field list ends without a blank line; unexpected unindent.' > > I'd like to keep on 1.7.4 and have the affected projects fix the error > so we can move on, but the revert has been proposed (and approved to get > gate unbroken for them). https://review.openstack.org/568248 Any > advice from the community is welcome. > Is it sphinx, or docutils? Do you have an example of the error? From prometheanfire at gentoo.org Wed May 16 21:14:36 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Wed, 16 May 2018 16:14:36 -0500 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <1526504809-sup-2834@lrrr.local> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> Message-ID: <20180516211436.coyp2zli22uoosg7@gentoo.org> On 18-05-16 17:07:09, Doug Hellmann wrote: > Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500: > > Sphinx has breaking changes (yet again) and we need to figure out how to > > deal with it. I think the fix will be simple for affected projects, but > > we should probably move forward on this. The error people are getting > > seems to be 'Field list ends without a blank line; unexpected unindent.' > > > > I'd like to keep on 1.7.4 and have the affected projects fix the error > > so we can move on, but the revert has been proposed (and approved to get > > gate unbroken for them). https://review.openstack.org/568248 Any > > advice from the community is welcome. > > > > Is it sphinx, or docutils? > > Do you have an example of the error? > From https://bugs.launchpad.net/networking-midonet/+bug/1771092 2018-05-13 14:22:06.176410 | ubuntu-xenial | Warning, treated as error: 2018-05-13 14:22:06.176967 | ubuntu-xenial | /home/zuul/src/git.openstack.org/openstack/networking-midonet/midonet/neutron/db/l3_db_midonet.py:docstring of midonet.neutron.db.l3_db_midonet.MidonetL3DBMixin.get_router_for_floatingip:8:Field list ends without a blank line; unexpected unindent. -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From anlin.kong at gmail.com Wed May 16 21:41:26 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Thu, 17 May 2018 09:41:26 +1200 Subject: [openstack-dev] [ALL][PTLs] Community Goals for Rocky: Toggle the debug option at runtime In-Reply-To: References: <20180316213441.ap4hztvrmn4qkpey@yuggoth.org> <12B971D7-83C6-43AE-9CC3-C63296E9385D@doughellmann.com> <27a719f4-ce6b-2a19-b137-dc3dc153f0b0@gmail.com> <1526325202-sup-17@lrrr.local> <557ed9ca-5e68-85a1-858e-ca81797e63bd@gmail.com> <1526337894-sup-1085@lrrr.local> <1ca15064-a93e-6080-6b5b-bf70890575ca@gmail.com> <1526395462-sup-8795@lrrr.local> <1526478761-sup-7228@lrrr.local> Message-ID: Thanks for your reply, @Doug and @Jim Cheers, Lingxian Kong On Thu, May 17, 2018 at 2:23 AM Jim Rollenhagen wrote: > On Wed, May 16, 2018 at 9:55 AM, Doug Hellmann > wrote: > >> Excerpts from Lingxian Kong's message of 2018-05-16 11:12:01 +1200: >> > Hi, >> > >> > Maybe I missed the original discussion, I found the 'mutable' >> configuration >> > implementation relies on oslo.service, but is there any guide for the >> > projects using cotyledon instead? >> >> oslo.service implements the signal handler natively, but the feature >> does not rely on oslo.service. The method in oslo.config that does the >> work makes no assumptions about what triggers it. We did this on purpose >> to support projects that do not use oslo.service. >> >> I don't know enough about cotyledon to tell you how to do it, but you >> need to set up a signal handler so that SIGHUP invokes the >> mutate_config_files() method of the ConfigOpts instance being used by >> the application. >> > > This was asked in another thread, see my reply :) > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128797.html > > // jim > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Wed May 16 21:44:55 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 16 May 2018 17:44:55 -0400 Subject: [openstack-dev] [Zun] Relocate jobs from openstack/zun to openstack/zun-tempest-plugin Message-ID: Hi all, I have a series of patches for moving gate jobs from openstack/zun to openstack/zun-tempest-plugin: https://review.openstack.org/#/q/topic:relocate-zun-jobs+(status:open+OR+status:merged) Moving those patches forward will incur a period of time during which our gate has no tempest test coverage. Therefore, I will have to fast-approve this series of patches to minimize the transition period, so I am sending this email to collect feedback before performing the fast-approval. The goal of those patches is to move the job definitions and playbooks from openstack/zun to openstack/zun-tempest-plugin. The advantages of this change are as follows: * It makes job definitions closer to the tempest test cases, which is better for the development and code review workflow. For example, sometimes we can avoid splitting a patch across two repos in order to add a simple tempest test case. * openstack/zun is branched and openstack/zun-tempest-plugin is branchless. Zuul job definitions seem to fit better into a branchless context.
* It saves us the overhead of backporting job definitions to stable branches. Sometimes, missing a backport might lead to gate breakage and block the development workflow. All, what do you think? If there is no concern, I will fast-approve the series of patches in a few days. Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed May 16 23:02:06 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 16 May 2018 16:02:06 -0700 Subject: [openstack-dev] [Zun] Relocate jobs from openstack/zun to openstack/zun-tempest-plugin In-Reply-To: (Hongbin Lu's message of "Wed, 16 May 2018 17:44:55 -0400") References: Message-ID: <87sh6ri4r5.fsf@meyer.lemoncheese.net> Hongbin Lu writes: > The goal of those patches is to move the job definitions and playbooks from > openstack/zun to openstack/zun-tempest-plugin. The advantages of this > change are as follows: > > * It makes job definitions closer to the tempest test cases, which is better > for development and the code review workflow. For example, sometimes we can > avoid splitting a patch across two repos in order to add a simple tempest test > case. > * openstack/zun is branched and openstack/zun-tempest-plugin is branchless. > Zuul job definitions seem to fit better into a branchless context. > * It saves us the overhead of backporting job definitions to stable branches. > Sometimes, missing a backport might lead to gate breakage and block the > development workflow. Just a minor clarification: it's not always the case that branchless is better. Jobs which operate on repos that are branched are likely to be easier to work with in the long run, as whatever configuration is specific to the branch appears on that branch, instead of somewhere else. Further, there shouldn't be a need to backport changes once the initial jobs are set up. In the future, when you branch master to stable/foo, you'll automatically get a copy of the job that's appropriate for that point in time, and you only need to update it if you're already updating the software on that branch. Older versions of jobs on stable branches can continue to use their old configuration. For jobs which should perform the same function on all branches, it is easier to have those defined in branchless repos. But in either case, you can accomplish the same thing without moving jobs. In a branched repo, you can add a "branches: .*" matcher, and in a branchless repo, you can add multiple variants for each branch. The new v3-native devstack jobs are branched, and are defined in the devstack repo. They define how to set up devstack for each branch. But the tempest jobs (which build on top of the devstack jobs), are not branched (just like tempest), since they are designed to run the same way on all branches. I don't know enough about the situation to recommend one way or the other for Zun. But I do want to emphasize that the best answer depends on the circumstances. -Jim From jungleboyj at gmail.com Thu May 17 00:08:12 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Wed, 16 May 2018 19:08:12 -0500 Subject: [openstack-dev] [cinder] Project Team Dinner at Vancouver Summit Message-ID: <61eac647-61e8-a96e-3760-0af698f2f86a@gmail.com> Team, During today's team meeting, we discussed having a team dinner like we have done in the past. It sounded like most people would be available Tuesday evening, so that is the evening I am planning for.
If you are able to attend, please add your name to the etherpad [1] by Sunday 5/20 so that I can make reservations. Thank you! Jay [1] https://etherpad.openstack.org/p/YVR18-cinder-dinner From gmann at ghanshyammann.com Thu May 17 00:07:33 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 17 May 2018 09:07:33 +0900 Subject: [openstack-dev] [Openstack-operators] [Forum] [QA] Etherpad for Users / Operators adoption of QA tools / plugins sessions Message-ID: Hi All, I've created the below etherpad for the QA feedback session [1], which is scheduled for Monday, May 21, 1:30 pm. It contains the basic agenda and the feedback items we are planning to discuss in this session. If you have any additional items to add or modify, please feel free to do that. Hope to see you all next week, and have a safe trip. .. 1 https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21742/users-operators-adoption-of-qa-tools-plugins -gmann From dbingham at godaddy.com Thu May 17 00:18:08 2018 From: dbingham at godaddy.com (David G. Bingham) Date: Thu, 17 May 2018 00:18:08 +0000 Subject: [openstack-dev] [nova] Apply_cells to allow automation of nova-manage cells_v2 commands Message-ID: <991A2C0D-D13B-45E1-BDFC-C5A7FA931CF8@godaddy.com> Yo Nova Gurus :-), We here at GoDaddy are getting hot and heavy into Cells V2 these days and would like to propose an enhancement, or maybe see if something like this is already in the works. Need: To be able to “synchronize” cells from a specified file (git controlled, or inventory generated). Details: We are thinking about adding a new method to nova-manage called “apply-cells” that would take a json/yaml file and “make-it-so”. This method would make the cells in the DB exactly match what the spec file says, matching on the cell’s name. Internally it calls its own create_cell, update_cell, and delete_cell commands to get it done. We already have a POC in the works. Are you aware of any others who have made requests for something like this? Ref: https://review.openstack.org/#/c/568987/ Thanks a ton, David Bingham (wwriverrat on IRC) -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Thu May 17 01:38:51 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 17 May 2018 09:38:51 +0800 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> Message-ID: 2018-05-17 1:24 GMT+08:00 Jay Pipes : > On 05/16/2018 01:01 PM, Nadathur, Sundar wrote: > >> Hi, >> The Cyborg quota spec [1] proposes to implement a quota (maximum >> usage) for accelerators on a per-project basis, to prevent one project >> (tenant) from over-using some resources and starving other tenants. There >> are separate resource classes for different accelerator types (GPUs, FPGAs, >> etc.), and so we can do quotas per RC. >> >> The current proposal [2] is to track the usage in Cyborg agent/driver. I >> am not sure that scheme will work, as I have indicated in the comments on >> [1]. Here is another possible way. >> >> * The operator configures the oslo.limit in keystone per-project >> per-resource-class (GPU, FPGA, ...). >> o Until this gets into Keystone, Cyborg may define its own quota >> table, as defined in [1]. >> * Cyborg implements a table to track per-project usage, as defined in >> [1]. >> > > Placement already stores usage information for all allocations of > resources.
There is already even a /usages API endpoint that you can > specify a project and/or user: > > https://developer.openstack.org/api-ref/placement/#list-usages > > I see no reason not to use it. > > There is already actually a spec to use placement for quota usage checks > in Nova here: > > https://review.openstack.org/#/c/509042/ FYI, I'm working on a spec which appends to that spec. It's about counting quota for resource classes (GPU, custom RC, etc.) other than the nova built-in resources (cores, ram). It should be able to count the resource classes which are used by cyborg. But yes, we probably should answer Matt's question first: whether we should let Nova count quota instead of Cyborg. > > > Probably best to have a look at that and see if it will end up meeting > your needs. > > * Cyborg provides a filter for the Nova scheduler, which checks >> whether the project making the request has exceeded its own quota. >> > > Quota checks happen before Nova's scheduler gets involved, so having a > scheduler filter handle quota usage checking is pretty much a non-starter. > > I'll have a look at the patches you've proposed and comment there. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Thu May 17 02:05:04 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Wed, 16 May 2018 22:05:04 -0400 Subject: [openstack-dev] [Zun] Relocate jobs from openstack/zun to openstack/zun-tempest-plugin In-Reply-To: <87sh6ri4r5.fsf@meyer.lemoncheese.net> References: <87sh6ri4r5.fsf@meyer.lemoncheese.net> Message-ID: On Wed, May 16, 2018 at 7:02 PM, James E. Blair wrote: > Hongbin Lu writes: > > > The goal of those patches is to move the job definitions and playbooks > from > > openstack/zun to openstack/zun-tempest-plugin. The advantages of this > > change are as follows: > > > > * It makes job definitions closer to the tempest test cases, which is better > > for development and the code review workflow. For example, sometimes we can > > avoid splitting a patch across two repos in order to add a simple tempest > test > > case. > > * openstack/zun is branched and openstack/zun-tempest-plugin is > branchless. > > Zuul job definitions seem to fit better into a branchless context. > > * It saves us the overhead of backporting job definitions to stable branches. > > Sometimes, missing a backport might lead to gate breakage and block the > > development workflow. > > Just a minor clarification: it's not always the case that branchless is > better. > > Jobs which operate on repos that are branched are likely to be easier to > work with in the long run, as whatever configuration is specific to the > branch appears on that branch, instead of somewhere else. > > Further, there shouldn't be a need to backport changes once the initial > jobs are set up. In the future, when you branch master to stable/foo, > you'll automatically get a copy of the job that's appropriate for that > point in time, and you only need to update it if you're already updating > the software on that branch. Older versions of jobs on stable branches > can continue to use their old configuration.
> For jobs which should perform the same function on all branches, it is > easier to have those defined in branchless repos. But in either case, > you can accomplish the same thing without moving jobs. In a branched > repo, you can add a "branches: .*" matcher, and in a branchless repo, > you can add multiple variants for each branch. > > The new v3-native devstack jobs are branched, and are defined in the > devstack repo. They define how to set up devstack for each branch. But > the tempest jobs (which build on top of the devstack jobs), are not > branched (just like tempest), since they are designed to run the same > way on all branches. > > I don't know enough about the situation to recommend one way or the > other for Zun. But I do want to emphasize that the best answer depends > on the circumstances. > Hi Jim, Thanks a lot for sharing your expertise. Based on what you said, I revisited Zun's situation and now think the branchless approach might not be better. Previously, I ran into a situation where I wanted to write a tempest test case for a specific API microversion. In order to get it set up, it seemed I needed to submit three patches: a tempest test case (at zun-tempest-plugin), a tempest config change to populate the min/max microversion (at zun master) and a backport (at zun stable/queens). A simple change that needed to be split into three patches made me revisit the current layout and explore the branchless approach that is exercised in neutron [1]. I am revisiting the situation now and leaning toward the branched approach, because the microversion min/max setup seems to be needed just once per branch. However, I am not sure how often we will run into a similar situation where a change in a tempest test case is coupled with the job config. If it happens frequently in the future, switching to the branchless approach might be more convenient. [1] https://github.com/openstack/neutron-tempest-plugin/blob/master/.zuul.yaml Best regards, Hongbin > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Thu May 17 03:43:16 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 16 May 2018 20:43:16 -0700 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> Message-ID: The Octavia project had other breakage due to sphinx > 1.7, but we have already resolved those issues. Back story: the way arguments are handled for apidoc changed. An example patch for the fix would be: https://review.openstack.org/#/c/568383/ Michael (johnsom) On Wed, May 16, 2018 at 1:59 PM, Matthew Thode wrote: > Sphinx has breaking changes (yet again) and we need to figure out how to > deal with it. I think the fix will be simple for affected projects, but > we should probably move forward on this. The error people are getting > seems to be 'Field list ends without a blank line; unexpected unindent.' > > I'd like to keep on 1.7.4 and have the affected projects fix the error > so we can move on, but the revert has been proposed (and approved to get > gate unbroken for them). https://review.openstack.org/568248 Any > advice from the community is welcome. > > -- > Matthew Thode (prometheanfire) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From tony at bakeyournoodle.com Thu May 17 03:51:06 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 17 May 2018 13:51:06 +1000 Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4 from 1.6.5 In-Reply-To: <20180516211436.coyp2zli22uoosg7@gentoo.org> References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org> <1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org> Message-ID: <20180517035105.GD8215@thor.bakeyournoodle.com> On Wed, May 16, 2018 at 04:14:36PM -0500, Matthew Thode wrote: > On 18-05-16 17:07:09, Doug Hellmann wrote: > > Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500: > > > Sphinx has breaking changes (yet again) and we need to figure out how to > > > deal with it. I think the fix will be simple for affected projects, but > > > we should probably move forward on this. The error people are getting > > > seems to be 'Field list ends without a blank line; unexpected unindent.' > > > > > > I'd like to keep on 1.7.4 and have the affected projects fix the error > > > so we can move on, but the revert has been proposed (and approved to get
From johnsomor at gmail.com  Thu May 17 03:43:16 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Wed, 16 May 2018 20:43:16 -0700
Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4
	from 1.6.5
In-Reply-To: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org>
References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org>
Message-ID:

The Octavia project had other breakage due to sphinx > 1.7 but we have
already resolved those issues.
Back story: the way arguments are handled for apidoc changed.

An example patch for the fix would be:
https://review.openstack.org/#/c/568383/

Michael (johnsom)

On Wed, May 16, 2018 at 1:59 PM, Matthew Thode wrote:
> Sphinx has breaking changes (yet again) and we need to figure out how to
> deal with it. I think the fix will be simple for affected projects, but
> we should probably move forward on this. The error people are getting
> seems to be 'Field list ends without a blank line; unexpected unindent.'
>
> I'd like to keep on 1.7.4 and have the affected projects fix the error
> so we can move on, but the revert has been proposed (and approved to get
> gate unbroken for them). https://review.openstack.org/568248 Any
> advice from the community is welcome.
>
> --
> Matthew Thode (prometheanfire)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From tony at bakeyournoodle.com  Thu May 17 03:51:06 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 17 May 2018 13:51:06 +1000
Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4
	from 1.6.5
In-Reply-To: <20180516211436.coyp2zli22uoosg7@gentoo.org>
References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org>
	<1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org>
Message-ID: <20180517035105.GD8215@thor.bakeyournoodle.com>

On Wed, May 16, 2018 at 04:14:36PM -0500, Matthew Thode wrote:
> On 18-05-16 17:07:09, Doug Hellmann wrote:
> > Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500:
> > > Sphinx has breaking changes (yet again) and we need to figure out how to
> > > deal with it. I think the fix will be simple for affected projects, but
> > > we should probably move forward on this. The error people are getting
> > > seems to be 'Field list ends without a blank line; unexpected unindent.'
> > >
> > > I'd like to keep on 1.7.4 and have the affected projects fix the error
> > > so we can move on, but the revert has been proposed (and approved to get
> > > gate unbroken for them). https://review.openstack.org/568248 Any
> > > advice from the community is welcome.
> > >
> >
> > Is it sphinx, or docutils?
> >
> > Do you have an example of the error?
> >
>
> From https://bugs.launchpad.net/networking-midonet/+bug/1771092
>
> 2018-05-13 14:22:06.176410 | ubuntu-xenial | Warning, treated as error:
> 2018-05-13 14:22:06.176967 | ubuntu-xenial | /home/zuul/src/git.openstack.org/openstack/networking-midonet/midonet/neutron/db/l3_db_midonet.py:docstring of midonet.neutron.db.l3_db_midonet.MidonetL3DBMixin.get_router_for_floatingip:8:Field list ends without a blank line; unexpected unindent.
>

Adding something like:

(.docs) [tony at thor networking-midonet]$ ( cd ../neutron && git diff )
diff --git a/neutron/db/l3_db.py b/neutron/db/l3_db.py
index 33b5d99b1..66794542a 100644
--- a/neutron/db/l3_db.py
+++ b/neutron/db/l3_db.py
@@ -1091,8 +1091,8 @@ class L3_NAT_dbonly_mixin(l3.RouterPluginBase,
         :param internal_subnet: The subnet for the fixed-ip.
         :param external_network_id: The external network for floating-ip.

-        :raises: ExternalGatewayForFloatingIPNotFound if no suitable router
-        is found.
+        :raises: ExternalGatewayForFloatingIPNotFound if no suitable router \
+            is found.
         """

         # Find routers(with router_id and interface address) that

(.docs) [tony at thor networking-midonet]$ ( cd ../os-vif && git diff )
diff --git a/os_vif/plugin.py b/os_vif/plugin.py
index 56566a6..2a437a6 100644
--- a/os_vif/plugin.py
+++ b/os_vif/plugin.py
@@ -49,10 +49,11 @@ class PluginBase(object):
         Given a model of a VIF, perform operations to plug the VIF properly.

         :param vif: `os_vif.objects.vif.VIFBase` object.
-        :param instance_info: `os_vif.objects.instance_info.InstanceInfo`
-            object.
-        :raises `processutils.ProcessExecutionError`. Plugins implementing
-            this method should let `processutils.ProcessExecutionError`
+        :param instance_info: `os_vif.objects.instance_info.InstanceInfo` \
+            object.
+
+        :raises `processutils.ProcessExecutionError`. Plugins implementing \
+            this method should let `processutils.ProcessExecutionError` \
             bubble up.
         """

@@ -63,9 +64,10 @@ class PluginBase(object):

         :param vif: `os_vif.objects.vif.VIFBase` object.
         :param instance_info: `os_vif.objects.instance_info.InstanceInfo`
-            object.
-        :raises `processutils.ProcessExecutionError`. Plugins implementing
-            this method should let `processutils.ProcessExecutionError`
+            object.
+
+        :raises `processutils.ProcessExecutionError`. Plugins implementing \
+            this method should let `processutils.ProcessExecutionError` \
             bubble up.
         """

fixes the midonet docs build for me (locally) on sphinx 1.7.4. I'm far
from a sphinx expert but the changes to neutron and os-vif seem correct
to me.

Perhaps the sphinx parser just got more strict?

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From prometheanfire at gentoo.org  Thu May 17 04:18:12 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Wed, 16 May 2018 23:18:12 -0500
Subject: [openstack-dev] [all][requirements][docs] sphinx update to 1.7.4
	from 1.6.5
In-Reply-To: <20180517035105.GD8215@thor.bakeyournoodle.com>
References: <20180516205947.ezyhuvmocvxmb3lz@gentoo.org>
	<1526504809-sup-2834@lrrr.local> <20180516211436.coyp2zli22uoosg7@gentoo.org>
	<20180517035105.GD8215@thor.bakeyournoodle.com>
Message-ID: <20180517041811.r2xt2a7achehn43f@gentoo.org>

On 18-05-17 13:51:06, Tony Breeds wrote:
> On Wed, May 16, 2018 at 04:14:36PM -0500, Matthew Thode wrote:
> > On 18-05-16 17:07:09, Doug Hellmann wrote:
> > > Excerpts from Matthew Thode's message of 2018-05-16 15:59:47 -0500:
> > > > Sphinx has breaking changes (yet again) and we need to figure out how to
> > > > deal with it. I think the fix will be simple for affected projects, but
> > > > we should probably move forward on this. The error people are getting
> > > > seems to be 'Field list ends without a blank line; unexpected unindent.'
> > > >
> > > > I'd like to keep on 1.7.4 and have the affected projects fix the error
> > > > so we can move on, but the revert has been proposed (and approved to get
> > > > gate unbroken for them). https://review.openstack.org/568248 Any
> > > > advice from the community is welcome.
> > >
> > > Is it sphinx, or docutils?
> > >
> > > Do you have an example of the error?
> >
> > From https://bugs.launchpad.net/networking-midonet/+bug/1771092
> >
> > 2018-05-13 14:22:06.176410 | ubuntu-xenial | Warning, treated as error:
> > 2018-05-13 14:22:06.176967 | ubuntu-xenial | /home/zuul/src/git.openstack.org/openstack/networking-midonet/midonet/neutron/db/l3_db_midonet.py:docstring of midonet.neutron.db.l3_db_midonet.MidonetL3DBMixin.get_router_for_floatingip:8:Field list ends without a blank line; unexpected unindent.
>
> Perhaps the sphinx parser just got more strict?

Yep

--
Matthew Thode (prometheanfire)
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:
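[Editor's note: for readers hitting the same "Field list ends without a
blank line; unexpected unindent" error, here is a minimal well-formed
docstring field list. It illustrates the docutils rule, and is not the
exact patch merged in neutron or os-vif: continuation lines are indented
under their field, and the list ends with a blank line.]

    def get_router_for_floatingip(self, context, internal_port,
                                  internal_subnet, external_network_id):
        """Find a router to handle the floating-ip association.

        :param internal_port: The port for the fixed-ip.
        :param internal_subnet: The subnet for the fixed-ip.
        :param external_network_id: The external network for floating-ip.
        :raises: ExternalGatewayForFloatingIPNotFound if no suitable
            router is found.
        """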
From yu-kasuya at kddi-research.jp  Thu May 17 05:39:03 2018
From: yu-kasuya at kddi-research.jp (Yuki Kasuya)
Date: Thu, 17 May 2018 14:39:03 +0900
Subject: [openstack-dev] [Forum] Fault Management/Monitoring for
	NFV/Edge/5G/IoT
Message-ID: <0091929a-0ca3-ff11-5a41-4525c53a4fb9@kddi-research.jp>

Hi All,

I've created an etherpad for Fault Management/Monitoring for
NFV/Edge/5G/IoT. It'll take place on Tuesday, May 22, 4:40pm-6:10pm @
Room 221-222. If you have any use case, idea, or challenge for FM in
these new areas, please join this forum and add your topics and
comments to the etherpad.

https://etherpad.openstack.org/p/YVR-fm-monitoring

Best regards,
Yuki

--
---------------------------------------------
KDDI Research, Inc.
Integrated Core Network Control And Management Laboratory
Yuki Kasuya
yu-kasuya at kddilabs.jp
+81 80 9048 8405

From soulxu at gmail.com  Thu May 17 06:34:44 2018
From: soulxu at gmail.com (Alex Xu)
Date: Thu, 17 May 2018 14:34:44 +0800
Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas
In-Reply-To: 
References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com>
	<4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com>
Message-ID:

2018-05-17 9:38 GMT+08:00 Alex Xu :

>
> 2018-05-17 1:24 GMT+08:00 Jay Pipes :
>
>> On 05/16/2018 01:01 PM, Nadathur, Sundar wrote:
>>
>>> Hi,
>>>     The Cyborg quota spec [1] proposes to implement a quota (maximum
>>> usage) for accelerators on a per-project basis, to prevent one project
>>> (tenant) from over-using some resources and starving other tenants. There
>>> are separate resource classes for different accelerator types (GPUs, FPGAs,
>>> etc.), and so we can do quotas per RC.
>>>
>>> The current proposal [2] is to track the usage in Cyborg agent/driver. I
>>> am not sure that scheme will work, as I have indicated in the comments on
>>> [1]. Here is another possible way.
>>>
>>>    * The operator configures the oslo.limit in keystone per-project
>>>      per-resource-class (GPU, FPGA, ...).
>>>      o Until this gets into Keystone, Cyborg may define its own quota
>>>        table, as defined in [1].
>>>    * Cyborg implements a table to track per-project usage, as defined in
>>>      [1].
>>>
>>
>> Placement already stores usage information for all allocations of
>> resources. There is already even a /usages API endpoint that you can
>> specify a project and/or user:
>>
>> https://developer.openstack.org/api-ref/placement/#list-usages
>>
>> I see no reason not to use it.
>>
>> There is already actually a spec to use placement for quota usage checks
>> in Nova here:
>>
>> https://review.openstack.org/#/c/509042/
>
>
> FYI, I'm working on a spec that builds on that spec. It's about counting
> quota for resource classes (GPU, custom RC, etc.) other than Nova's
> built-in resources (cores, RAM). It should be able to count the resource
> classes that are used by Cyborg. But yes, we probably should answer Matt's
> question first: whether we should let Nova count quota instead of Cyborg.
>

Here is the link: https://review.openstack.org/#/c/569011/

>
>>
>> Probably best to have a look at that and see if it will end up meeting
>> your needs.
>>
>> * Cyborg provides a filter for the Nova scheduler, which checks
>>> whether the project making the request has exceeded its own quota.
>>>
>>
>> Quota checks happen before Nova's scheduler gets involved, so having a
>> scheduler filter handle quota usage checking is pretty much a non-starter.
>>
>> I'll have a look at the patches you've proposed and comment there.
>>
>> Best,
>> -jay
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
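[Editor's note: a hedged sketch of the usage check being discussed. The
endpoint and microversion follow the placement API reference linked
above (/usages is available from placement microversion 1.9); the auth
values and project id are placeholders, not taken from any deployment.]

    from keystoneauth1 import loading, session

    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret', project_name='admin',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Sum of current allocations, per resource class, for one project.
    resp = sess.get(
        '/usages',
        endpoint_filter={'service_type': 'placement'},
        params={'project_id': '<project-uuid>'},
        headers={'OpenStack-API-Version': 'placement 1.9'})
    print(resp.json())  # e.g. {"usages": {"VCPU": 4, "MEMORY_MB": 2048}}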
From gkotton at vmware.com  Thu May 17 06:58:07 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 17 May 2018 06:58:07 +0000
Subject: [openstack-dev] [neutron] Bug deputy
Message-ID:

Hi,
An urgent matter has come up this week. If possible, can someone please
replace me.
Sorry
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From iwamoto at valinux.co.jp  Thu May 17 07:09:12 2018
From: iwamoto at valinux.co.jp (IWAMOTO Toshihiro)
Date: Thu, 17 May 2018 16:09:12 +0900
Subject: [openstack-dev] [python3] flake8 and pycodestyle W60x warnings
Message-ID: <20180517070912.4A525B34AE@mail.valinux.co.jp>

pycodestyle-2.4.0 added new warnings W605 and W606, which need to be
addressed or future versions of python3 will refuse to run.
https://github.com/PyCQA/pycodestyle/pull/676

(OpenStack CI's flake8 predates pycodestyle; the flake8 version isn't
managed by g-r, but that's another story. No release of flake8 supports
pycodestyle 2.4.0 yet. ;)

nova seems to have ~200 of those warnings, while other projects don't
have many, FWIW.

--
IWAMOTO Toshihiro
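[Editor's note: a brief illustration of W605 (invalid escape sequence),
the more common of the two warnings; the regex below is made up.]

    import re

    # W605: '\d' is not a valid string escape and will eventually
    # become a SyntaxError in Python 3.
    PORT_RE = re.compile('port-\d+')

    # The usual fix is a raw string literal:
    PORT_RE = re.compile(r'port-\d+')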
From tobias.urdin at crystone.com  Thu May 17 07:49:50 2018
From: tobias.urdin at crystone.com (Tobias Urdin)
Date: Thu, 17 May 2018 07:49:50 +0000
Subject: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400
	bad request
Message-ID: <380be2f6db2d427190de9fd0e3d3992d@mb01.staff.ognet.se>

Hello,

I was interested in getting Magnum working in gate by getting @dms patch
fixed and merged [1].
The installation goes fine on Ubuntu and CentOS, however the tempest
testing for Magnum fails on CentOS (it is not available on Ubuntu).

It seems to be related to authentication against keystone, but I don't
understand why; please see logs [2] [3]

[1] https://review.openstack.org/#/c/367012/
[2] http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010
[3] http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/

From thierry at openstack.org  Thu May 17 07:58:00 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Thu, 17 May 2018 09:58:00 +0200
Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base
	services
In-Reply-To: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org>
References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org>
Message-ID: <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org>

Jeremy Stanley wrote:
> [...]
> As a community, we're likely to continue to make imbalanced
> trade-offs against relevant security features if we don't move
> forward and declare that some sort of standardized key storage
> solution is a fundamental component on which OpenStack services can
> rely. Being able to just assume that you can encrypt volumes in
> Swift, even as a means to further secure a TripleO undercloud, would
> be a step in the right direction for security-minded deployments.
>
> Unfortunately, I'm unable to find any follow-up summary on the
> mailing list from the aforementioned session, but recollection from
> those who were present (I had a schedule conflict at that time) was
> that a Castellan-compatible key store would at least be a candidate
> for inclusion in our base services list:
>
> https://governance.openstack.org/tc/reference/base-services.html

Yes, last time this was discussed, there was lazy consensus that adding
"a Castellan-compatible secret store" would be a good addition to the
base services list if we wanted to avoid proliferation of half-baked
keystore implementations in various components.

The two blockers were:

1/ castellan had to be made less Barbican-specific, offer at least one
other secret store (Vault), and move under Oslo (done)
2/ some projects (was it Designate ? Octavia ?) were relying on advanced
functions of Barbican not generally found in other secret stores, like
certificate generation, and so would prefer to depend on Barbican
itself, which confuses the messaging around the base service addition a
bit ("any Castellan-supported secret store as long as it's Barbican")

--
Thierry Carrez (ttx)

From bdobreli at redhat.com  Thu May 17 08:18:11 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 17 May 2018 10:18:11 +0200
Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base
	services
In-Reply-To: <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org>
References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org>
	<16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org>
Message-ID:

On 5/17/18 9:58 AM, Thierry Carrez wrote:
> Jeremy Stanley wrote:
>> [...]
>> As a community, we're likely to continue to make imbalanced
>> trade-offs against relevant security features if we don't move
>> forward and declare that some sort of standardized key storage
>> solution is a fundamental component on which OpenStack services can
>> rely. Being able to just assume that you can encrypt volumes in
>> Swift, even as a means to further secure a TripleO undercloud, would
>> be a step in the right direction for security-minded deployments.
>>
>> Unfortunately, I'm unable to find any follow-up summary on the
>> mailing list from the aforementioned session, but recollection from
>> those who were present (I had a schedule conflict at that time) was
>> that a Castellan-compatible key store would at least be a candidate
>> for inclusion in our base services list:
>>
>> https://governance.openstack.org/tc/reference/base-services.html
>
> Yes, last time this was discussed, there was lazy consensus that adding
> "a Castellan-compatible secret store" would be a good addition to the
> base services list if we wanted to avoid proliferation of half-baked
> keystore implementations in various components.
>
> The two blockers were:
>
> 1/ castellan had to be made less Barbican-specific, offer at least one
> other secret store (Vault), and move under Oslo (done)

Back to the subject of tripleo underclouds running Barbican: using
Vault as a backend may be a good option, given that openshift supports
[0] it as well for storing k8s secrets, kubespray does [1] for vanilla
k8s deployments, and we have an openshift/k8s-based control plane for
openstack on the integration roadmap. So we'll very likely end up
running Barbican/Vault on the undercloud anyway.

[0] https://blog.openshift.com/managing-secrets-openshift-vault-integration/
[1] https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vault.md

> 2/ some projects (was it Designate ? Octavia ?) were relying on advanced
> functions of Barbican not generally found in other secret stores, like
> certificate generation, and so would prefer to depend on Barbican
> itself, which confuses the messaging around the base service addition a
> bit ("any Castellan-supported secret store as long as it's Barbican")
>

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando
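[Editor's note: a hedged sketch of what "a Castellan-compatible secret
store" looks like to a consuming service. The option values and the
context object are illustrative assumptions; check the castellan docs
for the real configuration of the Vault or Barbican backends.]

    # in the service's configuration file (assumed layout):
    #   [key_manager]
    #   backend = vault        # or 'barbican'

    from castellan import key_manager
    from castellan.common.objects import opaque_data
    from oslo_config import cfg

    CONF = cfg.CONF
    manager = key_manager.API(CONF)

    secret = opaque_data.OpaqueData(b'encryption-root-key')
    secret_ref = manager.store(context, secret)   # 'context' assumed given
    retrieved = manager.get(context, secret_ref)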
From cjeanner at redhat.com  Thu May 17 08:33:54 2018
From: cjeanner at redhat.com (Cédric Jeanneret)
Date: Thu, 17 May 2018 10:33:54 +0200
Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base
	services
In-Reply-To: 
References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org>
	<16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org>
Message-ID:

On 05/17/2018 10:18 AM, Bogdan Dobrelya wrote:
> On 5/17/18 9:58 AM, Thierry Carrez wrote:
>> Jeremy Stanley wrote:
>>> [...]
>>> As a community, we're likely to continue to make imbalanced
>>> trade-offs against relevant security features if we don't move
>>> forward and declare that some sort of standardized key storage
>>> solution is a fundamental component on which OpenStack services can
>>> rely. Being able to just assume that you can encrypt volumes in
>>> Swift, even as a means to further secure a TripleO undercloud, would
>>> be a step in the right direction for security-minded deployments.
>>>
>>> Unfortunately, I'm unable to find any follow-up summary on the
>>> mailing list from the aforementioned session, but recollection from
>>> those who were present (I had a schedule conflict at that time) was
>>> that a Castellan-compatible key store would at least be a candidate
>>> for inclusion in our base services list:
>>>
>>> https://governance.openstack.org/tc/reference/base-services.html
>>
>> Yes, last time this was discussed, there was lazy consensus that
>> adding "a Castellan-compatible secret store" would be a good addition
>> to the base services list if we wanted to avoid proliferation of
>> half-baked keystore implementations in various components.
>>
>> The two blockers were:
>>
>> 1/ castellan had to be made less Barbican-specific, offer at least one
>> other secret store (Vault), and move under Oslo (done)
>
> Back to the subject of tripleo underclouds running Barbican: using
> Vault as a backend may be a good option, given that openshift supports
> [0] it as well for storing k8s secrets, kubespray does [1] for vanilla
> k8s deployments, and we have an openshift/k8s-based control plane for
> openstack on the integration roadmap. So we'll very likely end up
> running Barbican/Vault on the undercloud anyway.
>
> [0]
> https://blog.openshift.com/managing-secrets-openshift-vault-integration/
> [1]
> https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vault.md

That just sounds lovely, especially since this allows converging
"secure storage" tech between projects. On my own, I was considering
some secure storage (custodia) in the context of the public TLS
certificate storage/update/provisioning. Having by default a native way
to store secrets used by the overcloud deploy/life is a really good
thing, and will prevent leaks, hardcoded passwords in files and so on
(although, yeah, you'll need something to access barbican ;)).

>>
>> 2/ some projects (was it Designate ? Octavia ?) were relying on
>> advanced functions of Barbican not generally found in other secret
>> stores, like certificate generation, and so would prefer to depend on
>> Barbican itself, which confuses the messaging around the base service
>> addition a bit ("any Castellan-supported secret store as long as it's
>> Barbican")
>>

--
Cédric Jeanneret
Software Engineer
DFG:DF
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From gmann at ghanshyammann.com  Thu May 17 08:57:43 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 17 May 2018 17:57:43 +0900
Subject: [openstack-dev] [QA][Forum] QA onboarding session in Vancouver
Message-ID:

Hi All,

QA team is planning an onboarding session during the Vancouver Summit:

- https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21646/qa-project-onboarding
- Tuesday, May 22, 2018, 9:50am-10:30am
- Vancouver Convention Centre West - Level Two - Room 223

Details of this session are in this etherpad [1]. Beyond what is
written in the etherpad, this session will be open-ended and anyone can
bring up topics related to QA. Attendees can interact with the QA
developers about the help they need, or about how they would like to
help QA.

Have a safe flight, and we look forward to meeting you there!

[1] https://etherpad.openstack.org/p/YVR18-forum-qa-onboarding-vancouver

-QA Team

From rl at patchworkscience.org  Thu May 17 09:14:39 2018
From: rl at patchworkscience.org (Roger Luethi)
Date: Thu, 17 May 2018 11:14:39 +0200
Subject: [openstack-dev] [docs] Automating documentation the tripleo way?
In-Reply-To: <1526495787-sup-1958@lrrr.local>
References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com>
	<1526495787-sup-1958@lrrr.local>
Message-ID: <00a81f15-0f0c-96a8-966c-9044e32a344c@patchworkscience.org>

On 16.05.18 20:40, Doug Hellmann wrote:
> Weren't the folks doing the training-labs or training-guides taking a
> similar approach? IIRC, they ended up implementing what amounted to
> their own installer for OpenStack, and then ended up with all of the
> associated upgrade and testing burden.

training-labs uses its own installer because the project goal is to do
the full deployment (that is, including the creation of appropriate
VMs) in an automated fashion on all supported platforms (Linux, macOS,
Windows). The scripts that are injected into the VMs follow the
install-guide as closely as possible. We were pretty close to
automating the translation from install-guide docs to shell scripts,
but some issues remained (e.g., some scripts need guards waiting for
services to come up in order to avoid race conditions; this is not
documented in the install-guide).

Roger

From gmann at ghanshyammann.com  Thu May 17 09:42:04 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 17 May 2018 18:42:04 +0900
Subject: [openstack-dev] Canceling QA office hour for next week
Message-ID:

Hi All,

As most of us will be at the Vancouver summit next week, I am canceling
next week's QA office hour (24th May, Thursday). We will resume after
the summit, on 31st May, Thursday.
-gmann

From zigo at debian.org  Thu May 17 12:07:26 2018
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 17 May 2018 14:07:26 +0200
Subject: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400
	bad request
In-Reply-To: <380be2f6db2d427190de9fd0e3d3992d@mb01.staff.ognet.se>
References: <380be2f6db2d427190de9fd0e3d3992d@mb01.staff.ognet.se>
Message-ID:

On 05/17/2018 09:49 AM, Tobias Urdin wrote:
> Hello,
>
> I was interested in getting Magnum working in gate by getting @dms patch
> fixed and merged [1].
> The installation goes fine on Ubuntu and CentOS, however the tempest
> testing for Magnum fails on CentOS (it is not available on Ubuntu).
>
> It seems to be related to authentication against keystone, but I don't
> understand why; please see logs [2] [3]
>
> [1] https://review.openstack.org/#/c/367012/
> [2] http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010
> [3] http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/

From that log, you're getting a 404 from nova-api:

Response - Headers: {'status': '404', u'content-length': '113', 'content-location': 'https://[::1]:8774/v2.1/os-keypairs/default', u'x-compute-request-id': 'req-35ae4651-186c-4f20-9143-f68f67b7d401', u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', u'server': 'Apache/2.4.6 (CentOS)', u'openstack-api-version': 'compute 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1', u'date': 'Wed, 16 May 2018 15:10:33 GMT', u'content-type': 'application/json; charset=UTF-8', u'x-openstack-request-id': 'req-35ae4651-186c-4f20-9143-f68f67b7d401'}

but that seems fine, because the request right after it works. However,
a bit further on, you're getting a 500 error from magnum-api:

Response - Headers: {'status': '500', u'content-length': '149', 'content-location': 'https://[::1]:9511/clustertemplates', u'openstack-api-maximum-version': 'container-infra 1.6', u'vary': 'OpenStack-API-Version', u'openstack-api-minimum-version': 'container-infra 1.1', u'server': 'Werkzeug/0.11.6 Python/2.7.5', u'openstack-api-version': 'container-infra 1.1', u'date': 'Wed, 16 May 2018 15:10:36 GMT', u'content-type': 'application/json', u'x-openstack-request-id': 'req-12c635c9-889a-48b4-91d4-ded51220ad64'}

With this body:

Body: {"errors": [{"status": 500, "code": "server", "links": [], "title": "Bad Request (HTTP 400)", "detail": "Bad Request (HTTP 400)", "request_id": ""}]}

2018-05-16 15:24:14.434432 | centos-7 | 2018-05-16 15:10:36,016 13619 DEBUG [tempest.lib.common.dynamic_creds] Clearing network: {u'provider:physical_network': None, u'ipv6_address_scope': None, u'revision_number': 2, u'port_security_enabled': True, u'mtu': 1400, u'id': u'c26c237a-0583-4f72-8300-f87051080be7', u'router:external': False, u'availability_zone_hints': [], u'availability_zones': [], u'provider:segmentation_id': 35, u'ipv4_address_scope': None, u'shared': False, u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'status': u'ACTIVE', u'subnets': [], u'description': u'', u'tags': [], u'updated_at': u'2018-05-16T15:10:26Z', u'is_default': False, u'qos_policy_id': None, u'name': u'tempest-setUp-2113966350-network', u'admin_state_up': True, u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': u'2018-05-16T15:10:26Z', u'provider:network_type': u'vxlan'}, subnet: {u'service_types': [], u'description': u'', u'enable_dhcp': True, u'tags':
[], u'network_id': u'c26c237a-0583-4f72-8300-f87051080be7', u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': u'2018-05-16T15:10:26Z', u'dns_nameservers': [], u'updated_at': u'2018-05-16T15:10:26Z', u'ipv6_ra_mode': None, u'allocation_pools': [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': u'10.100.0.1', u'revision_number': 0, u'ipv6_address_mode': None, u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'id': u'a7233852-e3f1-4129-b34e-c607aef5172e', u'subnetpool_id': None, u'name': u'tempest-setUp-2113966350-subnet'}, router: {u'status': u'ACTIVE', u'external_gateway_info': {u'network_id': u'c6cf6d80-fcbb-46e6-aefd-17f41b5c57b1', u'enable_snat': True, u'external_fixed_ips': [{u'subnet_id': u'34e589e9-86d2-4f72-a0c3-7990406561b1', u'ip_address': u'172.24.5.13'}]}, u'availability_zone_hints': [], u'availability_zones': [], u'description': u'', u'tags': [], u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': u'2018-05-16T15:10:27Z', u'admin_state_up': True, u'distributed': False, u'updated_at': u'2018-05-16T15:10:29Z', u'ha': False, u'flavor_id': None, u'revision_number': 2, u'routes': [], u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'id': u'bdf13d72-c19c-4ad1-b57d-ed6da9c569b3', u'name': u'tempest-setUp-2113966350-router'}

And right after that, we can only see clean-up calls (removing routers,
DELETE calls, etc.).

Looking at the magnum-api log shows issues in glanceclient just before
the 500 error. So something's probably going on there, with a bad
glanceclient request. Having a look into magnum.conf doesn't show
anything suspicious concerning [glance_client] though, so I went to
look into tempest.conf. And there, it shows no [magnum] section, and I
believe that's the issue. Your tempest package/whatever hasn't been
built with the magnum plugin, and there's nothing configured for magnum
like [magnum]/image_id and such. Maybe that still works though, because
of default values?

I wasn't able to completely figure it out, so I hope this helps... Did
you try to debug this in a VM?

Cheers,

Thomas Goirand (zigo)
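[Editor's note: for reference, a hypothetical tempest.conf fragment of
the kind zigo is pointing at. The option names follow the magnum tempest
plugin, but treat them as assumptions and check the plugin's config
module before copying them.]

    [magnum]
    image_id = fedora-atomic-latest
    nic_id = public
    keypair_id = default
    flavor_id = s1.magnum
    master_flavor_id = m1.magnum
    copy_logs = true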
From e0ne at e0ne.info  Thu May 17 12:28:50 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Thu, 17 May 2018 15:28:50 +0300
Subject: [openstack-dev] [horizon] No meeting on May 23rd
Message-ID:

Hi all,

Some of the team will be attending the OpenStack summit in Vancouver,
so I am canceling the weekly IRC meeting for the 23rd.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org  Thu May 17 12:33:40 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 17 May 2018 12:33:40 +0000
Subject: [openstack-dev] [python3] flake8 and pycodestyle W60x warnings
In-Reply-To: <20180517070912.4A525B34AE@mail.valinux.co.jp>
References: <20180517070912.4A525B34AE@mail.valinux.co.jp>
Message-ID: <20180517123340.zr47hkvrdrdnzouo@yuggoth.org>

On 2018-05-17 16:09:12 +0900 (+0900), IWAMOTO Toshihiro wrote:
[...]
> OpenStack CI's flake8 predates pycodestyle,

It's not "OpenStack CI's flake8" version. Nova's master branch is
getting flake8 transitively through its test-requirement on
hacking!=0.13.0,<0.14,>=0.12.0 which is causing it to select
hacking==0.12.0 (the only version between 0.12.0 and 0.14.0 is 0.13.0
which is explicitly skipped). In turn, that version of hacking declares
a requirement on flake8<2.6.0,>=2.5.4 which is causing it to use
flake8==2.5.5. As you noted, that depends on
pep8!=1.6.0,!=1.6.1,!=1.6.2,>=1.5.7 so pep8==1.7.1 gets used.

> flake8 version isn't managed by g-r, but that's another story.
[...]

The reason we don't globally-constrain hacking, flake8 or other static
analyzers is that projects are going to want to comply with new rules
at their own individual paces; it's up to the Nova team to decide when
to move their master branch testing to new versions of these. Per the
example above, if they upped their hacking cap to <1.2 they would get
hacking==1.1.0 (the latest release) which would install flake8==2.6.2
and so pycodestyle==2.0.0.

--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From Neha.Alhat at nttdata.com  Thu May 17 13:03:32 2018
From: Neha.Alhat at nttdata.com (Alhat, Neha)
Date: Thu, 17 May 2018 13:03:32 +0000
Subject: [openstack-dev] [Cinder] Need suggestion to add split-logger
	functionality in cinder
Message-ID:

Hi All,

Problem description:

Monty Taylor added split-logger functionality in patch [1]. This
functionality splits logging into four different loggers:

* keystoneauth.session.request
* keystoneauth.session.response
* keystoneauth.session.body
* keystoneauth.session.request-id

We are working on enabling this split_loggers functionality for
cinder's interaction with its internal clients (glanceclient,
keystoneclient, novaclient).

Steps followed to enable this functionality:

1. Register the configuration option 'split_loggers' in keystoneauth [2].
2. After registering the 'split_loggers' option in keystoneauth for the
cinder-to-novaclient interaction, we need to set
'split_loggers=True/False' under the [nova] section of cinder.conf, so
that the 'split_loggers' value is loaded from the [nova] section when
the session is loaded [3].

We tried the same approach for the cinder-to-glanceclient interaction,
but there is one difference: glanceclient uses the 'load_from_options'
method [4] to load the session, while novaclient uses the
'load_session_from_conf_options' method [3] from keystoneauth.

Impact:

For this, we need to register the session conf options from
keystoneauth under the [glance] section, which were earlier under the
[default] section. Please refer to the changes made for this [5].

Pros of using this approach:

1. As we are setting the conf option in keystoneauth, it will load the
'split_loggers' value directly from the conf file, so there is no need
to pass the 'split_loggers' value explicitly when loading the
keystoneauth session.

Cons of using this approach:

1. We need to register the session conf options under the [glance]
section instead of [default].

We need opinions on this, as it changes the group for the session conf
options from [default] to [glance].

[1]: https://review.openstack.org/#/c/505764/
[2]: http://paste.openstack.org/show/721071/
[3]: https://github.com/openstack/cinder/blob/master/cinder/compute/nova.py#L112
[4]: https://github.com/openstack/cinder/blob/master/cinder/image/glance.py#L104
[5]: http://paste.openstack.org/show/721095/

Disclaimer: This email and any attachments are sent in strictest
confidence for the sole use of the addressee and may contain legally
privileged, confidential, and proprietary data. If you are not the
intended recipient, please advise the sender by replying promptly to
this email and then delete and destroy this email and any attachments
without any further use, copying or forwarding.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
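[Editor's note: a hedged sketch of the two loading paths Neha
contrasts, assuming the session options (including split_loggers) are
registered under the respective groups as described; the exact option
wiring is an assumption, not cinder's final patch.]

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # novaclient path: session options, including split_loggers, live
    # under [nova] and are read straight from the config file.
    ks_loading.register_session_conf_options(CONF, 'nova')
    nova_session = ks_loading.load_session_from_conf_options(CONF, 'nova')

    # glanceclient path: values are passed explicitly to the loader, so
    # the group only determines where the values are read from.
    glance_session = ks_loading.session.Session().load_from_options(
        verify=True,
        split_loggers=CONF.glance.split_loggers)  # assumes registration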
From sean.mcginnis at gmx.com  Thu May 17 13:40:06 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 17 May 2018 08:40:06 -0500
Subject: [openstack-dev] [release] Release countdown for week R-14 and
	R-13, May 21 - June 1
Message-ID: <20180517134005.GA6953@sm-xps>

Here is the countdown content for the next two weeks, to cover while the
Summit takes place.

Development Focus
-----------------

Work on new features should be well underway. The Rocky-2 milestone is
coming up quickly.

Hopefully teams have good representation attending the Forum. This is a
great opportunity for getting feedback on existing and planned features
and bringing that feedback back to the teams.

*Note* With the Summit/Forum taking place next week, the release team
will not be processing any normal release requests. Please ping us
directly if something comes up that cannot wait, but with the many
distractions of the event, we want to avoid releasing anything that
could cause problems and require the attention of those otherwise
engaged.

General Information
-------------------

Membership freeze coincides with milestone 2 [0]. This means projects
that have not done a release yet must do so for the next two milestones
to be included in the Rocky release.

[0] https://releases.openstack.org/rocky/schedule.html#r-mf

In case you missed it last week, some projects still need to respond to
the mutable config [1] and mox removal [2] Rocky series goals. Just a
reminder that teams should respond to these goals, even if they do not
trigger any work for your specific project.

[1] https://storyboard.openstack.org/#!/story/2001545
[2] https://storyboard.openstack.org/#!/story/2001546

Upcoming Deadlines & Dates
--------------------------

Forum at OpenStack Summit in Vancouver: May 21-24
Rocky-2 Milestone: June 7

--
Sean McGinnis (smcginnis)

From pkovar at redhat.com  Thu May 17 14:22:14 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Thu, 17 May 2018 16:22:14 +0200
Subject: [openstack-dev] [docs] Automating documentation the tripleo way?
In-Reply-To: 
References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com>
	<1526495787-sup-1958@lrrr.local> <1526497268-sup-3619@lrrr.local>
Message-ID: <20180517162214.44f797c570f552b371a03a69@redhat.com>

On Wed, 16 May 2018 13:26:46 -0600
Wesley Hayutin wrote:

> On Wed, May 16, 2018 at 3:05 PM Doug Hellmann wrote:
>
> > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600:
> > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann
> > wrote:
> > >
> > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200:
> > > > > Hi all,
> > > > >
> > > > > In the past few years, we've seen several efforts aimed at automating
> > > > > procedural documentation, mostly centered around the OpenStack
> > > > > installation guide. This idea to automatically produce and verify
> > > > > installation steps or similar procedures was mentioned again at the
> > last
> > > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
> > > > > > > > > > It was brought to my attention that the tripleo team has been > > working on > > > > > automating some of the tripleo deployment procedures, using a Bash > > script > > > > > with included comment lines to supply some RST-formatted narrative, > > for > > > > > example: > > > > > > > > > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > > > > > > > > > The Bash script can then be converted to RST, e.g.: > > > > > > > > > > > > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > > > > > > > > > Source Code: > > > > > > > > > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > > > > > > > > > I really liked this approach and while I don't want to sound like > > selling > > > > > other people's work, I'm wondering if there is still an interest > > among > > > > the > > > > > broader OpenStack community in automating documentation like this? > > > > > > > > > > Thanks, > > > > > pk > > > > > > > > > > > > > Weren't the folks doing the training-labs or training-guides taking a > > > > similar approach? IIRC, they ended up implementing what amounted to > > > > their own installer for OpenStack, and then ended up with all of the > > > > associated upgrade and testing burden. > > > > > > > > I like the idea of trying to use some automation from this, but I > > wonder > > > > if we'd be better off extracting data from other tools, rather than > > > > building a new one. > > > > > > > > Doug > > > > > > > > > > So there really isn't anything new to create, the work is done and > > executed > > > on every tripleo change that runs in rdo-cloud. > > > > It wasn't clear what Petr was hoping to get. Deploying with TripleO is > > only one way to deploy, so we wouldn't be able to replace the current > > installation guides with the results of this work. It sounds like that's > > not the goal, though. Yes, I wasn't very clear on the goals as I didn't want to make too many assumptions before learning about technical details from other people. Ben's comments made me realize this approach would probably be best suited for generating documents such as quick start guides or tutorials that are procedural, yet they don't aim at describing multiple use cases. > > > > > > Instead of dismissing the idea upfront I'm more inclined to set an > > > achievable small step to see how well it works. My thought would be to > > > focus on the upcoming all-in-one installer and the automated doc > > generated > > > with that workflow. I'd like to target publishing the all-in-one tripleo > > > installer doc to [1] for Stein and of course a section of tripleo.org. > > > > As an official project, why is TripleO still publishing docs to its own > > site? That's not something we generally encourage. > > > > That said, publishing a new deployment guide based on this technique > > makes sense in general. What about Ben's comments elsewhere in the > > thread? > > > > I think Ben is referring to an older implementation and a slightly > different design but still has some points that we would want to be mindful > of. I think this is a worthy effort to take another pass at this > regarless to be honest as we've found a good combination of interested > folks and sometimes the right people make all the difference. 
> > My personal opinion is that I'm not expecting the automated doc generation > to be upload ready to a doc server after each run. I do expect it to do > 95% of the work, and to help keep the doc up to date with what is executed > in the latest releases of TripleO. Would it make sense to consider a bot automatically creating patches with content updates that would be then curated and reviewed by the docs contributors? > Also noting the doc used is a mixture > of static and generated documentation which I think worked out quite well > in order to not soley rely on what is executed in ci. > > So again, my thought is to create a small achievable goal and see where the > collaboration takes us. Is a tripleo-focused quick-start deployment guide (that would get integrated with the existing tripleo content) such a small achievable goal? Cheers, pk From pkovar at redhat.com Thu May 17 14:35:36 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 17 May 2018 16:35:36 +0200 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: <20180516170515.2gvyxqrnoacitndp@yuggoth.org> References: <20180516182445.3b9286418271e98ad9581474@redhat.com> <20180516170515.2gvyxqrnoacitndp@yuggoth.org> Message-ID: <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com> On Wed, 16 May 2018 17:05:15 +0000 Jeremy Stanley wrote: > On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote: > [...] > > I'd like to propose replacing the reference to the IBM Style Guide > > with a reference to the developerWorks editorial style guide > > (https://www.ibm.com/developerworks/library/styleguidelines/). > > This lightweight version comes from the same company and is based > > on the same guidelines, but most importantly, it is available for > > free. > [...] > > I suppose replacing a style guide nobody can access with one > everyone can (modulo legal concerns) is a step up. Still, are there > no style guides published under an actual free/open license? If > https://www.ibm.com/developerworks/community/terms/use/ is correct > then even accidental creation of a derivative work might be > prosecuted as copyright infringement. We don't really plan on reusing content from that site, just referring to it, so is it a concern? > http://www.writethedocs.org/guide/writing/style-guides/#selecting-a-good-style-guide-for-you > mentions some more aligned with our community's open ideals, such as > the 18F Content Guide (public domain), SUSE Documentation Style > Guide (GFDL), GNOME Documentation Style Guide (GFDL), and the > Writing Style Guide and Preferred Usage for DOD Issuances (public > domain). Granted adopting one of those might lead to a need to > overhaul some aspects of style in existing documents, so I can > understand it's not a choice to be made lightly. Still, we should > always consider embracing open process, and that includes using > guidelines which we can freely derive and republish as needed. I would be interested in hearing what other people think about that, but I would strongly prefer to stick with the existing "publisher" as that creates fewer issues than switching to a completely different style guide and then having to adjust our guidelines based on the IBM guide, etc. 
From pkovar at redhat.com  Thu May 17 14:35:36 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Thu, 17 May 2018 16:35:36 +0200
Subject: [openstack-dev] [docs] Style guide for OpenStack documentation
In-Reply-To: <20180516170515.2gvyxqrnoacitndp@yuggoth.org>
References: <20180516182445.3b9286418271e98ad9581474@redhat.com>
	<20180516170515.2gvyxqrnoacitndp@yuggoth.org>
Message-ID: <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com>

On Wed, 16 May 2018 17:05:15 +0000
Jeremy Stanley wrote:

> On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote:
> [...]
> > I'd like to propose replacing the reference to the IBM Style Guide
> > with a reference to the developerWorks editorial style guide
> > (https://www.ibm.com/developerworks/library/styleguidelines/).
> > This lightweight version comes from the same company and is based
> > on the same guidelines, but most importantly, it is available for
> > free.
> [...]
>
> I suppose replacing a style guide nobody can access with one
> everyone can (modulo legal concerns) is a step up. Still, are there
> no style guides published under an actual free/open license? If
> https://www.ibm.com/developerworks/community/terms/use/ is correct
> then even accidental creation of a derivative work might be
> prosecuted as copyright infringement.

We don't really plan on reusing content from that site, just referring
to it, so is it a concern?

> http://www.writethedocs.org/guide/writing/style-guides/#selecting-a-good-style-guide-for-you
> mentions some more aligned with our community's open ideals, such as
> the 18F Content Guide (public domain), SUSE Documentation Style
> Guide (GFDL), GNOME Documentation Style Guide (GFDL), and the
> Writing Style Guide and Preferred Usage for DOD Issuances (public
> domain). Granted adopting one of those might lead to a need to
> overhaul some aspects of style in existing documents, so I can
> understand it's not a choice to be made lightly. Still, we should
> always consider embracing open process, and that includes using
> guidelines which we can freely derive and republish as needed.

I would be interested in hearing what other people think about that,
but I would strongly prefer to stick with the existing "publisher" as
that creates fewer issues than switching to a completely different
style guide and then having to adjust our guidelines based on the IBM
guide, etc.

Thanks,
pk

From fungi at yuggoth.org  Thu May 17 15:03:23 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 17 May 2018 15:03:23 +0000
Subject: [openstack-dev] [docs] Style guide for OpenStack documentation
In-Reply-To: <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com>
References: <20180516182445.3b9286418271e98ad9581474@redhat.com>
	<20180516170515.2gvyxqrnoacitndp@yuggoth.org>
	<20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com>
Message-ID: <20180517150323.47mdca3625l5dfj7@yuggoth.org>

On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote:
> On Wed, 16 May 2018 17:05:15 +0000
> Jeremy Stanley wrote:
>
> > On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote:
> > [...]
> > > I'd like to propose replacing the reference to the IBM Style Guide
> > > with a reference to the developerWorks editorial style guide
> > > (https://www.ibm.com/developerworks/library/styleguidelines/).
> > > This lightweight version comes from the same company and is based
> > > on the same guidelines, but most importantly, it is available for
> > > free.
> > [...]
> >
> > I suppose replacing a style guide nobody can access with one
> > everyone can (modulo legal concerns) is a step up. Still, are there
> > no style guides published under an actual free/open license? If
> > https://www.ibm.com/developerworks/community/terms/use/ is correct
> > then even accidental creation of a derivative work might be
> > prosecuted as copyright infringement.
>
> We don't really plan on reusing content from that site, just referring to
> it, so is it a concern?
[...]

A style guide is a tool. Free and open collaboration needs free
(libre, not merely gratis) tools, and that doesn't just mean software.
If, down the road, you want an OpenStack Documentation Style Guide
which covers OpenStack-specific concerns to quote or transclude
information from a more thorough guide, that becomes a derivative work
and is subject to the licensing terms for the guide from which you're
copying.

There are a lot of other parallels between writing software and
writing prose here beyond mere intellectual property concerns too.
Saying that OpenStack Documentation is free and open, but then
endorsing an effectively proprietary guide as something its authors
should read and follow, sends a mixed message as to our position on
open documentation (as a style guide is of course also documentation
in its own right). On the other hand, recommending use of a style
guide which is available under a free/libre open source license or
within the public domain resonates with our ideals and principles as a
community, serving only to strengthen our position on openness in all
its endeavors (including documentation).

--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From edmondsw at us.ibm.com  Thu May 17 15:05:56 2018
From: edmondsw at us.ibm.com (William M Edmonds)
Date: Thu, 17 May 2018 11:05:56 -0400
Subject: [openstack-dev] [all][requirements] a plan to stop syncing
	requirements into projects
In-Reply-To: <1526302110-sup-4784@lrrr.local>
References: <1521110096-sup-3634@lrrr.local> <1521662425-sup-1628@lrrr.local>
	<1521749386-sup-1944@lrrr.local> <1522007989-sup-4653@lrrr.local>
	<1526302110-sup-4784@lrrr.local>
Message-ID:

Doug Hellmann wrote on 05/14/2018 08:52:08 AM:
> ... snip ...
>
> We still have about 50 open patches related to adding the
> lower-constraints test job.
> I'll keep those open until the third
> milestone of the Rocky development cycle, and then abandon the rest to
> clear my gerrit view so it is usable again.
>
> If you want to add lower-constraints tests to your project and have
> an open patch in the list [1], please take it over and fix the
> settings then approve the patch (the fix usually involves making
> the values in lower-constraints.txt match the values in the various
> requirements.txt files).
>
> If you don't want the job, please leave a comment on the patch to
> tell me and I will abandon it.
>
> Doug
>
> [1] https://review.openstack.org/#/q/topic:requirements-stop-syncing +status:open

I believe we're stuck for nova-powervm [1] and ceilometer-powervm [2]
until/unless nova and ceilometer, respectively, post releases to pypi.
Is anyone working on that?

Even then, I don't love what we've had to do to get this working for
networking-powervm [3][4], which is what we'd do for nova-powervm and
ceilometer-powervm as well once they're on pypi. When you consider
master, it's a really nasty hack (including a non-master version in
requirements.txt because obviously master can't be on pypi). It's
better than not testing, but if someone has a better idea...

And I'd appreciate -infra reviews on [4] since I have no idea how to
ensure that's doing what it's intended to do.

[1] https://review.openstack.org/#/c/555964/
[2] https://review.openstack.org/#/c/555358/
[3] https://review.openstack.org/#/c/555936/
[4] https://review.openstack.org/#/c/569104/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
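[Editor's note: a minimal illustration of the fix Doug describes, with
made-up version numbers -- the declared minimum in requirements.txt and
the pin in lower-constraints.txt must agree.]

    # requirements.txt
    oslo.config>=5.2.0

    # lower-constraints.txt
    oslo.config==5.2.0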
From janzian at us.ibm.com  Thu May 17 15:21:09 2018
From: janzian at us.ibm.com (James Anziano)
Date: Thu, 17 May 2018 15:21:09 +0000
Subject: [openstack-dev] [neutron] Bug deputy
In-Reply-To: 
References: 
Message-ID:

An HTML attachment was scrubbed...
URL:

From gkotton at vmware.com  Thu May 17 15:35:24 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Thu, 17 May 2018 15:35:24 +0000
Subject: [openstack-dev] [neutron] Bug deputy
In-Reply-To: 
References: 
Message-ID: <025A9AFD-C1B9-412A-A5A6-DF4818333A61@vmware.com>

Thanks!

From: James Anziano
Reply-To: OpenStack List
Date: Thursday, May 17, 2018 at 6:21 PM
To: OpenStack List
Cc: OpenStack List
Subject: Re: [openstack-dev] [neutron] Bug deputy

Hey Gary, my turn is coming up soon (week of June 4th), I can jump the
line a bit and cover you if you or anyone can cover my currently
assigned week.

Thanks,
- James Anziano

----- Original message -----
From: Gary Kotton
To: OpenStack List
Cc:
Subject: [openstack-dev] [neutron] Bug deputy
Date: Thu, May 17, 2018 1:59 AM

Hi,
An urgent matter has come up this week. If possible, can someone please
replace me.
Sorry
Gary
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com  Thu May 17 16:02:02 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 17 May 2018 12:02:02 -0400
Subject: [openstack-dev] [all][api] late addition to forum schedule
Message-ID: <1526572746-sup-4787@lrrr.local>

After some discussion on twitter and IRC, we've added a new session to
the Forum schedule for next week to discuss our options for cleaning up
some of the design/technical debt in our REST APIs. It's early days in
the conversation, but we wanted to take advantage of our time together
in person to brainstorm about how to do something like this. If you're
interested, please plan to attend the session on Wednesday at 4:40 [1].

The session description:

  The introduction of microversions in OpenStack APIs added a mechanism
  to incrementally change APIs without breaking users. We're now at the
  point where people would like to start making old things go away,
  which means we need to hammer out a plan and potentially put it
  forward as a community goal.

[1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup

From skandasw at cisco.com  Thu May 17 16:21:58 2018
From: skandasw at cisco.com (Sridar Kandaswamy (skandasw))
Date: Thu, 17 May 2018 16:21:58 +0000
Subject: [openstack-dev] [neutron] [fwaas] Neutron FWaaS weekly team
	meeting cancelled on May 24.
Message-ID:

Hi All:

With the Summit at Vancouver, we will cancel the FWaaS weekly meeting
for May 24 14:00 UTC. We will resume as usual from May 31.

Thanks

Sridar
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From whayutin at redhat.com  Thu May 17 16:26:20 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Thu, 17 May 2018 10:26:20 -0600
Subject: [openstack-dev] [docs] Automating documentation the tripleo way?
In-Reply-To: <20180517162214.44f797c570f552b371a03a69@redhat.com>
References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com>
	<1526495787-sup-1958@lrrr.local> <1526497268-sup-3619@lrrr.local>
	<20180517162214.44f797c570f552b371a03a69@redhat.com>
Message-ID:

On Thu, May 17, 2018 at 10:22 AM Petr Kovar wrote:

> On Wed, 16 May 2018 13:26:46 -0600
> Wesley Hayutin wrote:
>
> > On Wed, May 16, 2018 at 3:05 PM Doug Hellmann wrote:
> >
> > > Excerpts from Wesley Hayutin's message of 2018-05-16 12:51:25 -0600:
> > > > On Wed, May 16, 2018 at 2:41 PM Doug Hellmann
> > > > wrote:
> > > >
> > > > > Excerpts from Petr Kovar's message of 2018-05-16 17:39:14 +0200:
> > > > > > Hi all,
> > > > > >
> > > > > > In the past few years, we've seen several efforts aimed at automating
> > > > > > procedural documentation, mostly centered around the OpenStack
> > > > > > installation guide. This idea to automatically produce and verify
> > > > > > installation steps or similar procedures was mentioned again at the
> > > last
> > > > > > Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing).
> > > > > > > > > > > > It was brought to my attention that the tripleo team has been > > > working on > > > > > > automating some of the tripleo deployment procedures, using a > Bash > > > script > > > > > > with included comment lines to supply some RST-formatted > narrative, > > > for > > > > > > example: > > > > > > > > > > > > > > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 > > > > > > > > > > > > The Bash script can then be converted to RST, e.g.: > > > > > > > > > > > > > > > > > > > > > https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ > > > > > > > > > > > > Source Code: > > > > > > > > > > > > > > > > > > > > > https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs > > > > > > > > > > > > I really liked this approach and while I don't want to sound like > > > selling > > > > > > other people's work, I'm wondering if there is still an interest > > > among > > > > > the > > > > > > broader OpenStack community in automating documentation like > this? > > > > > > > > > > > > Thanks, > > > > > > pk > > > > > > > > > > > > > > > > Weren't the folks doing the training-labs or training-guides > taking a > > > > > similar approach? IIRC, they ended up implementing what amounted to > > > > > their own installer for OpenStack, and then ended up with all of > the > > > > > associated upgrade and testing burden. > > > > > > > > > > I like the idea of trying to use some automation from this, but I > > > wonder > > > > > if we'd be better off extracting data from other tools, rather than > > > > > building a new one. > > > > > > > > > > Doug > > > > > > > > > > > > > So there really isn't anything new to create, the work is done and > > > executed > > > > on every tripleo change that runs in rdo-cloud. > > > > > > It wasn't clear what Petr was hoping to get. Deploying with TripleO is > > > only one way to deploy, so we wouldn't be able to replace the current > > > installation guides with the results of this work. It sounds like > that's > > > not the goal, though. > > > Yes, I wasn't very clear on the goals as I didn't want to make too many > assumptions before learning about technical details from other people. > Ben's comments made me realize this approach would probably be best suited > for generating documents such as quick start guides or tutorials that are > procedural, yet they don't aim at describing multiple use cases. > > > > > > > > > > Instead of dismissing the idea upfront I'm more inclined to set an > > > > achievable small step to see how well it works. My thought would be > to > > > > focus on the upcoming all-in-one installer and the automated doc > > > generated > > > > with that workflow. I'd like to target publishing the all-in-one > tripleo > > > > installer doc to [1] for Stein and of course a section of > tripleo.org. > > > > > > As an official project, why is TripleO still publishing docs to its own > > > site? That's not something we generally encourage. > > > > > > That said, publishing a new deployment guide based on this technique > > > makes sense in general. What about Ben's comments elsewhere in the > > > thread? > > > > > > > I think Ben is referring to an older implementation and a slightly > > different design but still has some points that we would want to be > mindful > > of. 
I think this is a worthy effort to take another pass at this > > regardless, to be honest, as we've found a good combination of interested > > folks and sometimes the right people make all the difference. > > > > My personal opinion is that I'm not expecting the automated doc generation > > to be ready to upload to a doc server after each run. I do expect it to do > > 95% of the work, and to help keep the doc up to date with what is executed > > in the latest releases of TripleO. > > Would it make sense to consider a bot automatically creating patches > with content updates that would then be curated and reviewed by the docs > contributors? > > > Also noting the doc used is a mixture > > of static and generated documentation which I think worked out quite well > > in order to not solely rely on what is executed in CI. > > > > So again, my thought is to create a small achievable goal and see where the > > collaboration takes us. > > Is a tripleo-focused quick-start deployment guide (that would get > integrated with the existing tripleo content) such a small achievable goal? I think so. I still would like to focus on the all-in-one [1] deployment of TripleO for an initial deployment guide from this effort. Having something that is fast, approachable, well documented and easy to understand is a great combination IMHO. There also will not be that many steps to this kind of deployment. However, having the CI write the various deployment incantations of composable services would save us a lot of time. [1] https://blueprints.launchpad.net/tripleo/+spec/all-in-one > > > Cheers, > pk -------------- next part -------------- An HTML attachment was scrubbed... URL: From msm at redhat.com Thu May 17 16:54:54 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 17 May 2018 12:54:54 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was brief, primarily focused on planning for the summit sessions[7][8] that the SIG will host and facilitate. The first session[7] will be a Birds of a Feather (BoF) gathering where the topics will be determined by the attendees. One topic that will surely make that list is the GraphQL proof of concept for Neutron that has been discussed on the mailing list[9]. The second session[8] will be a directed discussion addressing technical debt in the REST APIs of OpenStack. We're now at the point where people would like to start removing old code. This session will give interested parties details about how they can leverage microversions and the guidelines of the SIG to reduce their debt, drop old functionality, and improve the consistency of their APIs. It will also clarify what it means when we bump the minimum microversion for a service in the future and discuss plans for creating an OpenStack community goal. For both sessions, the SIG has aligned itself towards helping coordinate discussions, clear up misunderstandings, and generally be helpful in ensuring that all voices are heard and cross-cutting concerns are addressed. If you are heading to the summit, we hope to see you there! There being no recent changes to pending guidelines nor to bugs, we ended the meeting early. As always, if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time.
If you find something that's not quite right, submit a patch [6] to fix it.
* Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6].

# Newly Published Guidelines

None

# API Guidelines Proposed for Freeze

Guidelines that are ready for wider review by the whole community.

None

# Guidelines Currently Under Review [3]

* Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/
* Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/
* A (shrinking) suite of several documents about doing version and service discovery. Start at https://review.openstack.org/#/c/459405/
* WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892

# Highlighting your API impacting issues

If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week!

# References

[1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[2] http://specs.openstack.org/openstack/api-wg/
[3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z
[4] https://wiki.openstack.org/wiki/API_SIG
[5] https://bugs.launchpad.net/openstack-api-wg
[6] https://git.openstack.org/cgit/openstack/api-wg
[7] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session
[8] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup
[9] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130219.html

Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From tpb at dyncloud.net Thu May 17 17:02:20 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 17 May 2018 13:02:20 -0400 Subject: [openstack-dev] [manila] no community meeting Thurs 24 May 2018 Message-ID: <20180517170220.aaqukzp4fn3sdjyo@barron.net> There will be no Manila weekly meeting, Thursday May 24, given the Vancouver Summit is going on that week. -- Tom Barron From tpb at dyncloud.net Thu May 17 17:57:24 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 17 May 2018 13:57:24 -0400 Subject: [openstack-dev] [manila] manila operator's feedback forum etherpad available Message-ID: <20180517175724.4asfj3nvu3pj3ru6@barron.net> Next week at the Summit there is a forum session dedicated to Manila operators' feedback on Thursday from 1:50-2:30pm [1] for which we have started an etherpad [2]. Please come and help manila developers do the right thing! We're particularly interested in experiences running the OpenStack share service at scale and overcoming any obstacles to deployment, but are interested in getting any and all feedback from real deployments so that we can tailor our development and maintenance efforts to real-world needs. Please feel free and encouraged to add to the etherpad starting now. See you there!
-- Tom Barron Manila PTL irc: tbarron [1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21780/manila-ops-feedback-running-at-scale-overcoming-barriers-to-deployment [2] https://etherpad.openstack.org/p/YVR18-manila-forum-ops-feedback From gagehugo at gmail.com Thu May 17 19:42:12 2018 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 17 May 2018 14:42:12 -0500 Subject: [openstack-dev] [security sig] No meeting May 24th Message-ID: Hello, Due to members attending the OpenStack summit in Vancouver, we will be canceling the Security SIG meeting on May 24th. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu May 17 20:36:01 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 17 May 2018 15:36:01 -0500 Subject: [openstack-dev] [nova] FYI on changes that might impact out of tree scheduler filters Message-ID: <58e08692-483a-9188-d2ee-e02978ce995c@gmail.com> CERN has upgraded to Cells v2 and is doing performance testing of the scheduler, and was reporting some things today which got us back to this bug [1]. So I've started pushing some patches related to this, but also related to an older blueprint I created [2]. In summary, we do quite a bit of DB work just to load up a list of instance objects per host that the in-tree filters don't even use. The first change [3] is a simple optimization to avoid the default joins on the instance_info_caches and security_groups tables. If you have out of tree filters that, for whatever reason, rely on the HostState.instances objects to have info_cache or security_groups set, they'll continue to work, but will have to round-trip to the DB to lazy-load the fields, which is going to be a performance penalty on that filter. See the change for details. The second change in the series [4] is more drastic in that we'll do away with pulling the full Instance object per host, which means only a select set of optional fields can be lazy-loaded [5], and the rest will result in an exception. The patch currently has a workaround config option to continue doing things the old way if you have out of tree filters that rely on this, but for good citizens with only in-tree filters, you will get a performance improvement during scheduling. There are some other things we can do to optimize more of this flow, but this email is just about the ones that have patches up right now.
[1] https://bugs.launchpad.net/nova/+bug/1737465
[2] https://blueprints.launchpad.net/nova/+spec/put-host-manager-instance-info-on-a-diet
[3] https://review.openstack.org/#/c/569218/
[4] https://review.openstack.org/#/c/569247/
[5] https://github.com/openstack/nova/blob/de52fefa1fd52ccaac6807e5010c5f2a2dcbaab5/nova/objects/instance.py#L66
-- Thanks, Matt From sundar.nadathur at intel.com Thu May 17 20:36:51 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Thu, 17 May 2018 13:36:51 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> Message-ID: <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> Hi all, Thanks for all the feedback. Please see below. 2018-05-17 1:24 GMT+08:00 Jay Pipes: Placement already stores usage information for all allocations of resources. There is already even a /usages API endpoint for which you can specify a project and/or user: https://developer.openstack.org/api-ref/placement/#list-usages I see no reason not to use it.
This does not seem to be per-project (per-tenant). Given a tenant ID and a resource class, we want to get usages of that RC by that tenant. Please LMK if I misunderstood something. As Matt mentioned, Nova does not handle accelerators and presumably would not handle quotas for them either. On 5/16/2018 11:34 PM, Alex Xu wrote: 2018-05-17 1:24 GMT+08:00 Jay Pipes >: [....] There is already actually a spec to use placement for quota usage checks in Nova here: https://review.openstack.org/#/c/509042/ FYI, I'm working on a spec which append to that spec. It's about counting quota for the resource class(GPU, custom RC, etc) other than nova built-in resources(cores, ram). It should be able to count the resource classes which are used by cyborg. But yes, we probably should answer Matt's question first, whether we should let Nova count quota instead of Cyborg. here is the line https://review.openstack.org/#/c/569011/ Alex, is this expected to be implemented by Rocky? > > > Probably best to have a look at that and see if it will end up > meeting your needs. > >   * Cyborg provides a filter for the Nova scheduler, which > checks >     whether the project making the request has exceeded > its own quota. > > > Quota checks happen before Nova's scheduler gets involved, so > having a scheduler filter handle quota usage checking is > pretty much a non-starter. > This applies only to the resources that Nova handles, IIUC, which does not handle accelerators. The generic method that Alex talks about is obviously preferable but, if that is not available in Rocky, is the filter an option? > > > I'll have a look at the patches you've proposed and comment there. > Thanks! > > > Best, > -jay > Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu May 17 21:48:14 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 17 May 2018 17:48:14 -0400 Subject: [openstack-dev] [docs] Automating documentation the tripleo way? In-Reply-To: References: <20180516173914.2fb66aa5a7a9cdaa066324e1@redhat.com> Message-ID: <0364e0a4-4f7b-2aa5-ad98-632f31608225@redhat.com> On 16/05/18 13:11, Ben Nemec wrote: > > > On 05/16/2018 10:39 AM, Petr Kovar wrote: >> Hi all, >> >> In the past few years, we've seen several efforts aimed at automating >> procedural documentation, mostly centered around the OpenStack >> installation guide. This idea to automatically produce and verify >> installation steps or similar procedures was mentioned again at the last >> Summit (https://etherpad.openstack.org/p/SYD-install-guide-testing). >> >> It was brought to my attention that the tripleo team has been working on >> automating some of the tripleo deployment procedures, using a Bash script >> with included comment lines to supply some RST-formatted narrative, for >> example: >> >> https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-images/templates/overcloud-prep-images.sh.j2 >> >> >> The Bash script can then be converted to RST, e.g.: >> >> https://thirdparty.logs.rdoproject.org/jenkins-tripleo-quickstart-queens-rdo_trunk-baremetal-dell_fc430_envB-single_nic_vlans-27/docs/build/ >> >> >> Source Code: >> >> https://github.com/openstack/tripleo-quickstart-extras/tree/master/roles/collect-logs >> >> >> I really liked this approach and while I don't want to sound like selling >> other people's work, I'm wondering if there is still an interest among >> the >> broader OpenStack community in automating documentation like this? 
> > I think it's worth noting that TripleO doesn't even use the generated > docs.  The main reason is that we tried this back in the > tripleo-incubator days and it was not the silver bullet for good docs > that it appears to be on the surface.  As the deployment scripts grow > features and more complicated logic it becomes increasingly difficult to > write inline documentation that is readable.  In the end, the > tripleo-incubator docs had a number of large bash snippets that referred > to internal variables and such.  It wasn't actually good documentation. FWIW in the early days of Heat I had an implementation that did this in the opposite direction: the script was extracted from the (rst) documentation, instead of extracting the documentation from the script. This is the way you need to do it to work around the kinds of concerns you mention. (Bash will try to execute literally anything that isn't a comment; rst makes it much easier to overload the meanings of different constructs.) Basically how it worked was that everything that was indented by 4 spaces in the rst file was extracted into the script - this could be a code block (which of course appeared as a code block in the documentation) or a comment block (which didn't). This enabled you to hide stuff that is boring but necessary to make the script work from the documentation. You could also do actual comments or code blocks that didn't appear in the script (e.g. for giving alternate implementations) by indenting only 2 spaces. The actual extraction was done by this fun sed script: http://git.openstack.org/cgit/openstack/heat/plain/tools/rst2script.sed?id=95e5ed067096ff52bbcd6c49146b74e1d59d2d3f Here's the getting started guide we wrote for Heat using this: http://git.openstack.org/cgit/openstack/heat/plain/docs/GettingStarted.rst?id=c0c1768e4a2b441ef286fb49c60419be3fe80786 In the end we didn't keep it around. I think mostly because we weren't able to actually run the script in the gate at the time (2012), and because after Heat support was added to devstack the getting started guide essentially reduced to 'use devstack' (did I mention it was 2012?). So we didn't gain any long term experience in whether this is a good idea or not, although we did maintain it somewhat successfully for a year. But if you're going to try to do something similar then I'd recommend this method as a starting point. cheers, Zane. > When we moved to instack-undercloud to drive TripleO deployments we also > moved to a more traditional hand-written docs repo.  Both options have > their benefits and drawbacks, but neither absolves the development team > of their responsibility to curate the docs.  IME the inline method > actually makes it harder to do this because it tightly couples your code > and docs in a very inflexible way. 
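(To make the extraction mechanism Zane describes above concrete: a rough, hypothetical Python equivalent of the rst2script.sed idea -- keep lines indented by 4+ spaces as script, drop everything else -- could look like the sketch below. This is for illustration only, not the original tool, and the file names are made up.)

    import sys

    def extract_script(rst_lines):
        # Lines indented by at least 4 spaces are part of the script;
        # everything else (prose, 2-space-indented asides) is doc-only.
        for line in rst_lines:
            if line.startswith('    '):
                yield line[4:]
            elif not line.strip():
                # Keep blank lines so the extracted script stays readable.
                yield '\n'

    if __name__ == '__main__':
        with open(sys.argv[1]) as rst:
            sys.stdout.writelines(extract_script(rst))

(Run as e.g. 'python extract.py GettingStarted.rst > getting_started.sh'. A 4-space-indented comment block in the rst comes out as shell comments in the script, matching the behaviour described above.)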
> > /2 cents > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Thu May 17 22:18:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 17 May 2018 17:18:16 -0500 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> Message-ID: On 5/17/2018 3:36 PM, Nadathur, Sundar wrote: > This applies only to the resources that Nova handles, IIUC, which does > not handle accelerators. The generic method that Alex talks about is > obviously preferable but, if that is not available in Rocky, is the > filter an option? If nova isn't creating accelerator resources managed by cyborg, I have no idea why nova would be doing quota checks on those types of resources. And no, I don't think adding a scheduler filter to nova for checking accelerator quota is something we'd add either. I'm not sure that would even make sense - the quota for the resource is per tenant, not per host, is it? The scheduler filters work on a per-host basis. Like any other resource in openstack, the project that manages that resource should be in charge of enforcing quota limits for it. -- Thanks, Matt From mriedemos at gmail.com Thu May 17 22:23:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 17 May 2018 17:23:09 -0500 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: <1526572746-sup-4787@lrrr.local> References: <1526572746-sup-4787@lrrr.local> Message-ID: <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> On 5/17/2018 11:02 AM, Doug Hellmann wrote: > After some discussion on twitter and IRC, we've added a new session to > the Forum schedule for next week to discuss our options for cleaning up > some of the design/technical debt in our REST APIs. Not to troll too hard here, but it's kind of frustrating to see that twitter trumps people actually proposing sessions on time and then having them be rejected. > The session description: > > The introduction of microversions in OpenStack APIs added a > mechanism to incrementally change APIs without breaking users. > We're now at the point where people would like to start making > old things go away, which means we need to hammer out a plan and > potentially put it forward as a community goal. > > [1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup This also came up at the Pike PTG in Atlanta: https://etherpad.openstack.org/p/ptg-architecture-workgroup See the "raising the minimum microversion" section. The TODO was Ironic was going to go off and do this and see how much people freaked out. What's changed since then besides that not happening? Since I'm not on twitter, I don't know what new thing prompted this. -- Thanks, Matt From ekuvaja at redhat.com Thu May 17 22:58:39 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Thu, 17 May 2018 23:58:39 +0100 Subject: [openstack-dev] [Glance] No team meeting during the summit week Message-ID: As the majority of the team is in Vancouver for the summit, we will cancel next week's meeting (24th of May). The Glance team will have its next meeting in IRC on Thu 31st.
Thanks, Erno "jokke" Kuvaja From mriedemos at gmail.com Thu May 17 23:47:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 17 May 2018 18:47:06 -0500 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> Message-ID: On 5/17/2018 5:23 PM, Matt Riedemann wrote: > Not to troll too hard here, but it's kind of frustrating to see that > twitter trumps people actually proposing sessions on time and then > having them be rejected. I reckon this is because there was already a pre-defined set of slots / rooms for Forum sessions and we had fewer sessions proposed than reserved slots, and that's why adding something in later is not a major issue? -- Thanks, Matt From luo.lujin at jp.fujitsu.com Fri May 18 00:02:00 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Fri, 18 May 2018 00:02:00 +0000 Subject: [openstack-dev] [Neutron] [Upgrades] Cancel next IRC meeting (May 24th) Message-ID: Hi, We are canceling our next Neutron Upgrades subteam meeting on May 24th, due to the summit. We will resume on May 31st. Thanks, Lujin From fungi at yuggoth.org Fri May 18 00:03:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 18 May 2018 00:03:35 +0000 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> Message-ID: <20180518000334.h5vsul7ajab4tapj@yuggoth.org> On 2018-05-17 18:47:06 -0500 (-0500), Matt Riedemann wrote: > On 5/17/2018 5:23 PM, Matt Riedemann wrote: > > Not to troll too hard here, but it's kind of frustrating to see that > > twitter trumps people actually proposing sessions on time and then > > having them be rejected. > > I reckon this is because there was already a pre-defined set of slots / > rooms for Forum sessions and we had fewer sessions proposed than reserved > slots, and that's why adding something in later is not a major issue? Yes, as I understand it we still have some overflow space too if planned forum sessions need continuing. Session leaders have hopefully received details from the event planners on how to reserve additional space in such situations. As far as I'm aware no proposed Forum sessions were rejected this time around, and there was some discussion among members of the TC (in #openstack-tc[*]) before it was agreed there was room to squeeze this particular latecomer into the lineup. [*] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rochelle.grober at huawei.com Fri May 18 00:55:22 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 18 May 2018 00:55:22 +0000 Subject: [openstack-dev] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion Message-ID: Folks, TL;DR The last session related to extended releases is: OpenStack is "mature" -- time to get serious on Maintainers It will be in room 220 at 11:00-11:40 The etherpad for the last session in the series on Extended releases is here: https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3 There are links to info on other communities' maintainer process/role/responsibilities also, as reference material on how others have made it work (or not). The nitty gritty details: The upcoming Forum is filled with sessions that are focused on issues needed to improve and maintain the sustainability of OpenStack projects for the long term. We have discussions on reducing technical debt, extended releases, fast forward installs, bringing Ops and User communities closer together, etc. The community is showing it is now invested in activities that are often part of "Sustaining Engineering" teams (corporate speak) or "Maintainers" (OSS speak). We are doing this; we are thinking about the moving parts to do this; let's think about the contributors who want to do this work and bring some clarity to their roles and the processes they need to be successful. I am hoping you read this and keep these ideas in mind as you participate in the various Forum sessions. Then you can bring the ideas generated during all these discussions to the Maintainers session near the end of the Summit to brainstorm how to visualize and define this new(ish) component of our technical community. So, who has been doing the maintenance work so far? Mostly unsung heroes like the Stable Release team, Release team, Oslo team, project liaisons and the community goals champions (yes, moving to py3 is a sustaining/maintenance type of activity). And some operators (Hi, mnaser!). We need to lean on their experience and what we think the community will need to reduce that technical debt to outline what the common tasks of maintainers should be, what else might fall in their purview, and how to partner with them to better serve them. With API lower limits, new tool versions, placement, py3, and even projects reaching "code complete" or "maintenance mode," there is a lot of work for maintainers to do (I really don't like that term, but is there one that fits OpenStack's community?). It would be great if we could find a way to share the load such that we can have part-time contributors here. We know that operators know how to cherrypick, test in their clouds, do bug fixes. How do we pair with them to get fixes upstreamed without requiring them to be full-on developers? We have a bunch of alumni who have stopped being "cores" and sometimes even developers, but who love our community and might be willing and able to put in a few hours a week, maybe reviewing small patches, providing help with user/ops submitted patch requests, or whatever. They were trusted with +2 and +W in the past, so we should at least be able to trust they know what they know. We would need some way to identify them to Cores, since they would be sort of 1.5 on the voting scale, but… So, burnout is high in other communities for maintainers.
We need to find a way to make sustaining the stable parts of OpenStack sustainable. Hope you can make the talk, or add to the etherpad, or both. The etherpad is very much still a work in progress (trying to organize it to make sense). If you want to jump in now, go for it; otherwise it should be in reasonable shape for use at the session. I hope we get a good mix of community and a good collection of those who are already doing the job without the title. Thanks and see you next week. --rocky ________________________________ Huawei Technologies Co., Ltd. Rochelle Grober Sr. Staff Architect, Open Source Office Phone: 408-330-5472 Email: rochelle.grober at huawei.com ________________________________ This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5474 bytes Desc: image001.png URL: From rochelle.grober at huawei.com Fri May 18 01:45:13 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Fri, 18 May 2018 01:45:13 +0000 Subject: [openstack-dev] [tc] [all] TC Report 18-20 In-Reply-To: References: <329e01d3-e5f4-9d06-ec74-d503475e1af9@ham.ie> <55D81F7C-CEEA-49D7-9B31-30768C8A8BAA@cern.ch> <1190674e-d033-4bba-8c81-ec63eb40b672@ham.ie> Message-ID: Thierry Carrez [mailto:thierry at openstack.org] > > Graham Hayes wrote: > > Any additional background on why we allowed LCOO to operate like this > > would help a lot. The group was started back when OPNFV was first getting involved with OpenStack. Many of the members came from that community. They had a "vision" that the members would have to commit to provide developers to address the feature gaps the group was concerned with. There was some interaction between them and the Product WG, and I at least attempted to get them to meet and talk with the Large Deployment Team(?) (an ops group that met at the Ops midcycles and discussed their issues, workarounds, gaps, etc.) Are they still active? Is anyone aware of any docs/code/bugfixes/features that came out of the group? --Rocky > We can't prevent any group of organizations to work in any way they prefer -- we can, however, deny them the right to be called an OpenStack > workgroup if they fail at openly collaborating. We can raise the topic, but in > the end it is a User Committee decision though, since the LCOO is a User > Committee-blessed working group.
> > Source: https://governance.openstack.org/uc/ > > -- > Thierry Carrez (ttx) From openstack at roodsari.us Fri May 18 02:38:32 2018 From: openstack at roodsari.us (rezroo) Date: Thu, 17 May 2018 19:38:32 -0700 Subject: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack Message-ID: <0b16ed3e-a456-a704-fb7b-ffb403616cbe@roodsari.us> Hello - I'm trying to install a working local.conf devstack ocata on a new server, and some python packages have changed so I end up with this error during the build of octavia image: 2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8 2018-05-18 01:00:26.280 |     Uninstalling Jinja2-2.8: 2018-05-18 01:00:26.280 |       Successfully uninstalled Jinja2-2.8 2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11 2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. 2018-05-18 02:05:44.768 | Unmount /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives 2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip 2018-05-18 02:05:44.820 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d 2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache 2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys 2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc 2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts 2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev 2018-05-18 02:05:50.668 | +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1 exit_trap 2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494         local r=1 2018-05-18 02:05:50.690 | ++./devstack/stack.sh:exit_trap:495         jobs -p 2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495         jobs= 2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498         [[ -n '' ]] 2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504         kill_spinner 2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390      '[' '!' -z '' ']' 2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506         [[ 1 -ne 0 ]] 2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507         echo 'Error on exit' 2018-05-18 02:05:50.751 | Error on exit 2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508         generate-subunit 1526608058 1092 fail 2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509         [[ -z /tmp ]] 2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512 /home/stack/devstack/tools/worlddump.py -d /tmp I've tried pip uninstalling PyYAML and pip installing it before running stack.sh, but the error comes back. $ sudo pip uninstall PyYAML The directory '/home/stack/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag. Uninstalling PyYAML-3.12: /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt   /usr/local/lib/python2.7/dist-packages/_yaml.so Proceed (y/n)? 
y   Successfully uninstalled PyYAML-3.12 I've posted my question to the pip folks and they think it's an openstack issue: https://github.com/pypa/pip/issues/4805 Is there a workaround here? -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Fri May 18 08:06:23 2018 From: ykarel at redhat.com (Yatin Karel) Date: Fri, 18 May 2018 13:36:23 +0530 Subject: [openstack-dev] [puppet] [magnum] Magnum tempest fails with 400 bad request In-Reply-To: References: <380be2f6db2d427190de9fd0e3d3992d@mb01.staff.ognet.se> Message-ID: Hi Tobias, Thanks for looking into it. Currently the issue I see is that the magnum configuration[1] is wrong: auth_uri=http://localhost:5000 should be https and v3 versioned, as per the scenario003 deployment configuration. Magnum relies on the auth_uri param, and it must be versioned ("v3"), like below: auth_uri=https://[::1]:5000/v3 After fixing this config, the current issue should be solved. Also, I think there is more work required to fix it completely, but let's clear the current issue first. Also, it would be good to try our Atomic 27 image (the current one is too old): tempest::magnum::image_source https://download.fedoraproject.org/pub/alt/atomic/stable/Fedora-Atomic-27-20180212.2/CloudImages/x86_64/images/Fedora-Atomic-27-20180212.2.x86_64.qcow2 Some other things that would be required are below: The cluster vm magnum creates should be able to connect to openstack services and to the internet. Also, settings would be required to work with SSL-enabled services: either TLS_DISABLED, or setting up verify_ca and cert configuration in magnum.conf. [1] http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/etc/magnum/magnum.conf.txt.gz Thanks and Regards Yatin Karel On Thu, May 17, 2018 at 5:37 PM, Thomas Goirand wrote: > On 05/17/2018 09:49 AM, Tobias Urdin wrote: >> Hello, >> >> I was interested in getting Magnum working in gate by getting @dms patch >> fixed and merged [1]. >> >> The installation goes fine on Ubuntu and CentOS, however the tempest >> testing for Magnum fails on CentOS (it is not available in Ubuntu). >> >> >> It seems to be related to authentication against keystone but I don't >> understand why, please see logs [2] [3] >> >> >> [1] https://review.openstack.org/#/c/367012/ >> >> [2] >> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/logs/magnum/magnum-api.txt.gz#_2018-05-16_15_10_36_010 >> >> [3] >> http://logs.openstack.org/12/367012/28/check/puppet-openstack-integration-4-scenario003-tempest-centos-7/3f5252b/
> > Response - Headers: {'status': '404', u'content-length': '113', > 'content-location': 'https://[::1]:8774/v2.1/os-keypairs/default', > u'x-compute-request-id': 'req-35ae4651-186c-4f20-9143-f68f67b7d401', > u'vary': 'OpenStack-API-Version,X-OpenStack-Nova-API-Version', > u'server': 'Apache/2.4.6 (CentOS)', u'openstack-api-version': 'compute > 2.1', u'connection': 'close', u'x-openstack-nova-api-version': '2.1', > u'date': 'Wed, 16 May 2018 15:10:33 GMT', u'content-type': > 'application/json; charset=UTF-8', u'x-openstack-request-id': > 'req-35ae4651-186c-4f20-9143-f68f67b7d401'} > > but that seems fine because the request right after is working, however > just right after, you're getting a 500 error on magnum-api a bit further: > > Response - Headers: {'status': '500', u'content-length': '149', > 'content-location': 'https://[::1]:9511/clustertemplates', > u'openstack-api-maximum-version': 'container-infra 1.6', u'vary': > 'OpenStack-API-Version', u'openstack-api-minimum-version': > 'container-infra 1.1', u'server': 'Werkzeug/0.11.6 Python/2.7.5', > u'openstack-api-version': 'container-infra 1.1', u'date': 'Wed, 16 May > 2018 15:10:36 GMT', u'content-type': 'application/json', > u'x-openstack-request-id': 'req-12c635c9-889a-48b4-91d4-ded51220ad64'} > > With this body: > > Body: {"errors": [{"status": 500, "code": "server", "links": [], > "title": "Bad Request (HTTP 400)", "detail": "Bad Request (HTTP 400)", > "request_id": ""}]} > 2018-05-16 15:24:14.434432 | centos-7 | 2018-05-16 15:10:36,016 > 13619 DEBUG [tempest.lib.common.dynamic_creds] Clearing network: > {u'provider:physical_network': None, u'ipv6_address_scope': None, > u'revision_number': 2, u'port_security_enabled': True, u'mtu': 1400, > u'id': u'c26c237a-0583-4f72-8300-f87051080be7', u'router:external': > False, u'availability_zone_hints': [], u'availability_zones': [], > u'provider:segmentation_id': 35, u'ipv4_address_scope': None, u'shared': > False, u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'status': > u'ACTIVE', u'subnets': [], u'description': u'', u'tags': [], > u'updated_at': u'2018-05-16T15:10:26Z', u'is_default': False, > u'qos_policy_id': None, u'name': u'tempest-setUp-2113966350-network', > u'admin_state_up': True, u'tenant_id': > u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': > u'2018-05-16T15:10:26Z', u'provider:network_type': u'vxlan'}, subnet: > {u'service_types': [], u'description': u'', u'enable_dhcp': True, > u'tags': [], u'network_id': u'c26c237a-0583-4f72-8300-f87051080be7', > u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': > u'2018-05-16T15:10:26Z', u'dns_nameservers': [], u'updated_at': > u'2018-05-16T15:10:26Z', u'ipv6_ra_mode': None, u'allocation_pools': > [{u'start': u'10.100.0.2', u'end': u'10.100.0.14'}], u'gateway_ip': > u'10.100.0.1', u'revision_number': 0, u'ipv6_address_mode': None, > u'ip_version': 4, u'host_routes': [], u'cidr': u'10.100.0.0/28', > u'project_id': u'31c5c1fbc46e4880b7e498e493700a50', u'id': > u'a7233852-e3f1-4129-b34e-c607aef5172e', u'subnetpool_id': None, > u'name': u'tempest-setUp-2113966350-subnet'}, router: {u'status': > u'ACTIVE', u'external_gateway_info': {u'network_id': > u'c6cf6d80-fcbb-46e6-aefd-17f41b5c57b1', u'enable_snat': True, > u'external_fixed_ips': [{u'subnet_id': > u'34e589e9-86d2-4f72-a0c3-7990406561b1', u'ip_address': > u'172.24.5.13'}]}, u'availability_zone_hints': [], > u'availability_zones': [], u'description': u'', u'tags': [], > u'tenant_id': u'31c5c1fbc46e4880b7e498e493700a50', u'created_at': > u'2018-05-16T15:10:27Z', 
u'admin_state_up': True, u'distributed': False, > u'updated_at': u'2018-05-16T15:10:29Z', u'ha': False, u'flavor_id': > None, u'revision_number': 2, u'routes': [], u'project_id': > u'31c5c1fbc46e4880b7e498e493700a50', u'id': > u'bdf13d72-c19c-4ad1-b57d-ed6da9c569b3', u'name': > u'tempest-setUp-2113966350-router'} > > And right after that, we can only see clean-up calls (removing routers, > DELETE calls, etc.). > > Looking at the magnum-api log shows issues in glanceclient just right > before the 500 error. > > So, something's probably going on there, with a bad glanceclient > request. Having a look into magnum.conf doesn't show anything suspicious > concerning [glance_client] though, so I went to look into tempest.conf. > And there, it shows no [magnum] section, and I believe that's the issue. > Your tempest package/whatever hasn't been built with the magnum plugin, > and there's nothing configured for magnum like: [magnum]/image_id and > such. Maybe that still works though, because of default values? > > I wasn't able to completely figure it out, so I hope this helps... Did > you try to debug this in a VM? > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From dtantsur at redhat.com Fri May 18 09:38:56 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 18 May 2018 11:38:56 +0200 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> Message-ID: On 05/18/2018 12:23 AM, Matt Riedemann wrote: > On 5/17/2018 11:02 AM, Doug Hellmann wrote: >> After some discussion on twitter and IRC, we've added a new session to >> the Forum schedule for next week to discuss our options for cleaning up >> some of the design/technical debt in our REST APIs. > > Not to troll too hard here, but it's kind of frustrating to see that twitter > trumps people actually proposing sessions on time and then having them be rejected. > >> The session description: >> >>    The introduction of microversions in OpenStack APIs added a >>    mechanism to incrementally change APIs without breaking users. >>    We're now at the point where people would like to start making >>    old things go away, which means we need to hammer out a plan and >>    potentially put it forward as a community goal. >> >> [1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup >> > > This also came up at the Pike PTG in Atlanta: > > https://etherpad.openstack.org/p/ptg-architecture-workgroup > > See the "raising the minimum microversion" section. The TODO was Ironic was > going to go off and do this and see how much people freaked out. What's changed > since then besides that not happening? Since I'm not on twitter, I don't know > what new thing prompted this. > Jim was driving this effort, then he left and it went into limbo. I'm not sure we're still interested in doing that, given the overall backlog. 
From amoralej at redhat.com Fri May 18 11:24:16 2018 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Fri, 18 May 2018 13:24:16 +0200 Subject: [openstack-dev] [tripleo][rdo] Fwd: Status of activities related to python3 PoC in RDO Message-ID: FYI ---------- Forwarded message ---------- From: Alfredo Moralejo Alonso Date: Fri, May 18, 2018 at 1:02 PM Subject: Status of activities related to python3 PoC in RDO To: dev at lists.rdoproject.org, users at lists.rdoproject.org Hi, One of the goals for RDO during this cycle is to carry out a PoC of python3 packaging using Fedora 28 as base OS. I'd like to give an update on the current status of the tasks related to this goal so that all involved teams can take the required actions:
1. An initial stabilized Fedora repo is available and ready to be used:
- The repo configuration is in https://trunk.rdoproject.org/fedora/dlrn-deps.repo
- It contains only a subset of the packages in the Fedora 28 repo. If more packages are required, they can be added by sending a review to the fedora-stable-config repo, as in https://review.rdoproject.org/r/#/c/13744/
- We are still implementing some periodic updates on that repo.
2. A DLRN builder has been created using the fedora-stable repo in https://trunk.rdoproject.org/fedora . Note that only packages with python3 subpackages are being built on it. We will keep adding new packages as specs are ready.
3. A new image and node type rdo-fedora-stable have been created in review.rdoproject.org and are ready to be used in jobs as needed.
Please let us know, using this mailing list or the #rdo channel on freenode, if you need further help with regard to this topic. Best regards, Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Fri May 18 11:58:17 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 18 May 2018 04:58:17 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> Message-ID: <0602af19-987b-e200-d49d-754bec4c0556@intel.com> Hi Matt, On 5/17/2018 3:18 PM, Matt Riedemann wrote: > On 5/17/2018 3:36 PM, Nadathur, Sundar wrote: >> This applies only to the resources that Nova handles, IIUC, which does >> not handle accelerators. The generic method that Alex talks about is >> obviously preferable but, if that is not available in Rocky, is the >> filter an option? > > If nova isn't creating accelerator resources managed by cyborg, I have > no idea why nova would be doing quota checks on those types of > resources. And no, I don't think adding a scheduler filter to nova for > checking accelerator quota is something we'd add either. I'm not sure > that would even make sense - the quota for the resource is per tenant, > not per host is it? The scheduler filters work on a per-host basis. Can we not extend BaseFilter.filter_all() to get all the hosts in a filter? https://github.com/openstack/nova/blob/master/nova/filters.py#L36 I should have made it clearer that this putative filter will be out-of-tree, and needed only until better solutions become available. > Like any other resource in openstack, the project that manages that > resource should be in charge of enforcing quota limits for it. Agreed. Not sure how other projects handle it, but here's the situation for Cyborg. A request may get scheduled on a compute node with no intervention by Cyborg.
So, the earliest check that can be made today is in the selected compute node. A simple approach can result in quota violations as in this example. Say there are 5 devices in a cluster. A tenant has a quota of 4 and is currently using 3. That leaves 2 unused devices, of which the tenant is permitted to use only one. But he may submit two concurrent requests, and they may land on two different compute nodes. The Cyborg agent in each node will see the current tenant usage as 3 and let the request go through, resulting in quota violation. To prevent this, we need some kind of atomic update , like SQLAlchemy's with_lockmode(): https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE That seems to have issues, as documented in the link above. Also, since every compute node does that, it would also serialize the bringup of all instances with accelerators, across the cluster. If there is a better solution, I'll be happy to hear it. Thanks, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Fri May 18 12:06:51 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Fri, 18 May 2018 14:06:51 +0200 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <0602af19-987b-e200-d49d-754bec4c0556@intel.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> Message-ID: Le ven. 18 mai 2018 à 13:59, Nadathur, Sundar a écrit : > Hi Matt, > On 5/17/2018 3:18 PM, Matt Riedemann wrote: > > On 5/17/2018 3:36 PM, Nadathur, Sundar wrote: > > This applies only to the resources that Nova handles, IIUC, which does not > handle accelerators. The generic method that Alex talks about is obviously > preferable but, if that is not available in Rocky, is the filter an option? > > > If nova isn't creating accelerator resources managed by cyborg, I have no > idea why nova would be doing quota checks on those types of resources. And > no, I don't think adding a scheduler filter to nova for checking > accelerator quota is something we'd add either. I'm not sure that would > even make sense - the quota for the resource is per tenant, not per host is > it? The scheduler filters work on a per-host basis. > > Can we not extend BaseFilter.filter_all() to get all the hosts in a > filter? > > https://github.com/openstack/nova/blob/master/nova/filters.py#L36 > > I should have made it clearer that this putative filter will be > out-of-tree, and needed only till better solutions become available. > No, there are two clear parameters for a filter, and changing that would mean a new paradigm for FilterScheduler. If you need to have a check for all the hosts, maybe it should be either a pre-filter for Placement or a post-filter but we don't accept out of tree yet. > Like any other resource in openstack, the project that manages that > resource should be in charge of enforcing quota limits for it. > > Agreed. Not sure how other projects handle it, but here's the situation > for Cyborg. A request may get scheduled on a compute node with no > intervention by Cyborg. So, the earliest check that can be made today is in > the selected compute node. A simple approach can result in quota violations > as in this example. > > Say there are 5 devices in a cluster. A tenant has a quota of 4 and is > currently using 3. 
That leaves 2 unused devices, of which the tenant is > permitted to use only one. But he may submit two concurrent requests, and > they may land on two different compute nodes. The Cyborg agent in each node > will see the current tenant usage as 3 and let the request go through, > resulting in quota violation. > > To prevent this, we need some kind of atomic update , like SQLAlchemy's > with_lockmode(): > > https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE > That seems to have issues, as documented in the link above. Also, since > every compute node does that, it would also serialize the bringup of all > instances with accelerators, across the cluster. > > If there is a better solution, I'll be happy to hear it. > > Thanks, > Sundar > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri May 18 12:07:05 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 18 May 2018 08:07:05 -0400 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> Message-ID: On Fri, May 18, 2018 at 5:38 AM, Dmitry Tantsur wrote: > On 05/18/2018 12:23 AM, Matt Riedemann wrote: > >> On 5/17/2018 11:02 AM, Doug Hellmann wrote: >> >>> After some discussion on twitter and IRC, we've added a new session to >>> the Forum schedule for next week to discuss our options for cleaning up >>> some of the design/technical debt in our REST APIs. >>> >> >> Not to troll too hard here, but it's kind of frustrating to see that >> twitter trumps people actually proposing sessions on time and then having >> them be rejected. >> >> The session description: >>> >>> The introduction of microversions in OpenStack APIs added a >>> mechanism to incrementally change APIs without breaking users. >>> We're now at the point where people would like to start making >>> old things go away, which means we need to hammer out a plan and >>> potentially put it forward as a community goal. >>> >>> [1]https://www.openstack.org/summit/vancouver-2018/summit-sc >>> hedule/events/21881/api-debt-cleanup >>> >> >> This also came up at the Pike PTG in Atlanta: >> >> https://etherpad.openstack.org/p/ptg-architecture-workgroup >> >> See the "raising the minimum microversion" section. The TODO was Ironic >> was going to go off and do this and see how much people freaked out. What's >> changed since then besides that not happening? Since I'm not on twitter, I >> don't know what new thing prompted this. >> >> > Jim was driving this effort, then he left and it went into limbo. I'm not > sure we're still interested in doing that, given the overall backlog. Well, I'm still interested in doing this, but don't really have the time :( // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sundar.nadathur at intel.com Fri May 18 12:25:32 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Fri, 18 May 2018 05:25:32 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> Message-ID: <895dc16f-0ca2-1b3a-509a-53e71137eb90@intel.com> On 5/18/2018 5:06 AM, Sylvain Bauza wrote: > > > Le ven. 18 mai 2018 à 13:59, Nadathur, Sundar > > a écrit : > > Hi Matt, > > On 5/17/2018 3:18 PM, Matt Riedemann wrote: >> On 5/17/2018 3:36 PM, Nadathur, Sundar wrote: >>> This applies only to the resources that Nova handles, IIUC, >>> which does not handle accelerators. The generic method that Alex >>> talks about is obviously preferable but, if that is not >>> available in Rocky, is the filter an option? >> >> If nova isn't creating accelerator resources managed by cyborg, I >> have no idea why nova would be doing quota checks on those types >> of resources. And no, I don't think adding a scheduler filter to >> nova for checking accelerator quota is something we'd add either. >> I'm not sure that would even make sense - the quota for the >> resource is per tenant, not per host is it? The scheduler filters >> work on a per-host basis. > Can we not extend BaseFilter.filter_all() to get all the hosts in > a filter? > https://github.com/openstack/nova/blob/master/nova/filters.py#L36 > > I should have made it clearer that this putative filter will be > out-of-tree, and needed only till better solutions become available. > > > No, there are two clear parameters for a filter, and changing that > would mean a new paradigm for FilterScheduler. > If you need to have a check for all the hosts, maybe it should be > either a pre-filter for Placement or a post-filter but we don't accept > out of tree yet. Thanks, Sylvain. So, the filter approach got filtered out. Matt had mentioned that Cinder volume quotas are not checked by Nova either, citing:      https://bugs.launchpad.net/nova/+bug/1742102 That includes this comment:     https://bugs.launchpad.net/nova/+bug/1742102/comments/4 I'll check how Cinder does it today. Thanks to all for your valuable input. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri May 18 13:01:37 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 18 May 2018 15:01:37 +0200 Subject: [openstack-dev] [all] Eventlet + SSL + Python 3 = broken monkey patching leading to completely broken glance-api Message-ID: <713523d5-e9f6-24b4-b804-0fe1d39e339a@debian.org> Hi, It took me nearly a week to figure this out, as I'm not really an expert in Eventlet, OpenSSL and all, but now I've pin-pointed a big problem. My tests were around Glance, which I was trying to run over SSL and Eventlet, though it seems a general issue with SSL + Python 3. In the normal setup, when I do: openstack image list then I get: Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error')) (more detailed stack dump at the end of this message [1]) Though, with Eventlet 0.20.0, if in /usr/lib/python3/dist-packages/eventlet/green/ssl.py line 352, I comment out set_nonblocking(newsock) in the accept() function of the GreenSSLSocket, then everything works. 
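(For reference, a minimal standalone reproducer for this, assuming a self-signed cert/key pair -- the file names below are placeholders -- would be something along these lines. The failure happens in the TLS handshake during accept(), before the WSGI app is ever called:)

    import eventlet
    eventlet.monkey_patch()

    from eventlet import wsgi


    def app(environ, start_response):
        # Never reached when the handshake fails with OSError: [Errno 0]
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'hello\n']

    listener = eventlet.wrap_ssl(eventlet.listen(('127.0.0.1', 9292)),
                                 certfile='cert.pem',  # placeholder paths
                                 keyfile='key.pem',
                                 server_side=True)
    wsgi.server(listener, app)

(Then, from another terminal, 'curl -k https://127.0.0.1:9292/' should show whether the handshake completes.)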
Note that:
- This also happens with the latest Eventlet 0.23.0
- There's no problem without SSL
- There's no commit on top of 0.23.0 relevant to the issue
The issue was reported 2 years ago: https://github.com/eventlet/eventlet/issues/308 It's marked with "importance-bug" and "need-contributor", but nobody did anything about it. I also tried running with libapache2-mod-wsgi-py3, but then I'm hitting another bug: https://bugs.launchpad.net/glance/+bug/1518431 What's going on is that glanceclient spits out a 411 error complaining about content length. That issue is seen *only* when using Apache and mod_wsgi. So, I'm left with no solution here: Glance never works over SSL and Python 3. Something's really wrong and should be fixed. Please help! This also pinpoints something: our CI is *not* covering the SSL case, or mod_wsgi, when really, it should. We should have tests with:
- mod_wsgi
- eventlet
- uwsgi
and all of the above with and without SSL, plus Python 2 and 3, plus with the file or swift backend. That's 24 possible problem combinations, which we should IMO all cover. We don't need to run all tests, but maybe just make sure that at least the daemon works, which isn't the case at the moment for most of these use cases. The only setups that work are:
- eventlet with or without SSL, using Python 2
- eventlet without SSL with Python 3
- apache with or without SSL without the swift backend
As far as I understand, we're only testing with eventlet with Python 2 and 3, without SSL, with the file backend. That's 2 setups out of 24... Can someone work on fixing this? Cheers, Thomas Goirand (zigo)
[1] Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error'))
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
    self._validate_conn(conn)
  File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
    conn.connect()
  File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 326, in connect
    ssl_context=context)
  File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 329, in ssl_wrap_socket
    return context.wrap_socket(sock, server_hostname=server_hostname)
  File "/usr/lib/python3.5/ssl.py", line 385, in wrap_socket
    _context=self)
  File "/usr/lib/python3.5/ssl.py", line 760, in __init__
    self.do_handshake()
  File "/usr/lib/python3.5/ssl.py", line 996, in do_handshake
    self._sslobj.do_handshake()
If you're interested in OpenStack Upgrades, the BoF and Erik's sessions on Fast Forward Upgrades (see [2]) should be on your schedule for next week!

Cheers

James

[0] http://eavesdrop.openstack.org/meetings/upgrade_sig/2018/upgrade_sig.2018-05-15-09.06.html
[1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21855/upgrade-sig-bof
[2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=upgrades

-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From doug at doughellmann.com Fri May 18 13:38:26 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 18 May 2018 09:38:26 -0400
Subject: [openstack-dev] [all][api] late addition to forum schedule
In-Reply-To: <20180518000334.h5vsul7ajab4tapj@yuggoth.org>
References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> <20180518000334.h5vsul7ajab4tapj@yuggoth.org>
Message-ID: <1526650552-sup-7686@lrrr.local>

Excerpts from Jeremy Stanley's message of 2018-05-18 00:03:35 +0000:
> On 2018-05-17 18:47:06 -0500 (-0500), Matt Riedemann wrote:
> > On 5/17/2018 5:23 PM, Matt Riedemann wrote:
> > > Not to troll too hard here, but it's kind of frustrating to see that
> > > twitter trumps people actually proposing sessions on time and then
> > > having them be rejected.
> >
> > I reckon this is because there were already a pre-defined set of slots /
> > rooms for Forum sessions and we had fewer sessions proposed than reserved
> > slots, and that's why adding something in later is not a major issue?
>
> Yes, as I understand it we still have some overflow space too if
> planned forum sessions need continuing. Session leaders have
> hopefully received details from the event planners on how to reserve
> additional space in such situations. As far as I'm aware no proposed
> Forum sessions were rejected this time around, and there was some
> discussion among members of the TC (in #openstack-tc[*]) before it
> was agreed there was room to squeeze this particular latecomer into
> the lineup.
> > [*] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-14.log.html#t2018-05-14T17:27:05 Yes, that's right. I do remember that we've had sessions rejected in the past (for space considerations or to avoid overbalancing the schedule with too many sessions on a given topic), but it feels like it has been quite a while since that happened. Maybe I'm wrong? Has that been a persistent problem? Doug From doug at doughellmann.com Fri May 18 13:45:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 18 May 2018 09:45:34 -0400 Subject: [openstack-dev] [all][api] late addition to forum schedule In-Reply-To: <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> References: <1526572746-sup-4787@lrrr.local> <400482cf-c7d2-39bb-7718-e09949a8d025@gmail.com> Message-ID: <1526650728-sup-5156@lrrr.local> Excerpts from Matt Riedemann's message of 2018-05-17 17:23:09 -0500: > On 5/17/2018 11:02 AM, Doug Hellmann wrote: > > After some discussion on twitter and IRC, we've added a new session to > > the Forum schedule for next week to discuss our options for cleaning up > > some of the design/technical debt in our REST APIs. > > Not to troll too hard here, but it's kind of frustrating to see that > twitter trumps people actually proposing sessions on time and then > having them be rejected. > > > The session description: > > > > The introduction of microversions in OpenStack APIs added a > > mechanism to incrementally change APIs without breaking users. > > We're now at the point where people would like to start making > > old things go away, which means we need to hammer out a plan and > > potentially put it forward as a community goal. > > > > [1]https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup > > This also came up at the Pike PTG in Atlanta: > > https://etherpad.openstack.org/p/ptg-architecture-workgroup > > See the "raising the minimum microversion" section. The TODO was Ironic > was going to go off and do this and see how much people freaked out. > What's changed since then besides that not happening? Since I'm not on > twitter, I don't know what new thing prompted this. > What changed is that we thought doing it as a coordinated effort, rather than one team, would work better, because we wouldn't have a team appearing to be an outlier in terms of their API support "guarantees". We also wanted to start the planning early, so that teams could talk about it at the PTG and make more detailed plans for the changes over the course of Stein, to be implemented in the next cycle (assuming we all decide that's the right timing). The only aspect of this that's settled today is that we want to talk about it. Each team will still need to consider whether, and how, to do it. Doug From openstack at nemebean.com Fri May 18 14:43:21 2018 From: openstack at nemebean.com (Ben Nemec) Date: Fri, 18 May 2018 09:43:21 -0500 Subject: [openstack-dev] [all] Eventlet + SSL + Python 3 = broken monkey patching leading to completely broken glance-api In-Reply-To: <713523d5-e9f6-24b4-b804-0fe1d39e339a@debian.org> References: <713523d5-e9f6-24b4-b804-0fe1d39e339a@debian.org> Message-ID: <477a894f-cd8b-03ca-d4f6-1456f3a790b9@nemebean.com> This is a known problem: https://bugs.launchpad.net/oslo.service/+bug/1482633 There have been some discussions on what to do about it but I don't think we have a definite plan yet. 
It also came up in the Python 3 support thread for some more context: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130274.html

On 05/18/2018 08:01 AM, Thomas Goirand wrote:
> Hi,
>
> It took me nearly a week to figure this out, as I'm not really an expert in Eventlet, OpenSSL and all, but now I've pinpointed a big problem.
>
> My tests were around Glance, which I was trying to run over SSL and Eventlet, though it seems to be a general issue with SSL + Python 3.
>
> In the normal setup, when I do:
> openstack image list
>
> then I get:
> Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error'))
>
> (more detailed stack dump at the end of this message [1])
>
> Though, with Eventlet 0.20.0, if in /usr/lib/python3/dist-packages/eventlet/green/ssl.py line 352, I comment out set_nonblocking(newsock) in the accept() function of the GreenSSLSocket, then everything works.
>
> Note that:
> - This also happens with latest Eventlet 0.23.0
> - There's no problem without SSL
> - There's no commit on top of 0.23.0 relevant to the issue
>
> The issue has been reported here 2 years ago: https://github.com/eventlet/eventlet/issues/308
>
> it's marked with "importance-bug" and "need-contributor", but nobody did anything about it.
>
> I also tried running with libapache2-mod-wsgi-py3, but then I'm hitting another bug: https://bugs.launchpad.net/glance/+bug/1518431
>
> what's going on is that glanceclient spits out a 411 error complaining about content length. That issue is seen *only* when using Apache and mod_wsgi.
>
> So, I'm left with no solution here: Glance never works over SSL and Python 3. Something's really wrong and should be fixed. Please help!
>
> This also pinpoints something: our CI is *not* covering the SSL case, or mod_wsgi, when really, it should. We should have tests with:
> - mod_wsgi
> - eventlet
> - uwsgi
> and all of the above with and without SSL, plus Python 2 and 3, plus with file or swift backend. That's 24 possible problem combinations, which we should IMO all cover. We don't need to run all tests, but maybe just make sure that at least the daemon works, which isn't the case at the moment for most of these use cases. The only setups that work are:
> - eventlet with or without SSL, using Python 2
> - eventlet without SSL with Python 3
> - apache with or without SSL without swift backend
>
> As much as I understand, we're only testing with eventlet with Python 2 and 3 without SSL and file backend. That's 2 setups out of 24... Can someone work on fixing this?
>
> Cheers,
>
> Thomas Goirand (zigo)
>
> [1]
>
> Unable to establish connection to https://127.0.0.1:9292/v2/images: ('Connection aborted.', OSError(0, 'Error'))
> Traceback (most recent call last):
>   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 601, in urlopen
>     chunked=chunked)
>   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 346, in _make_request
>     self._validate_conn(conn)
>   File "/usr/lib/python3/dist-packages/urllib3/connectionpool.py", line 852, in _validate_conn
>     conn.connect()
>   File "/usr/lib/python3/dist-packages/urllib3/connection.py", line 326, in connect
>     ssl_context=context)
>   File "/usr/lib/python3/dist-packages/urllib3/util/ssl_.py", line 329, in ssl_wrap_socket
>     return context.wrap_socket(sock, server_hostname=server_hostname)
>   File "/usr/lib/python3.5/ssl.py", line 385, in wrap_socket
>     _context=self)
>   File "/usr/lib/python3.5/ssl.py", line 760, in __init__
>     self.do_handshake()
>   File "/usr/lib/python3.5/ssl.py", line 996, in do_handshake
>     self._sslobj.do_handshake()
>   File "/usr/lib/python3.5/ssl.py", line 641, in do_handshake
>     self._sslobj.do_handshake()
> OSError: [Errno 0] Error
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From lbragstad at gmail.com Fri May 18 15:39:12 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 18 May 2018 10:39:12 -0500
Subject: [openstack-dev] [keystone] team dinner
Message-ID: <1d3de132-ead9-cd24-e5fe-f5577c3227c6@gmail.com>

Hey all,

I put together a survey to see if we can plan a night to have supper together [0]. I'll start parsing responses tomorrow and see what we can get lined up.

Thanks and safe travels to Vancouver,

Lance

[0] https://goo.gl/forms/ogNsf9dUno8BHvqu1

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: 

From emccormick at cirrusseven.com Fri May 18 16:42:17 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Fri, 18 May 2018 09:42:17 -0700
Subject: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions
Message-ID: 

Hello all,

There are two forum sessions in Vancouver covering Fast Forward Upgrades.

Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220
Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220

The combined etherpad for both sessions can be found at: https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades

Please take some time to add in topics you would like to see discussed or add any other pertinent information. There are several reference links at the top which are worth reviewing prior to the sessions if you have the time.

See you all in Vancouver!

Cheers,
Erik

From johnsomor at gmail.com Fri May 18 16:43:03 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Fri, 18 May 2018 09:43:03 -0700
Subject: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack
In-Reply-To: <0b16ed3e-a456-a704-fb7b-ffb403616cbe@roodsari.us>
References: <0b16ed3e-a456-a704-fb7b-ffb403616cbe@roodsari.us>
Message-ID: 

Hi rezroo,

Yes, the recent release of pip 10 broke the disk image building.
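The root cause is that pip 10 refuses to uninstall distutils-installed packages such as PyYAML, hence the "Cannot uninstall 'PyYAML'" error quoted below. As a hedged illustration of the general idea only -- the patch mentioned next may implement the pin differently -- keeping the build environment on a pip older than 10 avoids that behaviour:

    # Illustration only; the exact version pin is an assumption.
    $ sudo pip install 'pip<10'
    $ pip --version   # expect a 9.x release, e.g. pip 9.0.3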
There is a patch posted here: https://review.openstack.org/#/c/562850/ pending review that works around this issue for the ocata branch by pinning the pip used for the image building to a version that does not have this issue.

Michael

On Thu, May 17, 2018 at 7:38 PM, rezroo wrote:
> Hello - I'm trying to install a working local.conf devstack ocata on a new server, and some python packages have changed so I end up with this error during the build of the octavia image:
>
> 2018-05-18 01:00:26.276 | Found existing installation: Jinja2 2.8
> 2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
> 2018-05-18 01:00:26.280 | Successfully uninstalled Jinja2-2.8
> 2018-05-18 01:00:26.839 | Found existing installation: PyYAML 3.11
> 2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
>
> 2018-05-18 02:05:44.768 | Unmount /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
> 2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
> 2018-05-18 02:05:44.820 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
> 2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
> 2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
> 2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
> 2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
> 2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
> 2018-05-18 02:05:50.668 | +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1 exit_trap
> 2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494 local r=1
> 2018-05-18 02:05:50.690 | ++./devstack/stack.sh:exit_trap:495 jobs -p
> 2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495 jobs=
> 2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498 [[ -n '' ]]
> 2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504 kill_spinner
> 2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390 '[' '!' -z '' ']'
> 2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506 [[ 1 -ne 0 ]]
> 2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507 echo 'Error on exit'
> 2018-05-18 02:05:50.751 | Error on exit
> 2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508 generate-subunit 1526608058 1092 fail
> 2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509 [[ -z /tmp ]]
> 2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512 /home/stack/devstack/tools/worlddump.py -d /tmp
>
> I've tried pip uninstalling PyYAML and pip installing it before running stack.sh, but the error comes back.
>
> $ sudo pip uninstall PyYAML
> The directory '/home/stack/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
> Uninstalling PyYAML-3.12:
> /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
> /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
> /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
> /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
> /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
> /usr/local/lib/python2.7/dist-packages/_yaml.so
> Proceed (y/n)?
y > Successfully uninstalled PyYAML-3.12 > > I've posted my question to the pip folks and they think it's an openstack > issue: https://github.com/pypa/pip/issues/4805 > > Is there a workaround here? > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lbragstad at gmail.com Fri May 18 16:55:25 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Fri, 18 May 2018 11:55:25 -0500 Subject: [openstack-dev] [keystone] project onboarding Message-ID: <3fe58002-f336-9b6d-0d98-ce382c32a74b@gmail.com> Hey all, We've started an etherpad in an attempt to capture information prior to the on-boarding session on Monday [0]. If you're looking to get something specific out of the session, please let us know in the etherpad [1]. This will help us come to the session prepared and make the most of the time we have. See you there, Lance [0] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21633/keystone-project-onboarding [1] https://etherpad.openstack.org/p/YVR-rocky-keystone-project-onboarding -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From melwittt at gmail.com Fri May 18 17:12:19 2018 From: melwittt at gmail.com (melanie witt) Date: Fri, 18 May 2018 10:12:19 -0700 Subject: [openstack-dev] [nova] summit sessions of interest Message-ID: <4943222e-68a7-596a-195c-04e26ac3e5f5@gmail.com> Howdy everyone, Here's a last-minute (sorry) list of sessions you might find interesting from a nova perspective. Some of these are cross-project sessions of general interest. 
-melanie Forum sessions -------------- Monday ------ * Default Roles Mon 21, 11:35am - 12:15pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles * Building the path to extracting Placement from Nova Mon 21, 3:10pm - 3:50pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21716/building-the-path-to-extracting-placement-from-nova * Ops/Devs: One community Mon 21, 4:20pm - 5:00pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21747/opsdevs-one-community * Planning to use Placement in Cinder Mon 21, 4:20pm - 5:00pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21718/planning-to-use-placement-in-cinder * Python 2 Deprecation Timeline Mon 21, 5:10pm - 5:50pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21741/python-2-deprecation-timeline Tuesday ------- * Multi-attach introduction and future direction Tue 22, 11:50am - 12:30pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21732/multi-attach-introduction-and-future-direction * Pre-emptible instances - the way forward Tue 22, 1:50pm - 2:30pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21787/pre-emptible-instances-the-way-forward * nova/neutron + ops cross-project session Tue 22, 3:30pm - 4:10pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21754/novaneutron-ops-cross-project-session * CellsV2 migration process sync with operators Tue 22, 4:40pm - 5:20pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21755/cellsv2-migration-process-sync-with-operators Wednesday --------- * Making NFV features easier to use Wed 23, 11:00am - 11:40am https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21776/making-nfv-features-easier-to-use * Nova - Project Onboarding Wed 23, 1:50pm - 2:30pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21641/nova-project-onboarding * Missing features in OpenStack for public clouds Wed 23, 2:40pm - 3:20pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21749/missing-features-in-openstack-for-public-clouds * API Debt Cleanup Wed 23, 4:40pm - 5:20pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21881/api-debt-cleanup Thursday -------- * Extended Maintenance part I: past, present and future 9:00am - 9:40am https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21721/extended-maintenance-part-i-past-present-and-future * Extended Maintenance part II: EM and release cycles 9:50am - 10:30am https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21745/extended-maintenance-part-ii-em-and-release-cycles * S Release Goals Thu 24, 11:50am - 12:30pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21738/s-release-goals * Unified Limits Thu 24, 2:40pm - 3:20pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits Presentations ------------- Monday ------ * Moving from CellsV1 to CellsV2 at CERN Mon 21, 11:35am - 12:15pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20667/moving-from-cellsv1-to-cellsv2-at-cern * Call it real : Virtual GPUs in Nova Mon 21, 3:10pm - 3:50pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20802/call-it-real-virtual-gpus-in-nova * The multi-release, multi-project road to volume multi-attach Mon 21, 5:10pm - 5:50pm 
https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20850/the-multi-release-multi-project-road-to-volume-multi-attach Tuesday ------- * Placement, Present and Future, in Nova and Beyond Tue 22, 4:40pm - 5:20pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/20813/placement-present-and-future-in-nova-and-beyond Wednesday --------- * Nova - Project Update Wed 23, 11:50am - 12:30pm https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21598/nova-project-update From colleen at gazlene.net Fri May 18 17:21:54 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 18 May 2018 19:21:54 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 14 May 2018 Message-ID: <1526664114.3554530.1377050720.035BEBE6@webmail.messagingengine.com> # Keystone Team Update - Week of 14 May 2018 ## News ### WSGI Morgan has been working on converting keystone's core application to use Flask[1], which will help us to stop using paste.deploy and simplify our WSGI middleware and routing. While we're reworking our WSGI application framework, we also need to be thinking about how we can implement the mutable configuration community goal[2] which relies on having a SIGHUP handler in the service application that can talk to oslo.config, which is a feature that is part of oslo.service which we're not using. [1] https://review.openstack.org/#/c/568377/ [2] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html ### Sphinx issues This week we started seeing mysterious issues with the API docs builder in the docs jobs for keystoneauth[3][4]. They seemed to start sometime after the upper-constraint for Sphinx was bumped to 1.7.4[5] and seemed to go away when the constraint was reverted[6], but we haven't fully confirmed that correlation yet. If you have some free time and like puzzles please feel free to dive in. [3] http://logs.openstack.org/65/568365/5/check/build-openstack-sphinx-docs/368b8db/ [4] http://logs.openstack.org/40/568640/2/check/build-openstack-sphinx-docs/c66ea98/ [5] https://review.openstack.org/#/c/566451/ [6] https://review.openstack.org/#/c/568248/ ### Summit/forum next week The OpenStack Summit and Forum is next week in Vancouver, BC. A team dinner is going to be organized, so please respond to the survey[7] with your availability if you'd like to join. 
[7] http://lists.openstack.org/pipermail/openstack-dev/2018-May/130649.html

Some sessions that might be of interest to the keystone team are:

Default Roles - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles
Project Update - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21584/keystone-project-update
Project Onboarding - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21633/keystone-project-onboarding
Possible edge architectures for Keystone - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21737/possible-edge-architectures-for-keystone
Feedback session - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21762/keystone-feedback-session
Unified limits - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits

The Open Research Cloud Alliance, which focuses on federated cloud topics, is also meeting on Thursday (requires a separate registration) - https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21845/cloud-federation-and-open-research-cloud-alliance-congress

## Open Specs

Search query: https://bit.ly/2G8Ai5q

In addition to the specs proposed for Rocky, we also have the Patrole in CI spec[8] proposed for Stein. It was originally proposed in the openstack-specs repo but has now been reproposed to the keystone-specs repo.

[8] https://review.openstack.org/#/c/464678/

## Recently Merged Changes

Search query: https://bit.ly/2IACk3F

We merged 15 changes this week.

## Changes that need Attention

Search query: https://bit.ly/2wv7QLK

There are 37 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots.

## Bugs

This week we opened 5 new bugs and closed 7.

## Milestone Outlook

https://releases.openstack.org/rocky/schedule.html

The spec freeze is in about three weeks. We're starting to close in on our bigger specs so things are looking good.

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter

From ekuvaja at redhat.com Fri May 18 18:39:16 2018
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Fri, 18 May 2018 19:39:16 +0100
Subject: [openstack-dev] [Glance] Vancouver Summit Glance Dinner planning
Message-ID: 

Hi all,

Time to see if we could get the glance folks together for dinner and perhaps some refreshing beverages. If you are interested, please do contribute to the plans here: https://etherpad.openstack.org/p/yvr-glance-dinner

- jokke

From lbragstad at gmail.com Fri May 18 21:02:47 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 18 May 2018 16:02:47 -0500
Subject: [openstack-dev] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion
In-Reply-To: 
References: 
Message-ID: <1d7a6055-df34-c0f6-98a0-d8a8f9cfafa8@gmail.com>

Here is the link to the session in case you'd like to add it to your schedule [0].
[0] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers

On 05/17/2018 07:55 PM, Rochelle Grober wrote:
> Folks,
>
> TL;DR
>
> The last session related to extended releases is: OpenStack is "mature" -- time to get serious on Maintainers
> It will be in room 220 at 11:00-11:40
>
> The etherpad for the last session in the series on Extended releases is here:
>
> https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3
>
> There are links to info on other communities' maintainer process/role/responsibilities also, as reference material on how others have made it work (or not).
>
> The nitty gritty details:
>
> The upcoming Forum is filled with sessions that are focused on issues needed to improve and maintain the sustainability of OpenStack projects for the long term. We have discussions on reducing technical debt, extended releases, fast forward installs, bringing Ops and User communities closer together, etc. The community is showing it is now invested in activities that are often part of "Sustaining Engineering" teams (corporate speak) or "Maintainers" (OSS speak). We are doing this; we are thinking about the moving parts to do this; let's think about the contributors who want to do this and bring some clarity to their roles and the processes they need to be successful. I am hoping you read this and keep these ideas in mind as you participate in the various Forum sessions. Then you can bring the ideas generated during all these discussions to the Maintainers session near the end of the Summit to brainstorm how to visualize and define this new(ish) component of our technical community.
>
> So, who has been doing the maintenance work so far? Mostly (mostly) unsung heroes like the Stable Release team, Release team, Oslo team, project liaisons and the community goals champions (yes, moving to py3 is a sustaining/maintenance type of activity). And some operators (Hi, mnaser!). We need to lean on their experience and what we think the community will need to reduce that technical debt to outline what the common tasks of maintainers should be, what else might fall in their purview, and how to partner with them to better serve them.
>
> With API lower limits, new tool versions, placement, py3, and even projects reaching "code complete" or "maintenance mode," there is a lot of work for maintainers to do (I really don't like that term, but is there one that fits OpenStack's community?). It would be great if we could find a way to share the load such that we can have part-time contributors here. We know that operators know how to cherry-pick, test in their clouds, do bug fixes. How do we pair with them to get fixes upstreamed without requiring them to be full-on developers? We have a bunch of alumni who have stopped being "cores" and sometimes even developers, but who love our community and might be willing and able to put in a few hours a week, maybe reviewing small patches, providing help with user/ops submitted patch requests, or whatever. They were trusted with +2 and +W in the past, so we should at least be able to trust they know what they know. We would need some way to identify them to Cores, since they would be sort of 1.5 on the voting scale, but……
>
> So, burnout is high in other communities for maintainers.
> We need to find a way to make sustaining the stable parts of OpenStack sustainable.
>
> Hope you can make the talk, or add to the etherpad, or both. The etherpad is very much still a work in progress (trying to organize it to make sense). If you want to jump in now, go for it, otherwise it should be in reasonable shape for use at the session. I hope we get a good mix of community and a good collection of those who are already doing the job without title.
>
> Thanks and see you next week.
>
> --rocky
>
> ------------------------------------------------------------------------
>
> 华为技术有限公司 Huawei Technologies Co., Ltd.
>
> Company_logo
>
> Rochelle Grober
> Sr. Staff Architect, Open Source
> Office Phone:408-330-5472
> Email:rochelle.grober at huawei.com
>
> ------------------------------------------------------------------------
>
> 本邮件及其附件含有华为公司的保密信息,仅限于发送给上面地址中列出的个人或群组。禁止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、或散发)本邮件中的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本邮件!
> This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!
>
> _______________________________________________
> User-committee mailing list
> User-committee at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5474 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: 

From rochelle.grober at huawei.com Fri May 18 21:07:46 2018
From: rochelle.grober at huawei.com (Rochelle Grober)
Date: Fri, 18 May 2018 21:07:46 +0000
Subject: [openstack-dev] [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion
In-Reply-To: <1d7a6055-df34-c0f6-98a0-d8a8f9cfafa8@gmail.com>
References: <1d7a6055-df34-c0f6-98a0-d8a8f9cfafa8@gmail.com>
Message-ID: 

Thanks, Lance!

Also, the more I think about it, the more I think Maintainer has too much baggage to use that term for this role. It really is "continuity" that we are looking for. Continuous important fixes, continuous updates of tools used to produce the SW. Keep this in the back of your minds for the discussion. And yes, this is a discussion to see if we are interested, and only if there is interest, how to move forward.

--Rocky

From: Lance Bragstad [mailto:lbragstad at gmail.com]
Sent: Friday, May 18, 2018 2:03 PM
To: Rochelle Grober ; openstack-dev ; openstack-operators ; user-committee
Subject: Re: [User-committee] [Forum] [all] [Stable] OpenStack is "mature" -- time to get serious on Maintainers -- Session etherpad and food for thought for discussion

Here is the link to the session in case you'd like to add it to your schedule [0].
[0] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21759/openstack-is-mature-time-to-get-serious-on-maintainers

On 05/17/2018 07:55 PM, Rochelle Grober wrote:

Folks,

TL;DR

The last session related to extended releases is: OpenStack is "mature" -- time to get serious on Maintainers
It will be in room 220 at 11:00-11:40

The etherpad for the last session in the series on Extended releases is here:

https://etherpad.openstack.org/p/YVR-openstack-maintainers-maint-pt3

There are links to info on other communities' maintainer process/role/responsibilities also, as reference material on how others have made it work (or not).

The nitty gritty details:

The upcoming Forum is filled with sessions that are focused on issues needed to improve and maintain the sustainability of OpenStack projects for the long term. We have discussions on reducing technical debt, extended releases, fast forward installs, bringing Ops and User communities closer together, etc. The community is showing it is now invested in activities that are often part of "Sustaining Engineering" teams (corporate speak) or "Maintainers" (OSS speak). We are doing this; we are thinking about the moving parts to do this; let's think about the contributors who want to do this and bring some clarity to their roles and the processes they need to be successful. I am hoping you read this and keep these ideas in mind as you participate in the various Forum sessions. Then you can bring the ideas generated during all these discussions to the Maintainers session near the end of the Summit to brainstorm how to visualize and define this new(ish) component of our technical community.

So, who has been doing the maintenance work so far? Mostly (mostly) unsung heroes like the Stable Release team, Release team, Oslo team, project liaisons and the community goals champions (yes, moving to py3 is a sustaining/maintenance type of activity). And some operators (Hi, mnaser!). We need to lean on their experience and what we think the community will need to reduce that technical debt to outline what the common tasks of maintainers should be, what else might fall in their purview, and how to partner with them to better serve them.

With API lower limits, new tool versions, placement, py3, and even projects reaching "code complete" or "maintenance mode," there is a lot of work for maintainers to do (I really don't like that term, but is there one that fits OpenStack's community?). It would be great if we could find a way to share the load such that we can have part-time contributors here. We know that operators know how to cherry-pick, test in their clouds, do bug fixes. How do we pair with them to get fixes upstreamed without requiring them to be full-on developers? We have a bunch of alumni who have stopped being "cores" and sometimes even developers, but who love our community and might be willing and able to put in a few hours a week, maybe reviewing small patches, providing help with user/ops submitted patch requests, or whatever. They were trusted with +2 and +W in the past, so we should at least be able to trust they know what they know. We would need some way to identify them to Cores, since they would be sort of 1.5 on the voting scale, but……

So, burnout is high in other communities for maintainers. We need to find a way to make sustaining the stable parts of OpenStack sustainable.

Hope you can make the talk, or add to the etherpad, or both. The etherpad is very much still a work in progress (trying to organize it to make sense).
If you want to jump in now, go for it, otherwise it should be in reasonable shape for use at the session. I hope we get a good mix of community and a good collection of those who are already doing the job without title. Thanks and see you next week. --rocky ________________________________ 华为技术有限公司 Huawei Technologies Co., Ltd. [Company_logo] Rochelle Grober Sr. Staff Architect, Open Source Office Phone:408-330-5472 Email:rochelle.grober at huawei.com ________________________________  本邮件及其附件含有华为公司的保密信息,仅限于发送给上面地址中列出的个人或群组。禁 止任何其他人以任何形式使用(包括但不限于全部或部分地泄露、复制、或散发)本邮件中 的信息。如果您错收了本邮件,请您立即电话或邮件通知发件人并删除本邮件! This e-mail and its attachments contain confidential information from HUAWEI, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it! _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5474 bytes Desc: image001.png URL: From thierry at openstack.org Fri May 18 22:59:19 2018 From: thierry at openstack.org (Thierry Carrez) Date: Sat, 19 May 2018 00:59:19 +0200 Subject: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions In-Reply-To: References: Message-ID: <0050ae4b-ea69-76ae-fe97-e90d79af4732@openstack.org> Erik McCormick wrote: > There are two forum sessions in Vancouver covering Fast Forward Upgrades. > > Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220 > Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220 > > The combined etherpad for both sessions can be found at: > https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades You should add it to the list of all etherpads at: https://wiki.openstack.org/wiki/Forum/Vancouver2018 -- Thierry Carrez (ttx) From emccormick at cirrusseven.com Sat May 19 00:33:12 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Fri, 18 May 2018 17:33:12 -0700 Subject: [openstack-dev] Fast Forward Upgrades (FFU) Forum Sessions In-Reply-To: <0050ae4b-ea69-76ae-fe97-e90d79af4732@openstack.org> References: <0050ae4b-ea69-76ae-fe97-e90d79af4732@openstack.org> Message-ID: On Fri, May 18, 2018 at 3:59 PM, Thierry Carrez wrote: > Erik McCormick wrote: >> There are two forum sessions in Vancouver covering Fast Forward Upgrades. 
>> >> Session 1 (Current State): Wednesday May 23rd, 09:00 - 09:40, Room 220 >> Session 2 (Future Work): Wednesday May 23rd, 09:50 - 10:30, Room 220 >> >> The combined etherpad for both sessions can be found at: >> https://etherpad.openstack.org/p/YVR-forum-fast-forward-upgrades > > You should add it to the list of all etherpads at: > https://wiki.openstack.org/wiki/Forum/Vancouver2018 > Done > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -Erik From openstack at roodsari.us Sat May 19 05:16:22 2018 From: openstack at roodsari.us (rezroo) Date: Fri, 18 May 2018 22:16:22 -0700 Subject: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack Message-ID: Hi - let's try this again - this time with pike :-) Any suggestions on how to get the image builder to create a larger loop device? I think that's what the problem is. Thanks in advance. 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO diskimage_builder.block_device.level1.mbr [-] Write partition entry blockno [0] entry [0] start [2048] length [4190208]       [57/1588] 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO diskimage_builder.block_device.utils [-] Calling [sudo sync] 2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs /dev/loop3] 2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4 -i 4096 -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d 9c5fcb5d9c -q /dev/mapper/loop3p1] 2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p /tmp/dib_build.zv2VZo3W/mnt/] 2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO diskimage_builder.block_device.level3.mount [-] Mounting [mount_mkfs_root] to [/tmp/dib_build.zv2VZo3W/mnt/] 2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO diskimage_builder.block_device.utils [-] Calling [sudo mount /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/] 2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO diskimage_builder.block_device.blockdevice [-] create() finished 2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO diskimage_builder.block_device.blockdevice [-] Getting value for [image-block-device] 2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO diskimage_builder.block_device.blockdevice [-] Getting value for [image-block-devices] 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO diskimage_builder.block_device.blockdevice [-] Creating fstab 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p /tmp/dib_build.zv2VZo3W/built/etc] 2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO diskimage_builder.block_device.utils [-] Calling [sudo cp /tmp/dib_build.zv2VZo3W/states/block-device/fstab /tmp/dib_build.zv2VZo3W/bui lt/etc/fstab] 2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline 2018-05-19 05:03:12.947 | + source /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline 2018-05-19 05:03:12.947 | ++ export 'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb nomodeset 
vga=normal' 2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb nomodeset vga=normal' 2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash 2018-05-19 05:03:12.950 | + source /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash 2018-05-19 05:03:12.950 | ++++ dirname /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash 2018-05-19 05:03:12.951 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..' 2018-05-19 05:03:12.951 | +++ dib-init-system 2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd 2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM 2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache 2018-05-19 05:03:12.955 | + source /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache 2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip 2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip 2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash 2018-05-19 05:03:12.958 | + source /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash 2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu 2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu 2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial 2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial 2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash 2018-05-19 05:03:12.961 | + source /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash 2018-05-19 05:03:12.961 | ++ export DIB_DEFAULT_INSTALLTYPE=source 2018-05-19 05:03:12.961 | ++ DIB_DEFAULT_INSTALLTYPE=source 2018-05-19 05:03:12.962 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/14-manifests 2018-05-19 05:03:12.963 | + source /tmp/in_target.d/finalise.d/../environment.d/14-manifests 2018-05-19 05:03:12.964 | ++ export DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests 2018-05-19 05:03:12.964 | ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests 2018-05-19 05:03:12.964 | ++ export DIB_MANIFEST_SAVE_DIR=/opt/stack/octavia/diskimage-create/amphora-x64-haproxy.d/ 2018-05-19 05:03:12.964 | ++ DIB_MANIFEST_SAVE_DIR=/opt/stack/octavia/diskimage-create/amphora-x64-haproxy.d/ 2018-05-19 05:03:12.965 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/50-dib-python-version 2018-05-19 05:03:12.966 | + source /tmp/in_target.d/finalise.d/../environment.d/50-dib-python-version 2018-05-19 05:03:12.966 | ++ '[' -z '' ']' 2018-05-19 05:03:12.966 | ++ '[' ubuntu == ubuntu ']' 2018-05-19 05:03:12.966 | ++ '[' xenial == trusty ']' 2018-05-19 05:03:12.966 | ++ '[' -z '' ']' 2018-05-19 05:03:12.966 | ++ DIB_PYTHON_VERSION=3 2018-05-19 05:03:12.966 | ++ export DIB_PYTHON_VERSION 2018-05-19 05:03:12.967 | ++ export DIB_PYTHON=python3 2018-05-19 05:03:12.967 | ++ DIB_PYTHON=python3 2018-05-19 05:03:12.967 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Sourcing environment file /tmp/in_target.d/finalise.d/../environment.d/99-cloud-init-datasources.bash [1/1588] 2018-05-19 05:03:12.969 | + source 
/tmp/in_target.d/finalise.d/../environment.d/99-cloud-init-datasources.bash 2018-05-19 05:03:12.969 | ++ export DIB_CLOUD_INIT_DATASOURCES=ConfigDrive 2018-05-19 05:03:12.969 | ++ DIB_CLOUD_INIT_DATASOURCES=ConfigDrive 2018-05-19 05:03:12.970 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Running /tmp/in_target.d/finalise.d/50-bootloader 2018-05-19 05:03:13.020 | INFO:root:Mapping for bootloader : grub-pc 2018-05-19 05:03:13.050 | Reading package lists... 2018-05-19 05:03:13.227 | Building dependency tree... 2018-05-19 05:03:13.227 | Reading state information... 2018-05-19 05:03:13.347 | The following additional packages will be installed: 2018-05-19 05:03:13.348 |   grub-common grub-gfxpayload-lists grub-pc-bin grub2-common libfreetype6 2018-05-19 05:03:13.348 | Suggested packages: 2018-05-19 05:03:13.348 |   multiboot-doc grub-emu xorriso desktop-base 2018-05-19 05:03:13.348 | Recommended packages: 2018-05-19 05:03:13.348 |   os-prober 2018-05-19 05:03:13.349 | The following NEW packages will be installed: 2018-05-19 05:03:13.349 |   grub-common grub-gfxpayload-lists grub-pc grub-pc-bin grub2-common 2018-05-19 05:03:13.350 |   libfreetype6 2018-05-19 05:03:13.365 | 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded. 2018-05-19 05:03:13.365 | Need to get 3621 kB of archives. 2018-05-19 05:03:13.365 | After this operation, 17.6 MB of additional disk space will be used. 2018-05-19 05:03:13.365 | E: You don't have enough free space in /var/cache/apt/archives/. 2018-05-19 05:03:13.981 | 2018-05-19 05:03:13.981 INFO diskimage_builder.block_device.level3.mount [-] Called for [mount_mkfs_root] 2018-05-19 05:03:13.981 | 2018-05-19 05:03:13.981 INFO diskimage_builder.block_device.utils [-] Calling [sudo umount /tmp/dib_build.zv2VZo3W/mnt/] 2018-05-19 05:03:13.991 | Traceback (most recent call last): 2018-05-19 05:03:13.991 |   File "/usr/local/bin/dib-block-device", line 11, in 2018-05-19 05:03:13.991 |     sys.exit(main()) 2018-05-19 05:03:13.991 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", line 120, in main 2018-05-19 05:03:13.991 |     return bdc.main() 2018-05-19 05:03:13.991 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", line 115, in main 2018-05-19 05:03:13.991 |     self.args.func() 2018-05-19 05:03:13.991 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", line 39, in cmd_umount 2018-05-19 05:03:13.991 |     self.bd.cmd_umount() 2018-05-19 05:03:13.991 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/blockdevice.py", line 420, in cmd_umount 2018-05-19 05:03:13.991 |     node.umount() 2018-05-19 05:03:13.991 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/level3/mount.py", line 98, in umount 2018-05-19 05:03:13.992 |     exec_sudo(["umount", self.state['mount'][self.mount_point]['path']]) 2018-05-19 05:03:13.992 |   File "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/utils.py", line 125, in exec_sudo 2018-05-19 05:03:13.992 |     ' '.join(sudo_cmd)) 2018-05-19 05:03:13.992 | subprocess.CalledProcessError: Command 'sudo umount /tmp/dib_build.zv2VZo3W/mnt/' returned non-zero exit status 32 2018-05-19 05:03:14.036 | +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1 exit_trap 2018-05-19 05:03:14.042 | +./stack.sh:exit_trap:521                  local r=1 2018-05-19 05:03:14.048 | ++./stack.sh:exit_trap:522                  jobs -p 2018-05-19 05:03:14.055 | 
+./stack.sh:exit_trap:522                  jobs= 2018-05-19 05:03:14.060 | +./stack.sh:exit_trap:525                  [[ -n '' ]] 2018-05-19 05:03:14.066 | +./stack.sh:exit_trap:531                  '[' -f /tmp/tmp.ivo3lCHyBX ']' 2018-05-19 05:03:14.071 | +./stack.sh:exit_trap:532                  rm /tmp/tmp.ivo3lCHyBX 2018-05-19 05:03:14.077 | +./stack.sh:exit_trap:536                  kill_spinner 2018-05-19 05:03:14.083 | +./stack.sh:kill_spinner:417               '[' '!' -z '' ']' 2018-05-19 05:03:14.089 | +./stack.sh:exit_trap:538                  [[ 1 -ne 0 ]] 2018-05-19 05:03:14.094 | +./stack.sh:exit_trap:539                  echo 'Error on exit' 2018-05-19 05:03:14.094 | Error on exit 2018-05-19 05:03:14.100 | +./stack.sh:exit_trap:541                  type -p generate-subunit 2018-05-19 05:03:14.105 | +./stack.sh:exit_trap:542                  generate-subunit 1526704935 1259 fail 2018-05-19 05:03:14.528 | +./stack.sh:exit_trap:544                  [[ -z /tmp ]] 2018-05-19 05:03:14.533 | +./stack.sh:exit_trap:547 /home/stack/devstack/tools/worlddump.py -d /tmp stack at os100-pike-1:~/devstack$ 2018-05-19 05:03:15.527 | +./stack.sh:exit_trap:556                  exit 1 $ df -h Filesystem           Size  Used Avail Use% Mounted on udev                 7.4G     0  7.4G   0% /dev tmpfs                1.5G   47M  1.5G   4% /run /dev/sda1             30G  5.7G   24G  20% / tmpfs                7.4G  4.0K  7.4G   1% /dev/shm tmpfs                5.0M     0  5.0M   0% /run/lock tmpfs                7.4G     0  7.4G   0% /sys/fs/cgroup tmpfs                1.5G     0  1.5G   0% /run/user/1000 /dev/loop0           2.0G   47M  2.0G   3% /opt/stack/data/swift/drives/sdb1 tmpfs                7.4G  504K  7.4G   1% /tmp/dib_build.zv2VZo3W tmpfs                7.4G  1.8G  5.6G  25% /tmp/dib_image.O61liYjw /dev/mapper/loop3p1  1.9G  1.7G  316K 100% /tmp/dib_build.zv2VZo3W/mnt -------------- next part -------------- An HTML attachment was scrubbed... URL: From lvmxhster at gmail.com Sat May 19 10:34:05 2018 From: lvmxhster at gmail.com (=?UTF-8?B?5bCR5ZCI5Yav?=) Date: Sat, 19 May 2018 18:34:05 +0800 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <0602af19-987b-e200-d49d-754bec4c0556@intel.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> Message-ID: 2018-05-18 19:58 GMT+08:00 Nadathur, Sundar : > Hi Matt, > On 5/17/2018 3:18 PM, Matt Riedemann wrote: > > On 5/17/2018 3:36 PM, Nadathur, Sundar wrote: > > This applies only to the resources that Nova handles, IIUC, which does not > handle accelerators. The generic method that Alex talks about is obviously > preferable but, if that is not available in Rocky, is the filter an option? > > > If nova isn't creating accelerator resources managed by cyborg, I have no > idea why nova would be doing quota checks on those types of resources. And > no, I don't think adding a scheduler filter to nova for checking > accelerator quota is something we'd add either. I'm not sure that would > even make sense - the quota for the resource is per tenant, not per host is > it? The scheduler filters work on a per-host basis. > > Can we not extend BaseFilter.filter_all() to get all the hosts in a > filter? > https://github.com/openstack/nova/blob/master/nova/filters. 
> py#L36 > > I should have made it clearer that this putative filter will be > out-of-tree, and needed only till better solutions become available. > > > Like any other resource in openstack, the project that manages that > resource should be in charge of enforcing quota limits for it. > > Agreed. Not sure how other projects handle it, but here's the situation > for Cyborg. A request may get scheduled on a compute node with no > intervention by Cyborg. So, the earliest check that can be made today is in > the selected compute node. A simple approach can result in quota violations > as in this example. > > Say there are 5 devices in a cluster. A tenant has a quota of 4 and is > currently using 3. That leaves 2 unused devices, of which the tenant is > permitted to use only one. But he may submit two concurrent requests, and > they may land on two different compute nodes. The Cyborg agent in each node > will see the current tenant usage as 3 and let the request go through, > resulting in quota violation. > > That's a bed design if Cyborg agent in each node let the request go through. And the current Cyborg quota design does not have this issue. > To prevent this, we need some kind of atomic update , like SQLAlchemy's > with_lockmode(): > https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy# > Pessimistic_Locking_-_SELECT_FOR_UPDATE > That seems to have issues, as documented in the link above. Also, since > every compute node does that, it would also serialize the bringup of all > instances with accelerators, across the cluster. > > If there is a better solution, I'll be happy to hear it. > > Thanks, > Sundar > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Sat May 19 13:30:35 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Sat, 19 May 2018 09:30:35 -0400 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <0602af19-987b-e200-d49d-754bec4c0556@intel.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> Message-ID: <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> On 05/18/2018 07:58 AM, Nadathur, Sundar wrote: > Agreed. Not sure how other projects handle it, but here's the situation > for Cyborg. A request may get scheduled on a compute node with no > intervention by Cyborg. So, the earliest check that can be made today is > in the selected compute node. A simple approach can result in quota > violations as in this example. > > Say there are 5 devices in a cluster. A tenant has a quota of 4 and > is currently using 3. That leaves 2 unused devices, of which the > tenant is permitted to use only one. But he may submit two > concurrent requests, and they may land on two different compute > nodes. The Cyborg agent in each node will see the current tenant > usage as 3 and let the request go through, resulting in quota violation. 
> > To prevent this, we need some kind of atomic update , like SQLAlchemy's > with_lockmode(): > https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE > > That seems to have issues, as documented in the link above. Also, since > every compute node does that, it would also serialize the bringup of all > instances with accelerators, across the cluster. > > If there is a better solution, I'll be happy to hear it. The solution is to implement the following two specs: https://review.openstack.org/#/c/509042/ https://review.openstack.org/#/c/569011/ The problem of consuming more resources than a user/project has quota for is not a new problem. Users have been able to go over their quota in all of the services for as long as I can remember -- they can do this by essentially DDoS'ing the API with lots of concurrent single-instance build requests [1] all at once. The tenant then ends up in an over-quota situation and is essentially unable to do anything at all before deleting resources. The only operators that I can remember that complained about this issue were the public cloud operators -- and rightfully so since quota abuse in public clouds meant their reputation for fairness might be questioned. Most operators I know of solved this problem by addressing *rate-limiting*, which is not the same as quota limits. By rate-limiting requests to the APIs, the operators were able to alleviate the problem by addressing a symptom, which was that high rates of concurrent requests could lead to over-quota situations. Nobody is using Cyborg separately from Nova at the moment (or ever?). It's not as if a user will be consuming an accelerator outside of a Nova instance -- since it is the Nova instance that is the workload that uses the accelerator. That means that Cyborg resources should be treated as just another resource class whose usage should be checked in a single query to the /usages placement API endpoint before attempting to spawn the instance (again, via Nova) that ends up consuming those resources. The claiming of all resources that are consumed by a Nova instance (which would include any accelerator resources) is an atomic operation that prevents over-allocation of any provider involved in the claim transaction. [2] This atomic operation in Nova/Placement *significantly* cuts down on the chances of a user/project exceeding its quota because it reduces the amount of time to get an accurate read of the resource usage to a very small amount of time (from seconds/tens of seconds to milliseconds). So, to sum up, my recommendation is to get involved in the two Nova specs above and help to see them to completion in Rocky. Doing so will free Cyborg developers up to focus on integration with the virt driver layer via the os-acc library, implementing the update_provider_tree() interface, and coming up with some standard resource classes for describing accelerated resources. Best, -jay [1] I'm explicitly calling out multiple concurrent single build requests here, since a build request for multiple instances is actually not a cause of over-quota because the entire set of requested instances is considered as a single unit for usage calculation. [2] technically, NUMA topology resources and PCI devices do not currently participate in this single claim transaction. This is not ideal, and is something we are actively working on addressing. 
Keep in mind there are also no quota classes for PCI devices or NUMA topologies, though, so the over-quota problems don't exist for those resource classes.

From gmann at ghanshyammann.com  Sat May 19 14:24:59 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Sat, 19 May 2018 23:24:59 +0900
Subject: [openstack-dev] [openstack-operators][qa] Tempest removal of test_get_service_by_service_and_host_name
Message-ID:

Hi All,

Patch https://review.openstack.org/#/c/569112/1 removed the
test_get_service_by_service_and_host_name from the tempest tree, which looks
ok as per the bug and commit msg. This satisfies the conditions for test
removal as per the process [1], and this mail completes the test removal
process by checking for external usage of this test.

There is one place this test is listed, in the Trio2o doc; I have raised a
patch in Trio2o to remove that to avoid any future confusion [2].

If this test is required by anyone, please respond to this mail, otherwise
we are good here.

..1 https://docs.openstack.org/tempest/latest/test_removal.html
..2 https://review.openstack.org/#/c/569568/

-gmann

From thomas at goirand.fr  Sat May 19 16:50:10 2018
From: thomas at goirand.fr (Thomas Goirand)
Date: Sat, 19 May 2018 18:50:10 +0200
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To: <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie>
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie>
Message-ID:

On 05/08/2018 06:01 PM, Graham Hayes wrote:
> Glance - Has issues with image upload + uwsgi + eventlet [1]

Yeah, as far as I can see from my experience last week, there's no working
mode of operation with Glance, Python3 and SSL:

- It doesn't work with eventlet, with an SSL handshake failure if we don't
remove that line:
https://github.com/eventlet/eventlet/blob/master/eventlet/green/ssl.py#L342
(of course, removing the set_nonblocking() line in eventlet is *not* a
solution)
- It doesn't work with uwsgi (connection reset by peer, IIRC)
- It doesn't work with Apache (Content-Length issue when uploading images)

The only mode that I didn't test (yet) is using eventlet without SSL, and
then using HA-Proxy to do the SSL part. Maybe using Apache with mod_proxy
will work as well; I will probably test that too, and see which one
integrates more easily with puppet-openstack.

I don't see why the above mod_proxy or haproxy deployment wouldn't work,
but after all the frustrations I had with Glance last week, I'm prepared
for anything...

So, to generalize, yeah, we definitely need to fix this issue with Eventlet
ASAP. But also fix Glance so that it can work with uwsgi and Apache
mod_wsgi.

Cheers,

Thomas Goirand (zigo)

From zigo at debian.org  Sat May 19 17:04:53 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sat, 19 May 2018 19:04:53 +0200
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To: <20180508162256.GA11443@zeong>
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong>
Message-ID: <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org>

On 05/08/2018 06:22 PM, Matthew Treinish wrote:
>> Glance - Has issues with image upload + uwsgi + eventlet [1]
>
> This actually is a bit misleading. Glance works fine with image upload and uwsgi.
> That's the only configuration of glance in a wsgi app that works because
> of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides
> an alternate interface to read chunked requests which enables this to work.
> If you look at the bugs linked off that release note about image upload
> you'll see they're all fixed.

Hi Matt,

I'm quite happy to read the above. Just to make sure...

Can you confirm that Glance + Python 3 + uwsgi with SSL will work using
the below setup?

using:
- RBD backend
- swift backend
- swift+rgw

If so, then I'll probably end up pushing for such a uwsgi setup.

If I understand you correctly, it won't work with Apache mod_wsgi,
because of this chunked transfer encoding, which is what made it fail
when I tried using the RBD backend. Right?

> The issues glance has with running in a wsgi app are related to its
> use of async tasks via taskflow. (which includes the tasks api and
> image import stuff) This shouldn't be hard to fix, and I've had
> patches up to address these for months:
>
> https://review.openstack.org/#/c/531498/
> https://review.openstack.org/#/c/549743/

Do I need to backport these patches to Queens to run Glance the way I
described? Will it also fix running Glance with mod_wsgi?

Cheers,

Thomas Goirand (zigo)

From zigo at debian.org  Sat May 19 17:21:22 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sat, 19 May 2018 19:21:22 +0200
Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
In-Reply-To: <20180508175543.GB11443@zeong>
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong>
Message-ID: <47c440ec-1df6-ef5b-008e-ea35e7996926@debian.org>

On 05/08/2018 07:55 PM, Matthew Treinish wrote:
> I wrote up a doc about running under
> apache when I added the uwsgi chunked transfer encoding support to glance about
> running glance under apache here:
>
> https://docs.openstack.org/glance/latest/admin/apache-httpd.html
>
> Which includes how you have to configure things to get it working and a section
> on why mod_wsgi doesn't work.

Thanks for that. Could you also push a uWSGI .ini configuration example
file, as well as the mod_proxy example? There are so many options in uwsgi
that I don't want to risk doing something wrong. I've pasted my config
at the end of this message. Do you think it's also OK to use SSL
directly with uwsgi, using the --https option? What about the 104 error
that I've been experiencing? Is it because I'm not using mod_proxy?

BTW, there's no need to manually do the symlink, you can use instead:
a2ensite uwsgi-glance-api.conf

Cheers,

Thomas Goirand (zigo)

[uwsgi]
############################
### Generic UWSGI config ###
############################

# Override the default size for headers from the 4k default.
buffer-size = 65535

# This avoids error 104: "Connection reset by peer"
rem-header = Content-Length

# This is running standalone
master = true

# Threads and processes
enable-threads = true

processes = 4

# uwsgi recommends this to prevent thundering herd on accept.
thunder-lock = true

plugins = python3

# This ensures that file descriptors aren't shared between the WSGI application processes.
lazy-apps = true

# Log from the wsgi application: needs python3-pastescript as a runtime dependency.
paste-logger = true

# automatically kill workers if master dies
no-orphans = true

# exit instead of brutal reload on SIGTERM
die-on-term = true

##################################
### OpenStack service specific ###
##################################

# This is the standard port for the WSGI application, listening on all
# available IPs
http-socket = :9292
logto = /var/log/glance/glance-api.log
name = glance-api
uid = glance
gid = glance
chdir = /var/lib/glance
wsgi-file = /usr/bin/glance-wsgi-api

From mtreinish at kortar.org  Sat May 19 17:40:56 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Sat, 19 May 2018 13:40:56 -0400
Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
In-Reply-To: <47c440ec-1df6-ef5b-008e-ea35e7996926@debian.org>
References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <47c440ec-1df6-ef5b-008e-ea35e7996926@debian.org>
Message-ID: <20180519174055.GA29003@sinanju.localdomain>

On Sat, May 19, 2018 at 07:21:22PM +0200, Thomas Goirand wrote:
> On 05/08/2018 07:55 PM, Matthew Treinish wrote:
> > I wrote up a doc about running under
> > apache when I added the uwsgi chunked transfer encoding support to glance about
> > running glance under apache here:
> >
> > https://docs.openstack.org/glance/latest/admin/apache-httpd.html
> >
> > Which includes how you have to configure things to get it working and a section
> > on why mod_wsgi doesn't work.
>
> Thanks for that. Could you also push a uWSGI .ini configuration example
> file, as well as the mod_proxy example? There's so many options in uwsgi
> that I don't want to risk doing something wrong. I've pasted my config
> at the end of this message. Do you think it's also OK to use SSL
> directly with uwsgi, using the --https option? What about the 104 error
> that I've been experiencing? Is it because I'm not using mod_proxy?

There already are example configs in the glance repo. I pushed them up when
I added the documentation:

https://github.com/openstack/glance/tree/master/httpd

Those configs are more or less just a mirror of what I set up for the gate
(and my personal cloud):

http://logs.openstack.org/47/566747/1/gate/tempest-full-py3/c7f3b2e/controller/logs/etc/glance/glance-uwsgi.ini.gz
http://logs.openstack.org/47/566747/1/gate/tempest-full-py3/c7f3b2e/controller/logs/apache_config/glance-wsgi-api_conf.txt.gz

The way I normally configure things is to do the ssl termination with apache
and then just limit the uwsgi socket on localhost. I haven't tried setting up
the ssl in uwsgi directly, since the idea was to share a single web server
with different endpoints off of it for each api service.

As for the 104 error, there are several probable causes; without seeing the
full configuration and looking at the traffic, it's hard to say where the
connections are getting reset. I would try getting your config to mirror
what we know to be working and then go from there.

>
> BTW, there's no need to manually do the symlink, you can use instead:
> a2ensite uwsgi-glance-api.conf

Feel free to push a patch to update the docs.

>
> Cheers,
>
> Thomas Goirand (zigo)
>
>
> [uwsgi]
> ############################
> ### Generic UWSGI config ###
> ############################
>
> # Override the default size for headers from the 4k default.
> buffer-size = 65535
>
> # This avoids error 104: "Connection reset by peer"
> rem-header = Content-Length
>
> # This is running standalone
> master = true
>
> # Threads and processes
> enable-threads = true
>
> processes = 4
>
> # uwsgi recommends this to prevent thundering herd on accept.
> thunder-lock = true
>
> plugins = python3
>
> # This ensures that file descriptors aren't shared between the WSGI
> application processes.
> lazy-apps = true
>
> # Log from the wsgi application: needs python3-pastescript as a runtime
> dependency.
> paste-logger = true
>
> # automatically kill workers if master dies
> no-orphans = true
>
> # exit instead of brutal reload on SIGTERM
> die-on-term = true
>
> ##################################
> ### OpenStack service specific ###
> ##################################
>
> # This is the standard port for the WSGI application, listening on all
> available IPs
> http-socket = :9292
> logto = /var/log/glance/glance-api.log
> name = glance-api
> uid = glance
> gid = glance
> chdir = /var/lib/glance
> wsgi-file = /usr/bin/glance-wsgi-api
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From mtreinish at kortar.org  Sat May 19 17:54:53 2018
From: mtreinish at kortar.org (Matthew Treinish)
Date: Sat, 19 May 2018 13:54:53 -0400
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To: <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org>
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org>
Message-ID: <20180519175453.GB29003@sinanju.localdomain>

On Sat, May 19, 2018 at 07:04:53PM +0200, Thomas Goirand wrote:
> On 05/08/2018 06:22 PM, Matthew Treinish wrote:
> >> Glance - Has issues with image upload + uwsgi + eventlet [1]
> >
> > This actually is a bit misleading. Glance works fine with image upload and uwsgi.
> > That's the only configuration of glance in a wsgi app that works because
> > of chunked transfer encoding not being in the WSGI protocol. [2] uwsgi provides
> > an alternate interface to read chunked requests which enables this to work.
> > If you look at the bugs linked off that release note about image upload
> > you'll see they're all fixed.
>
> Hi Matt,
>
> I'm quite happy to read the above. Just to make sure...
>
> Can you confirm that Glance + Python 3 + uwsgi with SSL will work using
> the below setup?

So glance with uwsgi, python3, and ssl works fine. (with the caveats I
mentioned below) We test that on every commit in the integrated gate today
in the tempest-full-py3 job. It's been that way for almost a year at this
point.

>
> using:
> - RBD backend
> - swift backend
> - swift+rgw

As for the backend store choice I don't have any personal experience using
either of these 3 as a backend store. That being said your choice of store
should be independent from the getting glance-api deployed behind uwsgi
and a webserver.
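As an aside, the chunked-transfer path that rules out mod_wsgi is easy to
exercise by hand, because the requests library switches to chunked transfer
encoding whenever the request body is a generator. A sketch -- the endpoint,
image id and token below are placeholders:

import requests

def chunks(path, size=65536):
    # A generator body makes requests send Transfer-Encoding: chunked,
    # the same upload path glance clients use.
    with open(path, "rb") as f:
        while True:
            buf = f.read(size)
            if not buf:
                return
            yield buf

resp = requests.put(
    "https://example.com/image/v2/images/IMAGE_ID/file",  # placeholder
    data=chunks("cirros.img"),
    headers={"X-Auth-Token": "TOKEN",  # placeholder
             "Content-Type": "application/octet-stream"})
print(resp.status_code)  # expect 204 behind uwsgi; mod_wsgi typically
                         # fails this request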
Although, you might have trouble with swift on py3, because IIRC that still
isn't working. (unless something changed recently) But, the store config is
really independent from getting the api to receive and handle api requests
properly.

>
> If so, then I'll probably end up pushing for such uwsgi setup.
>
> If I understand you correctly, it won't work with Apache mod_wsgi,
> because of this chunked transfer encoding, which is what made it fail
> when I tried using the RBD backend. Right?

This is correct, you cannot use glance and mod_wsgi together because it
will not handle requests with chunked transfer encoding by default. So it
will fail on any image upload request made to glance that uses chunked
transfer encoding.

>
> > The issues glance has with running in a wsgi app are related to its
> > use of async tasks via taskflow. (which includes the tasks api and
> > image import stuff) This shouldn't be hard to fix, and I've had
> > patches up to address these for months:
> >
> > https://review.openstack.org/#/c/531498/
> > https://review.openstack.org/#/c/549743/
>
> Do I need to backport these patches to Queens to run Glance the way I
> described? Will it also fix running Glance with mod_wsgi?

These patches are independent of getting things working for you. They
are only required for 2 API features in glance to work. The tasks api and
the image import api (which was added in queens). You don't need either
to upload images by default, and the patches will only ever be necessary
if you have something using those APIs (which personally I've never
encountered in the wild). There is also no test coverage in tempest or
any external test suite using these apis that I'm aware of, so your CI
likely won't even be blocked by this. (which is how this situation
arose in the first place)

-Matt Treinish
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: not available
URL:

From blair.bethwaite at gmail.com  Sat May 19 19:19:46 2018
From: blair.bethwaite at gmail.com (Blair Bethwaite)
Date: Sun, 20 May 2018 05:19:46 +1000
Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas
In-Reply-To: <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com>
References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com>
Message-ID:

Relatively Cyborg-naive question here...

I thought Cyborg was going to support a hot-plug model. So I certainly
hope it is not the expectation that accelerators will be encoded into
Nova flavors? That will severely limit its usefulness.

On 19 May 2018 at 23:30, Jay Pipes wrote:
> On 05/18/2018 07:58 AM, Nadathur, Sundar wrote:
>>
>> Agreed. Not sure how other projects handle it, but here's the situation
>> for Cyborg. A request may get scheduled on a compute node with no
>> intervention by Cyborg. So, the earliest check that can be made today is in
>> the selected compute node. A simple approach can result in quota violations
>> as in this example.
>>
>> Say there are 5 devices in a cluster. A tenant has a quota of 4 and
>> is currently using 3. That leaves 2 unused devices, of which the
>> tenant is permitted to use only one. But he may submit two
>> concurrent requests, and they may land on two different compute
>> nodes.
The Cyborg agent in each node will see the current tenant >> usage as 3 and let the request go through, resulting in quota >> violation. > >> >> >> To prevent this, we need some kind of atomic update , like SQLAlchemy's >> with_lockmode(): >> >> https://wiki.openstack.org/wiki/OpenStack_and_SQLAlchemy#Pessimistic_Locking_-_SELECT_FOR_UPDATE >> That seems to have issues, as documented in the link above. Also, since >> every compute node does that, it would also serialize the bringup of all >> instances with accelerators, across the cluster. > >> >> >> If there is a better solution, I'll be happy to hear it. > > > The solution is to implement the following two specs: > > https://review.openstack.org/#/c/509042/ > https://review.openstack.org/#/c/569011/ > > The problem of consuming more resources than a user/project has quota for is > not a new problem. Users have been able to go over their quota in all of the > services for as long as I can remember -- they can do this by essentially > DDoS'ing the API with lots of concurrent single-instance build requests [1] > all at once. The tenant then ends up in an over-quota situation and is > essentially unable to do anything at all before deleting resources. > > The only operators that I can remember that complained about this issue were > the public cloud operators -- and rightfully so since quota abuse in public > clouds meant their reputation for fairness might be questioned. Most > operators I know of solved this problem by addressing *rate-limiting*, which > is not the same as quota limits. By rate-limiting requests to the APIs, the > operators were able to alleviate the problem by addressing a symptom, which > was that high rates of concurrent requests could lead to over-quota > situations. > > Nobody is using Cyborg separately from Nova at the moment (or ever?). It's > not as if a user will be consuming an accelerator outside of a Nova instance > -- since it is the Nova instance that is the workload that uses the > accelerator. > > That means that Cyborg resources should be treated as just another resource > class whose usage should be checked in a single query to the /usages > placement API endpoint before attempting to spawn the instance (again, via > Nova) that ends up consuming those resources. > > The claiming of all resources that are consumed by a Nova instance (which > would include any accelerator resources) is an atomic operation that > prevents over-allocation of any provider involved in the claim transaction. > [2] > > This atomic operation in Nova/Placement *significantly* cuts down on the > chances of a user/project exceeding its quota because it reduces the amount > of time to get an accurate read of the resource usage to a very small amount > of time (from seconds/tens of seconds to milliseconds). > > So, to sum up, my recommendation is to get involved in the two Nova specs > above and help to see them to completion in Rocky. Doing so will free Cyborg > developers up to focus on integration with the virt driver layer via the > os-acc library, implementing the update_provider_tree() interface, and > coming up with some standard resource classes for describing accelerated > resources. > > Best, > -jay > > [1] I'm explicitly calling out multiple concurrent single build requests > here, since a build request for multiple instances is actually not a cause > of over-quota because the entire set of requested instances is considered as > a single unit for usage calculation. 
> > [2] technically, NUMA topology resources and PCI devices do not currently > participate in this single claim transaction. This is not ideal, and is > something we are actively working on addressing. Keep in mind there are also > no quota classes for PCI devices or NUMA topologies, though, so the > over-quota problems don't exist for those resource classes. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cheers, ~Blairo From jaypipes at gmail.com Sat May 19 22:37:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Sat, 19 May 2018 18:37:51 -0400 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> Message-ID: On 05/19/2018 03:19 PM, Blair Bethwaite wrote: > Relatively Cyborg-naive question here... > > I thought Cyborg was going to support a hot-plug model. So I certainly > hope it is not the expectation that accelerators will be encoded into > Nova flavors? That will severely limit its usefulness. Hi Blair! If it's not the VM or baremetal machine that is using the accelerator, what is? Best, -jay From mriedemos at gmail.com Sat May 19 23:30:31 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sat, 19 May 2018 16:30:31 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> Message-ID: <2756bf3e-8fec-1d70-4e5c-6094485a9ddb@gmail.com> On 5/19/2018 6:30 AM, Jay Pipes wrote: > The solution is to implement the following two specs: > > https://review.openstack.org/#/c/509042/ Bunch of upgrade / data migration landmines that we have to solve with this, not the least of which is that people using the CachingScheduler don't have allocation records at all after Ocata... -- Thanks, Matt From blair.bethwaite at gmail.com Sun May 20 00:58:45 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Sun, 20 May 2018 10:58:45 +1000 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> Message-ID: G'day Jay, On 20 May 2018 at 08:37, Jay Pipes wrote: > If it's not the VM or baremetal machine that is using the accelerator, what > is? It will be a VM or BM, but I don't think accelerators should be tied to the life of a single instance if that isn't technically necessary (i.e., they are hot-pluggable devices). I can see plenty of scope for use-cases where Cyborg is managing devices that are accessible to compute infrastructure via network/fabric (e.g. rCUDA or dedicated PCIe fabric). 
And even in the simple pci passthrough case (vfio or mdev) it isn't hard to imagine use-cases for workloads that only need an accelerator sometimes. -- Cheers, ~Blairo From johnsomor at gmail.com Sun May 20 01:57:25 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Sat, 19 May 2018 18:57:25 -0700 Subject: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack In-Reply-To: References: Message-ID: Yes, this just started occurring with Thursday/Fridays updates to the Ubuntu cloud image upstream of us. I have posted a patch for Queens here: https://review.openstack.org/#/c/569531 We will be back porting that as soon as we can to the other stable releases. Please review the backports as they come out to help the team merge them as soon as possible. Michael (johnsom) On Fri, May 18, 2018 at 10:16 PM, rezroo wrote: > Hi - let's try this again - this time with pike :-) > Any suggestions on how to get the image builder to create a larger loop > device? I think that's what the problem is. > Thanks in advance. > > 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO > diskimage_builder.block_device.level1.mbr [-] Write partition entry blockno > [0] entry [0] start [2048] length [4190208] [57/1588] > 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO > diskimage_builder.block_device.utils [-] Calling [sudo sync] > 2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO > diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs > /dev/loop3] > 2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO > diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4 -i 4096 > -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d > 9c5fcb5d9c -q /dev/mapper/loop3p1] > 2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO > diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p > /tmp/dib_build.zv2VZo3W/mnt/] > 2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO > diskimage_builder.block_device.level3.mount [-] Mounting [mount_mkfs_root] > to [/tmp/dib_build.zv2VZo3W/mnt/] > 2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO > diskimage_builder.block_device.utils [-] Calling [sudo mount > /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/] > 2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO > diskimage_builder.block_device.blockdevice [-] create() finished > 2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO > diskimage_builder.block_device.blockdevice [-] Getting value for > [image-block-device] > 2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO > diskimage_builder.block_device.blockdevice [-] Getting value for > [image-block-devices] > 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO > diskimage_builder.block_device.blockdevice [-] Creating fstab > 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO > diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p > /tmp/dib_build.zv2VZo3W/built/etc] > 2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO > diskimage_builder.block_device.utils [-] Calling [sudo cp > /tmp/dib_build.zv2VZo3W/states/block-device/fstab > /tmp/dib_build.zv2VZo3W/bui > lt/etc/fstab] > 2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline > 2018-05-19 05:03:12.947 | + source > /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline > 2018-05-19 05:03:12.947 | ++ export 'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb > nomodeset 
vga=normal' > 2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb nomodeset > vga=normal' > 2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash > 2018-05-19 05:03:12.950 | + source > /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash > 2018-05-19 05:03:12.950 | ++++ dirname > /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash > 2018-05-19 05:03:12.951 | +++ > PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..' > 2018-05-19 05:03:12.951 | +++ dib-init-system > 2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd > 2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM > 2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache > 2018-05-19 05:03:12.955 | + source > /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache > 2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip > 2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip > 2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash > 2018-05-19 05:03:12.958 | + source > /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash > 2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu > 2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu > 2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial > 2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial > 2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash > 2018-05-19 05:03:12.961 | + source > /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash > 2018-05-19 05:03:12.961 | ++ export DIB_DEFAULT_INSTALLTYPE=source > 2018-05-19 05:03:12.961 | ++ DIB_DEFAULT_INSTALLTYPE=source > 2018-05-19 05:03:12.962 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/14-manifests > 2018-05-19 05:03:12.963 | + source > /tmp/in_target.d/finalise.d/../environment.d/14-manifests > 2018-05-19 05:03:12.964 | ++ export > DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests > 2018-05-19 05:03:12.964 | ++ DIB_MANIFEST_IMAGE_DIR=/etc/dib-manifests > 2018-05-19 05:03:12.964 | ++ export > DIB_MANIFEST_SAVE_DIR=/opt/stack/octavia/diskimage-create/amphora-x64-haproxy.d/ > 2018-05-19 05:03:12.964 | ++ > DIB_MANIFEST_SAVE_DIR=/opt/stack/octavia/diskimage-create/amphora-x64-haproxy.d/ > 2018-05-19 05:03:12.965 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > /tmp/in_target.d/finalise.d/../environment.d/50-dib-python-version > 2018-05-19 05:03:12.966 | + source > /tmp/in_target.d/finalise.d/../environment.d/50-dib-python-version > 2018-05-19 05:03:12.966 | ++ '[' -z '' ']' > 2018-05-19 05:03:12.966 | ++ '[' ubuntu == ubuntu ']' > 2018-05-19 05:03:12.966 | ++ '[' xenial == trusty ']' > 2018-05-19 05:03:12.966 | ++ '[' -z '' ']' > 2018-05-19 05:03:12.966 | ++ DIB_PYTHON_VERSION=3 > 2018-05-19 05:03:12.966 | ++ export DIB_PYTHON_VERSION > 2018-05-19 05:03:12.967 | ++ export DIB_PYTHON=python3 > 2018-05-19 05:03:12.967 | ++ DIB_PYTHON=python3 > 2018-05-19 05:03:12.967 | dib-run-parts Sat May 19 05:03:12 UTC 2018 > Sourcing environment file > 
/tmp/in_target.d/finalise.d/../environment.d/99-cloud-init-datasources.bash > [1/1588] > 2018-05-19 05:03:12.969 | + source > /tmp/in_target.d/finalise.d/../environment.d/99-cloud-init-datasources.bash > 2018-05-19 05:03:12.969 | ++ export DIB_CLOUD_INIT_DATASOURCES=ConfigDrive > 2018-05-19 05:03:12.969 | ++ DIB_CLOUD_INIT_DATASOURCES=ConfigDrive > 2018-05-19 05:03:12.970 | dib-run-parts Sat May 19 05:03:12 UTC 2018 Running > /tmp/in_target.d/finalise.d/50-bootloader > 2018-05-19 05:03:13.020 | INFO:root:Mapping for bootloader : grub-pc > 2018-05-19 05:03:13.050 | Reading package lists... > 2018-05-19 05:03:13.227 | Building dependency tree... > 2018-05-19 05:03:13.227 | Reading state information... > 2018-05-19 05:03:13.347 | The following additional packages will be > installed: > 2018-05-19 05:03:13.348 | grub-common grub-gfxpayload-lists grub-pc-bin > grub2-common libfreetype6 > 2018-05-19 05:03:13.348 | Suggested packages: > 2018-05-19 05:03:13.348 | multiboot-doc grub-emu xorriso desktop-base > 2018-05-19 05:03:13.348 | Recommended packages: > 2018-05-19 05:03:13.348 | os-prober > 2018-05-19 05:03:13.349 | The following NEW packages will be installed: > 2018-05-19 05:03:13.349 | grub-common grub-gfxpayload-lists grub-pc > grub-pc-bin grub2-common > 2018-05-19 05:03:13.350 | libfreetype6 > 2018-05-19 05:03:13.365 | 0 upgraded, 6 newly installed, 0 to remove and 0 > not upgraded. > 2018-05-19 05:03:13.365 | Need to get 3621 kB of archives. > 2018-05-19 05:03:13.365 | After this operation, 17.6 MB of additional disk > space will be used. > 2018-05-19 05:03:13.365 | E: You don't have enough free space in > /var/cache/apt/archives/. > 2018-05-19 05:03:13.981 | 2018-05-19 05:03:13.981 INFO > diskimage_builder.block_device.level3.mount [-] Called for [mount_mkfs_root] > 2018-05-19 05:03:13.981 | 2018-05-19 05:03:13.981 INFO > diskimage_builder.block_device.utils [-] Calling [sudo umount > /tmp/dib_build.zv2VZo3W/mnt/] > 2018-05-19 05:03:13.991 | Traceback (most recent call last): > 2018-05-19 05:03:13.991 | File "/usr/local/bin/dib-block-device", line 11, > in > 2018-05-19 05:03:13.991 | sys.exit(main()) > 2018-05-19 05:03:13.991 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", > line 120, in main > 2018-05-19 05:03:13.991 | return bdc.main() > 2018-05-19 05:03:13.991 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", > line 115, in main > 2018-05-19 05:03:13.991 | self.args.func() > 2018-05-19 05:03:13.991 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/cmd.py", > line 39, in cmd_umount > 2018-05-19 05:03:13.991 | self.bd.cmd_umount() > 2018-05-19 05:03:13.991 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/blockdevice.py", > line 420, in cmd_umount > 2018-05-19 05:03:13.991 | node.umount() > 2018-05-19 05:03:13.991 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/level3/mount.py", > line 98, in umount > 2018-05-19 05:03:13.992 | exec_sudo(["umount", > self.state['mount'][self.mount_point]['path']]) > 2018-05-19 05:03:13.992 | File > "/usr/local/lib/python2.7/dist-packages/diskimage_builder/block_device/utils.py", > line 125, in exec_sudo > 2018-05-19 05:03:13.992 | ' '.join(sudo_cmd)) > 2018-05-19 05:03:13.992 | subprocess.CalledProcessError: Command 'sudo > umount /tmp/dib_build.zv2VZo3W/mnt/' returned non-zero exit status 32 > 2018-05-19 05:03:14.036 | > 
+/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1 > exit_trap > 2018-05-19 05:03:14.042 | +./stack.sh:exit_trap:521 local > r=1 > 2018-05-19 05:03:14.048 | ++./stack.sh:exit_trap:522 jobs > -p > 2018-05-19 05:03:14.055 | +./stack.sh:exit_trap:522 jobs= > 2018-05-19 05:03:14.060 | +./stack.sh:exit_trap:525 [[ -n > '' ]] > 2018-05-19 05:03:14.066 | +./stack.sh:exit_trap:531 '[' -f > /tmp/tmp.ivo3lCHyBX ']' > 2018-05-19 05:03:14.071 | +./stack.sh:exit_trap:532 rm > /tmp/tmp.ivo3lCHyBX > 2018-05-19 05:03:14.077 | +./stack.sh:exit_trap:536 > kill_spinner > 2018-05-19 05:03:14.083 | +./stack.sh:kill_spinner:417 '[' '!' > -z '' ']' > 2018-05-19 05:03:14.089 | +./stack.sh:exit_trap:538 [[ 1 > -ne 0 ]] > 2018-05-19 05:03:14.094 | +./stack.sh:exit_trap:539 echo > 'Error on exit' > 2018-05-19 05:03:14.094 | Error on exit > 2018-05-19 05:03:14.100 | +./stack.sh:exit_trap:541 type -p > generate-subunit > 2018-05-19 05:03:14.105 | +./stack.sh:exit_trap:542 > generate-subunit 1526704935 1259 fail > 2018-05-19 05:03:14.528 | +./stack.sh:exit_trap:544 [[ -z > /tmp ]] > 2018-05-19 05:03:14.533 | +./stack.sh:exit_trap:547 > /home/stack/devstack/tools/worlddump.py -d /tmp > stack at os100-pike-1:~/devstack$ 2018-05-19 05:03:15.527 | > +./stack.sh:exit_trap:556 exit 1 > > $ df -h > Filesystem Size Used Avail Use% Mounted on > udev 7.4G 0 7.4G 0% /dev > tmpfs 1.5G 47M 1.5G 4% /run > /dev/sda1 30G 5.7G 24G 20% / > tmpfs 7.4G 4.0K 7.4G 1% /dev/shm > tmpfs 5.0M 0 5.0M 0% /run/lock > tmpfs 7.4G 0 7.4G 0% /sys/fs/cgroup > tmpfs 1.5G 0 1.5G 0% /run/user/1000 > /dev/loop0 2.0G 47M 2.0G 3% /opt/stack/data/swift/drives/sdb1 > tmpfs 7.4G 504K 7.4G 1% /tmp/dib_build.zv2VZo3W > tmpfs 7.4G 1.8G 5.6G 25% /tmp/dib_image.O61liYjw > /dev/mapper/loop3p1 1.9G 1.7G 316K 100% /tmp/dib_build.zv2VZo3W/mnt > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From zigo at debian.org Sun May 20 13:05:34 2018 From: zigo at debian.org (Thomas Goirand) Date: Sun, 20 May 2018 15:05:34 +0200 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <20180519175453.GB29003@sinanju.localdomain> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org> <20180519175453.GB29003@sinanju.localdomain> Message-ID: <01588869-e3be-5157-013d-4d9a633108d5@debian.org> On 05/19/2018 07:54 PM, Matthew Treinish wrote: > On Sat, May 19, 2018 at 07:04:53PM +0200, Thomas Goirand wrote: >> using: >> - RBD backend >> - swift backend >> - swift+rgw > > As for the backend store choice I don't have any personal experience using > either of these 3 as a backend store. That being said your choice of store > should be independent from the getting glance-api deployed behind uwsgi > and a webserver. > > Although, you might have trouble with swift on py3, because IIRC that still > isn't working. (unless something changed recently) But, the store config is > really independent from getting the api to receive and handle api requests > properly. Thanks for these details. What exactly is the trouble with the Swift backend? Do you know? Is anyone working on fixing it? 
At my company, we'd be happy to work on that (if of course, it's not too time demanding).

>>> The issues glance has with running in a wsgi app are related to its
>>> use of async tasks via taskflow. (which includes the tasks api and
>>> image import stuff) This shouldn't be hard to fix, and I've had
>>> patches up to address these for months:
>>>
>>> https://review.openstack.org/#/c/531498/
>>> https://review.openstack.org/#/c/549743/
>>
>> Do I need to backport these patches to Queens to run Glance the way I
>> described? Will it also fix running Glance with mod_wsgi?
>
> These patches are independent of getting things working for you. They
> are only required for 2 API features in glance to work. The tasks api and
> the image import api (which was added in queens). You don't need either
> to upload images by default, and the patches will only ever be necessary
> if you have something using those APIs (which personally I've never
> encountered in the wild). There is also no test coverage in tempest or
> any external test suite using these apis that I'm aware of, so your CI
> likely won't even be blocked by this. (which is how this situation
> arose in the first place)

Alright. So hopefully, I'm very close to having Debian gate properly in
puppet-openstack upstream. As far as I could tell, Glance and Cinder are
the only pieces that are still failing with SSL (and everything already
works without SSL), so I must be very close to a nice result (after nearly
2 months of chasing this already).

Thanks again for all the very valuable details that you provided. I have
to admit that I was starting to lose faith in the project, because of all
the frustration of not finding a working solution.

I'll let the list know when I have something that fully works and gates
with puppet-openstack, of course.

Cheers,

Thomas Goirand (zigo)

From s.s.filatov94 at gmail.com  Sun May 20 13:22:01 2018
From: s.s.filatov94 at gmail.com (Sergey Filatov)
Date: Sun, 20 May 2018 16:22:01 +0300
Subject: [openstack-dev] [magnum] K8S apiserver key sync
In-Reply-To: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com>
References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com>
Message-ID:

Hi!

I'd like to initiate a discussion about this bug: [1].
To resolve this issue we need to generate a secret cert and pass it to
master nodes. We also need to store it somewhere to support scaling.
This issue is specific to kubernetes drivers. Currently in magnum we have
a general cert manager which is the same for all the drivers.
What do you think about moving cert_manager logic into a driver-specific
area? Having this common cert_manager logic forces us to generate client
cert with "admin" and "system:masters" subject & organisation names [2],
which is really something that we need only for kubernetes drivers.

[1] https://bugs.launchpad.net/magnum/+bug/1766546
[2] https://github.com/openstack/magnum/blob/2329cb7fb4d197e49d6c07d37b2f7ec14a11c880/magnum/conductor/handlers/common/cert_manager.py#L59-L64

..Sergey Filatov

> On 20 Apr 2018, at 20:57, Sergey Filatov wrote:
>
> Hello,
>
> I looked into the k8s drivers for magnum and I see that each api-server
> on a master node generates its own service-account-key-file. This causes
> issues with service-accounts authenticating on the api-server (in case
> the api-server endpoint moves).
> As far as I understand, we should either have all api-server keys synced
> across api-servers or pre-generate a single api-server key.
>
> What is the way for magnum to get over this issue?
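One way to pre-generate a single key, as suggested above, is to create the
service-account keypair once and ship the same two files to every master.
A sketch using the Python cryptography library (the file paths and the
exact kube-apiserver wiring are assumptions, not magnum code):

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate one RSA keypair up front instead of letting each
# kube-apiserver generate its own service account key.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                               backend=default_backend())

# Private half, used to sign service account tokens
# (the controller-manager's --service-account-private-key-file).
with open("sa.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption()))

# Public half, handed to every apiserver so they all validate the
# same tokens (--service-account-key-file).
with open("sa.pub", "wb") as f:
    f.write(key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo))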
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zigo at debian.org  Sun May 20 13:37:05 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 20 May 2018 15:37:05 +0200
Subject: [openstack-dev] Setting-up NoVNC 1.0.0 with nova
Message-ID:

Hi there!

The novnc package in Debian and Ubuntu is getting very old. So I thought
about upgrading to 1.0.0, which has lots of very nice newer features,
like the full screen mode, and so on.

All seemed to work, however, when trying to connect to the console of a
VM, NoVNC attempts to connect to https://example.com:6080/websockify and
then fails (with a 404).

So I was wondering: what's missing in my setup so that there's a
/websockify URL? Is there some missing code in the nova-novncproxy so
that it would forward this URL to /usr/bin/websockify? If so, has anyone
started working on it?

Also, what's the status of NoVNC with Python 3? I saw lots of print
statements which are easy to fix, though I even wonder if the code in
the python-novnc package is useful. Who's using it? Nova-novncproxy?
That's unlikely, since I didn't package a Python 3 version for it.

Cheers,

Thomas Goirand (zigo)

From zigo at debian.org  Sun May 20 14:03:17 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 20 May 2018 16:03:17 +0200
Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
In-Reply-To: <20180508191640.GA16227@sinanju.localdomain>
References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <1525805985-sup-7865@lrrr.local> <20180508191640.GA16227@sinanju.localdomain>
Message-ID:

On 05/08/2018 09:16 PM, Matthew Treinish wrote:
> Although, I don't think glance uses oslo.service even in the case where it's
> using the standalone eventlet server. It looks like it launches eventlet.wsgi
> directly:
>
> https://github.com/openstack/glance/blob/master/glance/common/wsgi.py

I can confirm this from my (bad) experience last week with Eventlet over SSL.

Cheers,

Thomas Goirand (zigo)

From jaypipes at gmail.com  Sun May 20 14:16:35 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Sun, 20 May 2018 10:16:35 -0400
Subject: [openstack-dev] [nova] Apply_cells to allow automation of nova-manage cells_v2 commands
In-Reply-To: <991A2C0D-D13B-45E1-BDFC-C5A7FA931CF8@godaddy.com>
References: <991A2C0D-D13B-45E1-BDFC-C5A7FA931CF8@godaddy.com>
Message-ID: <79fe4844-8228-cc97-16a4-896043a6a194@gmail.com>

On 05/16/2018 08:18 PM, David G. Bingham wrote:
> YoNova Gurus :-),
>
> We here at GoDaddy are getting hot and heavy into Cells V2 these days
> and would like to propose an enhancement or maybe see if something like
> this is already in the works.
>
> Need:
>
> To be able to "synchronize" cells from a specified file (git controlled,
> or inventory generated).
>
> Details:
>
> We are thinking about adding a new method to nova-manage called
> "apply-cells" that would take a json/yaml file and "make-it-so". This
> method would make the cells in the DB match exactly what the
> spec says, matching on the cell's name. Internally it calls its own
> create_cell, update_cell, and delete_cell commands to get it done.
>
> We already have a POC in the works. Are you aware of any others who have
> made requests for something like this? Ref:
> https://review.openstack.org/#/c/568987/

Hi David!

Excellent proposal.
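(To make the "make-it-so" behavior described above concrete, the reconcile
step might look roughly like this. A sketch only -- the spec layout and
the db helper names are hypothetical, not the actual POC code:)

import yaml

def apply_cells(spec_path, db):
    # Desired state from the git-controlled spec file, keyed on cell name.
    with open(spec_path) as f:
        desired = {c["name"]: c for c in yaml.safe_load(f)["cells"]}
    actual = {c.name: c for c in db.list_cells()}  # current DB state

    for name, spec in desired.items():
        if name not in actual:
            db.create_cell(name, spec["transport_url"],
                           spec["database_connection"])
        elif (actual[name].transport_url != spec["transport_url"] or
              actual[name].database_connection !=
                  spec["database_connection"]):
            db.update_cell(name, spec["transport_url"],
                           spec["database_connection"])

    # Anything in the DB but absent from the spec gets removed.
    for name in actual.keys() - desired.keys():
        db.delete_cell(name)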
I've reviewed the patch. I'm very supportive of Nova and other OpenStack
services moving in the direction that the larger infrastructure management
community has gone, which is having standardized, versioned YAML/JSON file
formats to describe configuration and inventory information.

I actually proposed using a versioned YAML descriptor document for
resource provider and inventory information in Nova:

https://review.openstack.org/#/c/550244/

It was abandoned because of various disagreements about the usefulness of
introducing yet another way of representing configuration and inventory
information. We already have the CONF file and REST API ways of
representing that information, so having a YAML-based way of describing
the information was seen as unnecessary.

I continue to think deprecating the CONF file ways of describing inventory
information and configuration data for objects inside the system -- as
opposed to the system itself -- is the best long-term approach because it
aligns OpenStack with where Terraform, Kubernetes, Ansible, Saltstack,
Helm, and lots of other related infrastructure management tools are.

Best,
-jay

From zigo at debian.org  Sun May 20 14:36:22 2018
From: zigo at debian.org (Thomas Goirand)
Date: Sun, 20 May 2018 16:36:22 +0200
Subject: [openstack-dev] [all][tc][ptls][glance] final stages of python 3 transition
In-Reply-To:
References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <1525800729-sup-4338@lrrr.local> <20180508175543.GB11443@zeong> <1525805985-sup-7865@lrrr.local> <20180508191640.GA16227@sinanju.localdomain> <557b34e5-f27f-6975-07fe-85f6b6c707d7@redhat.com>
Message-ID: <1d96b737-2a75-1514-c728-86b0758c7df8@debian.org>

On 05/14/2018 06:42 PM, Victoria Martínez de la Cruz wrote:
> Hi,
>
> Jumping in now as I'm helping with py3 support efforts in the manila side.
>
> In manila we have both support for Apache WSGI and the built-in server
> (which depends on eventlet). Would it be a possible workaround to rely
> on the Apache WSGI server while we wait for eventlet issues to be sorted
> out? Is there any chance the upper constraints will be updated soon-ish
> and this can be fixed in a newer eventlet version?

Probably we can update the upper-constraints file, though the newer
eventlet doesn't have a fix. This issue has been around for more than 2
years, and nobody seems to be working on it.

> This is the only change that's preventing us from being fully py3
> compatible, hence it's a big deal for us.

I don't think Eventlet is a blocker, as long as you're supporting uwsgi
and Apache2. The case of Glance not supporting Apache2 is a real issue
though, for Ubuntu at least, since they don't want uwsgi to be promoted
to main (i.e. it's in Universe, and they don't support it for security).

As for Debian, since it is looking like I managed to find a solution for
running everything in uwsgi, I will probably do that. I have btw recently
joined the team maintaining uwsgi in Debian, and managed to fix all RC
bugs on it.

Like Matt, I do prefer to be able to restart only *one* daemon at a time,
which is why I don't really like setting up everything with mod_wsgi.
Though in such a setup, I wonder what the point is to still use Apache
for proxying the requests. Is there any added value in doing that?

Also, does anyone know if uwsgi uses the Python subinterpreter thing,
which is the reason why mod_wsgi is outperforming everything else?
If I'm not mistaking, the API is described at: https://www.python.org/dev/peps/pep-0554/ (ie: PEP554), and that, if I understand correctly, works around the global interpreter lock issue. As much as I could see, uwsgi doesn't use that, so Apache should still be outperforming uwsgi. Cheers, Thomas Goirand (zigo) From juliaashleykreger at gmail.com Sun May 20 14:45:46 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sun, 20 May 2018 10:45:46 -0400 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core Message-ID: Greetings everyone! I would like to propose Mark Goddard to ironic-core. I am aware he recently joined kolla-core, but his contributions in ironic have been insightful and valuable. The kind of value that comes from operative use. I also make this nomination knowing that our community landscape is changing and that we must not silo our team responsibilities or ability to move things forward to small highly focused team. I trust Mark to use his judgement as he has time or need to do so. He might not always have time, but I think at the end of the day, we’re all in that same boat. -Julia -------------- next part -------------- An HTML attachment was scrubbed... URL: From mtreinish at kortar.org Sun May 20 16:24:47 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Sun, 20 May 2018 12:24:47 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <01588869-e3be-5157-013d-4d9a633108d5@debian.org> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org> <20180519175453.GB29003@sinanju.localdomain> <01588869-e3be-5157-013d-4d9a633108d5@debian.org> Message-ID: <20180520162446.GA13805@sinanju.localdomain> On Sun, May 20, 2018 at 03:05:34PM +0200, Thomas Goirand wrote: > On 05/19/2018 07:54 PM, Matthew Treinish wrote: > > On Sat, May 19, 2018 at 07:04:53PM +0200, Thomas Goirand wrote: > >> using: > >> - RBD backend > >> - swift backend > >> - swift+rgw > > > > As for the backend store choice I don't have any personal experience using > > either of these 3 as a backend store. That being said your choice of store > > should be independent from the getting glance-api deployed behind uwsgi > > and a webserver. > > > > Although, you might have trouble with swift on py3, because IIRC that still > > isn't working. (unless something changed recently) But, the store config is > > really independent from getting the api to receive and handle api requests > > properly. > > Thanks for these details. What exactly is the trouble with the Swift > backend? Do you know? Is anyone working on fixing it? At my company, > we'd be happy to work on that (if of course, it's not too time demanding). > Sorry I didn't mean the swift backend, but swift itself under python3: https://wiki.openstack.org/wiki/Python3#OpenStack_applications_.28tc:approved-release.29 If you're trying to deploy everything under python3 I don't think you'll be able to deploy swift. But if you already have a swift running then the glance backend should work fine under pythom 3. > >>> The issues glance has with running in a wsgi app are related to it's > >>> use of async tasks via taskflow. 
(which includes the tasks api and > >>> image import stuff) This shouldn't be hard to fix, and I've had > >>> patches up to address these for months: > >>> > >>> https://review.openstack.org/#/c/531498/ > >>> https://review.openstack.org/#/c/549743/ > >> > >> Do I need to backport these patches to Queens to run Glance the way I > >> described? Will it also fix running Glance with mod_wsgi? > > > > These patches are independent of getting things working for you. They > > are only required for 2 API features in glance to work. The tasks api and > > the image import api (which was added in queens). You don't need either > > to upload images by default, and the patches will only ever be necessary > > if you have something using those APIs (which personally I've never > > encountered in the wild). There is also no test coverage in tempest or > > any external test suite using these apis that I'm aware of so your CI > > likely won't even be blocked by this. (which is how this situation > > arose in the first place) > > Allright, So hopefully, I'm very close from having Debian to gate > properly in puppet-openstack upstream. As much as I could tell, Glance > and Cinder are the only pieces that are still failing with SSL (and > everything works already without SSL), so I must be very close to a nice > result (after a course of nearly 2 months already). > > Thanks again for all the very valuable details that you provided. I have > to admit that I was starting to loose faith in the project, because of > all the frustration of not finding a working solution. > > I'll let the list knows when I have something that fully works and > gating with puppet-openstack, of course. > > Cheers, > > Thomas Goirand (zigo) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mriedemos at gmail.com Sun May 20 16:33:37 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 20 May 2018 09:33:37 -0700 Subject: [openstack-dev] Setting-up NoVNC 1.0.0 with nova In-Reply-To: References: Message-ID: <9d76a294-82f0-4384-fee2-01043be19789@gmail.com> On 5/20/2018 6:37 AM, Thomas Goirand wrote: > The novnc package in Debian and Ubuntu is getting very old. So I thought > about upgrading to 1.0.0, which has lots of very nice newer features, > like the full screen mode, and so on. > > All seemed to work, however, when trying to connect to the console of a > VM, NoVNC attempts to connect tohttps://example.com:6080/websockify and > then fails (with a 404). > > So I was wondering: what's missing in my setup so that there's a > /websockify URL? Is there some missing code in the nova-novncproxy so > that it would forward this URL to /usr/bin/websockify? If so, has anyone > started working on it? > > Also, what's the status of NoVNC with Python 3? I saw lots of print > statements which are easy to fix, though I even wonder if the code in > the python-novnc package is useful. Who's using it? Nova-novncproxy? > That's unlikely, since I didn't package a Python 3 version for it. Stephen Finucane (stephenfin on irc) would know best at this point, but I know he ran into some issues with configuring nova when using novnc 1.0.0, so check your novncproxy_base_url config option value: https://docs.openstack.org/nova/latest/configuration/config.html#vnc.novncproxy_base_url Specifically: "If using noVNC >= 1.0.0, you should use vnc_lite.html instead of vnc_auto.html." 
-- Thanks, Matt From lbragstad at gmail.com Sun May 20 18:04:02 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Sun, 20 May 2018 11:04:02 -0700 Subject: [openstack-dev] [keystone] team dinner In-Reply-To: <1d3de132-ead9-cd24-e5fe-f5577c3227c6@gmail.com> References: <1d3de132-ead9-cd24-e5fe-f5577c3227c6@gmail.com> Message-ID: <6ef40bb3-c0b9-c5a7-a612-dd6739729ad0@gmail.com> Alright, based on the responses it looks like Tuesday is going to be the best option for everyone. There was one suggestion for sushi and it looks like there are more than a few places around. Here are the ones I've found: http://www.momogastown.ca/menus/ http://sushiyan.ca/#/menu http://urbansushi.com/ There is also other stuff close by, like: http://steamworks.com/brew-pub https://www.cactusclubcafe.com/?utm_source=google-maps&utm_medium=organic&utm_campaign=coal-harbour Or if you've gone to a place you'd like to recommend, suggestions are welcome! On 05/18/2018 08:39 AM, Lance Bragstad wrote: > Hey all, > > I put together a survey to see if we can plan a night to have supper > together [0]. I'll start parsing responses tomorrow and see what we can > get lined up. > > Thanks and safe travels to Vancouver, > > Lance > > [0] https://goo.gl/forms/ogNsf9dUno8BHvqu1 > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From john.studarus at openstacksandiego.org Sun May 20 20:28:09 2018 From: john.studarus at openstacksandiego.org (John Studarus) Date: Sun, 20 May 2018 13:28:09 -0700 Subject: [openstack-dev] OpenStack US & CA speaker opportunities Message-ID: <1637f3cef87.11d475b0f196695.942736707121116595@openstacksandiego.org> Dear OpenStack PTLs, devs, operators, and community leaders, We're reaching out to those interested in presenting at events across the US & Canada. The first opportunity is this July 10th, at the Intel Campus in Santa Clara, CA. The SF Bay Area OpenStack group is organizing a half day of presentations and labs with an evening social event to showcase Open Infrastructure and Cloud Native technologies (like containers and SDN). We have a number of invited, sponsored breakout sessions and lightning talks available. If you're interested, feel free to contact us directly via email or at the submission page below. https://www.papercall.io/openstack-8th-san-jose We're also happy to co-ordinate events with the Meetup groups across the US and Canada. If you're looking to get out and talk, just drop us a note and we can co-ordinate which groups would be convenient for you. Perhaps you'll be traveling and have the evening free to speak to a local group? We can make it happen! All three of us will be in Vancouver this week if you'd like to talk in person. John, Lisa, & Stacy OpenStack Ambassadors for North America and Canada ---- John Studarus - OpenStack Ambassador - John at OpenStackSanDiego.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Mon May 21 00:15:24 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 21 May 2018 02:15:24 +0200 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> <1525125561-sup-8369@lrrr.local> Message-ID: On 05/07/2018 01:36 PM, Jean-Philippe Evrard wrote: > We've been juggling with python3, ansible and multiple distros for a while now. 
> That dance hasn't been fruitful: many hidden issues, either due to > ansible modules, or our own modules, or upgrade issues. > > I've recently decided to simplify the python2/3 story. > > Queens and all the stable branches will be python2 only (python3 will > not be used anymore, to simplify the code). > > For Rocky, we plan to use as much as possible the distribution > packages for the python stack, if it's recent enough for our source > installs. > Ubuntu 16.04 will have python2, SUSE has python2, CentOS has no > appropriate package, so we are pip installing things (and using > python2). > So... If people work on Ubuntu 18.04 support, we could try a python3 > only system. Nobody is working on it right now. /me raises hand! At the moment, I've got a nearly full Queens stack up and running on top of Debian Stretch, using Python 3 only (and of course all of that is also uploaded to Debian Sid). The puppet-openstack scenario001 works fully without SSL; last week I was able to fix Cinder, today Glance (thanks to Matt) and Nova, and it looks like I've got one remaining issue with Neutron that I didn't have time to investigate fully yet (I've got to dig in the logs). But it's looking good. Hopefully, next week I'll be able to tell everything works. So, have a try with Debian? :) Cheers, Thomas Goirand (zigo) From zigo at debian.org Mon May 21 00:19:14 2018 From: zigo at debian.org (Thomas Goirand) Date: Mon, 21 May 2018 02:19:14 +0200 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <20180520162446.GA13805@sinanju.localdomain> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org> <20180519175453.GB29003@sinanju.localdomain> <01588869-e3be-5157-013d-4d9a633108d5@debian.org> <20180520162446.GA13805@sinanju.localdomain> Message-ID: <1ce99379-52a6-695b-97c9-6162b7f8a2bd@debian.org> On 05/20/2018 06:24 PM, Matthew Treinish wrote: > On Sun, May 20, 2018 at 03:05:34PM +0200, Thomas Goirand wrote: >> Thanks for these details. What exactly is the trouble with the Swift >> backend? Do you know? Is anyone working on fixing it? At my company, >> we'd be happy to work on that (if of course, it's not too time demanding). >> > > Sorry, I didn't mean the swift backend, but swift itself under python3: > > https://wiki.openstack.org/wiki/Python3#OpenStack_applications_.28tc:approved-release.29 > > If you're trying to deploy everything under python3 I don't think you'll be > able to deploy swift. But if you already have a swift running then the glance > backend should work fine under python 3. Of course I know Swift isn't Python 3 ready. And that's sad... :/ However, we did also experience issues with the swift backend last week. Hopefully, with the switch to uwsgi it's going to work. I'll let you know if that's not the case. 
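For reference, the kind of uwsgi setup I mean is roughly the following (a minimal sketch, assuming the glance-wsgi-api script that recent Glance releases ship; the socket address, script path and process counts are illustrative, not our exact configuration):

[uwsgi]
# Serve the Glance API WSGI application on a local socket
http-socket = 127.0.0.1:9292
wsgi-file = /usr/bin/glance-wsgi-api
processes = 4
threads = 2

with Apache (or HAProxy) in front, proxying requests to that socket.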
Cheers, Thomas Goirand (zigo) From chris.friesen at windriver.com Mon May 21 06:27:10 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Sun, 20 May 2018 23:27:10 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> Message-ID: <5B0266BE.20700@windriver.com> On 05/19/2018 05:58 PM, Blair Bethwaite wrote: > G'day Jay, > > On 20 May 2018 at 08:37, Jay Pipes wrote: >> If it's not the VM or baremetal machine that is using the accelerator, what >> is? > > It will be a VM or BM, but I don't think accelerators should be tied > to the life of a single instance if that isn't technically necessary > (i.e., they are hot-pluggable devices). I can see plenty of scope for > use-cases where Cyborg is managing devices that are accessible to > compute infrastructure via network/fabric (e.g. rCUDA or dedicated > PCIe fabric). And even in the simple pci passthrough case (vfio or > mdev) it isn't hard to imagine use-cases for workloads that only need > an accelerator sometimes. Currently nova only supports attach/detach of volumes and network interfaces. Is Cyborg looking to implement new Compute API operations to support hot attach/detach of various types of accelerators? Chris From zhipengh512 at gmail.com Mon May 21 07:11:55 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 21 May 2018 15:11:55 +0800 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <5B0266BE.20700@windriver.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> Message-ID: @Chris yes we are actively exploring this option :) On Mon, May 21, 2018 at 2:27 PM, Chris Friesen wrote: > On 05/19/2018 05:58 PM, Blair Bethwaite wrote: >> G'day Jay, >> >> On 20 May 2018 at 08:37, Jay Pipes wrote: >>> If it's not the VM or baremetal machine that is using the accelerator, >>> what >>> is? >>> >> >> It will be a VM or BM, but I don't think accelerators should be tied >> to the life of a single instance if that isn't technically necessary >> (i.e., they are hot-pluggable devices). I can see plenty of scope for >> use-cases where Cyborg is managing devices that are accessible to >> compute infrastructure via network/fabric (e.g. rCUDA or dedicated >> PCIe fabric). And even in the simple pci passthrough case (vfio or >> mdev) it isn't hard to imagine use-cases for workloads that only need >> an accelerator sometimes. >> > > Currently nova only supports attach/detach of volumes and network > interfaces. Is Cyborg looking to implement new Compute API operations to > support hot attach/detach of various types of accelerators? > > Chris > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Mon May 21 07:28:37 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 21 May 2018 15:28:37 +0800 Subject: [openstack-dev] [kolla] weekly meeting is cancelled May 23 Message-ID: Hi guys, we are canceling the next Kolla weekly meeting, on May 23rd, due to the summit. It will resume on May 30th. -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From blair.bethwaite at gmail.com Mon May 21 08:05:22 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Mon, 21 May 2018 01:05:22 -0700 Subject: [openstack-dev] [cyborg] [nova] Cyborg quotas In-Reply-To: <5B0266BE.20700@windriver.com> References: <6d8232e3-79ca-c61d-ad64-a99e923e2114@intel.com> <4ba31b19-98cf-25b2-a0c2-0f64a29756e8@gmail.com> <376d9f27-b264-cc2e-6dc2-5ee8ae773f95@intel.com> <0602af19-987b-e200-d49d-754bec4c0556@intel.com> <6dfca497-29fc-add7-a251-c1dfd5ae4655@gmail.com> <5B0266BE.20700@windriver.com> Message-ID: (Please excuse the top-posting) The other possibility is that the Cyborg managed devices are plumbed in via IP in guest network space. Then "attach" isn't so much a Nova problem as a Neutron one - probably similar to Manila. Has the Cyborg team considered a RESTful-API proxy driver, i.e., something that wraps a vendor-specific accelerator service and makes it friendly to a multi-tenant OpenStack cloud? Quantum co-processors might be a compelling example which fits this model. Cheers, On Sun., 20 May 2018, 23:28 Chris Friesen, wrote: > On 05/19/2018 05:58 PM, Blair Bethwaite wrote: > > G'day Jay, > > > > On 20 May 2018 at 08:37, Jay Pipes wrote: > >> If it's not the VM or baremetal machine that is using the accelerator, > what > >> is? > > > > It will be a VM or BM, but I don't think accelerators should be tied > > to the life of a single instance if that isn't technically necessary > > (i.e., they are hot-pluggable devices). I can see plenty of scope for > > use-cases where Cyborg is managing devices that are accessible to > > compute infrastructure via network/fabric (e.g. rCUDA or dedicated > > PCIe fabric). And even in the simple pci passthrough case (vfio or > > mdev) it isn't hard to imagine use-cases for workloads that only need > > an accelerator sometimes. > > Currently nova only supports attach/detach of volumes and network > interfaces. > Is Cyborg looking to implement new Compute API operations to support hot > attach/detach of various types of accelerators? > > Chris > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Mon May 21 09:15:33 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Mon, 21 May 2018 17:15:33 +0800 Subject: [openstack-dev] [Openstack-operators][heat] Heat sessions in Vancouver summit!! And they're all on Tuesday! 
Message-ID: Dear all, As the Summit is about to start, I'm looking forward to meeting all of you here. Don't miss the sessions from the Heat team. They're all on Tuesday! Feel free to let me know if there's anything you hope to see or learn from the sessions. I will try my best to prepare it for you. *Tuesday, May 22* *9:00am - 9:40am Users & Ops feedback for Heat* Vancouver Convention Centre West - Level Two - Room 220 https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21713/users-and-ops-feedback-for-heat *11:00am - 11:20am Heat - Project Update* Vancouver Convention Centre West - Level Two - Room 212 https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21595/heat-project-update *1:50pm - 2:30pm Heat - Project Onboarding* Vancouver Convention Centre West - Level Two - Room 223 https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21629/heat-project-onboarding See you all on Tuesday!! -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From janzian at us.ibm.com Mon May 21 13:38:15 2018 From: janzian at us.ibm.com (James Anziano) Date: Mon, 21 May 2018 13:38:15 +0000 Subject: [openstack-dev] Neutron bug deputy report Message-ID: An HTML attachment was scrubbed... URL: From lhinds at redhat.com Mon May 21 13:49:43 2018 From: lhinds at redhat.com (Luke Hinds) Date: Mon, 21 May 2018 14:49:43 +0100 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. Message-ID: A few operators have asked whether it's possible to limit sudo's coverage on both the under / overcloud. There is concern over `ALL=(ALL) NOPASSWD:ALL`, which allows someone to `sudo su`. This task has come under the care of the tripleo security squad. The work is being tracked and discussed here [0]. So far it looks like the approach will be to use regexps within /etc/sudoers.d/* to narrow down as close as possible to the specific commands called. Some services already do this with rootwrap: ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap /etc/ironic/rootwrap.conf * It's fairly easy to pick up a list of all sudo calls using a simple script [1]. The other prolific user of sudo is ansible / stack, for example: /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff; /usr/bin/python /home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/systemd.py; rm -rf "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" > /dev/null 2>&1 My feeling here is to again use regexps around the immutable, non-random parts of the command. cjeanner also made some suggestions in the etherpad [0]. However, aside from the approach itself, we need to consider the impact locking down might have should someone develop a new bit of code that leverages commands wrapped in sudo and assumes ALL will be in place. This, of course, will be blocked. Now my guess is that our CI would capture this, as the deploy would fail(?), and the developer would work out that an entry is needed when testing their patch, but I wanted to open this up to others who know testing at the gate much better than I do. I also encourage any thoughts on the topic to be added to the etherpad [0]. [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security [1] https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f -- Luke Hinds -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at jimrollenhagen.com Mon May 21 13:49:43 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 21 May 2018 09:49:43 -0400 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: On Sun, May 20, 2018 at 10:45 AM, Julia Kreger wrote: > Greetings everyone! > > I would like to propose Mark Goddard to ironic-core. I am aware he > recently joined kolla-core, but his contributions in ironic have been > insightful and valuable. The kind of value that comes from operational use. > > I also make this nomination knowing that our community landscape is > changing and that we must not silo our team responsibilities or our ability to > move things forward to a small, highly focused team. I trust Mark to use his > judgement as he has time or need to do so. He might not always have time, > but I think at the end of the day, we’re all in that same boat. > +2! // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Mon May 21 16:56:53 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 21 May 2018 18:56:53 +0200 Subject: [openstack-dev] [docs] Updates to openstack-doc-core Message-ID: <20180521185653.bede6b31b8e09c259047774d@redhat.com> Hi all, I'd like to thank Brian Moss and Maria Zlatkova, who recently stepped down from the docs core team, for all their contributions to OpenStack documentation in the past years. Thank you for your community work, insight, and ideas you shared with the community while in your core role. Hope to see you around OpenStack or another open source project in the future! Cheers, pk From tenobreg at redhat.com Mon May 21 17:38:33 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Mon, 21 May 2018 10:38:33 -0700 Subject: [openstack-dev] [sahara] Meeting Canceled Message-ID: Hi folks, Since I'm attending the OpenStack Summit, I won't be able to attend the meeting this week and therefore I'm canceling it. If anything critical comes up and needs some discussion this week, send an email on the ML and I will reply ASAP. Thanks, -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat is recognized as one of the best companies to work for in Brazil by Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Mon May 21 18:40:13 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 21 May 2018 11:40:13 -0700 Subject: [openstack-dev] Vancouver Forum Etherpad List In-Reply-To: <20180509144811.GA16802@sm-xps> References: <20180509144811.GA16802@sm-xps> Message-ID: I would like to re-post Sean's message about the YVR Forum etherpad list, as I believe it is worth sharing again now. On Wed, May 9, 2018 at 7:48, Sean McGinnis wrote: > We are now less than two weeks away from the next Summit/Forum in > Vancouver. > Hopefully teams are able to spend some time preparing for their Forum > sessions > to make them productive. > > I have updated the Forum wiki page to start collecting links to session > etherpads: > > https://wiki.openstack.org/wiki/Forum/Vancouver2018 > > Please update this page with your etherpads as they are ready, to make this > one > easy place to go to for all sessions. I have started populating some > sessions > so there is a start, but there are many that still need to be filled in. > > Looking forward to another week in Vancouver. > > Thanks! 
> Sean > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Mon May 21 19:16:04 2018 From: me at not.mn (John Dickinson) Date: Mon, 21 May 2018 12:16:04 -0700 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <1ce99379-52a6-695b-97c9-6162b7f8a2bd@debian.org> References: <1525100618-sup-9669@lrrr.local> <297b4c6f-5ce1-1ab5-88db-92b7e06174de@ham.ie> <1525794769-sup-717@lrrr.local> <200c62f6-c13c-5082-1662-692aacf2b581@ham.ie> <20180508162256.GA11443@zeong> <830c30db-3dde-c3ac-8e99-937247e72e7f@debian.org> <20180519175453.GB29003@sinanju.localdomain> <01588869-e3be-5157-013d-4d9a633108d5@debian.org> <20180520162446.GA13805@sinanju.localdomain> <1ce99379-52a6-695b-97c9-6162b7f8a2bd@debian.org> Message-ID: On 20 May 2018, at 17:19, Thomas Goirand wrote: > On 05/20/2018 06:24 PM, Matthew Treinish wrote: >> On Sun, May 20, 2018 at 03:05:34PM +0200, Thomas Goirand wrote: >>> Thanks for these details. What exactly is the trouble with the Swift >>> backend? Do you know? Is anyone working on fixing it? At my company, >>> we'd be happy to work on that (if of course, it's not too time >>> demanding). >>> >> >> Sorry, I didn't mean the swift backend, but swift itself under >> python3: >> >> https://wiki.openstack.org/wiki/Python3#OpenStack_applications_.28tc:approved-release.29 >> >> If you're trying to deploy everything under python3 I don't think >> you'll be >> able to deploy swift. But if you already have a swift running then >> the glance >> backend should work fine under python 3. > > Of course I know Swift isn't Python 3 ready. And that's sad... :/ yep. we're still working on it. slowly. > > However, we did also experience issues with the swift backend last > week. > Hopefully, with the switch to uwsgi it's going to work. I'll let you > know if that's not the case. Is the "switch to uwsgi" something about how you're running swift or something about how you're running glance? FWIW, my experience with putting TLS in front of Swift is to run Swift as "normal" (i.e. run `swift-proxy-server /etc/swift/proxy-server.conf` itself instead of under apache or nginx or something else). Then use HAProxy or hitch to terminate TLS and forward that internally to the proxy server. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From smonderer at vasonanetworks.com Mon May 21 19:19:26 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Mon, 21 May 2018 22:19:26 +0300 Subject: [openstack-dev] [tripleo] cannot configure host kernel-args for pci passthrough with first-boot Message-ID: Hi, I'm trying to build a new OpenStack environment with RHOSP 11 with a compute host that has a GPU card. I've added a new role and a firstboot template to configure the kernel args to allow pci-passthrough.
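For context, the usual shape of such a firstboot template is roughly the following (a generic sketch of the common pattern, registered as OS::TripleO::NodeUserData in the resource_registry — the kernel args here are illustrative; see the attached templates for what I actually used):

heat_template_version: 2014-10-16

resources:
  userdata:
    type: OS::Heat::MultipartMime
    properties:
      parts:
      - config: {get_resource: set_kernel_args}

  set_kernel_args:
    type: OS::Heat::SoftwareConfig
    properties:
      config: |
        #!/bin/bash
        set -eux
        # Append IOMMU flags to the default kernel command line (illustrative args)
        sed -i 's/^GRUB_CMDLINE_LINUX="/&intel_iommu=on iommu=pt /' /etc/default/grub
        grub2-mkconfig -o /boot/grub2/grub.cfg

outputs:
  OS::stack_id:
    value: {get_resource: userdata}

(The node needs a reboot for the new kernel arguments to take effect.)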
For some reason the firstboot is not working (I can't see the changes on the compute node). Attached are the templates I used to deploy the environment. I used the same configuration I used for a compute role with SR-IOV, and it worked there. Could someone tell me what I missed? Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: templates.zip Type: application/zip Size: 18741 bytes Desc: not available URL: From emilien at redhat.com Mon May 21 20:58:26 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 21 May 2018 13:58:26 -0700 Subject: [openstack-dev] [tripleo] Migration to Storyboard In-Reply-To: References: Message-ID: During the Storyboard session today: https://etherpad.openstack.org/p/continuing-the-migration-lp-sb We mentioned that TripleO would continue to migrate during the Rocky cycle. Like Alex mentioned in this thread, we need to migrate the scripts used by the CI squad so they work with SB. Once this is done, we'll proceed to the full migration of all blueprints and bugs into the tripleo-common project in SB. Projects like tripleo-validations and tripleo-ui (more?), which have a 1:1 mapping between their "name" and a project repository, could use a dedicated project in SB, although we need to keep things simple for our users so they know where to file a bug without confusion. We hope to proceed during Rocky, but it'll probably take some time to update our scripts and documentation and educate our community to use the tool, so we expect the Stein cycle to be the first cycle where we actually consume SB. I really wanted to thank the SB team for their patience and help; TripleO is big and this migration hasn't been easy, but we'll make it :-) Thanks, On Tue, May 15, 2018 at 7:53 AM, Alex Schultz wrote: > Bumping this up so folks can review this. It was mentioned in this > week's meeting that it would be a good idea for folks to take a look > at Storyboard to get familiar with it. The upstream docs have been > updated[0] to point to the differences when dealing with proposed > patches. Please take some time to review this and raise any > concerns/issues now. > > Thanks, > -Alex > > [0] https://docs.openstack.org/infra/manual/developers.html#development-workflow > > On Wed, May 9, 2018 at 1:24 PM, Alex Schultz wrote: > > Hello tripleo folks, > > > > So we've been experimenting with migrating some squads over to > > storyboard[0], but this seems to be causing more issues than perhaps > > it's worth. Since the upstream community would like to standardize on > > Storyboard at some point, I would propose that we do a cut over of all > > the tripleo bugs/blueprints from Launchpad to Storyboard. > > > > In the irc meeting this week[1], I asked that the tripleo-ci team make > > sure the existing scripts that we use to monitor bugs for CI support > > Storyboard. I would consider this a prerequisite for the migration. > > I am thinking it would be beneficial to get this done before or as > > close to M2 as possible. > > > > Thoughts, concerns, etc? 
> > > > Thanks, > > -Alex > > > > [0] https://storyboard.openstack.org/#!/project_group/76 > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2018/tripleo.2018-05-08-14.00.log.html#l-42 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon May 21 22:51:37 2018 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 21 May 2018 19:51:37 -0300 Subject: [openstack-dev] Problem when deploying Openstack with Kolla Message-ID: Hello OpenStackers, First of all, I am not sure if this is the right list to post this question. Therefore, please excuse me if I am sending an e-mail to the wrong place. So, I have been trying to use Kolla to deploy a POC environment of OpenStack. However, I have not been able to do so. Right now I am getting the following error: fatal: [localhost]: FAILED! => {"msg": "The conditional check > '(neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups | > bool) or (neutron_vpnaas_agent.enabled | bool and > neutron_vpnaas_agent.host_in_groups | bool)' failed. The error was: error > while evaluating conditional ((neutron_l3_agent.enabled | bool and > neutron_l3_agent.host_in_groups | bool) or (neutron_vpnaas_agent.enabled | > bool and neutron_vpnaas_agent.host_in_groups | bool)): Unable to look up a > name or access an attribute in template string ({{ inventory_hostname in > groups['neutron-vpnaas-agent'] }}).\nMake sure your variable name does not > contain invalid characters like '-': argument of type 'StrictUndefined' is > not iterable\n\nThe error appears to have been in > '/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml': > line 2, column 3, but may\nbe elsewhere in the file depending on the exact > syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: > Setting sysctl values\n ^ here\n"} > It looks like an Ansible problem. I checked the file “/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml” at line 5; it has the following declaration: > neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}" > As far as I understand, everything is ok with this variable declaration. There is the “neutron-l3-agent” parameter used to retrieve an element from the “neutron_services” map, but that does look ok. Has anybody else experienced this problem before? I am using Kolla for OpenStack Queens. I am using Kolla with the following command. > kolla-ansible -i all-in-one bootstrap-servers && kolla-ansible -i > all-in-one prechecks && kolla-ansible -i all-in-one deploy > As you can see, it is a simple use case to deploy OpenStack in a single node. The command that is failing is the following. > kolla-ansible -i all-in-one deploy > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Tue May 22 01:49:07 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Tue, 22 May 2018 09:49:07 +0800 Subject: [openstack-dev] Problem when deploying Openstack with Kolla In-Reply-To: References: Message-ID: Seems there is some issue in your inventory file. Could you compare your inventory file with the one in the kolla-ansible code? 
if you are still not fix it, try to provide you globals.yml file and inventory file in ML. On Tue, May 22, 2018 at 6:51 AM, Rafael Weingärtner < rafaelweingartner at gmail.com> wrote: > Hello OpenStackers, > First of all, I am not sure if this is the right list to post this > question. Therefore, please excuse me if I am sending an e-mail to the > wrong place. > > So, I have been trying to use Kolla to deploy a POC environment of > OpenStack. However, I have not been able to do so. Right now I am getting > the following error: > > fatal: [localhost]: FAILED! => {"msg": "The conditional check >> '(neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups | >> bool) or (neutron_vpnaas_agent.enabled | bool and >> neutron_vpnaas_agent.host_in_groups | bool)' failed. The error was: >> error while evaluating conditional ((neutron_l3_agent.enabled | bool and >> neutron_l3_agent.host_in_groups | bool) or (neutron_vpnaas_agent.enabled >> | bool and neutron_vpnaas_agent.host_in_groups | bool)): Unable to look >> up a name or access an attribute in template string ({{ inventory_hostname >> in groups['neutron-vpnaas-agent'] }}).\nMake sure your variable name does >> not contain invalid characters like '-': argument of type 'StrictUndefined' >> is not iterable\n\nThe error appears to have been in >> '/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml': >> line 2, column 3, but may\nbe elsewhere in the file depending on the exact >> syntax problem.\n\nThe offending line appears to be:\n\n---\n- name: >> Setting sysctl values\n ^ here\n"} >> > > It looks like an Ansible problem. I checked the file > “/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml” > at line 5, it has the following declaration: > >> neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}" >> > > As far as I understand everything is ok with this variable declaration. > There is the “neutron-l3-agent” parameter used to retrieve an element from > “neutron_services” map, but that does look ok. Has anybody else experienced > this problem before? > > I am using Kolla for OpenStack queens. I am using kolla with the following > command. > >> kolla-ansible -i all-in-one bootstrap-servers && kolla-ansible -i >> all-in-one prechecks && kolla-ansible -i all-in-one deploy >> > > As you can see, it is a simple use case to deploy OpenStack in a single > node. The command that is failing is the following. > >> kolla-ansible -i all-in-one deploy >> > > -- > Rafael Weingärtner > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue May 22 02:27:00 2018 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 21 May 2018 23:27:00 -0300 Subject: [openstack-dev] Problem when deploying Openstack with Kolla In-Reply-To: References: Message-ID: Well, everything is pretty standard (I am only deploying a POC), I am following this "documentation"[1]. I did not change much from the default files. [1] https://docs.openstack.org/project-deploy-guide/kolla-ansible/queens/quickstart.html# Globals: > --- > # You can use this file to override _any_ variable throughout Kolla. 
> # Additional options can be found in the > # 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all the > # commented parameters are shown here, To override the default value > uncomment > # the parameter and change its value. > > ############### > # Kolla options > ############### > # Valid options are [ COPY_ONCE, COPY_ALWAYS ] > #config_strategy: "COPY_ALWAYS" > > # Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu'] > #kolla_base_distro: "centos" > kolla_base_distro: "ubuntu" > > # Valid options are [ binary, source ] > #kolla_install_type: "binary" > kolla_install_type: "source" > > # Valid option is Docker repository tag > #openstack_release: "" > #openstack_release: "master" > openstack_release: "queens" > > # Location of configuration overrides > #node_custom_config: "/etc/kolla/config" > > # This should be a VIP, an unused IP on your network that will float > between > # the hosts running keepalived for high-availability. If you want to run an > # All-In-One without haproxy and keepalived, you can set enable_haproxy to > no > # in "OpenStack options" section, and set this value to the IP of your > # 'network_interface' as set in the Networking section below. > network_interface: "enp0s8" > kolla_internal_vip_address: "192.168.56.250" > > # This is the DNS name that maps to the kolla_internal_vip_address VIP. By > # default it is the same as kolla_internal_vip_address. > #kolla_internal_fqdn: "{{ kolla_internal_vip_address }}" > > # This should be a VIP, an unused IP on your network that will float > between > # the hosts running keepalived for high-availability. It defaults to the > # kolla_internal_vip_address, allowing internal and external communication > to > # share the same address. Specify a kolla_external_vip_address to separate > # internal and external requests between two VIPs. > #kolla_external_vip_address: "{{ kolla_internal_vip_address }}" > > # The Public address used to communicate with OpenStack as set in the > public_url > # for the endpoints that will be created. This DNS name should map to > # kolla_external_vip_address. > #kolla_external_fqdn: "{{ kolla_external_vip_address }}" > > ################ > # Docker options > ################ > # Below is an example of a private repository with authentication. Note the > # Docker registry password can also be set in the passwords.yml file. > > #docker_registry: "172.16.0.10:4000" > #docker_namespace: "companyname" > #docker_registry_username: "sam" > #docker_registry_password: "correcthorsebatterystaple" > > ################### > # Messaging options > ################### > # Below is an example of an separate backend that provides brokerless > # messaging for oslo.messaging RPC communications > > #om_rpc_transport: "amqp" > #om_rpc_user: "{{ qdrouterd_user }}" > #om_rpc_password: "{{ qdrouterd_password }}" > #om_rpc_port: "{{ qdrouterd_port }}" > #om_rpc_group: "qdrouterd" > > > ############################## > # Neutron - Networking Options > ############################## > # This interface is what all your api services will be bound to by default. > # Additionally, all vxlan/tunnel and storage network traffic will go over > this > # interface by default. This interface must contain an IPv4 address. 
> # It is possible for hosts to have non-matching names of interfaces - > these can > # be set in an inventory file per host or per group or stored separately, > see > # http://docs.ansible.com/ansible/intro_inventory.html > # Yet another way to workaround the naming problem is to create a bond for > the > # interface on all hosts and give the bond name here. Similar strategy can > be > # followed for other types of interfaces. > #network_interface: "eth0" > > # These can be adjusted for even more customization. The default is the > same as > # the 'network_interface'. These interfaces must contain an IPv4 address. > #kolla_external_vip_interface: "{{ network_interface }}" > #api_interface: "{{ network_interface }}" > #storage_interface: "{{ network_interface }}" > #cluster_interface: "{{ network_interface }}" > #tunnel_interface: "{{ network_interface }}" > #dns_interface: "{{ network_interface }}" > > # This is the raw interface given to neutron as its external network port. > Even > # though an IP address can exist on this interface, it will be unusable in > most > # configurations. It is recommended this interface not be configured with > any IP > # addresses for that reason. > #neutron_external_interface: "eth1" > neutron_external_interface: "enp0s9" > > # Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, > opendaylight ] > neutron_plugin_agent: "linuxbridge" > > # Valid options are [ internal, infoblox ] > #neutron_ipam_driver: "internal" > > > #################### > # keepalived options > #################### > # Arbitrary unique number from 0..255 > #keepalived_virtual_router_id: "51" > > > ############# > # TLS options > ############# > # To provide encryption and authentication on the > kolla_external_vip_interface, > # TLS can be enabled. When TLS is enabled, certificates must be provided > to > # allow clients to perform authentication. 
> #kolla_enable_tls_external: "no" > #kolla_external_fqdn_cert: "{{ node_config_directory > }}/certificates/haproxy.pem" > > > ############## > # OpenDaylight > ############## > #enable_opendaylight_qos: "no" > #enable_opendaylight_l3: "yes" > > ################### > # OpenStack options > ################### > # Use these options to set the various log levels across all OpenStack > projects > # Valid options are [ True, False ] > #openstack_logging_debug: "False" > > # Valid options are [ none, novnc, spice, rdp ] > #nova_console: "novnc" > > # OpenStack services can be enabled or disabled with these options > #enable_aodh: "no" > #enable_barbican: "no" > #enable_blazar: "no" > #enable_ceilometer: "no" > #enable_central_logging: "no" > #enable_ceph: "no" > #enable_ceph_mds: "no" > #enable_ceph_rgw: "no" > #enable_ceph_nfs: "no" > #enable_chrony: "no" > #enable_cinder: "yes" > #enable_cinder_backup: "yes" > #enable_cinder_backend_hnas_iscsi: "no" > #enable_cinder_backend_hnas_nfs: "no" > #enable_cinder_backend_iscsi: "no" > #enable_cinder_backend_lvm: "no" > #enable_cinder_backend_nfs: "yes" > #enable_cloudkitty: "no" > #enable_collectd: "no" > #enable_congress: "no" > #enable_designate: "no" > #enable_destroy_images: "no" > #enable_etcd: "no" > #enable_fluentd: "yes" > #enable_freezer: "no" > #enable_gnocchi: "no" > #enable_grafana: "no" > enable_haproxy: "no" > #enable_heat: "yes" > #enable_horizon: "yes" > #enable_horizon_blazar: "{{ enable_blazar | bool }}" > #enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}" > #enable_horizon_designate: "{{ enable_designate | bool }}" > #enable_horizon_freezer: "{{ enable_freezer | bool }}" > #enable_horizon_ironic: "{{ enable_ironic | bool }}" > #enable_horizon_karbor: "{{ enable_karbor | bool }}" > #enable_horizon_magnum: "{{ enable_magnum | bool }}" > #enable_horizon_manila: "{{ enable_manila | bool }}" > #enable_horizon_mistral: "{{ enable_mistral | bool }}" > #enable_horizon_murano: "{{ enable_murano | bool }}" > #enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}" > #enable_horizon_octavia: "{{ enable_octavia | bool }}" > #enable_horizon_sahara: "{{ enable_sahara | bool }}" > #enable_horizon_searchlight: "{{ enable_searchlight | bool }}" > #enable_horizon_senlin: "{{ enable_senlin | bool }}" > #enable_horizon_solum: "{{ enable_solum | bool }}" > #enable_horizon_tacker: "{{ enable_tacker | bool }}" > #enable_horizon_trove: "{{ enable_trove | bool }}" > #enable_horizon_watcher: "{{ enable_watcher | bool }}" > #enable_horizon_zun: "{{ enable_zun | bool }}" > #enable_hyperv: "no" > #enable_influxdb: "no" > #enable_ironic: "no" > #enable_ironic_pxe_uefi: "no" > #enable_kafka: "no" > #enable_karbor: "no" > #enable_kuryr: "no" > #enable_magnum: "no" > #enable_manila: "no" > #enable_manila_backend_generic: "no" > #enable_manila_backend_hnas: "no" > #enable_manila_backend_cephfs_native: "no" > #enable_manila_backend_cephfs_nfs: "no" > #enable_mistral: "no" > #enable_mongodb: "no" > #enable_murano: "no" > #enable_multipathd: "no" > #enable_neutron_bgp_dragent: "no" > #enable_neutron_dvr: "no" > #enable_neutron_lbaas: "no" > #enable_neutron_fwaas: "no" > #enable_neutron_qos: "no" > #enable_neutron_agent_ha: "no" > enable_neutron_vpnaas: "no" > #enable_neutron_sriov: "no" > #enable_neutron_sfc: "no" > #enable_nova_fake: "no" > #enable_nova_serialconsole_proxy: "no" > #enable_octavia: "no" > #enable_opendaylight: "no" > #enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}" > #enable_ovs_dpdk: "no" > #enable_osprofiler: 
"no" > #enable_panko: "no" > #enable_prometheus: "no" > #enable_qdrouterd: "no" > #enable_rally: "no" > #enable_redis: "no" > #enable_sahara: "no" > #enable_searchlight: "no" > #enable_senlin: "no" > #enable_skydive: "no" > #enable_solum: "no" > #enable_swift: "no" > #enable_telegraf: "no" > #enable_tacker: "no" > #enable_tempest: "no" > #enable_trove: "no" > #enable_trove_singletenant: "no" > #enable_vitrage: "no" > #enable_vmtp: "no" > #enable_watcher: "no" > #enable_zookeeper: "no" > #enable_zun: "no" > > ############## > # Ceph options > ############## > # Ceph can be setup with a caching to improve performance. To use the > cache you > # must provide separate disks than those for the OSDs > #ceph_enable_cache: "no" > > # Set to no if using external Ceph without cephx. > #external_ceph_cephx_enabled: "yes" > > # Ceph is not able to determine the size of a cache pool automatically, > # so the configuration on the absolute size is required here, otherwise > the flush/evict will not work. > #ceph_target_max_bytes: "" > #ceph_target_max_objects: "" > > # Valid options are [ forward, none, writeback ] > #ceph_cache_mode: "writeback" > > # A requirement for using the erasure-coded pools is you must setup a > cache tier > # Valid options are [ erasure, replicated ] > #ceph_pool_type: "replicated" > > # Integrate ceph rados object gateway with openstack keystone > #enable_ceph_rgw_keystone: "no" > > # Set the pgs and pgps for pool > # WARNING! These values are dependant on the size and shape of your > cluster - > # the default values are not suitable for production use. Please refer to > the > # Kolla Ceph documentation for more information. > #ceph_pool_pg_num: 8 > #ceph_pool_pgp_num: 8 > > ############################# > # Keystone - Identity Options > ############################# > > # Valid options are [ fernet ] > #keystone_token_provider: 'fernet' > > # Interval to rotate fernet keys by (in seconds). Must be an interval of > # 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min), > # 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min), > # 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour), > # 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week). > #fernet_token_expiry: 86400 > > > ######################## > # Glance - Image Options > ######################## > # Configure image backend. > #glance_backend_ceph: "no" > #glance_backend_file: "yes" > #glance_backend_swift: "no" > #glance_backend_vmware: "no" > # Configure glance upgrade option, due to this feature is experimental > # in glance, so default value should be set to "no". 
> glance_enable_rolling_upgrade: "no" > > > ################## > # Barbican options > ################## > # Valid options are [ simple_crypto, p11_crypto ] > #barbican_crypto_plugin: "simple_crypto" > #barbican_library_path: "/usr/lib/libCryptoki2_64.so" > > ################ > ## Panko options > ################ > # Valid options are [ mongodb, mysql ] > #panko_database_type: "mysql" > > ################# > # Gnocchi options > ################# > # Valid options are [ file, ceph ] > #gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}" > > # Valid options are [redis, ''] > #gnocchi_incoming_storage: "{{ 'redis' if enable_redis | bool else '' }}" > > ################################ > # Cinder - Block Storage Options > ################################ > # Enable / disable Cinder backends > #cinder_backend_ceph: "{{ enable_ceph }}" > #cinder_backend_vmwarevc_vmdk: "no" > #cinder_volume_group: "cinder-volumes" > > # Valid options are [ nfs, swift, ceph ] > #cinder_backup_driver: "ceph" > #cinder_backup_share: "" > #cinder_backup_mount_options_nfs: "" > > > ################### > # Designate options > ################### > # Valid options are [ bind9 ] > #designate_backend: "bind9" > #designate_ns_record: "sample.openstack.org" > > ######################## > # Nova - Compute Options > ######################## > #nova_backend_ceph: "{{ enable_ceph }}" > > # Valid options are [ qemu, kvm, vmware, xenapi ] > #nova_compute_virt_type: "kvm" > > # The number of fake driver per compute node > #num_nova_fake_per_node: 5 > > ################# > # Hyper-V options > ################# > # Hyper-V can be used as hypervisor > #hyperv_username: "user" > #hyperv_password: "password" > #vswitch_name: "vswitch" > # URL from which Nova Hyper-V MSI is downloaded > #nova_msi_url: " > https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi" > > ############################# > # Horizon - Dashboard Options > ############################# > #horizon_backend_database: "{{ enable_murano | bool }}" > > ############################# > # Ironic options > ############################# > # following value must be set when enable ironic, the value format > # is "192.168.0.10,192.168.0.100". > ironic_dnsmasq_dhcp_range: > > ###################################### > # Manila - Shared File Systems Options > ###################################### > # HNAS backend configuration > #hnas_ip: > #hnas_user: > #hnas_password: > #hnas_evs_id: > #hnas_evs_ip: > #hnas_file_system_name: > > ################################ > # Swift - Object Storage Options > ################################ > # Swift expects block devices to be available for storage. Two types of > storage > # are supported: 1 - storage device with a special partition name and > filesystem > # label, 2 - unpartitioned disk with a filesystem. The label of this > filesystem > # is used to detect the disk which Swift will be using. > > # Swift support two matching modes, valid options are [ prefix, strict ] > #swift_devices_match_mode: "strict" > > # This parameter defines matching pattern: if "strict" mode was selected, > # for swift_devices_match_mode then swift_device_name should specify the > name of > # the special swift partition for example: "KOLLA_SWIFT_DATA", if "prefix" > mode was > # selected then swift_devices_name should specify a pattern which would > match to > # filesystems' labels prepared for swift. 
> #swift_devices_name: "KOLLA_SWIFT_DATA" > > > ################################################ > # Tempest - The OpenStack Integration Test Suite > ################################################ > # following value must be set when enable tempest > tempest_image_id: > tempest_flavor_ref_id: > tempest_public_network_id: > tempest_floating_network_name: > > # tempest_image_alt_id: "{{ tempest_image_id }}" > # tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}" > > ################################### > # VMware - OpenStack VMware support > ################################### > #vmware_vcenter_host_ip: > #vmware_vcenter_host_username: > #vmware_vcenter_host_password: > #vmware_datastore_name: > #vmware_vcenter_name: > #vmware_vcenter_cluster_name: > > ####################################### > # XenAPI - Support XenAPI for XenServer > ####################################### > # XenAPI driver use HIMN(Host Internal Management Network) > # to communicate with XenServer host. > #xenserver_himn_ip: > #xenserver_username: > #xenserver_connect_protocol: > > ############ > # Prometheus > ############ > #enable_prometheus_haproxy_exporter: "{{ enable_haproxy | bool }}" > #enable_prometheus_mysqld_exporter: "{{ enable_mariadb | bool }}" > #enable_prometheus_node_exporter: "yes" > On Mon, May 21, 2018 at 10:49 PM, Jeffrey Zhang wrote: > seems there are some issue in you inventory file. > Could you compare your inventory file with the one in kolla-ansible code? > > if you are still not fix it, try to provide you globals.yml file and > inventory file in ML. > > On Tue, May 22, 2018 at 6:51 AM, Rafael Weingärtner < > rafaelweingartner at gmail.com> wrote: > >> Hello OpenStackers, >> First of all, I am not sure if this is the right list to post this >> question. Therefore, please excuse me if I am sending an e-mail to the >> wrong place. >> >> So, I have been trying to use Kolla to deploy a POC environment of >> OpenStack. However, I have not been able to do so. Right now I am getting >> the following error: >> >> fatal: [localhost]: FAILED! => {"msg": "The conditional check >>> '(neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups | >>> bool) or (neutron_vpnaas_agent.enabled | bool and >>> neutron_vpnaas_agent.host_in_groups | bool)' failed. The error was: >>> error while evaluating conditional ((neutron_l3_agent.enabled | bool and >>> neutron_l3_agent.host_in_groups | bool) or >>> (neutron_vpnaas_agent.enabled | bool and neutron_vpnaas_agent.host_in_groups >>> | bool)): Unable to look up a name or access an attribute in template >>> string ({{ inventory_hostname in groups['neutron-vpnaas-agent'] }}).\nMake >>> sure your variable name does not contain invalid characters like '-': >>> argument of type 'StrictUndefined' is not iterable\n\nThe error appears to >>> have been in '/usr/local/share/kolla-ansibl >>> e/ansible/roles/neutron/tasks/config.yml': line 2, column 3, but >>> may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe >>> offending line appears to be:\n\n---\n- name: Setting sysctl values\n ^ >>> here\n"} >>> >> >> It looks like an Ansible problem. I checked the file >> “/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml” >> at line 5, it has the following declaration: >> >>> neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}" >>> >> >> As far as I understand everything is ok with this variable declaration. 
>> There is the “neutron-l3-agent” parameter used to retrieve an element from >> “neutron_services” map, but that does look ok. Has anybody else experienced >> this problem before? >> >> I am using Kolla for OpenStack queens. I am using kolla with the >> following command. >> >>> kolla-ansible -i all-in-one bootstrap-servers && kolla-ansible -i >>> all-in-one prechecks && kolla-ansible -i all-in-one deploy >>> >> >> As you can see, it is a simple use case to deploy OpenStack in a single >> node. The command that is failing is the following. >> >>> kolla-ansible -i all-in-one deploy >>> >> >> -- >> Rafael Weingärtner >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Tue May 22 02:31:49 2018 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 21 May 2018 23:31:49 -0300 Subject: [openstack-dev] Problem when deploying Openstack with Kolla In-Reply-To: References: Message-ID: Sorry, I pressed send without the inventory file. My inventory file has the following content (this is the standard file I get when following the procedure described in [1]; I am deploying all-in one node, therefore, I did not change anything here): > # These initial groups are the only groups required to be modified. The > # additional groups are for more control of the environment. > [control] > localhost ansible_connection=local > > [network] > localhost ansible_connection=local > > # inner-compute is the groups of compute nodes which do not have > # external reachability. > # DEPRECATED, the group will be removed in S release of OpenStack, > # use variable neutron_compute_dvr_mode instead. > [inner-compute] > > # external-compute is the groups of compute nodes which can reach > # outside. > # DEPRECATED, the group will be removed in S release of OpenStack, > # use variable neutron_compute_dvr_mode instead. > [external-compute] > localhost ansible_connection=local > > [compute:children] > inner-compute > external-compute > > [storage] > localhost ansible_connection=local > > [monitoring] > localhost ansible_connection=local > > [deployment] > localhost ansible_connection=local > > # You can explicitly specify which hosts run each project by updating the > # groups in the sections below. Common services are grouped together. 
> [chrony-server:children] > haproxy > > [chrony:children] > network > compute > storage > monitoring > > [collectd:children] > compute > > [baremetal:children] > control > > [grafana:children] > monitoring > > [etcd:children] > control > compute > > [kafka:children] > control > > [karbor:children] > control > > [kibana:children] > control > > [telegraf:children] > compute > control > monitoring > network > storage > > [elasticsearch:children] > control > > [haproxy:children] > network > > [hyperv] > #hyperv_host > > [hyperv:vars] > #ansible_user=user > #ansible_password=password > #ansible_port=5986 > #ansible_connection=winrm > #ansible_winrm_server_cert_validation=ignore > > [mariadb:children] > control > > [rabbitmq:children] > control > > [outward-rabbitmq:children] > control > > [qdrouterd:children] > control > > [mongodb:children] > control > > [keystone:children] > control > > [glance:children] > control > > [nova:children] > control > > [neutron:children] > network > > [openvswitch:children] > network > compute > manila-share > > [opendaylight:children] > network > > [cinder:children] > control > > [cloudkitty:children] > control > > [freezer:children] > control > > [memcached:children] > control > > [horizon:children] > control > > [swift:children] > control > > [barbican:children] > control > > [heat:children] > control > > [murano:children] > control > > [ceph:children] > control > > [ironic:children] > control > > [influxdb:children] > monitoring > > [prometheus:children] > monitoring > > [magnum:children] > control > > [sahara:children] > control > > [solum:children] > control > > [mistral:children] > control > > [manila:children] > control > > [panko:children] > control > > [gnocchi:children] > control > > [ceilometer:children] > control > > [aodh:children] > control > > [congress:children] > control > > [tacker:children] > control > > [vitrage:children] > control > > # Tempest > [tempest:children] > control > > [senlin:children] > control > > [vmtp:children] > control > > [trove:children] > control > > [watcher:children] > control > > [rally:children] > control > > [searchlight:children] > control > > [octavia:children] > control > > [designate:children] > control > > [placement:children] > control > > [bifrost:children] > deployment > > [zookeeper:children] > control > > [zun:children] > control > > [skydive:children] > monitoring > > [redis:children] > control > > [blazar:children] > control > > # Additional control implemented here. These groups allow you to control > which > # services run on which hosts at a per-service level. > # > # Word of caution: Some services are required to run on the same host to > # function appropriately. For example, neutron-metadata-agent must run on > the > # same host as the l3-agent and (depending on configuration) the > dhcp-agent. 
> > # Glance > [glance-api:children] > glance > > [glance-registry:children] > glance > > # Nova > [nova-api:children] > nova > > [nova-conductor:children] > nova > > [nova-consoleauth:children] > nova > > [nova-novncproxy:children] > nova > > [nova-scheduler:children] > nova > > [nova-spicehtml5proxy:children] > nova > > [nova-compute-ironic:children] > nova > > [nova-serialproxy:children] > nova > > # Neutron > [neutron-server:children] > control > > [neutron-dhcp-agent:children] > neutron > > [neutron-l3-agent:children] > neutron > > [neutron-lbaas-agent:children] > neutron > > [neutron-metadata-agent:children] > neutron > > [neutron-bgp-dragent:children] > neutron > > [neutron-infoblox-ipam-agent:children] > neutron > > # Ceph > [ceph-mds:children] > ceph > > [ceph-mgr:children] > ceph > > [ceph-nfs:children] > ceph > > [ceph-mon:children] > ceph > > [ceph-rgw:children] > ceph > > [ceph-osd:children] > storage > > # Cinder > [cinder-api:children] > cinder > > [cinder-backup:children] > storage > > [cinder-scheduler:children] > cinder > > [cinder-volume:children] > storage > > # Cloudkitty > [cloudkitty-api:children] > cloudkitty > > [cloudkitty-processor:children] > cloudkitty > > # Freezer > [freezer-api:children] > freezer > > [freezer-scheduler:children] > freezer > > # iSCSI > [iscsid:children] > compute > storage > ironic > > [tgtd:children] > storage > > # Karbor > [karbor-api:children] > karbor > > [karbor-protection:children] > karbor > > [karbor-operationengine:children] > karbor > > # Manila > [manila-api:children] > manila > > [manila-scheduler:children] > manila > > [manila-share:children] > network > > [manila-data:children] > manila > > # Swift > [swift-proxy-server:children] > swift > > [swift-account-server:children] > storage > > [swift-container-server:children] > storage > > [swift-object-server:children] > storage > > # Barbican > [barbican-api:children] > barbican > > [barbican-keystone-listener:children] > barbican > > [barbican-worker:children] > barbican > > # Trove > [trove-api:children] > trove > > [trove-conductor:children] > trove > > [trove-taskmanager:children] > trove > > # Heat > [heat-api:children] > heat > > [heat-api-cfn:children] > heat > > [heat-engine:children] > heat > > # Murano > [murano-api:children] > murano > > [murano-engine:children] > murano > > # Ironic > [ironic-api:children] > ironic > > [ironic-conductor:children] > ironic > > [ironic-inspector:children] > ironic > > [ironic-pxe:children] > ironic > > # Magnum > [magnum-api:children] > magnum > > [magnum-conductor:children] > magnum > > # Solum > [solum-api:children] > solum > > [solum-worker:children] > solum > > [solum-deployer:children] > solum > > [solum-conductor:children] > solum > > # Mistral > [mistral-api:children] > mistral > > [mistral-executor:children] > mistral > > [mistral-engine:children] > mistral > > # Aodh > [aodh-api:children] > aodh > > [aodh-evaluator:children] > aodh > > [aodh-listener:children] > aodh > > [aodh-notifier:children] > aodh > > # Panko > [panko-api:children] > panko > > # Gnocchi > [gnocchi-api:children] > gnocchi > > [gnocchi-statsd:children] > gnocchi > > [gnocchi-metricd:children] > gnocchi > > # Sahara > [sahara-api:children] > sahara > > [sahara-engine:children] > sahara > > # Ceilometer > [ceilometer-central:children] > ceilometer > > [ceilometer-notification:children] > ceilometer > > [ceilometer-compute:children] > compute > > # Congress > [congress-api:children] > congress > > [congress-datasource:children] > congress > > 
[congress-policy-engine:children] > congress > > # Multipathd > [multipathd:children] > compute > > # Watcher > [watcher-api:children] > watcher > > [watcher-engine:children] > watcher > > [watcher-applier:children] > watcher > > # Senlin > [senlin-api:children] > senlin > > [senlin-engine:children] > senlin > > # Searchlight > [searchlight-api:children] > searchlight > > [searchlight-listener:children] > searchlight > > # Octavia > [octavia-api:children] > octavia > > [octavia-health-manager:children] > octavia > > [octavia-housekeeping:children] > octavia > > [octavia-worker:children] > octavia > > # Designate > [designate-api:children] > designate > > [designate-central:children] > designate > > [designate-producer:children] > designate > > [designate-mdns:children] > network > > [designate-worker:children] > designate > > [designate-sink:children] > designate > > [designate-backend-bind9:children] > designate > > # Placement > [placement-api:children] > placement > > # Zun > [zun-api:children] > zun > > [zun-compute:children] > compute > > # Skydive > [skydive-analyzer:children] > skydive > > [skydive-agent:children] > compute > network > > # Tacker > [tacker-server:children] > tacker > > [tacker-conductor:children] > tacker > > # Vitrage > [vitrage-api:children] > vitrage > > [vitrage-notifier:children] > vitrage > > [vitrage-graph:children] > vitrage > > [vitrage-collector:children] > vitrage > > [vitrage-ml:children] > vitrage > > # Blazar > [blazar-api:children] > blazar > > [blazar-manager:children] > blazar > > # Prometheus > [prometheus-node-exporter:children] > monitoring > control > compute > network > storage > > [prometheus-mysqld-exporter:children] > mariadb > > [prometheus-haproxy-exporter:children] > haproxy > On Mon, May 21, 2018 at 11:27 PM, Rafael Weingärtner < rafaelweingartner at gmail.com> wrote: > > Well, everything is pretty standard (I am only deploying a POC), I am > following this "documentation"[1]. I did not change much from the default > files. > [1] https://docs.openstack.org/project-deploy-guide/kolla- > ansible/queens/quickstart.html# > > Globals: > >> --- >> # You can use this file to override _any_ variable throughout Kolla. >> # Additional options can be found in the >> # 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all >> the >> # commented parameters are shown here, To override the default value >> uncomment >> # the parameter and change its value. >> >> ############### >> # Kolla options >> ############### >> # Valid options are [ COPY_ONCE, COPY_ALWAYS ] >> #config_strategy: "COPY_ALWAYS" >> >> # Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu'] >> #kolla_base_distro: "centos" >> kolla_base_distro: "ubuntu" >> >> # Valid options are [ binary, source ] >> #kolla_install_type: "binary" >> kolla_install_type: "source" >> >> # Valid option is Docker repository tag >> #openstack_release: "" >> #openstack_release: "master" >> openstack_release: "queens" >> >> # Location of configuration overrides >> #node_custom_config: "/etc/kolla/config" >> >> # This should be a VIP, an unused IP on your network that will float >> between >> # the hosts running keepalived for high-availability. If you want to run >> an >> # All-In-One without haproxy and keepalived, you can set enable_haproxy >> to no >> # in "OpenStack options" section, and set this value to the IP of your >> # 'network_interface' as set in the Networking section below. 
>> network_interface: "enp0s8" >> kolla_internal_vip_address: "192.168.56.250" >> >> # This is the DNS name that maps to the kolla_internal_vip_address VIP. By >> # default it is the same as kolla_internal_vip_address. >> #kolla_internal_fqdn: "{{ kolla_internal_vip_address }}" >> >> # This should be a VIP, an unused IP on your network that will float >> between >> # the hosts running keepalived for high-availability. It defaults to the >> # kolla_internal_vip_address, allowing internal and external >> communication to >> # share the same address. Specify a kolla_external_vip_address to >> separate >> # internal and external requests between two VIPs. >> #kolla_external_vip_address: "{{ kolla_internal_vip_address }}" >> >> # The Public address used to communicate with OpenStack as set in the >> public_url >> # for the endpoints that will be created. This DNS name should map to >> # kolla_external_vip_address. >> #kolla_external_fqdn: "{{ kolla_external_vip_address }}" >> >> ################ >> # Docker options >> ################ >> # Below is an example of a private repository with authentication. Note >> the >> # Docker registry password can also be set in the passwords.yml file. >> >> #docker_registry: "172.16.0.10:4000" >> #docker_namespace: "companyname" >> #docker_registry_username: "sam" >> #docker_registry_password: "correcthorsebatterystaple" >> >> ################### >> # Messaging options >> ################### >> # Below is an example of an separate backend that provides brokerless >> # messaging for oslo.messaging RPC communications >> >> #om_rpc_transport: "amqp" >> #om_rpc_user: "{{ qdrouterd_user }}" >> #om_rpc_password: "{{ qdrouterd_password }}" >> #om_rpc_port: "{{ qdrouterd_port }}" >> #om_rpc_group: "qdrouterd" >> >> >> ############################## >> # Neutron - Networking Options >> ############################## >> # This interface is what all your api services will be bound to by >> default. >> # Additionally, all vxlan/tunnel and storage network traffic will go over >> this >> # interface by default. This interface must contain an IPv4 address. >> # It is possible for hosts to have non-matching names of interfaces - >> these can >> # be set in an inventory file per host or per group or stored separately, >> see >> # http://docs.ansible.com/ansible/intro_inventory.html >> # Yet another way to workaround the naming problem is to create a bond >> for the >> # interface on all hosts and give the bond name here. Similar strategy >> can be >> # followed for other types of interfaces. >> #network_interface: "eth0" >> >> # These can be adjusted for even more customization. The default is the >> same as >> # the 'network_interface'. These interfaces must contain an IPv4 address. >> #kolla_external_vip_interface: "{{ network_interface }}" >> #api_interface: "{{ network_interface }}" >> #storage_interface: "{{ network_interface }}" >> #cluster_interface: "{{ network_interface }}" >> #tunnel_interface: "{{ network_interface }}" >> #dns_interface: "{{ network_interface }}" >> >> # This is the raw interface given to neutron as its external network >> port. Even >> # though an IP address can exist on this interface, it will be unusable >> in most >> # configurations. It is recommended this interface not be configured with >> any IP >> # addresses for that reason. 
>> #neutron_external_interface: "eth1" >> neutron_external_interface: "enp0s9" >> >> # Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, >> opendaylight ] >> neutron_plugin_agent: "linuxbridge" >> >> # Valid options are [ internal, infoblox ] >> #neutron_ipam_driver: "internal" >> >> >> #################### >> # keepalived options >> #################### >> # Arbitrary unique number from 0..255 >> #keepalived_virtual_router_id: "51" >> >> >> ############# >> # TLS options >> ############# >> # To provide encryption and authentication on the >> kolla_external_vip_interface, >> # TLS can be enabled. When TLS is enabled, certificates must be provided >> to >> # allow clients to perform authentication. >> #kolla_enable_tls_external: "no" >> #kolla_external_fqdn_cert: "{{ node_config_directory >> }}/certificates/haproxy.pem" >> >> >> ############## >> # OpenDaylight >> ############## >> #enable_opendaylight_qos: "no" >> #enable_opendaylight_l3: "yes" >> >> ################### >> # OpenStack options >> ################### >> # Use these options to set the various log levels across all OpenStack >> projects >> # Valid options are [ True, False ] >> #openstack_logging_debug: "False" >> >> # Valid options are [ none, novnc, spice, rdp ] >> #nova_console: "novnc" >> >> # OpenStack services can be enabled or disabled with these options >> #enable_aodh: "no" >> #enable_barbican: "no" >> #enable_blazar: "no" >> #enable_ceilometer: "no" >> #enable_central_logging: "no" >> #enable_ceph: "no" >> #enable_ceph_mds: "no" >> #enable_ceph_rgw: "no" >> #enable_ceph_nfs: "no" >> #enable_chrony: "no" >> #enable_cinder: "yes" >> #enable_cinder_backup: "yes" >> #enable_cinder_backend_hnas_iscsi: "no" >> #enable_cinder_backend_hnas_nfs: "no" >> #enable_cinder_backend_iscsi: "no" >> #enable_cinder_backend_lvm: "no" >> #enable_cinder_backend_nfs: "yes" >> #enable_cloudkitty: "no" >> #enable_collectd: "no" >> #enable_congress: "no" >> #enable_designate: "no" >> #enable_destroy_images: "no" >> #enable_etcd: "no" >> #enable_fluentd: "yes" >> #enable_freezer: "no" >> #enable_gnocchi: "no" >> #enable_grafana: "no" >> enable_haproxy: "no" >> #enable_heat: "yes" >> #enable_horizon: "yes" >> #enable_horizon_blazar: "{{ enable_blazar | bool }}" >> #enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}" >> #enable_horizon_designate: "{{ enable_designate | bool }}" >> #enable_horizon_freezer: "{{ enable_freezer | bool }}" >> #enable_horizon_ironic: "{{ enable_ironic | bool }}" >> #enable_horizon_karbor: "{{ enable_karbor | bool }}" >> #enable_horizon_magnum: "{{ enable_magnum | bool }}" >> #enable_horizon_manila: "{{ enable_manila | bool }}" >> #enable_horizon_mistral: "{{ enable_mistral | bool }}" >> #enable_horizon_murano: "{{ enable_murano | bool }}" >> #enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}" >> #enable_horizon_octavia: "{{ enable_octavia | bool }}" >> #enable_horizon_sahara: "{{ enable_sahara | bool }}" >> #enable_horizon_searchlight: "{{ enable_searchlight | bool }}" >> #enable_horizon_senlin: "{{ enable_senlin | bool }}" >> #enable_horizon_solum: "{{ enable_solum | bool }}" >> #enable_horizon_tacker: "{{ enable_tacker | bool }}" >> #enable_horizon_trove: "{{ enable_trove | bool }}" >> #enable_horizon_watcher: "{{ enable_watcher | bool }}" >> #enable_horizon_zun: "{{ enable_zun | bool }}" >> #enable_hyperv: "no" >> #enable_influxdb: "no" >> #enable_ironic: "no" >> #enable_ironic_pxe_uefi: "no" >> #enable_kafka: "no" >> #enable_karbor: "no" >> #enable_kuryr: "no" >> 
#enable_magnum: "no" >> #enable_manila: "no" >> #enable_manila_backend_generic: "no" >> #enable_manila_backend_hnas: "no" >> #enable_manila_backend_cephfs_native: "no" >> #enable_manila_backend_cephfs_nfs: "no" >> #enable_mistral: "no" >> #enable_mongodb: "no" >> #enable_murano: "no" >> #enable_multipathd: "no" >> #enable_neutron_bgp_dragent: "no" >> #enable_neutron_dvr: "no" >> #enable_neutron_lbaas: "no" >> #enable_neutron_fwaas: "no" >> #enable_neutron_qos: "no" >> #enable_neutron_agent_ha: "no" >> enable_neutron_vpnaas: "no" >> #enable_neutron_sriov: "no" >> #enable_neutron_sfc: "no" >> #enable_nova_fake: "no" >> #enable_nova_serialconsole_proxy: "no" >> #enable_octavia: "no" >> #enable_opendaylight: "no" >> #enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}" >> #enable_ovs_dpdk: "no" >> #enable_osprofiler: "no" >> #enable_panko: "no" >> #enable_prometheus: "no" >> #enable_qdrouterd: "no" >> #enable_rally: "no" >> #enable_redis: "no" >> #enable_sahara: "no" >> #enable_searchlight: "no" >> #enable_senlin: "no" >> #enable_skydive: "no" >> #enable_solum: "no" >> #enable_swift: "no" >> #enable_telegraf: "no" >> #enable_tacker: "no" >> #enable_tempest: "no" >> #enable_trove: "no" >> #enable_trove_singletenant: "no" >> #enable_vitrage: "no" >> #enable_vmtp: "no" >> #enable_watcher: "no" >> #enable_zookeeper: "no" >> #enable_zun: "no" >> >> ############## >> # Ceph options >> ############## >> # Ceph can be setup with a caching to improve performance. To use the >> cache you >> # must provide separate disks than those for the OSDs >> #ceph_enable_cache: "no" >> >> # Set to no if using external Ceph without cephx. >> #external_ceph_cephx_enabled: "yes" >> >> # Ceph is not able to determine the size of a cache pool automatically, >> # so the configuration on the absolute size is required here, otherwise >> the flush/evict will not work. >> #ceph_target_max_bytes: "" >> #ceph_target_max_objects: "" >> >> # Valid options are [ forward, none, writeback ] >> #ceph_cache_mode: "writeback" >> >> # A requirement for using the erasure-coded pools is you must setup a >> cache tier >> # Valid options are [ erasure, replicated ] >> #ceph_pool_type: "replicated" >> >> # Integrate ceph rados object gateway with openstack keystone >> #enable_ceph_rgw_keystone: "no" >> >> # Set the pgs and pgps for pool >> # WARNING! These values are dependant on the size and shape of your >> cluster - >> # the default values are not suitable for production use. Please refer to >> the >> # Kolla Ceph documentation for more information. >> #ceph_pool_pg_num: 8 >> #ceph_pool_pgp_num: 8 >> >> ############################# >> # Keystone - Identity Options >> ############################# >> >> # Valid options are [ fernet ] >> #keystone_token_provider: 'fernet' >> >> # Interval to rotate fernet keys by (in seconds). Must be an interval of >> # 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min), >> # 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min), >> # 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour), >> # 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week). >> #fernet_token_expiry: 86400 >> >> >> ######################## >> # Glance - Image Options >> ######################## >> # Configure image backend. 
>> #glance_backend_ceph: "no" >> #glance_backend_file: "yes" >> #glance_backend_swift: "no" >> #glance_backend_vmware: "no" >> # Configure glance upgrade option, due to this feature is experimental >> # in glance, so default value should be set to "no". >> glance_enable_rolling_upgrade: "no" >> >> >> ################## >> # Barbican options >> ################## >> # Valid options are [ simple_crypto, p11_crypto ] >> #barbican_crypto_plugin: "simple_crypto" >> #barbican_library_path: "/usr/lib/libCryptoki2_64.so" >> >> ################ >> ## Panko options >> ################ >> # Valid options are [ mongodb, mysql ] >> #panko_database_type: "mysql" >> >> ################# >> # Gnocchi options >> ################# >> # Valid options are [ file, ceph ] >> #gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}" >> >> # Valid options are [redis, ''] >> #gnocchi_incoming_storage: "{{ 'redis' if enable_redis | bool else '' }}" >> >> ################################ >> # Cinder - Block Storage Options >> ################################ >> # Enable / disable Cinder backends >> #cinder_backend_ceph: "{{ enable_ceph }}" >> #cinder_backend_vmwarevc_vmdk: "no" >> #cinder_volume_group: "cinder-volumes" >> >> # Valid options are [ nfs, swift, ceph ] >> #cinder_backup_driver: "ceph" >> #cinder_backup_share: "" >> #cinder_backup_mount_options_nfs: "" >> >> >> ################### >> # Designate options >> ################### >> # Valid options are [ bind9 ] >> #designate_backend: "bind9" >> #designate_ns_record: "sample.openstack.org" >> >> ######################## >> # Nova - Compute Options >> ######################## >> #nova_backend_ceph: "{{ enable_ceph }}" >> >> # Valid options are [ qemu, kvm, vmware, xenapi ] >> #nova_compute_virt_type: "kvm" >> >> # The number of fake driver per compute node >> #num_nova_fake_per_node: 5 >> >> ################# >> # Hyper-V options >> ################# >> # Hyper-V can be used as hypervisor >> #hyperv_username: "user" >> #hyperv_password: "password" >> #vswitch_name: "vswitch" >> # URL from which Nova Hyper-V MSI is downloaded >> #nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_ >> Beta.msi" >> >> ############################# >> # Horizon - Dashboard Options >> ############################# >> #horizon_backend_database: "{{ enable_murano | bool }}" >> >> ############################# >> # Ironic options >> ############################# >> # following value must be set when enable ironic, the value format >> # is "192.168.0.10,192.168.0.100". >> ironic_dnsmasq_dhcp_range: >> >> ###################################### >> # Manila - Shared File Systems Options >> ###################################### >> # HNAS backend configuration >> #hnas_ip: >> #hnas_user: >> #hnas_password: >> #hnas_evs_id: >> #hnas_evs_ip: >> #hnas_file_system_name: >> >> ################################ >> # Swift - Object Storage Options >> ################################ >> # Swift expects block devices to be available for storage. Two types of >> storage >> # are supported: 1 - storage device with a special partition name and >> filesystem >> # label, 2 - unpartitioned disk with a filesystem. The label of this >> filesystem >> # is used to detect the disk which Swift will be using. 
>> >> # Swift support two matching modes, valid options are [ prefix, strict ] >> #swift_devices_match_mode: "strict" >> >> # This parameter defines matching pattern: if "strict" mode was selected, >> # for swift_devices_match_mode then swift_device_name should specify the >> name of >> # the special swift partition for example: "KOLLA_SWIFT_DATA", if >> "prefix" mode was >> # selected then swift_devices_name should specify a pattern which would >> match to >> # filesystems' labels prepared for swift. >> #swift_devices_name: "KOLLA_SWIFT_DATA" >> >> >> ################################################ >> # Tempest - The OpenStack Integration Test Suite >> ################################################ >> # following value must be set when enable tempest >> tempest_image_id: >> tempest_flavor_ref_id: >> tempest_public_network_id: >> tempest_floating_network_name: >> >> # tempest_image_alt_id: "{{ tempest_image_id }}" >> # tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}" >> >> ################################### >> # VMware - OpenStack VMware support >> ################################### >> #vmware_vcenter_host_ip: >> #vmware_vcenter_host_username: >> #vmware_vcenter_host_password: >> #vmware_datastore_name: >> #vmware_vcenter_name: >> #vmware_vcenter_cluster_name: >> >> ####################################### >> # XenAPI - Support XenAPI for XenServer >> ####################################### >> # XenAPI driver use HIMN(Host Internal Management Network) >> # to communicate with XenServer host. >> #xenserver_himn_ip: >> #xenserver_username: >> #xenserver_connect_protocol: >> >> ############ >> # Prometheus >> ############ >> #enable_prometheus_haproxy_exporter: "{{ enable_haproxy | bool }}" >> #enable_prometheus_mysqld_exporter: "{{ enable_mariadb | bool }}" >> #enable_prometheus_node_exporter: "yes" >> > > > On Mon, May 21, 2018 at 10:49 PM, Jeffrey Zhang > wrote: > >> seems there are some issue in you inventory file. >> Could you compare your inventory file with the one in kolla-ansible code? >> >> if you are still not fix it, try to provide you globals.yml file and >> inventory file in ML. >> >> On Tue, May 22, 2018 at 6:51 AM, Rafael Weingärtner < >> rafaelweingartner at gmail.com> wrote: >> >>> Hello OpenStackers, >>> First of all, I am not sure if this is the right list to post this >>> question. Therefore, please excuse me if I am sending an e-mail to the >>> wrong place. >>> >>> So, I have been trying to use Kolla to deploy a POC environment of >>> OpenStack. However, I have not been able to do so. Right now I am getting >>> the following error: >>> >>> fatal: [localhost]: FAILED! => {"msg": "The conditional check >>>> '(neutron_l3_agent.enabled | bool and neutron_l3_agent.host_in_groups >>>> | bool) or (neutron_vpnaas_agent.enabled | bool and >>>> neutron_vpnaas_agent.host_in_groups | bool)' failed. 
The error was: >>>> error while evaluating conditional ((neutron_l3_agent.enabled | bool and >>>> neutron_l3_agent.host_in_groups | bool) or >>>> (neutron_vpnaas_agent.enabled | bool and neutron_vpnaas_agent.host_in_groups >>>> | bool)): Unable to look up a name or access an attribute in template >>>> string ({{ inventory_hostname in groups['neutron-vpnaas-agent'] }}).\nMake >>>> sure your variable name does not contain invalid characters like '-': >>>> argument of type 'StrictUndefined' is not iterable\n\nThe error appears to >>>> have been in '/usr/local/share/kolla-ansibl >>>> e/ansible/roles/neutron/tasks/config.yml': line 2, column 3, but >>>> may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe >>>> offending line appears to be:\n\n---\n- name: Setting sysctl values\n ^ >>>> here\n"} >>>> >>> >>> It looks like an Ansible problem. I checked the file >>> “/usr/local/share/kolla-ansible/ansible/roles/neutron/tasks/config.yml” >>> at line 5, it has the following declaration: >>> >>>> neutron_l3_agent: "{{ neutron_services['neutron-l3-agent'] }}" >>>> >>> >>> As far as I understand everything is ok with this variable declaration. >>> There is the “neutron-l3-agent” parameter used to retrieve an element from >>> “neutron_services” map, but that does look ok. Has anybody else experienced >>> this problem before? >>> >>> I am using Kolla for OpenStack queens. I am using kolla with the >>> following command. >>> >>>> kolla-ansible -i all-in-one bootstrap-servers && kolla-ansible -i >>>> all-in-one prechecks && kolla-ansible -i all-in-one deploy >>>> >>> >>> As you can see, it is a simple use case to deploy OpenStack in a single >>> node. The command that is failing is the following. >>> >>>> kolla-ansible -i all-in-one deploy >>>> >>> >>> -- >>> Rafael Weingärtner >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> >> -- >> Regards, >> Jeffrey Zhang >> Blog: http://xcodest.me >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Rafael Weingärtner > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue May 22 04:27:02 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 22 May 2018 06:27:02 +0200 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. In-Reply-To: References: Message-ID: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> On 05/21/2018 03:49 PM, Luke Hinds wrote: > A few operators have requested if its possible to limit sudo's coverage > on both the under / overcloud. There is concern over `ALL=(ALL) > NOPASSWD:ALL` , which allows someone to  `sudo su`. > > This task has come under the care of the tripleo security squad. > > The work is being tracked and discussed here [0]. > > So far it looks like the approach will be to use regexp within > /etc/sudoers.d/*., to narrow down as close as possible to the specific > commands called. 
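As a concrete illustration of that kind of narrowing -- the user name and
paths below are made-up examples, not the actual TripleO entries -- a
drop-in rule could look roughly like:

    # /etc/sudoers.d/heat-admin -- illustrative sketch only
    # replace the blanket ALL=(ALL) NOPASSWD:ALL with the known
    # Ansible payload path, whose prefix is stable across runs
    heat-admin ALL = (root) NOPASSWD: /usr/bin/python /home/heat-admin/.ansible/tmp/ansible-tmp-*/*.py

Keep in mind sudoers wildcards are globs, not regular expressions, and
'*' also matches spaces, so a rule like this shrinks the attack surface
rather than sealing it completely.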
Some services already do this with rootwrap:
>
> ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap
> /etc/ironic/rootwrap.conf *
>
> It's fairly easy to pick up a list of all sudo calls using a simple
> script [1]
>
> The other prolific user of sudo is ansible / stack, for example:
>
> /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff;
> /usr/bin/python
> /home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/systemd.py;
> rm -rf
> "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" >
> /dev/null 2>&1
>
> My feelings here are to again use regexp around the immutable non-random
> parts of the command.  cjeanner also made some suggestions in the
> etherpad [0].

Might be a temporary way to limit the surface indeed, but an upstream
change in Ansible would still be really nice. Predictable names are the
only "right" way, although this will create a long sudo ruleset. A
really long one, to be honest. Maintainability is also to be discussed
either way (maintain a couple of regexps vs 200+ rules.. hmmm).

>
> However, aside from the approach, we need to consider the impact locking
> down might have should someone develop a new bit of code that
> leverages commands wrapped in sudo and assumes ALL will be in place.
> This of course will be blocked.

This will indeed require some doc, as this is a "major" change. However,
the use of regexp should somewhat limit the impact, especially since
Ansible pushes its exec script in the same location.
Even new parts should be allowed (that might be a bit of concern if we
want to really dig into the consequences of a bad template being injected
in some way [looking at config-download ;)]).
But at some point, we might also decide to let the OPs ensure their
infra isn't compromised.
Always the same trade-off with Security vs The World - convenience vs
cumbersome management, and so on.

>
> Now my guess is that our CI would capture this as the deploy would
> fail(?) and the developer should work out an entry is needed when
> testing their patch, but wanted to open this up to others who know
> testing at gate much better than myself.  Also encourage any
> thoughts on the topic to be introduced to the etherpad [0]
>
> [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security
> [1] https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f
>
> --
> Luke Hinds

--
Cédric Jeanneret
Software Engineer
DFG:DF

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From skramaja at redhat.com Tue May 22 05:03:26 2018
From: skramaja at redhat.com (Saravanan KR)
Date: Tue, 22 May 2018 10:33:26 +0530
Subject: [openstack-dev] [tripleo] cannot configure host kernel-args for pci passthrough with first-boot
In-Reply-To: 
References: 
Message-ID: 

Could you check the log in the /var/log/cloud-init-output.log file to
see what are the first-boot scripts which are executed on the node?
Add "set -x" in the kernel-args.sh file to get better logs.

Regards,
Saravanan KR

On Tue, May 22, 2018 at 12:49 AM, Samuel Monderer wrote:
> Hi,
>
> I'm trying to build a new OS environment with RHOSP 11 with a compute host
> that has a GPU card.
> I've added a new role and a firstboot template to configure the kernel args
> to allow pci-passthrough.
> For some reason the firstboot is not working (can't see the changes on the
> compute node)
> Attached are the templates I used to deploy the environment.
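(As the follow-up below shows, the culprit turns out to be a missing
OS::stack_id output in the first-boot template. A minimal shape for such
a template -- the resource names and script body here are placeholders,
not Samuel's actual attachment -- would be:

    heat_template_version: 2014-10-16

    resources:
      userdata:
        type: OS::Heat::MultipartMime
        properties:
          parts:
          - config: {get_resource: kernel_args_config}

      kernel_args_config:
        type: OS::Heat::SoftwareConfig
        properties:
          config: |
            #!/bin/bash
            set -x
            # placeholder for the real kernel-args script
            echo "pci passthrough kernel args would be appended here"

    outputs:
      # without this output the parent stack never receives the userdata,
      # so the first-boot script silently never runs on the node
      OS::stack_id:
        value: {get_resource: userdata}
)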
> > I used the same configuration I used for a compute role with sr-iov and it > worked there. > Could someone tell me what I missed? > > Regards, > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From smonderer at vasonanetworks.com Tue May 22 06:14:57 2018 From: smonderer at vasonanetworks.com (Samuel Monderer) Date: Tue, 22 May 2018 09:14:57 +0300 Subject: [openstack-dev] [tripleo] cannot configure host kernel-args for pci passthrough with first-boot In-Reply-To: References: Message-ID: Hi, We found the cause of the problem. We forgot the following in the first-boot.yaml outputs: # This means get_resource from the parent template will get the userdata, see: # http://docs.openstack.org/developer/heat/template_guide/composition.html#making-your-template-resource-more-transparent # Note this is new-for-kilo, an alternative is returning a value then using # get_attr in the parent template instead. OS::stack_id: value: {get_resource: userdata} Samuel On Tue, May 22, 2018 at 8:05 AM Saravanan KR wrote: > Could you check the log in the /var/log/cloud-init-output.log file to > see what are the first-boot scripts which are executed on the node? > Add "set -x" in the kernel-args.sh file to better logs. > > Regards, > Saravanan KR > > On Tue, May 22, 2018 at 12:49 AM, Samuel Monderer > wrote: > > Hi, > > > > I'm trying to build a new OS environment with RHOSP 11 with a compute has > > that has GPU card. > > I've added a new role and a firstboot template to configure the kernel > args > > to allow pci-passthrough. > > For some reason the firstboot is not working (can't see the changes on > the > > compute node) > > Attached are the templates I used to deploy the environment. > > > > I used the same configuration I used for a compute role with sr-iov and > it > > worked there. > > Could someone tell me what I missed? > > > > Regards, > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue May 22 07:08:16 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 22 May 2018 08:08:16 +0100 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. In-Reply-To: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> References: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> Message-ID: On Tue, May 22, 2018 at 5:27 AM, Cédric Jeanneret wrote: > > > On 05/21/2018 03:49 PM, Luke Hinds wrote: > > A few operators have requested if its possible to limit sudo's coverage > > on both the under / overcloud. There is concern over `ALL=(ALL) > > NOPASSWD:ALL` , which allows someone to `sudo su`. > > > > This task has come under the care of the tripleo security squad. 
> > > > The work is being tracked and discussed here [0]. > > > > So far it looks like the approach will be to use regexp within > > /etc/sudoers.d/*., to narrow down as close as possible to the specific > > commands called. Some services already do this with rootwrap: > > > > ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap > > /etc/ironic/rootwrap.conf * > > > > It's fairly easy to pick up a list of all sudo calls using a simple > > script [1] > > > > The other prolific user of sudo is ansible / stack, for example: > > > > /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff; > > /usr/bin/python > > /home/stack/.ansible/tmp/ansible-tmp-1526579105.0- > 109863952786117/systemd.py; > > rm -rf > > "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" > > > /dev/null 2>&1 > > > > My feelings here are to again use regexp around the immutable non random > > parts of the command. cjeanner also made some suggestions in the > > etherpad [0]. > > Might be a temporary way to limit the surface indeed, but an upstream > change in Ansible would still be really nice. Predictable names is the > only "right" way, although this will create a long sudo ruleset. A > really long one to be honnest. Maintainability is also to be discussed > in either way (maintain a couple of regexp vs 200+ rules.. hmmm). > > As I understand it, the problem with predicable names is they also become predictable to attackers (this would be the reason ansible adds in the random string). It helps prevent someone creating a race condition to replace the python script with something more nefarious. Its the same reason commands such as mktemp exists. > > > However aside to the approach, we need to consider the impact locking > > down might have should someone create a develop a new bit of code that > > leverages commands wrapped in sudo and assumes ALL with be in place. > > This of course will be blocked. > > This will indeed require some doc, as this is a "major" change. However, > the use of regexp should somewhat limit the impact, especially since > Ansible pushes its exec script in the same location. > Even new parts should be allowed (that might be a bit of concern if we > want to really dig in the consequences of a bad template being injected > in some way [looking config-download ;)]). > But at some point, we might also decide to let the OPs ensure their > infra isn't compromised. > Always the same thread-of with Security vs The World - convenience vs > cumbersome management, and so on. > > > > > Now my guess is that our CI would capture this as the deploy would > > fail(?) and the developer should work out an entry is needed when > > testing their patch, but wanted to open this up to others who know > > testing at gate better much better than myself. 
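(One sketch of what such a gate check could look like -- the log path
and message format are assumptions based on stock sudo logging on
RHEL/CentOS, and will vary by distro and syslog configuration:

    #!/bin/bash
    # fail the job if sudo rejected anything during the deploy;
    # sudo logs refused commands as "command not allowed"
    if sudo grep -q 'command not allowed' /var/log/secure; then
        echo "sudo denied a command; a sudoers entry is probably missing:"
        sudo grep 'command not allowed' /var/log/secure
        exit 1
    fi
)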
Also encourage any > > thoughts on the topic to be introduced to the etherpad [0] > > > > [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security > > [1] https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f > > > > -- > > Luke Hinds > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Tue May 22 07:24:41 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 22 May 2018 09:24:41 +0200 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. In-Reply-To: References: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> Message-ID: <91cfbb4e-33de-79b6-f1c6-89bd9025e448@redhat.com> On 05/22/2018 09:08 AM, Luke Hinds wrote: > > > On Tue, May 22, 2018 at 5:27 AM, Cédric Jeanneret > wrote: > > > > On 05/21/2018 03:49 PM, Luke Hinds wrote: > > A few operators have requested if its possible to limit sudo's coverage > > on both the under / overcloud. There is concern over `ALL=(ALL) > > NOPASSWD:ALL` , which allows someone to  `sudo su`. > > > > This task has come under the care of the tripleo security squad. > > > > The work is being tracked and discussed here [0]. > > > > So far it looks like the approach will be to use regexp within > > /etc/sudoers.d/*., to narrow down as close as possible to the specific > > commands called. Some services already do this with rootwrap: > > > > ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap > > /etc/ironic/rootwrap.conf *    > > > > It's fairly easy to pick up a list of all sudo calls using a simple > > script [1] > > > > The other prolific user of sudo is ansible / stack, for example: > > > > /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff; > > /usr/bin/python > > /home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/systemd.py; > > rm -rf > > "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" > > > /dev/null 2>&1 > > > > My feelings here are to again use regexp around the immutable non random > > parts of the command.  cjeanner also made some suggestions in the > > etherpad [0]. > > Might be a temporary way to limit the surface indeed, but an upstream > change in Ansible would still be really nice. Predictable names is the > only "right" way, although this will create a long sudo ruleset. A > really long one to be honnest. Maintainability is also to be discussed > in either way (maintain a couple of regexp vs 200+ rules.. hmmm). > > > As I understand it, the problem with predicable names is they also > become predictable to attackers (this would be the reason ansible adds > in the random string). It helps prevent someone creating a race > condition to replace the python script with something more nefarious. > Its the same reason commands such as mktemp exists. Fair enough indeed. Both solution have their pros and cons. 
In order to move on, I think the regexp in sudoers is acceptable for the following reasons: - limits accesses outside of ansible generated code - allows others to still push new content without having to change sudo listing (thanks to regexp) - still hard to inject bad things in the executed script/code - quick to implement (well, fastest than requiring an upstream change that will most probably break some internal things before working properly, and without adding more security as you explained it) @Juan do you agree with that statement? As we had some quick chat about it. Note: I'm not part of the security squad ;). But I like secured things. > > > > > However aside to the approach, we need to consider the impact locking > > down might have should someone create a develop a new bit of code that > > leverages commands wrapped in sudo and assumes ALL with be in place. > > This of course will be blocked. > > This will indeed require some doc, as this is a "major" change. However, > the use of regexp should somewhat limit the impact, especially since > Ansible pushes its exec script in the same location. > Even new parts should be allowed (that might be a bit of concern if we > want to really dig in the consequences of a bad template being injected > in some way [looking config-download ;)]). > But at some point, we might also decide to let the OPs ensure their > infra isn't compromised. > Always the same thread-of with Security vs The World - convenience vs > cumbersome management, and so on. > > > > > Now my guess is that our CI would capture this as the deploy would > > fail(?) and the developer should work out an entry is needed when > > testing their patch, but wanted to open this up to others who know > > testing at gate better much better than myself.  Also encourage any > > thoughts on the topic to be introduced to the etherpad [0] > > > > [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security > > > [1] > https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f > > > > > -- > > Luke Hinds > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > e: lhinds at redhat.com  | irc: lhinds @freenode > |t: +44 12 52 36 2483 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Tue May 22 08:36:23 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Tue, 22 May 2018 10:36:23 +0200 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. 
In-Reply-To: <91cfbb4e-33de-79b6-f1c6-89bd9025e448@redhat.com> References: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> <91cfbb4e-33de-79b6-f1c6-89bd9025e448@redhat.com> Message-ID: <877985d8-e4b9-fe57-ef6b-ba2ccf903c3b@redhat.com> On 05/22/2018 09:24 AM, Cédric Jeanneret wrote: > > > On 05/22/2018 09:08 AM, Luke Hinds wrote: >> >> >> On Tue, May 22, 2018 at 5:27 AM, Cédric Jeanneret > > wrote: >> >> >> >> On 05/21/2018 03:49 PM, Luke Hinds wrote: >> > A few operators have requested if its possible to limit sudo's coverage >> > on both the under / overcloud. There is concern over `ALL=(ALL) >> > NOPASSWD:ALL` , which allows someone to  `sudo su`. >> > >> > This task has come under the care of the tripleo security squad. >> > >> > The work is being tracked and discussed here [0]. >> > >> > So far it looks like the approach will be to use regexp within >> > /etc/sudoers.d/*., to narrow down as close as possible to the specific >> > commands called. Some services already do this with rootwrap: >> > >> > ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap >> > /etc/ironic/rootwrap.conf *    >> > >> > It's fairly easy to pick up a list of all sudo calls using a simple >> > script [1] >> > >> > The other prolific user of sudo is ansible / stack, for example: >> > >> > /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff; >> > /usr/bin/python >> > /home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/systemd.py; >> > rm -rf >> > "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" > >> > /dev/null 2>&1 >> > >> > My feelings here are to again use regexp around the immutable non random >> > parts of the command.  cjeanner also made some suggestions in the >> > etherpad [0]. >> >> Might be a temporary way to limit the surface indeed, but an upstream >> change in Ansible would still be really nice. Predictable names is the >> only "right" way, although this will create a long sudo ruleset. A >> really long one to be honnest. Maintainability is also to be discussed >> in either way (maintain a couple of regexp vs 200+ rules.. hmmm). >> >> >> As I understand it, the problem with predicable names is they also >> become predictable to attackers (this would be the reason ansible adds >> in the random string). It helps prevent someone creating a race >> condition to replace the python script with something more nefarious. >> Its the same reason commands such as mktemp exists. > > Fair enough indeed. Both solution have their pros and cons. In order to > move on, I think the regexp in sudoers is acceptable for the following > reasons: > - limits accesses outside of ansible generated code > - allows others to still push new content without having to change sudo > listing (thanks to regexp) > - still hard to inject bad things in the executed script/code > - quick to implement (well, fastest than requiring an upstream change > that will most probably break some internal things before working > properly, and without adding more security as you explained it) Small idea: it might be interesting to check if SELinux can't be a ally for that issue in fact: dedicated context, separation, that's a SELinux kind of thing isn't it? I'm no SELinux poweruser¹, but that kind of usage is, to my small knowledge of this product, a perfect fit. Would be good to dig in that direction, don't you think? ### ¹ I'm more the poor guy at the end head-banging his desk when SELinux comes in the way ;). That hurts. > > @Juan do you agree with that statement? 
As we had some quick chat about it. > > Note: I'm not part of the security squad ;). But I like secured things. > >> >> > >> > However aside to the approach, we need to consider the impact locking >> > down might have should someone create a develop a new bit of code that >> > leverages commands wrapped in sudo and assumes ALL with be in place. >> > This of course will be blocked. >> >> This will indeed require some doc, as this is a "major" change. However, >> the use of regexp should somewhat limit the impact, especially since >> Ansible pushes its exec script in the same location. >> Even new parts should be allowed (that might be a bit of concern if we >> want to really dig in the consequences of a bad template being injected >> in some way [looking config-download ;)]). >> But at some point, we might also decide to let the OPs ensure their >> infra isn't compromised. >> Always the same thread-of with Security vs The World - convenience vs >> cumbersome management, and so on. >> >> > >> > Now my guess is that our CI would capture this as the deploy would >> > fail(?) and the developer should work out an entry is needed when >> > testing their patch, but wanted to open this up to others who know >> > testing at gate better much better than myself.  Also encourage any >> > thoughts on the topic to be introduced to the etherpad [0] >> > >> > [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security >> >> > [1] >> https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f >> >> > >> > -- >> > Luke Hinds >> >> -- >> Cédric Jeanneret >> Software Engineer >> DFG:DF >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> -- >> Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat >> e: lhinds at redhat.com  | irc: lhinds @freenode >> |t: +44 12 52 36 2483 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From lhinds at redhat.com Tue May 22 09:01:36 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 22 May 2018 10:01:36 +0100 Subject: [openstack-dev] [tripleo] Limiting sudo coverage of heat-admin / stack and other users. In-Reply-To: <91cfbb4e-33de-79b6-f1c6-89bd9025e448@redhat.com> References: <825b0cca-b9cb-181e-6b1b-388d6e90542f@redhat.com> <91cfbb4e-33de-79b6-f1c6-89bd9025e448@redhat.com> Message-ID: On Tue, May 22, 2018 at 8:24 AM, Cédric Jeanneret wrote: > > > On 05/22/2018 09:08 AM, Luke Hinds wrote: > > > > > > On Tue, May 22, 2018 at 5:27 AM, Cédric Jeanneret > > wrote: > > > > > > > > On 05/21/2018 03:49 PM, Luke Hinds wrote: > > > A few operators have requested if its possible to limit sudo's > coverage > > > on both the under / overcloud. There is concern over `ALL=(ALL) > > > NOPASSWD:ALL` , which allows someone to `sudo su`. > > > > > > This task has come under the care of the tripleo security squad. 
> > > > > > The work is being tracked and discussed here [0]. > > > > > > So far it looks like the approach will be to use regexp within > > > /etc/sudoers.d/*., to narrow down as close as possible to the > specific > > > commands called. Some services already do this with rootwrap: > > > > > > ironic ALL = (root) NOPASSWD: /usr/bin/ironic-rootwrap > > > /etc/ironic/rootwrap.conf * > > > > > > It's fairly easy to pick up a list of all sudo calls using a simple > > > script [1] > > > > > > The other prolific user of sudo is ansible / stack, for example: > > > > > > /bin/sh -c echo BECOME-SUCCESS-kldpbeueyodisjajjqthpafzadrncdff; > > > /usr/bin/python > > > /home/stack/.ansible/tmp/ansible-tmp-1526579105.0- > 109863952786117/systemd.py; > > > rm -rf > > > "/home/stack/.ansible/tmp/ansible-tmp-1526579105.0-109863952786117/" > > > > > /dev/null 2>&1 > > > > > > My feelings here are to again use regexp around the immutable non > random > > > parts of the command. cjeanner also made some suggestions in the > > > etherpad [0]. > > > > Might be a temporary way to limit the surface indeed, but an upstream > > change in Ansible would still be really nice. Predictable names is > the > > only "right" way, although this will create a long sudo ruleset. A > > really long one to be honnest. Maintainability is also to be > discussed > > in either way (maintain a couple of regexp vs 200+ rules.. hmmm). > > > > > > As I understand it, the problem with predicable names is they also > > become predictable to attackers (this would be the reason ansible adds > > in the random string). It helps prevent someone creating a race > > condition to replace the python script with something more nefarious. > > Its the same reason commands such as mktemp exists. > > Fair enough indeed. Both solution have their pros and cons. In order to > move on, I think the regexp in sudoers is acceptable for the following > reasons: > - limits accesses outside of ansible generated code > - allows others to still push new content without having to change sudo > listing (thanks to regexp) > - still hard to inject bad things in the executed script/code > - quick to implement (well, fastest than requiring an upstream change > that will most probably break some internal things before working > properly, and without adding more security as you explained it) > > Thanks for chiming in Cédric , value your contributions here. I was thinking about this earlier on my way to work.. Perhaps we could have a script in CI that fails on sudo calls being blocked (as no regexp exists for them)? This way it will prevent people going on a wild goose chase trying to work out why a patch they are working on has failed. As an example, someone might change a single argument in an iptables command (a few of those run under sudo) that desyncs the command to the sudo regexp? @Juan do you agree with that statement? As we had some quick chat about it. > > Note: I'm not part of the security squad ;). But I like secured things. > > > > > > > > > However aside to the approach, we need to consider the impact > locking > > > down might have should someone create a develop a new bit of code > that > > > leverages commands wrapped in sudo and assumes ALL with be in > place. > > > This of course will be blocked. > > > > This will indeed require some doc, as this is a "major" change. > However, > > the use of regexp should somewhat limit the impact, especially since > > Ansible pushes its exec script in the same location. 
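(That location is Ansible's remote_tmp, which defaults to ~/.ansible/tmp
and can be pinned in ansible.cfg -- shown here only as a reminder of why
the path prefix is stable enough to anchor a sudoers pattern on, not as
a TripleO default:

    [defaults]
    remote_tmp = ~/.ansible/tmp
)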
> > Even new parts should be allowed (that might be a bit of concern if > we > > want to really dig in the consequences of a bad template being > injected > > in some way [looking config-download ;)]). > > But at some point, we might also decide to let the OPs ensure their > > infra isn't compromised. > > Always the same thread-of with Security vs The World - convenience vs > > cumbersome management, and so on. > > > > > > > > Now my guess is that our CI would capture this as the deploy would > > > fail(?) and the developer should work out an entry is needed when > > > testing their patch, but wanted to open this up to others who know > > > testing at gate better much better than myself. Also encourage any > > > thoughts on the topic to be introduced to the etherpad [0] > > > > > > [0] https://etherpad.openstack.org/p/tripleo-heat-admin-security > > > > > [1] > > https://gist.github.com/lukehinds/4cdb1bf4de526a049c51f05698b8b04f > > > > > > > > -- > > > Luke Hinds > > > > -- > > Cédric Jeanneret > > Software Engineer > > DFG:DF > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > unsubscribe> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > > -- > > Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat > > e: lhinds at redhat.com | irc: lhinds @freenode > > |t: +44 12 52 36 2483 > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > Cédric Jeanneret > Software Engineer > DFG:DF > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Luke Hinds | NFV Partner Engineering | CTO Office | Red Hat e: lhinds at redhat.com | irc: lhinds @freenode | t: +44 12 52 36 2483 -------------- next part -------------- An HTML attachment was scrubbed... URL: From MM9745 at att.com Tue May 22 12:59:34 2018 From: MM9745 at att.com (MCEUEN, MATT) Date: Tue, 22 May 2018 12:59:34 +0000 Subject: [openstack-dev] [openstack-helm] No team meeting this week Message-ID: <7C64A75C21BB8D43BD75BB18635E4D8965D7394D@MOSTLS1MSGUSRFF.ITServices.sbc.com> Reminder: there will be no OpenStack-Helm team meeting this week due to the OpenStack Summit. Thanks, Matt McEuen From vdrok at mirantis.com Tue May 22 15:01:53 2018 From: vdrok at mirantis.com (Vladyslav Drok) Date: Tue, 22 May 2018 18:01:53 +0300 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: On Mon, May 21, 2018 at 4:49 PM Jim Rollenhagen wrote: > On Sun, May 20, 2018 at 10:45 AM, Julia Kreger < > juliaashleykreger at gmail.com> wrote: > >> Greetings everyone! >> >> I would like to propose Mark Goddard to ironic-core. I am aware he >> recently joined kolla-core, but his contributions in ironic have been >> insightful and valuable. The kind of value that comes from operative use. 
>> >> I also make this nomination knowing that our community landscape is >> changing and that we must not silo our team responsibilities or ability to >> move things forward to small highly focused team. I trust Mark to use his >> judgement as he has time or need to do so. He might not always have time, >> but I think at the end of the day, we’re all in that same boat. >> > > +2! > > // jim > +1 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Tue May 22 18:29:49 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Tue, 22 May 2018 11:29:49 -0700 Subject: [openstack-dev] Slides for Kolla onboarding & project update Message-ID: Hi all, Here are the slide decks for these sessions. The project update should be available shortly on YouTube also. https://www.slideshare.net/PaulBourke1/kolla-onboarding-vancouver-2018 https://www.slideshare.net/PaulBourke1/kolla-project-update-vancouver-2018 Thanks, -Paul From dtroyer at gmail.com Tue May 22 18:54:59 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 22 May 2018 13:54:59 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions Message-ID: StarlingX (aka STX) was announced this week at the summit, there is a PR to create project repos in Gerrit at [0]. STX is basically Wind River's Titanium Cloud product, which is a turn-key cloud deployment. For background I have started putting notes, some faq-ish questions and references to blog-ish materials in [1]. The alternatives I have thought of or have been suggested so far all seem to be worse in some way. The major objections I have heard are around the precedent and messaging of the existence of OpenStack project forks, independent of the form they take[2]. There is a secondary concern about OpenStack Foundation hosting fork of other outside projects. At this point I am planning to change the review[0] to only add the repos for the new sub-projects so we can continue getting things set up while the discussions continue on how best to handle the upstream work. I want to continue those discussions wherever they will be productive, respond here or find me at the summit. IRC discussion has been in #openstack-tc so far. More background The set of STX repos include a number of patches to upstream projects, most of which are intended to be proposed upstream. The patches include features specific to Titanium's use cases and bug fixes as well as some bits that may or may not be useful in other use cases. The intention is to reduce this technical debt to zero; there were a handful of repos where the patch count was reduced to zero that we were able to eliminate in the transition to StarlingX. This is the goal for all of the remaining upstream repos. I chose to maintain the status of the Titanium upstream work as git repos for developer and testing convenience, as opposed to publishing patch file sets. Developers will need to re-create a repo locally in order to work or test the code and create reviews (there are more git challenges here). It would be challenging to do functional testing on the rest of STX in CI without access to all of the code. 
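To make that concrete, re-creating a tree locally and inspecting the delta
would look something like this (a sketch; the staging organization name is
taken from links elsewhere in this thread, and the upstream branch point is
an assumption to adjust per project):

    # rebuild a local tree for one of the forked projects and list the divergence
    git clone https://github.com/starlingx-staging/stx-nova.git
    cd stx-nova
    git remote add upstream https://git.openstack.org/openstack/nova
    git fetch upstream
    # assuming the fork is based on stable/pike; substitute the real branch point
    git log --oneline upstream/stable/pike..HEAD
    git diff --stat upstream/stable/pike...HEAD
    # individual changes can then be exported for review upstream
    git format-patch upstream/stable/pike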
dt [0] https://review.openstack.org/#/c/569562/ [1] https://etherpad.openstack.org/p/stx-faq [2] Honestly I don't think that hosting a repo of patches to OpenStack projects is any different than hosting the repo itself. Also, anyone remember Qmail? :) -- Dean Troyer dtroyer at gmail.com From jaypipes at gmail.com Tue May 22 20:57:42 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 22 May 2018 16:57:42 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: Message-ID: Warning: strong opinions ahead. On 05/22/2018 02:54 PM, Dean Troyer wrote: > Developers will need to re-create a repo locally in > order to work or test the code and create reviews (there are more git > challenges here). It would be challenging to do functional testing on > the rest of STX in CI without access to all of the code. Please don't take this the wrong way, Dean, but you aren't seriously suggesting that anyone outside of Windriver/Intel would ever contribute to these repos are you? What motivation would anyone outside of Windriver/Intel -- who must make money on this effort otherwise I have no idea why they are doing it -- have to commit any code at all to StarlingX? I'm truly wondering why was this even open-sourced to begin with? I'm as big a supporter of open source as anyone, but I'm really struggling to comprehend the business, technical, or marketing decisions behind this action. Please help me understand. What am I missing? My personal opinion is that I don't think that any products, derivatives or distributions should be hosted on openstack.org infrastructure. Are any of the distributions of OpenStack listed at https://www.openstack.org/marketplace/distros/ hosted on openstack.org infrastructure? No. And I think that is completely appropriate. Best, -jay From haleyb.dev at gmail.com Tue May 22 21:41:18 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Tue, 22 May 2018 17:41:18 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: Message-ID: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> On 05/22/2018 04:57 PM, Jay Pipes wrote: > Warning: strong opinions ahead. > > On 05/22/2018 02:54 PM, Dean Troyer wrote: >> Developers will need to re-create a repo locally in >> order to work or test the code and create reviews (there are more git >> challenges here). It would be challenging to do functional testing on >> the rest of STX in CI without access to all of the code. > > Please don't take this the wrong way, Dean, but you aren't seriously > suggesting that anyone outside of Windriver/Intel would ever contribute > to these repos are you? > > What motivation would anyone outside of Windriver/Intel -- who must make > money on this effort otherwise I have no idea why they are doing it -- > have to commit any code at all to StarlingX? I read this the other way - the goal is to get all the forked code from StarlingX into upstream repos. That seems backwards from how this should have been done (i.e. upstream first), and I don't see how a project would prioritize that over other work. > I'm truly wondering why was this even open-sourced to begin with? I'm as > big a supporter of open source as anyone, but I'm really struggling to > comprehend the business, technical, or marketing decisions behind this > action. Please help me understand. What am I missing? I'm just as confused. 
-Brian > My personal opinion is that I don't think that any products, derivatives > or distributions should be hosted on openstack.org infrastructure. > > Are any of the distributions of OpenStack listed at > https://www.openstack.org/marketplace/distros/ hosted on openstack.org > infrastructure? No. And I think that is completely appropriate. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From harlowja at fastmail.com Tue May 22 23:30:32 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Tue, 22 May 2018 16:30:32 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: <5B04A818.7070402@fastmail.com> Also I am concerned that the repo just seems to have mega-commits like: https://github.com/starlingx-staging/stx-glance/commit/1ec64167057e3368f27a1a81aca294b771e79c5e https://github.com/starlingx-staging/stx-nova/commit/71acfeae0d1c59fdc77704527d763bd85a276f9a (not so mega) https://github.com/starlingx-staging/stx-glance/commit/1ec64167057e3368f27a1a81aca294b771e79c5e I am very confused now as well; it feels a lot like a code dump (which I get and it's nice to see companies patches, but it seems odd that this would ever be put anywhere official and expect?/hope? people to dissect and extract code that starlingx obviously couldn't put the manpower behind to do the same). Brian Haley wrote: > On 05/22/2018 04:57 PM, Jay Pipes wrote: >> Warning: strong opinions ahead. >> >> On 05/22/2018 02:54 PM, Dean Troyer wrote: >>> Developers will need to re-create a repo locally in >>> order to work or test the code and create reviews (there are more git >>> challenges here). It would be challenging to do functional testing on >>> the rest of STX in CI without access to all of the code. >> >> Please don't take this the wrong way, Dean, but you aren't seriously >> suggesting that anyone outside of Windriver/Intel would ever >> contribute to these repos are you? >> >> What motivation would anyone outside of Windriver/Intel -- who must >> make money on this effort otherwise I have no idea why they are doing >> it -- have to commit any code at all to StarlingX? > > I read this the other way - the goal is to get all the forked code from > StarlingX into upstream repos. That seems backwards from how this should > have been done (i.e. upstream first), and I don't see how a project > would prioritize that over other work. > >> I'm truly wondering why was this even open-sourced to begin with? I'm >> as big a supporter of open source as anyone, but I'm really struggling >> to comprehend the business, technical, or marketing decisions behind >> this action. Please help me understand. What am I missing? > > I'm just as confused. > > -Brian > > >> My personal opinion is that I don't think that any products, >> derivatives or distributions should be hosted on openstack.org >> infrastructure. >> >> Are any of the distributions of OpenStack listed at >> https://www.openstack.org/marketplace/distros/ hosted on openstack.org >> infrastructure? No. And I think that is completely appropriate. 
>>
>> Best,
>> -jay
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zshi at redhat.com Wed May 23 00:09:15 2018
From: zshi at redhat.com (Zenghui Shi)
Date: Wed, 23 May 2018 08:09:15 +0800
Subject: [openstack-dev] Proposing Mark Goddard to ironic-core
In-Reply-To: References: Message-ID:

+1

On Tue, May 22, 2018 at 11:01 PM, Vladyslav Drok wrote:

>
>
> On Mon, May 21, 2018 at 4:49 PM Jim Rollenhagen
> wrote:
>
>> On Sun, May 20, 2018 at 10:45 AM, Julia Kreger <
>> juliaashleykreger at gmail.com> wrote:
>>
>>> Greetings everyone!
>>>
>>> I would like to propose Mark Goddard to ironic-core. I am aware he
>>> recently joined kolla-core, but his contributions in ironic have been
>>> insightful and valuable. The kind of value that comes from operative use.
>>>
>>> I also make this nomination knowing that our community landscape is
>>> changing and that we must not silo our team responsibilities or ability to
>>> move things forward to small highly focused team. I trust Mark to use his
>>> judgement as he has time or need to do so. He might not always have time,
>>> but I think at the end of the day, we’re all in that same boat.
>>>
>>
>> +2!
>>
>> // jim
>>
>
> +1
>
>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jaosorior at gmail.com Wed May 23 05:18:58 2018
From: jaosorior at gmail.com (Juan Antonio Osorio)
Date: Wed, 23 May 2018 08:18:58 +0300
Subject: [openstack-dev] [tripleo] Security Squad meeting cancelled this week
Message-ID:

Hello, A lot of folks are in the OpenStack summit, so we'll cancel the Security Squad meeting today. BR

--
Juan Antonio Osorio R.
e-mail: jaosorior at gmail.com

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From rico.lin.guanyu at gmail.com Wed May 23 11:05:51 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 23 May 2018 04:05:51 -0700
Subject: [openstack-dev] [heat] No meeting this week
Message-ID:

Hi all, As the OpenStack Summit is happening this week, let’s skip the Heat meeting today.

--
May The Force of OpenStack Be With You,
*Rico Lin* irc: ricolin

-------------- next part --------------
An HTML attachment was scrubbed... 
URL: From zhipengh512 at gmail.com Wed May 23 11:46:51 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 23 May 2018 19:46:51 +0800 Subject: [openstack-dev] [cyborg]no meeting today Message-ID: Enjoy the water view folks :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgolovat at redhat.com Wed May 23 12:43:42 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Wed, 23 May 2018 14:43:42 +0200 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching Message-ID: Hi, Looking at [1], I am thinking about the price we paid for not branching tripleo-quickstart. Can we discuss the options to prevent the issues such as [1]? Thank you in advance. [1] https://review.openstack.org/#/c/569830/4 -- Best Regards, Sergii Golovatiuk From bdobreli at redhat.com Wed May 23 12:58:53 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 23 May 2018 14:58:53 +0200 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: <29192d89-1dc2-52f1-65b2-6896512ea9fd@redhat.com> On 5/23/18 2:43 PM, Sergii Golovatiuk wrote: > Hi, > > Looking at [1], I am thinking about the price we paid for not > branching tripleo-quickstart. Can we discuss the options to prevent > the issues such as [1]? Thank you in advance. > > [1] https://review.openstack.org/#/c/569830/4 > That was only a half of the full price, actually, see also additional multinode containers check/gate jobs [0],[1] from now on executed against the master branches of all tripleo repos (IIUC), for release -2 and -1 from master. [0] https://review.openstack.org/#/c/569932/ [1] https://review.openstack.org/#/c/569854/ -- Best regards, Bogdan Dobrelya, Irc #bogdando From dtantsur at redhat.com Wed May 23 13:54:55 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 23 May 2018 15:54:55 +0200 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: <0b1a929a-9679-9dd7-4a45-42daa4cbe0fd@redhat.com> On 05/20/2018 04:45 PM, Julia Kreger wrote: > Greetings everyone! > > I would like to propose Mark Goddard to ironic-core. I am aware he recently > joined kolla-core, but his contributions in ironic have been insightful and > valuable. The kind of value that comes from operative use. > > I also make this nomination knowing that our community landscape is changing and > that we must not silo our team responsibilities or ability to move things > forward to small highly focused team. I trust Mark to use his judgement as he > has time or need to do so. He might not always have time, but I think at the end > of the day, we’re all in that same boat. I'm not sure I understand the first sentence, but I'm fully in support of adding Mark anyway. 
> > -Julia > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From geguileo at redhat.com Wed May 23 14:09:49 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 23 May 2018 16:09:49 +0200 Subject: [openstack-dev] [Cinder] Reusing Cinder drivers' code directly without running Cinder: In our applications, from Ansible, and for Containers Message-ID: <20180523140949.4gppdo4tbqvz2cjb@localhost> Hi, During the last OpenStack PTG, I announced in the Cinder room the development of cinderlib, and explained how this library allowed any Python application to use Cinder storage drivers (there are over 80) without running any services. This takes the standalone effort one step further. Now you don't need to run any Cinder services (API, Scheduler, Volume), RabbitMQ, or even a DB, to manage and attach volumes and snapshots. Even though we don't need a DB we still need to persist the metadata, but the library supports JSON serialization so we can save it wherever we want. I'm also finishing a metadata persistence plugin mechanism to allow external plugins for different storage solutions (DB, K8s CRDs, Key-Value systems...). This library opens a broad range of possibilities for the Cinder drivers, and I have explored a couple of them: Using it from Ansible and in containers with CSI driver that includes the latest features including snapshots that were introduced last week. The projects' documentation is lacking, but I've written a couple of blog posts with a brief introduction to these POCs for anybody that is interested: - Cinderlib: https://gorka.eguileor.com/cinderlib - Ansible storage role: https://gorka.eguileor.com/ansible-role-storage - Cinderlib-CSI: https://gorka.eguileor.com/cinderlib-csi And the repositories can be found in GitHub: - Cinderlib: https://github.com/akrog/cinderlib - Ansible storage role: https://github.com/akrog/ansible-role-storage - Cinderlib-CSI:https://github.com/akrog/cinderlib-csi Cheers, Gorka. From sshnaidm at redhat.com Wed May 23 14:30:23 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 23 May 2018 17:30:23 +0300 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching Message-ID: Hi, Sergii thanks for the question. It's not first time that this topic is raised and from first view it could seem that branching would help to that sort of issues. Although it's not the case. Tripleo-quickstart(-extras) is part of CI code, as well as tripleo-ci repo which have never been branched. The reason for that is relative small impact on CI code from product branching. Think about backport almost *every* patch to oooq and extras to all supported branches, down to newton at least. This will be a really *huge* price and non reasonable work. Just think about active maintenance of 3-4 versions of CI code in each of 3 repositories. It will take all time of CI team with almost zero value of this work. What regards patch you listed, we would have backport this change to *every* branch, and it wouldn't really help to avoid the issue. The source of problem is not branchless repo here. Regarding catching such issues and Bogdans point, that's right we added a few jobs to catch such issues in the future and prevent breakages, and a few running jobs is reasonable price to keep configuration working in all branches. 
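(As a quick illustration of the cinderlib usage Gorka describes, a minimal
sketch based on the project README; the LVM driver options shown are
assumptions that vary per backend, so treat this as illustrative rather
than a definitive API reference:)

    import cinderlib

    # Sketch: manage a volume with no Cinder services, RabbitMQ, or DB running
    lvm = cinderlib.Backend(
        volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
        volume_group='cinder-volumes',   # assumes the VG already exists
        volume_backend_name='lvm_iscsi',
    )

    vol = lvm.create_volume(size=1)      # size in GB
    snap = vol.create_snapshot()         # snapshot support landed recently
    attach = vol.attach()                # attach to the local host
    print('volume available at', attach.path)
    vol.detach()
    snap.delete()
    vol.delete()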
Comparing to maintenance nightmare with branches of CI code, it's really a *zero* price. Thanks On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk wrote: > Hi, > > Looking at [1], I am thinking about the price we paid for not > branching tripleo-quickstart. Can we discuss the options to prevent > the issues such as [1]? Thank you in advance. > > [1] https://review.openstack.org/#/c/569830/4 > > -- > Best Regards, > Sergii Golovatiuk > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Wed May 23 15:28:42 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Wed, 23 May 2018 11:28:42 -0400 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: ++. Great suggestion! --ruby On Sun, May 20, 2018 at 10:45 AM, Julia Kreger wrote: > Greetings everyone! > > I would like to propose Mark Goddard to ironic-core. I am aware he > recently joined kolla-core, but his contributions in ironic have been > insightful and valuable. The kind of value that comes from operative use. > > I also make this nomination knowing that our community landscape is > changing and that we must not silo our team responsibilities or ability to > move things forward to small highly focused team. I trust Mark to use his > judgement as he has time or need to do so. He might not always have time, > but I think at the end of the day, we’re all in that same boat. > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jsbryant at electronicjungle.net Wed May 23 15:38:54 2018 From: jsbryant at electronicjungle.net (Jay Bryant) Date: Wed, 23 May 2018 08:38:54 -0700 Subject: [openstack-dev] [Cinder] no meeting today Message-ID: Just a reminder there is no meeting because ofthe summit today. Jay -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed May 23 16:04:57 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 23 May 2018 10:04:57 -0600 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman wrote: > Hi, Sergii > > thanks for the question. It's not first time that this topic is raised and > from first view it could seem that branching would help to that sort of > issues. > > Although it's not the case. Tripleo-quickstart(-extras) is part of CI code, > as well as tripleo-ci repo which have never been branched. The reason for > that is relative small impact on CI code from product branching. Think about > backport almost *every* patch to oooq and extras to all supported branches, > down to newton at least. This will be a really *huge* price and non > reasonable work. Just think about active maintenance of 3-4 versions of CI > code in each of 3 repositories. It will take all time of CI team with almost > zero value of this work. 
> So I'm not sure I completely agree with this assessment as there is a price paid for every {%if release in [...]%} that we have to carry in oooq{,-extras}. These go away if we branch because we don't have to worry about breaking previous releases or current release (which may or may not actually have CI results). > What regards patch you listed, we would have backport this change to *every* > branch, and it wouldn't really help to avoid the issue. The source of > problem is not branchless repo here. > No we shouldn't be backporting every change. The logic in oooq-extras should be version specific and if we're changing an interface in tripleo in a breaking fashion we're doing it wrong in tripleo. If we're backporting things to work around tripleo issues, we're doing it wrong in quickstart. > Regarding catching such issues and Bogdans point, that's right we added a > few jobs to catch such issues in the future and prevent breakages, and a few > running jobs is reasonable price to keep configuration working in all > branches. Comparing to maintenance nightmare with branches of CI code, it's > really a *zero* price. > Nothing is free. If there's a high maintenance cost, we haven't properly identified the optimal way to separate functionality between tripleo/quickstart. I have repeatedly said that the provisioning parts of quickstart should be separate because those aren't tied to a tripleo version and this along with the scenario configs should be the only unbranched repo we have. Any roles related to how to configure/work with tripleo should be branched and tied to a stable branch of tripleo. This would actually be beneficial for tripleo as well because then we can see when we are introducing backwards incompatible changes. Thanks, -Alex > Thanks > > > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk > wrote: >> >> Hi, >> >> Looking at [1], I am thinking about the price we paid for not >> branching tripleo-quickstart. Can we discuss the options to prevent >> the issues such as [1]? Thank you in advance. >> >> [1] https://review.openstack.org/#/c/569830/4 >> >> -- >> Best Regards, >> Sergii Golovatiuk >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From colleen at gazlene.net Wed May 23 16:49:16 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 23 May 2018 18:49:16 +0200 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: Message-ID: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> On Tue, May 22, 2018, at 10:57 PM, Jay Pipes wrote: > > Are any of the distributions of OpenStack listed at > https://www.openstack.org/marketplace/distros/ hosted on openstack.org > infrastructure? No. And I think that is completely appropriate. Hang on, that's not quite true. 
From that list I see Mirantis, Debian, Ubuntu, and RedHat, who all have (or had until recently) significant parts of their distros hosted on openstack.org infrastructure and are/were even official OpenStack projects governed by the TC. It's also important to make the distinction between hosting something on openstack.org infrastructure and recognizing it in an official capacity. StarlingX is seeking both, but in my opinion the code hosting is not the problem here. Colleen From sshnaidm at redhat.com Wed May 23 16:49:46 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 23 May 2018 19:49:46 +0300 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: Alex, the problem is that you're working and focusing mostly on release specific code like featuresets and some scripts. But tripleo-quickstart(-extras) and tripleo-ci is much *much* more than set of featuresets. Only 10% of the code may be related to releases and branches, while other 90% is completely independent and not related to releases. So in 90% code we DO need to backport every change, take for example the latest patch to extras: https://review.openstack.org/#/c/570167/, it's fixing reproducer. If oooq-extra was branched, we would need to backport this fix to every and every branch. And the same for all other 90% of code, which is complete nonsense. Just because not using "{% if release %}" construct - to block the whole work of CI team and make the CI code is absolutely unmaintainable? Some of release related templates we moved recently from tripleo-ci to THT repo like scenarios, OC templates, etc. If we discover another things in oooq that could be moved to branched THT I'd be only happy for that. Sometimes it could be hard to maintain one file in extras templates with different logic for releases, like we have in tempest configuration for example. The solution is to create a few release-related templates and use one that match the current branch. It doesn't affect 90% of code and still "branch-like" approach. But I didn't see other scripts that are so release dependent. If we'll have ones, we could do the same. For now I see "{% if release %}" construct working very well. I didn't see still any advantage of branching CI code, except of a little bit nicer jinja templates without "{% if release ", but amount of disadvantages is so huge, that it'll literally block all current work in CI. Thanks On Wed, May 23, 2018 at 7:04 PM, Alex Schultz wrote: > On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman > wrote: > > Hi, Sergii > > > > thanks for the question. It's not first time that this topic is raised > and > > from first view it could seem that branching would help to that sort of > > issues. > > > > Although it's not the case. Tripleo-quickstart(-extras) is part of CI > code, > > as well as tripleo-ci repo which have never been branched. The reason for > > that is relative small impact on CI code from product branching. Think > about > > backport almost *every* patch to oooq and extras to all supported > branches, > > down to newton at least. This will be a really *huge* price and non > > reasonable work. Just think about active maintenance of 3-4 versions of > CI > > code in each of 3 repositories. It will take all time of CI team with > almost > > zero value of this work. > > > > So I'm not sure I completely agree with this assessment as there is a > price paid for every {%if release in [...]%} that we have to carry in > oooq{,-extras}. 
These go away if we branch because we don't have to > worry about breaking previous releases or current release (which may > or may not actually have CI results). > > > What regards patch you listed, we would have backport this change to > *every* > > branch, and it wouldn't really help to avoid the issue. The source of > > problem is not branchless repo here. > > > > No we shouldn't be backporting every change. The logic in oooq-extras > should be version specific and if we're changing an interface in > tripleo in a breaking fashion we're doing it wrong in tripleo. If > we're backporting things to work around tripleo issues, we're doing it > wrong in quickstart. > > > Regarding catching such issues and Bogdans point, that's right we added a > > few jobs to catch such issues in the future and prevent breakages, and a > few > > running jobs is reasonable price to keep configuration working in all > > branches. Comparing to maintenance nightmare with branches of CI code, > it's > > really a *zero* price. > > > > Nothing is free. If there's a high maintenance cost, we haven't > properly identified the optimal way to separate functionality between > tripleo/quickstart. I have repeatedly said that the provisioning > parts of quickstart should be separate because those aren't tied to a > tripleo version and this along with the scenario configs should be the > only unbranched repo we have. Any roles related to how to > configure/work with tripleo should be branched and tied to a stable > branch of tripleo. This would actually be beneficial for tripleo as > well because then we can see when we are introducing backwards > incompatible changes. > > Thanks, > -Alex > > > Thanks > > > > > > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk > > wrote: > >> > >> Hi, > >> > >> Looking at [1], I am thinking about the price we paid for not > >> branching tripleo-quickstart. Can we discuss the options to prevent > >> the issues such as [1]? Thank you in advance. > >> > >> [1] https://review.openstack.org/#/c/569830/4 > >> > >> -- > >> Best Regards, > >> Sergii Golovatiuk > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > -- > > Best regards > > Sagi Shnaidman > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed May 23 17:26:29 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 23 May 2018 10:26:29 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: <52ac5e31-4361-9e23-bacd-8765c3c31875@gmail.com> On 5/23/2018 9:49 AM, Colleen Murphy wrote: > Hang on, that's not quite true. From that list I see Mirantis, Debian, Ubuntu, and RedHat, who all have (or had until recently) significant parts of their distros hosted on openstack.org infrastructure and are/were even official OpenStack projects governed by the TC. But isn't that primarily deployment tooling (Fuel, Charms, TripleO) rather than forks of other existing service projects like nova/cinder/ironic? -- Thanks, Matt From aschultz at redhat.com Wed May 23 17:29:47 2018 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 23 May 2018 11:29:47 -0600 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: On Wed, May 23, 2018 at 10:49 AM, Sagi Shnaidman wrote: > Alex, > > the problem is that you're working and focusing mostly on release specific > code like featuresets and some scripts. But tripleo-quickstart(-extras) and > tripleo-ci is much *much* more than set of featuresets. Only 10% of the code > may be related to releases and branches, while other 90% is completely > independent and not related to releases. > It is not necessarily about release specific code, it's about being able to reduce the impact of a change. From my original reply: > If there's a high maintenance cost, we haven't properly identified the optimal way to separate functionality between tripleo/quickstart. IMHO this is a side effect of having a whole bunch of roles in a single repo. oooq-extras has a mix of tripleo and non-tripleo related content. The reproducer IMHO is related to provisioning and could fall in the oooq repo and not oooq-extras. This is a structure problem with quickstart. If it's not version specific, then don't put it in a version specific repo. But that doesn't mean don't use version specific repos at all. This is one of the reasons why we're opting not to use this pattern of a bunch of roles in a single repo for tripleo itself[0][1][2]. We learned with the puppet modules that carrying all this stuff in a single repo has a huge maintenance cost and if you split them out you can identify re-usability and establish proper patterns for moving functionality into a shared place[3]. Yes there is a maintenance cost of maintaining independent repos, but at the same time there's a benefit of re-usability by other projects/groups when you expose important pieces of functionality as a standalone. You can establish clear ways to interact with each piece, test items, and release independently. For example the ansible-role-container-registry is not tripleo specific and anyone looking to manage a standalone docker registry can use it & contribute. > So in 90% code we DO need to backport every change, take for example the > latest patch to extras: https://review.openstack.org/#/c/570167/, it's > fixing reproducer. If oooq-extra was branched, we would need to backport > this fix to every and every branch. And the same for all other 90% of code, > which is complete nonsense. 
> Just because not using "{% if release %}" construct - to block the whole > work of CI team and make the CI code is absolutely unmaintainable? > And you're saying what we currently have is maintainable? We keep breaking ourselves, there's big gaps in coverage and it takes time[4][5] to identify breakages. I don't consider that maintainable because this is a recurring topic because we clearly haven't fixed it with the current setup. It's time to re-evaluate what we have an see if there's room for improvement. I know I wasn't proposing to branch all the repositories, but it might make sense to figure out if there's a way to reduce our recurring issues with stable branches or independent modules for some of the functions in CI. > Some of release related templates we moved recently from tripleo-ci to THT > repo like scenarios, OC templates, etc. If we discover another things in > oooq that could be moved to branched THT I'd be only happy for that. > > Sometimes it could be hard to maintain one file in extras templates with > different logic for releases, like we have in tempest configuration for > example. The solution is to create a few release-related templates and use > one that match the current branch. It doesn't affect 90% of code and still > "branch-like" approach. But I didn't see other scripts that are so release > dependent. If we'll have ones, we could do the same. For now I see "{% if > release %}" construct working very well. Considering this is how we broke Queens, I'm not sure I agree. > > I didn't see still any advantage of branching CI code, except of a little > bit nicer jinja templates without "{% if release ", but amount of > disadvantages is so huge, that it'll literally block all current work in CI. > It's about reducing our risk with test coverage. We do not properly test all jobs and all configurations when we make these changes. This is a repeated problem and when we have to add version specific logic, unless we're able to identify what this is actually impacting and verify with jobs we have a risk of breaking ourselves. We've seen that code review is not sufficient for these changes as we merge things and only find out after they've been merged that we broke stable branches. Then it takes folks tracking down changes to decipher what we broke. For example the original patch[4] broke Queens for about a week. That's 7 days of nothing being able to be merged, that's not OK. Thanks, -Alex [0] http://git.openstack.org/cgit/openstack/ansible-role-container-registry/ [1] http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/ [2] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-keystone/ [3] http://git.openstack.org/cgit/openstack/puppet-openstacklib/ [4] https://review.openstack.org/#/c/565856/ [5] https://review.openstack.org/#/c/569830 > Thanks > > > > On Wed, May 23, 2018 at 7:04 PM, Alex Schultz wrote: >> >> On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman >> wrote: >> > Hi, Sergii >> > >> > thanks for the question. It's not first time that this topic is raised >> > and >> > from first view it could seem that branching would help to that sort of >> > issues. >> > >> > Although it's not the case. Tripleo-quickstart(-extras) is part of CI >> > code, >> > as well as tripleo-ci repo which have never been branched. The reason >> > for >> > that is relative small impact on CI code from product branching. Think >> > about >> > backport almost *every* patch to oooq and extras to all supported >> > branches, >> > down to newton at least. 
This will be a really *huge* price and non >> > reasonable work. Just think about active maintenance of 3-4 versions of >> > CI >> > code in each of 3 repositories. It will take all time of CI team with >> > almost >> > zero value of this work. >> > >> >> So I'm not sure I completely agree with this assessment as there is a >> price paid for every {%if release in [...]%} that we have to carry in >> oooq{,-extras}. These go away if we branch because we don't have to >> worry about breaking previous releases or current release (which may >> or may not actually have CI results). >> >> > What regards patch you listed, we would have backport this change to >> > *every* >> > branch, and it wouldn't really help to avoid the issue. The source of >> > problem is not branchless repo here. >> > >> >> No we shouldn't be backporting every change. The logic in oooq-extras >> should be version specific and if we're changing an interface in >> tripleo in a breaking fashion we're doing it wrong in tripleo. If >> we're backporting things to work around tripleo issues, we're doing it >> wrong in quickstart. >> >> > Regarding catching such issues and Bogdans point, that's right we added >> > a >> > few jobs to catch such issues in the future and prevent breakages, and a >> > few >> > running jobs is reasonable price to keep configuration working in all >> > branches. Comparing to maintenance nightmare with branches of CI code, >> > it's >> > really a *zero* price. >> > >> >> Nothing is free. If there's a high maintenance cost, we haven't >> properly identified the optimal way to separate functionality between >> tripleo/quickstart. I have repeatedly said that the provisioning >> parts of quickstart should be separate because those aren't tied to a >> tripleo version and this along with the scenario configs should be the >> only unbranched repo we have. Any roles related to how to >> configure/work with tripleo should be branched and tied to a stable >> branch of tripleo. This would actually be beneficial for tripleo as >> well because then we can see when we are introducing backwards >> incompatible changes. >> >> Thanks, >> -Alex >> >> > Thanks >> > >> > >> > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk >> > wrote: >> >> >> >> Hi, >> >> >> >> Looking at [1], I am thinking about the price we paid for not >> >> branching tripleo-quickstart. Can we discuss the options to prevent >> >> the issues such as [1]? Thank you in advance. 
>> >> >> >> [1] https://review.openstack.org/#/c/569830/4 >> >> >> >> -- >> >> Best Regards, >> >> Sergii Golovatiuk >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > -- >> > Best regards >> > Sagi Shnaidman >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Wed May 23 17:35:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 23 May 2018 17:35:36 +0000 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: <20180523173535.zvozyf2wncymnokf@yuggoth.org> On 2018-05-23 18:49:16 +0200 (+0200), Colleen Murphy wrote: [...] > It's also important to make the distinction between hosting > something on openstack.org infrastructure and recognizing it in an > official capacity. StarlingX is seeking both, but in my opinion > the code hosting is not the problem here. This may also be a poor time to mention that there have been discussions within the Infra team for over a year about renaming the infrastructure we're managing, since it's done in service of more than just the OpenStack project. The hardest part has been coming up with a good name. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jaypipes at gmail.com Wed May 23 17:48:56 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 23 May 2018 13:48:56 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: On 05/23/2018 12:49 PM, Colleen Murphy wrote: > On Tue, May 22, 2018, at 10:57 PM, Jay Pipes wrote: >> >> Are any of the distributions of OpenStack listed at >> https://www.openstack.org/marketplace/distros/ hosted on openstack.org >> infrastructure? No. And I think that is completely appropriate. > > Hang on, that's not quite true. 
From that list I see Mirantis, Debian, Ubuntu, and RedHat, who all have (or had until recently) significant parts of their distros hosted on openstack.org infrastructure and are/were even official OpenStack projects governed by the TC. I believe you may be confusing packages (or package specs) with distributions? Mirantis OpenStack was never hosted on an openstack infrastructure. Fuel is, as are deb spec files and Puppet manifests, etc. But the distribution of OpenStack is the collection of all those specs/build files along with a default configuration and things like project deltas exposed as patch files. Same goes for RDO, Canonical OpenStack, etc. > It's also important to make the distinction between hosting something on openstack.org infrastructure and recognizing it in an official capacity. StarlingX is seeking both, but in my opinion the code hosting is not the problem here. Yep, you're absolutely right that there is a distinction between hosting and consuming the foundation's resources and recognizing StarlingX in some official capacity. I'm concerned about both items. My concern with the former item is that I believe this is setting a precedent that the foundation's resources are being used to host a particular OpenStack distribution -- which is something I don't believe should happen. Vendor products/distributions [1] should be supported by that vendor, IMHO. [2] My concern with the latter item is more an annoyance with what I see as Intel / Wind River playing the Linux Foundation against the OpenStack foundation to see which will bear the burden of supporting code that I feel is being dumped on the upstream community. I fully understand that Dean has been put into a very awkward situation with all of this, and I want to be clear that I mean no disrespect towards any Intel or Wind River engineer/contributor. My gripe is with the business/management decisions that led to this. Dean was very gracious in answering a number of my questions on the etherpad linked in the original post. Thank you to Dean for being gracious under fire. Finally, I'd like to say that I did read the long discussion thread the TC had about this [3]. A number of the TC folks brought up interesting points about the subject at hand, and I recognize there's a bit of a damned-if-we-do-damned-if-we-don't situation. Jeremy pointed out concern about the optics of having the Linux Foundation hosting a fork of OpenStack and how bad that would look. A number of folks, including Jeremy, also brought up the potential renaming of the OpenStack Foundation to the Open Infrastructure Foundation and what such a rename might do to ease concerns over things like Airship and StarlingX. I don't personally feel a rename would ease much of the discontent, but I'm also clearly biased and recognize that I am so. One point that I brought up on the etherpad was whether folks have considered an "edge constellation" instead of a fork of OpenStack? In other words, the edge constellation would be a description of an opinionated build of OpenStack (and other supporting services) that would be focused on the mobile/edge cloud use cases, but there would not be a fork of OpenStack. Anyway, I think it's worth considering at least; it's a sticky and awkward situation, for sure. 
Best, -jay [1] Yes, even if that vendor has now chosen a different strategy of open sourcing their code versus keeping it proprietary [2] For the record, I believe it was a mistake to put Mirantis' Fuel product (and let's face it, Fuel was a product of Mirantis) under the openstack.org's hosting. [3] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-05-20.log.html From juliaashleykreger at gmail.com Wed May 23 17:58:13 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 23 May 2018 13:58:13 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: On Tue, May 22, 2018 at 5:41 PM, Brian Haley wrote: > On 05/22/2018 04:57 PM, Jay Pipes wrote: [trim] > I read this the other way - the goal is to get all the forked code from > StarlingX into upstream repos. That seems backwards from how this should > have been done (i.e. upstream first), and I don't see how a project would > prioritize that over other work. There is definitely value to be gained for both projects in terms of a different point of view that might not have been able to play out in the public community, but since we're dealing with squashed commits of changes, it is really hard for us to delineate history/origin of code fragments, and without that it makes it near impossible for projects to even help them reconcile their technical debt because of that and the lacking context surrounding that. It would be so much more friendly to the community if we had stacks of patch files that we could work with git. >> I'm truly wondering why was this even open-sourced to begin with? I'm as >> big a supporter of open source as anyone, but I'm really struggling to >> comprehend the business, technical, or marketing decisions behind this >> action. Please help me understand. What am I missing? > > > I'm just as confused. Can I add myself to the list of confused people wanting to understand better? I can see and understand value, but context and understanding as to why as I mentioned above is going to be the main limiter for interaction. From skaplons at redhat.com Wed May 23 17:58:42 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Wed, 23 May 2018 10:58:42 -0700 Subject: [openstack-dev] [neutron] Failing fullstack and ovsfw jobs Message-ID: <5908DFC2-9D8D-4794-88A8-7DA04E4AA1DB@redhat.com> Hi, Yesterday we had issue [1] with compiling openvswitch kernel module during fullstack and ovsfw scenario jobs. This is now fixed by [2] so if You have a patch and those jobs are failing for You, please rebase it to have included this fix and it should works fine. [1] https://bugs.launchpad.net/neutron/+bug/1772689 [2] https://review.openstack.org/#/c/570085/ — Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Wed May 23 18:00:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 23 May 2018 18:00:26 +0000 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote: [...] > I read this the other way - the goal is to get all the forked code from > StarlingX into upstream repos. That seems backwards from how this should > have been done (i.e. 
upstream first), and I don't see how a project would > prioritize that over other work. [...] I have yet to see anyone suggest it should be prioritized over other work. I expect the extracted and proposed changes/specs corresponding to the divergence would be viewed on their own merits just like any other change and ignored, reviewed, rejected, et cetera as appropriate. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed May 23 18:07:14 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 23 May 2018 18:07:14 +0000 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: <20180523180714.tyhxspeyi2tmbdeu@yuggoth.org> On 2018-05-23 13:48:56 -0400 (-0400), Jay Pipes wrote: [...] > I believe you may be confusing packages (or package specs) with > distributions? > > Mirantis OpenStack was never hosted on an openstack > infrastructure. Fuel is, as are deb spec files and Puppet > manifests, etc. But the distribution of OpenStack is the > collection of all those specs/build files along with a default > configuration and things like project deltas exposed as patch > files. Same goes for RDO, Canonical OpenStack, etc. [...] The Debian OpenStack packaging effort, when we were hosting it (the maintainers eventually decided for the sake of consistency to move it back into Debian's collaborative hosting instead) were in fact done as forked copies of the Git repositories of official OpenStack deliverables. Patch series and Git forks can be converted back and forth, at some cost to developer efficiency, but ultimately are an implementation detail. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtroyer at gmail.com Wed May 23 18:07:45 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 23 May 2018 13:07:45 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy wrote: > It's also important to make the distinction between hosting something on openstack.org infrastructure and recognizing it in an official capacity. StarlingX is seeking both, but in my opinion the code hosting is not the problem here. StarlingX is an OpenStack Foundation Edge focus area project and is seeking to use the CI infrastructure. There may be a project or two contained within that may make sense as OpenStack projects in the not-called-big-tent-anymore sense but that is not on the table, there is a lot of work to digest before we could even consider that. Is that the official capacity you are talking about? dt -- Dean Troyer dtroyer at gmail.com From sshnaidm at redhat.com Wed May 23 18:20:17 2018 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Wed, 23 May 2018 21:20:17 +0300 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: > to reduce the impact of a change. 
From my original reply: > > > If there's a high maintenance cost, we haven't properly identified the > optimal way to separate functionality between tripleo/quickstart. > > IMHO this is a side effect of having a whole bunch of roles in a > single repo. oooq-extras has a mix of tripleo and non-tripleo related > content. The reproducer IMHO is related to provisioning and could fall > in the oooq repo and not oooq-extras. This is a structure problem > with quickstart. If it's not version specific, then don't put it in a > version specific repo. But that doesn't mean don't use version > specific repos at all. > > This is one of the reasons why we're opting not to use this pattern of > a bunch of roles in a single repo for tripleo itself[0][1][2]. We > learned with the puppet modules that carrying all this stuff in a > single repo has a huge maintenance cost and if you split them out you > can identify re-usability and establish proper patterns for moving > functionality into a shared place[3]. Yes there is a maintenance cost > of maintaining independent repos, but at the same time there's a > benefit of re-usability by other projects/groups when you expose > important pieces of functionality as a standalone. You can establish > clear ways to interact with each piece, test items, and release > independently. For example the ansible-role-container-registry is not > tripleo specific and anyone looking to manage a standalone docker > registry can use it & contribute. > > We have moved between having all roles in one repo and having a separate repo for each role a few times. Each case has its advantages and disadvantages. The last time, about a year ago I think, we moved to having the roles in 2 repos - quickstart and extras. So far IMHO it's the best approach. There will be a mechanism to install additional roles, like we have for tripleo-upgrade, ops-tools, etc. It may be a much broader topic to discuss, although I think having part of the roles branched and part not branched is much more of a headache. Tripleo-upgrade is a good example of it. > >> > So in 90% code we DO need to backport every change, take for example the > > latest patch to extras: https://review.openstack.org/#/c/570167/, it's > > fixing reproducer. If oooq-extra was branched, we would need to backport > > this fix to every and every branch. And the same for all other 90% of > code, > > which is complete nonsense. > > Just because not using "{% if release %}" construct - to block the whole > > work of CI team and make the CI code is absolutely unmaintainable? > > > > And you're saying what we currently have is maintainable? We keep > breaking ourselves, there's big gaps in coverage and it takes > time[4][5] to identify breakages. I don't consider that maintainable > because this is a recurring topic because we clearly haven't fixed it > with the current setup. It's time to re-evaluate what we have and see > if there's room for improvement. I know I wasn't proposing to branch > all the repositories, but it might make sense to figure out if there's > a way to reduce our recurring issues with stable branches or > independent modules for some of the functions in CI. > Considering this is how we broke Queens, I'm not sure I agree. > > First of all, I don't see any connection between maintenance and CI breakages; they are different topics. And yes, it IS maintainable CI that we have now, and I have something to compare it with. I remember the tripleo.sh based approach very well; also, you can see the almost-green dashboards lately, which proves my statement.
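(To make the construct being debated concrete, here is roughly the shape of a release conditional in a quickstart-style Jinja2 template. This is an illustrative sketch only, not the actual oooq-extras code; the variable and flag names are made up for the example:

    {# Branchless pattern: one template, guarded per release. #}
    {% if release in ['newton', 'ocata', 'pike'] %}
    openstack overcloud deploy --templates {{ deploy_args }}
    {% else %}
    {# Newer releases are assumed to take an extra flag here. #}
    openstack overcloud deploy --templates {{ deploy_args }} --config-download
    {% endif %}

In a branched repo, each stable branch would instead carry only the unconditional line relevant to its release, which is the trade-off this whole thread is arguing about.)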
CI is not ideal now, but it's definitely much better than 1-2 years ago. Of course we have breakages; the CI is actually a history of breakages and fixes, as with any other product. Wrt the queens issue, it took about a week to solve not because it was so hard, but because we were having some very difficult weeks trying to fix all the CentOS 7.5 issues, and the queens branch was second priority. And by the way, we fixed everything much faster than it was with CentOS 7.4. Taking the negative attitude that every CI breakage is proof of a wrong CI structure is not correct and doesn't help. If branching helped in this case, it would create much bigger problems in all other cases. Anyway, we saw that having branch jobs in OVB only didn't catch the queens issue (why - you know better), so we added multinode branch specific ones, which will catch such issues in the future. We hit the problem, solved it, set up preventive actions and are ready to catch it next time. This is a normal CI workflow and I don't see any problem with it. Having multinode branch jobs is actually pretty similar to "branching" repos, but without the maintenance nightmare. Thanks Thanks, > -Alex > > [0] http://git.openstack.org/cgit/openstack/ansible-role-container-registry/ > [1] http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/ > [2] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-keystone/ > [3] http://git.openstack.org/cgit/openstack/puppet-openstacklib/ > [4] https://review.openstack.org/#/c/565856/ > [5] https://review.openstack.org/#/c/569830 > > Thanks > > > > On Wed, May 23, 2018 at 7:04 PM, Alex Schultz wrote: > >> > >> On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman > >> wrote: > >> > Hi, Sergii > >> > > >> > thanks for the question. It's not first time that this topic is raised > >> > and from first view it could seem that branching would help to that sort > >> > of issues. > >> > > >> > Although it's not the case. Tripleo-quickstart(-extras) is part of CI > >> > code, as well as tripleo-ci repo which have never been branched. The reason > >> > for that is relative small impact on CI code from product branching. Think > >> > about backport almost *every* patch to oooq and extras to all supported > >> > branches, down to newton at least. This will be a really *huge* price and non > >> > reasonable work. Just think about active maintenance of 3-4 versions of > >> > CI code in each of 3 repositories. It will take all time of CI team with > >> > almost zero value of this work. > >> > > >> > >> So I'm not sure I completely agree with this assessment as there is a > >> price paid for every {%if release in [...]%} that we have to carry in > >> oooq{,-extras}. These go away if we branch because we don't have to > >> worry about breaking previous releases or current release (which may > >> or may not actually have CI results). > >> > >> > What regards patch you listed, we would have backport this change to > >> > *every* branch, and it wouldn't really help to avoid the issue. The source of > >> > problem is not branchless repo here. > >> > > >> > >> No we shouldn't be backporting every change. The logic in oooq-extras > >> should be version specific and if we're changing an interface in > >> tripleo in a breaking fashion we're doing it wrong in tripleo. If > >> we're backporting things to work around tripleo issues, we're doing it > >> wrong in quickstart.
> >> > >> > Regarding catching such issues and Bogdans point, that's right we > added > >> > a > >> > few jobs to catch such issues in the future and prevent breakages, > and a > >> > few > >> > running jobs is reasonable price to keep configuration working in all > >> > branches. Comparing to maintenance nightmare with branches of CI code, > >> > it's > >> > really a *zero* price. > >> > > >> > >> Nothing is free. If there's a high maintenance cost, we haven't > >> properly identified the optimal way to separate functionality between > >> tripleo/quickstart. I have repeatedly said that the provisioning > >> parts of quickstart should be separate because those aren't tied to a > >> tripleo version and this along with the scenario configs should be the > >> only unbranched repo we have. Any roles related to how to > >> configure/work with tripleo should be branched and tied to a stable > >> branch of tripleo. This would actually be beneficial for tripleo as > >> well because then we can see when we are introducing backwards > >> incompatible changes. > >> > >> Thanks, > >> -Alex > >> > >> > Thanks > >> > > >> > > >> > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk < > sgolovat at redhat.com> > >> > wrote: > >> >> > >> >> Hi, > >> >> > >> >> Looking at [1], I am thinking about the price we paid for not > >> >> branching tripleo-quickstart. Can we discuss the options to prevent > >> >> the issues such as [1]? Thank you in advance. > >> >> > >> >> [1] https://review.openstack.org/#/c/569830/4 > >> >> > >> >> -- > >> >> Best Regards, > >> >> Sergii Golovatiuk > >> >> > >> >> > >> >> ____________________________________________________________ > ______________ > >> >> OpenStack Development Mailing List (not for usage questions) > >> >> Unsubscribe: > >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > > >> > > >> > > >> > -- > >> > Best regards > >> > Sagi Shnaidman > >> > > >> > > >> > ____________________________________________________________ > ______________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: > >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > -- > > Best regards > > Sagi Shnaidman > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mriedemos at gmail.com Wed May 23 18:24:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 23 May 2018 11:24:05 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: <33ad7bf5-c05f-f9d6-223a-cf0bc76ebd6f@gmail.com> On 5/23/2018 11:00 AM, Jeremy Stanley wrote: > I have yet to see anyone suggest it should be prioritized over other > work. I expect the extracted and proposed changes/specs > corresponding to the divergence would be viewed on their own merits > just like any other change and ignored, reviewed, rejected, et > cetera as appropriate. Rather than literally making this a priority, I expect most of the feeling is that the politics and pressure of competition with a fork in another foundation are driving the defensiveness about feeling pressured to prioritize review on whatever specs/patches are proposed as a result of the code dump. -- Thanks, Matt From dtroyer at gmail.com Wed May 23 18:25:02 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 23 May 2018 13:25:02 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: On Wed, May 23, 2018 at 12:58 PM, Julia Kreger wrote: > There is definitely value to be gained for both projects in terms of a > different point of view that might not have been able to play out in Ironic is a bit different in this regard to the released code since there _is_ overlap with the STX Bare Metal service. There are also non-overlapping aspects to it. I would like to talk with you and the Ironic team at some point about scope and goals for the long term. > the public community, but since we're dealing with squashed commits of > changes, it is really hard for us to delineate history/origin of code > fragments, and without that it makes it near impossible for projects > to even help them reconcile their technical debt because of that and > the lacking context surrounding that. It would be so much more > friendly to the community if we had stacks of patch files that we > could work with git. Unfortunately it was a requirement to not release the history. There are some bits that we were not allowed to release (for legal reasons, not open core reasons) that are present in the history. And yes, it is in most cases unusable for anything more than browsing for things to pull upstream. What I did manage to get was permission to publish the individual commits on top of the upstream base that do not run afoul of the legal issues. Given that this is all against Pike and we need to propose to master first, they are not likely directly usable, but the information needed for the upstream work will be available. These have not been cleaned up yet but I plan to add them directly to the repos containing the squashes as they are done. > Can I add myself to the list of confused people wanting to understand > better? I can see and understand value, but context and understanding > as to why as I mentioned above is going to be the main limiter for > interaction. I have heard multiple reasons why this has been done; this is one area I am not going to go into detail about other than the stuff that has been cleared and released. Understanding (some) business decisions is not one of my strengths.
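(Once those individual commits are published, turning them into the kind of patch stack Julia asked for is mechanical. A rough sketch, with placeholder branch names since the real repo layout isn't public yet:

    # Assumed layout: 'upstream-pike' is the stock upstream base and
    # 'stx-pike' carries the published per-change commits on top of it.
    git format-patch upstream-pike..stx-pike -o stx-patches/
    # Each file in stx-patches/ is then reviewable and bisectable on
    # its own, and can be replayed elsewhere with: git am stx-patches/*.patch

)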
I will say that my opinion from working with WRS for a few months is that they truly do want to form a community around StarlingX and will be moving their ongoing Titanium development there. dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Wed May 23 18:37:32 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 23 May 2018 13:37:32 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <33ad7bf5-c05f-f9d6-223a-cf0bc76ebd6f@gmail.com> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> <33ad7bf5-c05f-f9d6-223a-cf0bc76ebd6f@gmail.com> Message-ID: On Wed, May 23, 2018 at 1:24 PM, Matt Riedemann wrote: > Rather than literally making this a priority, I expect most of the feeling > is that the politics and pressure of competition with a fork in another > foundation are driving the defensiveness about feeling pressured to > prioritize review on whatever specs/patches are proposed as a result of the > code dump. David Letterman used to say "This is not a competition, it is just an exhibition. No wagering!" for Stupid Pet Tricks. The feeling that this is a competition is one aspect that I want to help ease if I can. Once we have the list of individual upstream-desired changes we can talk about priorities (we do have a priority list internally) and desirability. The targeted use cases for StarlingX/Titanium have requirements that do not fit other use cases or may not be widely useful. We need to figure out how to handle those in the long term. dt -- Dean Troyer dtroyer at gmail.com From gergely.csatari at nokia.com Wed May 23 18:59:29 2018 From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest)) Date: Wed, 23 May 2018 18:59:29 +0000 Subject: [openstack-dev] [edge][glance]: Wiki of the possible architectures for image synchronisation Message-ID: Hi, Here I send the wiki page [1] where I summarize what I understood from the Forum session about image synchronisation in an edge environment [2], [3]. Please check and correct/comment. Thanks, Gerg0 [1]: https://wiki.openstack.org/wiki/Image_handling_in_edge_environment [2]: https://etherpad.openstack.org/p/yvr-edge-cloud-images [3]: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21768/image-handling-in-an-edge-cloud-infrastructure -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Wed May 23 19:03:11 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 23 May 2018 21:03:11 +0200 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> Message-ID: <1527102191.150587.1382480392.0A3BA28F@webmail.messagingengine.com> On Wed, May 23, 2018, at 8:07 PM, Dean Troyer wrote: > On Wed, May 23, 2018 at 11:49 AM, Colleen Murphy wrote: > > It's also important to make the distinction between hosting something on openstack.org infrastructure and recognizing it in an official capacity. StarlingX is seeking both, but in my opinion the code hosting is not the problem here. > > StarlingX is an OpenStack Foundation Edge focus area project and is > seeking to use the CI infrastructure. There may be a project or two > contained within that may make sense as OpenStack projects in the > not-called-big-tent-anymore sense but that is not on the table, there > is a lot of work to digest before we could even consider that.
Is > that the official capacity you are talking about? I was talking about it being recognized by the OpenStack Foundation as part of one of its strategic focus areas. I understand StarlingX isn't seeking official recognition within the OpenStack project under the TC's governance. Colleen From haleyb.dev at gmail.com Wed May 23 19:20:28 2018 From: haleyb.dev at gmail.com (Brian Haley) Date: Wed, 23 May 2018 15:20:28 -0400 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: On 05/23/2018 02:00 PM, Jeremy Stanley wrote: > On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote: > [...] >> I read this the other way - the goal is to get all the forked code from >> StarlingX into upstream repos. That seems backwards from how this should >> have been done (i.e. upstream first), and I don't see how a project would >> prioritize that over other work. > [...] > > I have yet to see anyone suggest it should be prioritized over other > work. I expect the extracted and proposed changes/specs > corresponding to the divergence would be viewed on their own merits > just like any other change and ignored, reviewed, rejected, et > cetera as appropriate. Even doing that is work - going through changes, finding nuggets, proposing new specs.... I don't think we can expect a project to even go there, it has to be driven by someone already involved in StarlingX, IMHO. -Brian From fungi at yuggoth.org Wed May 23 19:28:55 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 23 May 2018 19:28:55 +0000 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: <20180523192855.yy6dgkzwjgzsed3q@yuggoth.org> On 2018-05-23 15:20:28 -0400 (-0400), Brian Haley wrote: > On 05/23/2018 02:00 PM, Jeremy Stanley wrote: > > On 2018-05-22 17:41:18 -0400 (-0400), Brian Haley wrote: > > [...] > > > I read this the other way - the goal is to get all the forked code from > > > StarlingX into upstream repos. That seems backwards from how this should > > > have been done (i.e. upstream first), and I don't see how a project would > > > prioritize that over other work. > > [...] > > > > I have yet to see anyone suggest it should be prioritized over other > > work. I expect the extracted and proposed changes/specs > > corresponding to the divergence would be viewed on their own merits > > just like any other change and ignored, reviewed, rejected, et > > cetera as appropriate. > > Even doing that is work - going through changes, finding nuggets, > proposing new specs.... I don't think we can expect a project to > even go there, it has to be driven by someone already involved in > StarlingX, IMHO. I gather that's the proposal at hand. The StarlingX development team would do the work to write specs for these feature additions, propose them through the usual processes, then start extracting the relevant parts of their "technical debt" corresponding to any specs which get approved and propose patches to those services for review. If they don't, then I agree this will go nowhere. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtroyer at gmail.com Wed May 23 20:36:21 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 23 May 2018 15:36:21 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: On Wed, May 23, 2018 at 2:20 PM, Brian Haley wrote: > Even doing that is work - going through changes, finding nuggets, proposing > new specs.... I don't think we can expect a project to even go there, it has > to be driven by someone already involved in StarlingX, IMHO. In the beginning at least it will be. We have prioritized lists for where we want to start. Once I get the list and commits cleaned up everyone can look at them and weigh in on our starting point. dt -- Dean Troyer dtroyer at gmail.com From sgolovat at redhat.com Wed May 23 20:38:29 2018 From: sgolovat at redhat.com (Sergii Golovatiuk) Date: Wed, 23 May 2018 22:38:29 +0200 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: Hi, On Wed, May 23, 2018 at 8:20 PM, Sagi Shnaidman wrote: > >> >> to reduce the impact of a change. From my original reply: >> >> > If there's a high maintenance cost, we haven't properly identified the >> > optimal way to separate functionality between tripleo/quickstart. >> >> IMHO this is a side effect of having a whole bunch of roles in a >> single repo. oooq-extras has a mix of tripleo and non-tripleo related >> content. The reproducer IMHO is related to provisioning and could fall >> in the oooq repo and not oooq-extras. This is a structure problem >> with quickstart. If it's not version specific, then don't put it in a >> version specific repo. But that doesn't mean don't use version >> specific repos at all. >> >> This is one of the reasons why we're opting not to use this pattern of >> a bunch of roles in a single repo for tripleo itself[0][1][2]. We >> learned with the puppet modules that carrying all this stuff in a >> single repo has a huge maintenance cost and if you split them out you >> can identify re-usability and establish proper patterns for moving >> functionality into a shared place[3]. Yes there is a maintenance cost >> of maintaining independent repos, but at the same time there's a >> benefit of re-usability by other projects/groups when you expose >> important pieces of functionality as a standalone. You can establish >> clear ways to interact with each piece, test items, and release >> independently. For example the ansible-role-container-registry is not >> tripleo specific and anyone looking to manage a standalone docker >> registry can use it & contribute. >> > > We were moving between having all roles in one repo and having a separate > repo for each role a few times. Each case has it's advantages and > disadvantages. Last time we moved to have roles in 2 repos - quickstart and > extras, it was a year ago I think. So far IMHO it's the best approach. There > will be a mechanism to install additional roles, like we have for > tirpleo-upgrade, ops-tools, etc etc. But at the moment we don't have that mechanism so we should live somehow until it's implemented. > It may be a much broader topic to discuss, although I think having part of > roles branched and part of not branched is much more headache. > Tripleo-upgrade is a good example of it. 
> >> > So in 90% code we DO need to backport every change, take for example the >> > latest patch to extras: https://review.openstack.org/#/c/570167/, it's >> > fixing reproducer. If oooq-extra was branched, we would need to backport >> > this fix to every and every branch. And the same for all other 90% of >> > code, which is complete nonsense. >> > Just because not using "{% if release %}" construct - to block the whole >> > work of CI team and make the CI code is absolutely unmaintainable? >> > >> >> And you're saying what we currently have is maintainable? We keep >> breaking ourselves, there's big gaps in coverage and it takes >> time[4][5] to identify breakages. I don't consider that maintainable >> because this is a recurring topic because we clearly haven't fixed it >> with the current setup. It's time to re-evaluate what we have and see >> if there's room for improvement. I know I wasn't proposing to branch >> all the repositories, but it might make sense to figure out if there's >> a way to reduce our recurring issues with stable branches or >> independent modules for some of the functions in CI. >> Considering this is how we broke Queens, I'm not sure I agree. We broke Queens, Pike and Newton by merging [1] without testing against these releases. > First of all, I don't see any connection between maintenance and CI > breakages; they are different topics. And yes, it IS maintainable CI that we > have now, and I have something to compare it with. I remember the > tripleo.sh based approach very well; also, you can see the almost-green > dashboards lately, which proves my statement. CI is not ideal now, but it's > definitely much better than 1-2 years ago. > > Of course we have breakages; the CI is actually a history of breakages and > fixes, as with any other product. Wrt the queens issue, it took about a week to > solve not because it was so hard, but because we were having some very difficult > weeks trying to fix all the CentOS 7.5 issues, and the queens branch was second > priority. And by the way, we fixed everything much faster than it was with > CentOS 7.4. Taking the negative attitude that every CI breakage is proof of > a wrong CI structure is not correct and doesn't help. If branching helped in > this case, it would create much bigger problems in all other cases. I would like to set feelings aside and discuss the technical side of the 2 solutions, and their cost for every team and for the product in general, to find the solution that fits all. > Anyway, we saw that having branch jobs in OVB only didn't catch the queens issue > (why - you know better), so we added multinode branch specific ones, which > will catch such issues in the future. We hit the problem, solved it, set up > preventive actions and are ready to catch it next time. This is a normal CI > workflow and I don't see any problem with it. Having multinode branch jobs > is actually pretty similar to "branching" repos, but without the maintenance > nightmare. There are 2 approaches, IMHO. The first one is branchless, which has its own advantages and disadvantages. In the branchless approach, CI should test against all supported versions. So, CI for [1] was supposed to have been tested against all supported releases. The cost of this solution is a larger hardware fleet to test all supported releases. The second approach is branching. In the branching approach, CI is triggered when a particular patch is cherry-picked to a particular version branch.
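(As an illustration of how the branching approach gates job execution, and not actual tripleo CI configuration: in Zuul v3 style job definitions, a branch matcher is what keeps a cloned job from ever running against master. The job names below are made up for the sketch:

    # Hypothetical Zuul layout snippet: the queens clone of a job
    # only triggers for changes proposed to stable/queens.
    - job:
        name: tripleo-ci-centos-7-containers-multinode-queens
        parent: tripleo-ci-centos-7-containers-multinode
        branches: stable/queens

With a branchless repo there is no such matcher to lean on, so every release variant either runs against every change or has to be guarded inside the templates themselves.)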
The testing cost of this solution will be lower, as it requires less hardware, but it increases the cost of maintainability, since engineers have to do the backporting and review/voting. > > Thanks > >> Thanks, >> -Alex >> >> [0] >> http://git.openstack.org/cgit/openstack/ansible-role-container-registry/ >> [1] >> http://git.openstack.org/cgit/openstack/ansible-role-redhat-subscription/ >> [2] http://git.openstack.org/cgit/openstack/ansible-role-tripleo-keystone/ >> [3] http://git.openstack.org/cgit/openstack/puppet-openstacklib/ >> [4] https://review.openstack.org/#/c/565856/ >> [5] https://review.openstack.org/#/c/569830 >> >> > Thanks >> > >> > >> > On Wed, May 23, 2018 at 7:04 PM, Alex Schultz >> > wrote: >> >> >> >> On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman >> >> wrote: >> >> > Hi, Sergii >> >> > >> >> > thanks for the question. It's not first time that this topic is >> >> > raised and >> >> > from first view it could seem that branching would help to that sort >> >> > of issues. >> >> > >> >> > Although it's not the case. Tripleo-quickstart(-extras) is part of CI >> >> > code, as well as tripleo-ci repo which have never been branched. The reason >> >> > for that is relative small impact on CI code from product branching. >> >> > Think about backport almost *every* patch to oooq and extras to all supported >> >> > branches, down to newton at least. This will be a really *huge* price and non >> >> > reasonable work. Just think about active maintenance of 3-4 versions >> >> > of CI code in each of 3 repositories. It will take all time of CI team with >> >> > almost zero value of this work. >> >> > >> >> >> >> So I'm not sure I completely agree with this assessment as there is a >> >> price paid for every {%if release in [...]%} that we have to carry in >> >> oooq{,-extras}. These go away if we branch because we don't have to >> >> worry about breaking previous releases or current release (which may >> >> or may not actually have CI results). >> >> >> >> > What regards patch you listed, we would have backport this change to >> >> > *every* branch, and it wouldn't really help to avoid the issue. The source of >> >> > problem is not branchless repo here. >> >> > >> >> >> >> No we shouldn't be backporting every change. The logic in oooq-extras >> >> should be version specific and if we're changing an interface in >> >> tripleo in a breaking fashion we're doing it wrong in tripleo. If >> >> we're backporting things to work around tripleo issues, we're doing it >> >> wrong in quickstart. >> >> >> >> > Regarding catching such issues and Bogdans point, that's right we >> >> > added a >> >> > few jobs to catch such issues in the future and prevent breakages, >> >> > and a few >> >> > running jobs is reasonable price to keep configuration working in all >> >> > branches. Comparing to maintenance nightmare with branches of CI >> >> > code, it's >> >> > really a *zero* price. >> >> > >> >> >> >> Nothing is free. If there's a high maintenance cost, we haven't >> >> properly identified the optimal way to separate functionality between >> >> tripleo/quickstart. I have repeatedly said that the provisioning >> >> parts of quickstart should be separate because those aren't tied to a >> >> tripleo version and this along with the scenario configs should be the >> >> only unbranched repo we have.
Any roles related to how to >> >> configure/work with tripleo should be branched and tied to a stable >> >> branch of tripleo. This would actually be beneficial for tripleo as >> >> well because then we can see when we are introducing backwards >> >> incompatible changes. >> >> >> >> Thanks, >> >> -Alex >> >> >> >> > Thanks >> >> > >> >> > >> >> > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk >> >> > >> >> > wrote: >> >> >> >> >> >> Hi, >> >> >> >> >> >> Looking at [1], I am thinking about the price we paid for not >> >> >> branching tripleo-quickstart. Can we discuss the options to prevent >> >> >> the issues such as [1]? Thank you in advance. >> >> >> >> >> >> [1] https://review.openstack.org/#/c/569830/4 >> >> >> >> >> >> -- >> >> >> Best Regards, >> >> >> Sergii Golovatiuk >> >> >> >> >> >> >> >> >> >> >> >> __________________________________________________________________________ >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> Unsubscribe: >> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> > >> >> > >> >> > >> >> > -- >> >> > Best regards >> >> > Sagi Shnaidman >> >> > >> >> > >> >> > >> >> > __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: >> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > >> > -- >> > Best regards >> > Sagi Shnaidman >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best Regards, Sergii Golovatiuk From mikal at stillhq.com Wed May 23 20:53:18 2018 From: mikal at stillhq.com (Michael Still) Date: Thu, 24 May 2018 06:53:18 +1000 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: I think a good start would be a concrete list of the places you felt you needed to change upstream and the specific reasons for each that it wasn't done as part of the community. 
For example, I look at your nova fork and it has a "don't allow this call during an upgrade" decorator on many API calls. Why wasn't that done upstream? It doesn't seem overly controversial, so it would be useful to understand the reasoning for that change. To be blunt I had a quick scan of the Nova fork and I don't see much of interest there, but its hard to tell given how things are laid out now. Hence the request for a list. Michael On Thu, May 24, 2018 at 6:36 AM, Dean Troyer wrote: > On Wed, May 23, 2018 at 2:20 PM, Brian Haley wrote: > > Even doing that is work - going through changes, finding nuggets, > proposing > > new specs.... I don't think we can expect a project to even go there, it > has > > to be driven by someone already involved in StarlingX, IMHO. > > In the beginning at least it will be. We have prioritized lists for > where we want to start. Once I get the list and commits cleaned up > everyone can look at them and weigh in on our starting point. > > dt > > -- > > Dean Troyer > dtroyer at gmail.com > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Did this email leave you hoping to cause me pain? Good news! Sponsor me in city2surf 2018 and I promise to suffer greatly. http://www.madebymikal.com/city2surf-2018/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Wed May 23 22:09:40 2018 From: openstack at medberry.net (David Medberry) Date: Wed, 23 May 2018 15:09:40 -0700 Subject: [openstack-dev] Fwd: Follow Up: Private Enterprise Cloud Issues In-Reply-To: References: Message-ID: There was a great turnout at the Private Enterprise Cloud Issues session here in Vancouver. I'll propose a follow-on discussion for Denver PTG as well as trying to sift the data a bit and pre-populate. Look for that sifted data soon. For folks unable to participate locally, the etherpad is here: https://etherpad.openstack.org/p/YVR-private-enterprise-cloud-issues (and I've cached a copy offline in case it gets reset/etc.) -- -dave -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed May 23 22:52:10 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 23 May 2018 15:52:10 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> Message-ID: <467cb821-e9ea-0df2-ceb0-020ad0e7dee8@redhat.com> On 23/05/18 11:25, Dean Troyer wrote: > On Wed, May 23, 2018 at 12:58 PM, Julia Kreger > wrote: >> There is definitely value to be gained for both projects in terms of a >> different point of view that might not have been able to play out in > > Ironic is a bit different in this regard to the released code since > there _is_ overlap with the STX Bare Metal service. There is also > not-overlapping aspects to it. I would like to talk with you and the > Ironic team at some point about scope and goals for the long term. > >> the public community, but since we're dealing with squashed commits of >> changes, it is really hard for us to delineate history/origin of code >> fragments, and without that it makes it near impossible for projects >> to even help them reconcile their technical debt because of that and >> the lacking context surrounding that. 
It would be so much more >> friendly to the community if we had stacks of patch files that we >> could work with git. +1 > Unfortunately it was a requirement to not release the history. There > are some bits that we were not allowed to release (for legal reasons, > not open core reasons) that are present in the history. And yes it is > in most cases unusable to do anything more than browse for pulling > things upstream. 'git filter-branch' is your friend :) > What I did manage to get was permission to publish the individual > commits on top of the upstream base that do not run afoul of the legal > issues. Given that this is all against Pike and we need to propose to > master first, they are not likely directly usable but the information > needed for the upstream work will be available. These have not been > cleaned up yet but I plan to add them directly to the repos containing > the squashes as they are done. > >> Can I add myself to the list of confused people wanting to understand >> better? I can see and understand value, but context and understanding >> as to why as I mentioned above is going to be the main limiter for >> interaction. > > I have heard multiple reasons why this has been done, this is one area > I am not going to go into detail about other than the stuff that has > been cleared and released. Understanding (some) business decisions > are not one of my strengths. > > I will say that my opinion from working with WRS for a few months is > they do truly want to form a community around StarlingX and will be > moving their ongoing Titanium development there. > > dt > From shiina.hironori at jp.fujitsu.com Thu May 24 01:52:24 2018 From: shiina.hironori at jp.fujitsu.com (Shiina, Hironori) Date: Thu, 24 May 2018 01:52:24 +0000 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: +1 > -----Original Message----- > From: Julia Kreger [mailto:juliaashleykreger at gmail.com] > Sent: Sunday, May 20, 2018 11:46 PM > To: OpenStack Development Mailing List (not for usage questions) > Subject: [openstack-dev] Proposing Mark Goddard to ironic-core > > Greetings everyone! > > I would like to propose Mark Goddard to ironic-core. I am aware he recently joined kolla-core, but his contributions in ironic > have been insightful and valuable. The kind of value that comes from operative use. > > I also make this nomination knowing that our community landscape is changing and that we must not silo our team responsibilities > or ability to move things forward to small highly focused team. I trust Mark to use his judgement as he has time or need to > do so. He might not always have time, but I think at the end of the day, we’re all in that same boat. > > -Julia From stendulker at gmail.com Thu May 24 05:57:23 2018 From: stendulker at gmail.com (Shivanand Tendulker) Date: Thu, 24 May 2018 11:27:23 +0530 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: +1 from me. On Sun, May 20, 2018 at 8:15 PM, Julia Kreger wrote: > Greetings everyone! > > I would like to propose Mark Goddard to ironic-core. I am aware he > recently joined kolla-core, but his contributions in ironic have been > insightful and valuable. The kind of value that comes from operative use. > > I also make this nomination knowing that our community landscape is > changing and that we must not silo our team responsibilities or ability to > move things forward to small highly focused team. 
I trust Mark to use his > judgement as he has time or need to do so. He might not always have time, > but I think at the end of the day, we’re all in that same boat. > > -Julia > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yroblamo at redhat.com Thu May 24 06:33:35 2018 From: yroblamo at redhat.com (Yolanda Robla Mota) Date: Thu, 24 May 2018 08:33:35 +0200 Subject: [openstack-dev] Proposing Mark Goddard to ironic-core In-Reply-To: References: Message-ID: +1 .. his reviews have always been helpful to me On Thu, May 24, 2018 at 7:57 AM, Shivanand Tendulker wrote: > +1 from me. > > > > On Sun, May 20, 2018 at 8:15 PM, Julia Kreger > wrote: > >> Greetings everyone! >> >> I would like to propose Mark Goddard to ironic-core. I am aware he >> recently joined kolla-core, but his contributions in ironic have been >> insightful and valuable. The kind of value that comes from operative use. >> >> I also make this nomination knowing that our community landscape is >> changing and that we must not silo our team responsibilities or ability to >> move things forward to small highly focused team. I trust Mark to use his >> judgement as he has time or need to do so. He might not always have time, >> but I think at the end of the day, we’re all in that same boat. >> >> -Julia >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Yolanda Robla Mota Principal Software Engineer, RHCE Red Hat C/Avellana 213 Urb Portugal yroblamo at redhat.com M: +34605641639 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Helen.Walsh at dell.com Thu May 24 09:34:42 2018 From: Helen.Walsh at dell.com (Walsh, Helen) Date: Thu, 24 May 2018 09:34:42 +0000 Subject: [openstack-dev] FW: [cinder] In-Reply-To: References: Message-ID: <6031C821D2144A4CB722005A21B34BD53ACDCD3C@MX202CL02.corp.emc.com> Sending on Michael's behalf... From: McAleer, Michael Sent: Monday 21 May 2018 15:18 To: 'openstack-dev at lists.openstack.org' Subject: FW: [openstack-dev] [cinder] Hi Cinder Devs, I would like to ask a question concerning Cinder CLI commands in DevStack 13.0.0.0b2.dev167. I stacked a clean environment this morning to run through some sanity tests of new features, two of which are list manageable volumes and snapshots. When I attempt to run this command using Cinder CLI I am getting an invalid choice error in response: stack at openstack-dev:~/devstack$ cinder manageable-list openstack-dev at VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333 [usage output omitted] error: argument : invalid choice: u'manageable-list' The same behaviour can be seen for listing manageable-snapshots also, invalid choice error. 
I looked for a similar command using the OpenStack Volume CLI but there wasn't any similar commands found which would return a list of manageable volumes or snapshots. I didn't see any deprecation notices for the command, and the commands worked fine in earlier DevStack environments in this Rocky dev cycle, so just wondering what the status is of the commands and if this is possibly an oversight. Thanks! Michael Michael McAleer Software Engineer 1, Core Technologies Dell EMC | Enterprise Storage Division Phone: +353 21 428 1729 Michael.Mcaleer at Dell.com Ireland COE, Ovens, Co. Cork, Ireland -------------- next part -------------- An HTML attachment was scrubbed... URL: From skramaja at redhat.com Thu May 24 10:52:48 2018 From: skramaja at redhat.com (Saravanan KR) Date: Thu, 24 May 2018 16:22:48 +0530 Subject: [openstack-dev] [tripleo] Using derive parameters workflow for FixedIPs Message-ID: As discussed in the IRC over , here is the outline: * Derive parameters workflow could be used for deriving FixedIPs parameters also (started as part of the review https://review.openstack.org/#/c/569818/) * Above derivation should be done for all the deployments, so invoking of derive parameters should be brought out side the "-p" option check * But still invoking the NFV and HCI formulas should be based on the user option. Either add a condition by using the existing workflow_parameter of the feature [or] introduce a workflow_parameter to control the user preference * In the derive params workflow, we need to bring in the separation on whether, we need introspection data or not. Based on user preference and feature presence, add checks to see if introspection data is required. If we don't do this, then introspection will be become mandatory for all deployments. * Merging of parameter will be same as existing with preference to the user provided parameters Future Enhancement ---------------------------- * Instead of using plan-environment.yaml, write the derived parameters to a separate environment file, add add it to environments list of plan-environment.yaml to allow heat merging to work https://review.openstack.org/#/c/448209 Regards, Saravanan KR From bdobreli at redhat.com Thu May 24 14:15:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 24 May 2018 16:15:55 +0200 Subject: [openstack-dev] [tripleo][ci][infra] Quickstart Branching In-Reply-To: References: Message-ID: On 5/23/18 6:49 PM, Sagi Shnaidman wrote: > Alex, > > the problem is that you're working and focusing mostly on release > specific code like featuresets and some scripts. But > tripleo-quickstart(-extras) and tripleo-ci is much *much* more than set > of featuresets. Only 10% of the code may be related to releases and > branches, while other 90% is completely independent and not related to > releases. > > So in 90% code we DO need to backport every change, take for example the > latest patch to extras: https://review.openstack.org/#/c/570167/, it's > fixing reproducer. If oooq-extra was branched, we would need to backport > this fix to every and every branch. And the same for all other 90% of > code, which is complete nonsense. > Just because not using "{% if release %}" construct - to block the whole > work of CI team and make the CI code is absolutely unmaintainable? > > Some of release related templates we moved recently from tripleo-ci to > THT repo like scenarios, OC templates, etc. If we discover another > things in oooq that could be moved to branched THT I'd be only happy for > that. 
> > Sometimes it could be hard to maintain one file in extras templates with > different logic for releases, like we have in tempest configuration for > example. The solution is to create a few release-related templates and > use one that match the current branch. It doesn't affect 90% of code and > still "branch-like" approach. But I didn't see other scripts that are so > release dependent. If we'll have ones, we could do the same. For now I > see "{% if release %}" construct working very well. > > I didn't see still any advantage of branching CI code, except of a > little bit nicer jinja templates without "{% if release ", but amount of > disadvantages is so huge, that it'll literally block all current work in CI. [tl;dr] branching allows to not run cloned branched jobs against master patches. Or patches will wait longer in queues, and fail more often cuz of intermittent infra issues. See explanation and some calculations below. So my main concern against additional stable release cloned jobs executed for master branches is that there is an "infra failure fee", which is a failure unrelated to the patch under check or gate, like an intermittent connectivity/timeout inducted failure. This normally is followed by a 'recheck' comment posted by an engineer, and sometimes is noticed by the elastic recheck bot as well. Say, that sort of a failure has a probability of N. And the real "product failure", which is related to the subject patch and not infra, takes P. So chances to fail for a job is F = (1 - ((1 - N)*(1 - P)). Now that we have added a two more "branched clones" for RDO CI OVB jobs and a two more zuul jobs, we have this equation as F = (1 - ((1 - N)^4*(1 - P)). (I assumed the chances to face a product defect for the cloned branched jobs remain unchanged). This might bring significantly increased chances to fail (see some examples [0] for the N/P distribution cases). So folks will start posting 'recheck' comments now even more often, like x2 times more often. Which would make zuul and RDO CI queues larger, and patches sitting there longer - ending up with more time to wait for jobs to start its check/gate pipelines. That's what I call 'recheck storms'. And w/o branched quickstart/extras, we might have those storms amplified, tho that fully depends on real N/P distributions. [0] https://pastebin.com/ckG5G7NG > > Thanks > > > > On Wed, May 23, 2018 at 7:04 PM, Alex Schultz > wrote: > > On Wed, May 23, 2018 at 8:30 AM, Sagi Shnaidman > wrote: > > Hi, Sergii > > > > thanks for the question. It's not first time that this topic is raised and > > from first view it could seem that branching would help to that sort of > > issues. > > > > Although it's not the case. Tripleo-quickstart(-extras) is part of CI code, > > as well as tripleo-ci repo which have never been branched. The reason for > > that is relative small impact on CI code from product branching. Think about > > backport almost *every* patch to oooq and extras to all supported branches, > > down to newton at least. This will be a really *huge* price and non > > reasonable work. Just think about active maintenance of 3-4 versions of CI > > code in each of 3 repositories. It will take all time of CI team with almost > > zero value of this work. > > > > So I'm not sure I completely agree with this assessment as there is a > price paid for every {%if release in [...]%} that we have to carry in > oooq{,-extras}.  
These go away if we branch because we don't have to > worry about breaking previous releases or current release (which may > or may not actually have CI results). > > > What regards patch you listed, we would have backport this change to *every* > > branch, and it wouldn't really help to avoid the issue. The source of > > problem is not branchless repo here. > > > > No we shouldn't be backporting every change.  The logic in oooq-extras > should be version specific and if we're changing an interface in > tripleo in a breaking fashion we're doing it wrong in tripleo. If > we're backporting things to work around tripleo issues, we're doing it > wrong in quickstart. > > > Regarding catching such issues and Bogdans point, that's right we added a > > few jobs to catch such issues in the future and prevent breakages, and a few > > running jobs is reasonable price to keep configuration working in all > > branches. Comparing to maintenance nightmare with branches of CI code, it's > > really a *zero* price. > > > > Nothing is free. If there's a high maintenance cost, we haven't > properly identified the optimal way to separate functionality between > tripleo/quickstart.  I have repeatedly said that the provisioning > parts of quickstart should be separate because those aren't tied to a > tripleo version and this along with the scenario configs should be the > only unbranched repo we have. Any roles related to how to > configure/work with tripleo should be branched and tied to a stable > branch of tripleo. This would actually be beneficial for tripleo as > well because then we can see when we are introducing backwards > incompatible changes. > > Thanks, > -Alex > > > Thanks > > > > > > On Wed, May 23, 2018 at 3:43 PM, Sergii Golovatiuk > > > > wrote: > >> > >> Hi, > >> > >> Looking at [1], I am thinking about the price we paid for not > >> branching tripleo-quickstart. Can we discuss the options to prevent > >> the issues such as [1]? Thank you in advance. 
> >> [1] https://review.openstack.org/#/c/569830/4 > >> > >> -- > >> Best Regards, > >> Sergii Golovatiuk > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > > Best regards > > Sagi Shnaidman > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Best regards > Sagi Shnaidman > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From e0ne at e0ne.info Thu May 24 15:30:49 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 24 May 2018 08:30:49 -0700 Subject: [openstack-dev] FW: [cinder] In-Reply-To: <6031C821D2144A4CB722005A21B34BD53ACDCD3C@MX202CL02.corp.emc.com> References: <6031C821D2144A4CB722005A21B34BD53ACDCD3C@MX202CL02.corp.emc.com> Message-ID: Hello, Please, try `cinder --os-volume-api-version=3 manageable-list openstack-dev at VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333` or `OS_VOLUME_API_VERSION=3 cinder manageable-list openstack-dev at VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333`. Devstack used Cinder API v2 by default before https://review.openstack.org/#/c/566747/ was merged. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Thu, May 24, 2018 at 2:34 AM, Walsh, Helen wrote: > Sending on Michael's behalf… > > From: McAleer, Michael > Sent: Monday 21 May 2018 15:18 > To: 'openstack-dev at lists.openstack.org' > Subject: FW: [openstack-dev] [cinder] > > Hi Cinder Devs, > > I would like to ask a question concerning Cinder CLI commands in DevStack 13.0.0.0b2.dev167. > > I stacked a clean environment this morning to run through some sanity tests of new features, two of which are list manageable volumes > and snapshots. When I attempt to run this command using Cinder CLI I am getting an invalid choice error in response: > > stack at openstack-dev:~/devstack$ cinder manageable-list > openstack-dev at VMAX_ISCSI_DIAMOND#Diamond+DSS+SRP_1+000297000333 > [usage output omitted] > error: argument : invalid choice: u'manageable-list' > > The same behaviour can be seen for listing manageable-snapshots also, invalid choice error. I looked for a similar command using the OpenStack Volume CLI but there weren't any similar commands found which would return a list of manageable volumes or snapshots.
> > > I didn’t see any deprecation notices for the command, and the commands > worked fine in earlier DevStack environments in this Rocky dev cycle, so > just wondering what the status is of the commands and if this is possibly > an oversight. > > > > Thanks! > > Michael > > > > *Michael McAleer* > > Software Engineer 1, Core Technologies > > *Dell **EMC **| *Enterprise Storage Division > > Phone: +353 21 428 1729 > > Michael.Mcaleer at Dell.com > > Ireland COE, Ovens, Co. Cork, Ireland > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Thu May 24 18:40:14 2018 From: dms at danplanet.com (Dan Smith) Date: Thu, 24 May 2018 11:40:14 -0700 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: (Michael Still's message of "Thu, 24 May 2018 06:53:18 +1000") References: <197a5738-c714-a50a-2eb0-ed23e2fb9754@gmail.com> <20180523180026.mrtpaetvxw4rxdrj@yuggoth.org> Message-ID: > For example, I look at your nova fork and it has a "don't allow this > call during an upgrade" decorator on many API calls. Why wasn't that > done upstream? It doesn't seem overly controversial, so it would be > useful to understand the reasoning for that change. Interesting. We have internal accounting for service versions and can make a determination of if we're in an upgrade scenario (and do block operations until the upgrade is over). Unless this decorator you're looking at checks some non-upstream is-during-upgrade flag, this would be an easy thing to close the gap on. --Dan From mriedemos at gmail.com Thu May 24 22:19:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 24 May 2018 15:19:49 -0700 Subject: [openstack-dev] [nova] Need some feedback on the proposed heal_allocations CLI Message-ID: I've written a nova-manage placement heal_allocations CLI [1] which was a TODO from the PTG in Dublin as a step toward getting existing CachingScheduler users to roll off that (which is deprecated). During the CERN cells v1 upgrade talk it was pointed out that CERN was able to go from placement-per-cell to centralized placement in Ocata because the nova-computes in each cell would automatically recreate the allocations in Placement in a periodic task, but that code is gone once you're upgraded to Pike or later. In various other talks during the summit this week, we've talked about things during upgrades where, for instance, if placement is down for some reason during an upgrade, a user deletes an instance and the allocation doesn't get cleaned up from placement so it's going to continue counting against resource usage on that compute node even though the server instance in nova is gone. So this CLI could be expanded to help clean up situations like that, e.g. provide it a specific server ID and the CLI can figure out if it needs to clean things up in placement. So there are plenty of things we can build into this, but the patch is already quite large. I expect we'll also be backporting this to stable branches to help operators upgrade/fix allocation issues. It already has several things listed in a code comment inline about things to build into this later. 
My question is: is this good enough for a first iteration, or is there
something severely missing before we can merge this, like the automatic
marker tracking mentioned in the code (which will probably be a
non-trivial amount of code to add)? I could really use some operator
feedback on this: just take a look at what it is already capable of,
and if it's not going to be useful in this iteration, let me know
what's missing and I can add that to the patch.

[1] https://review.openstack.org/#/c/565886/

--

Thanks,

Matt

From zigo at debian.org Thu May 24 22:43:59 2018
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 25 May 2018 00:43:59 +0200
Subject: [openstack-dev] [neutron] Status of neutron-rpc-server
Message-ID:

Hi,

I'd like to know what's the status of neutron-rpc-server.

As I switched the Debian package from neutron-server to neutron-api
using uwsgi, I tried using it, and it seems to kind of work, if I apply
this patch: https://review.openstack.org/#/c/555608

Is there anything else that I should know?

Cheers,

Thomas Goirand (zigo)

From zhang.lei.fly at gmail.com Fri May 25 03:33:43 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Fri, 25 May 2018 11:33:43 +0800
Subject: [openstack-dev] [nova] nova aggregate and nova placement api
 aggregate
Message-ID:

Recently, I have been trying to implement a function that aggregates
nova hypervisors rather than nova-compute hosts. But it seems nova only
aggregates nova-compute hosts.

On the other hand, since Ocata, nova depends on the placement API,
which supports aggregating resource providers. But nova-scheduler
doesn't use this feature now.

So is there any better way to solve this issue? And is there any plan
to make nova legacy aggregates and placement API aggregates work
together?
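For illustration, this is roughly how the two kinds of aggregates are
driven today (a sketch, not authoritative: the placement commands
assume the osc-placement CLI plugin is installed, and the UUIDs are
placeholders):

    # Nova host aggregates group nova-compute services:
    $ openstack aggregate create my-agg
    $ openstack aggregate add host my-agg compute-1

    # Placement aggregates group arbitrary resource providers (e.g.
    # individual Ironic node RPs), not just compute services:
    $ openstack resource provider aggregate set \
          --aggregate $AGG_UUID $RP_UUID
    $ openstack resource provider aggregate list $RP_UUID

If I read the in-progress Rocky work correctly, there should also be a
nova-manage placement sync_aggregates command coming to mirror host
aggregates into placement, but I may be wrong on the details there.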
There may be a project or two contained within that may make sense as OpenStack projects in the not-called-big-tent-anymore sense but that is not on the table, there is a lot of work to digest before we could even consider that. Is that the official capacity you are talking about? dt -- Dean Troyer dtroyer at gmail.com __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jichenjc at cn.ibm.com Fri May 25 04:24:07 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 25 May 2018 12:24:07 +0800 Subject: [openstack-dev] [nova] nova aggregate and nova placement apiaggregate In-Reply-To: References: Message-ID: not sure whether it will helpful , FYI https://developer.openstack.org/api-ref/placement/ The primary differences between Nova’s host aggregates and placement aggregates are the following: In Nova, a host aggregate associates a nova-compute service with other nova-compute services. Placement aggregates are not specific to a nova-compute service and are, in fact, not compute-specific at all. A resource provider in the Placement API is generic, and placement aggregates are simply groups of generic resource providers. This is an important difference especially for Ironic, which when used with Nova, has many Ironic baremetal nodes attached to a single nova-compute service. In the Placement API, each Ironic baremetal node is its own resource provider and can therefore be associated to other Ironic baremetal nodes via a placement aggregate association. In Nova, a host aggregate may have metadata key/value pairs attached to it. All nova-compute services associated with a Nova host aggregate share the same metadata. Placement aggregates have no such metadata because placement aggregates only represent the grouping of resource providers. In the Placement API, resource providers are individually decorated with traits that provide qualitative information about the resource provider. In Nova, a host aggregate dictates the availability zone within which one or more nova-compute services reside. While placement aggregates may be used to model availability zones, they have no inherent concept thereof. Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Jeffrey Zhang To: OpenStack Development Mailing List Date: 05/25/2018 11:34 AM Subject: [openstack-dev] [nova] nova aggregate and nova placement api aggregate Recently, i am trying to implement a function which aggregate nova hypervisors rather than nova compute host. But seems nova only aggregate nova-compute host. On the other hand, since Ocata, nova depends on placement api which supports aggregating resource providers. But nova-scheduler doesn't use this feature now. So  is there any better way to solve such issue? and is there any plan which make nova legacy aggregate and placement api aggregate cloud work together? 
-- Regards, Jeffrey Zhang Blog: http://xcodest.me __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From mriedemos at gmail.com Fri May 25 05:56:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 24 May 2018 22:56:33 -0700 Subject: [openstack-dev] [nova] nova aggregate and nova placement api aggregate In-Reply-To: References: Message-ID: On 5/24/2018 8:33 PM, Jeffrey Zhang wrote: > Recently, i am trying to implement a function which aggregate nova > hypervisors > rather than nova compute host. But seems nova only aggregate > nova-compute host. > > On the other hand, since Ocata, nova depends on placement api which supports > aggregating resource providers. But nova-scheduler doesn't use this feature > now. > > So  is there any better way to solve such issue? and is there any plan which > make nova legacy aggregate and placement api aggregate cloud work together? There are some new features in Rocky [1] that involve resource provider aggregates for compute nodes which can be used for scheduling and will actually allow you to remove some older filters (AggregateMultiTenancyIsolation and AvailabilityZoneFilter). CERN is using these to improve performance with their cells v2 deployment. -- Thanks, Matt From jichenjc at cn.ibm.com Fri May 25 09:18:11 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 25 May 2018 17:18:11 +0800 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> Message-ID: we are continue to evaluating the ways to remove the restrictions in the future, one question on following comments: >>>Why don't you support the metadata service? That's a pretty fundamental mechanism for nova and openstack. It's the only way you can get a live copy of metadata, and it's the only way you can get access to device tags when you hot-attach something. Personally, I think that it's something that needs to work. As Matt mentioned in https://review.openstack.org/#/c/562154/ PS#4 As far as I know the metadata service is not a basic feature, it's optional and some deployments don't run it because of possible security concerns. so seems it's different suggestion,... and for the following suggestion It's the only way you can get a live copy of metadata, and it's the only way you can get access to device tags when you hot-attach something can I know a use case for this 'live copy metadata or ' the 'only way to access device tags when hot-attach? my thought is this is one time thing in cloud-init side either through metatdata service or config drive and won't be used later? then why I need a live copy? and because nova do the hot attach why it's the only way to access the tags? what exec in the deployed VM will access the device? cloud-init or something else? Thanks a lot for your help Best Regards! 
Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Dan Smith To: "Chen CH Ji" Cc: "OpenStack Development Mailing List \(not for usage questions \)" Date: 04/13/2018 09:46 PM Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat > for the run_validation=False issue, you are right, because z/VM driver > only support config drive and don't support metadata service ,we made > bad assumption and took wrong action to disabled the whole ssh check, > actually according to [1] , we should only disable > CONF.compute_feature_enabled.metadata_service but keep both > self.run_ssh and CONF.compute_feature_enabled.config_drive as True in > order to make config drive test validation take effect, our CI will > handle that Why don't you support the metadata service? That's a pretty fundamental mechanism for nova and openstack. It's the only way you can get a live copy of metadata, and it's the only way you can get access to device tags when you hot-attach something. Personally, I think that it's something that needs to work. > For the tgz/iso9660 question below, this is because we got wrong info > from low layer component folks back to 2012 and after discuss with > some experts again, actually we can create iso9660 in the driver layer > and pass down to the spawned virtual machine and during startup > process, the VM itself will mount the iso file and consume it, because > from linux perspective, either tgz or iso9660 doesn't matter , only > need some files in order to transfer the information from openstack > compute node to the spawned VM. so our action is to change the format > from tgz to iso9660 and keep consistent to other drivers. The "iso file" will not be inside the guest, but rather passed to the guest as a block device, right? > For the config drive working mechanism question, according to [2] z/VM > is Type 1 hypervisor while Qemu/KVM are mostly likely to be Type 2 > hypervisor, there is no file system in z/VM hypervisor (I omit too > much detail here) , so we can't do something like linux operation > system to keep a file as qcow2 image in the host operating system, I'm not sure what the type-1-ness has to do with this. The hypervisor doesn't need to support any specific filesystem for this to work. Many drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm) and you can argue that KVM is type-1-ish. They support configdrive. > what we do is use a special file pool to store the config drive and > during VM init process, we read that file from special device and > attach to VM as iso9660 format then cloud-init will handle the follow > up, the cloud-init handle process is identical to other platform This and the previous mention of this sort of behavior has me concerned. Are you describing some sort of process that runs when the instance is starting to initialize its environment, or something that runs *inside* the instance and thus functionality that has to exist in the *image* to work? --Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From mihaela.balas at orange.com Fri May 25 10:18:46 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Fri, 25 May 2018 10:18:46 +0000 Subject: [openstack-dev] [octavia] Multiple availability zone and network region support Message-ID: <3625_1527243526_5B07E306_3625_457_1_e75e322181014ff8ae04c921e95d71ed@orange.com> Hello, Is there any way to set up Octavia so that we are able to launch amphora in different AZs and connected to different network per each AZ? Than you, Mihaela Balas _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Fri May 25 12:45:01 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Fri, 25 May 2018 14:45:01 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> Message-ID: <1a143f50-e6d9-b162-3284-35978bd1d084@redhat.com> Job dependencies seem ignored by zuul, see jobs [0],[1],[2] started simultaneously. While I expected them run one by one. According to the patch 568536 [3], [1] is a dependency for [2] and [3]. The same can be observed for the remaining patches in the topic [4]. Is that a bug or I misunderstood what zuul job dependencies actually do? [0] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ [1] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ [2] http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ [3] https://review.openstack.org/#/c/568536/ [4] https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) On 5/15/18 11:39 AM, Bogdan Dobrelya wrote: > Added a few more patches [0], [1] by the discussion results. PTAL folks. > Wrt remaining in the topic, I'd propose to give it a try and revert it, > if it proved to be worse than better. > Thank you for feedback! > > The next step could be reusing artifacts, like DLRN repos and containers > built for patches and hosted undercloud, in the consequent pipelined > jobs. But I'm not sure how to even approach that. 
> > [0] https://review.openstack.org/#/c/568536/ > [1] https://review.openstack.org/#/c/568543/ > > On 5/15/18 10:54 AM, Bogdan Dobrelya wrote: >> On 5/14/18 10:06 PM, Alex Schultz wrote: >>> On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya >>> wrote: >>>> An update for your review please folks >>>> >>>>> Bogdan Dobrelya writes: >>>>> >>>>>> Hello. >>>>>> As Zuul documentation [0] explains, the names "check", "gate", and >>>>>> "post"  may be altered for more advanced pipelines. Is it doable to >>>>>> introduce, for particular openstack projects, multiple check >>>>>> stages/steps as check-1, check-2 and so on? And is it possible to >>>>>> make >>>>>> the consequent steps reusing environments from the previous steps >>>>>> finished with? >>>>>> >>>>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>>>>> with this "virtual RFE", and using such multi-staged check pipelines, >>>>>> is reducing (ideally, de-duplicating) some of the common steps for >>>>>> existing CI jobs. >>>>> >>>>> >>>>> What you're describing sounds more like a job graph within a pipeline. >>>>> See: >>>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >>>>> >>>>> for how to configure a job to run only after another job has >>>>> completed. >>>>> There is also a facility to pass data between such jobs. >>>>> >>>>> ... (skipped) ... >>>>> >>>>> Creating a job graph to have one job use the results of the >>>>> previous job >>>>> can make sense in a lot of cases.  It doesn't always save *time* >>>>> however. >>>>> >>>>> It's worth noting that in OpenStack's Zuul, we have made an explicit >>>>> choice not to have long-running integration jobs depend on shorter >>>>> pep8 >>>>> or tox jobs, and that's because we value developer time more than CPU >>>>> time.  We would rather run all of the tests and return all of the >>>>> results so a developer can fix all of the errors as quickly as >>>>> possible, >>>>> rather than forcing an iterative workflow where they have to fix >>>>> all the >>>>> whitespace issues before the CI system will tell them which actual >>>>> tests >>>>> broke. >>>>> >>>>> -Jim >>>> >>>> >>>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for >>>> undercloud deployments vs upgrades testing (and some more). Given >>>> that those >>>> undercloud jobs have not so high fail rates though, I think Emilien >>>> is right >>>> in his comments and those would buy us nothing. >>>> >>>>  From the other side, what do you think folks of making the >>>> tripleo-ci-centos-7-3nodes-multinode depend on >>>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite >>>> faily >>>> and long running, and is non-voting. It deploys (see featuresets >>>> configs >>>> [3]*) a 3 nodes in HA fashion. And it seems almost never passing, >>>> when the >>>> containers-multinode fails - see the CI stats page [4]. I've found >>>> only a 2 >>>> cases there for the otherwise situation, when containers-multinode >>>> fails, >>>> but 3nodes-multinode passes. So cutting off those future failures >>>> via the >>>> dependency added, *would* buy us something and allow other jobs to >>>> wait less >>>> to commence, by a reasonable price of somewhat extended time of the >>>> main >>>> zuul pipeline. I think it makes sense and that extended CI time will >>>> not >>>> overhead the RDO CI execution times so much to become a problem. WDYT? >>>> >>> >>> I'm not sure it makes sense to add a dependency on other deployment >>> tests. 
It's going to add additional time to the CI run because the >>> upgrade won't start until well over an hour after the rest of the >> >> The things are not so simple. There is also a significant >> time-to-wait-in-queue jobs start delay. And it takes probably even >> longer than the time to execute jobs. And that delay is a function of >> available HW resources and zuul queue length. And the proposed change >> affects those parameters as well, assuming jobs with failed >> dependencies won't run at all. So we could expect longer execution >> times compensated with shorter wait times! I'm not sure how to >> estimate that tho. You folks have all numbers and knowledge, let's use >> that please. >> >>> jobs.  The only thing I could think of where this makes more sense is >>> to delay the deployment tests until the pep8/unit tests pass.  e.g. >>> let's not burn resources when the code is bad. There might be >>> arguments about lack of information from a deployment when developing >>> things but I would argue that the patch should be vetted properly >>> first in a local environment before taking CI resources. >> >> I support this idea as well, though I'm sceptical about having that >> blessed in the end :) I'll add a patch though. >> >>> >>> Thanks, >>> -Alex >>> >>>> [0] https://review.openstack.org/#/c/568275/ >>>> [1] https://review.openstack.org/#/c/568278/ >>>> [2] https://review.openstack.org/#/c/568326/ >>>> [3] >>>> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html >>>> >>>> [4] http://tripleo.org/cistatus.html >>>> >>>> * ignore the column 1, it's obsolete, all CI jobs now using configs >>>> download >>>> AFAICT... >>>> >>>> -- >>>> Best regards, >>>> Bogdan Dobrelya, >>>> Irc #bogdando >>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From tdecacqu at redhat.com Fri May 25 16:40:50 2018 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Fri, 25 May 2018 16:40:50 +0000 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <1a143f50-e6d9-b162-3284-35978bd1d084@redhat.com> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <1a143f50-e6d9-b162-3284-35978bd1d084@redhat.com> Message-ID: <1527266023.toq0ou1iml.tristanC@fedora> Hello Bogdan, Perhaps this has something to do with jobs evaluation order, it may be worth trying to add the dependencies list in the project-templates, like it is done here for example: http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 It also easier to read dependencies from pipelines definition imo. -Tristan On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote: > Job dependencies seem ignored by zuul, see jobs [0],[1],[2] started > simultaneously. While I expected them run one by one. According to the > patch 568536 [3], [1] is a dependency for [2] and [3]. 
> > The same can be observed for the remaining patches in the topic [4]. > Is that a bug or I misunderstood what zuul job dependencies actually do? > > [0] > http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ > [1] > http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ > [2] > http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ > [3] https://review.openstack.org/#/c/568536/ > [4] > https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) > > On 5/15/18 11:39 AM, Bogdan Dobrelya wrote: >> Added a few more patches [0], [1] by the discussion results. PTAL folks. >> Wrt remaining in the topic, I'd propose to give it a try and revert it, >> if it proved to be worse than better. >> Thank you for feedback! >> >> The next step could be reusing artifacts, like DLRN repos and containers >> built for patches and hosted undercloud, in the consequent pipelined >> jobs. But I'm not sure how to even approach that. >> >> [0] https://review.openstack.org/#/c/568536/ >> [1] https://review.openstack.org/#/c/568543/ >> >> On 5/15/18 10:54 AM, Bogdan Dobrelya wrote: >>> On 5/14/18 10:06 PM, Alex Schultz wrote: >>>> On Mon, May 14, 2018 at 10:15 AM, Bogdan Dobrelya >>>> wrote: >>>>> An update for your review please folks >>>>> >>>>>> Bogdan Dobrelya writes: >>>>>> >>>>>>> Hello. >>>>>>> As Zuul documentation [0] explains, the names "check", "gate", and >>>>>>> "post"  may be altered for more advanced pipelines. Is it doable to >>>>>>> introduce, for particular openstack projects, multiple check >>>>>>> stages/steps as check-1, check-2 and so on? And is it possible to >>>>>>> make >>>>>>> the consequent steps reusing environments from the previous steps >>>>>>> finished with? >>>>>>> >>>>>>> Narrowing down to tripleo CI scope, the problem I'd want we to solve >>>>>>> with this "virtual RFE", and using such multi-staged check pipelines, >>>>>>> is reducing (ideally, de-duplicating) some of the common steps for >>>>>>> existing CI jobs. >>>>>> >>>>>> >>>>>> What you're describing sounds more like a job graph within a pipeline. >>>>>> See: >>>>>> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.dependencies >>>>>> >>>>>> for how to configure a job to run only after another job has >>>>>> completed. >>>>>> There is also a facility to pass data between such jobs. >>>>>> >>>>>> ... (skipped) ... >>>>>> >>>>>> Creating a job graph to have one job use the results of the >>>>>> previous job >>>>>> can make sense in a lot of cases.  It doesn't always save *time* >>>>>> however. >>>>>> >>>>>> It's worth noting that in OpenStack's Zuul, we have made an explicit >>>>>> choice not to have long-running integration jobs depend on shorter >>>>>> pep8 >>>>>> or tox jobs, and that's because we value developer time more than CPU >>>>>> time.  We would rather run all of the tests and return all of the >>>>>> results so a developer can fix all of the errors as quickly as >>>>>> possible, >>>>>> rather than forcing an iterative workflow where they have to fix >>>>>> all the >>>>>> whitespace issues before the CI system will tell them which actual >>>>>> tests >>>>>> broke. >>>>>> >>>>>> -Jim >>>>> >>>>> >>>>> I proposed a few zuul dependencies [0], [1] to tripleo CI pipelines for >>>>> undercloud deployments vs upgrades testing (and some more). 
Given >>>>> that those >>>>> undercloud jobs have not so high fail rates though, I think Emilien >>>>> is right >>>>> in his comments and those would buy us nothing. >>>>> >>>>>  From the other side, what do you think folks of making the >>>>> tripleo-ci-centos-7-3nodes-multinode depend on >>>>> tripleo-ci-centos-7-containers-multinode [2]? The former seems quite >>>>> faily >>>>> and long running, and is non-voting. It deploys (see featuresets >>>>> configs >>>>> [3]*) a 3 nodes in HA fashion. And it seems almost never passing, >>>>> when the >>>>> containers-multinode fails - see the CI stats page [4]. I've found >>>>> only a 2 >>>>> cases there for the otherwise situation, when containers-multinode >>>>> fails, >>>>> but 3nodes-multinode passes. So cutting off those future failures >>>>> via the >>>>> dependency added, *would* buy us something and allow other jobs to >>>>> wait less >>>>> to commence, by a reasonable price of somewhat extended time of the >>>>> main >>>>> zuul pipeline. I think it makes sense and that extended CI time will >>>>> not >>>>> overhead the RDO CI execution times so much to become a problem. WDYT? >>>>> >>>> >>>> I'm not sure it makes sense to add a dependency on other deployment >>>> tests. It's going to add additional time to the CI run because the >>>> upgrade won't start until well over an hour after the rest of the >>> >>> The things are not so simple. There is also a significant >>> time-to-wait-in-queue jobs start delay. And it takes probably even >>> longer than the time to execute jobs. And that delay is a function of >>> available HW resources and zuul queue length. And the proposed change >>> affects those parameters as well, assuming jobs with failed >>> dependencies won't run at all. So we could expect longer execution >>> times compensated with shorter wait times! I'm not sure how to >>> estimate that tho. You folks have all numbers and knowledge, let's use >>> that please. >>> >>>> jobs.  The only thing I could think of where this makes more sense is >>>> to delay the deployment tests until the pep8/unit tests pass.  e.g. >>>> let's not burn resources when the code is bad. There might be >>>> arguments about lack of information from a deployment when developing >>>> things but I would argue that the patch should be vetted properly >>>> first in a local environment before taking CI resources. >>> >>> I support this idea as well, though I'm sceptical about having that >>> blessed in the end :) I'll add a patch though. >>> >>>> >>>> Thanks, >>>> -Alex >>>> >>>>> [0] https://review.openstack.org/#/c/568275/ >>>>> [1] https://review.openstack.org/#/c/568278/ >>>>> [2] https://review.openstack.org/#/c/568326/ >>>>> [3] >>>>> https://docs.openstack.org/tripleo-quickstart/latest/feature-configuration.html >>>>> >>>>> [4] http://tripleo.org/cistatus.html >>>>> >>>>> * ignore the column 1, it's obsolete, all CI jobs now using configs >>>>> download >>>>> AFAICT... 
>>>>> >>>>> -- >>>>> Best regards, >>>>> Bogdan Dobrelya, >>>>> Irc #bogdando >>>>> >>>>> __________________________________________________________________________ >>>>> >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> __________________________________________________________________________ >>>> >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> >> >> > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From dtroyer at gmail.com Fri May 25 18:01:35 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 25 May 2018 13:01:35 -0500 Subject: [openstack-dev] [StarlingX] StarlingX code followup discussions In-Reply-To: <54557B59-BEAF-4094-B5DB-ADAD802693A2@cern.ch> References: <1527094156.859854.1382307608.21C28EB4@webmail.messagingengine.com> <54557B59-BEAF-4094-B5DB-ADAD802693A2@cern.ch> Message-ID: On Thu, May 24, 2018 at 11:23 PM, Tim Bell wrote: > I'd like to understand the phrase "StarlingX is an OpenStack Foundation Edge focus area project". > > My understanding of the current situation is that "StarlingX would like to be OpenStack Foundation Edge focus area project". > > I have not been able to keep up with all of the discussions so I'd be happy for further URLs to help me understand the current situation and the processes (formal/informal) to arrive at this conclusion. Agreed Tim, my apologies for being quick on the conclusions there. Even after some discussions yesterday it is not clear to me exactly the right phrasing. I understand that the intention is to become an incubated edge project, I do not know at what point StarlingX nor Airship exactly are at today. dt -- Dean Troyer dtroyer at gmail.com From erkam.murat.bozkurt at gmail.com Sat May 26 01:49:26 2018 From: erkam.murat.bozkurt at gmail.com (Erkam Murat Bozkurt) Date: Sat, 26 May 2018 04:49:26 +0300 Subject: [openstack-dev] ThreadStack Project: an innovative open-source software for multi-thread computing Message-ID: I have developed a new open source software as a result of a scientific research and I want to share my study with scientists and/or software developers. ThreadStack is an innovative software which produces a class library for C++ multi-thread programming and the outcome of the ThreadStack acts as an autonomous management system for the thread synchronization tasks. ThreadStack has a nice and useful graphical user interface and includes a short tutorial and code examples. ThreadStack offers a new way for multi-thread computing and it uses a meta program in order to produce an application specific thread synchronization library. Therefore, the programmer must read the tutorial to be able to use the software. The tutorial includes the main designs of the program. 
An academic journal submission has been performed for the study and the scientific introduction of the project will be readable from an academic journal as soon as possible. ThreadStack can be downloaded from sourcefource and the link is given in below. https://sourceforge.net/projects/threadstack/ threadstack.help at gmail.com I am waiting your valuable comments. Thanks and best regards. Erkam Murat Bozkurt, M. Sc Control Systems Engineering. Istanbul / Turkey -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Sat May 26 17:37:11 2018 From: zigo at debian.org (Thomas Goirand) Date: Sat, 26 May 2018 19:37:11 +0200 Subject: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon Message-ID: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> Hi Horizon team! I'm not sure if you're aware of that, but the upstream authors of fontawesome decided it was a good idea to break everyone. See this Debian bug entry: https://bugs.debian.org/899124 So, what happened is that fontawesome-webfont has been split into 3 sets of fonts: solid, regular and brands fonts. Thus there is no drop in replacement for the old fontawesome-webfont.xxx. As my python3-xstatic-font-awesome removes the embedded fonts, and just points to /usr/share/fonts-font-awesome, Horizon is broken and cannot even be installed currently in Debian Sid. Of course, I'm considering reverting the removal of the data folder from the xstatic package, but it then defeats the purpose of Xstatic, which is avoiding duplication of static files already packaged in the distribution. So, ideally, I would like to know first if I can use the fa-solid-900 for Horizon, or if other glyphs are in use (it's very much possible that only fa-solid-900 stuff are in use, but I really don't know how to check for that the correct way). If that's the case, then I can workaround the issue (at least temporarily), and synlink stuff in the data folder to the new fa-solid-900 files. Second, it'd be nice if Horizon could adapt and use the new v5 font-awesome, so that the problem is completely solved. Cheers, Thomas Goirand (zigo) From mnaser at vexxhost.com Sat May 26 21:46:06 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 26 May 2018 14:46:06 -0700 Subject: [openstack-dev] [tc] Organizational diversity tag Message-ID: Hi everyone! During the TC retrospective at the OpenStack summit last week, the topic of the organizational diversity tag is becoming irrelevant was brought up by Thierry (ttx)[1]. It seems that for projects that are not very active, they can easily lose this tag with a few changes by perhaps the infrastructure team for CI related fixes. As an action item, Thierry and I have paired up in order to look into a way to resolve this issue. There have been ideas to switch this to a report that is published at the end of the cycle rather than continuously. Julia (TheJulia) suggested that we change or track different types of diversity. Before we start diving into solutions, I wanted to bring this topic up to the mailing list and ask for any suggestions. In digging the codebase behind this[2], I've found that there are some knobs that we can also tweak if need-be, or perhaps we can adjust those numbers depending on the number of commits. All feedback welcome. 
:) Regards, Mohammed [1]: https://etherpad.openstack.org/p/YVR-tc-retrospective [2]: https://github.com/openstack/governance/blob/master/tools/teamstats.py From flux.adam at gmail.com Sun May 27 03:54:53 2018 From: flux.adam at gmail.com (Adam Harwell) Date: Sat, 26 May 2018 21:54:53 -0600 Subject: [openstack-dev] [octavia] Multiple availability zone and network region support In-Reply-To: <3625_1527243526_5B07E306_3625_457_1_e75e322181014ff8ae04c921e95d71ed@orange.com> References: <3625_1527243526_5B07E306_3625_457_1_e75e322181014ff8ae04c921e95d71ed@orange.com> Message-ID: I have a patch up here for multi AZ: https://review.openstack.org/558962 But, it doesn't really handle other networks... It works for me because I use my own L3 network driver: https://review.openstack.org/435612 It might be possible to use it with an AZ aware networking driver as well? If you wanted to contribute something like that, please do! It's probably not going to be possible with anything L2 though... Good luck, --Adam (rm_work) On Fri, May 25, 2018, 04:19 wrote: > Hello, > > > > Is there any way to set up Octavia so that we are able to launch amphora > in different AZs and connected to different network per each AZ? > > > > Than you, > > Mihaela Balas > > _________________________________________________________________________________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, > Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. > > This message and its attachments may contain confidential or privileged information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. > Thank you. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s at cassiba.com Sun May 27 16:31:43 2018 From: s at cassiba.com (Samuel Cassiba) Date: Sun, 27 May 2018 09:31:43 -0700 Subject: [openstack-dev] [chef] State of the Kitchen: 4th Edition Message-ID: HTML: https://s.cassiba.com/openstack/state-of-the-kitchen-4th-edition/ This is the fourth installment of what is going on with Chef OpenStack. The aim is to give a quick overview to see our progress and what is on the menu. Feedback is always welcome on the efficacy of the content. This edition will take a slightly different direction, as I am now cross-posting this to my blog to increase exposure and to get the content showing up on OpenStack [Planet](http://planet.openstack.org). Going forward, this will be formatted as Markdown. ### Announcements * Queens release is nearing. Summit week slowed things down a little, but we're looking to be in good shape. * Kitchen scenarios are now pinned to Chef 14. 
While Chef 13 is supported until Chef 15 release (April 2019 timeframe), master is not currently developing against it. All changes are currently still gated against Chef 13, so we have test coverage of both supported Chef major releases. * ChefDK 3 has been released. Testing has not commenced with it, but patches are always welcome if you're impatient. ### Documentation * [Contributor and install guides](https://review.openstack.org/569571) have been written to replace the ever-aging documentation in openstack-chef-repo. * A more comprehensive deploy guide is beginning to take shape. ### Integration * The mass deprecation of Rakefiles is still looking to be possible. The functionality from openstack-chef-repo/Rakefile will have to be retrofitted into Zuul jobs to get gating jobs for the supported platforms. * Chef Delivery support has made it to the cookbooks. It is currently used in local testing, but will be making it to the gate soon. ### Containers * Dokken works-ish. Yes, ish. Though, not for lack of trying. RDO has issues in networking due to iptables. * All-in-one is the current focus, with [clean builds](https://review.openstack.org/566440) using UCA packages. ### Upgrades * No updates this month. ### On The Menu *Chicken Cordon Bleu Casserole* (makes 8-10 portions) * 1500g chicken, cubed in 1" pieces * 300g ham steak, cubed in 0.5" pieces * 300g Swiss cheese * 230ml Heavy Whipping Cream * 230ml cream cheese / Neufchatel * To taste: salt, pepper, garlic powder #### Instructions 1. Cook whole pieces of chicken most of the way through so it isn't tough and rubbery. A little pink here is a good thing - it will finish in the oven. Slice into roughly 1" cubes. 2. Line the bottom of the pan with chicken cubes 3. Sprinkle salt, pepper and garlic powder (sorry non-US folks) over the chicken 4. Sprinkle ham cubes on top of the chicken 5. Shred Swiss cheese and spread over the mixture 6. Heat the cream cheese in the microwave, add the cream and mix. Pour mixture over the casserole. 7. Mix ingredients until incorporated. Overmixing will give a more pate-like texture. 8. Bake @ 350F / 176C for 40 minutes. Your humble line cook, Samuel Cassiba (scas) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at sheep.art.pl Mon May 28 06:35:36 2018 From: openstack at sheep.art.pl (Radomir Dopieralski) Date: Mon, 28 May 2018 08:35:36 +0200 Subject: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon In-Reply-To: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> References: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> Message-ID: I did a quick search for all the glyphs we are using: ~/dev/horizon(master)> ag 'fa-' | egrep -o 'fa-[a-z-]*' | sort | uniq fa- fa-angle-left fa-angle-right fa-arrow-down fa-arrow-up fa-asterisk fa-b fa-bars fa-bolt fa-bug fa-calculator fa-calendar fa-caret-down fa-caret-up fa-check fa-chevron-down fa-chevron-right fa-circle fa-close fa-cloud fa-cloud-upload fa-code fa-cog fa-desktop fa-download fa-exclamation fa-exclamation-circle fa-exclamation-triangle fa-eye fa-eye-slash fa-font-path fa-fw fa-group fa-home fa-icon fa-info-circle fa-lg fa-list-alt fa-lock fa-minus fa-pencil fa-plus fa-question-circle fa-refresh fa-save fa-search fa-server fa-share-square-o fa-sign-out fa-sort fa-spin fa-spinner fa-square fa-th fa-th-large fa-times fa-trash fa-unlock fa-upload fa-user fa-var-check-square-o fa-var-circle-o fa-var-dot-circle-o fa-var-sort fa-var-sort-asc fa-var-sort-desc fa-var-square-o fa-warning The lone "fa-" comes from table actions, where the icon is defined in code, and pretty much anything can be used — since plugins can define their own actions with their own icons — but I don't think we use anything exotic there. On Sat, May 26, 2018 at 7:37 PM, Thomas Goirand wrote: > Hi Horizon team! > > I'm not sure if you're aware of that, but the upstream authors of > fontawesome decided it was a good idea to break everyone. See this > Debian bug entry: > > https://bugs.debian.org/899124 > > So, what happened is that fontawesome-webfont has been split into 3 sets > of fonts: solid, regular and brands fonts. Thus there is no drop in > replacement for the old fontawesome-webfont.xxx. > > As my python3-xstatic-font-awesome removes the embedded fonts, and just > points to /usr/share/fonts-font-awesome, Horizon is broken and cannot > even be installed currently in Debian Sid. Of course, I'm considering > reverting the removal of the data folder from the xstatic package, but > it then defeats the purpose of Xstatic, which is avoiding duplication of > static files already packaged in the distribution. > > So, ideally, I would like to know first if I can use the fa-solid-900 > for Horizon, or if other glyphs are in use (it's very much possible that > only fa-solid-900 stuff are in use, but I really don't know how to check > for that the correct way). If that's the case, then I can workaround the > issue (at least temporarily), and synlink stuff in the data folder to > the new fa-solid-900 files. > > Second, it'd be nice if Horizon could adapt and use the new v5 > font-awesome, so that the problem is completely solved. > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bdobreli at redhat.com Mon May 28 09:43:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 28 May 2018 11:43:55 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <1527266023.toq0ou1iml.tristanC@fedora> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <1a143f50-e6d9-b162-3284-35978bd1d084@redhat.com> <1527266023.toq0ou1iml.tristanC@fedora> Message-ID: <96b702d9-c8b0-b40d-c75c-827c408fd23d@redhat.com> On 5/25/18 6:40 PM, Tristan Cacqueray wrote: > Hello Bogdan, > > Perhaps this has something to do with jobs evaluation order, it may be > worth trying to add the dependencies list in the project-templates, like > it is done here for example: > http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 > > > It also easier to read dependencies from pipelines definition imo. Thank you! It seems for the most places, tripleo uses pre-defined templates, see [0]. And templates can not import dependencies [1] :( [0] http://codesearch.openstack.org/?q=-%20project%3A&i=nope&files=&repos=tripleo-ci,tripleo-common,tripleo-common-tempest-plugin,tripleo-docs,tripleo-ha-utils,tripleo-heat-templates,tripleo-image-elements,tripleo-ipsec,tripleo-puppet-elements,tripleo-quickstart,tripleo-quickstart-extras,tripleo-repos,tripleo-specs,tripleo-ui,tripleo-upgrade,tripleo-validations [1] https://review.openstack.org/#/c/568536/4 > > -Tristan > > On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote: >> Job dependencies seem ignored by zuul, see jobs [0],[1],[2] started >> simultaneously. While I expected them run one by one. According to the >> patch 568536 [3], [1] is a dependency for [2] and [3]. >> >> The same can be observed for the remaining patches in the topic [4]. >> Is that a bug or I misunderstood what zuul job dependencies actually do? 
>> >> [0] >> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ >> >> [1] >> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ >> >> [2] >> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ >> >> [3] https://review.openstack.org/#/c/568536/ >> [4] >> https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) >> >> -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Mon May 28 09:53:27 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 28 May 2018 11:53:27 +0200 Subject: [openstack-dev] [ci][infra][tripleo] Multi-staged check pipelines for Zuul v3 proposal In-Reply-To: <96b702d9-c8b0-b40d-c75c-827c408fd23d@redhat.com> References: <90762663-3bc2-d7b8-1242-ea5a67543c68@redhat.com> <8060ff1f-e546-868b-8729-090d643969d7@redhat.com> <1a143f50-e6d9-b162-3284-35978bd1d084@redhat.com> <1527266023.toq0ou1iml.tristanC@fedora> <96b702d9-c8b0-b40d-c75c-827c408fd23d@redhat.com> Message-ID: <66c4c976-0faa-bfaf-5070-c9f491aea5ae@redhat.com> On 5/28/18 11:43 AM, Bogdan Dobrelya wrote: > On 5/25/18 6:40 PM, Tristan Cacqueray wrote: >> Hello Bogdan, >> >> Perhaps this has something to do with jobs evaluation order, it may be >> worth trying to add the dependencies list in the project-templates, like >> it is done here for example: >> http://git.openstack.org/cgit/openstack-infra/project-config/tree/zuul.d/projects.yaml#n9799 >> >> >> It also easier to read dependencies from pipelines definition imo. > > Thank you! > It seems for the most places, tripleo uses pre-defined templates, see > [0]. And templates can not import dependencies [1] :( Here is a zuul story for that [2] [2] https://storyboard.openstack.org/#!/story/2002113 > > [0] > http://codesearch.openstack.org/?q=-%20project%3A&i=nope&files=&repos=tripleo-ci,tripleo-common,tripleo-common-tempest-plugin,tripleo-docs,tripleo-ha-utils,tripleo-heat-templates,tripleo-image-elements,tripleo-ipsec,tripleo-puppet-elements,tripleo-quickstart,tripleo-quickstart-extras,tripleo-repos,tripleo-specs,tripleo-ui,tripleo-upgrade,tripleo-validations > > > [1] https://review.openstack.org/#/c/568536/4 > >> >> -Tristan >> >> On May 25, 2018 12:45 pm, Bogdan Dobrelya wrote: >>> Job dependencies seem ignored by zuul, see jobs [0],[1],[2] started >>> simultaneously. While I expected them run one by one. According to >>> the patch 568536 [3], [1] is a dependency for [2] and [3]. >>> >>> The same can be observed for the remaining patches in the topic [4]. >>> Is that a bug or I misunderstood what zuul job dependencies actually do? 
>>> >>> [0] >>> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-undercloud-containers/731183a/ara-report/ >>> >>> [1] >>> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-3nodes-multinode/a1353ed/ara-report/ >>> >>> [2] >>> http://logs.openstack.org/36/568536/2/check/tripleo-ci-centos-7-containers-multinode/9777136/ara-report/ >>> >>> [3] https://review.openstack.org/#/c/568536/ >>> [4] >>> https://review.openstack.org/#/q/topic:ci_pipelines+(status:open+OR+status:merged) >>> >>> > -- Best regards, Bogdan Dobrelya, Irc #bogdando From sbauza at redhat.com Mon May 28 12:31:59 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 28 May 2018 14:31:59 +0200 Subject: [openstack-dev] [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI In-Reply-To: References: Message-ID: On Fri, May 25, 2018 at 12:19 AM, Matt Riedemann wrote: > I've written a nova-manage placement heal_allocations CLI [1] which was a > TODO from the PTG in Dublin as a step toward getting existing > CachingScheduler users to roll off that (which is deprecated). > > During the CERN cells v1 upgrade talk it was pointed out that CERN was > able to go from placement-per-cell to centralized placement in Ocata > because the nova-computes in each cell would automatically recreate the > allocations in Placement in a periodic task, but that code is gone once > you're upgraded to Pike or later. > > In various other talks during the summit this week, we've talked about > things during upgrades where, for instance, if placement is down for some > reason during an upgrade, a user deletes an instance and the allocation > doesn't get cleaned up from placement so it's going to continue counting > against resource usage on that compute node even though the server instance > in nova is gone. So this CLI could be expanded to help clean up situations > like that, e.g. provide it a specific server ID and the CLI can figure out > if it needs to clean things up in placement. > > So there are plenty of things we can build into this, but the patch is > already quite large. I expect we'll also be backporting this to stable > branches to help operators upgrade/fix allocation issues. It already has > several things listed in a code comment inline about things to build into > this later. > > My question is, is this good enough for a first iteration or is there > something severely missing before we can merge this, like the automatic > marker tracking mentioned in the code (that will probably be a non-trivial > amount of code to add). I could really use some operator feedback on this > to just take a look at what it already is capable of and if it's not going > to be useful in this iteration, let me know what's missing and I can add > that in to the patch. > > [1] https://review.openstack.org/#/c/565886/ > > It does sound for me a good way to help operators. That said, given I'm now working on using Nested Resource Providers for VGPU inventories, I wonder about a possible upgrade problem with VGPU allocations. Given that : - in Queens, VGPU inventories are for the root RP (ie. the compute node RP), but, - in Rocky, VGPU inventories will be for children RPs (ie. against a specific VGPU type), then if we have VGPU allocations in Queens, when upgrading to Rocky, we should maybe recreate the allocations to a specific other inventory ? Hope you see the problem with upgrading by creating nested RPs ? 
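To make the concern concrete with the osc-placement CLI (a rough
sketch; the UUID variables are placeholders, and the commands may need
a recent enough placement microversion):

    # Queens: the VGPU inventory sits on the compute node (root) RP,
    # and a server's allocations reference that root RP:
    $ openstack resource provider inventory list $ROOT_RP_UUID
    $ openstack resource provider allocation show $SERVER_UUID

    # Rocky: the virt driver reports the VGPU inventory on a child RP
    # per GPU type, but nothing rewrites the existing allocation:
    $ openstack resource provider inventory list $CHILD_RP_UUID
    $ openstack resource provider allocation show $SERVER_UUID
    # ... which would still show VGPU consumed from $ROOT_RP_UUID,
    # where the inventory no longer exists.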
> -- > > Thanks, > > Matt > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon May 28 14:18:45 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 28 May 2018 16:18:45 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers Message-ID: Hi, I already told about that in a separate thread, but let's put it here too for more visibility. tl;dr: I suspect existing allocations are being lost when we upgrade a compute service from Queens to Rocky, if those allocations are made against inventories that are now provided by a child Resource Provider. I started reviewing https://review.openstack.org/#/c/565487/ and bottom patches to understand the logic with querying nested resource providers. >From what I understand, the scheduler will query Placement using the same query but will get (thanks to a new microversion) not only allocation candidates that are root resource providers but also any possible child. If so, that's great as in a rolling upgrade scenario with mixed computes (both Queens and Rocky), we will still continue to return both old RPs and new child RPs if they both support the same resource classes ask. Accordingly, allocations done by the scheduler will be made against the corresponding Resource Provider, whether it's a root RP (old way) or a child RP (new way). Do I still understand correctly ? If yes, perfect, let's jump to my upgrade concern. Now, consider the Queens->Rocky compute upgrade. If I'm an operator and I start deploying Rocky on one compute node, it will provide to Placement API new inventories that are possibly nested. In that situation, say for example with VGPU inventories, that would mean that the compute node would stop reporting inventories for its root RP, but would rather report inventories for at least one single child RP. In that model, do we reconcile the allocations that were already made against the "root RP" inventory ? I don't think so, hence my question here. Thanks, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkovar at redhat.com Mon May 28 14:40:13 2018 From: pkovar at redhat.com (Petr Kovar) Date: Mon, 28 May 2018 16:40:13 +0200 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: <20180517150323.47mdca3625l5dfj7@yuggoth.org> References: <20180516182445.3b9286418271e98ad9581474@redhat.com> <20180516170515.2gvyxqrnoacitndp@yuggoth.org> <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com> <20180517150323.47mdca3625l5dfj7@yuggoth.org> Message-ID: <20180528164013.db58434a01c312c23c147cf1@redhat.com> On Thu, 17 May 2018 15:03:23 +0000 Jeremy Stanley wrote: > On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote: > > On Wed, 16 May 2018 17:05:15 +0000 > > Jeremy Stanley wrote: > > > > > On 2018-05-16 18:24:45 +0200 (+0200), Petr Kovar wrote: > > > [...] > > > > I'd like to propose replacing the reference to the IBM Style Guide > > > > with a reference to the developerWorks editorial style guide > > > > (https://www.ibm.com/developerworks/library/styleguidelines/). > > > > This lightweight version comes from the same company and is based > > > > on the same guidelines, but most importantly, it is available for > > > > free. > > > [...] 
> > > > > > I suppose replacing a style guide nobody can access with one > > > everyone can (modulo legal concerns) is a step up. Still, are there > > > no style guides published under an actual free/open license? If > > > https://www.ibm.com/developerworks/community/terms/use/ is correct > > > then even accidental creation of a derivative work might be > > > prosecuted as copyright infringement. > > > > > > We don't really plan on reusing content from that site, just referring to > > it, so is it a concern? > [...] > > A style guide is a tool. Free and open collaboration needs free > (libre, not merely gratis) tools, and that doesn't just mean > software. If, down the road, you want an OpenStack Documentation > Style Guide which covers OpenStack-specific concerns to quote or > transclude information from a more thorough guide, that becomes a > derivative work and is subject to the licensing terms for the guide > from which you're copying. Okay, but that's not what we want to do here. > There are a lot of other parallels between writing software and > writing prose here beyond mere intellectual property concerns too. > Saying that OpenStack Documentation is free and open, but then > endorsing an effectively proprietary guide as something its authors > should read and follow, sends a mixed message as to our position on > open documentation (as a style guide is of course also documentation > in its own right). On the other hand, recommending use of a style > guide which is available under a free/libre open source license or > within the public domain resonates with our ideals and principles as > a community, serving only to strengthen our position on openness in > all its endeavors (including documentation). I'm all for openness but maintaining consistency is why style guides matter. Switching to a different style guide would require the following: 1) agreeing on the right style guide, 2) reviewing our current style guidelines in doc-contrib-guide and updating them as needed so that they comply with the new style guide, and, 3) ideally, begin reviewing all of OpenStack docs for style changes. Do we have a volunteer who would be interested in taking on these tasks? If not, we have to go for a quick fix. Either reference developerWorks, or, if that's a concern, remove references to external style guides altogether (and provide less information as a result). I prefer the former. Cheers, pk From sc at linux.it Mon May 28 15:46:47 2018 From: sc at linux.it (Stefano Canepa) Date: Mon, 28 May 2018 16:46:47 +0100 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: <20180528164013.db58434a01c312c23c147cf1@redhat.com> References: <20180516182445.3b9286418271e98ad9581474@redhat.com> <20180516170515.2gvyxqrnoacitndp@yuggoth.org> <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com> <20180517150323.47mdca3625l5dfj7@yuggoth.org> <20180528164013.db58434a01c312c23c147cf1@redhat.com> Message-ID: On 28 May 2018 at 15:40, Petr Kovar wrote: > On Thu, 17 May 2018 15:03:23 +0000 > Jeremy Stanley wrote: > > > On 2018-05-17 16:35:36 +0200 (+0200), Petr Kovar wrote: > ​%<----- ​ > I'm all for openness but maintaining consistency is why style guides > matter. Switching to a different style guide would require the following: > > 1) agreeing on the right style guide, > 2) reviewing our current style guidelines in doc-contrib-guide and updating > them as needed so that they comply with the new style guide, and, > 3) ideally, begin reviewing all of OpenStack docs for style changes. 
> > Do we have a volunteer who would be interested in taking on these tasks? If > not, we have to go for a quick fix. Either reference developerWorks, or, if > that's a concern, remove references to external style guides > altogether (and provide less information as a result). I prefer the former. > > Cheers, > pk >

Petr, do we really need to reference another style guide? How many times have people clicked on the link to the IBM guide?

If the first answer is yes and the second is hundreds of times, then in my opinion your first option is the right one; otherwise I'd go for the second.

My 2¢ Stefano

PS: a good free doc style guideline is the GNOME one.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From opensrloo at gmail.com Mon May 28 16:23:25 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Mon, 28 May 2018 12:23:25 -0400 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: References: <20180516182445.3b9286418271e98ad9581474@redhat.com> <20180516170515.2gvyxqrnoacitndp@yuggoth.org> <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com> <20180517150323.47mdca3625l5dfj7@yuggoth.org> <20180528164013.db58434a01c312c23c147cf1@redhat.com> Message-ID:

On Mon, May 28, 2018 at 11:46 AM, Stefano Canepa wrote: > > > Petr, > do we really need to reference another style guide? > How many times have people clicked on the link to the IBM guide? > >

Many years ago, I tried to understand the rationale behind some of the doc team's recommendations. All roads led to this secret IBM guide. I did not know of any easy way to get access to it, gave up, and stuck my head back in the sand. Having said that, I figured if that was the best way for the docs team to do their work, who was I to argue about it, since I wasn't going to ante up and do any of that work myself. *Some* documentation is better than *no* documentation :)

How much of that IBM guide is used? Would it be easier to just summarize the parts of interest? I've managed to live w/o looking at it (which doesn't say much I guess). I dread another massive change to documentation across all projects, due to changes of style. Unless it is all automated, I don't feel like it is the best use of our human resources at this point in time. (If you want to use your pets for this, I'm all for it.)

--ruby

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From fungi at yuggoth.org Mon May 28 17:24:57 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 28 May 2018 17:24:57 +0000 Subject: [openstack-dev] [docs] Style guide for OpenStack documentation In-Reply-To: <20180528164013.db58434a01c312c23c147cf1@redhat.com> References: <20180516182445.3b9286418271e98ad9581474@redhat.com> <20180516170515.2gvyxqrnoacitndp@yuggoth.org> <20180517163536.39fa9c4e1fe97fece9f4775e@redhat.com> <20180517150323.47mdca3625l5dfj7@yuggoth.org> <20180528164013.db58434a01c312c23c147cf1@redhat.com> Message-ID: <20180528172456.yuilcoautg66ufn6@yuggoth.org>

On 2018-05-28 16:40:13 +0200 (+0200), Petr Kovar wrote: [...] > I'm all for openness but maintaining consistency is why style guides > matter. Switching to a different style guide would require the following: > > 1) agreeing on the right style guide, > 2) reviewing our current style guidelines in doc-contrib-guide and updating > them as needed so that they comply with the new style guide, and, > 3) ideally, begin reviewing all of OpenStack docs for style changes. [...]

I get this (and alluded to as much in my first message in this thread, in fact).
My point was that _when_ you're to the point of evaluating switching to a wholly different style guide it would be great to take such concerns into account. It also serves as a cautionary tale to other newly forming projects (outside OpenStack) that may at some point stumble across this discussion. Please choose free tools at every opportunity; I sure wish we had in this case.

-- Jeremy Stanley

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From nakamura.tetsuro at lab.ntt.co.jp Tue May 29 01:08:55 2018 From: nakamura.tetsuro at lab.ntt.co.jp (TETSURO NAKAMURA) Date: Tue, 29 May 2018 10:08:55 +0900 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: Message-ID: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>

Hi,

> Do I still understand correctly ? If yes, perfect, let's jump to my upgrade > concern.

Yes, I think so. The old microversions look into only root providers and give up providing resources if a root provider itself doesn't have enough inventories for the requested resources. But the new microversion also looks into the root's descendants and sees whether they can provide the requested resources *collectively* in that tree.

The tests from [1] would help you understand this, where VCPUs come from the root (compute host) and SRIOV_NET_VFs from its grandchild.

[1] https://review.openstack.org/#/c/565487/15/nova/tests/functional/api/openstack/placement/gabbits/allocation-candidates.yaml at 362

> In that situation, say for example with VGPU inventories, that would mean > that the compute node would stop reporting inventories for its root RP, but > would rather report inventories for at least one single child RP. > In that model, do we reconcile the allocations that were already made > against the "root RP" inventory ?

It would be nice to see Eric and Jay comment on this, but if I'm not mistaken, when the virt driver stops reporting inventories for its root RP, placement would try to delete that inventory internally and raise an InventoryInUse exception if any allocations still exist on that resource.

```
update_from_provider_tree() (nova/compute/resource_tracker.py)
 + _set_inventory_for_provider() (nova/scheduler/client/report.py)
   + put() - PUT /resource_providers/<uuid>/inventories with new inventories (scheduler/client/report.py)
     + set_inventories() (placement/handler/inventory.py)
       + _set_inventory() (placement/objects/resource_provider.py)
         + _delete_inventory_from_provider() (placement/objects/resource_provider.py)
           -> raise exception.InventoryInUse
```

So we need some trick, something like deleting the VGPU allocations before upgrading and setting the allocations again against the newly created child after upgrading?

On 2018/05/28 23:18, Sylvain Bauza wrote: > Hi, > > I already told about that in a separate thread, but let's put it here too > for more visibility. > > tl;dr: I suspect existing allocations are being lost when we upgrade a > compute service from Queens to Rocky, if those allocations are made against > inventories that are now provided by a child Resource Provider. > > > I started reviewing https://review.openstack.org/#/c/565487/ and bottom > patches to understand the logic with querying nested resource providers.
>>From what I understand, the scheduler will query Placement using the same > query but will get (thanks to a new microversion) not only allocation > candidates that are root resource providers but also any possible child. > > If so, that's great as in a rolling upgrade scenario with mixed computes > (both Queens and Rocky), we will still continue to return both old RPs and > new child RPs if they both support the same resource classes ask. > Accordingly, allocations done by the scheduler will be made against the > corresponding Resource Provider, whether it's a root RP (old way) or a > child RP (new way). > > Do I still understand correctly ? If yes, perfect, let's jump to my upgrade > concern. > Now, consider the Queens->Rocky compute upgrade. If I'm an operator and I > start deploying Rocky on one compute node, it will provide to Placement API > new inventories that are possibly nested. > In that situation, say for example with VGPU inventories, that would mean > that the compute node would stop reporting inventories for its root RP, but > would rather report inventories for at least one single child RP. > In that model, do we reconcile the allocations that were already made > against the "root RP" inventory ? I don't think so, hence my question here. > > Thanks, > -Sylvain > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Tetsuro Nakamura NTT Network Service Systems Laboratories TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan From zhipengh512 at gmail.com Tue May 29 02:27:01 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 29 May 2018 10:27:01 +0800 Subject: [openstack-dev] [cyborg]No Meeting This Week Message-ID: Hi team, Given that people still recover from the summit and memorial day holiday, let's cancel the team weekly meeting this week as well. At the mean time feel free to communicate over irc or email :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue May 29 07:38:39 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 May 2018 09:38:39 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> Message-ID: On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA < nakamura.tetsuro at lab.ntt.co.jp> wrote: > Hi, > > > Do I still understand correctly ? If yes, perfect, let's jump to my > upgrade > > concern. > > Yes, I think. The old microversions look into only root providers and give > up providing resources if a root provider itself doesn't have enough > inventories for requested resources. 
But the new microversion looks into > the root's descendents also and see if it can provide requested resources > *collectively* in that tree. > > The tests from [1] would help you understand this, where VCPUs come from > the root(compute host) and SRIOV_NET_VFs from its grandchild. > > [1] https://review.openstack.org/#/c/565487/15/nova/tests/functi > onal/api/openstack/placement/gabbits/allocation-candidates.yaml at 362 > > Yeah I already saw those tests, but I wanted to make sure I was correctly understanding. > > In that situation, say for example with VGPU inventories, that would mean > > that the compute node would stop reporting inventories for its root RP, > but > > would rather report inventories for at least one single child RP. > > In that model, do we reconcile the allocations that were already made > > against the "root RP" inventory ? > > It would be nice to see Eric and Jay comment on this, > but if I'm not mistaken, when the virt driver stops reporting inventories > for its root RP, placement would try to delete that inventory inside and > raise InventoryInUse exception if any allocations still exist on that > resource. > > ``` > update_from_provider_tree() (nova/compute/resource_tracker.py) > + _set_inventory_for_provider() (nova/scheduler/client/report.py) > + put() - PUT /resource_providers//inventories with new > inventories (scheduler/client/report.py) > + set_inventories() (placement/handler/inventory.py) > + _set_inventory() (placement/objects/resource_proveider.py) > + _delete_inventory_from_provider() > (placement/objects/resource_proveider.py) > -> raise exception.InventoryInUse > ``` > > So we need some trick something like deleting VGPU allocations before > upgrading and set the allocation again for the created new child after > upgrading? > > I wonder if we should keep the existing inventory in the root RP, and somehow just reserve the left resources (so Placement wouldn't pass that root RP for queries, but would still have allocations). But then, where and how to do this ? By the resource tracker ? -Sylvain > On 2018/05/28 23:18, Sylvain Bauza wrote: > >> Hi, >> >> I already told about that in a separate thread, but let's put it here too >> for more visibility. >> >> tl;dr: I suspect existing allocations are being lost when we upgrade a >> compute service from Queens to Rocky, if those allocations are made >> against >> inventories that are now provided by a child Resource Provider. >> >> >> I started reviewing https://review.openstack.org/#/c/565487/ and bottom >> patches to understand the logic with querying nested resource providers. >> >>> From what I understand, the scheduler will query Placement using the same >>> >> query but will get (thanks to a new microversion) not only allocation >> candidates that are root resource providers but also any possible child. >> >> If so, that's great as in a rolling upgrade scenario with mixed computes >> (both Queens and Rocky), we will still continue to return both old RPs and >> new child RPs if they both support the same resource classes ask. >> Accordingly, allocations done by the scheduler will be made against the >> corresponding Resource Provider, whether it's a root RP (old way) or a >> child RP (new way). >> >> Do I still understand correctly ? If yes, perfect, let's jump to my >> upgrade >> concern. >> Now, consider the Queens->Rocky compute upgrade. If I'm an operator and I >> start deploying Rocky on one compute node, it will provide to Placement >> API >> new inventories that are possibly nested. 
>> In that situation, say for example with VGPU inventories, that would mean >> that the compute node would stop reporting inventories for its root RP, >> but >> would rather report inventories for at least one single child RP. >> In that model, do we reconcile the allocations that were already made >> against the "root RP" inventory ? I don't think so, hence my question >> here. >> >> Thanks, >> -Sylvain >> >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > -- > Tetsuro Nakamura > NTT Network Service Systems Laboratories > TEL:0422 59 6914(National)/+81 422 59 6914(International) > 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vladislav.belogrudov at oracle.com Tue May 29 08:58:40 2018 From: vladislav.belogrudov at oracle.com (vladislav.belogrudov at oracle.com) Date: Tue, 29 May 2018 11:58:40 +0300 Subject: [openstack-dev] [kolla-ansible] dns_interface, inv Message-ID: <663917d6-e2ab-4bfe-b4fa-2787ec0747e2@oracle.com> Hi, in multinode inventory I see that both haproxy and l3 agents run on network nodes. Does it mean that network nodes need two public interfaces - one for external VIP and another (unassigned) for external bridge? Will it work without conflicts? Another question, designate-mdns runs on network nodes while others including designate-backend-bind - on controllers. Would it be better to run *bind on network node to facilitate easier external connectivity? I mean requirement for dns_interface, in other words, why not to make dns_interface == external one? Thanks, Vladislav From balazs.gibizer at ericsson.com Tue May 29 09:01:51 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 29 May 2018 11:01:51 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> Message-ID: <1527584511.6381.1@smtp.office365.com> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza wrote: > > > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA > wrote > >> > In that situation, say for example with VGPU inventories, that >> would mean >> > that the compute node would stop reporting inventories for its >> root RP, but >> > would rather report inventories for at least one single child RP. >> > In that model, do we reconcile the allocations that were already >> made >> > against the "root RP" inventory ? >> >> It would be nice to see Eric and Jay comment on this, >> but if I'm not mistaken, when the virt driver stops reporting >> inventories for its root RP, placement would try to delete that >> inventory inside and raise InventoryInUse exception if any >> allocations still exist on that resource. 
>> >> ``` >> update_from_provider_tree() (nova/compute/resource_tracker.py) >> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >> + put() - PUT /resource_providers//inventories with >> new inventories (scheduler/client/report.py) >> + set_inventories() (placement/handler/inventory.py) >> + _set_inventory() >> (placement/objects/resource_proveider.py) >> + _delete_inventory_from_provider() >> (placement/objects/resource_proveider.py) >> -> raise exception.InventoryInUse >> ``` >> >> So we need some trick something like deleting VGPU allocations >> before upgrading and set the allocation again for the created new >> child after upgrading? >> > > I wonder if we should keep the existing inventory in the root RP, and > somehow just reserve the left resources (so Placement wouldn't pass > that root RP for queries, but would still have allocations). But > then, where and how to do this ? By the resource tracker ? >

AFAIK it is the virt driver that decides to model the VGPU resource at a different place in the RP tree, so I think it is the responsibility of the same virt driver to move any existing allocation from the old place to the new place during this change.

Cheers, gibi

> -Sylvain >

From sylvain.bauza at gmail.com Tue May 29 09:52:05 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Tue, 29 May 2018 11:52:05 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <1527584511.6381.1@smtp.office365.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> Message-ID:

2018-05-29 11:01 GMT+02:00 Balázs Gibizer : > > > On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza wrote: >> >> >> On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA < >> nakamura.tetsuro at lab.ntt.co.jp> wrote >> >> > In that situation, say for example with VGPU inventories, that would >>> mean >>> > that the compute node would stop reporting inventories for its root >>> RP, but >>> > would rather report inventories for at least one single child RP. >>> > In that model, do we reconcile the allocations that were already made >>> > against the "root RP" inventory ? >>> >>> It would be nice to see Eric and Jay comment on this, >>> but if I'm not mistaken, when the virt driver stops reporting >>> inventories for its root RP, placement would try to delete that inventory >>> inside and raise InventoryInUse exception if any allocations still exist on >>> that resource. >>> >>> ``` >>> update_from_provider_tree() (nova/compute/resource_tracker.py) >>> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >>> + put() - PUT /resource_providers//inventories with new >>> inventories (scheduler/client/report.py) >>> + set_inventories() (placement/handler/inventory.py) >>> + _set_inventory() (placement/objects/resource_pr >>> oveider.py) >>> + _delete_inventory_from_provider() >>> (placement/objects/resource_proveider.py) >>> -> raise exception.InventoryInUse >>> ``` >>> >>> So we need some trick something like deleting VGPU allocations before >>> upgrading and set the allocation again for the created new child after >>> upgrading? >>> >>> >> I wonder if we should keep the existing inventory in the root RP, and >> somehow just reserve the left resources (so Placement wouldn't pass that >> root RP for queries, but would still have allocations). But then, where and >> how to do this ? By the resource tracker ?
>> >> > AFAIK it is the virt driver that decides to model the VGU resource at a > different place in the RP tree so I think it is the responsibility of the > same virt driver to move any existing allocation from the old place to the > new place during this change. > > No. Allocations are done by the scheduler or by the conductor. Virt drivers only provide inventories. > Cheers, > gibi > > > -Sylvain >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Tue May 29 10:58:47 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 29 May 2018 12:58:47 +0200 Subject: [openstack-dev] [nova] Notification update week 22 Message-ID: <1527591527.6381.2@smtp.office365.com> Hi, Here is the latest notification subteam update. Bugs ---- No new bugs, no progress on open bugs. Features -------- Sending full traceback in versioned notifications ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications I left some comment in the implementation patch https://review.openstack.org/#/c/564092/ Add notification support for trusted_certs ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This is part of the bp nova-validate-certificates implementation series to extend some of the instance notifications. I'm +2 on the notification impact in https://review.openstack.org/#/c/563269 waiting for the rest of the series to merge. Introduce Pending VM state ~~~~~~~~~~~~~~~~~~~~~~~~~~ The spec https://review.openstack.org/#/c/554212 proposes some notification change to signal when a VM goes to PENDING state. We discussed the notification impact on the summit and agreed to transform the legacy scheduler.select_destinations notification and extend it if necessary. Detailed discussion still ongoing in the spec review. No progress: ~~~~~~~~~~~~ * Versioned notification transformation https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open * Introduce instance.lock and instance.unlock notifications https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances * Add the user id and project id of the user initiated the instance action to the notification https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications Blocked: ~~~~~~~~ * Add versioned notifications for removing a member from a server group https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications Weekly meeting -------------- The next meeting will be held on 29th of May (Today!) 
on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180529T170000

Cheers, gibi

From balazs.gibizer at ericsson.com Tue May 29 11:02:29 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 29 May 2018 13:02:29 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> Message-ID: <1527591749.6381.3@smtp.office365.com>

On Tue, May 29, 2018 at 11:52 AM, Sylvain Bauza wrote: > > > 2018-05-29 11:01 GMT+02:00 Balázs Gibizer > : >> >> >> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza >> wrote: >>> >>> >>> On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA >>> wrote >>> >>>> > In that situation, say for example with VGPU inventories, that >>>> would mean >>>> > that the compute node would stop reporting inventories for its >>>> root RP, but >>>> > would rather report inventories for at least one single child RP. >>>> > In that model, do we reconcile the allocations that were already >>>> made >>>> > against the "root RP" inventory ? >>>> >>>> It would be nice to see Eric and Jay comment on this, >>>> but if I'm not mistaken, when the virt driver stops reporting >>>> inventories for its root RP, placement would try to delete that >>>> inventory inside and raise InventoryInUse exception if any >>>> allocations still exist on that resource. >>>> >>>> ``` >>>> update_from_provider_tree() (nova/compute/resource_tracker.py) >>>> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >>>> + put() - PUT /resource_providers//inventories with >>>> new inventories (scheduler/client/report.py) >>>> + set_inventories() (placement/handler/inventory.py) >>>> + _set_inventory() >>>> (placement/objects/resource_proveider.py) >>>> + _delete_inventory_from_provider() >>>> (placement/objects/resource_proveider.py) >>>> -> raise exception.InventoryInUse >>>> ``` >>>> >>>> So we need some trick something like deleting VGPU allocations >>>> before upgrading and set the allocation again for the created new >>>> child after upgrading? >>>> >>> >>> I wonder if we should keep the existing inventory in the root RP, >>> and somehow just reserve the left resources (so Placement wouldn't >>> pass that root RP for queries, but would still have allocations). >>> But then, where and how to do this ? By the resource tracker ? >>> >> >> AFAIK it is the virt driver that decides to model the VGU resource >> at a >> different place in the RP tree so I think it is the responsibility of >> the same virt driver to move any existing allocation from the old >> place >> to the new place during this change. >> > > No. Allocations are done by the scheduler or by the conductor. Virt > drivers only provide inventories.

I understand that the allocation is made by the scheduler and the conductor, but today the scheduler and the conductor do not have to know the structure of the RP tree to make such allocations. Therefore, for me the scheduler and the conductor are a bad place to try to move allocations around due to a change in the modelling of the resources in the RP tree. On the other hand, the virt driver knows the structure of the RP tree, so it has the necessary information to move the existing allocation from the old place to the new place.
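To sketch what I mean (conceptual only; the two helper methods here are hypothetical stand-ins, not existing nova report client APIs):

```
# Conceptual sketch only. get_allocations_for_provider() and
# put_allocations() are hypothetical names, not real report client methods.
def move_vgpu_allocations(client, root_rp_uuid, child_rp_uuid):
    # allocations shape: {consumer_uuid: {rp_uuid: {"resources": {...}}}}
    for consumer, allocs in client.get_allocations_for_provider(
            root_rp_uuid).items():
        resources = allocs[root_rp_uuid]["resources"]
        vgpus = resources.pop("VGPU", 0)
        if not vgpus:
            continue
        # Keep VCPU/MEMORY_MB against the root RP, and re-point the VGPU
        # part of the allocation at the child RP the driver now models.
        allocs[child_rp_uuid] = {"resources": {"VGPU": vgpus}}
        client.put_allocations(consumer, allocs)
```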
gibi

> > >> Cheers, >> gibi >> >> >>> -Sylvain >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >

From tomi.juvonen at nokia.com Tue May 29 11:14:12 2018 From: tomi.juvonen at nokia.com (Juvonen, Tomi (Nokia - FI/Espoo)) Date: Tue, 29 May 2018 11:14:12 +0000 Subject: [openstack-dev] New OpenStack project for rolling maintenance and upgrade in interaction with application on top of it Message-ID:

Hi,

I am the PTL of the OPNFV Doctor project. I have been working for a couple of years on figuring out infrastructure maintenance in interaction with the application on top of it. I have looked into Nova and Craton, and held several Ops sessions. Over the past half year there have been a couple of different POCs, the latest in March at ONS [1] [2]. At the OpenStack Vancouver summit last week it was time to present [3]. In the Forum discussion following the presentation, the question was whether to do this just by utilizing different existing projects; but to make this generic, pluggable, easily adapted and future proof, it now comes down to starting what I almost started a couple of years ago: the OpenStack Fenix project [4].

On behalf of OPNFV Doctor I would welcome any last thoughts before starting the project, and would also love to see somebody joining to make Fenix fly.

The main use cases, to list most of them:

* As a cloud admin I want to maintain and upgrade my infrastructure in a rolling fashion.
* As a cloud admin I want to have a pluggable workflow to maintain and upgrade my infrastructure, to ensure it can be done with complicated infrastructure components and in interaction with different application payloads on top of it.
* As an infrastructure service, I need to know whether infrastructure unavailability is because of planned maintenance.
* As a critical application owner, I want to be aware of any planned downtime affecting my service.
* As a critical application owner, I want to interact with the infrastructure rolling maintenance workflow to have a time window to ensure zero downtime for my service and to be able to decide to make admin actions like migration of my instance.
* As an application owner, I need to know when an admin action like migration is complete.
* As an application owner, I want to know about new capabilities coming because of infrastructure maintenance or upgrade, so I can also take them into use in my application. This could be a hardware capability or, for example, an OpenStack upgrade.
* As a critical application that needs to scale with varying load, I need to interactively know about infrastructure resources scaling up and down, so I can scale my application at the same time while keeping zero downtime for my service.
* As a critical application, I want to have the retirement of my service done in a controlled fashion.

[1] Infrastructure Maintenance & Upgrade: Zero VNF Downtime with OPNFV Doctor on OCP Hardware video
[2] Infrastructure Maintenance & Upgrade: Zero VNF Downtime with OPNFV Doctor on OCP Hardware slides
[3] How to gain VNF zero down-time during Infrastructure Maintenance and Upgrade
[4] Fenix project wiki
[5] Doctor design guideline draft

Best Regards, Tomi Juvonen

-------------- next part -------------- An HTML attachment was scrubbed...
URL: From sbauza at redhat.com Tue May 29 11:47:23 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 May 2018 13:47:23 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <1527584511.6381.1@smtp.office365.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> Message-ID: Le mar. 29 mai 2018 à 11:02, Balázs Gibizer a écrit : > > > On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza > wrote: > > > > > > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA > > wrote > > > >> > In that situation, say for example with VGPU inventories, that > >> would mean > >> > that the compute node would stop reporting inventories for its > >> root RP, but > >> > would rather report inventories for at least one single child RP. > >> > In that model, do we reconcile the allocations that were already > >> made > >> > against the "root RP" inventory ? > >> > >> It would be nice to see Eric and Jay comment on this, > >> but if I'm not mistaken, when the virt driver stops reporting > >> inventories for its root RP, placement would try to delete that > >> inventory inside and raise InventoryInUse exception if any > >> allocations still exist on that resource. > >> > >> ``` > >> update_from_provider_tree() (nova/compute/resource_tracker.py) > >> + _set_inventory_for_provider() (nova/scheduler/client/report.py) > >> + put() - PUT /resource_providers//inventories with > >> new inventories (scheduler/client/report.py) > >> + set_inventories() (placement/handler/inventory.py) > >> + _set_inventory() > >> (placement/objects/resource_proveider.py) > >> + _delete_inventory_from_provider() > >> (placement/objects/resource_proveider.py) > >> -> raise exception.InventoryInUse > >> ``` > >> > >> So we need some trick something like deleting VGPU allocations > >> before upgrading and set the allocation again for the created new > >> child after upgrading? > >> > > > > I wonder if we should keep the existing inventory in the root RP, and > > somehow just reserve the left resources (so Placement wouldn't pass > > that root RP for queries, but would still have allocations). But > > then, where and how to do this ? By the resource tracker ? > > > > AFAIK it is the virt driver that decides to model the VGU resource at a > different place in the RP tree so I think it is the responsibility of > the same virt driver to move any existing allocation from the old place > to the new place during this change. > > Cheers, > gibi > Why not instead not move the allocation but rather have the virt driver updating the root RP by modifying the reserved value to the total size? That way, the virt driver wouldn't need to ask for an allocation but rather continue to provide inventories... Thoughts? > > -Sylvain > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Tue May 29 11:59:27 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 29 May 2018 13:59:27 +0200 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: Message-ID: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Mohammed Naser wrote: > During the TC retrospective at the OpenStack summit last week, the > topic of the organizational diversity tag is becoming irrelevant was > brought up by Thierry (ttx)[1]. It seems that for projects that are > not very active, they can easily lose this tag with a few changes by > perhaps the infrastructure team for CI related fixes. > > As an action item, Thierry and I have paired up in order to look into > a way to resolve this issue. There have been ideas to switch this to > a report that is published at the end of the cycle rather than > continuously. Julia (TheJulia) suggested that we change or track > different types of diversity. > > Before we start diving into solutions, I wanted to bring this topic up > to the mailing list and ask for any suggestions. In digging the > codebase behind this[2], I've found that there are some knobs that we > can also tweak if need-be, or perhaps we can adjust those numbers > depending on the number of commits. Right, the issue is that under a given level of team activity, there is a lot of state flapping between single-vendor, no tag, and diverse-affiliation. Some isolated events (someone changing affiliation, a dozen of infra-related changes) end up having a significant impact. My current thinking was that rather than apply a mathematical rule to produce quantitative results every month, we could take the time for a deeper analysis and produce a qualitative report every quarter. Alternatively (if that's too much work), we could add a new team tag (low-activity ?) that would appear for all projects where the activity is so low that the team diversity tags no longer really apply. -- Thierry Carrez (ttx) From balazs.gibizer at ericsson.com Tue May 29 12:21:21 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 29 May 2018 14:21:21 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> Message-ID: <1527596481.3825.0@smtp.office365.com> On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza wrote: > > > Le mar. 29 mai 2018 à 11:02, Balázs Gibizer > a écrit : >> >> >> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza >> wrote: >> > >> > >> > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA >> > wrote >> > >> >> > In that situation, say for example with VGPU inventories, that >> >> would mean >> >> > that the compute node would stop reporting inventories for its >> >> root RP, but >> >> > would rather report inventories for at least one single child >> RP. >> >> > In that model, do we reconcile the allocations that were already >> >> made >> >> > against the "root RP" inventory ? >> >> >> >> It would be nice to see Eric and Jay comment on this, >> >> but if I'm not mistaken, when the virt driver stops reporting >> >> inventories for its root RP, placement would try to delete that >> >> inventory inside and raise InventoryInUse exception if any >> >> allocations still exist on that resource. 
>> >> >> >> ``` >> >> update_from_provider_tree() (nova/compute/resource_tracker.py) >> >> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >> >> + put() - PUT /resource_providers//inventories with >> >> new inventories (scheduler/client/report.py) >> >> + set_inventories() (placement/handler/inventory.py) >> >> + _set_inventory() >> >> (placement/objects/resource_proveider.py) >> >> + _delete_inventory_from_provider() >> >> (placement/objects/resource_proveider.py) >> >> -> raise exception.InventoryInUse >> >> ``` >> >> >> >> So we need some trick something like deleting VGPU allocations >> >> before upgrading and set the allocation again for the created new >> >> child after upgrading? >> >> >> > >> > I wonder if we should keep the existing inventory in the root RP, >> and >> > somehow just reserve the left resources (so Placement wouldn't pass >> > that root RP for queries, but would still have allocations). But >> > then, where and how to do this ? By the resource tracker ? >> > >> >> AFAIK it is the virt driver that decides to model the VGU resource >> at a >> different place in the RP tree so I think it is the responsibility of >> the same virt driver to move any existing allocation from the old >> place >> to the new place during this change. >> >> Cheers, >> gibi > > Why not instead not move the allocation but rather have the virt > driver updating the root RP by modifying the reserved value to the > total size? > > That way, the virt driver wouldn't need to ask for an allocation but > rather continue to provide inventories... > > Thoughts?

Keeping the old allocation at the old RP and adding a similar sized reservation in the new RP feels hackish, as those are not really reserved GPUs but used GPUs, just from the old RP. If somebody sums up the total reported GPUs in this setup via the placement API then she will get more GPUs in total than what is physically visible for the hypervisor, as the GPUs that are part of the old allocation are reported twice in two different total values. Could we just report fewer GPU inventories to the new RP while the old RP still has GPU allocations?

Some alternatives from my jetlagged brain:

a) Implement a move inventory/allocation API in placement. Given a resource class, a source RP uuid and a destination RP uuid, placement moves the inventory and allocations of that resource class from the source RP to the destination RP. Then the virt driver can call this API to move the allocation. This has an impact on the fast forward upgrade as it needs a running virt driver to do the allocation move.

b) For this I assume that live migrating an instance having a GPU allocation on the old RP will allocate GPU for that instance from the new RP. In the virt driver, do not report GPUs to the new RP while there are allocations for such GPUs in the old RP. Let the deployer live migrate away the instances. When the virt driver detects that there are no more GPU allocations on the old RP it can delete the inventory from the old RP and report it to the new RP.
Cheers, gibi > >> >> > -Sylvain >> > >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mnaser at vexxhost.com Tue May 29 12:51:16 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 29 May 2018 08:51:16 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Message-ID: On Tue, May 29, 2018 at 7:59 AM, Thierry Carrez wrote: > Mohammed Naser wrote: >> >> During the TC retrospective at the OpenStack summit last week, the >> topic of the organizational diversity tag is becoming irrelevant was >> brought up by Thierry (ttx)[1]. It seems that for projects that are >> not very active, they can easily lose this tag with a few changes by >> perhaps the infrastructure team for CI related fixes. >> >> As an action item, Thierry and I have paired up in order to look into >> a way to resolve this issue. There have been ideas to switch this to >> a report that is published at the end of the cycle rather than >> continuously. Julia (TheJulia) suggested that we change or track >> different types of diversity. >> >> Before we start diving into solutions, I wanted to bring this topic up >> to the mailing list and ask for any suggestions. In digging the >> codebase behind this[2], I've found that there are some knobs that we >> can also tweak if need-be, or perhaps we can adjust those numbers >> depending on the number of commits. > > > Right, the issue is that under a given level of team activity, there is a > lot of state flapping between single-vendor, no tag, and > diverse-affiliation. Some isolated events (someone changing affiliation, a > dozen of infra-related changes) end up having a significant impact. > > My current thinking was that rather than apply a mathematical rule to > produce quantitative results every month, we could take the time for a > deeper analysis and produce a qualitative report every quarter. I like this idea, however... > Alternatively (if that's too much work), we could add a new team tag > (low-activity ?) that would appear for all projects where the activity is so > low that the team diversity tags no longer really apply. I think as a first step, it would be better to look into adding a low-activity team that so that anything under X number of commits would fall under that tag. I personally lean towards this because it'll be a useful indication for consumers of deliverables of these projects, because I think low activity is just as important as diversity/single-vendor driven projects. The only thing I have in mind is the possible 'feeling' for projects which are very stable, quiet and functioning to end up with low-activity tag, giving an impression that they are unmaintained. I think in general most associate low activity = unmaintained.. but I can't come up with any better options either. 
> -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sbauza at redhat.com Tue May 29 13:12:12 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 29 May 2018 15:12:12 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <1527596481.3825.0@smtp.office365.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> Message-ID: On Tue, May 29, 2018 at 2:21 PM, Balázs Gibizer wrote: > > > On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza wrote: > >> >> >> Le mar. 29 mai 2018 à 11:02, Balázs Gibizer >> a écrit : >> >>> >>> >>> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza >>> wrote: >>> > >>> > >>> > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA >>> > wrote >>> > >>> >> > In that situation, say for example with VGPU inventories, that >>> >> would mean >>> >> > that the compute node would stop reporting inventories for its >>> >> root RP, but >>> >> > would rather report inventories for at least one single child RP. >>> >> > In that model, do we reconcile the allocations that were already >>> >> made >>> >> > against the "root RP" inventory ? >>> >> >>> >> It would be nice to see Eric and Jay comment on this, >>> >> but if I'm not mistaken, when the virt driver stops reporting >>> >> inventories for its root RP, placement would try to delete that >>> >> inventory inside and raise InventoryInUse exception if any >>> >> allocations still exist on that resource. >>> >> >>> >> ``` >>> >> update_from_provider_tree() (nova/compute/resource_tracker.py) >>> >> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >>> >> + put() - PUT /resource_providers//inventories with >>> >> new inventories (scheduler/client/report.py) >>> >> + set_inventories() (placement/handler/inventory.py) >>> >> + _set_inventory() >>> >> (placement/objects/resource_proveider.py) >>> >> + _delete_inventory_from_provider() >>> >> (placement/objects/resource_proveider.py) >>> >> -> raise exception.InventoryInUse >>> >> ``` >>> >> >>> >> So we need some trick something like deleting VGPU allocations >>> >> before upgrading and set the allocation again for the created new >>> >> child after upgrading? >>> >> >>> > >>> > I wonder if we should keep the existing inventory in the root RP, and >>> > somehow just reserve the left resources (so Placement wouldn't pass >>> > that root RP for queries, but would still have allocations). But >>> > then, where and how to do this ? By the resource tracker ? >>> > >>> >>> AFAIK it is the virt driver that decides to model the VGU resource at a >>> different place in the RP tree so I think it is the responsibility of >>> the same virt driver to move any existing allocation from the old place >>> to the new place during this change. >>> >>> Cheers, >>> gibi >>> >> >> Why not instead not move the allocation but rather have the virt driver >> updating the root RP by modifying the reserved value to the total size? >> >> That way, the virt driver wouldn't need to ask for an allocation but >> rather continue to provide inventories... >> >> Thoughts? 
>> > > Keeping the old allocaton at the old RP and adding a similar sized > reservation in the new RP feels hackis as those are not really reserved > GPUs but used GPUs just from the old RP. If somebody sums up the total > reported GPUs in this setup via the placement API then she will get more > GPUs in total that what is physically visible for the hypervisor as the > GPUs part of the old allocation reported twice in two different total > value. Could we just report less GPU inventories to the new RP until the > old RP has GPU allocations? > > We could keep the old inventory in the root RP for the previous vGPU type already supported in Queens and just add other inventories for other vGPU types now supported. That looks possibly the simpliest option as the virt driver knows that. > Some alternatives from my jetlagged brain: > > a) Implement a move inventory/allocation API in placement. Given a > resource class and a source RP uuid and a destination RP uuid placement > moves the inventory and allocations of that resource class from the source > RP to the destination RP. Then the virt drive can call this API to move the > allocation. This has an impact on the fast forward upgrade as it needs > running virt driver to do the allocation move. > > Instead of having the virt driver doing that (TBH, I don't like that given both Xen and libvirt drivers have the same problem), we could write a nova-manage upgrade call for that that would call the Placement API, sure. > b) For this I assume that live migrating an instance having a GPU > allocation on the old RP will allocate GPU for that instance from the new > RP. In the virt driver do not report GPUs to the new RP while there is > allocation for such GPUs in the old RP. Let the deployer live migrate away > the instances. When the virt driver detects that there is no more GPU > allocations on the old RP it can delete the inventory from the old RP and > report it to the new RP. > > For the moment, vGPUs don't support live migration, even within QEMU. I haven't checked that, but IIUC when you live-migrate an instance that have vGPUs, it will just migrate it without recreating the vGPUs. Now, the problem is with the VGPU allocation, we should delete it then. Maybe a new bug report ? > c) For this I assume that there is no support for live migration of an > instance having a GPU. If there is GPU allocation in the old RP then virt > driver does not report GPU inventory to the new RP just creates the new > nested RPs. Provide a placement-manage command to do the inventory + > allocation copy from the old RP to the new RP. > > what's the difference with the first alternative ? Anyway, looks like it's pretty simple to just keep the inventory for the already existing vGPU type in the root RP, and just add nested RPs for other vGPU types. Oh, and btw. 
we could possibly have the same problem when we implement the NUMA spec that I need to rework https://review.openstack.org/#/c/552924/ -Sylvain > Cheers, > gibi > > > >> >>> > -Sylvain >>> > >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue May 29 13:55:11 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 29 May 2018 09:55:11 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) Message-ID: During the Forum, the topic of review culture came up in session after session. During these discussions, the subject of our use of nitpicks were often raised as a point of contention and frustration, especially by community members that have left the community and that were attempting to re-engage the community. Contributors raised the point of review feedback requiring for extremely precise English, or compliance to a particular core reviewer's style preferences, which may not be the same as another core reviewer. These things are not just frustrating, but also very inhibiting for part time contributors such as students who may also be time limited. Or an operator who noticed something that was clearly a bug and that put forth a very minor fix and doesn't have the time to revise it over and over. While nitpicks do help guide and teach, the consensus seemed to be that we do need to shift the culture a little bit. As such, I've proposed a change to our principles[1] in governance that attempts to capture the essence and spirit of the nitpicking topic as a first step. -Julia --------- [1]: https://review.openstack.org/570940 From mriedemos at gmail.com Tue May 29 13:57:37 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 29 May 2018 08:57:37 -0500 Subject: [openstack-dev] [nova] nova aggregate and nova placement api aggregate In-Reply-To: References: Message-ID: <5896b630-6602-72f5-c441-a08f63d518f7@gmail.com> On 5/25/2018 12:56 AM, Matt Riedemann wrote: > On 5/24/2018 8:33 PM, Jeffrey Zhang wrote: >> Recently, i am trying to implement a function which aggregate nova >> hypervisors >> rather than nova compute host. But seems nova only aggregate >> nova-compute host. >> >> On the other hand, since Ocata, nova depends on placement api which >> supports >> aggregating resource providers. But nova-scheduler doesn't use this >> feature >> now. >> >> So  is there any better way to solve such issue? and is there any plan >> which >> make nova legacy aggregate and placement api aggregate cloud work >> together? > > There are some new features in Rocky [1] that involve resource provider > aggregates for compute nodes which can be used for scheduling and will > actually allow you to remove some older filters > (AggregateMultiTenancyIsolation and AvailabilityZoneFilter). CERN is > using these to improve performance with their cells v2 deployment. 
Oops, I forgot to include the link to the reference [1]. [1] https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#aggregates-in-placement -- Thanks, Matt From mriedemos at gmail.com Tue May 29 14:02:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 29 May 2018 09:02:58 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Need some feedback on the proposed heal_allocations CLI In-Reply-To: References: Message-ID: <97fc2ff4-ef97-4c36-0f89-3ef8d9c874fb@gmail.com> On 5/28/2018 7:31 AM, Sylvain Bauza wrote: > That said, given I'm now working on using Nested Resource Providers for > VGPU inventories, I wonder about a possible upgrade problem with VGPU > allocations. Given that: >  - in Queens, VGPU inventories are for the root RP (ie. the compute > node RP), but, >  - in Rocky, VGPU inventories will be for children RPs (ie. against a > specific VGPU type), then > > if we have VGPU allocations in Queens, when upgrading to Rocky, we > should maybe recreate the allocations to a specific other inventory? For how the heal_allocations CLI works today, if the instance has any allocations in placement, it skips that instance. So this scenario wouldn't be a problem. > > Hope you see the problem with upgrading by creating nested RPs? Yes, the CLI doesn't attempt to have any knowledge about nested resource providers, it just takes the flavor embedded in the instance and creates allocations against the compute node provider using the flavor. It has no explicit knowledge about granular request groups or more advanced features like that. -- Thanks, Matt From alifshit at redhat.com Tue May 29 14:43:15 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 29 May 2018 10:43:15 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: I dunno, there's a fine line to be drawn between getting a finished product that looks unprofessional (because of typos, English mistakes, etc), and nitpicking to the point of smothering and being counter-productive. One idea would be that, once the meat of the patch has passed multiple rounds of reviews and looks good, and what remains is only nits, the reviewer themselves take on the responsibility of pushing a new patch that fixes the nits that they found. On Tue, May 29, 2018 at 9:55 AM, Julia Kreger wrote: > During the Forum, the topic of review culture came up in session after > session. During these discussions, the subject of our use of nitpicks > were often raised as a point of contention and frustration, especially > by community members that have left the community and that were > attempting to re-engage the community. Contributors raised the point > of review feedback requiring for extremely precise English, or > compliance to a particular core reviewer's style preferences, which > may not be the same as another core reviewer. > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > > While nitpicks do help guide and teach, the consensus seemed to be > that we do need to shift the culture a little bit. As such, I've > proposed a change to our principles[1] in governance that attempts to > capture the essence and spirit of the nitpicking topic as a first > step.
> > -Julia > --------- > [1]: https://review.openstack.org/570940 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From mnaser at vexxhost.com Tue May 29 14:52:04 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 29 May 2018 10:52:04 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: > I dunno, there's a fine line to be drawn between getting a finished > product that looks unprofessional (because of typos, English mistakes, > etc), and nitpicking to the point of smothering and being > counter-productive. One idea would be that, once the meat of the patch > has passed multiple rounds of reviews and looks good, and what remains > is only nits, the reviewer themselves take on the responsibility of > pushing a new patch that fixes the nits that they found. I'd just like to point out that what you perceive as a 'finished product that looks unprofessional' might be already hard enough for a contributor to achieve. We have a lot of new contributors coming from all over the world and it is very discouraging for them to have their technical knowledge and work be categorized as 'unprofessional' because of the language barrier. git-nit and a few minutes of your time will go a long way, IMHO. > On Tue, May 29, 2018 at 9:55 AM, Julia Kreger > wrote: >> During the Forum, the topic of review culture came up in session after >> session. During these discussions, the subject of our use of nitpicks >> were often raised as a point of contention and frustration, especially >> by community members that have left the community and that were >> attempting to re-engage the community. Contributors raised the point >> of review feedback requiring for extremely precise English, or >> compliance to a particular core reviewer's style preferences, which >> may not be the same as another core reviewer. >> >> These things are not just frustrating, but also very inhibiting for >> part time contributors such as students who may also be time limited. >> Or an operator who noticed something that was clearly a bug and that >> put forth a very minor fix and doesn't have the time to revise it over >> and over. >> >> While nitpicks do help guide and teach, the consensus seemed to be >> that we do need to shift the culture a little bit. As such, I've >> proposed a change to our principles[1] in governance that attempts to >> capture the essence and spirit of the nitpicking topic as a first >> step. 
>> >> -Julia >> --------- >> [1]: https://review.openstack.org/570940 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > -- > Artom Lifshitz > Software Engineer, OpenStack Compute DFG > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Tue May 29 14:53:03 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 29 May 2018 10:53:03 -0400 Subject: [openstack-dev] [TC] Terms of service for hosted projects Message-ID: We allow various open source projects that are not an official part of OpenStack or necessarily used by OpenStack to be hosted on OpenStack infrastructure - previously under the 'StackForge' branding, but now without separate branding. Do we document anywhere the terms of service under which we offer such hosting? It is my understanding that the infra team will enforce the following conditions when a repo import request is received: * The repo must be licensed under an OSI-approved open source license. * If the repo is a fork of another project, there must be (public) evidence of an attempt to co-ordinate with the upstream first. Neither of those appears to be documented (specifically, https://governance.openstack.org/tc/reference/licensing.html only specifies licensing requirements for official projects, libraries imported by official projects, and software used by the Infra team). In addition, I think we should require projects hosted on our infrastructure to agree to other policies: * Adhere to the OpenStack Foundation Code of Conduct. * Not misrepresent their relationship to the official OpenStack project or the Foundation. Ideally we'd come up with language that they *can* use to describe their status, such as "hosted on the OpenStack infrastructure". If we don't have a place where this kind of thing is documented already, I'll submit a review adding one. Does anybody have any ideas about a process for ensuring that projects have read and agreed to the terms when we add them? cheers, Zane. From doug at doughellmann.com Tue May 29 15:31:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 11:31:03 -0400 Subject: [openstack-dev] [Release-job-failures][tripleo] Release of openstack/tripleo-validations failed In-Reply-To: References: Message-ID: <1527607775-sup-6727@lrrr.local> Excerpts from zuul's message of 2018-05-29 14:28:57 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/ : POST_FAILURE in 6m 34s > - announce-release announce-release : SKIPPED > - propose-update-constraints propose-update-constraints : SKIPPED > There appears to be an issue with the tripleo-validations README.rst file. It's likely this is new validation being done by PyPI, so rather than worrying about which change broke things, I suggest just working out how to fix it and move on. http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/job-output.txt.gz#_2018-05-29_14_28_35_963702
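For anyone wanting to reproduce the failure locally before pushing a fix: readme_renderer is the library the new PyPI (Warehouse) uses to render long descriptions, so a minimal sketch along these lines, assuming the package is installed, should flag the same problem:

    import sys

    from readme_renderer.rst import render

    with open("README.rst") as f:
        source = f.read()

    # render() returns None on failure and writes the errors to the stream.
    if render(source, stream=sys.stderr) is None:
        sys.exit("README.rst would be rejected by PyPI")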
Doug From amy at demarco.com Tue May 29 15:35:09 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 29 May 2018 08:35:09 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: If I have a nit that doesn't affect things, I'll make a note of it and say if you do another patch I'd really like it fixed but also give the patch a vote. What I'll also do sometimes if I know the user or they are online I'll offer to fix things for them, that way they can see what I've done, I've sped things along and I haven't caused a simple change to take a long amount of time and reviews. I think this is a great addition! Thanks, Amy (spotz) On Tue, May 29, 2018 at 6:55 AM, Julia Kreger wrote: > During the Forum, the topic of review culture came up in session after > session. During these discussions, the subject of our use of nitpicks > were often raised as a point of contention and frustration, especially > by community members that have left the community and that were > attempting to re-engage the community. Contributors raised the point > of review feedback requiring for extremely precise English, or > compliance to a particular core reviewer's style preferences, which > may not be the same as another core reviewer. > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > > While nitpicks do help guide and teach, the consensus seemed to be > that we do need to shift the culture a little bit. As such, I've > proposed a change to our principles[1] in governance that attempts to > capture the essence and spirit of the nitpicking topic as a first > step. > > -Julia > --------- > [1]: https://review.openstack.org/570940 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue May 29 16:00:03 2018 From: neil at tigera.io (Neil Jerram) Date: Tue, 29 May 2018 17:00:03 +0100 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: From my point of view as someone who is still just an occasional contributor (in all OpenStack projects other than my own team's networking driver), and so I think still sensitive to the concerns being raised here: - Nits are not actually a problem, at all, if they are uncontroversial and quick to deal with. For example, if it's a point of English, and most English speakers would agree that a correction is better, it's quick and no problem for me to make that correction. - What is much more of a problem is: - Anything that is more a matter of opinion. If a markup is just the reviewer's personal opinion, and they can't say anything to explain more objectively why their suggestion is better, it would be wiser to defer to the contributor's initial choice. - Questioning something unconstructively or out of proportion to the change being made.
This is a tricky one to pin down, but sometimes I've had comments that raise some random left-field question that isn't really related to the change being made, or where the reviewer could have done a couple minutes research themselves and then either made a more precise comment, or not made their comment at all. - Asking - implicitly or explicitly - the contributor to add more cleanups to their change. If someone usefully fixes a problem, and their fix does not of itself impair the quality or maintainability of the surrounding code, they should not be asked to extend their fix so as to fix further problems that a more regular developer may be aware of in that area, or to advance a refactoring / cleanup that another developer has in mind. (At least, not as part of that initial change.) (Obviously the common thread of those problem points is taking up more time; psychologically I think one of the things that can turn a contributor away is the feeling that they've contributed a clearly useful thing, yet the community is stalling over accepting it for reasons that do not appear clearcut.) Hoping this is vaguely helpful... Neil On Tue, May 29, 2018 at 4:35 PM Amy Marrich wrote: > If I have a nit that doesn't affect things, I'll make a note of it and say > if you do another patch I'd really like it fixed but also give the patch a > vote. What I'll also do sometimes if I know the user or they are online > I'll offer to fix things for them, that way they can see what I've done, > I've sped things along and I haven't caused a simple change to take a long > amount of time and reviews. > > I think this is a great addition! > > Thanks, > > Amy (spotz) > > On Tue, May 29, 2018 at 6:55 AM, Julia Kreger > wrote: > >> During the Forum, the topic of review culture came up in session after >> session. During these discussions, the subject of our use of nitpicks >> were often raised as a point of contention and frustration, especially >> by community members that have left the community and that were >> attempting to re-engage the community. Contributors raised the point >> of review feedback requiring for extremely precise English, or >> compliance to a particular core reviewer's style preferences, which >> may not be the same as another core reviewer. >> >> These things are not just frustrating, but also very inhibiting for >> part time contributors such as students who may also be time limited. >> Or an operator who noticed something that was clearly a bug and that >> put forth a very minor fix and doesn't have the time to revise it over >> and over. >> >> While nitpicks do help guide and teach, the consensus seemed to be >> that we do need to shift the culture a little bit. As such, I've >> proposed a change to our principles[1] in governance that attempts to >> capture the essence and spirit of the nitpicking topic as a first >> step. 
>> >> -Julia >> --------- >> [1]: https://review.openstack.org/570940 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Tue May 29 16:04:35 2018 From: aschultz at redhat.com (Alex Schultz) Date: Tue, 29 May 2018 10:04:35 -0600 Subject: [openstack-dev] [Release-job-failures][tripleo] Release of openstack/tripleo-validations failed In-Reply-To: <1527607775-sup-6727@lrrr.local> References: <1527607775-sup-6727@lrrr.local> Message-ID: On Tue, May 29, 2018 at 9:31 AM, Doug Hellmann wrote: > Excerpts from zuul's message of 2018-05-29 14:28:57 +0000: >> Build failed. >> >> - release-openstack-python http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/ : POST_FAILURE in 6m 34s >> - announce-release announce-release : SKIPPED >> - propose-update-constraints propose-update-constraints : SKIPPED >> > > There appears to be an issue with the tripleo-validations README.rst > file. It's likely this is new validation being done by PyPI, so rather > than worrying about which change broke things, I suggest just working out > how to fix it and move on. > > http://logs.openstack.org/26/26956a27b95550e2162243da79d62bb1b19d50d7/release/release-openstack-python/2bd7f7d/job-output.txt.gz#_2018-05-29_14_28_35_963702 > https://bugs.launchpad.net/tripleo/+bug/1774001 I've proposed a fix to shuffle around the readme to clean this up. https://review.openstack.org/570954 Thanks, -Alex > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mgagne at calavera.ca Tue May 29 16:41:49 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Tue, 29 May 2018 09:41:49 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: Hi Julia, Thanks for the follow up on this topic. On Tue, May 29, 2018 at 6:55 AM, Julia Kreger wrote: > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > What I found frustrating is receiving *only* nitpicks, addressing them to only receive more nitpicks (sometimes from the same reviewer) with no substantial review on the change itself afterward. I wouldn't mind addressing nitpicks if more substantial reviews were made in a timely fashion. -- Mathieu From jon at csail.mit.edu Tue May 29 16:41:51 2018 From: jon at csail.mit.edu (Jonathan D. 
Proulx) Date: Tue, 29 May 2018 12:41:51 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <20180529150920.GA21733@csail.mit.edu> On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote: :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: :> One idea would be that, once the meat of the patch :> has passed multiple rounds of reviews and looks good, and what remains :> is only nits, the reviewer themselves take on the responsibility of :> pushing a new patch that fixes the nits that they found. Doesn't the above suggestion sufficiently address the concern below? :I'd just like to point out that what you perceive as a 'finished :product that looks unprofessional' might be already hard enough for a :contributor to achieve. We have a lot of new contributors coming from :all over the world and it is very discouraging for them to have their :technical knowledge and work be categorized as 'unprofessional' :because of the language barrier. : :git-nit and a few minutes of your time will go a long way, IMHO. As very intermittent contributor and native english speaker with relatively poor spelling and typing I'd be much happier with a reviewer pushing a patch that fixes nits rather than having a ton of inline comments that point them out. maybe we're all saying the same thing here? -JOn From harlowja at fastmail.com Tue May 29 16:48:32 2018 From: harlowja at fastmail.com (Joshua Harlow) Date: Tue, 29 May 2018 09:48:32 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180529150920.GA21733@csail.mit.edu> References: <20180529150920.GA21733@csail.mit.edu> Message-ID: <5B0D8460.9090206@fastmail.com> Jonathan D. Proulx wrote: > On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote: > > :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: > :> One idea would be that, once the meat of the patch > :> has passed multiple rounds of reviews and looks good, and what remains > :> is only nits, the reviewer themselves take on the responsibility of > :> pushing a new patch that fixes the nits that they found. > > Doesn't the above suggestion sufficiently address the concern below? > > :I'd just like to point out that what you perceive as a 'finished > :product that looks unprofessional' might be already hard enough for a > :contributor to achieve. We have a lot of new contributors coming from > :all over the world and it is very discouraging for them to have their > :technical knowledge and work be categorized as 'unprofessional' > :because of the language barrier. > : > :git-nit and a few minutes of your time will go a long way, IMHO. > > As very intermittent contributor and native english speaker with > relatively poor spelling and typing I'd be much happier with a > reviewer pushing a patch that fixes nits rather than having a ton of > inline comments that point them out. > > maybe we're all saying the same thing here? 
https://sep.yimg.com/ay/computergear/i-write-code-computer-t-shirt-13.gif I am the same ;) > > -JOn > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From davanum at gmail.com Tue May 29 17:05:53 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 29 May 2018 10:05:53 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: Thanks for driving this Julia. +1. It's time to do this. -- Dims On Tue, May 29, 2018 at 6:55 AM, Julia Kreger wrote: > During the Forum, the topic of review culture came up in session after > session. During these discussions, the subject of our use of nitpicks > were often raised as a point of contention and frustration, especially > by community members that have left the community and that were > attempting to re-engage the community. Contributors raised the point > of review feedback requiring for extremely precise English, or > compliance to a particular core reviewer's style preferences, which > may not be the same as another core reviewer. > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > > While nitpicks do help guide and teach, the consensus seemed to be > that we do need to shift the culture a little bit. As such, I've > proposed a change to our principles[1] in governance that attempts to > capture the essence and spirit of the nitpicking topic as a first > step. > > -Julia > --------- > [1]: https://review.openstack.org/570940 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From doug at doughellmann.com Tue May 29 17:17:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 13:17:50 -0400 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Message-ID: <1527614177-sup-1244@lrrr.local> Excerpts from Mohammed Naser's message of 2018-05-29 08:51:16 -0400: > On Tue, May 29, 2018 at 7:59 AM, Thierry Carrez wrote: > > Mohammed Naser wrote: > >> > >> During the TC retrospective at the OpenStack summit last week, the > >> topic of the organizational diversity tag is becoming irrelevant was > >> brought up by Thierry (ttx)[1]. It seems that for projects that are > >> not very active, they can easily lose this tag with a few changes by > >> perhaps the infrastructure team for CI related fixes. > >> > >> As an action item, Thierry and I have paired up in order to look into > >> a way to resolve this issue. There have been ideas to switch this to > >> a report that is published at the end of the cycle rather than > >> continuously. Julia (TheJulia) suggested that we change or track > >> different types of diversity. 
> >> > >> Before we start diving into solutions, I wanted to bring this topic up > >> to the mailing list and ask for any suggestions. In digging the > >> codebase behind this[2], I've found that there are some knobs that we > >> can also tweak if need-be, or perhaps we can adjust those numbers > >> depending on the number of commits. > > > > > > Right, the issue is that under a given level of team activity, there is a > > lot of state flapping between single-vendor, no tag, and > > diverse-affiliation. Some isolated events (someone changing affiliation, a > > dozen infra-related changes) end up having a significant impact. > > > > My current thinking was that rather than apply a mathematical rule to > > produce quantitative results every month, we could take the time for a > > deeper analysis and produce a qualitative report every quarter. > > I like this idea, however... > > > Alternatively (if that's too much work), we could add a new team tag > > (low-activity?) that would appear for all projects where the activity is so > > low that the team diversity tags no longer really apply. > > I think as a first step, it would be better to look into adding a > low-activity tag so that anything under X number of commits > would fall under that tag. I personally lean towards this because > it'll be a useful indication for consumers of deliverables of these > projects, because I think low activity is just as important as > diversity/single-vendor driven projects. > > The only thing I have in mind is the possible 'feeling' for projects > which are very stable, quiet and functioning to end up with a > low-activity tag, giving an impression that they are unmaintained. I > think in general most associate low activity = unmaintained... but I > can't come up with any better options either. We have the status:maintenance-mode tag[3] today. How would a new "low-activity" tag be differentiated from the existing one? [3] https://governance.openstack.org/tc/reference/tags/status_maintenance-mode.html > > > -- > > Thierry Carrez (ttx) > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Tue May 29 17:37:24 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 17:37:24 +0000 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: References: Message-ID: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: > We allow various open source projects that are not an official > part of OpenStack or necessarily used by OpenStack to be hosted on > OpenStack infrastructure - previously under the 'StackForge' > branding, but now without separate branding. Do we document > anywhere the terms of service under which we offer such hosting? We do so minimally here: https://docs.openstack.org/infra/system-config/unofficial_project_hosting.html It's linked from this section of the Project Creator’s Guide in the Infra Manual: https://docs.openstack.org/infra/manual/creators.html#decide-status-of-your-project But yes, we should probably add some clarity to that document and see about making sure it's linked more prominently.
We also maintain some guidelines for reviewers of changes to the openstack-infra/project-config repository, which has a bit to say about new repository creation changes: https://git.openstack.org/cgit/openstack-infra/project-config/tree/REVIEWING.rst > It is my understanding that the infra team will enforce the > following conditions when a repo import request is received: > > * The repo must be licensed under an OSI-approved open source > license. That has been our custom, but we should add a statement to this effect in the aforementioned document. > * If the repo is a fork of another project, there must be (public) > evidence of an attempt to co-ordinate with the upstream first. I don't recall this ever being mandated, though the project-config reviewers do often provide suggestions to project creators such as places in the existing community with which they might consider cooperating/collaborating. > Neither of those appears to be documented (specifically, > https://governance.openstack.org/tc/reference/licensing.html only > specifies licensing requirements for official projects, libraries > imported by official projects, and software used by the Infra > team). The Infrastructure team has been granted a fair amount of autonomy to determine its operating guidelines, and future plans to separate project hosting further from the OpenStack name (in an attempt to make it more clear that hosting your project in the infrastructure is not an endorsement by OpenStack and doesn't make it "part of OpenStack") make the OpenStack TC governance site a particularly poor choice of venue to document such things. > In addition, I think we should require projects hosted on our > infrastructure to agree to other policies: > > * Adhere to the OpenStack Foundation Code of Conduct. This seems like a reasonable addition to our hosting requirements. > * Not misrepresent their relationship to the official OpenStack > project or the Foundation. Ideally we'd come up with language that > they *can* use to describe their status, such as "hosted on the > OpenStack infrastructure". Also a great suggestion. We sort of say that in the "what being an unoffocial project is not" bullet list, but it could use some fleshing out. > If we don't have place where this kind of thing is documented > already, I'll submit a review adding one. Does anybody have any > ideas about a process for ensuring that projects have read and > agreed to the terms when we add them? Adding process forcing active confirmation of such rules seems like a lot of unnecessary overhead/red tape/bureaucracy. As it stands, we're working to get rid of active agreement to the ICLA in favor of simply asserting the DCO in commit messages, so I'm not a fan of adding some new agreement people have to directly acknowledge along with associated automation and policing. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jungleboyj at gmail.com Tue May 29 18:29:02 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 29 May 2018 13:29:02 -0500 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: Julia, Thank you for starting this discussion. 
On 5/29/2018 9:43 AM, Artom Lifshitz wrote: > I dunno, there's a fine line to be drawn between getting a finished > product that looks unprofessional (because of typos, English mistakes, > etc), and nitpicking to the point of smothering and being > counter-productive. One idea would be that, once the meat of the patch > has passed multiple rounds of reviews and looks good, and what remains > is only nits, the reviewer themselves take on the responsibility of > pushing a new patch that fixes the nits that they found. In the past this is something that I have wanted to do but have received mixed feedback on its level of appropriateness.  I am happy to push follow-up patches to address nit-picks rather than hold up a patch.  We, however, will need to communicate to the community that this is now an acceptable practice. > On Tue, May 29, 2018 at 9:55 AM, Julia Kreger > wrote: >> During the Forum, the topic of review culture came up in session after >> session. During these discussions, the subject of our use of nitpicks >> were often raised as a point of contention and frustration, especially >> by community members that have left the community and that were >> attempting to re-engage the community. Contributors raised the point >> of review feedback requiring for extremely precise English, or >> compliance to a particular core reviewer's style preferences, which >> may not be the same as another core reviewer. >> >> These things are not just frustrating, but also very inhibiting for >> part time contributors such as students who may also be time limited. >> Or an operator who noticed something that was clearly a bug and that >> put forth a very minor fix and doesn't have the time to revise it over >> and over. >> >> While nitpicks do help guide and teach, the consensus seemed to be >> that we do need to shift the culture a little bit. As such, I've >> proposed a change to our principles[1] in governance that attempts to >> capture the essence and spirit of the nitpicking topic as a first >> step. >> >> -Julia >> --------- >> [1]: https://review.openstack.org/570940 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From alifshit at redhat.com Tue May 29 19:06:48 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Tue, 29 May 2018 15:06:48 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180529150920.GA21733@csail.mit.edu> References: <20180529150920.GA21733@csail.mit.edu> Message-ID: > On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote: > > :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: > :> One idea would be that, once the meat of the patch > :> has passed multiple rounds of reviews and looks good, and what remains > :> is only nits, the reviewer themselves take on the responsibility of > :> pushing a new patch that fixes the nits that they found. > > Doesn't the above suggestion sufficiently address the concern below? > > :I'd just like to point out that what you perceive as a 'finished > :product that looks unprofessional' might be already hard enough for a > :contributor to achieve. 
We have a lot of new contributors coming from > :all over the world and it is very discouraging for them to have their > :technical knowledge and work be categorized as 'unprofessional' > :because of the language barrier. > : > :git-nit and a few minutes of your time will go a long way, IMHO. > > As very intermittent contributor and native english speaker with > relatively poor spelling and typing I'd be much happier with a > reviewer pushing a patch that fixes nits rather than having a ton of > inline comments that point them out. > > maybe we're all saying the same thing here? Yeah, I feel like we're all essentially in agreement that nits (of the English-mistake or typo type) do need to get fixed, but sometimes (often?) putting the burden of fixing them on the original patch contributor is neither fair nor constructive. From jungleboyj at gmail.com Tue May 29 19:16:33 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 29 May 2018 14:16:33 -0500 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> Message-ID: <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> On 5/29/2018 2:06 PM, Artom Lifshitz wrote: >> On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote: >> >> :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: >> :> One idea would be that, once the meat of the patch >> :> has passed multiple rounds of reviews and looks good, and what remains >> :> is only nits, the reviewer themselves take on the responsibility of >> :> pushing a new patch that fixes the nits that they found. >> >> Doesn't the above suggestion sufficiently address the concern below? >> >> :I'd just like to point out that what you perceive as a 'finished >> :product that looks unprofessional' might be already hard enough for a >> :contributor to achieve. We have a lot of new contributors coming from >> :all over the world and it is very discouraging for them to have their >> :technical knowledge and work be categorized as 'unprofessional' >> :because of the language barrier. >> : >> :git-nit and a few minutes of your time will go a long way, IMHO. >> >> As very intermittent contributor and native english speaker with >> relatively poor spelling and typing I'd be much happier with a >> reviewer pushing a patch that fixes nits rather than having a ton of >> inline comments that point them out. >> >> maybe we're all saying the same thing here? > Yeah, I feel like we're all essentially in agreement that nits (of the > English-mistake or typo type) do need to get fixed, but sometimes > (often?) putting the burden of fixing them on the original patch > contributor is neither fair nor constructive. I am ok with this statement if we are all in agreement that doing follow-up patches is an acceptable practice.
> __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ijw.ubuntu at cack.org.uk Tue May 29 19:17:12 2018 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Tue, 29 May 2018 12:17:12 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: If your nitpick is a spelling mistake or the need for a comment where you've pretty much typed the text of the comment in the review comment itself, then I have personally found it easiest to use the Gerrit online editor to actually update the patch yourself. There's nothing magical about the original submitter, and no point in wasting your time and theirs to get them to make the change. That said, please be a grown up; if you're changing code or messing up formatting enough for PEP8 to be a concern, it's your responsibility, not the original submitter's, to fix it. Also, do all your fixes in one commit if you don't want to make Zuul cry. -- Ian. On 29 May 2018 at 09:00, Neil Jerram wrote: > From my point of view as someone who is still just an occasional > contributor (in all OpenStack projects other than my own team's networking > driver), and so I think still sensitive to the concerns being raised here: > > - Nits are not actually a problem, at all, if they are uncontroversial and > quick to deal with. For example, if it's a point of English, and most > English speakers would agree that a correction is better, it's quick and no > problem for me to make that correction. > > - What is much more of a problem is: > > - Anything that is more a matter of opinion. If a markup is just the > reviewer's personal opinion, and they can't say anything to explain more > objectively why their suggestion is better, it would be wiser to defer to > the contributor's initial choice. > > - Questioning something unconstructively or out of proportion to the > change being made. This is a tricky one to pin down, but sometimes I've > had comments that raise some random left-field question that isn't really > related to the change being made, or where the reviewer could have done a > couple minutes research themselves and then either made a more precise > comment, or not made their comment at all. > > - Asking - implicitly or explicitly - the contributor to add more > cleanups to their change. If someone usefully fixes a problem, and their > fix does not of itself impair the quality or maintainability of the > surrounding code, they should not be asked to extend their fix so as to fix > further problems that a more regular developer may be aware of in that > area, or to advance a refactoring / cleanup that another developer has in > mind. (At least, not as part of that initial change.) > > (Obviously the common thread of those problem points is taking up more > time; psychologically I think one of the things that can turn a contributor > away is the feeling that they've contributed a clearly useful thing, yet > the community is stalling over accepting it for reasons that do not appear > clearcut.) > > Hoping this is vaguely helpful... > Neil > > > On Tue, May 29, 2018 at 4:35 PM Amy Marrich wrote: > >> If I have a nit that doesn't affect things, I'll make a note of it and >> say if you do another patch I'd really like it fixed but also give the >> patch a vote. 
What I'll also do sometimes if I know the user or they are >> online I'll offer to fix things for them, that way they can see what I've >> done, I've sped things along and I haven't caused a simple change to take a >> long amount of time and reviews. >> >> I think this is a great addition! >> >> Thanks, >> >> Amy (spotz) >> >> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger < >> juliaashleykreger at gmail.com> wrote: >> >>> During the Forum, the topic of review culture came up in session after >>> session. During these discussions, the subject of our use of nitpicks >>> were often raised as a point of contention and frustration, especially >>> by community members that have left the community and that were >>> attempting to re-engage the community. Contributors raised the point >>> of review feedback requiring for extremely precise English, or >>> compliance to a particular core reviewer's style preferences, which >>> may not be the same as another core reviewer. >>> >>> These things are not just frustrating, but also very inhibiting for >>> part time contributors such as students who may also be time limited. >>> Or an operator who noticed something that was clearly a bug and that >>> put forth a very minor fix and doesn't have the time to revise it over >>> and over. >>> >>> While nitpicks do help guide and teach, the consensus seemed to be >>> that we do need to shift the culture a little bit. As such, I've >>> proposed a change to our principles[1] in governance that attempts to >>> capture the essence and spirit of the nitpicking topic as a first >>> step. >>> >>> -Julia >>> --------- >>> [1]: https://review.openstack.org/570940 >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue May 29 19:41:55 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 29 May 2018 15:41:55 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> Message-ID: On Tue, May 29, 2018 at 3:16 PM, Jay S Bryant wrote: > > > On 5/29/2018 2:06 PM, Artom Lifshitz wrote: >> Yeah, I feel like we're all essentially in agreement that nits (of the >> English mistake of typo type) do need to get fixed, but sometimes >> (often?) putting the burden of fixing them on the original patch >> contributor is neither fair nor constructive. > > I am ok with this statement if we are all in agreement that doing follow-up > patches is an acceptable practice. 
It does feel like there is some general agreement. \o/ Putting my Ironic hat on, we've been trying to stress that follow-up patches are totally acceptable and encouraged. Follow-up patches seem to land faster in the grand scheme of things and allow series of patches to move forward in the meantime, which is important when a feature may be spread across 10+ patches. As for editing just prior to approving, we have learned there can be absolutely no delay between that edit being made and the patch being approved to land. In essence, patches would begin to look like only a single core reviewer had approved the change they just edited, even if a second core had approved the prior revision. In my experience, the async nature of waiting for a second core to sign off on your edits incurs additional time for nitpicks to occur and a patch to be blocked. Sadly putting the burden on the person approving changes to land is a bit much as well. I think anyone should be free to propose a follow-up to any patch, at least that is my opinion and why I wrote the principles change as I did. From davanum at gmail.com Tue May 29 19:43:08 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 29 May 2018 12:43:08 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: Agree with Ian here. Also another problem that comes up is: "Why are you touching *MY* review?" (probably coming from the view where stats - and stackalytics leaderboard position - are important). So I guess we ask permission before editing (or) file a follow-up later (or) just tell folks that this is ok to do!! I am hoping that engaging with folks will also solve yet another issue: someone going around filing the same change in a dozen projects (repeatedly!), but that may be wishful thinking. -- Dims On Tue, May 29, 2018 at 12:17 PM, Ian Wells wrote: > If your nitpick is a spelling mistake or the need for a comment where you've > pretty much typed the text of the comment in the review comment itself, then > I have personally found it easiest to use the Gerrit online editor to > actually update the patch yourself. There's nothing magical about the > original submitter, and no point in wasting your time and theirs to get them > to make the change. That said, please be a grown up; if you're changing > code or messing up formatting enough for PEP8 to be a concern, it's your > responsibility, not the original submitter's, to fix it. Also, do all your > fixes in one commit if you don't want to make Zuul cry. > -- > Ian. > > > On 29 May 2018 at 09:00, Neil Jerram wrote: >> >> From my point of view as someone who is still just an occasional >> contributor (in all OpenStack projects other than my own team's networking >> driver), and so I think still sensitive to the concerns being raised here: >> >> - Nits are not actually a problem, at all, if they are uncontroversial and >> quick to deal with. For example, if it's a point of English, and most >> English speakers would agree that a correction is better, it's quick and no >> problem for me to make that correction. >> >> - What is much more of a problem is: >> >> - Anything that is more a matter of opinion. If a markup is just the >> reviewer's personal opinion, and they can't say anything to explain more >> objectively why their suggestion is better, it would be wiser to defer to >> the contributor's initial choice. >> >> - Questioning something unconstructively or out of proportion to the >> change being made.
This is a tricky one to pin down, but sometimes I've had >> comments that raise some random left-field question that isn't really >> related to the change being made, or where the reviewer could have done a >> couple minutes research themselves and then either made a more precise >> comment, or not made their comment at all. >> >> - Asking - implicitly or explicitly - the contributor to add more >> cleanups to their change. If someone usefully fixes a problem, and their >> fix does not of itself impair the quality or maintainability of the >> surrounding code, they should not be asked to extend their fix so as to fix >> further problems that a more regular developer may be aware of in that area, >> or to advance a refactoring / cleanup that another developer has in mind. >> (At least, not as part of that initial change.) >> >> (Obviously the common thread of those problem points is taking up more >> time; psychologically I think one of the things that can turn a contributor >> away is the feeling that they've contributed a clearly useful thing, yet the >> community is stalling over accepting it for reasons that do not appear >> clearcut.) >> >> Hoping this is vaguely helpful... >> Neil >> >> >> On Tue, May 29, 2018 at 4:35 PM Amy Marrich wrote: >>> >>> If I have a nit that doesn't affect things, I'll make a note of it and >>> say if you do another patch I'd really like it fixed but also give the patch >>> a vote. What I'll also do sometimes if I know the user or they are online >>> I'll offer to fix things for them, that way they can see what I've done, >>> I've sped things along and I haven't caused a simple change to take a long >>> amount of time and reviews. >>> >>> I think this is a great addition! >>> >>> Thanks, >>> >>> Amy (spotz) >>> >>> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger >>> wrote: >>>> >>>> During the Forum, the topic of review culture came up in session after >>>> session. During these discussions, the subject of our use of nitpicks >>>> were often raised as a point of contention and frustration, especially >>>> by community members that have left the community and that were >>>> attempting to re-engage the community. Contributors raised the point >>>> of review feedback requiring for extremely precise English, or >>>> compliance to a particular core reviewer's style preferences, which >>>> may not be the same as another core reviewer. >>>> >>>> These things are not just frustrating, but also very inhibiting for >>>> part time contributors such as students who may also be time limited. >>>> Or an operator who noticed something that was clearly a bug and that >>>> put forth a very minor fix and doesn't have the time to revise it over >>>> and over. >>>> >>>> While nitpicks do help guide and teach, the consensus seemed to be >>>> that we do need to shift the culture a little bit. As such, I've >>>> proposed a change to our principles[1] in governance that attempts to >>>> capture the essence and spirit of the nitpicking topic as a first >>>> step. 
>>>> >>>> -Julia >>>> --------- >>>> [1]: https://review.openstack.org/570940 >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From doug at doughellmann.com Tue May 29 19:51:31 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 15:51:31 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <1524689037-sup-783@lrrr.local> References: <1524689037-sup-783@lrrr.local> Message-ID: <1527621695-sup-5274@lrrr.local> Following up on this topic, at the Forum discussion last week (see https://etherpad.openstack.org/p/YVR-python-2-deprecation-timeline) the general plan outlined below was acceptable to most of the folks in the room with a few small changes (included below). Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > It's time to talk about the next steps in our migration from python > 2 to python 3. > > Up to this point we have mostly focused on reaching a state where > we support both versions of the language. We are not quite there > with all projects, as you can see by reviewing the test coverage > status information at > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > Still, we need to press on to the next phase of the migration, which > I have been calling "Python 3 first". This is where we use python > 3 as the default, for everything, and set up the exceptions we need > for anything that still requires python 2. > > To reach that stage, we need to: > > 1. Change the documentation and release notes jobs to use python 3. > (The Oslo team recently completed this, and found that we did > need to make a few small code changes to get them to work.) > 2. Change (or duplicate) all functional test jobs to run under > python 3. > 3. Change the packaging jobs to use python 3. > 4. Update devstack to use 3 by default and require setting a flag to > use 2. (This may trigger other job changes.) Also: - Ensure that devstack configures mod_wsgi (or whatever WSGI service) to use Python 3 when deploying API components. - Test "python version skew" within a service during a rolling upgrade across multiple hosts. - Add an integration test job that does not include python2 on the host at all. That last item may block us from using other tools, such as ansible, that rely on python2. 
If the point of such a test is to ensure that we are properly installing (and running) our tools under python3, maybe *that's* what we want to check, instead of forbidding a python2 package at all? Could we, for example, look at the set of packages installed under python2 and report errors if any OpenStack packages end up there? > > At that point, all of our deliverables will be produced using python > 3, and we can be relatively confident that if we no longer had > access to python 2 we could still continue operating. We could also > start updating deployment tools to use either python 3 or 2, so > that users could actually deploy using the python 3 versions of > services. > > Somewhere in that time frame our third-party CI systems will need > to ensure they have python 3 support as well. > > After the "Python 3 first" phase is completed we should release > one series using the packages built with python 3. Perhaps Stein? > Or is that too ambitious? > > Next, we will be ready to address the prerequisites for "Python 3 > only," which will allow us to drop Python 2 support. > > We need to wait to drop python 2 support as a community, rather > than going one project at a time, to avoid doubling the work of > downstream consumers such as distros and independent deployers. We > don't want them to have to package all (or even a large number) of > the dependencies of OpenStack twice because they have to install > some services running under python 2 and others under 3. Ideally > they would be able to upgrade all of the services on a node together > as part of their transition to the new version, without ending up > with a python 2 version of a dependency along side a python 3 version > of the same package. > > The remaining items could be fixed earlier, but this is the point > at which they would block us: > > 1. Fix oslo.service functional tests -- the Oslo team needs help > maintaining this library. Alternatively, we could move all > services to use cotyledon (https://pypi.org/project/cotyledon/). > > 2. Finish the unit test and functional test ports so that all of > our tests can run under python 3 (this implies that the services > all run under python 3, so there is no more porting to do). > > Finally, after we have *all* tests running on python 3, we can > safely drop python 2. We clarified that we would only drop python 2 support on master. That clarification also raised the point that eventually backports may become more difficult if master is using python 3 features, but Matt and Tony agreed that we could potentially have rebasing issues with fixes today so while this is a new source of such issues it isn't a completely new problem and our stable backport policies already address it. > > We have previously discussed the end of the T cycle as the point > at which we would have all of those tests running, and if that holds > true we could reasonably drop python 2 during the beginning of the > U cycle, in late 2019 and before the 2020 cut-off point when upstream > python 2 support will be dropped. This date as the earliest point at which existing projects could drop python 2 seems to be generally acceptable to everyone. I wrote up a TC resolution as a more formal documentation for it: https://review.openstack.org/571011 I also intend to propose a "python 3 first" goal for Stein, but I consider those two things independent. 
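To sketch the audit idea above: run under the python2 interpreter being checked, a naive pass over pkg_resources could flag OpenStack deliverables that ended up installed for python 2. The prefix list below is purely an illustrative heuristic, not a real registry of our deliverables:

    # Run this with the python2 interpreter being audited.
    import pkg_resources

    # Illustrative heuristic for "looks like an OpenStack deliverable".
    PREFIXES = ("nova", "neutron", "keystone", "glance", "oslo", "os-")

    offenders = sorted(
        dist.project_name for dist in pkg_resources.working_set
        if dist.project_name.lower().startswith(PREFIXES))
    for name in offenders:
        print("OpenStack package installed under python2: %s" % name)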
The last point of significant interest is that we discussed modifying Graham's existing governance change to indicate that new projects do not need to have python 2 support with the caveat that platforms that do not support python 3 fully, yet, are unlikely to package those projects. Graham was going to update the patch, IIRC. > > I need some info from the deployment tool teams to understand whether > they would be ready to take the plunge during T or U and start > deploying only the python 3 version. Are there other upgrade issues > that need to be addressed to support moving from 2 to 3? Something > that might be part of the platform(s), rather than OpenStack itself? > > What else have I missed in these phases? Other jobs? Other blocking > conditions? > > Doug From doug at doughellmann.com Tue May 29 19:53:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 15:53:41 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> Message-ID: <1527623552-sup-8763@lrrr.local> Excerpts from Jay S Bryant's message of 2018-05-29 14:16:33 -0500: > > On 5/29/2018 2:06 PM, Artom Lifshitz wrote: > >> On Tue, May 29, 2018 at 10:52:04AM -0400, Mohammed Naser wrote: > >> > >> :On Tue, May 29, 2018 at 10:43 AM, Artom Lifshitz wrote: > >> :> One idea would be that, once the meat of the patch > >> :> has passed multiple rounds of reviews and looks good, and what remains > >> :> is only nits, the reviewer themselves take on the responsibility of > >> :> pushing a new patch that fixes the nits that they found. > >> > >> Doesn't the above suggestion sufficiently address the concern below? > >> > >> :I'd just like to point out that what you perceive as a 'finished > >> :product that looks unprofessional' might be already hard enough for a > >> :contributor to achieve. We have a lot of new contributors coming from > >> :all over the world and it is very discouraging for them to have their > >> :technical knowledge and work be categorized as 'unprofessional' > >> :because of the language barrier. > >> : > >> :git-nit and a few minutes of your time will go a long way, IMHO. > >> > >> As very intermittent contributor and native english speaker with > >> relatively poor spelling and typing I'd be much happier with a > >> reviewer pushing a patch that fixes nits rather than having a ton of > >> inline comments that point them out. > >> > >> maybe we're all saying the same thing here? > > Yeah, I feel like we're all essentially in agreement that nits (of the > > English mistake of typo type) do need to get fixed, but sometimes > > (often?) putting the burden of fixing them on the original patch > > contributor is neither fair nor constructive. > I am ok with this statement if we are all in agreement that doing > follow-up patches is an acceptable practice. Has it ever not been? It seems like it has always come down to a bit of negotiation with the original author, hasn't it? And that won't change, except that we will be emphasizing to reviewers that we encourage them to be more active in seeking out that negotiation and then proposing patches? 
Doug From doug at doughellmann.com Tue May 29 19:55:41 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 15:55:41 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> Message-ID: <1527623661-sup-8289@lrrr.local> Excerpts from Julia Kreger's message of 2018-05-29 15:41:55 -0400: > On Tue, May 29, 2018 at 3:16 PM, Jay S Bryant wrote: > > > > > > On 5/29/2018 2:06 PM, Artom Lifshitz wrote: > >> Yeah, I feel like we're all essentially in agreement that nits (of the > >> English mistake of typo type) do need to get fixed, but sometimes > >> (often?) putting the burden of fixing them on the original patch > >> contributor is neither fair nor constructive. > > > > I am ok with this statement if we are all in agreement that doing follow-up > > patches is an acceptable practice. > > > It does feel like there is some general agreement. \o/ > > Putting my Ironic hat on, we've been trying to stress that follow-up > patches are totally acceptable and encouraged. Follow-up patches seem > to land faster in the grand scheme of things and allow series of > patches to move forward in the mean time which is important when a > feature may be spread across 10+ patches > > As for editing just prior to approving, we have learned there can be > absolutely no delay between that edit being made and the patch > approved to land. In essence patches would begin to look like only a > single core reviewer had approved the change they just edited even if > the prior revision had a second core approving the prior revision.. In > my experience, the async nature of waiting for a second core to > sign-off on your edits incurs additional time for nitpicks to occur > and a patch to be blocked. > > Sadly putting the burden on the person approving changes to land is a > bit much as well. I think anyone should be free to propose a follow-up > to any patch, at least that is my opinion and why I wrote the > principles change as I did. > +1 to that last bit, for sure. In several conversations about this last week we discussed the impression that we don't often see +1 votes with useful comments. A +1 with a follow-up to fix minor issues seems like something we ought to encourage. Doug From s at cassiba.com Tue May 29 19:56:18 2018 From: s at cassiba.com (Samuel Cassiba) Date: Tue, 29 May 2018 12:56:18 -0700 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Message-ID: On Tue, May 29, 2018 at 5:51 AM, Mohammed Naser wrote: > On Tue, May 29, 2018 at 7:59 AM, Thierry Carrez > wrote: > > Mohammed Naser wrote: > >> > >> During the TC retrospective at the OpenStack summit last week, the > >> topic of the organizational diversity tag is becoming irrelevant was > >> brought up by Thierry (ttx)[1]. It seems that for projects that are > >> not very active, they can easily lose this tag with a few changes by > >> perhaps the infrastructure team for CI related fixes. > >> > >> As an action item, Thierry and I have paired up in order to look into > >> a way to resolve this issue. There have been ideas to switch this to > >> a report that is published at the end of the cycle rather than > >> continuously. Julia (TheJulia) suggested that we change or track > >> different types of diversity. 
> >> > >> Before we start diving into solutions, I wanted to bring this topic up > >> to the mailing list and ask for any suggestions. In digging the > >> codebase behind this[2], I've found that there are some knobs that we > >> can also tweak if need-be, or perhaps we can adjust those numbers > >> depending on the number of commits. > > > > > > Right, the issue is that under a given level of team activity, there is a > > lot of state flapping between single-vendor, no tag, and > > diverse-affiliation. Some isolated events (someone changing affiliation, > a > > dozen of infra-related changes) end up having a significant impact. > > > > My current thinking was that rather than apply a mathematical rule to > > produce quantitative results every month, we could take the time for a > > deeper analysis and produce a qualitative report every quarter. > > I like this idea, however... > > > Alternatively (if that's too much work), we could add a new team tag > > (low-activity ?) that would appear for all projects where the activity > is so > > low that the team diversity tags no longer really apply. > > I think as a first step, it would be better to look into adding a > low-activity team that so that anything under X number of commits > would fall under that tag. I personally lean towards this because > it'll be a useful indication for consumers of deliverables of these > projects, because I think low activity is just as important as > diversity/single-vendor driven projects. > > The only thing I have in mind is the possible 'feeling' for projects > which are very stable, quiet and functioning to end up with > low-activity tag, giving an impression that they are unmaintained. I > think in general most associate low activity = unmaintained.. but I > can't come up with any better options either. > This seems like my cue. It's unfortunate that I could not be in Vancouver last week to discuss this, and I don't want to give the wrong impression, but here goes. Putting my own interests up front: if openstack-chef, a relatively quiet subproject, with a reasonably stable codebase and measurable user base, were to be suddenly be labeled with 'low-activity', then openstack-chef, and I imagine others in a similar situation, surely would be considered as dead as some perceptions have suggested in the past. The wrong perceptions can make open source contributions increasingly more difficult to obtain and maintain over time, making 'low-activity' a self-fulfilling prophecy and not a particularly helpful metric. For the record, openstack-chef has no tags at all, even though we may have at some point qualified for organizational diversity on paper. The problem with any label close to the idea of things declining is that the perception would be more overt than it is if we were to put our collective heads in the sand, unable to come to an accord. Hearken to Glance (a core project!) being barely able to make a release due to rapid developer decline over a cycle. Consider the more recently talked about people-formerly-known-as-docs-team, or the lesser known projects with contributors from a couple of companies struggling to get and maintain exposure, and the ones that lag behind core by a release (hi!) just because it takes that long to get to the next one. Brand any or all of them 'low-activity' with the best of intentions of identifying the ones that need love, and that's more or less signaling their end of life, since 'nobody' wants to touch that janky, unmaintained abandonware with 'no activity'. 
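The arithmetic behind the current tags really is that fragile for quiet projects. With made-up thresholds standing in for the real governance rules, a rough sketch shows how a handful of infra-style commits can flip a team's state:

def diversity_state(commits_by_org):
    # Toy model: a top org at >= 90% of commits reads as 'single-vendor',
    # a top org at <= 50% reads as 'diverse-affiliation'. The real rules
    # in the governance repo use more inputs (reviews, cores, thresholds
    # that differ from these), so treat this purely as an illustration
    # of the flapping problem.
    total = sum(commits_by_org.values())
    top = max(commits_by_org.values())
    share = top / float(total)
    if share >= 0.9:
        return 'single-vendor'
    if share <= 0.5:
        return 'diverse-affiliation'
    return 'untagged'

print(diversity_state({'OrgA': 9, 'OrgB': 1}))              # single-vendor
print(diversity_state({'OrgA': 9, 'OrgB': 1, 'Infra': 4}))  # untagged

Four CI housekeeping patches from a third affiliation and the tag changes, with no change at all in who actually maintains the project.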
The moniker of 'low-activity' does give the very real, negative perception that things are just barely hanging on. It conveys the subconscious, officiated statement (!!!whether or not this was intended!!!) that nobody in their right mind should consider using the subproject, let alone develop on or against it, for fear that it wind up some poor end-user's support nightmare. Having quietly served as PTL for four cycles -- sometimes not as quietly as others -- I've struggled with the notions of contributorship versus maintainership. After this long at it, experience says a bunch of well-intended contributors does not a maintained project make, unless their heads can be in the right place (or wrong, depending on how salty you get by reading this far) to consider it as such. I really wish I had a good label for projects like openstack-chef, but labels can be extremely caustic if misinterpreted, even applied with the best of intentions. Things like 'needs-volunteers' come to mind, but that's still casting things somewhat negatively, more akin to digital panhandling. The end result should be a way of identifying the need for more investment with a more positive inference in the public view, instead of the negative connotations of 'low-activity'. Even 'maintenance-mode' paints negative perceptions. Do YOU want to touch that janky, unmaintained stuff? Neither do I. To back down off my soapbox, the fact that projects are losing the organizational diversity tag seems more a symptom of unwellness in what is being measured, not necessarily irrelevance of the metric. Measuring in terms of throughput and number of contributors is one thing, but the outcome of the measure needs to feed back into better maintainership for the overall health of OpenStack as a collection of open source projects. Some of the destined 'low-activity' projects would do quite well with an extra couple of part-timers if they aren't framed as being on the proverbial junk pile. Best, Samuel Cassiba (scas) > > -- > > Thierry Carrez (ttx) > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.op > enstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jon at csail.mit.edu Tue May 29 20:05:06 2018 From: jon at csail.mit.edu (Jonathan Proulx) Date: Tue, 29 May 2018 16:05:06 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <1527623552-sup-8763@lrrr.local> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> Message-ID: <20180529200506.GD21733@csail.mit.edu> On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: :> >> maybe we're all saying the same thing here? :> > Yeah, I feel like we're all essentially in agreement that nits (of the :> > English mistake of typo type) do need to get fixed, but sometimes :> > (often?) putting the burden of fixing them on the original patch :> > contributor is neither fair nor constructive. 
:> I am ok with this statement if we are all in agreement that doing :> follow-up patches is an acceptable practice. : :Has it ever not been? : :It seems like it has always come down to a bit of negotiation with :the original author, hasn't it? And that won't change, except that :we will be emphasizing to reviewers that we encourage them to be :more active in seeking out that negotiation and then proposing :patches? Exactly, it's more codifying a default. It's not been unacceptable but I think there's some understandable reluctance to make changes to someone else's work, you don't want to seem like your taking over or getting in the way. At least that's what's in my head when deciding should this be a comment or a patch. I think this discussion suggests for certain class of "nits" patch is preferred to comment. If that is true making this explicit is a good thing becuase let's face it my social skills are only marginally better than my speeling :) -Jon From mtreinish at kortar.org Tue May 29 20:16:50 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Tue, 29 May 2018 16:16:50 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <1527621695-sup-5274@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1527621695-sup-5274@lrrr.local> Message-ID: <20180529201650.GA1633@zeong> On Tue, May 29, 2018 at 03:51:31PM -0400, Doug Hellmann wrote: > Following up on this topic, at the Forum discussion last week (see > https://etherpad.openstack.org/p/YVR-python-2-deprecation-timeline) the > general plan outlined below was acceptable to most of the folks in the > room with a few small changes (included below). > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > > It's time to talk about the next steps in our migration from python > > 2 to python 3. > > > > Up to this point we have mostly focused on reaching a state where > > we support both versions of the language. We are not quite there > > with all projects, as you can see by reviewing the test coverage > > status information at > > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > > > Still, we need to press on to the next phase of the migration, which > > I have been calling "Python 3 first". This is where we use python > > 3 as the default, for everything, and set up the exceptions we need > > for anything that still requires python 2. > > > > To reach that stage, we need to: > > > > 1. Change the documentation and release notes jobs to use python 3. > > (The Oslo team recently completed this, and found that we did > > need to make a few small code changes to get them to work.) > > 2. Change (or duplicate) all functional test jobs to run under > > python 3. > > 3. Change the packaging jobs to use python 3. > > 4. Update devstack to use 3 by default and require setting a flag to > > use 2. (This may trigger other job changes.) > > Also: > > - Ensure that devstack configures mod_wsgi (or whatever WSGI service) to > use Python 3 when deploying API components. The python 3 dsvm jobs already do this for the most part. All API services that support running as a wsgi application run under uwsgi with a single apache redirecting traffic to those. This is the supported model for running wsgi services on devstack. Currently keystone, glance, nova, placement, and cinder run their API servers this way. Neutron doesn't run under as wsgi app (I don't recall why this was never implemented for neutron) and swift doesn't run in the py3 jobs at all. 
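As an aside, if anyone wants to sanity-check which interpreter a given API service actually ended up on, a trivial PEP 3333 app pointed at by uwsgi does the trick. This is just a sketch, not anything that exists in devstack:

import sys

def application(environ, start_response):
    # Report the interpreter this WSGI process is running under, so you
    # can curl the endpoint and confirm python 3 is really in use.
    body = ('Serving under Python %s\n' % sys.version).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]

As for what the real services look like in the gate today: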
You can see an example of this here: http://logs.openstack.org/08/550108/3/gate/tempest-full-py3/df744ef/controller/logs/ For other services it depends on how they implemented their devstack plugin. I haven't done an inventory on how all the plugins are running things, so I don't know what the status of each project is there. > - Test "python version skew" within a service during a rolling upgrade > across multiple hosts. > - Add an integration test job that does not include python2 on the host > at all. > > That last item may block us from using other tools, such as ansible, > that rely on python2. If the point of such a test is to ensure that > we are properly installing (and running) our tools under python3, > maybe *that's* what we want to check, instead of forbidding a python2 > package at all? Could we, for example, look at the set of packages > installed under python2 and report errors if any OpenStack packages end > up there? > > > > > At that point, all of our deliverables will be produced using python > > 3, and we can be relatively confident that if we no longer had > > access to python 2 we could still continue operating. We could also > > start updating deployment tools to use either python 3 or 2, so > > that users could actually deploy using the python 3 versions of > > services. > > > > Somewhere in that time frame our third-party CI systems will need > > to ensure they have python 3 support as well. > > > > After the "Python 3 first" phase is completed we should release > > one series using the packages built with python 3. Perhaps Stein? > > Or is that too ambitious? > > > > Next, we will be ready to address the prerequisites for "Python 3 > > only," which will allow us to drop Python 2 support. > > > > We need to wait to drop python 2 support as a community, rather > > than going one project at a time, to avoid doubling the work of > > downstream consumers such as distros and independent deployers. We > > don't want them to have to package all (or even a large number) of > > the dependencies of OpenStack twice because they have to install > > some services running under python 2 and others under 3. Ideally > > they would be able to upgrade all of the services on a node together > > as part of their transition to the new version, without ending up > > with a python 2 version of a dependency along side a python 3 version > > of the same package. > > > > The remaining items could be fixed earlier, but this is the point > > at which they would block us: > > > > 1. Fix oslo.service functional tests -- the Oslo team needs help > > maintaining this library. Alternatively, we could move all > > services to use cotyledon (https://pypi.org/project/cotyledon/). > > > > 2. Finish the unit test and functional test ports so that all of > > our tests can run under python 3 (this implies that the services > > all run under python 3, so there is no more porting to do). > > > > Finally, after we have *all* tests running on python 3, we can > > safely drop python 2. > > We clarified that we would only drop python 2 support on master. > That clarification also raised the point that eventually backports > may become more difficult if master is using python 3 features, but > Matt and Tony agreed that we could potentially have rebasing issues > with fixes today so while this is a new source of such issues it > isn't a completely new problem and our stable backport policies > already address it. 
> > > > > We have previously discussed the end of the T cycle as the point > > at which we would have all of those tests running, and if that holds > > true we could reasonably drop python 2 during the beginning of the > > U cycle, in late 2019 and before the 2020 cut-off point when upstream > > python 2 support will be dropped. > > This date as the earliest point at which existing projects could drop > python 2 seems to be generally acceptable to everyone. I wrote up > a TC resolution as a more formal documentation for it: > > https://review.openstack.org/571011 > > I also intend to propose a "python 3 first" goal for Stein, but I > consider those two things independent. > > The last point of significant interest is that we discussed modifying > Graham's existing governance change to indicate that new projects > do not need to have python 2 support with the caveat that platforms > that do not support python 3 fully, yet, are unlikely to package > those projects. Graham was going to update the patch, IIRC. > > > > > I need some info from the deployment tool teams to understand whether > > they would be ready to take the plunge during T or U and start > > deploying only the python 3 version. Are there other upgrade issues > > that need to be addressed to support moving from 2 to 3? Something > > that might be part of the platform(s), rather than OpenStack itself? > > > > What else have I missed in these phases? Other jobs? Other blocking > > conditions? > > > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From doug at doughellmann.com Tue May 29 20:19:12 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 16:19:12 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180529200506.GD21733@csail.mit.edu> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> Message-ID: <1527624908-sup-8314@lrrr.local> Excerpts from Jonathan Proulx's message of 2018-05-29 16:05:06 -0400: > On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: > :> >> maybe we're all saying the same thing here? > :> > Yeah, I feel like we're all essentially in agreement that nits (of the > :> > English mistake of typo type) do need to get fixed, but sometimes > :> > (often?) putting the burden of fixing them on the original patch > :> > contributor is neither fair nor constructive. > :> I am ok with this statement if we are all in agreement that doing > :> follow-up patches is an acceptable practice. > : > :Has it ever not been? > : > :It seems like it has always come down to a bit of negotiation with > :the original author, hasn't it? And that won't change, except that > :we will be emphasizing to reviewers that we encourage them to be > :more active in seeking out that negotiation and then proposing > :patches? > > Exactly, it's more codifying a default. 
> > It's not been unacceptable but I think there's some understandable > reluctance to make changes to someone else's work, you don't want to > seem like your taking over or getting in the way. At least that's > what's in my head when deciding should this be a comment or a patch. > > I think this discussion suggests for certain class of "nits" patch is > preferred to comment. If that is true making this explicit is a good > thing becuase let's face it my social skills are only marginally > better than my speeling :) > > -Jon > OK, that's all good. I'm just surprised to learn that throwing a follow-up patch on top of someone else's patch was ever seen as discouraged. The spice must flow, Doug From doug at doughellmann.com Tue May 29 20:24:53 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 16:24:53 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <20180529201650.GA1633@zeong> References: <1524689037-sup-783@lrrr.local> <1527621695-sup-5274@lrrr.local> <20180529201650.GA1633@zeong> Message-ID: <1527625436-sup-5776@lrrr.local> Excerpts from Matthew Treinish's message of 2018-05-29 16:16:50 -0400: > On Tue, May 29, 2018 at 03:51:31PM -0400, Doug Hellmann wrote: > > Following up on this topic, at the Forum discussion last week (see > > https://etherpad.openstack.org/p/YVR-python-2-deprecation-timeline) the > > general plan outlined below was acceptable to most of the folks in the > > room with a few small changes (included below). > > > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > > > It's time to talk about the next steps in our migration from python > > > 2 to python 3. > > > > > > Up to this point we have mostly focused on reaching a state where > > > we support both versions of the language. We are not quite there > > > with all projects, as you can see by reviewing the test coverage > > > status information at > > > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > > > > > Still, we need to press on to the next phase of the migration, which > > > I have been calling "Python 3 first". This is where we use python > > > 3 as the default, for everything, and set up the exceptions we need > > > for anything that still requires python 2. > > > > > > To reach that stage, we need to: > > > > > > 1. Change the documentation and release notes jobs to use python 3. > > > (The Oslo team recently completed this, and found that we did > > > need to make a few small code changes to get them to work.) > > > 2. Change (or duplicate) all functional test jobs to run under > > > python 3. > > > 3. Change the packaging jobs to use python 3. > > > 4. Update devstack to use 3 by default and require setting a flag to > > > use 2. (This may trigger other job changes.) > > > > Also: > > > > - Ensure that devstack configures mod_wsgi (or whatever WSGI service) to > > use Python 3 when deploying API components. > > The python 3 dsvm jobs already do this for the most part. All API services that > support running as a wsgi application run under uwsgi with a single apache > redirecting traffic to those. This is the supported model for running wsgi > services on devstack. Currently keystone, glance, nova, placement, and cinder > run their API servers this way. Neutron doesn't run under as wsgi app (I > don't recall why this was never implemented for neutron) and swift doesn't run > in the py3 jobs at all. You can see an example of this here: OK, good, thank you for clarifying that. 
I think the folks in the room weren't 100% sure, so the point was to double check. And it sounds like we still need to do that for projects using devstack plugins, based on your next comment. > > http://logs.openstack.org/08/550108/3/gate/tempest-full-py3/df744ef/controller/logs/ > > For other services it depends on how they implemented their devstack plugin. I > haven't done an inventory on how all the plugins are running things, so I don't > know what the status of each project is there. > > > - Test "python version skew" within a service during a rolling upgrade > > across multiple hosts. > > - Add an integration test job that does not include python2 on the host > > at all. > > > > That last item may block us from using other tools, such as ansible, > > that rely on python2. If the point of such a test is to ensure that > > we are properly installing (and running) our tools under python3, > > maybe *that's* what we want to check, instead of forbidding a python2 > > package at all? Could we, for example, look at the set of packages > > installed under python2 and report errors if any OpenStack packages end > > up there? > > > > > > > > At that point, all of our deliverables will be produced using python > > > 3, and we can be relatively confident that if we no longer had > > > access to python 2 we could still continue operating. We could also > > > start updating deployment tools to use either python 3 or 2, so > > > that users could actually deploy using the python 3 versions of > > > services. > > > > > > Somewhere in that time frame our third-party CI systems will need > > > to ensure they have python 3 support as well. > > > > > > After the "Python 3 first" phase is completed we should release > > > one series using the packages built with python 3. Perhaps Stein? > > > Or is that too ambitious? > > > > > > Next, we will be ready to address the prerequisites for "Python 3 > > > only," which will allow us to drop Python 2 support. > > > > > > We need to wait to drop python 2 support as a community, rather > > > than going one project at a time, to avoid doubling the work of > > > downstream consumers such as distros and independent deployers. We > > > don't want them to have to package all (or even a large number) of > > > the dependencies of OpenStack twice because they have to install > > > some services running under python 2 and others under 3. Ideally > > > they would be able to upgrade all of the services on a node together > > > as part of their transition to the new version, without ending up > > > with a python 2 version of a dependency along side a python 3 version > > > of the same package. > > > > > > The remaining items could be fixed earlier, but this is the point > > > at which they would block us: > > > > > > 1. Fix oslo.service functional tests -- the Oslo team needs help > > > maintaining this library. Alternatively, we could move all > > > services to use cotyledon (https://pypi.org/project/cotyledon/). > > > > > > 2. Finish the unit test and functional test ports so that all of > > > our tests can run under python 3 (this implies that the services > > > all run under python 3, so there is no more porting to do). > > > > > > Finally, after we have *all* tests running on python 3, we can > > > safely drop python 2. > > > > We clarified that we would only drop python 2 support on master. 
> > That clarification also raised the point that eventually backports > > may become more difficult if master is using python 3 features, but > > Matt and Tony agreed that we could potentially have rebasing issues > > with fixes today so while this is a new source of such issues it > > isn't a completely new problem and our stable backport policies > > already address it. > > > > > > > > We have previously discussed the end of the T cycle as the point > > > at which we would have all of those tests running, and if that holds > > > true we could reasonably drop python 2 during the beginning of the > > > U cycle, in late 2019 and before the 2020 cut-off point when upstream > > > python 2 support will be dropped. > > > > This date as the earliest point at which existing projects could drop > > python 2 seems to be generally acceptable to everyone. I wrote up > > a TC resolution as a more formal documentation for it: > > > > https://review.openstack.org/571011 > > > > I also intend to propose a "python 3 first" goal for Stein, but I > > consider those two things independent. > > > > The last point of significant interest is that we discussed modifying > > Graham's existing governance change to indicate that new projects > > do not need to have python 2 support with the caveat that platforms > > that do not support python 3 fully, yet, are unlikely to package > > those projects. Graham was going to update the patch, IIRC. > > > > > > > > I need some info from the deployment tool teams to understand whether > > > they would be ready to take the plunge during T or U and start > > > deploying only the python 3 version. Are there other upgrade issues > > > that need to be addressed to support moving from 2 to 3? Something > > > that might be part of the platform(s), rather than OpenStack itself? > > > > > > What else have I missed in these phases? Other jobs? Other blocking > > > conditions? > > > > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jungleboyj at gmail.com Tue May 29 20:25:01 2018 From: jungleboyj at gmail.com (Jay S Bryant) Date: Tue, 29 May 2018 15:25:01 -0500 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <1527624908-sup-8314@lrrr.local> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> Message-ID: <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> On 5/29/2018 3:19 PM, Doug Hellmann wrote: > Excerpts from Jonathan Proulx's message of 2018-05-29 16:05:06 -0400: >> On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: >> :> >> maybe we're all saying the same thing here? >> :> > Yeah, I feel like we're all essentially in agreement that nits (of the >> :> > English mistake of typo type) do need to get fixed, but sometimes >> :> > (often?) putting the burden of fixing them on the original patch >> :> > contributor is neither fair nor constructive. >> :> I am ok with this statement if we are all in agreement that doing >> :> follow-up patches is an acceptable practice. >> : >> :Has it ever not been? >> : >> :It seems like it has always come down to a bit of negotiation with >> :the original author, hasn't it? 
And that won't change, except that >> :we will be emphasizing to reviewers that we encourage them to be >> :more active in seeking out that negotiation and then proposing >> :patches? >> >> Exactly, it's more codifying a default. >> >> It's not been unacceptable but I think there's some understandable >> reluctance to make changes to someone else's work, you don't want to >> seem like your taking over or getting in the way. At least that's >> what's in my head when deciding should this be a comment or a patch. >> >> I think this discussion suggests for certain class of "nits" patch is >> preferred to comment. If that is true making this explicit is a good >> thing becuase let's face it my social skills are only marginally >> better than my speeling :) >> >> -Jon >> > OK, that's all good. I'm just surprised to learn that throwing a > follow-up patch on top of someone else's patch was ever seen as > discouraged. > > The spice must flow, > Doug Maybe it would be different now that I am a Core/PTL but in the past I had been warned to be careful as it could be misinterpreted if I was changing other people's patches or that it could look like I was trying to pad my numbers. (I am a nit-picker though I do my best not to be. I am happy if people understand I am just trying to keep the process moving and keep the read/flow of Cinder consistent.  :-) Jay > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From skaplons at redhat.com Tue May 29 20:49:07 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 29 May 2018 22:49:07 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> Message-ID: <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> Hi, > Wiadomość napisana przez Jay S Bryant w dniu 29.05.2018, o godz. 22:25: > > > On 5/29/2018 3:19 PM, Doug Hellmann wrote: >> Excerpts from Jonathan Proulx's message of 2018-05-29 16:05:06 -0400: >>> On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: >>> :> >> maybe we're all saying the same thing here? >>> :> > Yeah, I feel like we're all essentially in agreement that nits (of the >>> :> > English mistake of typo type) do need to get fixed, but sometimes >>> :> > (often?) putting the burden of fixing them on the original patch >>> :> > contributor is neither fair nor constructive. >>> :> I am ok with this statement if we are all in agreement that doing >>> :> follow-up patches is an acceptable practice. >>> : >>> :Has it ever not been? >>> : >>> :It seems like it has always come down to a bit of negotiation with >>> :the original author, hasn't it? And that won't change, except that >>> :we will be emphasizing to reviewers that we encourage them to be >>> :more active in seeking out that negotiation and then proposing >>> :patches? >>> >>> Exactly, it's more codifying a default. >>> >>> It's not been unacceptable but I think there's some understandable >>> reluctance to make changes to someone else's work, you don't want to >>> seem like your taking over or getting in the way. 
At least that's >>> what's in my head when deciding should this be a comment or a patch. >>> >>> I think this discussion suggests for certain class of "nits" patch is >>> preferred to comment. If that is true making this explicit is a good >>> thing becuase let's face it my social skills are only marginally >>> better than my speeling :) >>> >>> -Jon >>> >> OK, that's all good. I'm just surprised to learn that throwing a >> follow-up patch on top of someone else's patch was ever seen as >> discouraged. >> >> The spice must flow, >> Doug > > Maybe it would be different now that I am a Core/PTL but in the past I had been warned to be careful as it could be misinterpreted if I was changing other people's patches or that it could look like I was trying to pad my numbers. (I am a nit-picker though I do my best not to be. Exactly. I remember when I was doing my first patch (or one of first patches) and someone pushed new PS with some very small nits fixed. I was a bit confused because of that and I was thinking why he did it instead of me? Now it’s of course much more clear for me but for someone who is new contributor I think that this might be confusing. Maybe such person should at least remember to explain in comment why he pushed new PS and that’s not „stealing” work of original author :) > > I am happy if people understand I am just trying to keep the process moving and keep the read/flow of Cinder consistent. :-) > > Jay > >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Tue May 29 21:22:15 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 21:22:15 +0000 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <1527621695-sup-5274@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1527621695-sup-5274@lrrr.local> Message-ID: <20180529212215.s5bmo6f2w32sqlvu@yuggoth.org> On 2018-05-29 15:51:31 -0400 (-0400), Doug Hellmann wrote: [...] > Could we, for example, look at the set of packages installed under > python2 and report errors if any OpenStack packages end up there? [...] This sounds like a marvellous solution. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue May 29 21:26:16 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 17:26:16 -0400 Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky Message-ID: <1527628983-sup-2281@lrrr.local> At the forum last week the TC held a retrospective to discuss how we felt things have gone for the TC over the last 6-ish months, since the previous election. I will try to summarize the feedback here based on the notes in the etherpad [1], but please reply if I misremember something, leave out details, or otherwise give an incomplete view of what we talked about. 
We approached the retro using the 3 questions (what went well, what didn't go well, what can we change) but in this summary I'm going to focus on each topic, because some ended up in multiple sections. We all agreed that having some new members on the TC is healthy. Graham, Mohammed, and Zane have all already started taking a very active role. The work Thierry has done to engage with the kubernetes community leadership through in-person meetings at some of their conferences has been received well by participants from both groups. The recent event in Copenhagen was more lightly attended than the one in Austin in December, in part because of the increased demands on the time from the participants as their events grow larger. We, of course, also have other outreach activities from Dims, hogepoge, and others who are working directly on integration projects, but since this was a TC retrospective we were talking specifically about TC-led engagement between the communities. Our efforts at communicating outside of office hours through discussion digests and the #openstack-tc channel seem to be going well. There was some discussion of whether the office hours themselves are useful, based on the apparent lack of participation. We had theories that this was a combination of bad times (meaning that TC members haven't always attended) and bad platforms (meaning that some parts of the community we are trying to reach may emphasize other tools over IRC for real time chat). We need to look into that. Thierry brought up the point that the diversity tags may be less relevant as they are currently defined, especially because some projects with very low activity may end up bouncing back and forth between "diverse" and not with a very small change in participation. Mohammed and Thierry are going to work on addressing that. See the thread starting with http://lists.openstack.org/pipermail/openstack-dev/2018-May/130776.html for details. There was some discussion of how well the interaction with the Foundation staff and Board is working. Opinions were mixed, but generally still positive. I will be working with Alan to increase engagement with the Board and with Thierry and other Foundation staff on that relationship. I encourage all of you to join the Board meetings when it is possible. It looks like we will have another in-person joint leadership meeting before the PTG in Denver, so start thinking of things we might want to place on the agenda for discussion. Keep in mind that given the large size of the group, pre-seeding discussions with a mailing list thread can help frame things for everyone, just like with design summit sessions. Chris also brought up a concern about a decline in the electorate voting for TC members. I don't remember having enough details in the room to confirm this as an overall trend, but there was some discussion of the most recent numbers seeming low in an unhealthy way. We need to talk about this further. Jeremy also brought up a lack of geographic diversity among the candidates who won the elections. That continues to be a challenge, and the cause isn't obvious to me. I hope that past (and future) candidates will get involved with the TC regardless of the outcome of the election, since we do want their input and being active is one of the best ways to raise your profile for an election. Colleen mentioned that our goal selection was "contentious" and based on the discussion of Stein goals and the goal process in general I think we have some good feedback about how to improve that. 
I'm sure Sean will be posting about that session separately. Chris brought up a concern about whether we have much traction on "doing stuff" and especially "getting things done that not everyone wants," Graham noted a lack of "visible impact," and Zane mentioned the TC vision in particular. Based on conversations last week, I am currently tracking a list of 20+ things the TC is working on. I will add the public ones to the wiki page this week as I catch up with my notes (remember, sometimes these things involve disputes that can be more smoothly handled one-on-one, so not everything that is going on is necessarily going to have its own email thread announcing it). As far as the vision, although we aren't "done" with those items, I do think we’re doing better than may be obvious, in part because we haven’t reviewed it recently. Thierry, Emilien, and Jeremy volunteered to review our progress. It would be good to do that before the PTG, and to especially consider how we can talk more about the impact of the changes we have made as part of that work. Related to the discussion of being effective, I proposed that we team up on each task we agree to take on. By having at least 2 TC members looking at each item, we’ll have better coverage when someone is on vacation, etc. Ideally we will not always pair up with the same folks, so we also have more of an opportunity to work together. Related to the idea of being more active, I proposed that we more actively engage with PTLs and project teams to try to spot issues before they fester. Last week we started addressing at least 2 such issues that only came to light in other conversations, which tells me we can't sit back and wait for PTLs to ask for help. I will work on a plan for dividing up responsibility for checking with teams. Zane proposed introducing some sort of cadence to making progress. Perhaps by having teams of 2 or more focusing on different areas, we can set up a regular reporting schedule to trigger that? Every 2 weeks? Again, this is my personal attempt to summarize what I remember from the discussion based on the notes. Please don't hesitate to add to the thread. Doug [1] https://etherpad.openstack.org/p/YVR-tc-retrospective From fungi at yuggoth.org Tue May 29 21:31:22 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 21:31:22 +0000 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Message-ID: <20180529213121.k5rlrk2q4pc6uhbw@yuggoth.org> On 2018-05-29 13:59:27 +0200 (+0200), Thierry Carrez wrote: [...] > Alternatively (if that's too much work), we could add a new team > tag (low-activity ?) that would appear for all projects where the > activity is so low that the team diversity tags no longer really > apply. As others have also said, this seems like a potentially useful metric on its own anyway. We could simply avoid including low-activity tagged teams in diversity reporting and not associate any diversity tags with them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From melwittt at gmail.com Tue May 29 21:38:28 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 29 May 2018 14:38:28 -0700 Subject: [openstack-dev] [nova] review runway status Message-ID: <51b825e2-98dd-9dfe-9bcb-f3691d82ba14@gmail.com> Hi everybody, This is just a brief status about the blueprints currently occupying review runways [0] and an ask for the nova-core team to give these reviews priority for their code review focus. Note that these 3 blueprints were in runways during summit week with end dates of 2018-05-28 and 2018-05-30. Because of significantly reduced review attention during the summit as core team members were in attendance and busy, we have extended the end date for these blueprints by one week to EOD next Tuesday 2018-06-05. * PowerVM Driver https://blueprints.launchpad.net/nova/+spec/powervm-vscsi (esberglu) [END DATE: 2018-06-05] vSCSI Cinder Volume Driver: https://review.openstack.org/526094 * Granular Placement Policy https://blueprints.launchpad.net/nova/+spec/granular-placement-policy (mriedem) [END DATE: 2018-06-05] https://review.openstack.org/#/q/topic:bp/granular-placement-policy+status:open * vGPU work in rocky https://blueprints.launchpad.net/nova/+spec/vgpu-rocky (naichuans) [END DATE: 2018-06-05] series starting at https://review.openstack.org/520313 Best, -melanie [0] https://etherpad.openstack.org/p/nova-runways-rocky From fungi at yuggoth.org Tue May 29 21:43:25 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 21:43:25 +0000 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: <1527614177-sup-1244@lrrr.local> References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> <1527614177-sup-1244@lrrr.local> Message-ID: <20180529214325.2scxi6od6o7o6ss4@yuggoth.org> On 2018-05-29 13:17:50 -0400 (-0400), Doug Hellmann wrote: [...] > We have the status:maintenance-mode tag[3] today. How would a new > "low-activity" tag be differentiated from the existing one? [...] status:maintenance-mode is (as it says on the tin) a subjective indicator that a team has entered a transient period of reduced activity. By contrast, a low-activity tag (maybe it should be something more innocuous like low-churn?) would be an objective indicator that attempts to make contributor diversity assertions are doomed to fail the statistical significance test. We could consider overloading status:maintenance-mode for this purpose, but some teams perhaps simply don't have large amounts of code change ever and that's just a normal effect of how they operate. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From doug at doughellmann.com Tue May 29 21:45:21 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 29 May 2018 17:45:21 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> Message-ID: <1527630233-sup-3514@lrrr.local> Excerpts from Slawomir Kaplonski's message of 2018-05-29 22:49:07 +0200: > Hi, > > > Wiadomość napisana przez Jay S Bryant w dniu 29.05.2018, o godz. 22:25: > > > > > > On 5/29/2018 3:19 PM, Doug Hellmann wrote: > >> Excerpts from Jonathan Proulx's message of 2018-05-29 16:05:06 -0400: > >>> On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: > >>> :> >> maybe we're all saying the same thing here? > >>> :> > Yeah, I feel like we're all essentially in agreement that nits (of the > >>> :> > English mistake of typo type) do need to get fixed, but sometimes > >>> :> > (often?) putting the burden of fixing them on the original patch > >>> :> > contributor is neither fair nor constructive. > >>> :> I am ok with this statement if we are all in agreement that doing > >>> :> follow-up patches is an acceptable practice. > >>> : > >>> :Has it ever not been? > >>> : > >>> :It seems like it has always come down to a bit of negotiation with > >>> :the original author, hasn't it? And that won't change, except that > >>> :we will be emphasizing to reviewers that we encourage them to be > >>> :more active in seeking out that negotiation and then proposing > >>> :patches? > >>> > >>> Exactly, it's more codifying a default. > >>> > >>> It's not been unacceptable but I think there's some understandable > >>> reluctance to make changes to someone else's work, you don't want to > >>> seem like your taking over or getting in the way. At least that's > >>> what's in my head when deciding should this be a comment or a patch. > >>> > >>> I think this discussion suggests for certain class of "nits" patch is > >>> preferred to comment. If that is true making this explicit is a good > >>> thing becuase let's face it my social skills are only marginally > >>> better than my speeling :) > >>> > >>> -Jon > >>> > >> OK, that's all good. I'm just surprised to learn that throwing a > >> follow-up patch on top of someone else's patch was ever seen as > >> discouraged. > >> > >> The spice must flow, > >> Doug > > > > Maybe it would be different now that I am a Core/PTL but in the past I had been warned to be careful as it could be misinterpreted if I was changing other people's patches or that it could look like I was trying to pad my numbers. (I am a nit-picker though I do my best not to be. > > Exactly. I remember when I was doing my first patch (or one of first patches) and someone pushed new PS with some very small nits fixed. I was a bit confused because of that and I was thinking why he did it instead of me? > Now it’s of course much more clear for me but for someone who is new contributor I think that this might be confusing. 
Maybe such person should at least remember to explain in comment why he pushed new PS and that’s not „stealing” work of original author :) I guess it never occurred to me that someone would do that without also leaving a comment explaining the situation. Doug > > > > > I am happy if people understand I am just trying to keep the process moving and keep the read/flow of Cinder consistent. :-) > > > > Jay > > > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Slawek Kaplonski > Senior software engineer > Red Hat > From fungi at yuggoth.org Tue May 29 21:53:35 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 21:53:35 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> Message-ID: <20180529215335.adq3ftmiagvvmyfn@yuggoth.org> On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote: [...] > Maybe it would be different now that I am a Core/PTL but in the past I had > been warned to be careful as it could be misinterpreted if I was changing > other people's patches or that it could look like I was trying to pad my > numbers. (I am a nit-picker though I do my best not to be. [...] Most stats tracking goes by the Gerrit "Owner" metadata or the Git "Author" field, neither of which are modified in a typical new patchset workflow and so carry over from the original patchset #1 (resetting Author requires creating a new commit from scratch or passing extra options to git to reset it, while changing the Owner needs a completely new Change-Id footer). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue May 29 22:57:45 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 29 May 2018 22:57:45 +0000 Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky In-Reply-To: <1527628983-sup-2281@lrrr.local> References: <1527628983-sup-2281@lrrr.local> Message-ID: <20180529225745.pkgavspefqpu4nah@yuggoth.org> On 2018-05-29 17:26:16 -0400 (-0400), Doug Hellmann wrote: [...] > There was some discussion of whether the office hours themselves > are useful, based on the apparent lack of participation. We had > theories that this was a combination of bad times (meaning that TC > members haven't always attended) and bad platforms (meaning that > some parts of the community we are trying to reach may emphasize > other tools over IRC for real time chat). We need to look into that. [...] We also had some consensus in the room on starting to use meetbot during office hours to highlight whatever we discussed in the resulting meeting minutes. 
I'm planning to do that at our 01:00z office hour (a couple hours from now) and see how it goes, though that timeslot in particular tends to have little or no discussion. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ijw.ubuntu at cack.org.uk Tue May 29 23:26:46 2018 From: ijw.ubuntu at cack.org.uk (Ian Wells) Date: Tue, 29 May 2018 16:26:46 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180529215335.adq3ftmiagvvmyfn@yuggoth.org> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <20180529215335.adq3ftmiagvvmyfn@yuggoth.org> Message-ID: On 29 May 2018 at 14:53, Jeremy Stanley wrote: > On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote: > [...] > > Maybe it would be different now that I am a Core/PTL but in the past I > had > > been warned to be careful as it could be misinterpreted if I was changing > > other people's patches or that it could look like I was trying to pad my > > numbers. (I am a nit-picker though I do my best not to be. > [...] > > Most stats tracking goes by the Gerrit "Owner" metadata or the Git > "Author" field, neither of which are modified in a typical new > patchset workflow and so carry over from the original patchset #1 > (resetting Author requires creating a new commit from scratch or > passing extra options to git to reset it, while changing the Owner > needs a completely new Change-Id footer). > We know this, but other people don't, so the comment is wise. Also, arguably, if I badly fix someone else's patch, I'm making them look bad by leaving them with the 'credit' for my bad work, so it's important to be careful and tactful. But the history is public record, at least. -- Ian. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sundar.nadathur at intel.com Tue May 29 23:33:59 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Tue, 29 May 2018 16:33:59 -0700 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits Message-ID: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> Hi all,    The Cyborg/Nova scheduling spec [1] details what traits will be applied to the resource providers that represent devices like GPUs. Some of the traits referred to vendor names. I got feedback that traits must not refer to products or specific models of devices. I agree. However, we need some reference to device types to enable matching the VM driver with the device. TL;DR We need some reference to device types, but we don't need product names. I will update the spec [1] to clarify that. Rest of this email clarifies why we need device types in traits, and what traits we propose to include. In general, an accelerator device is operated by two pieces of software: a driver in the kernel (which may discover and handle the PF for SR-IOV  devices), and a driver/library in the guest (which may handle the assigned VF). The device assigned to the VM must match the driver/library packaged in the VM. For this, the request must explicitly state what category of devices it needs. For example, if the VM needs a GPU, it needs to say whether it needs an AMD GPU or an Nvidia GPU, since it may have the driver/libraries for that vendor alone. 
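As a rough illustration of what "stating it explicitly" could look like on the request side, using the resources/trait flavor extra-spec syntax Nova already understands (the flavor name and the GPU trait below are hypothetical, in the style of the traits proposed later in this mail):

```
openstack flavor set my-accel-flavor \
  --property resources:CUSTOM_ACCELERATOR_GPU=1 \
  --property trait:CUSTOM_GPU_NVIDIA=required
```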
It may also need to state what version of CUDA is needed, if it is an Nvidia GPU. These aspects are necessarily vendor-specific. Further, one driver/library version may handle multiple devices. Since a new driver version may be backwards compatible, multiple driver versions may manage the same device. The development/release of the driver/library inside the VM should be independent of the kernel driver for that device. For FPGAs, there is an additional twist as the VM may need specific bitstream(s), and they match only specific device/region types. The bitstream for a device from a vendor will not fit any other device from the same vendor, let alone other vendors. IOW, the region type is specific not just to a vendor but to a device type within the vendor. So, it is essential to identify the device type. So, the proposed set of RCs and traits is as below. As we learn more about actual usages by operators, we may need to evolve this set.

* There is a resource class per device category e.g. CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
* The resource provider that represents a device has the following traits:
   o Vendor/Category trait: e.g. CUSTOM_GPU_AMD, CUSTOM_FPGA_XILINX.
   o Device type trait which is a refinement of vendor/category trait e.g. CUSTOM_FPGA_XILINX_VU9P.
     NOTE: This is not a product or model, at least for FPGAs. Multiple products may use the same FPGA chip.
     NOTE: The reason for having both the vendor/category and this one is that a flavor may ask for either, depending on the granularity desired. IOW, if one driver can handle all devices from a vendor (*eye roll*), the flavor can ask for the vendor/category trait alone. If there are separate drivers for different device families from the same vendor, the flavor must specify the trait for the device family.
     NOTE: The equivalent trait for GPUs may be like CUSTOM_GPU_NVIDIA_P90, but I'll let others decide if that is a product or not.
   o For FPGAs, we have additional traits:
     + Functionality trait: e.g. CUSTOM_FPGA_COMPUTE, CUSTOM_FPGA_NETWORK, CUSTOM_FPGA_STORAGE
     + Region type ID, e.g. CUSTOM_FPGA_INTEL_REGION_<region_type_uuid>.
     + Optionally, a function ID, indicating what function is currently programmed in the region RP, e.g. CUSTOM_FPGA_INTEL_FUNCTION_<function_uuid>. Not all implementations may provide it. The function trait may change on reprogramming, but it is not expected to be frequent.
     + Possibly, CUSTOM_PROGRAMMABLE as a separate trait.

[1] https://review.openstack.org/#/c/554717/ Thanks. Regards, Sundar -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue May 29 23:42:45 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 29 May 2018 19:42:45 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> Message-ID: <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> On 29/05/18 16:49, Slawomir Kaplonski wrote: > Hi, > >> Message written by Jay S Bryant on 29.05.2018, at
22:25: >> Maybe it would be different now that I am a Core/PTL but in the past I had been warned to be careful as it could be misinterpreted if I was changing other people's patches or that it could look like I was trying to pad my numbers. (I am a nit-picker though I do my best not to be. > > Exactly. I remember when I was doing my first patch (or one of first patches) and someone pushed new PS with some very small nits fixed. I was a bit confused because of that and I was thinking why he did it instead of me? > Now it’s of course much more clear for me but for someone who is new contributor I think that this might be confusing. Maybe such person should at least remember to explain in comment why he pushed new PS and that’s not „stealing” work of original author :) Another issue is that if the original author needs to rev the patch again for any reason, they then need to figure out how to check out the modified patch. This requires a fairly sophisticated knowledge of both git and gerrit, which isn't a problem for those of us who have been using them for years but is potentially a nightmarish introduction for a relatively new contributor. Sometimes it's the right choice though (especially if the patch owner hasn't been seen for a while). A follow-up patch is a good alternative, unless of course it conflicts with another patch in the series. +1 with a comment can also get you a long way - it indicates that you've reviewed the whole patch and have found only nits to quibble with. If you're a core reviewer, another core could potentially +2/+A on a subsequent patch set with the nits addressed if they felt it appropriate, and even if they don't you'll have an easy re-review when you follow up. We have lots of tools in our toolbox that are less blunt than -1. Let's save -1 for when major work is required and/or the patch as written would actually break something. Since I am replying to this thread, Julia also mentioned the situation where two core reviewers are asking for opposite changes to a patch. It is never ever ever the contributor's responsibility to resolve a dispute between two core reviewers! If you see a core reviewer's advice on a patch and you want to give the opposite advice, by all means take it up immediately - with *the other core reviewer*. NOT the submitter. Preferably on IRC and not in the review. You work together every day, you can figure it out! A random contributor has no chance of parachuting into the middle of that dynamic and walking out unscathed, and they should never be asked to. cheers, Zane. From ben at swartzlander.org Wed May 30 00:21:31 2018 From: ben at swartzlander.org (Ben Swartzlander) Date: Tue, 29 May 2018 20:21:31 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <679b1f1d-5847-ce24-3eeb-3144be996aaf@swartzlander.org> On 05/29/2018 03:43 PM, Davanum Srinivas wrote: > Agree with Ian here. > > Also another problem that comes up is: "Why are you touching *MY* > review?" (probably coming from the view where stats - and stackalytics > leaderboard position is important). So i guess we ask permission > before editing (or) file a follow up later (or) just tell folks that > this is ok to do!! I think Stackalytics is evil and should be killed with fire. It encourages all kinds of pathological behavior, this being one prime example. Having worked as a core reviewer, I find zero value from the project. We know who is contributing code and who is doing reviews without some robot to tell us. 
-Ben > Hoping engaging with them will solve yet another issue is someone > going around filing the same change in a dozen projects (repeatedly!), > but that may be wishful thinking. > > -- Dims > > On Tue, May 29, 2018 at 12:17 PM, Ian Wells wrote: >> If your nitpick is a spelling mistake or the need for a comment where you've >> pretty much typed the text of the comment in the review comment itself, then >> I have personally found it easiest to use the Gerrit online editor to >> actually update the patch yourself. There's nothing magical about the >> original submitter, and no point in wasting your time and theirs to get them >> to make the change. That said, please be a grown up; if you're changing >> code or messing up formatting enough for PEP8 to be a concern, it's your >> responsibility, not the original submitter's, to fix it. Also, do all your >> fixes in one commit if you don't want to make Zuul cry. >> -- >> Ian. >> >> >> On 29 May 2018 at 09:00, Neil Jerram wrote: >>> >>> From my point of view as someone who is still just an occasional >>> contributor (in all OpenStack projects other than my own team's networking >>> driver), and so I think still sensitive to the concerns being raised here: >>> >>> - Nits are not actually a problem, at all, if they are uncontroversial and >>> quick to deal with. For example, if it's a point of English, and most >>> English speakers would agree that a correction is better, it's quick and no >>> problem for me to make that correction. >>> >>> - What is much more of a problem is: >>> >>> - Anything that is more a matter of opinion. If a markup is just the >>> reviewer's personal opinion, and they can't say anything to explain more >>> objectively why their suggestion is better, it would be wiser to defer to >>> the contributor's initial choice. >>> >>> - Questioning something unconstructively or out of proportion to the >>> change being made. This is a tricky one to pin down, but sometimes I've had >>> comments that raise some random left-field question that isn't really >>> related to the change being made, or where the reviewer could have done a >>> couple minutes research themselves and then either made a more precise >>> comment, or not made their comment at all. >>> >>> - Asking - implicitly or explicitly - the contributor to add more >>> cleanups to their change. If someone usefully fixes a problem, and their >>> fix does not of itself impair the quality or maintainability of the >>> surrounding code, they should not be asked to extend their fix so as to fix >>> further problems that a more regular developer may be aware of in that area, >>> or to advance a refactoring / cleanup that another developer has in mind. >>> (At least, not as part of that initial change.) >>> >>> (Obviously the common thread of those problem points is taking up more >>> time; psychologically I think one of the things that can turn a contributor >>> away is the feeling that they've contributed a clearly useful thing, yet the >>> community is stalling over accepting it for reasons that do not appear >>> clearcut.) >>> >>> Hoping this is vaguely helpful... >>> Neil >>> >>> >>> On Tue, May 29, 2018 at 4:35 PM Amy Marrich wrote: >>>> >>>> If I have a nit that doesn't affect things, I'll make a note of it and >>>> say if you do another patch I'd really like it fixed but also give the patch >>>> a vote. 
What I'll also do sometimes if I know the user or they are online >>>> I'll offer to fix things for them, that way they can see what I've done, >>>> I've sped things along and I haven't caused a simple change to take a long >>>> amount of time and reviews. >>>> >>>> I think this is a great addition! >>>> >>>> Thanks, >>>> >>>> Amy (spotz) >>>> >>>> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger >>>> wrote: >>>>> >>>>> During the Forum, the topic of review culture came up in session after >>>>> session. During these discussions, the subject of our use of nitpicks >>>>> were often raised as a point of contention and frustration, especially >>>>> by community members that have left the community and that were >>>>> attempting to re-engage the community. Contributors raised the point >>>>> of review feedback requiring for extremely precise English, or >>>>> compliance to a particular core reviewer's style preferences, which >>>>> may not be the same as another core reviewer. >>>>> >>>>> These things are not just frustrating, but also very inhibiting for >>>>> part time contributors such as students who may also be time limited. >>>>> Or an operator who noticed something that was clearly a bug and that >>>>> put forth a very minor fix and doesn't have the time to revise it over >>>>> and over. >>>>> >>>>> While nitpicks do help guide and teach, the consensus seemed to be >>>>> that we do need to shift the culture a little bit. As such, I've >>>>> proposed a change to our principles[1] in governance that attempts to >>>>> capture the essence and spirit of the nitpicking topic as a first >>>>> step. >>>>> >>>>> -Julia >>>>> --------- >>>>> [1]: https://review.openstack.org/570940 >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > From s at cassiba.com Wed May 30 01:29:43 2018 From: s at cassiba.com (Samuel Cassiba) Date: Tue, 29 May 2018 18:29:43 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <20180529215335.adq3ftmiagvvmyfn@yuggoth.org> Message-ID: On Tue, May 29, 2018 at 4:26 PM, Ian Wells wrote: > On 29 May 2018 at 14:53, 
Jeremy Stanley wrote: > >> On 2018-05-29 15:25:01 -0500 (-0500), Jay S Bryant wrote: >> [...] >> > Maybe it would be different now that I am a Core/PTL but in the past I >> had >> > been warned to be careful as it could be misinterpreted if I was >> changing >> > other people's patches or that it could look like I was trying to pad my >> > numbers. (I am a nit-picker though I do my best not to be. >> [...] >> >> Most stats tracking goes by the Gerrit "Owner" metadata or the Git >> "Author" field, neither of which are modified in a typical new >> patchset workflow and so carry over from the original patchset #1 >> (resetting Author requires creating a new commit from scratch or >> passing extra options to git to reset it, while changing the Owner >> needs a completely new Change-Id footer). >> > > We know this, but other people don't, so the comment is wise. Also, > arguably, if I badly fix someone else's patch, I'm making them look bad by > leaving them with the 'credit' for my bad work, so it's important to be > careful and tactful. But the history is public record, at least. > > If the patch is bad enough where I have to step in to rewrite, I'm making the submitter look bad no matter what. That makes everyone worse off. Best, Samuel > -- > Ian. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghanshyammann at gmail.com Wed May 30 01:35:41 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Wed, 30 May 2018 10:35:41 +0900 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: Thanks for making this a formal process; it really helps. I think most people usually do this already, but it is always helpful to have it added to the principles. I have gotten mixed feedback on fixing other people's patches in the past; when it angered the author, I would try to leave a comment for a day or two and then fix it only if really needed, otherwise just go with a follow-up. One question: this only applies to code nitpicks, right? Not documentation or release notes? For doc and reno, we should fix spelling or grammar mistakes etc. in the same patch. -gmann On Tue, May 29, 2018 at 10:55 PM, Julia Kreger wrote: > During the Forum, the topic of review culture came up in session after > session. During these discussions, the subject of our use of nitpicks > were often raised as a point of contention and frustration, especially > by community members that have left the community and that were > attempting to re-engage the community. Contributors raised the point > of review feedback requiring for extremely precise English, or > compliance to a particular core reviewer's style preferences, which > may not be the same as another core reviewer. > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > > While nitpicks do help guide and teach, the consensus seemed to be > that we do need to shift the culture a little bit.
As such, I've > proposed a change to our principles[1] in governance that attempts to > capture the essence and spirit of the nitpicking topic as a first > step. > > -Julia > --------- > [1]: https://review.openstack.org/570940 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From masayuki.igawa at gmail.com Wed May 30 02:30:16 2018 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Wed, 30 May 2018 11:30:16 +0900 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <20180530023016.d722mtcazyqmjusr@fastmail.com> I think this applies not only to code but also to docs and reno. We should basically fix such mistakes, especially in docs/reno. But I don't think it has to be in the same patch if the mistake isn't critical, which is what *nitpicks* means. I think we can fix them with follow-up patches if we need to fix them at all. Otherwise, writing reno/docs might be really difficult, hard work for ESL people like me. -- Masayuki On 05/30, Ghanshyam Mann wrote: > Thanks for making this a formal process; it really helps. I think most > people usually do this already, but it is always helpful to have it > added to the principles. > > I have gotten mixed feedback on fixing other people's patches in the past; > when it angered the author, I would try to leave a comment for a day or two > and then fix it only if really needed, otherwise just go with a follow-up. > > One question: this only applies to code nitpicks, right? Not > documentation or release notes? For doc and reno, we should fix spelling > or grammar mistakes etc. in the same patch. > > -gmann > > > On Tue, May 29, 2018 at 10:55 PM, Julia Kreger > wrote: > > During the Forum, the topic of review culture came up in session after > > session. During these discussions, the subject of our use of nitpicks > > were often raised as a point of contention and frustration, especially > > by community members that have left the community and that were > > attempting to re-engage the community. Contributors raised the point > > of review feedback requiring for extremely precise English, or > > compliance to a particular core reviewer's style preferences, which > > may not be the same as another core reviewer. > > > > These things are not just frustrating, but also very inhibiting for > > part time contributors such as students who may also be time limited. > > Or an operator who noticed something that was clearly a bug and that > > put forth a very minor fix and doesn't have the time to revise it over > > and over. > > > > While nitpicks do help guide and teach, the consensus seemed to be > > that we do need to shift the culture a little bit. As such, I've > > proposed a change to our principles[1] in governance that attempts to > > capture the essence and spirit of the nitpicking topic as a first > > step.
> > > > -Julia > > --------- > > [1]: https://review.openstack.org/570940 > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Happy Hacking!! Masayuki Igawa GPG fingerprint = C27C 2F00 3A2A 999A 903A 753D 290F 53ED C899 BF90 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ifat.afek at nokia.com Wed May 30 04:36:14 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Wed, 30 May 2018 04:36:14 +0000 Subject: [openstack-dev] [vitrage] No IRC meeting this week Message-ID: Hi, The IRC meeting this week is canceled, since many Vitrage developers are on vacation. We will meet next Wednesday, June 6th, at 8:00 UTC. See you next week, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Wed May 30 04:52:59 2018 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Wed, 30 May 2018 06:52:59 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> Message-ID: <154ec962-d401-cc7e-5f9d-3335fcac665d@redhat.com> On 05/30/2018 01:42 AM, Zane Bitter wrote: > On 29/05/18 16:49, Slawomir Kaplonski wrote: >> Hi, >> >>> Message written by Jay S Bryant on >>> 29.05.2018, at 22:25: >>> Maybe it would be different now that I am a Core/PTL but in the past >>> I had been warned to be careful as it could be misinterpreted if I >>> was changing other people's patches or that it could look like I was >>> trying to pad my numbers. (I am a nit-picker though I do my best not >>> to be. >> >> Exactly. I remember when I was doing my first patch (or one of first >> patches) and someone pushed new PS with some very small nits fixed. I >> was a bit confused because of that and I was thinking why he did it >> instead of me? >> Now it's of course much more clear for me but for someone who is new >> contributor I think that this might be confusing. Maybe such person >> should at least remember to explain in comment why he pushed new PS >> and that's not „stealing" work of original author :) > > Another issue is that if the original author needs to rev the patch > again for any reason, they then need to figure out how to check out the > modified patch. This requires a fairly sophisticated knowledge of both > git and gerrit, which isn't a problem for those of us who have been > using them for years but is potentially a nightmarish introduction for a > relatively new contributor.
Sometimes it's the right choice though > (especially if the patch owner hasn't been seen for a while). hm, "Download" -> copy/paste, and Voilà. The Gerrit interface is pretty friendly to the user (I am an "old new contributor" and never really struggled with Gerrit itself... On the other hand, heat, ansible, that's another story :) ). > > A follow-up patch is a good alternative, unless of course it conflicts > with another patch in the series. > > +1 with a comment can also get you a long way - it indicates that you've > reviewed the whole patch and have found only nits to quibble with. If > you're a core reviewer, another core could potentially +2/+A on a > subsequent patch set with the nits addressed if they felt it > appropriate, and even if they don't you'll have an easy re-review when > you follow up. > > We have lots of tools in our toolbox that are less blunt than -1. Let's > save -1 for when major work is required and/or the patch as written > would actually break something. +1 (not core, can't +2, sorry :D) "-1" is "aggressive". Cheers, C. > > > Since I am replying to this thread, Julia also mentioned the situation > where two core reviewers are asking for opposite changes to a patch. It > is never ever ever the contributor's responsibility to resolve a dispute > between two core reviewers! If you see a core reviewer's advice on a > patch and you want to give the opposite advice, by all means take it up > immediately - with *the other core reviewer*. NOT the submitter. > Preferably on IRC and not in the review. You work together every day, > you can figure it out! A random contributor has no chance of parachuting > into the middle of that dynamic and walking out unscathed, and they > should never be asked to. > > cheers, > Zane. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From cjeanner at redhat.com Wed May 30 05:05:05 2018 From: cjeanner at redhat.com (Cédric Jeanneret) Date: Wed, 30 May 2018 07:05:05 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <934757b0-58a7-2453-fd05-cb18f8938ce2@redhat.com> On 05/29/2018 09:43 PM, Davanum Srinivas wrote: > Agree with Ian here. > > Also another problem that comes up is: "Why are you touching *MY* > review?" (probably coming from the view where stats - and stackalytics > leaderboard position is important). So i guess we ask permission > before editing (or) file a follow up later (or) just tell folks that > this is ok to do!! We call that "communication". It's an important thing, especially in an open source project with many, many contributors from many, many different cultures and languages. Good communication avoids wars ;). So yep - if someone has anything to correct and feels the urge to push a patch on someone else's change, please do, but communicate the reasons (at least, that's my point of view). Cheers, C. > > Hoping engaging with them will solve yet another issue is someone > going around filing the same change in a dozen projects (repeatedly!), > but that may be wishful thinking.
> > -- Dims > > On Tue, May 29, 2018 at 12:17 PM, Ian Wells wrote: >> If your nitpick is a spelling mistake or the need for a comment where you've >> pretty much typed the text of the comment in the review comment itself, then >> I have personally found it easiest to use the Gerrit online editor to >> actually update the patch yourself. There's nothing magical about the >> original submitter, and no point in wasting your time and theirs to get them >> to make the change. That said, please be a grown up; if you're changing >> code or messing up formatting enough for PEP8 to be a concern, it's your >> responsibility, not the original submitter's, to fix it. Also, do all your >> fixes in one commit if you don't want to make Zuul cry. >> -- >> Ian. >> >> >> On 29 May 2018 at 09:00, Neil Jerram wrote: >>> >>> From my point of view as someone who is still just an occasional >>> contributor (in all OpenStack projects other than my own team's networking >>> driver), and so I think still sensitive to the concerns being raised here: >>> >>> - Nits are not actually a problem, at all, if they are uncontroversial and >>> quick to deal with. For example, if it's a point of English, and most >>> English speakers would agree that a correction is better, it's quick and no >>> problem for me to make that correction. >>> >>> - What is much more of a problem is: >>> >>> - Anything that is more a matter of opinion. If a markup is just the >>> reviewer's personal opinion, and they can't say anything to explain more >>> objectively why their suggestion is better, it would be wiser to defer to >>> the contributor's initial choice. >>> >>> - Questioning something unconstructively or out of proportion to the >>> change being made. This is a tricky one to pin down, but sometimes I've had >>> comments that raise some random left-field question that isn't really >>> related to the change being made, or where the reviewer could have done a >>> couple minutes research themselves and then either made a more precise >>> comment, or not made their comment at all. >>> >>> - Asking - implicitly or explicitly - the contributor to add more >>> cleanups to their change. If someone usefully fixes a problem, and their >>> fix does not of itself impair the quality or maintainability of the >>> surrounding code, they should not be asked to extend their fix so as to fix >>> further problems that a more regular developer may be aware of in that area, >>> or to advance a refactoring / cleanup that another developer has in mind. >>> (At least, not as part of that initial change.) >>> >>> (Obviously the common thread of those problem points is taking up more >>> time; psychologically I think one of the things that can turn a contributor >>> away is the feeling that they've contributed a clearly useful thing, yet the >>> community is stalling over accepting it for reasons that do not appear >>> clearcut.) >>> >>> Hoping this is vaguely helpful... >>> Neil >>> >>> >>> On Tue, May 29, 2018 at 4:35 PM Amy Marrich wrote: >>>> >>>> If I have a nit that doesn't affect things, I'll make a note of it and >>>> say if you do another patch I'd really like it fixed but also give the patch >>>> a vote. What I'll also do sometimes if I know the user or they are online >>>> I'll offer to fix things for them, that way they can see what I've done, >>>> I've sped things along and I haven't caused a simple change to take a long >>>> amount of time and reviews. >>>> >>>> I think this is a great addition! 
>>>> >>>> Thanks, >>>> >>>> Amy (spotz) >>>> >>>> On Tue, May 29, 2018 at 6:55 AM, Julia Kreger >>>> wrote: >>>>> >>>>> During the Forum, the topic of review culture came up in session after >>>>> session. During these discussions, the subject of our use of nitpicks >>>>> were often raised as a point of contention and frustration, especially >>>>> by community members that have left the community and that were >>>>> attempting to re-engage the community. Contributors raised the point >>>>> of review feedback requiring for extremely precise English, or >>>>> compliance to a particular core reviewer's style preferences, which >>>>> may not be the same as another core reviewer. >>>>> >>>>> These things are not just frustrating, but also very inhibiting for >>>>> part time contributors such as students who may also be time limited. >>>>> Or an operator who noticed something that was clearly a bug and that >>>>> put forth a very minor fix and doesn't have the time to revise it over >>>>> and over. >>>>> >>>>> While nitpicks do help guide and teach, the consensus seemed to be >>>>> that we do need to shift the culture a little bit. As such, I've >>>>> proposed a change to our principles[1] in governance that attempts to >>>>> capture the essence and spirit of the nitpicking topic as a first >>>>> step. >>>>> >>>>> -Julia >>>>> --------- >>>>> [1]: https://review.openstack.org/570940 >>>>> >>>>> >>>>> __________________________________________________________________________ >>>>> OpenStack Development Mailing List (not for usage questions) >>>>> Unsubscribe: >>>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From sbauza at redhat.com Wed May 30 07:34:43 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Wed, 30 May 2018 09:34:43 +0200 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> Message-ID: On Wed, May 30, 2018 at 1:33 AM, Nadathur, Sundar wrote: > Hi all, > The Cyborg/Nova scheduling spec [1] details what traits will be applied > to the resource providers that represent devices like GPUs. Some of the > traits referred to vendor names. I got feedback that traits must not refer > to products or specific models of devices. I agree. 
However, we need some > reference to device types to enable matching the VM driver with the device. > > TL;DR We need some reference to device types, but we don't need product > names. I will update the spec [1] to clarify that. Rest of this email > clarifies why we need device types in traits, and what traits we propose to > include. > > In general, an accelerator device is operated by two pieces of software: a > driver in the kernel (which may discover and handle the PF for SR-IOV > devices), and a driver/library in the guest (which may handle the assigned > VF). > > The device assigned to the VM must match the driver/library packaged in > the VM. For this, the request must explicitly state what category of > devices it needs. For example, if the VM needs a GPU, it needs to say > whether it needs an AMD GPU or an Nvidia GPU, since it may have the > driver/libraries for that vendor alone. It may also need to state what > version of Cuda is needed, if it is a Nvidia GPU. These aspects are > necessarily vendor-specific. > > FWIW, the vGPU implementation for Nova also has the same concern. We want to provide traits for explicitly say "use this vGPU type" but given it's related to a specific vendor, we can't just say "ask for this frame buffer size, or just for the display heads", but rather "we need a vGPU accepting Quadro vDWS license". > Further, one driver/library version may handle multiple devices. Since a > new driver version may be backwards compatible, multiple driver versions > may manage the same device. The development/release of the driver/library > inside the VM should be independent of the kernel driver for that device. > > I agree. > For FPGAs, there is an additional twist as the VM may need specific > bitstream(s), and they match only specific device/region types. The > bitstream for a device from a vendor will not fit any other device from the > same vendor, let alone other vendors. IOW, the region type is specific not > just to a vendor but to a device type within the vendor. So, it is > essential to identify the device type. > > So, the proposed set of RCs and traits are as below. As we learn more > about actual usages by operators, we may need to evolve this set. > > - There is a resource class per device category e.g. > CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > - The resource provider that represents a device has the following > traits: > - Vendor/Category trait: e.g. CUSTOM_GPU_AMD, CUSTOM_FPGA_XILINX. > - Device type trait which is a refinement of vendor/category trait > e.g. CUSTOM_FPGA_XILINX_VU9P. > > NOTE: This is not a product or model, at least for FPGAs. Multiple > products may use the same FPGA chip. > NOTE: The reason for having both the vendor/category and this one is that > a flavor may ask for either, depending on the granularity desired. IOW, if > one driver can handle all devices from a vendor (*eye roll*), the flavor > can ask for the vendor/category trait alone. If there are separate drivers > for different device families from the same vendor, the flavor must specify > the trait for the device family. > NOTE: The equivalent trait for GPUs may be like CUSTOM_GPU_NVIDIA_P90, but > I'll let others decide if that is a product or not. > > I was about to propose the same for vGPUs in Nova, ie. using custom traits. The only concern is that we need operators to set the traits directly using osc-placement instead of having Nova magically provide those traits. But anyway, given operators need to set the vGPU types they want, I think it's acceptable. 
> > - For FPGAs, we have additional traits: > - Functionality trait: e.g. CUSTOM_FPGA_COMPUTE, > CUSTOM_FPGA_NETWORK, CUSTOM_FPGA_STORAGE > - Region type ID, e.g. CUSTOM_FPGA_INTEL_REGION_<region_type_uuid>. > - Optionally, a function ID, indicating what function is > currently programmed in the region RP, e.g. CUSTOM_FPGA_INTEL_FUNCTION_<function_uuid>. > Not all > implementations may provide it. The function trait may change on > reprogramming, but it is not expected to be frequent. > - Possibly, CUSTOM_PROGRAMMABLE as a separate trait. > > [1] https://review.openstack.org/#/c/554717/ I'll try to review the spec as soon as I can. -Sylvain > > Thanks. > > Regards, > Sundar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihaela.balas at orange.com Wed May 30 08:45:02 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Wed, 30 May 2018 08:45:02 +0000 Subject: [openstack-dev] [octavia] TERMINATED_HTTPS + SSL to backend server Message-ID: <19835_1527669903_5B0E648F_19835_298_1_78d339a4a6b7445d9a57d899f09a7587@orange.com> Hello, Is there any user story for the scenario below? - Octavia is set to TERMINATED_HTTPS and also initiates SSL to backend servers After testing all the combinations possible and after looking at the Octavia haproxy templates in the Queens version, I understand that this kind of setup is currently not supported. Thanks, Mihaela _________________________________________________________________________________________________________________________ This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From henry.nash at uk.ibm.com Wed May 30 08:45:12 2018 From: henry.nash at uk.ibm.com (Henry Nash) Date: Wed, 30 May 2018 08:45:12 +0000 Subject: [openstack-dev] [keystone] Signing off Message-ID: An HTML attachment was scrubbed... URL: From colleen at gazlene.net Wed May 30 09:05:35 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Wed, 30 May 2018 11:05:35 +0200 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: <1527671135.2688078.1390168880.09E65B15@webmail.messagingengine.com> On Wed, May 30, 2018, at 10:45 AM, Henry Nash wrote: > Hi > > It is with a somewhat heavy heart that I have decided that it is time to > hang up my keystone core status. Having been involved since the closing > stages of Folsom, I've had a good run!
When I look at how far keystone > has come since the v2 days, it is remarkable - and we should all feel a > sense of pride in that. > Thanks to all the hard work, commitment, humour and support from all the > keystone folks over the years - I am sure we will continue to interact > and meet among the many other open source projects that many of us are > becoming involved with. Ad astra! > Best regards, > > Henry > Twitter: @henrynash > linkedIn: www.linkedin.com/in/henrypnash > Thank you for all the incredible work you've done for this project! You were an invaluable asset at the PTGs, it was great to see you there even though keystone hasn't been your main focus lately. Wishing you the best of luck. Colleen From dtantsur at redhat.com Wed May 30 10:11:12 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 30 May 2018 12:11:12 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: Message-ID: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Hi, This is a great discussion and a great suggestion overall, but I'd like to add a note of caution here, especially after reading some comments. Nitpicking is bad, no disagreement. However, I don't want this whole discussion to end up painting -1s as offense or aggression. Just as often as I see newcomers proposing patches frustrated with many iterations, I see newcomers being afraid to -1. Two cases from my personal experience stand out: 1. A person asking me (via a private message) to not put -1 on their patches because they may have problems with their managers. 2. A person proposing a follow-up on *any* comment to their patch, including important ones. Whatever decision the TC takes, I would like it to make sure that we don't paint putting -1 as a bad act. Nor do I want "if you care, just follow up" to be an excuse for putting up bad contributions. Additionally, I would like to have something saying that a -1 is valid and appropriate if a contribution substantially increases the project's technical debt. After already spending *days* refactoring ironic unit tests, I will -1 the hell out of a patch that will try to bring them back to their initial state, I promise :) Dmitry On 05/29/2018 03:55 PM, Julia Kreger wrote: > During the Forum, the topic of review culture came up in session after > session. During these discussions, the subject of our use of nitpicks > were often raised as a point of contention and frustration, especially > by community members that have left the community and that were > attempting to re-engage the community. Contributors raised the point > of review feedback requiring for extremely precise English, or > compliance to a particular core reviewer's style preferences, which > may not be the same as another core reviewer. > > These things are not just frustrating, but also very inhibiting for > part time contributors such as students who may also be time limited. > Or an operator who noticed something that was clearly a bug and that > put forth a very minor fix and doesn't have the time to revise it over > and over. > > While nitpicks do help guide and teach, the consensus seemed to be > that we do need to shift the culture a little bit. As such, I've > proposed a change to our principles[1] in governance that attempts to > capture the essence and spirit of the nitpicking topic as a first > step.
> > -Julia > --------- > [1]: https://review.openstack.org/570940 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sundar.nadathur at intel.com Wed May 30 10:25:41 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 30 May 2018 03:25:41 -0700 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> Message-ID: Hi Sylvain,   Glad to know we are on the same page. I haven't updated the spec with this proposal yet, in case I get more comments :). I will do so later today. Thanks, Sundar On 5/30/2018 12:34 AM, Sylvain Bauza wrote: > > > On Wed, May 30, 2018 at 1:33 AM, Nadathur, Sundar > > wrote: > > Hi all, > The Cyborg/Nova scheduling spec [1] details what traits will be > applied to the resource providers that represent devices like > GPUs. Some of the traits referred to vendor names. I got feedback > that traits must not refer to products or specific models of > devices. I agree. However, we need some reference to device types > to enable matching the VM driver with the device. > > TL;DR We need some reference to device types, but we don't need > product names. I will update the spec [1] to clarify that. Rest of > this email clarifies why we need device types in traits, and what > traits we propose to include. > > In general, an accelerator device is operated by two pieces of > software: a driver in the kernel (which may discover and handle > the PF for SR-IOV devices), and a driver/library in the guest > (which may handle the assigned VF). > > The device assigned to the VM must match the driver/library > packaged in the VM. For this, the request must explicitly state > what category of devices it needs. For example, if the VM needs a > GPU, it needs to say whether it needs an AMD GPU or an Nvidia GPU, > since it may have the driver/libraries for that vendor alone. It > may also need to state what version of CUDA is needed, if it is an > Nvidia GPU. These aspects are necessarily vendor-specific. > > > FWIW, the vGPU implementation for Nova also has the same concern. We > want to provide traits to explicitly say "use this vGPU type" but > given it's related to a specific vendor, we can't just say "ask for > this frame buffer size, or just for the display heads", but rather "we > need a vGPU accepting Quadro vDWS license". > > Further, one driver/library version may handle multiple devices. > Since a new driver version may be backwards compatible, multiple > driver versions may manage the same device. The > development/release of the driver/library inside the VM should be > independent of the kernel driver for that device. > > > I agree. > > For FPGAs, there is an additional twist as the VM may need > specific bitstream(s), and they match only specific device/region > types. The bitstream for a device from a vendor will not fit any > other device from the same vendor, let alone other vendors. IOW, > the region type is specific not just to a vendor but to a device > type within the vendor. So, it is essential to identify the device > type. > > So, the proposed set of RCs and traits is as below. As we learn > more about actual usages by operators, we may need to evolve this set. > > * There is a resource class per device category e.g.
> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > * The resource provider that represents a device has the > following traits: > o Vendor/Category trait: e.g. CUSTOM_GPU_AMD, > CUSTOM_FPGA_XILINX. > o Device type trait which is a refinement of vendor/category > trait e.g. CUSTOM_FPGA_XILINX_VU9P. > > NOTE: This is not a product or model, at least for FPGAs. > Multiple products may use the same FPGA chip. > NOTE: The reason for having both the vendor/category and > this one is that a flavor may ask for either, depending on > the granularity desired. IOW, if one driver can handle all > devices from a vendor (*eye roll*), the flavor can ask for > the vendor/category trait alone. If there are separate > drivers for different device families from the same > vendor, the flavor must specify the trait for the device > family. > NOTE: The equivalent trait for GPUs may be like > CUSTOM_GPU_NVIDIA_P90, but I'll let others decide if that > is a product or not. > > > I was about to propose the same for vGPUs in Nova, i.e. using custom > traits. The only concern is that we need operators to set the traits > directly using osc-placement instead of having Nova magically provide > those traits. But anyway, given operators need to set the vGPU types > they want, I think it's acceptable. > > > o For FPGAs, we have additional traits: > + Functionality trait: e.g. CUSTOM_FPGA_COMPUTE, > CUSTOM_FPGA_NETWORK, CUSTOM_FPGA_STORAGE > + Region type ID, e.g. CUSTOM_FPGA_INTEL_REGION_<region_type_uuid>. > + Optionally, a function ID, indicating what function is > currently programmed in the region RP, e.g. > CUSTOM_FPGA_INTEL_FUNCTION_<function_uuid>. Not all > implementations may provide it. The function trait may > change on reprogramming, but it is not expected to be > frequent. > + Possibly, CUSTOM_PROGRAMMABLE as a separate trait. > > [1] https://review.openstack.org/#/c/554717/ > > > > > I'll try to review the spec as soon as I can. > > -Sylvain > > > > Thanks. > > Regards, > Sundar > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed May 30 10:31:23 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 30 May 2018 12:31:23 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> Message-ID: Jay S Bryant wrote: > > On 5/29/2018 3:19 PM, Doug Hellmann wrote: >> Excerpts from Jonathan Proulx's message of 2018-05-29 16:05:06 -0400: >>> On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote: >>> :> >> maybe we're all saying the same thing here?
>>> :> > Yeah, I feel like we're all essentially in agreement that nits >>> (of the >>> :> > English mistake of typo type) do need to get fixed, but sometimes >>> :> > (often?) putting the burden of fixing them on the original patch >>> :> > contributor is neither fair nor constructive. >>> :> I am ok with this statement if we are all in agreement that doing >>> :> follow-up patches is an acceptable practice. >>> : >>> :Has it ever not been? >>> : >>> :It seems like it has always come down to a bit of negotiation with >>> :the original author, hasn't it? And that won't change, except that >>> :we will be emphasizing to reviewers that we encourage them to be >>> :more active in seeking out that negotiation and then proposing >>> :patches? >>> >>> Exactly, it's more codifying a default. >>> >>> It's not been unacceptable but I think there's some understandable >>> reluctance to make changes to someone else's work, you don't want to >>> seem like your taking over or getting in the way.  At least that's >>> what's in my head when deciding should this be a comment or a patch. >>> >>> I think this discussion suggests for certain class of "nits" patch is >>> preferred to comment.  If that is true making this explicit is a good >>> thing becuase let's face it my social skills are only marginally >>> better than my speeling :) >>> >>> -Jon >>> >> OK, that's all good. I'm just surprised to learn that throwing a >> follow-up patch on top of someone else's patch was ever seen as >> discouraged. >> >> The spice must flow, >> Doug > > Maybe it would be different now that I am a Core/PTL but in the past I > had been warned to be careful as it could be misinterpreted if I was > changing other people's patches or that it could look like I was trying > to pad my numbers. (I am a nit-picker though I do my best not to be. There still seems to be some confusion between "new patchset over existing change" and "follow-up separate change" as to what is the recommended way to fix nits. To clarify, the proposal here is to encourage the posting of a follow-up change. -- Thierry Carrez (ttx) From balazs.gibizer at ericsson.com Wed May 30 11:06:02 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Wed, 30 May 2018 13:06:02 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> Message-ID: <1527678362.3825.3@smtp.office365.com> On Tue, May 29, 2018 at 3:12 PM, Sylvain Bauza wrote: > > > On Tue, May 29, 2018 at 2:21 PM, Balázs Gibizer > wrote: >> >> >> On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza >> wrote: >>> >>> >>> On Tue, 29 May 2018 at 11:02, Balázs Gibizer >>> wrote: >>>> >>>> >>>> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza >>>> wrote: >>>> > >>>> > >>>> > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA >>>> > wrote >>>> > >>>> >> > In that situation, say for example with VGPU inventories, that >>>> >> would mean >>>> >> > that the compute node would stop reporting inventories for its >>>> >> root RP, but >>>> >> > would rather report inventories for at least one single child >>>> RP. >>>> >> > In that model, do we reconcile the allocations that were >>>> already >>>> >> made >>>> >> > against the "root RP" inventory ?
>>>> >> >>>> >> It would be nice to see Eric and Jay comment on this, >>>> >> but if I'm not mistaken, when the virt driver stops reporting >>>> >> inventories for its root RP, placement would try to delete that >>>> >> inventory inside and raise an InventoryInUse exception if any >>>> >> allocations still exist on that resource. >>>> >> >>>> >> ``` >>>> >> update_from_provider_tree() (nova/compute/resource_tracker.py) >>>> >> + _set_inventory_for_provider() >>>> (nova/scheduler/client/report.py) >>>> >> + put() - PUT /resource_providers//inventories >>>> with >>>> >> new inventories (scheduler/client/report.py) >>>> >> + set_inventories() (placement/handler/inventory.py) >>>> >> + _set_inventory() >>>> >> (placement/objects/resource_provider.py) >>>> >> + _delete_inventory_from_provider() >>>> >> (placement/objects/resource_provider.py) >>>> >> -> raise exception.InventoryInUse >>>> >> ``` >>>> >> >>>> >> So we need some trick, something like deleting the VGPU allocations >>>> >> before upgrading and setting the allocations again for the newly >>>> created >>>> >> child after upgrading? >>>> >> >>>> > >>>> > I wonder if we should keep the existing inventory in the root >>>> RP, and >>>> > somehow just reserve the leftover resources (so Placement wouldn't >>>> pass >>>> > that root RP for queries, but would still have allocations). But >>>> > then, where and how to do this ? By the resource tracker ? >>>> > >>>> >>>> AFAIK it is the virt driver that decides to model the vGPU resource >>>> at a >>>> different place in the RP tree so I think it is the responsibility >>>> of >>>> the same virt driver to move any existing allocation from the old >>>> place >>>> to the new place during this change. >>>> >>>> Cheers, >>>> gibi >>> >>> Why not leave the allocation in place and instead have the virt >>> driver update the root RP by modifying the reserved value to the >>> total size? >>> >>> That way, the virt driver wouldn't need to ask for an allocation >>> but rather continue to provide inventories... >>> >>> Thoughts? >> >> Keeping the old allocation at the old RP and adding a similarly sized >> reservation in the new RP feels hackish, as those are not really >> reserved GPUs but used GPUs, just from the old RP. If somebody sums >> up the total reported GPUs in this setup via the placement API then >> she will get more GPUs in total than what is physically visible to >> the hypervisor, as the GPUs that are part of the old allocation are reported twice, >> in two different totals. Could we just report fewer GPU >> inventories to the new RP until the old RP has GPU allocations? >> > > > We could keep the old inventory in the root RP for the previous vGPU > type already supported in Queens and just add other inventories for > other vGPU types now supported. That looks like possibly the simplest > option as the virt driver knows that. That works for me. Can we somehow deprecate the previous, already supported vGPU types to eventually get rid of the split inventory? > > >> Some alternatives from my jetlagged brain: >> >> a) Implement a move inventory/allocation API in placement. Given a >> resource class, a source RP uuid and a destination RP uuid, >> placement moves the inventory and allocations of that resource class >> from the source RP to the destination RP. Then the virt driver can >> call this API to move the allocation. This has an impact on the fast >> forward upgrade as it needs a running virt driver to do the allocation >> move.
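For illustration, the manual equivalent of such a move with today's placement REST API could look roughly like the sketch below. This is only a sketch: it assumes microversion 1.12+ (dict-style allocations, with project_id/user_id in the GET response), the endpoint, token and UUIDs are placeholders, and a real move API would of course do this atomically on the server side:

```
# Sketch: move one resource class of a consumer's allocation from a
# source RP to a destination RP by rewriting the whole allocation.
import requests

PLACEMENT = 'http://placement.example.com/placement'  # placeholder URL
TOKEN = 'ADMIN_TOKEN'                                 # placeholder token
HEADERS = {'X-Auth-Token': TOKEN,
           'OpenStack-API-Version': 'placement 1.12'}


def move_allocation(consumer_uuid, source_rp, dest_rp, rc='VGPU'):
    resp = requests.get('%s/allocations/%s' % (PLACEMENT, consumer_uuid),
                        headers=HEADERS)
    resp.raise_for_status()
    body = resp.json()

    # Rebuild a clean {rp_uuid: {'resources': {...}}} dict, dropping
    # read-only keys (e.g. 'generation') from the GET response.
    allocs = {rp: {'resources': dict(a['resources'])}
              for rp, a in body['allocations'].items()}

    # Move the resource class from the source RP to the destination RP.
    amount = allocs[source_rp]['resources'].pop(rc)
    if not allocs[source_rp]['resources']:
        del allocs[source_rp]
    allocs.setdefault(dest_rp, {'resources': {}})
    allocs[dest_rp]['resources'][rc] = amount

    # A single PUT replaces all of the consumer's allocations at once.
    resp = requests.put('%s/allocations/%s' % (PLACEMENT, consumer_uuid),
                        headers=HEADERS,
                        json={'allocations': allocs,
                              'project_id': body['project_id'],
                              'user_id': body['user_id']})
    resp.raise_for_status()
```

Whatever tool ends up doing this (virt driver, nova-manage or placement-manage) would also have to guard against racing with the scheduler, which is one more argument for a server-side move.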
>> > > Instead of having the virt driver doing that (TBH, I don't like that > given both Xen and libvirt drivers have the same problem), we could > write a nova-manage upgrade call for that that would call the > Placement API, sure. The nova-manage is another possible way similar to my idea #c) but there I imagined the logic in placement-manage instead of nova-manage. > >> b) For this I assume that live migrating an instance having a GPU >> allocation on the old RP will allocate GPU for that instance from >> the new RP. In the virt driver do not report GPUs to the new RP >> while there is allocation for such GPUs in the old RP. Let the >> deployer live migrate away the instances. When the virt driver >> detects that there is no more GPU allocations on the old RP it can >> delete the inventory from the old RP and report it to the new RP. >> > > For the moment, vGPUs don't support live migration, even within QEMU. > I haven't checked that, but IIUC when you live-migrate an instance > that have vGPUs, it will just migrate it without recreating the vGPUs. If there is no live migration support for vGPUs then this option can be ignored. > Now, the problem is with the VGPU allocation, we should delete it > then. Maybe a new bug report ? Sounds like a bug report to me :) > >> c) For this I assume that there is no support for live migration of >> an instance having a GPU. If there is GPU allocation in the old RP >> then virt driver does not report GPU inventory to the new RP just >> creates the new nested RPs. Provide a placement-manage command to do >> the inventory + allocation copy from the old RP to the new RP. >> > > what's the difference with the first alternative ? I think after you mentioned nova-manage for the first alternative the difference became only doing it from nova-manage or from placement-manage. The placement-manage solution has the benefit of being a pure DB operation, moving inventory and allocation between two RPs while nova-manage would need to call a new placement API. > > Anyway, looks like it's pretty simple to just keep the inventory for > the already existing vGPU type in the root RP, and just add nested > RPs for other vGPU types. > Oh, and btw. we could possibly have the same problem when we > implement the NUMA spec that I need to rework > https://review.openstack.org/#/c/552924/ If we want to move the VCPU resources from the root to the nested NUMA RP then yes, that feels like the same problem. gibi > > -Sylvain >> Cheers, >> gibi >> >> >>> >>>> >>>> > -Sylvain >>>> > >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From a.chadin at servionica.ru Wed May 30 11:50:52 2018 From: a.chadin at servionica.ru (=?utf-8?B?0KfQsNC00LjQvSDQkNC70LXQutGB0LDQvdC00YA=?=) Date: Wed, 30 May 2018 11:50:52 +0000 Subject: [openstack-dev] [watcher] Cancel meeting today Message-ID: <65B8766E-A1A0-4847-81A5-3EE003DC9D8A@servionica.ru> Hi Watchers, I cancel weekly meeting today because of urgent internal meetings. 
I’m available on openstack-watcher channel most of the time. Let’s meet on June 6. Best Regards, ____ Alex

From amal.kammoun.2 at gmail.com Wed May 30 11:57:41 2018 From: amal.kammoun.2 at gmail.com (amal kammoun) Date: Wed, 30 May 2018 13:57:41 +0200 Subject: [openstack-dev] collect up and down time for deployed openstack resources Message-ID: Hi, We aim at collecting inter-failure times for the compute and network resources (i.e., the availability and reliability of each resource). Is there any means, via a RESTful API, to collect these metrics? If not, which OpenStack/ceilometer/heat modules may be extended in order to collect this information, via the implementation of a probe protocol for instance? Thanks in advance, Regards, Amal -------------- next part -------------- An HTML attachment was scrubbed... URL:

From pkovar at redhat.com Wed May 30 12:05:47 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 30 May 2018 14:05:47 +0200 Subject: [openstack-dev] [docs] Documentation meeting today Message-ID: <20180530140547.338e95eb245baed88ba76d83@redhat.com> Hi all, The docs meeting will continue today at 16:00 UTC in #openstack-doc, as scheduled. For more details, see the meeting page: https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting Cheers, pk

From thierry at openstack.org Wed May 30 12:29:10 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 30 May 2018 14:29:10 +0200 Subject: [openstack-dev] [tc] Organizational diversity tag In-Reply-To: References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org> Message-ID: Samuel Cassiba wrote: > [...] > The moniker of 'low-activity' does give the very real, negative > perception that things are just barely hanging on. It conveys the > subconscious, officiated statement (!!!whether or not this was > intended!!!) that nobody in their right mind should consider using the > subproject, let alone develop on or against it, for fear that it wind up > some poor end-user's support nightmare. [...] Yes, that's fair... and why my original suggestion was to do a (regular) qualitative report that would use words rather than binary tags. -- Thierry Carrez (ttx)

From elod.illes at ericsson.com Wed May 30 12:35:49 2018 From: elod.illes at ericsson.com (Elõd Illés) Date: Wed, 30 May 2018 14:35:49 +0200 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 Message-ID: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Hi, In the last two days the ceilometer [1] [2] [3] and tooz [4] [5] [6] tox-py27 periodic stable jobs are failing. The root cause is the following: * cmd2 released version 0.9.0, which requires python >=3.4 from now on. These projects have a comment in their tox.ini that they do not consume upper-constraints.txt (in which there is an upper constraint for cmd2). My question is: could we use upper-constraints.txt on these projects as well, or is there any reason why it isn't the case? Of course an entry could be added to test-requirements.txt with "cmd2<0.9.0", but wouldn't it be better to use the upper-constraints.txt?
Thanks in advance, Előd [1] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/ceilometer/stable/queens/openstack-tox-py27/b44c7cd/ [2] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/ceilometer/stable/pike/openstack-tox-py27/6c4fd5d/ [3] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/ceilometer/stable/ocata/openstack-tox-py27/4d2d0b3/ [4] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/tooz/stable/queens/openstack-tox-py27/37bd360/ [5] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/tooz/stable/pike/openstack-tox-py27/8bb8c29/ [6] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/tooz/stable/ocata/openstack-tox-py27/1016d56/

From julien at danjou.info Wed May 30 12:42:58 2018 From: julien at danjou.info (Julien Danjou) Date: Wed, 30 May 2018 12:42:58 +0200 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> ("Elõd Illés"'s message of "Wed, 30 May 2018 14:35:49 +0200") References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Message-ID: On Wed, May 30 2018, Elõd Illés wrote: > In the last two days the ceilometer [1] [2] [3] and tooz [4] [5] [6] tox-py27 > periodic stable jobs are failing. The root cause is the following: > * cmd2 released version 0.9.0, which requires python >=3.4 from now on. > These projects have a comment in their tox.ini that they do not consume > upper-constraints.txt (in which there is an upper constraint for cmd2). > > My question is: could we use upper-constraints.txt on these projects as well, > or is there any reason why it isn't the case? > Of course an entry could be added to test-requirements.txt with "cmd2<0.9.0", > but wouldn't it be better to use the upper-constraints.txt? The question is: why cmd2 0.9.0 does not work and how do we fix that? -- Julien Danjou ;; Free Software hacker ;; https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL:

From msashika38 at gmail.com Wed May 30 12:56:29 2018 From: msashika38 at gmail.com (Ashika Meher Majety) Date: Wed, 30 May 2018 18:26:29 +0530 Subject: [openstack-dev] [Heat] : Query regarding bug 1769089 Message-ID: Hello, We have raised a bug in launchpad and the bug link is as follows: https://bugs.launchpad.net/heat-dashboard/+bug/1769089 . Can anyone please provide a solution or fix for this issue since it's been 20 days since we have created this bug. Thanks&Regards, Ashika Meher -------------- next part -------------- An HTML attachment was scrubbed... URL:

From elod.illes at ericsson.com Wed May 30 12:55:44 2018 From: elod.illes at ericsson.com (Elõd Illés) Date: Wed, 30 May 2018 14:55:44 +0200 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Message-ID: cmd2 says that: "Python 2.7 support is EOL Support for adding new features to the Python 2.7 release of |cmd2| was discontinued on April 15, 2018. Bug fixes will be supported for Python 2.7 via 0.8.x until August 31, 2018. Supporting Python 2 was an increasing burden on our limited resources.
Switching to support only Python 3 will allow us to clean up the codebase, remove some cruft, and focus on developing new features." See: https://github.com/python-cmd2/cmd2 Előd On 2018-05-30 14:42, Julien Danjou wrote: > On Wed, May 30 2018, Elõd Illés wrote: > >> In the last two days the ceilometer [1] [2] [3] and tooz [4] [5] [6] tox-py27 >> periodic stable jobs are failing. The root cause is the following: >> * cmd2 released version 0.9.0, which requires python >=3.4 from now on. >> These projects have comment in their tox.ini that they do not consume >> upper-constraints.txt (in which there is an upper constraints for cmd2). >> >> My question is: could we use upper-constraints.txt on these projects as well, >> or is there any reason why it isn't the case? >> Of course an entry could be added to test-requirements.txt with "cmd2<0.9.0", >> but wouldn't it be better to use the upper-constraints.txt? > The question is: why cmd2 0.9.0 does not work and how do we fix that? > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed May 30 13:00:31 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 30 May 2018 13:00:31 +0000 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Message-ID: <20180530130030.zbx43r2ozgrluotk@yuggoth.org> On 2018-05-30 14:42:58 +0200 (+0200), Julien Danjou wrote: [...] > The question is: why cmd2 0.9.0 does not work and how do we fix that? The cmd2 maintainers decided that they no longer want to carry support for old Python interpreters, so versions starting with 0.9.0 only work on Python 3.4 and later: https://github.com/python-cmd2/cmd2/issues/421 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From thierry at openstack.org Wed May 30 13:19:34 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 30 May 2018 15:19:34 +0200 Subject: [openstack-dev] [TC] [Infra] Terms of service for hosted projects In-Reply-To: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> References: <20180529173724.aww4myeqpof3dtnj@yuggoth.org> Message-ID: Jeremy Stanley wrote: > On 2018-05-29 10:53:03 -0400 (-0400), Zane Bitter wrote: >> It is my understanding that the infra team will enforce the >> following conditions when a repo import request is received: >> >> * The repo must be licensed under an OSI-approved open source >> license. > > That has been our custom, but we should add a statement to this > effect in the aforementioned document. > >> * If the repo is a fork of another project, there must be (public) >> evidence of an attempt to co-ordinate with the upstream first. > > I don't recall this ever being mandated, though the project-config > reviewers do often provide suggestions to project creators such as > places in the existing community with which they might consider > cooperating/collaborating. Right, that was never a rule (for Stackforge or the current "non-official project hosting" space), and I doubt very much we have enforced it in the past. FWIW we currently host forks of gitdm, gerrit, as well as copies of all sorts of JS code under xstatic-*. That said, I think consulting upstream in case of code copies/forks is a good practice to add in the future. > [...] 
>> In addition, I think we should require projects hosted on our >> infrastructure to agree to other policies: >> >> * Adhere to the OpenStack Foundation Code of Conduct. > > This seems like a reasonable addition to our hosting requirements. > >> * Not misrepresent their relationship to the official OpenStack >> project or the Foundation. Ideally we'd come up with language that >> they *can* use to describe their status, such as "hosted on the >> OpenStack infrastructure". > > Also a great suggestion. We sort of say that in the "what being an > unofficial project is not" bullet list, but it could use some > fleshing out. "The OpenStack infrastructure" is likely to be changed in the near future to a more neutral name, but I would keep the "hosted on" language to describe the relationship. -- Thierry Carrez (ttx)

From pratapagoutham at gmail.com Wed May 30 13:25:12 2018 From: pratapagoutham at gmail.com (Goutham Pratapa) Date: Wed, 30 May 2018 18:55:12 +0530 Subject: [openstack-dev] [Heat] : Query regarding bug 1769089 In-Reply-To: References: Message-ID: Try asking people on IRC: connect through webchat.freenode.net (nickname: your name, channel: #openstack-heat) and you will find people there to ask. It's better to talk to them directly rather than here; people might not respond here, but there you have a good chance. At https://review.openstack.org/#/admin/groups/114,members you will find people with these names; ping them directly... Cheers Goutham. On Wed, May 30, 2018 at 6:26 PM, Ashika Meher Majety wrote: > Hello, > > We have raised a bug in launchpad and the bug link is as follows: > https://bugs.launchpad.net/heat-dashboard/+bug/1769089 . > Can anyone please provide a solution or fix for this issue since it's been > 20 days since we have created this bug. > > Thanks&Regards, > Ashika Meher > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Cheers !!! Goutham Pratapa -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ksnhr.tech at gmail.com Wed May 30 13:26:42 2018 From: ksnhr.tech at gmail.com (Kaz Shinohara) Date: Wed, 30 May 2018 22:26:42 +0900 Subject: [openstack-dev] [Heat] : Query regarding bug 1769089 In-Reply-To: References: Message-ID: Hi, First off, sorry for being late to respond. Looking at your comment, your environment is Newton & AFAIK Newton is EOL; even if you wait for the fix, it will not be delivered to Newton. https://releases.openstack.org/ My current concern is that your raised issue may happen in the Queens code too (the latest maintained release). Note: the dashboard for heat is split out from Horizon since Queens. Let me check if I can reproduce your issue in my environment (Queens) first. I will update my result at https://storyboard.openstack.org/#!/story/1769089 Cheers, Kaz 2018-05-30 21:56 GMT+09:00 Ashika Meher Majety : > Hello, > > We have raised a bug in launchpad and the bug link is as follows: > https://bugs.launchpad.net/heat-dashboard/+bug/1769089 . > Can anyone please provide a solution or fix for this issue since it's been > 20 days since we have created this bug.
> > Thanks&Regards, > Ashika Meher > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From thomas at goirand.fr Wed May 30 13:27:59 2018 From: thomas at goirand.fr (Thomas Goirand) Date: Wed, 30 May 2018 15:27:59 +0200 Subject: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon In-Reply-To: References: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> Message-ID: <7551cc21-a759-147f-9720-2dbf7a45b1aa@goirand.fr> Hi Radomir, I'm adding the debian bug as Cc. On 05/28/2018 08:35 AM, Radomir Dopieralski wrote: > I did a quick search for all the glyphs we are using: > > ~/dev/horizon(master)> ag 'fa-' | egrep -o 'fa-[a-z-]*' | sort | uniq > fa- > fa-angle-left > [...] Thanks for your investigation. I did a quick test, and loaded all of these glyphs into a test HTML page. As much as I can see, only 4 glyphs aren't in fa-solid-900: fa-cloud-upload, fa-pencil, fa-share-square-o and fa-sign-out. It'd be nice if we could replace these with glyphs present in fa-solid-900 so we could simply replace the old v4 font by fa-solid-900 only. Your thoughts? Cheers, Thomas Goirand (zigo)

From rico.lin.guanyu at gmail.com Wed May 30 13:29:23 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 30 May 2018 06:29:23 -0700 Subject: [openstack-dev] [Heat] : Query regarding bug 1769089 In-Reply-To: References: Message-ID: Hi As Kaz mentioned, since we moved to StoryBoard (instead of Launchpad) please maintain your bug here: https://storyboard.openstack.org/#!/story/1769089 Also, I tried to use the same template [1] as you did to create a stack on the master version and it works (I can't reproduce it). If possible, can you check the logs to find where that error is from: is it from heat-engine.log, heat-api.log, or elsewhere? Also, Heat IRC is at #heat not #openstack-heat :)

[1] TEMPLATE:

description: Template for router create
heat_template_version: '2013-05-23'
resources:
  router:
    type: OS::Neutron::Router
    properties:
      name: resourceRouter

2018-05-30 21:26 GMT+08:00 Kaz Shinohara : > Hi, > > > First off, sorry for being late to respond. > > Looking at your comment, > your environment is Newton & AFAIK Newton is EOL; even if you wait > for the fix, it will not be delivered to Newton. > https://releases.openstack.org/ > > My current concern is that your raised issue may happen in the Queens code too > (the latest maintained release). > Note: the dashboard for heat is split out from Horizon since Queens. > > Let me check if I can reproduce your issue in my environment (Queens) > first. > I will update my result at https://storyboard.openstack.org/#!/story/1769089 > > Cheers, > Kaz > > > > 2018-05-30 21:56 GMT+09:00 Ashika Meher Majety : > >> Hello, >> >> We have raised a bug in launchpad and the bug link is as follows: >> https://bugs.launchpad.net/heat-dashboard/+bug/1769089 . >> Can anyone please provide a solution or fix for this issue since it's >> been 20 days since we have created this bug.
>> >> Thanks&Regards, >> Ashika Meher >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed May 30 13:47:50 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 30 May 2018 08:47:50 -0500 Subject: [openstack-dev] Questions about token scopes Message-ID: I know the keystone team has been doing a lot of work on scoped tokens and Lance has been trying to roll that out to other projects (like nova). In Rocky the nova team is adding granular policy rules to the placement API [1] which is a good opportunity to set scope on those rules as well. For now, we've just said everything is system scope since resources in placement, for the most part, are managed by "the system". But we do have some resources in placement which have project/user information in them, so could theoretically also be scoped to a project, like GET /usages [2]. While going through this, I've been hammering Lance with questions but I had some more this morning and wanted to send them to the list to help spread the load and share the knowledge on working with scoped tokens in the other projects. So here goes with the random questions: * devstack has the admin project/user - does that by default get system scope tokens? I see the scope is part of the token create request [3] but it's optional, so is there a default value if not specified? * Why don't the token create and show APIs return the scope? * It looks like python-openstackclient doesn't allow specifying a scope when issuing a token, is that going to be added? The reason I'm asking about OSC stuff is because we have the osc-placement plugin [4] which allows users with the admin role to work with resources in placement, which could be useful for things like fixing up incorrect or leaked allocations, i.e. fixing the fallout of a bug in nova. I'm wondering if we define all of the placement API rules as system scope and we're enforcing scope, will admins, as we know them today, continue to be able to use those APIs? Or will deployments just need to grow a system-scope admin project/user and per-project admin users, and then use the former for working with placement via the OSC plugin? 
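For reference, the scope in the token create request [3] is just an optional member of the POST /v3/auth/tokens body; as far as I can tell, omitting it gets you an unscoped token. A minimal sketch of a system-scoped request (the user name and password below are placeholders):

```
# Sketch of the v3 authentication body; only the "scope" member differs
# between scope types. All names and secrets here are placeholders.
auth_request = {
    "auth": {
        "identity": {
            "methods": ["password"],
            "password": {
                "user": {
                    "name": "admin",
                    "domain": {"id": "default"},
                    "password": "secret",
                },
            },
        },
        # System scope; use {"project": {...}} or {"domain": {...}} for
        # project or domain scope, or omit "scope" for an unscoped token.
        "scope": {"system": {"all": True}},
    },
}
```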
[1] https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged) [2] https://developer.openstack.org/api-ref/placement/#list-usages [3] https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization [4] https://docs.openstack.org/osc-placement/latest/index.html -- Thanks, Matt

From juliaashleykreger at gmail.com Wed May 30 13:54:33 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 30 May 2018 09:54:33 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> Message-ID: On Tue, May 29, 2018 at 7:42 PM, Zane Bitter wrote: [trim] > Since I am replying to this thread, Julia also mentioned the situation where > two core reviewers are asking for opposite changes to a patch. It is never > ever ever the contributor's responsibility to resolve a dispute between two > core reviewers! If you see a core reviewer's advice on a patch and you want > to give the opposite advice, by all means take it up immediately - with *the > other core reviewer*. NOT the submitter. Preferably on IRC and not in the > review. You work together every day, you can figure it out! A random > contributor has no chance of parachuting into the middle of that dynamic and > walking out unscathed, and they should never be asked to. > Absolutely agree! In the case I have in mind, where it happened to me personally, I think it was 10-15 revisions apart, so it becomes a little hard to identify when you're starting to play the game of "make the cores happy to land it". It is not a fun game for the contributor. Truthfully it caused me to add the behavior of intentionally waiting longer between uploads of new revisions... which does not help at all. The other conundrum is when you disagree and the person has left a -1 which blocks forward progress and any additional reviews since it gets viewed as "not ready", which makes it even harder and slower to build consensus. At some point you get into "Oh, what formatting can I change to clear that -1 because the person is not responding" mode. At least beginning to shift the review culture should help many of these issues.

From lbragstad at gmail.com Wed May 30 13:55:59 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 30 May 2018 08:55:59 -0500 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: <77d8c5cd-06a8-d358-0533-631a86bd92d1@gmail.com> I remember when I first started contributing upstream, I spent a Saturday sending you internal emails asking about the intricacies of database migrations :) Since then you've given me (or I've stolen) a number of other tools and techniques. Thanks for everything you've done for this community, Henry. It's been a pleasure! On 05/30/2018 03:45 AM, Henry Nash wrote: > Hi > > It is with a somewhat heavy heart that I have decided that it is time > to hang up my keystone core status. Having been involved since the > closing stages of Folsom, I've had a good run!
When I look at how far > keystone has come since the v2 days, it is remarkable - and we should > all feel a sense of pride in that. > > Thanks to all the hard work, commitment, humour and support from all > the keystone folks over the years - I am sure we will continue to > interact and meet among the many other open source projects that many > of us are becoming involved with. Ad astra! > > Best regards, > > Henry > Twitter: @henrynash > linkedIn: www.linkedin.com/in/henrypnash > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with > number 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From hrybacki at redhat.com Wed May 30 13:57:37 2018 From: hrybacki at redhat.com (Harry Rybacki) Date: Wed, 30 May 2018 09:57:37 -0400 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: <1527671135.2688078.1390168880.09E65B15@webmail.messagingengine.com> References: <1527671135.2688078.1390168880.09E65B15@webmail.messagingengine.com> Message-ID: On Wed, May 30, 2018 at 5:05 AM, Colleen Murphy wrote: > On Wed, May 30, 2018, at 10:45 AM, Henry Nash wrote: >> Hi >> >> It is with a somewhat heavy heart that I have decided that it is time to >> hang up my keystone core status. Having been involved since the closing >> stages of Folsom, I've had a good run! When I look at how far keystone >> has come since the v2 days, it is remarkable - and we should all feel a >> sense of pride in that. >> Thanks to all the hard work, commitment, humour and support from all the >> keystone folks over the years - I am sure we will continue to interact >> and meet among the many other open source projects that many of us are >> becoming involved with. Ad astra! >> Best regards, >> >> Henry >> Twitter: @henrynash >> linkedIn: www.linkedin.com/in/henrypnash >> > > Thank you for all the incredible work you've done for this project! You were an invaluable asset at the PTGs, it was great to see you there even though keystone hasn't been your main focus lately. Wishing you the best of luck. > Hear, hear. Keystone has largely been shaped by those continual efforts, Henry. Your face may be missed but your voice will hang around to guide us in the future. Hope to run into you somewhere, sometime!
/R > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Harry From dms at danplanet.com Wed May 30 14:01:48 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 30 May 2018 07:01:48 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: (Chen CH Ji's message of "Fri, 25 May 2018 17:18:11 +0800") References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> Message-ID: > can I know a use case for this 'live copy metadata or ' the 'only way > to access device tags when hot-attach? my thought is this is one time > thing in cloud-init side either through metatdata service or config > drive and won't be used later? then why I need a live copy? If I do something like this: nova interface-attach --tag=data-network --port-id=foo myserver Then we update the device metadata live, which is visible immediately via the metadata service. However, in config drive, that only gets updated the next time the drive is generated (which may be a long time away). For more information on device metadata, see: https://specs.openstack.org/openstack/nova-specs/specs/mitaka/approved/virt-device-role-tagging.html Further, some of the drivers support setting the admin password securely via metadata, which similarly requires the instance pulling updated information out, which wouldn't be available in the config drive. For reference: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L1985-L1993 --Dan From dprince at redhat.com Wed May 30 14:10:53 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 30 May 2018 10:10:53 -0400 Subject: [openstack-dev] [tripleo] Containerized Undercloud deep-dive In-Reply-To: References: Message-ID: We are on for this tomorrow (Thursday) at 2pm UTC (10am EST). We'll meet here: https://redhat.bluejeans.com/dprince/ and record it live. We'll do an overview presentation and then perhaps jump into a terminal for some live questions. Dan On Tue, May 15, 2018 at 10:51 AM, Emilien Macchi wrote: > Dan and I are organizing a deep-dive session focused on the containerized > undercloud. > > https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud > > We proposed a date + list of topics but feel free to comment and ask for > topics/questions. > Thanks, > -- > Emilien & Dan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From e0ne at e0ne.info Wed May 30 14:13:52 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 30 May 2018 17:13:52 +0300 Subject: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon In-Reply-To: <7551cc21-a759-147f-9720-2dbf7a45b1aa@goirand.fr> References: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> <7551cc21-a759-147f-9720-2dbf7a45b1aa@goirand.fr> Message-ID: Hi Thomas, > As my python3-xstatic-font-awesome removes the embedded fonts It sounds like you broke xstatic-* packages for Debian and use something we don't test with Horizon at all. Speaking about Rocky/master version, our upper-constraint XStatic-Font-Awesome===4.7.0.0 [1]. 
We don't test horizon with font awesome v 5.0.10. Second, it'd be nice if Horizon could adapt and use the new v5 > font-awesome, so that the problem is completely solved. +1. I'll put my +2/A once somebody provides a patch for it with a detailed description how can I test it. Unfortunately, Horizon team has a very limited set of resources, so we can't adopt new version of xstatic-* fast :(. [1] https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L61 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, May 30, 2018 at 4:27 PM, Thomas Goirand wrote: > Hi Radomir, > > I'm adding the debian bug as Cc. > > On 05/28/2018 08:35 AM, Radomir Dopieralski wrote: > > I did a quick search for all the glyphs we are using: > > > > ~/dev/horizon(master)> ag 'fa-' | egrep -o 'fa-[a-z-]*' | sort | uniq > > fa- > > fa-angle-left > > [...] > > Thanks for your investigation. > > I did a quick test, and loaded all of these glyphs into a test HTML > page. As much as I can see, only 4 glyphs aren't in fa-solid-900: > > fa-cloud-upload > fa-pencil > fa-share-square-o > fa-sign-out > > It'd be nice if we could replace these by glyphs present in fa-solid-900 > so we could simply replace the old v4 font by fa-solid-900 only. > > Your thoughts? > > Cheers, > > Thomas Goirand (zigo) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed May 30 14:16:23 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 30 May 2018 16:16:23 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> Message-ID: On 05/30/2018 03:54 PM, Julia Kreger wrote: > On Tue, May 29, 2018 at 7:42 PM, Zane Bitter wrote: > [trim] >> Since I am replying to this thread, Julia also mentioned the situation where >> two core reviewers are asking for opposite changes to a patch. It is never >> ever ever the contributor's responsibility to resolve a dispute between two >> core reviewers! If you see a core reviewer's advice on a patch and you want >> to give the opposite advice, by all means take it up immediately - with *the >> other core reviewer*. NOT the submitter. Preferably on IRC and not in the >> review. You work together every day, you can figure it out! A random >> contributor has no chance of parachuting into the middle of that dynamic and >> walking out unscathed, and they should never be asked to. >> > > Absolutely agree! In the case that was in mind where it has happened > to me personally, I think it was 10-15 revisions apart, so it becomes > a little hard to identify when your starting to play the game of "make > the cores happy to land it". It is not a fun game for the contributor. > Truthfully it caused me to add the behavior of intentionally waiting > longer between uploads of new revisions... which does not help at all. 
> > The other conundrum is when you disagree and the person has left a -1 > which blocks forward progress and any additional reviews since it gets > viewed as "not ready", which makes it even harder and slower to build > consensus. At some point you get into "Oh, what formatting can I > change to clear that -1 because the person is not responding" mode. This, by the way, is a much broader and interesting question. In case of a by-passer leaving a comment ("this link must be https") and disappearing, the PTL or any core can remove the reviewer from the review. What to do with a core leaving a comment or non-core leaving a potentially useful comment and going to PTO? > > At least beginning to shift the review culture should help many of these issues. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From cjeanner at redhat.com Wed May 30 14:18:17 2018 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Wed, 30 May 2018 16:18:17 +0200 Subject: [openstack-dev] [tripleo] Containerized Undercloud deep-dive In-Reply-To: References: Message-ID: On 05/30/2018 04:10 PM, Dan Prince wrote: > We are on for this tomorrow (Thursday) at 2pm UTC (10am EST). meaning 3:00pm CET - conflicting with my squad meeting (3:30pm for squad red) :(. I'll watch it on youtube then. > > We'll meet here: https://redhat.bluejeans.com/dprince/ and record it > live. We'll do an overview presentation and then perhaps jump into a > terminal for some live questions. > > Dan > > On Tue, May 15, 2018 at 10:51 AM, Emilien Macchi wrote: >> Dan and I are organizing a deep-dive session focused on the containerized >> undercloud. >> >> https://etherpad.openstack.org/p/tripleo-deep-dive-containerized-undercloud >> >> We proposed a date + list of topics but feel free to comment and ask for >> topics/questions. >> Thanks, >> -- >> Emilien & Dan >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cédric Jeanneret Software Engineer DFG:DF -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From juliaashleykreger at gmail.com Wed May 30 14:23:06 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 30 May 2018 10:23:06 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: I don't feel like anyone is proposing to end the use of -1's, but that we should generally be encouraging, accepting, and trusting. That means if there are major gaps or issues, then the use of a -1 is perfectly valid because it needs more work. 
We also need to be mindful of context as well, and in the grand scheme not try for something perfect as many often do. This *does* mean we land something that needs to be fixed later or reverted later, but neither are things we should fear. We can't let that fear control us. There are also the qualifiers of "does this help" or "does this harm", and neither are nitpicks to me. That is also going to vary from project to project, and even more so case by case based upon the item being evaluated. That is where the context is important. Perhaps we need to also, as the community evolves, consider mindfulness. Granted, mindfulness is hard and can be even harder with elevated stress. Maybe this is where we should also pitch something like "Stacker Cruises" with an intermediate forced vacation for everyone. :) On Wed, May 30, 2018 at 6:11 AM, Dmitry Tantsur wrote: > Hi, > > This is a great discussion and a great suggestion overall, but I'd like to > add a grain of salt here, especially after reading some comments. > > Nitpicking is bad, no disagreement. However, I don't like this whole > discussion to end up marking -1's as offense or aggression. Just as often as > I see newcomers proposing patches frustrated with many iterations, I see > newcomers being afraid to -1. > > In my personal experience I have two remarkable cases: > 1. A person asking me (via a private message) to not put -1 on their patches > because they may have problems with their managers. > 2. A person proposing a follow-up on *any* comment to their patch, including > important ones. > > Whatever decision the TC takes, I would like it to make sure that we don't > paint putting -1 as a bad act. Nor do I want "if you care, just follow-up" > to be an excuse for putting up bad contributions. > > Additionally, I would like to have something saying that a -1 is valid and > appropriate, if a contribution substantially increases the project's > technical debt. After already spending *days* refactoring ironic unit tests, > I will -1 the hell out of a patch that will try to bring them back to their > initial state, I promise :) > > Dmitry > > > On 05/29/2018 03:55 PM, Julia Kreger wrote: >> >> During the Forum, the topic of review culture came up in session after >> session. During these discussions, the subject of our use of nitpicks >> were often raised as a point of contention and frustration, especially >> by community members that have left the community and that were >> attempting to re-engage the community. Contributors raised the point >> of review feedback requiring for extremely precise English, or >> compliance to a particular core reviewer's style preferences, which >> may not be the same as another core reviewer. >> >> These things are not just frustrating, but also very inhibiting for >> part time contributors such as students who may also be time limited. >> Or an operator who noticed something that was clearly a bug and that >> put forth a very minor fix and doesn't have the time to revise it over >> and over. >> >> While nitpicks do help guide and teach, the consensus seemed to be >> that we do need to shift the culture a little bit. As such, I've >> proposed a change to our principles[1] in governance that attempts to >> capture the essence and spirit of the nitpicking topic as a first >> step. 
>> -Julia >> --------- >> [1]: https://review.openstack.org/570940 >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From yroblamo at redhat.com Wed May 30 14:27:36 2018 From: yroblamo at redhat.com (Yolanda Robla Mota) Date: Wed, 30 May 2018 16:27:36 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> Message-ID: I see another problem working on patchsets with lots of revisions and long-lived history, such as specs or a complex change. The opinions of several reviewers may be different. So the first reviewer leaves a comment, and the owner of the change amends the patch according to it. But after time and iterations pass (and because the history is very long or impossible to read) another reviewer who has not read the whole history may come and put a -1 that contradicts the initial change. Things like that could be sorted if we keep shorter patchsets, and merge things that are production ready but not perfect, then we come up with follow-up patches to amend possible comments, or create enhancements. On Wed, May 30, 2018 at 4:16 PM, Dmitry Tantsur wrote: > On 05/30/2018 03:54 PM, Julia Kreger wrote: > >> On Tue, May 29, 2018 at 7:42 PM, Zane Bitter wrote: >> [trim] >> >>> Since I am replying to this thread, Julia also mentioned the situation >>> where >>> two core reviewers are asking for opposite changes to a patch. It is >>> never >>> ever ever the contributor's responsibility to resolve a dispute between >>> two >>> core reviewers! If you see a core reviewer's advice on a patch and you >>> want >>> to give the opposite advice, by all means take it up immediately - with >>> *the >>> other core reviewer*. NOT the submitter. Preferably on IRC and not in the >>> review. You work together every day, you can figure it out! A random >>> contributor has no chance of parachuting into the middle of that dynamic >>> and >>> walking out unscathed, and they should never be asked to. >>> >>> >> Absolutely agree! In the case I have in mind, where it happened >> to me personally, I think it was 10-15 revisions apart, so it becomes >> a little hard to identify when you're starting to play the game of "make >> the cores happy to land it". It is not a fun game for the contributor. >> Truthfully it caused me to add the behavior of intentionally waiting >> longer between uploads of new revisions... which does not help at all.
At some point you get into "Oh, what formatting can I >> change to clear that -1 because the person is not responding" mode. >> > > This, by the way, is a much broader and interesting question. In case of a > by-passer leaving a comment ("this link must be https") and disappearing, > the PTL or any core can remove the reviewer from the review. What to do > with a core leaving a comment or non-core leaving a potentially useful > comment and going to PTO? > > > >> At least beginning to shift the review culture should help many of these >> issues. >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Yolanda Robla Mota Principal Software Engineer, RHCE Red Hat C/Avellana 213 Urb Portugal yroblamo at redhat.com M: +34605641639 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sc at linux.it Wed May 30 14:27:42 2018 From: sc at linux.it (Stefano Canepa) Date: Wed, 30 May 2018 15:27:42 +0100 Subject: [openstack-dev] collect up and down time for deployed openstack resources In-Reply-To: References: Message-ID: On 30 May 2018 at 12:57, amal kammoun wrote: > Hi, > > We aim at collecting inter failures time for the compute and network > resources (i.e., availability and reliability of each resource). > > Is there any mean via Restfull to collect these metrics ? > ​You can use monasca-api to get measurements stored by Monasca. You can setup monasca-agent to collect the data needed from the host running the services.​ If not, which OpenStack/celiometer/heat modules may be extended in order to > collect this information via the implementation of a prob protocol for > instance? > > > Thanks in adavance, > > > Regards, > > > Amal > ​All the best Stefano ​ ​ -- Stefano Canepa aka sc sc at linux.it or stefano at canepa.ge.it www.stefanocanepa.it Three great virtues of a programmer: laziness, impatience, and hubris. (Larry Wall) -------------- next part -------------- An HTML attachment was scrubbed... URL: From lbragstad at gmail.com Wed May 30 14:53:40 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Wed, 30 May 2018 09:53:40 -0500 Subject: [openstack-dev] Questions about token scopes In-Reply-To: References: Message-ID: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> On 05/30/2018 08:47 AM, Matt Riedemann wrote: > I know the keystone team has been doing a lot of work on scoped tokens > and Lance has been trying to roll that out to other projects (like nova). > > In Rocky the nova team is adding granular policy rules to the > placement API [1] which is a good opportunity to set scope on those > rules as well. > > For now, we've just said everything is system scope since resources in > placement, for the most part, are managed by "the system". But we do > have some resources in placement which have project/user information > in them, so could theoretically also be scoped to a project, like GET > /usages [2]. 
> > While going through this, I've been hammering Lance with questions but > I had some more this morning and wanted to send them to the list to > help spread the load and share the knowledge on working with scoped > tokens in the other projects. ++ good idea > > So here goes with the random questions: > > * devstack has the admin project/user - does that by default get > system scope tokens? I see the scope is part of the token create > request [3] but it's optional, so is there a default value if not > specified? No, not necessarily. The keystone-manage bootstrap command is what bootstraps new deployments with the admin user, an admin role, a project to work in, etc. It also grants the newly created admin user the admin role on a project and the system. This functionality was added in Queens [0]. This should be backwards compatible and allow the admin user to get tokens scoped to whatever they had authorization on previously. The only thing they should notice is that they have another role assignment on something called the "system". That being said, they can start requesting system-scoped tokens from keystone. We have a document that tries to explain the differences in scopes and what they mean [1]. [0] https://review.openstack.org/#/c/530410/ [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html > > * Why don't the token create and show APIs return the scope? Good question. In a way, they do. If you look at a response when you authenticate for a token or validate a token, you should see an object contained within the token reference for the purpose of scope. For example, a project-scoped token will have a project object in the response [2]. A domain-scoped token will have a domain object in the response [3]. The same is true for system scoped tokens [4]. Unscoped tokens do not have any of these objects present and do not contain a service catalog [5]. While scope isn't explicitly denoted by an attribute, it can be derived from the attributes of the token response. [2] http://paste.openstack.org/raw/722349/ [3] http://paste.openstack.org/raw/722351/ [4] http://paste.openstack.org/raw/722348/ [5] http://paste.openstack.org/raw/722350/ > > * It looks like python-openstackclient doesn't allow specifying a > scope when issuing a token, is that going to be added? Yes, I have a patch up for it [6]. I wanted to get this in during Queens, but it missed the boat. I believe this and a new release of oslo.context are the only bits left in order for services to have everything they need to easily consume system-scoped tokens. Keystonemiddleware should know how to handle system-scoped tokens in front of each service [7]. The oslo.context library should be smart enough to handle system scope set by keystonemiddleware if context is built from environment variables [8]. Both keystoneauth [9] and python-keystoneclient [10] should have what they need to generate system-scoped tokens. That should be enough to allow the service to pass a request environment to oslo.context and use the context object to reason about the scope of the request. As opposed to trying to understand different token scope responses from keystone. We attempted to abstract that away in to the context object. 
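To make that concrete, requesting a system-scoped token with keystoneauth [9] should look roughly like the sketch below (assuming a keystoneauth1 release that includes the system scope support referenced above; the endpoint and credentials are placeholders):

```
# Sketch: build a system-scoped session with keystoneauth1 and use it
# against a (placeholder) placement endpoint. Swap system_scope for
# project_name/project_domain_name to get a project-scoped session.
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(
    auth_url='http://keystone.example.com/v3',  # placeholder
    username='admin',                           # placeholder
    password='secret',                          # placeholder
    user_domain_name='Default',
    system_scope='all',
)
sess = session.Session(auth=auth)

resp = sess.get('http://placement.example.com/placement/resource_providers',
                headers={'OpenStack-API-Version': 'placement latest'})
```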
[6] https://review.openstack.org/#/c/524416/ [7] https://review.openstack.org/#/c/564072/ [8] https://review.openstack.org/#/c/530509/ [9] https://review.openstack.org/#/c/529665/ [10] https://review.openstack.org/#/c/524415/ > > The reason I'm asking about OSC stuff is because we have the > osc-placement plugin [4] which allows users with the admin role to > work with resources in placement, which could be useful for things > like fixing up incorrect or leaked allocations, i.e. fixing the > fallout of a bug in nova. I'm wondering if we define all of the > placement API rules as system scope and we're enforcing scope, will > admins, as we know them today, continue to be able to use those APIs? > Or will deployments just need to grow a system-scope admin > project/user and per-project admin users, and then use the former for > working with placement via the OSC plugin? Uhm, if I understand your question, it depends on how you define the scope types for those APIs. If you set them to system-scope, then an operator will need to use a system-scoped token in order to access those APIs iff the placement configuration file (placement.conf) sets enforce_scope = True in its [oslo.policy] section. Otherwise, setting that option to false will log a warning to operators saying that someone is accessing a system-scoped API with a project-scoped token (e.g. education needs to happen). > > [1] > https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged) > [2] https://developer.openstack.org/api-ref/placement/#list-usages > [3] > https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization > [4] https://docs.openstack.org/osc-placement/latest/index.html > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From julien at danjou.info Wed May 30 14:59:10 2018 From: julien at danjou.info (Julien Danjou) Date: Wed, 30 May 2018 16:59:10 +0200 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: ("Elõd Illés"'s message of "Wed, 30 May 2018 14:55:44 +0200") References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Message-ID: On Wed, May 30 2018, Elõd Illés wrote: > cmd2 says that: > > "Python 2.7 support is EOL > > Support for adding new features to the Python 2.7 release of |cmd2| was > discontinued on April 15, 2018. Bug fixes will be supported for Python 2.7 via > 0.8.x until August 31, 2018. > > Supporting Python 2 was an increasing burden on our limited resources. > Switching to support only Python 3 will allow us to clean up the codebase, > remove some cruft, and focus on developing new features." Erf. :( So the problem is that cmd2 is not a requirement in Ceilometer. It's pulled in by cliff. As far as I can see, cliff's latest version (2.11.0) announces it supports Python 2.7 (obviously) while also depending on cmd2>=0.6.7. Cliff needs to upper cap cmd2 if it wants to support Python 2. I see cliff has already set that requirement in its master branch, but no new version has been released. Releasing a new cliff version should fix that issue. -- Julien Danjou // Free Software hacker // https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed...
-- Julien Danjou // Free Software hacker // https://julien.danjou.info -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From zigo at debian.org Wed May 30 15:06:39 2018 From: zigo at debian.org (Thomas Goirand) Date: Wed, 30 May 2018 17:06:39 +0200 Subject: [openstack-dev] [horizon] Font awesome currently broken with Debian Sid and Horizon In-Reply-To: References: <48349161-33c5-59cb-97ca-6b397b93d592@debian.org> <7551cc21-a759-147f-9720-2dbf7a45b1aa@goirand.fr> Message-ID: On 05/30/2018 04:13 PM, Ivan Kolodyazhny wrote: > Hi Thomas, > > > As my python3-xstatic-font-awesome removes the embedded fonts > > > It sounds like you broke xstatic-* packages for Debian and use something > we don't test with Horizon at all. > > Speaking about the Rocky/master version, our upper constraint is > XStatic-Font-Awesome===4.7.0.0 [1]. We don't test horizon with font > awesome v 5.0.10. > > > Second, it'd be nice if Horizon could adapt and use the new v5 > font-awesome, so that the problem is completely solved. > > +1. I'll put my +2/A once somebody provides a patch for it with a > detailed description of how I can test it. Unfortunately, the Horizon team has > a very limited set of resources, so we can't adopt new versions of > xstatic-* quickly :(. > > [1] https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L61 > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ Ivan, The point of Xstatic packages is so that, in distributions, they depend on the asset which is packaged separately, so that there's no duplication of data in the distro. In this case, the python3-xstatic-font-awesome package depends on the fonts-font-awesome package. And it is the latter that got updated in Debian. I don't maintain it, so it's not my fault. This broke many packages, including openstackdocstheme. Of course, I could revert what was previously done, and have python3-xstatic-font-awesome contain the font data again. But that's not desirable. What we really want is to have Horizon fixed, and use a newer version of font-awesome (i.e. v5) if possible. Using only glyphs from fa-solid-900 will make it possible to have Horizon work with both v4 and v5, which would be even better (of course, package maintainers would have to set correct links to the right font file, but that's a packaging detail). Cheers, Thomas Goirand (zigo) From zbitter at redhat.com Wed May 30 15:13:41 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 30 May 2018 11:13:41 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <154ec962-d401-cc7e-5f9d-3335fcac665d@redhat.com> References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> <154ec962-d401-cc7e-5f9d-3335fcac665d@redhat.com> Message-ID: On 30/05/18 00:52, Cédric Jeanneret wrote: >> Another issue is that if the original author needs to rev the patch >> again for any reason, they then need to figure out how to check out the >> modified patch. This requires a fairly sophisticated knowledge of both >> git and gerrit, which isn't a problem for those of us who have been >> using them for years but is potentially a nightmarish introduction for a >> relatively new contributor. Sometimes it's the right choice though >> (especially if the patch owner hasn't been seen for a while).
> hm, "Download" -> copy/paste, and Voilà. Gerrit interface is pretty nice > with the user (I an "old new contributor", never really struggled with > Gerrit itself.. On the other hand, heat, ansible, that's another story :) ). OK, so I am sitting here with a branch containing a patch I have sent for review, and that I need to revise, but somebody else has pushed a revision upstream. Which of the 4 'Download' commands do I use to replace my commit with the latest one from upstream? (Hint: it's a trick question) Now imagine it wasn't the last patch in the series. - ZB (P.S. without testing, I believe the correct answers are `git reset --hard FETCH_HEAD` and `git rebase HEAD~ --onto FETCH_HEAD` respectively.) From fungi at yuggoth.org Wed May 30 15:15:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 30 May 2018 15:15:27 +0000 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> Message-ID: <20180530151527.hgir52lxnowljffi@yuggoth.org> On 2018-05-30 16:59:10 +0200 (+0200), Julien Danjou wrote: [...] > I see cliff has already set that requirements in its master branch, but > no new version has been released. Releasing a new cliff version should > fix that issue. Yes, that's in progress today. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ltoscano at redhat.com Wed May 30 15:39:57 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Wed, 30 May 2018 17:39:57 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <154ec962-d401-cc7e-5f9d-3335fcac665d@redhat.com> Message-ID: <1667389.sddy1eftd0@whitebase.usersys.redhat.com> On Wednesday, 30 May 2018 17:13:41 CEST Zane Bitter wrote: > On 30/05/18 00:52, Cédric Jeanneret wrote: > >> Another issue is that if the original author needs to rev the patch > >> again for any reason, they then need to figure out how to check out the > >> modified patch. This requires a fairly sophisticated knowledge of both > >> git and gerrit, which isn't a problem for those of us who have been > >> using them for years but is potentially a nightmarish introduction for a > >> relatively new contributor. Sometimes it's the right choice though > >> (especially if the patch owner hasn't been seen for a while). > > > > hm, "Download" -> copy/paste, and Voilà. Gerrit interface is pretty nice > > with the user (I an "old new contributor", never really struggled with > > Gerrit itself.. On the other hand, heat, ansible, that's another story :) > > ). > OK, so I am sitting here with a branch containing a patch I have sent > for review, and that I need to revise, but somebody else has pushed a > revision upstream. Which of the 4 'Download' commands do I use to > replace my commit with the latest one from upstream? > > (Hint: it's a trick question) > > Now imagine it wasn't the last patch in the series. 
Maybe using git review is enough: git review -d <change-number> We document git review -d for rebasing and adding a new review on top of existing reviews, but not explicitly for amending a change: https://docs.openstack.org/infra/manual/developers.html Ciao -- Luigi From cboylan at sapwetik.org Wed May 30 15:42:45 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 30 May 2018 08:42:45 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <20180529150920.GA21733@csail.mit.edu> <4488099e-1391-a020-0365-342556a8b2e1@gmail.com> <1527623552-sup-8763@lrrr.local> <20180529200506.GD21733@csail.mit.edu> <1527624908-sup-8314@lrrr.local> <09731e0c-5528-6c2c-68f0-89062bc39e9b@gmail.com> <5C428395-0B4B-446B-B1E9-F9A4CA5D7DCD@redhat.com> <8ffae6f3-e78e-580b-86ec-1c791a6f3aba@redhat.com> <154ec962-d401-cc7e-5f9d-3335fcac665d@redhat.com> Message-ID: <1527694965.3599295.1390619800.19182BBC@webmail.messagingengine.com> On Wed, May 30, 2018, at 8:13 AM, Zane Bitter wrote: > On 30/05/18 00:52, Cédric Jeanneret wrote: > >> Another issue is that if the original author needs to rev the patch > >> again for any reason, they then need to figure out how to check out the > >> modified patch. This requires a fairly sophisticated knowledge of both > >> git and gerrit, which isn't a problem for those of us who have been > >> using them for years but is potentially a nightmarish introduction for a > >> relatively new contributor. Sometimes it's the right choice though > >> (especially if the patch owner hasn't been seen for a while). > > hm, "Download" -> copy/paste, and Voilà. The Gerrit interface is pretty nice > > to the user (I am an "old new contributor", never really struggled with > > Gerrit itself.. On the other hand, heat, ansible, that's another story :) ). > > OK, so I am sitting here with a branch containing a patch I have sent > for review, and that I need to revise, but somebody else has pushed a > revision upstream. Which of the 4 'Download' commands do I use to > replace my commit with the latest one from upstream? > > (Hint: it's a trick question) > > Now imagine it wasn't the last patch in the series. > > - ZB > > (P.S. without testing, I believe the correct answers are `git reset > --hard FETCH_HEAD` and `git rebase HEAD~ --onto FETCH_HEAD` > respectively.) We do have tools for this and it is the same tool you use to push code to gerrit. `git review -d changenumber` will grab the latest patchset from that change and check it out locally. You can use `git review -d changenumber,patchsetnumber` to pick a different older patchset. If you have a series of changes, things become more complicated. I personally like to always operate against the leaf-most change, make local updates, then squash "back" onto the appropriate changes in the series to keep existing changes happy. Yes, this is complicated, especially for new users. In general though I think git review addresses the simpler cases of "I need to use the latest version of my change". If we use change series as proposed in this thread, I think keeping the parent of the child changes up to date is going to be up to the more experienced nitpicker that is addressing the minor problems and not the original change author.
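For the simple single-change case the whole loop is short (an untested sketch; 12345 is a stand-in change number, not a real review):

    git review -d 12345       # fetch and check out the latest patchset
    # ...edit files, then fold your fixes into the existing commit...
    git commit -a --amend
    git review                # push the updated patchset back for review

It is the series case that needs the squash/rebase gymnastics described above.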
Clark From doug at doughellmann.com Wed May 30 15:44:03 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 11:44:03 -0400 Subject: [openstack-dev] [stable] [tooz] [ceilometer] cmd2 without upper constraints causes errors in tox-py27 In-Reply-To: <20180530130030.zbx43r2ozgrluotk@yuggoth.org> References: <1b65e553-97c9-eb4c-2de7-7e372c4a7d11@ericsson.com> <20180530130030.zbx43r2ozgrluotk@yuggoth.org> Message-ID: <1527694850-sup-3207@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-05-30 13:00:31 +0000: > On 2018-05-30 14:42:58 +0200 (+0200), Julien Danjou wrote: > [...] > > The question is: why cmd2 0.9.0 does not work and how do we fix that? > > The cmd2 maintainers decided that they no longer want to carry > support for old Python interpreters, so versions starting with 0.9.0 > only work on Python 3.4 and later: > > https://github.com/python-cmd2/cmd2/issues/421 > I expect we're going to see more and more upstream tools dropping python 2 support as we approach the 2020 deadline. It would help the community immensely if we had a few people who could become familiar enough with environment markers and requirements specification to be ready to help folks fix the problems that come up when we have future cases. I'm sure the requirements team would be happy to help find documentation and explain the fix for this specific case if you have questions. Doug From johnsomor at gmail.com Wed May 30 16:10:13 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 30 May 2018 09:10:13 -0700 Subject: [openstack-dev] [octavia] TERMINATED_HTTPS + SSL to backend server In-Reply-To: <19835_1527669903_5B0E648F_19835_298_1_78d339a4a6b7445d9a57d899f09a7587@orange.com> References: <19835_1527669903_5B0E648F_19835_298_1_78d339a4a6b7445d9a57d899f09a7587@orange.com> Message-ID: Hi Mihaela, Backend re-encryption is on our roadmap [1], but not yet implemented. We have all of the technical pieces to make this work; it's just a matter of someone getting time to do the API additions and update the flows. [1] https://wiki.openstack.org/wiki/Octavia/Roadmap#Considerations_for_Octavia_3.0.2B Michael On Wed, May 30, 2018 at 1:45 AM, wrote: > Hello, > > Is there any user story for the scenario below? > > - Octavia is set to TERMINATED_HTTPS and also initiates SSL to > backend servers > > After testing all the possible combinations and after looking at the Octavia > haproxy templates in the Queens version, I understand that this kind of setup is > currently not supported. > > Thanks, > Mihaela > > _________________________________________________________________________________________________________________________ > > This message and its attachments may contain confidential or privileged > information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and > delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been > modified, changed or falsified.
> Thank you. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kristi at nikolla.me Wed May 30 16:15:48 2018 From: kristi at nikolla.me (Kristi Nikolla) Date: Wed, 30 May 2018 12:15:48 -0400 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: <2Etp3nNg2g2Y0qt0tvCzONyz5VR5CPAMEXYrYBaJMbgOtz06vhvGO4tA8m7w2L5anOPGelk8_iXmNrAt_get2ec_K7IqoyXy_-I_3VgLKGQ=@nikolla.me> Thank you for all your invaluable contributions Henry. It has been a pleasure working with you. Best of luck! Kristi ‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐ On May 30, 2018 4:45 AM, Henry Nash wrote: > Hi > > It is with a somewhat heavy heart that I have decided that it is time to hang up my keystone core status. Having been involved since the closing stages of Folsom, I've had a good run! When I look at how far keystone has come since the v2 days, it is remarkable - and we should all feel a sense of pride in that. > > Thanks to all the hard work, commitment, humour and support from all the keystone folks over the years - I am sure we will continue to interact and meet among the many other open source projects that many of us are becoming involved with. Ad astra! > > Best regards, > > Henry > Twitter: @henrynash > linkedIn: www.linkedin.com/in/henrypnash > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Wed May 30 16:25:14 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 30 May 2018 09:25:14 -0700 Subject: [openstack-dev] Winterscale: a proposal regarding the project infrastructure Message-ID: <87o9gxdsb9.fsf@meyer.lemoncheese.net> Hi, With recent changes implemented by the OpenStack Foundation to include projects other than "OpenStack" under its umbrella, it has become clear that the "Project Infrastructure Team" needs to change. The infrastructure that is run for the OpenStack project is valued by other OpenStack Foundation projects (and beyond). Our community has not only produced an amazing cloud infrastructure system, but it has also pioneered new tools and techniques for software development and collaboration. For some time it's been apparent that we need to alter the way we run services in order to accommodate other Foundation projects. We've been talking about this informally for at least the last several months. One of the biggest sticking points has been a name for the effort. It seems very likely that we will want a new top-level domain for hosting multiple projects in a neutral environment (so that people don't have to say "hosted on OpenStack's infrastructure"). But finding such a name is difficult, and even before we do, we need to talk about it. I propose we call the overall effort "winterscale". In the best tradition of code names, it means nothing; look for no hidden meaning here. We won't use it for any actual services we provide. We'll use it to refer to the overall effort of restructuring our team and infrastructure to provide services to projects beyond OpenStack itself. And we'll stop using it when the restructuring effort is concluded. 
This is my first proposal: that we acknowledge this effort is underway and name it as such. My second proposal is an organizational structure for this effort. First, some goals: * The infrastructure should be collaboratively run as it is now, and the operational decisions should be made by the core reviewers as they are now. * Issues of service definition (i.e., what services we offer and how they are used) should be made via a collaborative process including the infrastructure operators and the projects which use it. To that end, I propose that we: * Work with the Foundation to create a new effort independent of the OpenStack project with the goal of operating infrastructure for the wider OpenStack Foundation community. * Work with the Foundation marketing team to help us with the branding and marketing of this effort. * Establish a "winterscale infrastructure team" (to be renamed) consisting of the current infra-core team members to operate this effort. * Move many of the git repos currently under the OpenStack project infrastructure team's governance to this new team. * Establish a "winterscale infrastructure council" (to be renamed) which will govern the services that the team provides by vote. The council will consist of the PTL of the winterscale infrastructure team and one member from each official OpenStack Foundation project. Currently, as I understand it, there's only one: OpenStack. But we expect kata, zuul, and others to be declared official in the not too distant future. The winterscale representative (the PTL) will have tiebreaking and veto power over council decisions. (This is structured loosely based on the current Infrastructure Council used by the OpenStack Project Infrastructure Team.) None of this is obviously final. My goal here is to give this effort a name and a starting point so that we can discuss it and make progress. -Jim From sfinucan at redhat.com Wed May 30 16:26:20 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 30 May 2018 17:26:20 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1524754167.6216.30.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local> <1210453e-e9af-ff50-6d61-af47afd47857@redhat.com> <20180425170644.GB459@sm-xps> <1524754167.6216.30.camel@redhat.com> Message-ID: <467231f8d7721180e96b05b2ee9122640bd8965b.camel@redhat.com> On Thu, 2018-04-26 at 15:49 +0100, Stephen Finucane wrote: > On Wed, 2018-04-25 at 12:06 -0500, Sean McGinnis wrote: > > > > > > > > > > [1] https://review.openstack.org/#/c/564232/ > > > > > > > > > > > > > The only concern I have is that it may slow the transition to the > > > > python 3 version of the jobs, since someone would have to actually > > > > fix the warnings before they could add the new job. I'm not sure I > > > > want to couple the tasks of fixing doc build warnings with also > > > > making those docs build under python 3 (which is usually quite > > > > simple). > > > > > > > > Is there some other way to enable this flag independently of the move to > > > > the python3 job? 
> > > The existing proposal is: > > > > > > https://review.openstack.org/559348 > > > > > > TL;DR if you still have a build_sphinx section in setup.cfg then defaults > > > will remain the same, but when removing it as part of the transition to the > > > new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't > > > hurt to leave that section in place as long as you need, and you can still > > > do the rest of the PTI conversion.) > > > > > > The hold-up is that the job in question is also potentially used by other > > > Zuul users outside of OpenStack - including those who aren't using pbr at > > > all (i.e. there's no setup.cfg). So we need to warn those folks to prepare. > > > > > > cheers, > > > Zane. > > > > > > > Ah, I had looked but did not find an existing proposal. Looks like that would > > work too. I am good either way, but I will leave my approach out there just as > > another option to consider. I'll abandon that if folks prefer this way. > > Yeah, I reviewed your patch but assumed you'd seen mine already and > were looking for a quicker alternative. > > I've started the process of adding this to zuul-jobs by posting the > warning to zuul-announce (though it's awaiting moderation by corvus). We > only need to wait two weeks after sending that message before we can > merge the patch to zuul-jobs, so I guess we should go that way now? > > Stephen This is now merged: https://review.openstack.org/#/c/559348/ For anyone who has migrated to the new PTI for documentation (namely, not using 'python setup.py build_sphinx'), you may find your docs builds are now broken. This will likely be a result of regressions introduced since the migration to the new PTI and should be trivial to resolve. If you have any issues, feel free to jump on #openstack-doc where one of the lovely docs people will be happy to help. Cheers, Stephen PS: I do realize this has likely incurred some extra work for some projects. No one likes busy work and I'm sorry to have forced that upon people, but this change was essential to ensure every project could rely on the gate to actually validate their documentation. From pkovar at redhat.com Wed May 30 16:27:22 2018 From: pkovar at redhat.com (Petr Kovar) Date: Wed, 30 May 2018 18:27:22 +0200 Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-05-30 In-Reply-To: <20180530140547.338e95eb245baed88ba76d83@redhat.com> References: <20180530140547.338e95eb245baed88ba76d83@redhat.com> Message-ID: <20180530182722.dec77a8ea9bfb5ee5e0edffe@redhat.com> ======================= #openstack-doc: docteam ======================= Meeting started by pkovar at 16:00:44 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-05-30-16.00.log.html .
Meeting summary --------------- * Ops docs discussion (pkovar, 16:05:40) * LINK: http://lists.openstack.org/pipermail/openstack-operators/2018-May/015318.html (pkovar, 16:05:45) * Ops community plans to take ownership of the ops, ha and architecture guides (pkovar, 16:05:49) * though the details and scope of content are to be agreed on yet (pkovar, 16:06:38) * primarily, the community wants to work on ops guide (pkovar, 16:07:02) * Contributor Guide lead (pkovar, 16:09:21) * After Mike P's departure, Kendall N is the new lead (pkovar, 16:09:26) * Default warning-is-error to True for non-legacy Sphinx projects (pkovar, 16:09:37) * LINK: https://review.openstack.org/#/c/559348/ (pkovar, 16:09:43) * Just merged today (pkovar, 16:09:47) * Vancouver Summit (pkovar, 16:11:46) * Docs-i18N - Project Update recording available online, thanks to everybody involved! (pkovar, 16:11:51) * LINK: https://www.youtube.com/watch?v=FIGErKqXAy8 (pkovar, 16:11:55) * Bug Triage Team (pkovar, 16:12:36) * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams (pkovar, 16:12:41) * Backlog stable, number of unresolved bugs started to decrease recently (pkovar, 16:12:45) * Ops docs discussion (pkovar, 16:22:56) * when migrating the ops guide, it might be easier for ops folks to just copy the rst files from openstack-manuals (pkovar, 16:22:58) * and save time on conversion from wiki (pkovar, 16:23:35) Meeting ended at 16:25:49 UTC. People present (lines said) --------------------------- * pkovar (39) * dhellmann (15) * openstack (3) * stephenfin (3) Generated by `MeetBot`_ 0.1.4 From doug at doughellmann.com Wed May 30 16:43:50 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 12:43:50 -0400 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87o9gxdsb9.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> Message-ID: <1527698378-sup-1721@lrrr.local> Excerpts from corvus's message of 2018-05-30 09:25:14 -0700: > Hi, > > With recent changes implemented by the OpenStack Foundation to include > projects other than "OpenStack" under its umbrella, it has become clear > that the "Project Infrastructure Team" needs to change. > > The infrastructure that is run for the OpenStack project is valued by > other OpenStack Foundation projects (and beyond). Our community has not > only produced an amazing cloud infrastructure system, but it has also > pioneered new tools and techniques for software development and > collaboration. > > For some time it's been apparent that we need to alter the way we run > services in order to accommodate other Foundation projects. We've been > talking about this informally for at least the last several months. One > of the biggest sticking points has been a name for the effort. It seems > very likely that we will want a new top-level domain for hosting > multiple projects in a neutral environment (so that people don't have to > say "hosted on OpenStack's infrastructure"). But finding such a name is > difficult, and even before we do, we need to talk about it. > > I propose we call the overall effort "winterscale". In the best > tradition of code names, it means nothing; look for no hidden meaning > here. We won't use it for any actual services we provide. We'll use it > to refer to the overall effort of restructuring our team and > infrastructure to provide services to projects beyond OpenStack itself. > And we'll stop using it when the restructuring effort is concluded. 
> > This is my first proposal: that we acknowledge this effort is underway > and name it as such. > > My second proposal is an organizational structure for this effort. > First, some goals: > > * The infrastructure should be collaboratively run as it is now, and > the operational decisions should be made by the core reviewers as > they are now. > > * Issues of service definition (i.e., what services we offer and how > they are used) should be made via a collaborative process including > the infrastructure operators and the projects which use it. > > To that end, I propose that we: > > * Work with the Foundation to create a new effort independent of the > OpenStack project with the goal of operating infrastructure for the > wider OpenStack Foundation community. > > * Work with the Foundation marketing team to help us with the branding > and marketing of this effort. > > * Establish a "winterscale infrastructure team" (to be renamed) > consisting of the current infra-core team members to operate this > effort. > > * Move many of the git repos currently under the OpenStack project > infrastructure team's governance to this new team. I'm curious about the "many" in that sentence. Which do you anticipate not moving, and if this new team replaces the existing team then who would end up owning the ones that do not move? > > * Establish a "winterscale infrastructure council" (to be renamed) which > will govern the services that the team provides by vote. The council > will consist of the PTL of the winterscale infrastructure team and one > member from each official OpenStack Foundation project. Currently, as > I understand it, there's only one: OpenStack. But we expect kata, > zuul, and others to be declared official in the not too distant > future. The winterscale representative (the PTL) will have > tiebreaking and veto power over council decisions. That structure seems sound, although it means the council is going to be rather small (at least in the near term). What sorts of decisions do you anticipate needing to be addressed by this council? > > (This is structured loosely based on the current Infrastructure > Council used by the OpenStack Project Infrastructure Team.) > > None of this is obviously final. My goal here is to give this effort a > name and a starting point so that we can discuss it and make progress. > > -Jim > Thanks for starting this thread! I've replied to both mailing lists because I wasn't sure which was more appropriate. Please let me know if I should focus future replies on one list. Doug From doug at doughellmann.com Wed May 30 16:45:26 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 12:45:26 -0400 Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky In-Reply-To: <20180529225745.pkgavspefqpu4nah@yuggoth.org> References: <1527628983-sup-2281@lrrr.local> <20180529225745.pkgavspefqpu4nah@yuggoth.org> Message-ID: <1527698665-sup-4435@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-05-29 22:57:45 +0000: > On 2018-05-29 17:26:16 -0400 (-0400), Doug Hellmann wrote: > [...] > > There was some discussion of whether the office hours themselves > > are useful, based on the apparent lack of participation. We had > > theories that this was a combination of bad times (meaning that TC > > members haven't always attended) and bad platforms (meaning that > > some parts of the community we are trying to reach may emphasize > > other tools over IRC for real time chat). We need to look into that. > [...] 
> > We also had some consensus in the room on starting to use meetbot > during office hours to highlight whatever we discussed in the > resulting meeting minutes. I'm planning to do that at our 01:00z > office hour (a couple hours from now) and see how it goes, though > that timeslot in particular tends to have little or no discussion. Thanks for remembering, and reminding me. The minutes from the most recent office hours are at http://eavesdrop.openstack.org/meetings/tc/2018/tc.2018-05-30-01.00.html for anyone interested in following them. Doug From corvus at inaugust.com Wed May 30 17:09:23 2018 From: corvus at inaugust.com (James E. Blair) Date: Wed, 30 May 2018 10:09:23 -0700 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <1527698378-sup-1721@lrrr.local> (Doug Hellmann's message of "Wed, 30 May 2018 12:43:50 -0400") References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <1527698378-sup-1721@lrrr.local> Message-ID: <87wovlcbp8.fsf@meyer.lemoncheese.net> Doug Hellmann writes: >> * Move many of the git repos currently under the OpenStack project >> infrastructure team's governance to this new team. > > I'm curious about the "many" in that sentence. Which do you anticipate > not moving, and if this new team replaces the existing team then who > would end up owning the ones that do not move? There are a lot. Generally speaking, I think most of the custom software, deployment tooling, and configuration would move. An example of something that probably shouldn't move is "openstack-zuul-jobs". We still need people that are concerned with how OpenStack uses the winterscale service. I'm not sure whether that should be its own team or should those functions get folded into other teams. >> * Establish a "winterscale infrastructure council" (to be renamed) which >> will govern the services that the team provides by vote. The council >> will consist of the PTL of the winterscale infrastructure team and one >> member from each official OpenStack Foundation project. Currently, as >> I understand it, there's only one: OpenStack. But we expect kata, >> zuul, and others to be declared official in the not too distant >> future. The winterscale representative (the PTL) will have >> tiebreaking and veto power over council decisions. > > That structure seems sound, although it means the council is going > to be rather small (at least in the near term). What sorts of > decisions do you anticipate needing to be addressed by this council? Yes, very small. Perhaps we need an interim structure until it gets larger? Or perhaps just discipline and agreement that the two people on it will consult with the necessary constituencies and represent them well? I expect the council not to have to vote very often. Perhaps only on substantial changes to services (bringing a new offering online, retiring a disused offering, establishing parameters of a service). As an example, the recent thread on "terms of service" would be a good topic for the council to settle. >> (This is structured loosely based on the current Infrastructure >> Council used by the OpenStack Project Infrastructure Team.) >> >> None of this is obviously final. My goal here is to give this effort a >> name and a starting point so that we can discuss it and make progress. >> >> -Jim >> > > Thanks for starting this thread! I've replied to both mailing lists > because I wasn't sure which was more appropriate. Please let me > know if I should focus future replies on one list. 
Indeed, perhaps we should steer this toward openstack-dev now. I'll drop openstack-infra from future replies. -Jim From fungi at yuggoth.org Wed May 30 17:30:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 30 May 2018 17:30:23 +0000 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87wovlcbp8.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <1527698378-sup-1721@lrrr.local> <87wovlcbp8.fsf@meyer.lemoncheese.net> Message-ID: <20180530173022.pi5u6ftnmtjxhuam@yuggoth.org> On 2018-05-30 10:09:23 -0700 (-0700), James E. Blair wrote: [...] > An example of something that probably shouldn't move is > "openstack-zuul-jobs". We still need people that are concerned with how > OpenStack uses the winterscale service. I'm not sure whether that > should be its own team or should those functions get folded into other > teams. [...] At least for some undefined transitional period, anything we think shouldn't move can remain the domain of the existing Infrastructure team under OpenStack TC governance. Once we get a better feel for what's left, that team will likely get scaled back to just the remaining interested maintainers and its continued governance situation can be revisited at that future date. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gael.therond at gmail.com Wed May 30 17:34:17 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Wed, 30 May 2018 19:34:17 +0200 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: Hi Gilles, I hope you enjoyed your Summit! Did you have any interesting talks to report about our little initiative? Le dim. 6 mai 2018 à 15:01, Gilles Dubreuil a écrit : > > Akihiro, thank you for your precious help! > > Regarding the choice of Neutron as the PoC, I'm sorry for not providing much > detail when I said "because of its specific data model", > effectively the original mention was "its API exposes things at an > individual table level, requiring the client to join that information to > get the answers they need". > I realize now such a description probably applies to many OpenStack APIs. > So I'm not sure what was the reason for choosing Neutron. > I suppose Nova is also a good candidate because its API is quite complex too, > in a different way, and needs to expose the data API and the control API > plane as we discussed. > > After all, Neutron is maybe not the best candidate but it seems good > enough. > > And as Flint says, the extension mechanism shouldn't be an issue. > > So if anyone believes there is a better candidate for the PoC, please > speak now. > > Thanks, > Gilles > > PS: Flint, Thank you for offering to be the advocate for Berlin. That's > great! > > > On 06/05/18 02:23, Flint WALRUS wrote: > > Hi Akihiro, > > Thanks a lot for this insight on how neutron behaves. > > We would love to get support and backing from the neutron team in order to > be able to get the best PoC possible. > > Someone suggested neutron as a good choice because of its simple database > model. As GraphQL can manage the behavior of an extension declaring its > own schemas, I don’t think it would take that much time to implement it. > > @Gilles, if I go to the Berlin summit I could definitely do the > networking and relationship work needed to get support on our PoC from > different team members. This would help to spread the word multiple times > so we don’t wait a long time before someone comes to talk about this subject > again, as happened with the 2015 talk by the Facebook speaker. > > Le sam. 5 mai 2018 à 18:05, Akihiro Motoki a écrit : >> Hi, >> >> I am happy to see the effort to explore a new API mechanism. >> I would like to see good progress and help the effort as API liaison from the >> neutron team. >> >> > Neutron has been selected for the PoC because of its specific data >> model >> >> On the other hand, I am not sure this is the right reason to choose >> 'neutron' for this reason alone. I would like to note "its specific data >> model" is not the reason that makes the progress of API versioning slowest >> in the OpenStack community. I believe it is worth recognizing, as I would >> not like to block the effort due to neutron-specific reasons. >> The most complicated point in the neutron API is that the neutron API >> layer allows neutron plugins to declare which features are supported. The >> neutron API is a collection of API extensions defined in the neutron-lib >> repo and each neutron plugin can declare which subset(s) of the neutron >> APIs are supported. (For more detail, you can check how the neutron API >> extension mechanism is implemented.) It is not defined only by the neutron >> API layer. We need to communicate which API features are supported by >> communicating enabled service plugins. >> >> I am afraid that most efforts to explore a new mechanism in neutron will >> be spent addressing the above points, which are not directly related to >> GraphQL itself. >> Of course, it would be great if you overcame long-standing complicated >> topics as part of the GraphQL effort :) >> >> I am happy to help the effort and understand how the neutron API is >> defined. >> >> Thanks, >> Akihiro >> >> >> 2018年5月5日(土) 18:16 Gilles Dubreuil : >>> Hello, >>> >>> Few of us recently discussed [1] how GraphQL [2], the next evolution >>> from REST, could transform OpenStack APIs for the better. >>> Effectively, we believe OpenStack APIs provide perfect use cases for >>> the GraphQL DSL approach, bringing, among other advantages, better >>> performance and stability, easier development and consumption, and, with >>> the GraphQL schema, automation capabilities never achieved before. >>> >>> The API SIG suggested starting an API GraphQL Proof of Concept (PoC) to >>> demonstrate the capabilities before eventually extending GraphQL to other >>> projects. >>> Neutron has been selected for the PoC because of its specific data model. >>> >>> So if you are interested, please join us. >>> For those who can make it, we'll also discuss this during the API SIG >>> BoF at the OpenStack Summit in Vancouver [3] >>> >>> To learn more about GraphQL, check out howtographql.com [4]. >>> >>> So let's get started...
>>> >>> >>> [1] >>> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html >>> [2] http://graphql.org/ >>> [3] >>> >>> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session >>> [4] https://www.howtographql.com/ >>> >>> Regards, >>> Gilles >>> >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Wed May 30 18:22:42 2018 From: gagehugo at gmail.com (Gage Hugo) Date: Wed, 30 May 2018 13:22:42 -0500 Subject: [openstack-dev] [keystone] Signing off In-Reply-To: References: Message-ID: It was great working with you Henry. Hope to see you around sometime and wishing you all the best! On Wed, May 30, 2018 at 3:45 AM, Henry Nash wrote: > Hi > > It is with a somewhat heavy heart that I have decided that it is time to > hang up my keystone core status. Having been involved since the closing > stages of Folsom, I've had a good run! When I look at how far keystone has > come since the v2 days, it is remarkable - and we should all feel a sense > of pride in that. > > Thanks to all the hard work, commitment, humour and support from all the > keystone folks over the years - I am sure we will continue to interact and > meet among the many other open source projects that many of us are becoming > involved with. Ad astra! > > Best regards, > > Henry > Twitter: @henrynash > linkedIn: www.linkedin.com/in/henrypnash > > Unless stated otherwise above: > IBM United Kingdom Limited - Registered in England and Wales with number > 741598. > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed May 30 18:48:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 14:48:30 -0400 Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky In-Reply-To: <1527628983-sup-2281@lrrr.local> References: <1527628983-sup-2281@lrrr.local> Message-ID: <1527705783-sup-2521@lrrr.local> Excerpts from Doug Hellmann's message of 2018-05-29 17:26:16 -0400: [snip] > Chris brought up a concern about whether we have much traction on > "doing stuff" and especially "getting things done that not everyone > wants," Graham noted a lack of "visible impact," and Zane mentioned > the TC vision in particular. 
Based on conversations last week, I > am currently tracking a list of 20+ things the TC is working on. I > will add the public ones to the wiki page this week as I catch up > with my notes (remember, sometimes these things involve disputes > that can be more smoothly handled one-on-one, so not everything > that is going on is necessarily going to have its own email thread > announcing it). I have updated our tracking page in the wiki (https://wiki.openstack.org/wiki/Technical_Committee_Tracker). I listed drivers for the topics where I knew we had commitments. Several of you raised issues, but I didn't want to volunteer anyone to work on something just because they raised it. I'm missing details and/or whole topics. Please review the list and make any updates you think are necessary. Doug From doug at doughellmann.com Wed May 30 19:03:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 15:03:28 -0400 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <20180530173022.pi5u6ftnmtjxhuam@yuggoth.org> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <1527698378-sup-1721@lrrr.local> <87wovlcbp8.fsf@meyer.lemoncheese.net> <20180530173022.pi5u6ftnmtjxhuam@yuggoth.org> Message-ID: <1527706991-sup-8688@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-05-30 17:30:23 +0000: > On 2018-05-30 10:09:23 -0700 (-0700), James E. Blair wrote: > [...] > > An example of something that probably shouldn't move is > > "openstack-zuul-jobs". We still need people that are concerned with how > > OpenStack uses the winterscale service. I'm not sure whether that > > should be its own team or should those functions get folded into other > > teams. > [...] > > At least for some undefined transitional period, anything we think > shouldn't move can remain the domain of the existing Infrastructure > team under OpenStack TC governance. Once we get a better feel for > what's left, that team will likely get scaled back to just the > remaining interested maintainers and its continued governance > situation can be revisited at that future date. That feels like a good solution. Doug From doug at doughellmann.com Wed May 30 19:08:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 30 May 2018 15:08:56 -0400 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87wovlcbp8.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <1527698378-sup-1721@lrrr.local> <87wovlcbp8.fsf@meyer.lemoncheese.net> Message-ID: <1527707031-sup-204@lrrr.local> Excerpts from corvus's message of 2018-05-30 10:09:23 -0700: > Doug Hellmann writes: > > >> * Move many of the git repos currently under the OpenStack project > >> infrastructure team's governance to this new team. > > > > I'm curious about the "many" in that sentence. Which do you anticipate > > not moving, and if this new team replaces the existing team then who > > would end up owning the ones that do not move? > > There are a lot. Generally speaking, I think most of the custom > software, deployment tooling, and configuration would move. > > An example of something that probably shouldn't move is > "openstack-zuul-jobs". We still need people that are concerned with how > OpenStack uses the winterscale service. I'm not sure whether that > should be its own team or should those functions get folded into other > teams. 
> > >> * Establish a "winterscale infrastructure council" (to be renamed) which > >> will govern the services that the team provides by vote. The council > >> will consist of the PTL of the winterscale infrastructure team and one > >> member from each official OpenStack Foundation project. Currently, as > >> I understand it, there's only one: OpenStack. But we expect kata, > >> zuul, and others to be declared official in the not too distant > >> future. The winterscale representative (the PTL) will have > >> tiebreaking and veto power over council decisions. > > > > That structure seems sound, although it means the council is going > > to be rather small (at least in the near term). What sorts of > > decisions do you anticipate needing to be addressed by this council? > > Yes, very small. Perhaps we need an interim structure until it gets > larger? Or perhaps just discipline and agreement that the two people on > it will consult with the necessary constituencies and represent them > well? I don't want to make too much out of it, but it does feel a bit odd to have a 2 person body where 1 person has the final decision power. :-) Having 2 people per official team (including winterscale) would give us more depth of coverage overall (allowing for quorum when someone is on vacation, for example). In the short term, it also has the benefit of having twice as many people involved. > I expect the council not to have to vote very often. Perhaps only on > substantial changes to services (bringing a new offering online, > retiring a disused offering, establishing parameters of a service). As > an example, the recent thread on "terms of service" would be a good > topic for the council to settle. OK, so not on every change but on the significant ones that might affect more than one project. Ideally any sort of conflict would be worked out in advance, but it's good to have the process in place to resolve problems before they come up. Doug From melwittt at gmail.com Wed May 30 19:22:20 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 30 May 2018 12:22:20 -0700 Subject: [openstack-dev] [nova] spec review day next week Tuesday 2018-06-05 Message-ID: <61bd1858-93b4-1d1d-5106-0aaf2074c8b0@gmail.com> Howdy all, This cycle, we have our spec freeze later than usual at milestone r-2 June 7 because of the review runways system we've been trying out. We wanted to allow more time for spec approvals as blueprints were completed via runways. So, ahead of the spec freeze, let's have a spec review day next week Tuesday June 5 to ensure we get what spec approvals we can over the line before the freeze. Please try to make some time on Tuesday to review some specs and thanks in advance for participating! Cheers, -melanie From alee at redhat.com Wed May 30 19:58:14 2018 From: alee at redhat.com (Ade Lee) Date: Wed, 30 May 2018 15:58:14 -0400 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> Message-ID: <1527710294.31249.24.camel@redhat.com> On Thu, 2018-05-17 at 09:58 +0200, Thierry Carrez wrote: > Jeremy Stanley wrote: > > [...] > > As a community, we're likely to continue to make imbalanced > > trade-offs against relevant security features if we don't move > > forward and declare that some sort of standardized key storage > > solution is a fundamental component on which OpenStack services can > > rely. 
Being able to just assume that you can encrypt volumes in > > Swift, even as a means to further secure a TripleO undercloud, > > would > > be a step in the right direction for security-minded deployments. > > > > Unfortunately, I'm unable to find any follow-up summary on the > > mailing list from the aforementioned session, but recollection from > > those who were present (I had a schedule conflict at that time) was > > that a Castellan-compatible key store would at least be a candidate > > for inclusion in our base services list: > > > > https://governance.openstack.org/tc/reference/base-services.html > > Yes, last time this was discussed, there was lazy consensus that adding > "a Castellan-compatible secret store" would be a good addition to the > base services list if we wanted to avoid proliferation of half-baked > keystore implementations in various components. > > The two blockers were: > > 1/ castellan had to be made less Barbican-specific, offer at least one > other secrets store (Vault), and move under Oslo (done) > > 2/ some projects (was it Designate ? Octavia ?) were relying on advanced > functions of Barbican not generally found in other secrets stores, like > certificate generation, and so would prefer to depend on Barbican > itself, which confuses the messaging around the base service addition a > bit ("any Castellan-supported secret store as long as it's Barbican") > As far as I know, Octavia no longer depends on Barbican-specific functions. Rather, it uses Castellan now. And the current oslo-config work provides secrets through a Castellan backend. So it seems that the two blockers above have been resolved. So is it time to add a Castellan-compatible secret store to the base services? Ade From alee at redhat.com Wed May 30 20:06:38 2018 From: alee at redhat.com (Ade Lee) Date: Wed, 30 May 2018 16:06:38 -0400 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> Message-ID: <1527710798.31249.30.camel@redhat.com> On Thu, 2018-05-17 at 10:33 +0200, Cédric Jeanneret wrote: > > On 05/17/2018 10:18 AM, Bogdan Dobrelya wrote: > > On 5/17/18 9:58 AM, Thierry Carrez wrote: > > > Jeremy Stanley wrote: > > > > [...] > > > > As a community, we're likely to continue to make imbalanced > > > > trade-offs against relevant security features if we don't move > > > > forward and declare that some sort of standardized key storage > > > > solution is a fundamental component on which OpenStack services > > > > can > > > > rely. Being able to just assume that you can encrypt volumes in > > > > Swift, even as a means to further secure a TripleO undercloud, > > > > would > > > > be a step in the right direction for security-minded > > > > deployments.
> > > > Unfortunately, I'm unable to find any follow-up summary on the > > > > mailing list from the aforementioned session, but recollection > > > > from > > > > those who were present (I had a schedule conflict at that time) > > > > was > > > > that a Castellan-compatible key store would at least be a > > > > candidate > > > > for inclusion in our base services list: > > > > > > > > https://governance.openstack.org/tc/reference/base-services.html > > > > > > Yes, last time this was discussed, there was lazy consensus that > > > adding "a Castellan-compatible secret store" would be a good > > > addition > > > to the base services list if we wanted to avoid proliferation of > > > half-baked keystore implementations in various components. > > > > > > The two blockers were: > > > > > > 1/ castellan had to be made less Barbican-specific, offer at > > > least one > > > other secrets store (Vault), and move under Oslo (done) > > > > Back to the subject and tripleo underclouds running Barbican, using > > vault as a backend may be a good option, given that openshift > > supports [0] it as well for storing k8s secrets, and kubespray does [1] for > > vanilla k8s deployments, and that we have an openshift/k8s-based control > > plane for openstack on the integration roadmap. So we'll highly likely > > end up running Barbican/Vault on the undercloud anyway. > > > > [0] https://blog.openshift.com/managing-secrets-openshift-vault-integration/ > > [1] https://github.com/kubernetes-incubator/kubespray/blob/master/docs/vault.md > > > > That just sounds lovely, especially since this allows converging > "secure storage" tech between projects. > On my own, I was considering some secure storage (custodia) in the > context of the public TLS certificate storage/update/provisioning. > Having by default a native way to store secrets used by the overcloud > deploy/life is a really good thing, and will prevent leaks, hardcoded > passwords in files and so on (although, yeah, you'll need > something to access barbican ;)).
were relying on > > > advanced functions of Barbican not generally found in other > > > secrets > > > store, like certificate generation, and so would prefer to depend > > > on > > > Barbican itself, which confuses the messaging around the base > > > service > > > addition a bit ("any Castellan-supported secret store as long as > > > it's > > > Barbican") > > > > > > > > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anteaya at anteaya.info Wed May 30 20:07:05 2018 From: anteaya at anteaya.info (Anita Kuno) Date: Wed, 30 May 2018 16:07:05 -0400 Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87o9gxdsb9.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> Message-ID: On 2018-05-30 12:25 PM, James E. Blair wrote: > I propose we call the overall effort "winterscale". In the best > tradition of code names, it means nothing; look for no hidden meaning > here. We won't use it for any actual services we provide. We'll use it > to refer to the overall effort of restructuring our team and > infrastructure to provide services to projects beyond OpenStack itself. > And we'll stop using it when the restructuring effort is concluded. I think as names with no meaning go, this is one I find acceptable. Thank you Jim, Anita From openstack at fried.cc Wed May 30 20:18:40 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 30 May 2018 15:18:40 -0500 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> Message-ID: <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> This all sounds fully reasonable to me. One thing, though... >> * There is a resource class per device category e.g. >> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. Let's propose standard resource classes for these ASAP. https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20eb5355a818/nova/rc_fields.py -efried . From mnaser at vexxhost.com Wed May 30 20:23:08 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 30 May 2018 16:23:08 -0400 Subject: [openstack-dev] [tc] StarlingX project status update Message-ID: Hi everyone: Over the past week in the summit, there was a lot of discussion regarding StarlingX and members of the technical commitee had a few productive discussions regarding the best approach to deal with a proposed new pilot project for incubation in the OSF's Edge Computing strategic focus area: StarlingX. If you're not aware, StarlingX includes forks of some OpenStack components and other open source software which contain certain features that are specific to edge and industrial IoT computing use cases. The code behind the project is from Wind River (and is used to build a product called "Titanium Cloud"). At the moment, the goal of StarlingX hosting their projects on the community infrastructure is to get the developers used to the Gerrit workflow. The intention is to evenutally work with upstream teams in order to bring the features and bug fixes which are specific to the fork back upstream, with an ideal goal of bringing all the differences upstream. We've discussed around all the different ways that we can approach this and how to help the StarlingX team be part of our community. 
If we can succesfully do this, it would be a big success for our community as well as our community gaining contributors from the Wind River team. In an ideal world, it's a win-win. The plan at the moment is the following: - StarlingX will have the first import of code that is not forked, simply other software that they've developed to help deliver their product. This code can be hosted with no problems. - StarlingX will generate a list of patches to be brought upstream and the StarlingX team will work together with upstream teams in order to start backporting and upstreaming the codebase. Emilien Macchi (EmilienM) and I have volunteered to take on the responsibility of monitoring the progress upstreaming these patches. - StarlingX contains a few forks of other non-OpenStack software. The StarlingX team will work with the authors of the original projects to ensure that they do not mind us hosting a fork of their software. If they don't, we'll proceed to host those projects. If they prefer something else (hosting it themselves, placing it on another hosting service, etc.), the StarlingX team will work with them in that way. We discussed approaches for cases where patches aren't acceptable upstream, because they diverge from the project mission or aren't comprehensive. Ideally all of those could be turned into acceptable changes that meet both team's criteria. In some cases, adding plugin interfaces or driver interfaces may be the best alternative. Only as a last resort would we retain the forks for a long period of time. >From what was brought up, the team from Wind River is hoping to on-board roughly 50 new full time contributors. In combination with the features that they've built that we can hopefully upstream, I am hopeful that we can come to a win-win situation for everyone in this. Regards, Mohammed From mriedemos at gmail.com Wed May 30 20:37:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 30 May 2018 15:37:49 -0500 Subject: [openstack-dev] Questions about token scopes In-Reply-To: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> Message-ID: <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com> On 5/30/2018 9:53 AM, Lance Bragstad wrote: > While scope isn't explicitly denoted by an > attribute, it can be derived from the attributes of the token response. > Yeah, this was confusing to me, which is why I reported it as a bug in the API reference documentation: https://bugs.launchpad.net/keystone/+bug/1774229 >> * It looks like python-openstackclient doesn't allow specifying a >> scope when issuing a token, is that going to be added? > Yes, I have a patch up for it [6]. I wanted to get this in during > Queens, but it missed the boat. I believe this and a new release of > oslo.context are the only bits left in order for services to have > everything they need to easily consume system-scoped tokens. > Keystonemiddleware should know how to handle system-scoped tokens in > front of each service [7]. The oslo.context library should be smart > enough to handle system scope set by keystonemiddleware if context is > built from environment variables [8]. Both keystoneauth [9] and > python-keystoneclient [10] should have what they need to generate > system-scoped tokens. > > That should be enough to allow the service to pass a request environment > to oslo.context and use the context object to reason about the scope of > the request. As opposed to trying to understand different token scope > responses from keystone. 
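For what it's worth, wiring those pieces together end to end looks roughly like this -- a sketch only, since it assumes the patches referenced below ([6]-[10]) land as proposed, so attribute and argument names may still shift:

    # Client side: ask keystone for a system-scoped token instead of a
    # project-scoped one ('all' is the only system scope keystone accepts).
    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    auth = v3.Password(
        auth_url='http://controller/identity/v3',  # hypothetical endpoint
        username='admin',
        password='secret',
        user_domain_name='Default',
        system_scope='all',
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())

    # Service side: keystonemiddleware validates the token and populates the
    # WSGI environment; oslo.context turns that into a RequestContext whose
    # scope the service can inspect directly.
    from oslo_context import context

    def is_system_request(environ):
        ctxt = context.RequestContext.from_environ(environ)
        return ctxt.system_scope == 'all'
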
We attempted to abstract that away in to the > context object. > > [6]https://review.openstack.org/#/c/524416/ > [7]https://review.openstack.org/#/c/564072/ > [8]https://review.openstack.org/#/c/530509/ > [9]https://review.openstack.org/#/c/529665/ > [10]https://review.openstack.org/#/c/524415/ I think your reply in IRC was more what I was looking for: lbragstad mriedem: if you install https://review.openstack.org/#/c/524416/5 locally with devstack and setup a clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin`` should work 15:39 lbragstad http://paste.openstack.org/raw/722357/ 15:39 So users with the system role will need to create a token using that role to get the system-scoped token, as far as I understand. There is no --scope option on the 'openstack token issue' CLI. > Uhm, if I understand your question, it depends on how you define the > scope types for those APIs. If you set them to system-scope, then an > operator will need to use a system-scoped token in order to access those > APIs iff the placement configuration file contains placement.conf > [oslo.policy] enforce_scope = True. Otherwise, setting that option to > false will log a warning to operators saying that someone is accessing a > system-scoped API with a project-scoped token (e.g. education needs to > happen). > All placement APIs will be system scoped for now, so yeah I guess if operators enable scope enforcement they'll just have to learn how to deal with system-scope enforced APIs. Here is another random question: Do we have any CI jobs running devstack/tempest with scope enforcement enabled to see what blows up? -- Thanks, Matt From myoung at redhat.com Wed May 30 21:14:30 2018 From: myoung at redhat.com (Matt Young) Date: Wed, 30 May 2018 17:14:30 -0400 Subject: [openstack-dev] [tripleo] CI Team Sprint 13 Summary Message-ID: Greetings, The TripleO CI team has just completed Sprint 13 (5/3 - 05/23). The following is a summary of activities during our sprint. Details on our team structure can be found in the spec [1]. # Sprint 13 Epic (CI Squad): Upgrade Support and Refactoring - Epic Card: https://trello.com/c/cuKevn28/728-sprint-13-upgrades-goals - Tasks: http://ow.ly/L86Y30kg75L This sprint was spent with the CI squad focused on Upgrades. We wanted to be able to use existing/working/tested CI collateral (ansible playbooks and roles) used in CI today. Throughout many of these are references to “{{ release }}” (e.g. ‘queens’, ‘pike’). In order to not retrofit the bulk of these with “upgrade aware” conditionals and/or logic we needed a tool/module that could generate the inputs for the ‘release’ variable (and other similar inputs). This allows us to reuse our common roles and playbooks by decoupling the specifics of {upgrades, updates, FFU} * {pike, queens, rocky,…}. We’ve created this tool, and also put into place a linting and unit tests for it as well. We also made a few of the jobs that had been prototyped and created in previous sprints voting, then used them to validate changes to said jobs to wire in the new workflow/tool. We are optimistic that work done in sprint 13 will prove useful in future sprints. A table to describe some of the problem set and our thinking around variables used in CI is at [2]. The tool and tests are at [3]. # Sprint 13 Epic (Tempest Squad): - Epic Card: https://trello.com/c/efqE5XMr/82-sprint-13-refactor-python-tempestconf - Tasks: http://ow.ly/LH8Q30kgd1C In Sprint 13 the tempest squad was focused on refactoring python-tempestconf. 
It is the primary tool used by tempest users to generate tempest.conf automatically so that users can easily run tempest tests. Currently in TripleO and Infrared CI, we pass numerous parameters manually via CLI. This is cumbersome and error-prone.

The high-level goals were to reduce the number of default CLI overrides used today and to prepare python-tempestconf for better integration with refstack-client. This entailed service discoverability work. We added support for keystone, glance, cinder, swift, and neutron. Additional service support is planned for future sprints. We also improved existing documentation for python-tempestconf.

# Ruck & Rover (Sprint 13)

Sagi Shnaidman (sshnaidm), Matt Young (myoung)
https://review.rdoproject.org/etherpad/p/ruckrover-sprint13

A few notable issues where substantial time was spent are below. Note that this is not an exhaustive list:

- When centos 7.5 was released, this caused a number of issues that impacted gates. This included deltas between package versions in BM vs. container images, changes to centos that caused failures when modifying images (e.g. IPA) in gates, and the like.
- We experienced issues with our promoter server, and the tripleo-infra tenant generally around DNS and networking throughput, which impacted our ability to process promotions.
- RHOS-13 jobs were created, and will eventually be used to gate changes to TQ/TQE.
- Numerous patches/fixes to RDO Phase 2 jobs and CI infra. We had accumulated technical debt. While we have additional work to do, particularly around some of the BM configs, we made good progress in bringing various jobs back online. We are still working on this in sprint 14 and moving forward.

Thanks,

The TripleO CI team

[1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html
[2] https://wiki.openstack.org/wiki/Tripleo-upgrades-fs-variables
[3] https://github.com/openstack-infra/tripleo-ci/blob/master/scripts/emit_releases_file

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From remo at rm.ht Wed May 30 21:18:35 2018
From: remo at rm.ht (Remo Mattei)
Date: Wed, 30 May 2018 14:18:35 -0700
Subject: [openstack-dev] Hello all, puppet modules
Message-ID: <4572B304-8F2F-4703-8114-ABD2F137DDDE@rm.ht>

Hello all,
I have talked to several people about this and I would love to get this finalized once and for all. I have checked the OpenStack puppet modules, which are mostly developed by the Red Hat team. As of right now, TripleO is using a combo of Ansible and Puppet to deploy, but in the next couple of releases the plan is to move away from the Puppet option. Consequently, what will be the plan for TripleO and the puppet modules?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corvus at inaugust.com Wed May 30 21:24:05 2018
From: corvus at inaugust.com (James E. Blair)
Date: Wed, 30 May 2018 14:24:05 -0700
Subject: [openstack-dev] [OpenStack-Infra] Winterscale: a proposal regarding the project infrastructure
In-Reply-To: <1527707031-sup-204@lrrr.local> (Doug Hellmann's message of "Wed, 30 May 2018 15:08:56 -0400")
References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> <1527698378-sup-1721@lrrr.local> <87wovlcbp8.fsf@meyer.lemoncheese.net> <1527707031-sup-204@lrrr.local>
Message-ID: <876034alca.fsf@meyer.lemoncheese.net>

Doug Hellmann writes:

>> >> * Establish a "winterscale infrastructure council" (to be renamed) which
>> >> will govern the services that the team provides by vote.
The council >> >> will consist of the PTL of the winterscale infrastructure team and one >> >> member from each official OpenStack Foundation project. Currently, as >> >> I understand it, there's only one: OpenStack. But we expect kata, >> >> zuul, and others to be declared official in the not too distant >> >> future. The winterscale representative (the PTL) will have >> >> tiebreaking and veto power over council decisions. >> > >> > That structure seems sound, although it means the council is going >> > to be rather small (at least in the near term). What sorts of >> > decisions do you anticipate needing to be addressed by this council? >> >> Yes, very small. Perhaps we need an interim structure until it gets >> larger? Or perhaps just discipline and agreement that the two people on >> it will consult with the necessary constituencies and represent them >> well? > > I don't want to make too much out of it, but it does feel a bit odd > to have a 2 person body where 1 person has the final decision power. :-) > > Having 2 people per official team (including winterscale) would > give us more depth of coverage overall (allowing for quorum when > someone is on vacation, for example). In the short term, it also > has the benefit of having twice as many people involved. That's a good idea, and we can scale it down later if needed. >> I expect the council not to have to vote very often. Perhaps only on >> substantial changes to services (bringing a new offering online, >> retiring a disused offering, establishing parameters of a service). As >> an example, the recent thread on "terms of service" would be a good >> topic for the council to settle. > > OK, so not on every change but on the significant ones that might affect > more than one project. Ideally any sort of conflict would be worked out > in advance, but it's good to have the process in place to resolve > problems before they come up. Yes, and like most things, I think the biggest value will be in having the forum to propose changes, discuss them, and collect feedback from all members of participating projects (not just voting members). Hopefully in most decisions, the votes are just a formality which confirms the consensus (but if there isn't consensus, we still need to be able to make a decision). -Jim From cdent+os at anticdent.org Wed May 30 21:37:11 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 30 May 2018 14:37:11 -0700 (PDT) Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: On Wed, 30 May 2018, Julia Kreger wrote: > I don't feel like anyone is proposing to end the use of -1's, but that > we should generally be encouraging, accepting, and trusting. Being encouraging, accepting, and trusting is the outcome I'd like to see from this process. Being less nitpicking is a behavior or process change. Adjusting attitudes (in some environments lightly, in others more) so that we (where "we" is regulars in the projects, experienced reviewers, and cores) perceive patches as something to be grateful for and shepherded instead of an intrusion or obligation would be a significant and beneficial culture change. A perhaps more straightforward way to put it is: When someone (even one of "us") submits a patch they are doing us (the same "we" as above) a favor and we owe them not just a cordial and supportive response, but one with some continuity. 
Like many, I'm guilty of letting a false or inflated sense of urgency get the better of me and being an ass in some reviews. Sorry about that. A cultural shift in this area will improve things for all of us. Nitpicking is symptomatic of an attitude, one we can change, not the disease itself. > We also need to be mindful > of context as well, and in the grand scheme not try for something > perfect as many often do. This *does* mean we land something that > needs to be fixed later or reverted later, but neither are things we > should fear. We can't let that fear control us. Yes, very much yes. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From davanum at gmail.com Wed May 30 21:50:11 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 30 May 2018 14:50:11 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: Please see below: On Wed, May 30, 2018 at 2:37 PM, Chris Dent wrote: > On Wed, 30 May 2018, Julia Kreger wrote: > >> I don't feel like anyone is proposing to end the use of -1's, but that >> we should generally be encouraging, accepting, and trusting. > > > Being encouraging, accepting, and trusting is the outcome I'd like > to see from this process. Being less nitpicking is a behavior or > process change. Adjusting attitudes (in some environments lightly, > in others more) so that we (where "we" is regulars in the projects, > experienced reviewers, and cores) perceive patches as something to > be grateful for and shepherded instead of an intrusion or obligation > would be a significant and beneficial culture change. > > A perhaps more straightforward way to put it is: When someone (even > one of "us") submits a patch they are doing us (the same "we" as > above) a favor and we owe them not just a cordial and supportive > response, but one with some continuity. > > Like many, I'm guilty of letting a false or inflated sense of urgency > get the better of me and being an ass in some reviews. Sorry about > that. > > A cultural shift in this area will improve things for all of us. > Nitpicking is symptomatic of an attitude, one we can change, not the > disease itself. > >> We also need to be mindful >> of context as well, and in the grand scheme not try for something >> perfect as many often do. This *does* mean we land something that >> needs to be fixed later or reverted later, but neither are things we >> should fear. We can't let that fear control us. Let me poke at this a bit. Some of the projects do say (not in so many words): "master should be always deployable and fully backward compatible and so we cant let anything in anytime that could possibly regress anyone" Should we change that attitude too? Anyone agree? disagree? Thanks, Dims > > Yes, very much yes. 
> > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From amy at demarco.com Wed May 30 22:07:35 2018 From: amy at demarco.com (Amy Marrich) Date: Wed, 30 May 2018 15:07:35 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: Coming from Ops, yes things should always be deployable, backward compatible and shouldn't break, but at the same time we're talking about a master branch which is always in flux and not an actual release. I think that statement you provided Dims should apply to releases or tags and not the master branch as a whole. And to be honest unless someone desperately needed something just committed into master, I doubt most folks are using master in dev let alone production, at least that's my hope. We can't move forward as a community if we don't welcome new members to it, which is the heart of this proposal. Amy (spotz) On Wed, May 30, 2018 at 2:50 PM, Davanum Srinivas wrote: > Please see below: > > On Wed, May 30, 2018 at 2:37 PM, Chris Dent > wrote: > > On Wed, 30 May 2018, Julia Kreger wrote: > > > >> I don't feel like anyone is proposing to end the use of -1's, but that > >> we should generally be encouraging, accepting, and trusting. > > > > > > Being encouraging, accepting, and trusting is the outcome I'd like > > to see from this process. Being less nitpicking is a behavior or > > process change. Adjusting attitudes (in some environments lightly, > > in others more) so that we (where "we" is regulars in the projects, > > experienced reviewers, and cores) perceive patches as something to > > be grateful for and shepherded instead of an intrusion or obligation > > would be a significant and beneficial culture change. > > > > A perhaps more straightforward way to put it is: When someone (even > > one of "us") submits a patch they are doing us (the same "we" as > > above) a favor and we owe them not just a cordial and supportive > > response, but one with some continuity. > > > > Like many, I'm guilty of letting a false or inflated sense of urgency > > get the better of me and being an ass in some reviews. Sorry about > > that. > > > > A cultural shift in this area will improve things for all of us. > > Nitpicking is symptomatic of an attitude, one we can change, not the > > disease itself. > > > >> We also need to be mindful > >> of context as well, and in the grand scheme not try for something > >> perfect as many often do. This *does* mean we land something that > >> needs to be fixed later or reverted later, but neither are things we > >> should fear. We can't let that fear control us. > > Let me poke at this a bit. Some of the projects do say (not in so many > words): > > "master should be always deployable and fully backward compatible and > so we cant let anything in anytime that could possibly regress anyone" > > Should we change that attitude too? Anyone agree? disagree? > > Thanks, > Dims > > > > > Yes, very much yes. 
> > > > -- > > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > > freenode: cdent tw: @anticdent > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Davanum Srinivas :: https://twitter.com/dims > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Wed May 30 22:48:41 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 30 May 2018 17:48:41 -0500 Subject: [openstack-dev] [oslo] Summit onboarding and project update slides Message-ID: As promised in the sessions, here are the slides that were presented: https://www.slideshare.net/BenNemec1/oslo-vancouver-onboarding https://www.slideshare.net/BenNemec1/oslo-vancouver-project-update The font in the onboarding one got a little funny in the conversion, so if you want to see the original that is more readable let me know and I can send it to you. -Ben From melwittt at gmail.com Wed May 30 23:28:43 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 30 May 2018 16:28:43 -0700 Subject: [openstack-dev] [nova] core team update Message-ID: Howdy everyone, As I'm sure many of you have noticed, Sean Dague has shifted his focus onto other projects outside of Nova for some time now, and with that, I'm removing him from the core team at this time. I consider our team fortunate to have had the opportunity to work with Sean over the years and he is certainly welcome back to the core team if he returns to active reviewing someday in the future. Thank you Sean, for all of your contributions! Best, -melanie From tony at bakeyournoodle.com Wed May 30 23:45:35 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Thu, 31 May 2018 09:45:35 +1000 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: References: <20180314212003.GC25428@thor.bakeyournoodle.com> Message-ID: <20180530234535.GB29981@thor.bakeyournoodle.com> On Sun, Apr 29, 2018 at 09:36:15AM +0200, Jean-Philippe Evrard wrote: > Hello, > > > I'd like to phase out openstack/openstack-ansible-tests and > > openstack/openstack-ansible later. > > Now that we had the time to bump the roles in openstack-ansible, and > adapt the tests, we can now EOL the rest of newton, i.e.: > openstack/openstack-ansible and openstack/openstack-ansible-tests. > > Thanks for the help again Tony! Done. http://git.openstack.org/cgit/openstack/openstack-ansible/tag/?h=newton-eol http://git.openstack.org/cgit/openstack/openstack-ansible-tests/tag/?h=newton-eol Sorry for the delay. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From me at not.mn Wed May 30 23:47:35 2018 From: me at not.mn (John Dickinson) Date: Wed, 30 May 2018 16:47:35 -0700 Subject: [openstack-dev] [swift] change in review policy: normally one +2 is sufficient Message-ID: <271E9040-BBB1-4FAE-BBE0-3D7A2B7ACFAC@not.mn> During today's Swift team meeting[1], we discussed the idea of relaxing review guidelines. We agreed the normal case is "one core reviewer's approval is sufficient to land code". We've long had a "one +2" policy for trivial and obviously correct patches. Put simply, the old policy was one of "normally, two +2s are needed, but if a reviewer feels it's not necessary to get another review, go ahead and land it." Our new policy inverts that. Normally, one +2 is needed, but a core may want to ask for additional reviews for significant or complex patches. When the Swift team gathers in Denver for the next PTG, we'll spend some time revisiting this decision and reflect on the impact it has had for the community and codebase. [1] http://eavesdrop.openstack.org/meetings/swift/2018/swift.2018-05-30-21.00.log.html --John From sean.mcginnis at gmx.com Thu May 31 00:01:05 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 30 May 2018 19:01:05 -0500 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> > "master should be always deployable and fully backward compatible and > so we cant let anything in anytime that could possibly regress anyone" > > Should we change that attitude too? Anyone agree? disagree? > > Thanks, > Dims > I'll definitely jump at this one. I've always thought (and shared on the ML several times now) that our implied but not explicit support for CD from any random commit was a bad thing. While I think it's good to support the idea that master is always deployable, I do not think it is a good mindset to think that every commit is a "release" and therefore should be supported until the end of time. We have a coordinated release for a reason, and I think design decisions and fixes should be based on the assumption that a release is a release and the point at which we need to be cognizant and caring about keeping backward compatibility. Doing that for every single commit is not ideal for the overall health of the product, IMO. From emilien at redhat.com Thu May 31 00:14:43 2018 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 30 May 2018 17:14:43 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 22th Edition Message-ID: Welcome to the twenty second edition of a weekly update in TripleO world! The goal is to provide a short reading (less than 5 minutes) to learn what's new this week. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-May/130528.html +---------------------------------+ | General announcements | +---------------------------------+ +--> OpenStack community met last week for the Summit in Vancouver, we had great presentations and also great feedback! +--> Milestone 2 deadline is next week! 
+---------------------+ | Owls at Summit | +---------------------+ +--> TripleO project updates: from Queens to Rocky and beyond Recording: https://www.youtube.com/watch?v=4q_zkvOP8Dk Slides: https://t.co/DYOAjt1jDk +--> TripleO onboarding session: https://etherpad.openstack.org/p/YVR-forum-tripleo-onboarding People used that time to ask any questions about TripleO and the team was happy to answer and provide support. +--> TripleO Ops and User Feedback: https://etherpad.openstack.org/p/tripleo-rocky-ops-and-user-feedback Feedback about logging, documentation were the main topics we covered but other things were discussed, see etherpad. +--> TripleO and Ansible integration: https://etherpad.openstack.org/p/tripleo-rocky-ansible-integration James did a great job at presenting the config-download feature and how we now use Ansible for some deployment tasks. +------------------------------+ | Continuous Integration | +------------------------------+ +--> Ruck is arxcruz and Rover is rlandy. Please let them know any new CI issue. +--> Master promotion is 0 day, Queens is 0 days, Pike is 3 days and Ocata is 4 days. +--> Sprint 13 themes were Upgrade CI (new jobs, forward looking release state machine, voting jobs). +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> FFU at Summit: https://www.youtube.com/watch?v=YJXem5d6fkI +--> Need reviews converge patches and docs updates +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Efforts arounds all-in-one installer, image prepare and image workflows, good progress overall. +--> Focus is on stabilization and make the containerized undercloud the default in TripleO. +--> Tomorrow is containerized undercloud deep dive: https://etherpad. openstack.org/p/tripleo-deep-dive-containerized-undercloud +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> config download status commands and workflows +--> UI work still ongoing +--> Major doc update (merged): https://review.openstack.org/#/c/566606 +--> More: https://etherpad.openstack.org/p/tripleo-config-downlo ad-squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Mistral project update https://www.youtube.com/watch?v=y9qieruccO4 +--> Validate workflow input: https://bugs.launchpad.net/tripleo/+bug/1774166 +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Public TLS is being refactored +--> Kerberos auth for keystone update +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owl flight is silent, unlike most birds, owls make virtually no noise when they fly. 
They have special feathers that break turbulence into smaller currents, which reduces sound. Soft velvety down further muffles noise.

Source: http://mentalfloss.com/article/68473/15-mysterious-facts-about-owls

Thank you all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tony at bakeyournoodle.com Thu May 31 00:17:39 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Thu, 31 May 2018 10:17:39 +1000
Subject: [openstack-dev] [stable][kolla] tagging newton EOL
In-Reply-To:
References:
Message-ID: <20180531001739.GC29981@thor.bakeyournoodle.com>

On Sat, Apr 14, 2018 at 11:02:54AM +0800, Jeffrey Zhang wrote:
> hi stable team,
>
> Kolla project is ready for Newton EOL. Since kolla-ansible is split from
> kolla since ocata cycle, so there is not newton branch in kolla-ansible.
> please make following repo EOL
>
> openstack/kolla

Okay I did this today but to be perfectly frank I suspect I've done it wrong. There was already an existing tag for newton-eol pointing at 3.0.3-20'ish, so I tagged what was the HEAD of the existing newton branch, which was 3.0.0.0rc1-335'ish:

About to delete the branch stable/newton from openstack/kolla (rev 40e768ec2a370dc010be773af37e2ce417adda80)

I'm not really sure about the history there. I apologise if I've made a mistake, but at least as we have everything in git we can recover the branches and retag if required.

Yours Tony.
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: not available
URL:

From Kevin.Fox at pnnl.gov Thu May 31 00:21:35 2018
From: Kevin.Fox at pnnl.gov (Fox, Kevin M)
Date: Thu, 31 May 2018 00:21:35 +0000
Subject: [openstack-dev] [tc][all] A culture change (nitpicking)
In-Reply-To: <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com>
References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com>, <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com>
Message-ID: <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov>

To play devil's advocate, and as someone who has had to git bisect an ugly regression once, I still think it's important not to break trunk. It can be much harder to deal with difficult issues like that if trunk frequently breaks.

Thanks,
Kevin
________________________________________
From: Sean McGinnis [sean.mcginnis at gmx.com]
Sent: Wednesday, May 30, 2018 5:01 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [tc][all] A culture change (nitpicking)

> "master should be always deployable and fully backward compatible and
> so we cant let anything in anytime that could possibly regress anyone"
>
> Should we change that attitude too? Anyone agree? disagree?
>
> Thanks,
> Dims
>

I'll definitely jump at this one.

I've always thought (and shared on the ML several times now) that our implied but not explicit support for CD from any random commit was a bad thing.

While I think it's good to support the idea that master is always deployable, I do not think it is a good mindset to think that every commit is a "release" and therefore should be supported until the end of time. We have a coordinated release for a reason, and I think design decisions and fixes should be based on the assumption that a release is a release and the point at which we need to be cognizant and caring about keeping backward compatibility. Doing that for every single commit is not ideal for the overall health of the product, IMO.
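(A side note on the bisect pain mentioned above: 'git bisect run' can only automate the hunt for a regression when arbitrary historical commits still install and run their tests, which is exactly what an always-working master buys you. A minimal sketch of the kind of check script it takes -- the tox environment and test identifier here are hypothetical; point them at whatever reproduces your regression:

    #!/usr/bin/env python
    # Used as: git bisect start HEAD <last-good-tag>
    #          git bisect run python bisect_check.py
    import subprocess
    import sys

    def main():
        # Exit code 125 tells bisect to skip commits that don't even install.
        if subprocess.call(['tox', '-e', 'py27', '--notest']) != 0:
            return 125
        # Exit 0 marks the commit good; any other code up to 127
        # (except 125) marks it bad.
        rc = subprocess.call(['tox', '-e', 'py27', '--', 'tests.test_regression'])
        return 0 if rc == 0 else 1

    if __name__ == '__main__':
        sys.exit(main())

Every broken commit on master turns into a skip here and slows the search down.)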
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu May 31 00:23:00 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 31 May 2018 00:23:00 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> Message-ID: <20180531002300.5uff6i6mmot4lq72@yuggoth.org> On 2018-05-30 14:50:11 -0700 (-0700), Davanum Srinivas wrote: [...] > Let me poke at this a bit. Some of the projects do say (not in so > many words): > > "master should be always deployable and fully backward compatible and > so we cant let anything in anytime that could possibly regress anyone" > > Should we change that attitude too? Anyone agree? disagree? I think this is orthogonal to the thread. The idea is that we should avoid nettling contributors over minor imperfections in their submissions (grammatical, spelling or typographical errors in code comments and documentation, mild inefficiencies in implementations, et cetera). Clearly we shouldn't merge broken features, changes which fail tests/linters, and so on. For me the rule of thumb is, "will the software be better or worse if this is merged?" It's not about perfection or imperfection, it's about incremental improvement. If a proposed change is an improvement, that's enough. If it's not perfect... well, that's just opportunity for more improvement later. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Thu May 31 00:26:54 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 31 May 2018 00:26:54 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> Message-ID: <20180531002653.kwgf6d67pebtkrfw@yuggoth.org> On 2018-05-31 00:21:35 +0000 (+0000), Fox, Kevin M wrote: > To play devils advocate and as someone that has had to git bisect > an ugly regression once I still think its important not to break > trunk. It can be much harder to deal with difficult issues like > that if trunk frequently breaks. [...] Agreed. We made a choice as a community early on to avoid doing that. (Almost) always deployable is (almost) always testable; if trunk is broken, then as a developer working on a bugfix or new feature you have to hunt for a known-working state in the history to develop against and then rebase onto all the broken and cross your fingers that you're not making the job of whoever has to untangle trunk before release that much harder. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From davanum at gmail.com Thu May 31 00:42:41 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 30 May 2018 17:42:41 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180531002300.5uff6i6mmot4lq72@yuggoth.org> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <20180531002300.5uff6i6mmot4lq72@yuggoth.org> Message-ID: On Wed, May 30, 2018 at 5:23 PM, Jeremy Stanley wrote: > On 2018-05-30 14:50:11 -0700 (-0700), Davanum Srinivas wrote: > [...] >> Let me poke at this a bit. Some of the projects do say (not in so >> many words): >> >> "master should be always deployable and fully backward compatible and >> so we cant let anything in anytime that could possibly regress anyone" >> >> Should we change that attitude too? Anyone agree? disagree? > > I think this is orthogonal to the thread. The idea is that we should > avoid nettling contributors over minor imperfections in their > submissions (grammatical, spelling or typographical errors in code > comments and documentation, mild inefficiencies in implementations, > et cetera). Clearly we shouldn't merge broken features, changes > which fail tests/linters, and so on. For me the rule of thumb is, > "will the software be better or worse if this is merged?" It's not > about perfection or imperfection, it's about incremental > improvement. If a proposed change is an improvement, that's enough. > If it's not perfect... well, that's just opportunity for more > improvement later. Well said Jeremy! > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From mtreinish at kortar.org Thu May 31 01:09:57 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 30 May 2018 21:09:57 -0400 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> Message-ID: <20180531010957.GA1354@zeong> On Thu, May 31, 2018 at 12:21:35AM +0000, Fox, Kevin M wrote: > To play devils advocate and as someone that has had to git bisect an ugly regression once I still think its important not to break trunk. It can be much harder to deal with difficult issues like that if trunk frequently breaks. > > Thanks, > Kevin > ________________________________________ > From: Sean McGinnis [sean.mcginnis at gmx.com] > Sent: Wednesday, May 30, 2018 5:01 PM > To: openstack-dev at lists.openstack.org > Subject: Re: [openstack-dev] [tc][all] A culture change (nitpicking) > > > "master should be always deployable and fully backward compatible and > > so we cant let anything in anytime that could possibly regress anyone" > > > > Should we change that attitude too? Anyone agree? disagree? > > > > Thanks, > > Dims > > > I'll definitely jump at this one. > > I've always thought (and shared on the ML several times now) that our > implied > but not explicit support for CD from any random commit was a bad thing. 
> > While I think it's good to support the idea that master is always > deployable, I > do not think it is a good mindset to think that every commit is a > "release" and > therefore should be supported until the end of time. We have a coordinated > release for a reason, and I think design decisions and fixes should be > based on > the assumption that a release is a release and the point at which we > need to be > cognizant and caring about keeping backward compatibility. Doing that for > every single commit is not ideal for the overall health of the product, IMO. > It's more than just a CD guarantee, while from a quick glance it would seem like that's the only value it goes much deeper than that. Ensuring that every commit works, is deployable, and maintains backwards compatibility is what enables us to have such a high quality end result at release time. Quite frankly it's looking at every commit as always being a working unit that enables us to manage a project this size at this velocity. Even if people assume no one is actually CDing the projects(which we shouldn't), it's a flawed assumption to think that everyone is running strictly the same code as what's in the release tarballs. I can't think of any production cloud out there that doesn't carry patches to fix things encountered in the real world. Or look at stable maint we regularly need to backport fixes to fix bugs found after release. If we can't rely on these to always work this makes our life much more difficult, both as upstream maintainers but also as downstream consumers of OpenStack. The other aspect to look at here is just the review mindset, supporting every every commit is useable puts reviewers in the mindset to consider things like backwards compatibility and deployability when looking at proposed changes. If we stop looking for these potential issues, we t will also cause many more bugs to be in our released code. To simply discount this as only a release concern and punt this kind of scrutiny until it's time to release is not only going to make release time much more stressful. Also, our testing is built to try and ensure every commit works **before** we merge it. If we decided to take this stance as a community then we should really just rip out all the testing, because that's what it's there to verify and help us make sure we don't land a change that doesn't work. If we don't actually care about that making sure every commit is deployable we are wasting quite a lot of resources on it. -Matt Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From joshua.hesketh at gmail.com Thu May 31 01:40:21 2018 From: joshua.hesketh at gmail.com (Joshua Hesketh) Date: Thu, 31 May 2018 11:40:21 +1000 Subject: [openstack-dev] Winterscale: a proposal regarding the project infrastructure In-Reply-To: <87o9gxdsb9.fsf@meyer.lemoncheese.net> References: <87o9gxdsb9.fsf@meyer.lemoncheese.net> Message-ID: On Thu, May 31, 2018 at 2:25 AM, James E. Blair wrote: > Hi, > > With recent changes implemented by the OpenStack Foundation to include > projects other than "OpenStack" under its umbrella, it has become clear > that the "Project Infrastructure Team" needs to change. > > The infrastructure that is run for the OpenStack project is valued by > other OpenStack Foundation projects (and beyond). 
Our community has not > only produced an amazing cloud infrastructure system, but it has also > pioneered new tools and techniques for software development and > collaboration. > > For some time it's been apparent that we need to alter the way we run > services in order to accommodate other Foundation projects. We've been > talking about this informally for at least the last several months. One > of the biggest sticking points has been a name for the effort. It seems > very likely that we will want a new top-level domain for hosting > multiple projects in a neutral environment (so that people don't have to > say "hosted on OpenStack's infrastructure"). But finding such a name is > difficult, and even before we do, we need to talk about it. > > I propose we call the overall effort "winterscale". In the best > tradition of code names, it means nothing; look for no hidden meaning > here. We won't use it for any actual services we provide. We'll use it > to refer to the overall effort of restructuring our team and > infrastructure to provide services to projects beyond OpenStack itself. > And we'll stop using it when the restructuring effort is concluded. > > This is my first proposal: that we acknowledge this effort is underway > and name it as such. > > My second proposal is an organizational structure for this effort. > First, some goals: > > * The infrastructure should be collaboratively run as it is now, and > the operational decisions should be made by the core reviewers as > they are now. > > * Issues of service definition (i.e., what services we offer and how > they are used) should be made via a collaborative process including > the infrastructure operators and the projects which use it. > > To that end, I propose that we: > > * Work with the Foundation to create a new effort independent of the > OpenStack project with the goal of operating infrastructure for the > wider OpenStack Foundation community. > > * Work with the Foundation marketing team to help us with the branding > and marketing of this effort. > > * Establish a "winterscale infrastructure team" (to be renamed) > consisting of the current infra-core team members to operate this > effort. > > * Move many of the git repos currently under the OpenStack project > infrastructure team's governance to this new team. > > * Establish a "winterscale infrastructure council" (to be renamed) which > will govern the services that the team provides by vote. The council > will consist of the PTL of the winterscale infrastructure team and one > member from each official OpenStack Foundation project. Currently, as > I understand it, there's only one: OpenStack. But we expect kata, > zuul, and others to be declared official in the not too distant > future. The winterscale representative (the PTL) will have > tiebreaking and veto power over council decisions. > So the "winterscale infrastructure council"'s purview is quite limited in scope to just govern the services provided? If so, would you foresee a need to maintain some kind of "Infrastructure council" as it exists at the moment to be the technical design body? Specifically, wouldn't we still want somewhere for the "winterscale infrastructure team" to be represented and would that expand to any infrastructure-related core teams? Cheers, Josh > > (This is structured loosely based on the current Infrastructure > Council used by the OpenStack Project Infrastructure Team.) > > None of this is obviously final. 
My goal here is to give this effort a > name and a starting point so that we can discuss it and make progress. > > -Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fengyd81 at gmail.com Thu May 31 02:00:58 2018 From: fengyd81 at gmail.com (fengyd) Date: Thu, 31 May 2018 10:00:58 +0800 Subject: [openstack-dev] Ceph multiattach support Message-ID: Hi, I'm using Ceph for cinder backend. Do you have any plan to support multiattach for Ceph backend? Thanks Yafeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From davanum at gmail.com Thu May 31 02:09:22 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Wed, 30 May 2018 19:09:22 -0700 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: <20180531010957.GA1354@zeong> References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> <20180531010957.GA1354@zeong> Message-ID: On Wed, May 30, 2018 at 6:09 PM, Matthew Treinish wrote: > On Thu, May 31, 2018 at 12:21:35AM +0000, Fox, Kevin M wrote: >> To play devils advocate and as someone that has had to git bisect an ugly regression once I still think its important not to break trunk. It can be much harder to deal with difficult issues like that if trunk frequently breaks. >> >> Thanks, >> Kevin >> ________________________________________ >> From: Sean McGinnis [sean.mcginnis at gmx.com] >> Sent: Wednesday, May 30, 2018 5:01 PM >> To: openstack-dev at lists.openstack.org >> Subject: Re: [openstack-dev] [tc][all] A culture change (nitpicking) >> >> > "master should be always deployable and fully backward compatible and >> > so we cant let anything in anytime that could possibly regress anyone" >> > >> > Should we change that attitude too? Anyone agree? disagree? >> > >> > Thanks, >> > Dims >> > >> I'll definitely jump at this one. >> >> I've always thought (and shared on the ML several times now) that our >> implied >> but not explicit support for CD from any random commit was a bad thing. >> >> While I think it's good to support the idea that master is always >> deployable, I >> do not think it is a good mindset to think that every commit is a >> "release" and >> therefore should be supported until the end of time. We have a coordinated >> release for a reason, and I think design decisions and fixes should be >> based on >> the assumption that a release is a release and the point at which we >> need to be >> cognizant and caring about keeping backward compatibility. Doing that for >> every single commit is not ideal for the overall health of the product, IMO. >> > > It's more than just a CD guarantee, while from a quick glance it would seem like > that's the only value it goes much deeper than that. Ensuring that every commit > works, is deployable, and maintains backwards compatibility is what enables us > to have such a high quality end result at release time. Quite frankly it's > looking at every commit as always being a working unit that enables us to manage > a project this size at this velocity. 
Even if people assume no one is actually
> CDing the projects (which we shouldn't), it's a flawed assumption to think that
> everyone is running strictly the same code as what's in the release tarballs. I
> can't think of any production cloud out there that doesn't carry patches to fix
> things encountered in the real world. Or look at stable maint: we regularly need
> to backport fixes for bugs found after release. If we can't rely on these to
> always work, this makes our life much more difficult, both as upstream
> maintainers and as downstream consumers of OpenStack.
>
> The other aspect to look at here is the review mindset: supporting the idea that
> every commit is usable puts reviewers in the mindset to consider things like
> backwards compatibility and deployability when looking at proposed changes. If
> we stop looking for these potential issues, it will also cause many more bugs
> to be in our released code. To simply discount this as only a release concern
> and punt this kind of scrutiny until it's time to release is only going to
> make release time much more stressful. Also, our testing is built to try and
> ensure every commit works **before** we merge it. If we decided to take this
> stance as a community then we should really just rip out all the testing,
> because that's what it's there to verify and what helps us make sure we don't land a
> change that doesn't work. If we don't actually care about making sure every
> commit is deployable, we are wasting quite a lot of resources on it.

"rip out all testing" is probably taking it too far, Matt. Instead of perfection when merging, we should look for iteration and reverts. That's what I would like to see. I am not asking for a "Commit-Then-Review" like the ASF. I want us to just be practical and have some leeway to iterate / update / experiment instead of absolute perfection from all angles. We should move the needle at least a bit away from it.

>
> -Matt Treinish
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Davanum Srinivas :: https://twitter.com/dims

From abishop at redhat.com Thu May 31 02:29:21 2018
From: abishop at redhat.com (Alan Bishop)
Date: Wed, 30 May 2018 22:29:21 -0400
Subject: [openstack-dev] Ceph multiattach support
In-Reply-To:
References:
Message-ID:

On Wed, May 30, 2018 at 10:00 PM, fengyd wrote:
> Hi,
>
> I'm using Ceph for cinder backend.
> Do you have any plan to support multiattach for Ceph backend?

A lot of people have expressed an interest in this, so I'm sure multi-attach with Ceph will eventually be supported. However, it will require a fair amount of investigation to fully understand what is needed for the feature to work correctly. The situation is summarized in a comment in the Cinder code [1].

[1] https://github.com/openstack/cinder/blob/master/cinder/volume/drivers/rbd.py#L486
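For illustration, this is roughly what consuming multiattach looks like from the API side once a backend does report the capability (which RBD does not yet) -- a sketch that assumes python-cinderclient, an authenticated keystoneauth session in 'sess', and Cinder API microversion 3.50:

    from cinderclient import client

    cinder = client.Client('3.50', session=sess)

    # Multiattach is requested through a volume-type extra spec,
    # not a flag on the create call.
    vtype = cinder.volume_types.create('multiattach')
    vtype.set_keys({'multiattach': '<is> True'})

    vol = cinder.volumes.create(size=1, volume_type='multiattach')
    # A new enough Nova can then attach 'vol' to more than one server.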
Alan

> Thanks
>
> Yafeng
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From emccormick at cirrusseven.com Thu May 31 02:35:19 2018
From: emccormick at cirrusseven.com (Erik McCormick)
Date: Wed, 30 May 2018 22:35:19 -0400
Subject: [openstack-dev] Ceph multiattach support
In-Reply-To:
References:
Message-ID:

The lack of Ceph support is a Ceph problem rather than a Cinder problem. There are issues with replication and multi-attached RBD volumes, as I understand it. The Ceph folks are aware but have other priorities presently. I encourage making your interest known to them.

In the meantime, check out Manila with CephFS if you are running modern versions of both Ceph and OpenStack.

-Erik

On Wed, May 30, 2018, 10:02 PM fengyd wrote:
> Hi,
>
> I'm using Ceph for cinder backend.
> Do you have any plan to support multiattach for Ceph backend?
>
> Thanks
>
> Yafeng
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ed at leafe.com Thu May 31 02:53:52 2018
From: ed at leafe.com (Ed Leafe)
Date: Wed, 30 May 2018 21:53:52 -0500
Subject: [openstack-dev] [tc][all] A culture change (nitpicking)
In-Reply-To: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com>
References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com>
Message-ID: <0D4B8FCA-D4AE-46A5-A774-2A6822BB9DCD@leafe.com>

On May 30, 2018, at 5:11 AM, Dmitry Tantsur wrote:
> Whatever decision the TC takes, I would like it to make sure that we don't paint putting -1 as a bad act. Nor do I want "if you care, just follow-up" to be an excuse for putting up bad contributions.
>
> Additionally, I would like to have something saying that a -1 is valid and appropriate, if a contribution substantially increases the project's technical debt. After already spending *days* refactoring ironic unit tests, I will -1 the hell out of a patch that will try to bring them back to their initial state, I promise :)

Yes to this. -1 should never mean anything other than "some more work needs to be done before this can merge". It most certainly does not mean "your code is bad and you should feel terrible".

While this started as a discussion about reducing nitpicking, which I think we can all embrace, we shouldn't let it slide into imagining that contributors are such fragile things that pointing out an error/problem in the code is seen as a personal attack. Of course you should not be mean when you do so. But that's very, very rare in my OpenStack experience. Nitpicking, on the other hand, is much more prevalent, and I welcome these efforts to reduce it.

-- Ed Leafe

-------------- next part --------------
A non-text attachment was scrubbed...
From tpb at dyncloud.net Thu May 31 02:59:08 2018 From: tpb at dyncloud.net (Tom Barron) Date: Wed, 30 May 2018 22:59:08 -0400 Subject: [openstack-dev] [manila] No meeting 31 May 2018 Message-ID: <20180531025908.7tj4xzeaikxk4mk5@barron.net> We don't have anything on the agenda yet for this week's manila meeting, and my travel plans just got shuffled so I'm in the air at our regular time. Let's cancel this week's meeting and start up again the following week. We'll have a summary of relevant Summit events then. -- Tom Barron (tbarron) From ed at leafe.com Thu May 31 03:08:00 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 30 May 2018 22:08:00 -0500 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: <17F4CBAD-16C4-4FA0-A92D-C9CC81BE76EA@leafe.com> On May 6, 2018, at 8:01 AM, Gilles Dubreuil wrote: > Regarding the choice of Neutron as PoC, I'm sorry for not providing much details when I said "because of its specific data model", > effectively the original mention was "its API exposes things at an individual table level, requiring the client to join that information to get the answers they need". > I realize now such description probably applies to many OpenStack APIs. > So I'm not sure what was the reason for choosing Neutron. Blame Monty! It was Monty who suggested Neutron due to his particular experience working with the API. Since none of the rest of us in the API-SIG had any better suggestions, that was what we passed on to you. But I think that any target that demonstrates the advantages to be had by adopting GraphQL is fine. So if the team working on this thinks they can create a more impressive PoC with another API, the API-SIG will support that. -- Ed Leafe From msashika38 at gmail.com Thu May 31 05:06:38 2018 From: msashika38 at gmail.com (Ashika Meher Majety) Date: Thu, 31 May 2018 10:36:38 +0530 Subject: [openstack-dev] [Heat] : Query regarding bug 1769089 In-Reply-To: References: Message-ID: Hi Kaz, Thank you for the update regarding the issue (https://storyboard.openstack.org/#!/story/1769089). To be clearer, the issue is as follows: we have tried accessing Horizon (the GUI) both through port forwarding (a tunnel link) and through the direct IP. When we use port forwarding, stack launch is possible in only one of the links; the other shows the error "'NoneType' object has no attribute 'encode'". Even if we restart the Apache server, it still does not work in both. We didn't get any errors in the heat-api or heat-engine logs; the error is shown at the top of the stack launch form. It happens not only with the template I attached in the storyboard but with all types of HOT templates. I hope this makes the issue clearer. Regards, Ashika Meher On Wed, May 30, 2018 at 6:56 PM, Kaz Shinohara wrote: > Hi, > > > First off, sorry for being late to respond. > > Looking at your comment, > your environment is Newton, and AFAIK Newton is EOL, so even if you wait > for the fix, it will not be delivered to Newton. > https://releases.openstack.org/ > > My current concern is that the issue you raised may happen in the Queens code too > (the latest maintained release). > Note: the dashboard for Heat has been split out from Horizon since Queens.
> > Let me check first whether I can reproduce your issue in my environment (Queens). > I will update my result at https://storyboard.openstack.org/#!/story/1769089 > > Cheers, > Kaz > > > > 2018-05-30 21:56 GMT+09:00 Ashika Meher Majety : > >> Hello, >> >> We have raised a bug in Launchpad; the bug link is as follows: >> https://bugs.launchpad.net/heat-dashboard/+bug/1769089 . >> Can anyone please provide a solution or fix for this issue, since it's >> been 20 days since we created this bug. >> >> Thanks&Regards, >> Ashika Meher >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From ghanshyammann at gmail.com Thu May 31 05:09:59 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Thu, 31 May 2018 14:09:59 +0900 Subject: [openstack-dev] Questions about token scopes In-Reply-To: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> Message-ID: On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad wrote: > > > On 05/30/2018 08:47 AM, Matt Riedemann wrote: >> I know the keystone team has been doing a lot of work on scoped tokens >> and Lance has been trying to roll that out to other projects (like nova). >> >> In Rocky the nova team is adding granular policy rules to the >> placement API [1] which is a good opportunity to set scope on those >> rules as well. >> >> For now, we've just said everything is system scope since resources in >> placement, for the most part, are managed by "the system". But we do >> have some resources in placement which have project/user information >> in them, so could theoretically also be scoped to a project, like GET >> /usages [2]. Just adding that the same applies to nova policy. As you might know, spec [1] tries to make nova policy more granular, but it is on hold because of the default-roles work. We will split the policy rules with better default values, like read-only for GET APIs. Along with that, as you mentioned for scope setting on placement policy rules, we need to do the same for nova policy. That can be done later or together with the nova granular policy spec. [1] https://review.openstack.org/#/c/547850/ >> >> While going through this, I've been hammering Lance with questions but >> I had some more this morning and wanted to send them to the list to >> help spread the load and share the knowledge on working with scoped >> tokens in the other projects. > > ++ good idea > >> >> So here goes with the random questions: >> >> * devstack has the admin project/user - does that by default get >> system scope tokens? I see the scope is part of the token create >> request [3] but it's optional, so is there a default value if not >> specified? > > No, not necessarily. The keystone-manage bootstrap command is what > bootstraps new deployments with the admin user, an admin role, a project > to work in, etc.
It also grants the newly created admin user the admin > role on a project and the system. This functionality was added in Queens > [0]. This should be backwards compatible and allow the admin user to get > tokens scoped to whatever they had authorization on previously. The only > thing they should notice is that they have another role assignment on > something called the "system". That being said, they can start > requesting system-scoped tokens from keystone. We have a document that > tries to explain the differences in scopes and what they mean [1]. Another related question: will scope setting impact existing operators? I mean, when policy rules start setting scope, that might break existing operators, as their current token (say, project-scoped) might no longer be able to authorize against a policy modified to set system scope. In that case, how are we going to avoid breaking upgrades? One way could be to only softly enforce scope for a cycle, with a warning, and then start enforcing it after one cycle (like we do for any policy rule change)? But I am not sure at this point.
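For what it's worth, oslo.policy already seems to have a knob for exactly that soft-enforcement transition. A hedged config sketch -- the enforce_scope option is real, but the file placement and comments are just an example:

```
# Example [oslo_policy] section, e.g. in nova.conf or placement.conf.
# Illustrative sketch; only the option name itself is the real knob.
[oslo_policy]
# False: on a scope mismatch, oslo.policy only logs a warning and
# still authorizes the request; True: reject mismatched scopes.
enforce_scope = false
```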
> > [0] https://review.openstack.org/#/c/530410/ > [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html > >> >> * Why don't the token create and show APIs return the scope? > > Good question. In a way, they do. If you look at a response when you > authenticate for a token or validate a token, you should see an object > contained within the token reference for the purpose of scope. For > example, a project-scoped token will have a project object in the > response [2]. A domain-scoped token will have a domain object in the > response [3]. The same is true for system-scoped tokens [4]. Unscoped > tokens do not have any of these objects present and do not contain a > service catalog [5]. While scope isn't explicitly denoted by an > attribute, it can be derived from the attributes of the token response. > > [2] http://paste.openstack.org/raw/722349/ > [3] http://paste.openstack.org/raw/722351/ > [4] http://paste.openstack.org/raw/722348/ > [5] http://paste.openstack.org/raw/722350/ > > >> >> * It looks like python-openstackclient doesn't allow specifying a >> scope when issuing a token, is that going to be added? > > Yes, I have a patch up for it [6]. I wanted to get this in during > Queens, but it missed the boat. I believe this and a new release of > oslo.context are the only bits left in order for services to have > everything they need to easily consume system-scoped tokens. > Keystonemiddleware should know how to handle system-scoped tokens in > front of each service [7]. The oslo.context library should be smart > enough to handle system scope set by keystonemiddleware if context is > built from environment variables [8]. Both keystoneauth [9] and > python-keystoneclient [10] should have what they need to generate > system-scoped tokens. > > That should be enough to allow the service to pass a request environment > to oslo.context and use the context object to reason about the scope of > the request, as opposed to trying to understand different token scope > responses from keystone. We attempted to abstract that away into the > context object. > > [6] https://review.openstack.org/#/c/524416/ > [7] https://review.openstack.org/#/c/564072/ > [8] https://review.openstack.org/#/c/530509/ > [9] https://review.openstack.org/#/c/529665/ > [10] https://review.openstack.org/#/c/524415/ > >> >> The reason I'm asking about OSC stuff is because we have the >> osc-placement plugin [4] which allows users with the admin role to >> work with resources in placement, which could be useful for things >> like fixing up incorrect or leaked allocations, i.e. fixing the >> fallout of a bug in nova. I'm wondering if we define all of the >> placement API rules as system scope and we're enforcing scope, will >> admins, as we know them today, continue to be able to use those APIs? >> Or will deployments just need to grow a system-scope admin >> project/user and per-project admin users, and then use the former for >> working with placement via the OSC plugin? > > Uhm, if I understand your question, it depends on how you define the > scope types for those APIs. If you set them to system scope, then an > operator will need to use a system-scoped token in order to access those > APIs, iff the placement configuration file sets [oslo_policy] > enforce_scope = True. Otherwise, setting that option to > false will log a warning to operators saying that someone is accessing a > system-scoped API with a project-scoped token (e.g. education needs to > happen). > >> >> [1] >> https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged) >> [2] https://developer.openstack.org/api-ref/placement/#list-usages >> [3] >> https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization >> [4] https://docs.openstack.org/osc-placement/latest/index.html >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From yamamoto at midokura.com Thu May 31 05:36:08 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Thu, 31 May 2018 14:36:08 +0900 Subject: [openstack-dev] [tap-as-a-service] core reviewer update Message-ID: hi, I plan to add Kazuhiro Suzuki to the tap-as-a-service-core group. [1] He is one of the active members of the project. He is also the original author of tap-as-a-service-dashboard. I'll make the change after a week unless I hear any objections/concerns. [1] https://review.openstack.org/#/admin/groups/957,members http://stackalytics.com/report/contribution/tap-as-a-service/120 From sundar.nadathur at intel.com Thu May 31 06:02:05 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Wed, 30 May 2018 23:02:05 -0700 Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits In-Reply-To: <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc> Message-ID: On 5/30/2018 1:18 PM, Eric Fried wrote: > This all sounds fully reasonable to me. One thing, though... > >>> * There is a resource class per device category e.g. >>> CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA. > Let's propose standard resource classes for these ASAP. > > https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20eb5355a818/nova/rc_fields.py > > -efried Makes sense, Eric. The obvious names would be ACCELERATOR_GPU and ACCELERATOR_FPGA.
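Presumably the patch itself would be tiny. A sketch from memory, so double-check the actual layout of rc_fields.py -- the list name and the neighboring entries shown here are assumptions:

```python
# Sketch only; verify against the real nova/rc_fields.py structure
# before proposing a patch.
ACCELERATOR_GPU = 'ACCELERATOR_GPU'
ACCELERATOR_FPGA = 'ACCELERATOR_FPGA'

# The new names would simply join the existing standard resource
# classes, alongside entries such as VCPU, MEMORY_MB and DISK_GB.
STANDARDS = ['VCPU', 'MEMORY_MB', 'DISK_GB',
             ACCELERATOR_GPU, ACCELERATOR_FPGA]
```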
Do we just submit a patch to rc_fields.py? Thanks, Sundar From zhang.lei.fly at gmail.com Thu May 31 06:11:48 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Thu, 31 May 2018 14:11:48 +0800 Subject: [openstack-dev] [stable][kolla] tagging newton EOL In-Reply-To: <20180531001739.GC29981@thor.bakeyournoodle.com> References: <20180531001739.GC29981@thor.bakeyournoodle.com> Message-ID: I have no idea about this either, and it looks as expected. Thanks, Tony. On Thu, May 31, 2018 at 8:17 AM Tony Breeds wrote: > > On Sat, Apr 14, 2018 at 11:02:54AM +0800, Jeffrey Zhang wrote: > > hi stable team, > > > > The Kolla project is ready for Newton EOL. Since kolla-ansible was split from > > kolla in the ocata cycle, there is no newton branch in kolla-ansible. > > Please make the following repo EOL: > > > > openstack/kolla > > Okay I did this today but to be perfectly frank I suspect I've done it > wrong. > > There was already an existing tag for newton-eol pointing at > 3.0.3-20'ish so I tagged what was the HEAD of the existing newton branch > which was 3.0.0.0rc1-335'ish: > > About to delete the branch stable/newton from openstack/kolla (rev 40e768ec2a370dc010be773af37e2ce417adda80) > > I'm not really sure about the history there. I apologise if I've made a > mistake, but at least as we have everything in git we can recover the branches > and retag if required. > > Yours Tony. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Regards, Jeffrey Zhang Blog: http://xcodest.me From shigeta.soichi at jp.fujitsu.com Thu May 31 06:41:17 2018 From: shigeta.soichi at jp.fujitsu.com (Soichi Shigeta) Date: Thu, 31 May 2018 15:41:17 +0900 Subject: [openstack-dev] [tap-as-a-service] core reviewer update In-Reply-To: References: Message-ID: +1 Soichi On 2018/05/31 14:36, Takashi Yamamoto wrote: > hi, > > I plan to add Kazuhiro Suzuki to the tap-as-a-service-core group. [1] > He is one of the active members of the project. > He is also the original author of tap-as-a-service-dashboard. > I'll make the change after a week unless I hear any objections/concerns. > > [1] https://review.openstack.org/#/admin/groups/957,members > http://stackalytics.com/report/contribution/tap-as-a-service/120 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gdubreui at redhat.com Thu May 31 07:16:57 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Thu, 31 May 2018 17:16:57 +1000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: Hi Flint, I wish it was "my" summit ;) In the latter case I'd make the sessions an hour, not 20 or 40 minutes, at least for the Forum part. And I would also hold only one summit a year instead of two (which is also feedback I got from the marketplace). I passed that along during the user feedback session. Sorry for not responding earlier; @elmiko is going to send the minutes of the API SIG forum session we had. We confirmed Neutron to be the PoC. We are going to use a feature branch, waiting for Miguel Lavalle to confirm the request has been acknowledged by the Infra group. The PoC goal is to show GraphQL's efficiency. So we're going to make something straightforward: use Neutron's existing server, add the GraphQL endpoint, and cover a few core items such as networks, subnets and ports (for example).
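To give an idea of the kind of request the PoC should serve, here is a rough sketch. The schema doesn't exist yet, so the endpoint URL and the field names below are purely illustrative assumptions:

```python
# Purely illustrative -- the PoC schema is not written yet, so the
# endpoint and field names are assumptions, not the actual Neutron API.
import json
import urllib.request

QUERY = """
{
  networks {
    name
    subnets { cidr }
    ports { macAddress }
  }
}
"""

# A single round trip returns networks together with their subnets
# and ports, instead of three REST calls joined client-side.
request = urllib.request.Request(
    'http://controller:9696/graphql',  # hypothetical endpoint
    data=json.dumps({'query': QUERY}).encode(),
    headers={'Content-Type': 'application/json'})
print(urllib.request.urlopen(request).read().decode())
```

That single request is exactly the "join the parts server-side" benefit we want the PoC to demonstrate.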
Also, the idea of having a central point of access for OpenStack APIs using GraphQL stitching and delegation is exciting for everyone (and I obviously got the same feedback off the session), and that's something that could happen once the PoC has convinced people. During the meeting, Jiri Tomasek explained how GraphQL could help the TripleO UI. Effectively, they struggle with API requests and had to create a middleware module in JS to do API work and reconstruction before the JavaScript client can use it. GraphQL would simplify the process and allow them to get rid of the module. He also explained, after the meeting, how Horizon could benefit as well, allowing the use of only JS and avoiding Django altogether! I've also been told that Zuul needs GraphQL. Well, basically the question is: who doesn't need it? Cheers, Gilles On 31/05/18 03:34, Flint WALRUS wrote: > Hi Gilles, I hope you enjoyed your Summit!? > > Did you have any interesting talks to report about our little initiative? > On Sun, May 6, 2018 at 15:01, Gilles Dubreuil wrote: > > > Akihiro, thank you for your precious help! > > Regarding the choice of Neutron as PoC, I'm sorry for not > providing much details when I said "because of its specific data > model"; > effectively the original mention was "its API exposes things at > an individual table level, requiring the client to join that > information to get the answers they need". > I realize now such a description probably applies to many OpenStack > APIs. > So I'm not sure what was the reason for choosing Neutron.
I would like to note >> "its specific data model" is not the reason that makes the >> progress of API versioning slowest in the OpenStack >> community. I believe it is worth recognized as I would like >> not to block the effort due to the neutron-specific reasons. >> The most complicated point in the neutron API is that the >> neutron API layer allows neutron plugins to declare which >> features are supported. The neutron API is a collection of >> API extensions defined in the neutron-lib repo and each >> neutron plugin can declare which subset(s) of the neutron >> APIs are supported. (For more detail, you can check how the >> neutron API extension mechanism is implemented). It is not >> defined only by the neutron API layer. We need to communicate >> which API features are supported by communicating enabled >> service plugins. >> >> I am afraid that most efforts to explore a new mechanism in >> neutron will be spent to address the above points which is >> not directly related to GraphQL itself. >> Of course, it would be great if you overcome long-standing >> complicated topics as part of GraphQL effort :) >> >> I am happy to help the effort and understand how the neutron >> API is defined. >> >> Thanks, >> Akihiro >> >> >> 2018年5月5日(土) 18:16 Gilles Dubreuil > >: >> >> Hello, >> >> Few of us recently discussed [1] how GraphQL [2], the >> next evolution >> from REST, could transform OpenStack APIs for the better. >> Effectively we believe OpenStack APIs provide perfect use >> cases for >> GraphQL DSL approach, to bring among other advantages, >> better >> performance and stability, easier developments and >> consumption, and with >> GraphQL Schema provide automation capabilities never >> achieved before. >> >> The API SIG suggested to start an API GraphQL Proof of >> Concept (PoC) to >> demonstrate the capabilities before eventually extend >> GraphQL to other >> projects. >> Neutron has been selected for the PoC because of its >> specific data model. >> >> So if you are interested, please join us. >> For those who can make it, we'll also discuss this during >> the SIG API >> BoF at OpenStack Summit at Vancouver [3] >> >> To learn more about GraphQL, check-out howtographql.com >> [4]. >> >> So let's get started... 
>> >> >> [1] >> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html >> [2] http://graphql.org/ >> [3] >> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session >> [4] https://www.howtographql.com/ >> >> Regards, >> Gilles >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > -- Gilles Dubreuil Senior Software Engineer - Red Hat - Openstack DFG Integration Email: gilles at redhat.com GitHub/IRC: gildub Mobile: +61 400 894 219 From gdubreui at redhat.com Thu May 31 07:22:32 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Thu, 31 May 2018 17:22:32 +1000 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: <17F4CBAD-16C4-4FA0-A92D-C9CC81BE76EA@leafe.com> References: <17F4CBAD-16C4-4FA0-A92D-C9CC81BE76EA@leafe.com> Message-ID: On 31/05/18 13:08, Ed Leafe wrote: > On May 6, 2018, at 8:01 AM, Gilles Dubreuil wrote: > >> Regarding the choice of Neutron as PoC, I'm sorry for not providing much details when I said "because of its specific data model", >> effectively the original mention was "its API exposes things at an individual table level, requiring the client to join that information to get the answers they need". >> I realize now such description probably applies to many OpenStack APIs. >> So I'm not sure what was the reason for choosing Neutron. > Blame Monty! > > It was Monty who suggested Neutron due to his particular experience working with the API. Since none of the rest of us in the API-SIG had any better suggestions, that was what we passed on to you. But I think that any target that demonstrates the advantages to be had by adopting GraphQL is fine. So if the team working on this thinks they can create a more impressive PoC with another API, the API-SIG will support that. > > > -- Ed Leafe > > > Well, after having the story of the duck versus the duck parts (liver, heart, etc.) explained to me, it makes sense. With Neutron the API provides lots of parts, but consumers have to put the parts together to get the whole. So Neutron is a good candidate, as GraphQL will be able to show how it can fetch several parts at once (maybe not the whole beast, since the PoC will cover only a fraction of the API). And as you said, as with any API, it should allow GraphQL to show its performance anyway. So I believe we're good. Cheers, Gilles From gael.therond at gmail.com Thu May 31 07:27:37 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 31 May 2018 09:27:37 +0200 Subject: [openstack-dev] [neutron][api][grapql] Proof of Concept In-Reply-To: References: Message-ID: Hi Gilles, Ed, I'm really glad and thrilled to read such good news!
At this point it's cool to see that many initiatives have the same convergent needs regarding GraphQL, as it will give us good traction from the beginning if our PoC manages to sufficiently convince our peers. Let me know as soon as the branch has been created; I'll work on it. Regards, Fl1nt. On Thu, May 31, 2018 at 09:17, Gilles Dubreuil wrote: > Hi Flint, > > I wish it was "my" summit ;) > In the latter case I'd make the sessions an hour, not 20 or 40 minutes, > at least for the Forum part. And I would also hold only one summit a > year instead of two (which is also feedback I got from the marketplace). > I passed that along during the user feedback session. > Sorry for not responding earlier; @elmiko is going to send the minutes of > the API SIG forum session we had. > > We confirmed Neutron to be the PoC. > We are going to use a feature branch, waiting for Miguel Lavalle to > confirm the request has been acknowledged by the Infra group. > The PoC goal is to show GraphQL's efficiency. > So we're going to make something straightforward: use Neutron's existing > server, add the GraphQL endpoint, and cover a few core items such as > networks, subnets and ports (for example). > > Also, the idea of having a central point of access for OpenStack APIs using > GraphQL stitching and delegation is exciting for everyone (and I obviously > got the same feedback off the session), and that's something that could > happen once the PoC has convinced people. > > During the meeting, Jiri Tomasek explained how GraphQL could help the TripleO > UI. Effectively, they struggle with API requests and had to create a > middleware module in JS to do API work and reconstruction before the > JavaScript client can use it. GraphQL would simplify the process and allow them > to get rid of the module. He also explained, after the meeting, how Horizon > could benefit as well, allowing the use of only JS and avoiding Django altogether! > > I've also been told that Zuul needs GraphQL. > > Well, basically the question is: who doesn't need it? > > Cheers, > Gilles > > > > On 31/05/18 03:34, Flint WALRUS wrote: > > Hi Gilles, I hope you enjoyed your Summit!? > > Did you have any interesting talks to report about our little initiative? > On Sun, May 6, 2018 at 15:01, Gilles Dubreuil wrote: > >> >> Akihiro, thank you for your precious help! >> >> Regarding the choice of Neutron as PoC, I'm sorry for not providing much >> details when I said "because of its specific data model"; >> effectively the original mention was "its API exposes things at an >> individual table level, requiring the client to join that information to >> get the answers they need". >> I realize now such a description probably applies to many OpenStack APIs. >> So I'm not sure what was the reason for choosing Neutron. >> I suppose Nova is also a good candidate because its API is quite complex too, >> in a different way, and needs to expose the data API and the control API >> plane as we discussed. >> >> After all, Neutron is maybe not the best candidate, but it seems good >> enough. >> >> And as Flint says, the extension mechanism shouldn't be an issue. >> >> So if someone believes there is a better candidate for the PoC, please >> speak now. >> >> Thanks, >> Gilles >> >> PS: Flint, thank you for offering to be the advocate for Berlin. That's >> great! >> >> >> On 06/05/18 02:23, Flint WALRUS wrote: >> >> Hi Akihiro, >> >> Thanks a lot for this insight into how Neutron behaves.
>> >> We would love to get support and backing from the neutron team in order >> to be able to get the best PoC possible. >> >> Someone suggested neutron as a good choice because of it simple database >> model. As GraphQL can manage your behavior of an extension declaring its >> own schemes I don’t think it would take that much time to implement it. >> >> @Gilles, if I goes to the berlin summitt I could definitely do the >> networking and relationship work needed to get support on our PoC from >> different teams members. This would help to spread the world multiple time >> and don’t have a long time before someone come to talk about this subject >> as what happens with the 2015 talk of the Facebook speaker. >> >> Le sam. 5 mai 2018 à 18:05, Akihiro Motoki a écrit : >> >>> Hi, >>> >>> I am happy to see the effort to explore a new API mechanism. >>> I would like to see good progress and help effort as API liaison from >>> the neutron team. >>> >>> > Neutron has been selected for the PoC because of its specific data >>> model >>> >>> On the other hand, I am not sure this is the right reason to choose >>> 'neutron' only from this reason. I would like to note "its specific data >>> model" is not the reason that makes the progress of API versioning slowest >>> in the OpenStack community. I believe it is worth recognized as I would >>> like not to block the effort due to the neutron-specific reasons. >>> The most complicated point in the neutron API is that the neutron API >>> layer allows neutron plugins to declare which features are supported. The >>> neutron API is a collection of API extensions defined in the neutron-lib >>> repo and each neutron plugin can declare which subset(s) of the neutron >>> APIs are supported. (For more detail, you can check how the neutron API >>> extension mechanism is implemented). It is not defined only by the neutron >>> API layer. We need to communicate which API features are supported by >>> communicating enabled service plugins. >>> >>> I am afraid that most efforts to explore a new mechanism in neutron will >>> be spent to address the above points which is not directly related to >>> GraphQL itself. >>> Of course, it would be great if you overcome long-standing complicated >>> topics as part of GraphQL effort :) >>> >>> I am happy to help the effort and understand how the neutron API is >>> defined. >>> >>> Thanks, >>> Akihiro >>> >>> >>> 2018年5月5日(土) 18:16 Gilles Dubreuil : >>> >>>> Hello, >>>> >>>> Few of us recently discussed [1] how GraphQL [2], the next evolution >>>> from REST, could transform OpenStack APIs for the better. >>>> Effectively we believe OpenStack APIs provide perfect use cases for >>>> GraphQL DSL approach, to bring among other advantages, better >>>> performance and stability, easier developments and consumption, and >>>> with >>>> GraphQL Schema provide automation capabilities never achieved before. >>>> >>>> The API SIG suggested to start an API GraphQL Proof of Concept (PoC) to >>>> demonstrate the capabilities before eventually extend GraphQL to other >>>> projects. >>>> Neutron has been selected for the PoC because of its specific data >>>> model. >>>> >>>> So if you are interested, please join us. >>>> For those who can make it, we'll also discuss this during the SIG API >>>> BoF at OpenStack Summit at Vancouver [3] >>>> >>>> To learn more about GraphQL, check-out howtographql.com [4]. >>>> >>>> So let's get started... 
>>>> >>>> >>>> [1] >>>> http://lists.openstack.org/pipermail/openstack-dev/2018-May/130054.html >>>> [2] http://graphql.org/ >>>> [3] >>>> >>>> https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21798/api-special-interest-group-session >>>> [4] https://www.howtographql.com/ >>>> >>>> Regards, >>>> Gilles >>>> >>>> >>>> >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> -- >> Gilles Dubreuil >> Senior Software Engineer - Red Hat - Openstack DFG Integration >> Email: gilles at redhat.com >> GitHub/IRC: gildub >> Mobile: +61 400 894 219 >> >> >> > -- > Gilles Dubreuil > Senior Software Engineer - Red Hat - Openstack DFG Integration > Email: gilles at redhat.com > GitHub/IRC: gildub > Mobile: +61 400 894 219 > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ghanshyammann at gmail.com Thu May 31 07:39:59 2018 From: ghanshyammann at gmail.com (Ghanshyam Mann) Date: Thu, 31 May 2018 16:39:59 +0900 Subject: [openstack-dev] Questions about token scopes In-Reply-To: References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> Message-ID: On Thu, May 31, 2018 at 2:09 PM, Ghanshyam Mann wrote: > On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad wrote: >> >> >> On 05/30/2018 08:47 AM, Matt Riedemann wrote: >>> I know the keystone team has been doing a lot of work on scoped tokens >>> and Lance has been trying to roll that out to other projects (like nova). >>> >>> In Rocky the nova team is adding granular policy rules to the >>> placement API [1] which is a good opportunity to set scope on those >>> rules as well. >>> >>> For now, we've just said everything is system scope since resources in >>> placement, for the most part, are managed by "the system". But we do >>> have some resources in placement which have project/user information >>> in them, so could theoretically also be scoped to a project, like GET >>> /usages [2]. > > Just adding that this is same for nova policy also. As you might know > spec[1] try to make nova policy more granular but on hold because of > default roles things. We will do policy rule split with more better > defaults values like read-only for GET APIs. > > Along with that, like you mentioned about scope setting for placement > policy rules, we need to do same for nova policy also. That can be > done later or together with nova policy granular. spec. > > [1] https://review.openstack.org/#/c/547850/ > >>> >>> While going through this, I've been hammering Lance with questions but >>> I had some more this morning and wanted to send them to the list to >>> help spread the load and share the knowledge on working with scoped >>> tokens in the other projects. >> >> ++ good idea >> >>> >>> So here goes with the random questions: >>> >>> * devstack has the admin project/user - does that by default get >>> system scope tokens? I see the scope is part of the token create >>> request [3] but it's optional, so is there a default value if not >>> specified? 
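For anyone not following the placement review, this is the shape of rule being discussed. A hedged sketch: DocumentedRuleDefault and its scope_types parameter are the real oslo.policy interface, but the rule name and check string below are invented for illustration:

```python
# Illustrative only -- the rule name and check string are invented;
# DocumentedRuleDefault and scope_types are the real oslo.policy API.
from oslo_policy import policy

LIST_USAGES = policy.DocumentedRuleDefault(
    name='placement:usages:list',
    check_str='role:admin',
    description='List resource usages.',
    operations=[{'path': '/usages', 'method': 'GET'}],
    # With [oslo_policy] enforce_scope = False, a project-scoped token
    # hitting this rule only triggers a warning; True makes it fail.
    scope_types=['system'],
)
```

With enforce_scope left at False for a cycle, such rules would give operators the warning period described above before the hard enforcement starts.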
>> >> No, not necessarily. The keystone-manage bootstrap command is what >> bootstraps new deployments with the admin user, an admin role, a project >> to work in, etc. It also grants the newly created admin user the admin >> role on a project and the system. This functionality was added in Queens >> [0]. This should be backwards compatible and allow the admin user to get >> tokens scoped to whatever they had authorization on previously. The only >> thing they should notice is that they have another role assignment on >> something called the "system". That being said, they can start >> requesting system-scoped tokens from keystone. We have a document that >> tries to explain the differences in scopes and what they mean [1]. > > Another related question is, does scope setting will impact existing > operator? I mean when policy rule start setting scope, that might > break the existing operator as their current token (say project > scoped) might not be able to authorize the policy modified with > setting the system scope. > > In that case, how we are going to avoid the upgrade break. One way can > be to soft enforcement scope things for a cycle with warning and then > start enforcing that after one cycle (like we do for any policy rule > change)? but not sure at this point. ^^ this is basically the same question i got while this review -https://review.openstack.org/#/c/570621/1/nova/api/openstack/placement/policies/aggregate.py Checking how scope_type will affect existing operator(token) so that we can evaluate the upgrade impact. > >> >> [0] https://review.openstack.org/#/c/530410/ >> [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html >> >>> >>> * Why don't the token create and show APIs return the scope? >> >> Good question. In a way, they do. If you look at a response when you >> authenticate for a token or validate a token, you should see an object >> contained within the token reference for the purpose of scope. For >> example, a project-scoped token will have a project object in the >> response [2]. A domain-scoped token will have a domain object in the >> response [3]. The same is true for system scoped tokens [4]. Unscoped >> tokens do not have any of these objects present and do not contain a >> service catalog [5]. While scope isn't explicitly denoted by an >> attribute, it can be derived from the attributes of the token response. >> >> [2] http://paste.openstack.org/raw/722349/ >> [3] http://paste.openstack.org/raw/722351/ >> [4] http://paste.openstack.org/raw/722348/ >> [5] http://paste.openstack.org/raw/722350/ >> >> >>> >>> * It looks like python-openstackclient doesn't allow specifying a >>> scope when issuing a token, is that going to be added? >> >> Yes, I have a patch up for it [6]. I wanted to get this in during >> Queens, but it missed the boat. I believe this and a new release of >> oslo.context are the only bits left in order for services to have >> everything they need to easily consume system-scoped tokens. >> Keystonemiddleware should know how to handle system-scoped tokens in >> front of each service [7]. The oslo.context library should be smart >> enough to handle system scope set by keystonemiddleware if context is >> built from environment variables [8]. Both keystoneauth [9] and >> python-keystoneclient [10] should have what they need to generate >> system-scoped tokens. >> >> That should be enough to allow the service to pass a request environment >> to oslo.context and use the context object to reason about the scope of >> the request. 
As opposed to trying to understand different token scope >> responses from keystone. We attempted to abstract that away in to the >> context object. >> >> [6] https://review.openstack.org/#/c/524416/ >> [7] https://review.openstack.org/#/c/564072/ >> [8] https://review.openstack.org/#/c/530509/ >> [9] https://review.openstack.org/#/c/529665/ >> [10] https://review.openstack.org/#/c/524415/ >> >>> >>> The reason I'm asking about OSC stuff is because we have the >>> osc-placement plugin [4] which allows users with the admin role to >>> work with resources in placement, which could be useful for things >>> like fixing up incorrect or leaked allocations, i.e. fixing the >>> fallout of a bug in nova. I'm wondering if we define all of the >>> placement API rules as system scope and we're enforcing scope, will >>> admins, as we know them today, continue to be able to use those APIs? >>> Or will deployments just need to grow a system-scope admin >>> project/user and per-project admin users, and then use the former for >>> working with placement via the OSC plugin? >> >> Uhm, if I understand your question, it depends on how you define the >> scope types for those APIs. If you set them to system-scope, then an >> operator will need to use a system-scoped token in order to access those >> APIs iff the placement configuration file contains placement.conf >> [oslo.policy] enforce_scope = True. Otherwise, setting that option to >> false will log a warning to operators saying that someone is accessing a >> system-scoped API with a project-scoped token (e.g. education needs to >> happen). >> >>> >>> [1] >>> https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged) >>> [2] https://developer.openstack.org/api-ref/placement/#list-usages >>> [3] >>> https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization >>> [4] https://docs.openstack.org/osc-placement/latest/index.html >>> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> From thierry at openstack.org Thu May 31 08:31:12 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 31 May 2018 10:31:12 +0200 Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky In-Reply-To: <1527705783-sup-2521@lrrr.local> References: <1527628983-sup-2281@lrrr.local> <1527705783-sup-2521@lrrr.local> Message-ID: <0397c253-03af-5c72-30c1-15f3edd43d75@openstack.org> Doug Hellmann wrote: > [...] > I'm missing details and/or whole topics. Please review the list and > make any updates you think are necessary. One thing that was raised at the Board+TC+UC meeting is the idea of creating a group to help with wording and communication of "help most needed" list items, so that they contain more business-value explanation and get more regular status updates at the Board... If I remember correctly, Chris Price, dims and you volunteered :) I'm happy to help too. Is that something you would like to track on this document as well ? 
-- Thierry Carrez (ttx) From thierry at openstack.org Thu May 31 08:33:51 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 31 May 2018 10:33:51 +0200 Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services In-Reply-To: <1527710294.31249.24.camel@redhat.com> References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> Message-ID: <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org> Ade Lee wrote: > [...] > So it seems that the two blockers above have been resolved. So is it > time to add a castellan-compatible secret store to the base services? It's definitely time to start a discussion about it, at least :) Would you be interested in starting a ML thread about it? If not, that's probably something I can do :) -- Thierry Carrez (ttx) From thierry at openstack.org Thu May 31 08:50:59 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 31 May 2018 10:50:59 +0200 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> <20180531010957.GA1354@zeong> Message-ID: <3a59cb5f-599d-4a89-40ec-e2610ef1d821@openstack.org> Davanum Srinivas wrote: >>>> "master should be always deployable and fully backward compatible and >>>> so we cant let anything in anytime that could possibly regress anyone" >>>> >>>> Should we change that attitude too? Anyone agree? disagree? >>>> >>>> Thanks, >>>> Dims >>>> >>> I'll definitely jump at this one. >>> >>> I've always thought (and shared on the ML several times now) that our >>> implied >>> but not explicit support for CD from any random commit was a bad thing. >>> >>> While I think it's good to support the idea that master is always >>> deployable, I >>> do not think it is a good mindset to think that every commit is a >>> "release" and >>> therefore should be supported until the end of time. We have a coordinated >>> release for a reason, and I think design decisions and fixes should be >>> based on >>> the assumption that a release is a release and the point at which we >>> need to be >>> cognizant and caring about keeping backward compatibility. Doing that for >>> every single commit is not ideal for the overall health of the product, IMO. >>> >> >> It's more than just a CD guarantee; while from a quick glance that might seem like >> the only value, it goes much deeper than that. Ensuring that every commit >> works, is deployable, and maintains backwards compatibility is what enables us >> to have such a high quality end result at release time. Quite frankly it's >> looking at every commit as always being a working unit that enables us to manage >> a project this size at this velocity. Even if people assume no one is actually >> CDing the projects (which we shouldn't), it's a flawed assumption to think that >> everyone is running strictly the same code as what's in the release tarballs. I >> can't think of any production cloud out there that doesn't carry patches to fix >> things encountered in the real world. Or look at stable maint: we regularly need >> to backport fixes for bugs found after release. If we can't rely on these to >> always work, this makes our life much more difficult, both as upstream >> maintainers and as downstream consumers of OpenStack.
>> >> The other aspect to look at here is just the review mindset: supporting the idea that >> every commit is usable puts reviewers in the mindset to consider things like >> backwards compatibility and deployability when looking at proposed changes. If >> we stop looking for these potential issues, it will also cause many more bugs >> to be in our released code. Simply discounting this as only a release concern >> and punting this kind of scrutiny until it's time to release is only going to >> make release time much more stressful. Also, our testing is built to try and >> ensure every commit works **before** we merge it. If we decided to take this >> stance as a community then we should really just rip out all the testing, >> because that's what it's there to verify and help us make sure we don't land a >> change that doesn't work. If we don't actually care about making sure every >> commit is deployable, we are wasting quite a lot of resources on it. > > "rip out all testing" is probably taking it too far, Matt. > > Instead of perfection when merging, we should look for iteration and > reverts. That's what I would like to see. I am not asking for a > "Commit-Then-Review" model like the ASF. I want us to just be practical > and have some leeway to iterate / update / experiment instead of demanding > absolute perfection from all angles. We should move the needle at > least a bit away from it. Right... There might be a reasonable middle ground between "every commit on master must be backward-compatible" and "rip out all testing" that allows us to routinely revert broken feature commits (as long as they don't cross a release boundary). To be fair, I'm pretty sure that's already the case: we did revert feature commits on master in the past, thereby breaking backward compatibility if someone had started to use that feature right away. It's the issue with implicit rules: everyone interprets them the way they want... So I think that could use some explicit clarification. [ This tangent should probably get its own thread to not disrupt the no-nitpicking discussion ] -- Thierry Carrez (ttx) From alex.william at microfocus.com Thu May 31 08:52:48 2018 From: alex.william at microfocus.com (., Alex Dominic Savio) Date: Thu, 31 May 2018 08:52:48 +0000 Subject: [openstack-dev] Help required to install devstack with GBP In-Reply-To: References: Message-ID: Hi Experts, I have been trying to install devstack with GBP as per the instructions given in the GitHub repo https://github.com/openstack/group-based-policy I am running this on Ubuntu 16.x as well as 14.x, but neither attempt was successful. It fails stating "neutron is not started". Can you please help me get past this issue? Thanks & Regards, Alex Dominic Savio Product Manager, ITOM-HCM Micro Focus Bagmane Tech Park Bangalore, India. (M)+91 9880634388 alex.william at microfocus.com ________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.jpg Type: image/jpeg Size: 1373 bytes Desc: image001.jpg URL: From sbauza at redhat.com Thu May 31 09:10:55 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 31 May 2018 11:10:55 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <1527678362.3825.3@smtp.office365.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> Message-ID: On Wed, May 30, 2018 at 1:06 PM, Balázs Gibizer wrote: > > > On Tue, May 29, 2018 at 3:12 PM, Sylvain Bauza wrote: > >> >> >> On Tue, May 29, 2018 at 2:21 PM, Balázs Gibizer < >> balazs.gibizer at ericsson.com> wrote: >> >>> >>> >>> On Tue, May 29, 2018 at 1:47 PM, Sylvain Bauza >>> wrote: >>> >>>> >>>> >>>> Le mar. 29 mai 2018 à 11:02, Balázs Gibizer < >>>> balazs.gibizer at ericsson.com> a écrit : >>>> >>>>> >>>>> >>>>> On Tue, May 29, 2018 at 9:38 AM, Sylvain Bauza >>>>> wrote: >>>>> > >>>>> > >>>>> > On Tue, May 29, 2018 at 3:08 AM, TETSURO NAKAMURA >>>>> > wrote >>>>> > >>>>> >> > In that situation, say for example with VGPU inventories, that >>>>> >> would mean >>>>> >> > that the compute node would stop reporting inventories for its >>>>> >> root RP, but >>>>> >> > would rather report inventories for at least one single child RP. >>>>> >> > In that model, do we reconcile the allocations that were already >>>>> >> made >>>>> >> > against the "root RP" inventory ? >>>>> >> >>>>> >> It would be nice to see Eric and Jay comment on this, >>>>> >> but if I'm not mistaken, when the virt driver stops reporting >>>>> >> inventories for its root RP, placement would try to delete that >>>>> >> inventory inside and raise InventoryInUse exception if any >>>>> >> allocations still exist on that resource. >>>>> >> >>>>> >> ``` >>>>> >> update_from_provider_tree() (nova/compute/resource_tracker.py) >>>>> >> + _set_inventory_for_provider() (nova/scheduler/client/report.py) >>>>> >> + put() - PUT /resource_providers//inventories with >>>>> >> new inventories (scheduler/client/report.py) >>>>> >> + set_inventories() (placement/handler/inventory.py) >>>>> >> + _set_inventory() >>>>> >> (placement/objects/resource_proveider.py) >>>>> >> + _delete_inventory_from_provider() >>>>> >> (placement/objects/resource_proveider.py) >>>>> >> -> raise exception.InventoryInUse >>>>> >> ``` >>>>> >> >>>>> >> So we need some trick something like deleting VGPU allocations >>>>> >> before upgrading and set the allocation again for the created new >>>>> >> child after upgrading? >>>>> >> >>>>> > >>>>> > I wonder if we should keep the existing inventory in the root RP, and >>>>> > somehow just reserve the left resources (so Placement wouldn't pass >>>>> > that root RP for queries, but would still have allocations). But >>>>> > then, where and how to do this ? By the resource tracker ? >>>>> > >>>>> >>>>> AFAIK it is the virt driver that decides to model the VGU resource at a >>>>> different place in the RP tree so I think it is the responsibility of >>>>> the same virt driver to move any existing allocation from the old place >>>>> to the new place during this change. >>>>> >>>>> Cheers, >>>>> gibi >>>>> >>>> >>>> Why not instead not move the allocation but rather have the virt driver >>>> updating the root RP by modifying the reserved value to the total size? >>>> >>>> That way, the virt driver wouldn't need to ask for an allocation but >>>> rather continue to provide inventories... 
>>>> >>>> Thoughts? >>>> >>> >>> Keeping the old allocaton at the old RP and adding a similar sized >>> reservation in the new RP feels hackis as those are not really reserved >>> GPUs but used GPUs just from the old RP. If somebody sums up the total >>> reported GPUs in this setup via the placement API then she will get more >>> GPUs in total that what is physically visible for the hypervisor as the >>> GPUs part of the old allocation reported twice in two different total >>> value. Could we just report less GPU inventories to the new RP until the >>> old RP has GPU allocations? >>> >>> >> >> We could keep the old inventory in the root RP for the previous vGPU type >> already supported in Queens and just add other inventories for other vGPU >> types now supported. That looks possibly the simpliest option as the virt >> driver knows that. >> > > That works for me. Can we somehow deprecate the previous, already > supported vGPU types to eventually get rid of the splitted inventory? > > >> >> Some alternatives from my jetlagged brain: >>> >>> a) Implement a move inventory/allocation API in placement. Given a >>> resource class and a source RP uuid and a destination RP uuid placement >>> moves the inventory and allocations of that resource class from the source >>> RP to the destination RP. Then the virt drive can call this API to move the >>> allocation. This has an impact on the fast forward upgrade as it needs >>> running virt driver to do the allocation move. >>> >>> >> Instead of having the virt driver doing that (TBH, I don't like that >> given both Xen and libvirt drivers have the same problem), we could write a >> nova-manage upgrade call for that that would call the Placement API, sure. >> > > The nova-manage is another possible way similar to my idea #c) but there I > imagined the logic in placement-manage instead of nova-manage. > > >> b) For this I assume that live migrating an instance having a GPU >>> allocation on the old RP will allocate GPU for that instance from the new >>> RP. In the virt driver do not report GPUs to the new RP while there is >>> allocation for such GPUs in the old RP. Let the deployer live migrate away >>> the instances. When the virt driver detects that there is no more GPU >>> allocations on the old RP it can delete the inventory from the old RP and >>> report it to the new RP. >>> >>> >> For the moment, vGPUs don't support live migration, even within QEMU. I >> haven't checked that, but IIUC when you live-migrate an instance that have >> vGPUs, it will just migrate it without recreating the vGPUs. >> > > If there is no live migration support for vGPUs then this option can be > ignored. > > > Now, the problem is with the VGPU allocation, we should delete it then. >> Maybe a new bug report ? >> > > Sounds like a bug report to me :) > > >> c) For this I assume that there is no support for live migration of an >>> instance having a GPU. If there is GPU allocation in the old RP then virt >>> driver does not report GPU inventory to the new RP just creates the new >>> nested RPs. Provide a placement-manage command to do the inventory + >>> allocation copy from the old RP to the new RP. >>> >>> >> what's the difference with the first alternative ? >> > > I think after you mentioned nova-manage for the first alternative the > difference became only doing it from nova-manage or from placement-manage. 
> The placement-manage solution has the benefit of being a pure DB operation, moving inventory and allocation between two RPs, while nova-manage would need to call a new placement API.

After considering the whole approach, discussing with a couple of folks over IRC, here is what I feel is the best approach for a seamless upgrade :

- VGPU inventory will be kept on the root RP (for the first type) in Queens so that a compute service upgrade won't impact the DB
- during Queens, operators can run a DB online migration script (like the ones we currently have in https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new resource provider for the first type and move the inventory and allocations to it.
- it's the responsibility of the virt driver code to check whether a child RP with its name being the first type name already exists, to know whether to update the inventory against the root RP or the child RP.

Does it work for folks ?

PS : we already have the plumbing in place in nova-manage and we're still managing full Nova resources. I know we plan to move Placement out of the nova tree, but for the Rocky timeframe, I feel we can consider nova-manage as the best and quickest approach for the data upgrade.

-Sylvain

>> Anyway, looks like it's pretty simple to just keep the inventory for the already existing vGPU type in the root RP, and just add nested RPs for other vGPU types.
>> Oh, and btw. we could possibly have the same problem when we implement the NUMA spec that I need to rework https://review.openstack.org/#/c/552924/
>
> If we want to move the VCPU resources from the root to the nested NUMA RP then yes, that feels like the same problem.
>
> gibi
>
>> -Sylvain
>>> Cheers,
>>> gibi
>>>>> > -Sylvain
>>>>>
>>>>> __________________________________________________________________________
>>>>> OpenStack Development Mailing List (not for usage questions)
>>>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From glongwave at gmail.com  Thu May 31 09:27:05 2018
From: glongwave at gmail.com (ChangBo Guo)
Date: Thu, 31 May 2018 17:27:05 +0800
Subject: [openstack-dev] [oslo] Summit onboarding and project update slides
In-Reply-To: 
References: 
Message-ID: 

Thanks Ben

2018-05-31 6:48 GMT+08:00 Ben Nemec:

> As promised in the sessions, here are the slides that were presented:
>
> https://www.slideshare.net/BenNemec1/oslo-vancouver-onboarding
>
> https://www.slideshare.net/BenNemec1/oslo-vancouver-project-update
>
> The font in the onboarding one got a little funny in the conversion, so if you want to see the original that is more readable let me know and I can send it to you.
> -Ben
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-- 
ChangBo Guo(gcb) Community Director @EasyStack
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From balazs.gibizer at ericsson.com  Thu May 31 09:34:36 2018
From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer)
Date: Thu, 31 May 2018 11:34:36 +0200
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com>
Message-ID: <1527759276.19128.0@smtp.office365.com>

On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza wrote:
>
> After considering the whole approach, discussing with a couple of folks over IRC, here is what I feel is the best approach for a seamless upgrade :
> - VGPU inventory will be kept on the root RP (for the first type) in Queens so that a compute service upgrade won't impact the DB
> - during Queens, operators can run a DB online migration script (like the ones we currently have in https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new resource provider for the first type and move the inventory and allocations to it.
> - it's the responsibility of the virt driver code to check whether a child RP with its name being the first type name already exists, to know whether to update the inventory against the root RP or the child RP.
>
> Does it work for folks ?

+1 works for me
gibi

> PS : we already have the plumbing in place in nova-manage and we're still managing full Nova resources. I know we plan to move Placement out of the nova tree, but for the Rocky timeframe, I feel we can consider nova-manage as the best and quickest approach for the data upgrade.
>
> -Sylvain

From tenobreg at redhat.com  Thu May 31 10:20:45 2018
From: tenobreg at redhat.com (Telles Nobrega)
Date: Thu, 31 May 2018 07:20:45 -0300
Subject: [openstack-dev] [sahara] Canceling today's meeting
Message-ID: 

Hi saharans and interested folks, we won't be having a meeting today since at least half of our team is on PTO today.

We will be back next Thursday.

See you all.
-- 

TELLES NOBREGA
SOFTWARE ENGINEER
Red Hat Brasil
Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo
tenobreg at redhat.com

TRIED. TESTED. TRUSTED.
Red Hat is recognized among the best companies to work for in Brazil by Great Place to Work.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Thu May 31 13:00:47 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 31 May 2018 13:00:47 +0000
Subject: [openstack-dev] [tripleo] [barbican] [tc] key store in base services
In-Reply-To: <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org>
References: <20180516174209.45ghmqz7qmshsd7g@yuggoth.org> <16b41f65-053b-70c3-b95f-93b763a5f4ae@openstack.org> <1527710294.31249.24.camel@redhat.com> <86bf4382-2bdd-02f9-5544-9bad6190263b@openstack.org>
Message-ID: <20180531130047.q2x2gmhkredaqxis@yuggoth.org>

On 2018-05-31 10:33:51 +0200 (+0200), Thierry Carrez wrote:
> Ade Lee wrote:
> > [...]
> > So it seems that the two blockers above have been resolved.
> > So is it time to add a castellan-compatible secret store to the base services?
>
> It's definitely time to start a discussion about it, at least :)
>
> Would you be interested in starting an ML thread about it ? If not, that's probably something I can do :)

That was, in fact, the entire reason I started this subthread, changed the subject and added the [tc] tag. ;)

http://lists.openstack.org/pipermail/openstack-dev/2018-May/130567.html

I figured I'd let it run through the summit to garner feedback before proposing the corresponding Gerrit change.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From doug at doughellmann.com  Thu May 31 13:03:21 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 31 May 2018 09:03:21 -0400
Subject: [openstack-dev] [tc][forum] TC Retrospective for Queens/Rocky
In-Reply-To: <0397c253-03af-5c72-30c1-15f3edd43d75@openstack.org>
References: <1527628983-sup-2281@lrrr.local> <1527705783-sup-2521@lrrr.local> <0397c253-03af-5c72-30c1-15f3edd43d75@openstack.org>
Message-ID: <1527771702-sup-2622@lrrr.local>

Excerpts from Thierry Carrez's message of 2018-05-31 10:31:12 +0200:
> Doug Hellmann wrote:
> > [...]
> > I'm missing details and/or whole topics. Please review the list and make any updates you think are necessary.
>
> One thing that was raised at the Board+TC+UC meeting is the idea of creating a group to help with wording and communication of "help most needed" list items, so that they contain more business-value explanation and get more regular status updates at the Board...
>
> If I remember correctly, Chris Price, dims and you volunteered :) I'm happy to help too.
>
> Is that something you would like to track on this document as well ?

Yes, that would be a good thing to add. I also still plan to send a summary of the meeting from my perspective.

Doug

From openstack at fried.cc  Thu May 31 13:49:36 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 31 May 2018 08:49:36 -0500
Subject: [openstack-dev] [Cyborg] [Nova] Cyborg traits
In-Reply-To: 
References: <1e33d001-ae8c-c28d-0ab6-fa061c5d362b@intel.com> <37700cc2-a79c-30ea-d986-e18584cc0464@fried.cc>
Message-ID: <3fc4ed48-125f-7479-7ea7-a370e7450df3@fried.cc>

Yup. I'm sure reviewers will bikeshed the names, but the review is the appropriate place for that to happen.

A couple of test changes will also be required. You can have a look at [1] as an example to follow.

-efried

[1] https://review.openstack.org/#/c/511180/

On 05/31/2018 01:02 AM, Nadathur, Sundar wrote:
> On 5/30/2018 1:18 PM, Eric Fried wrote:
>> This all sounds fully reasonable to me.  One thing, though...
>>
>>>> * There is a resource class per device category e.g. CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
>> Let's propose standard resource classes for these ASAP.
>>
>> https://github.com/openstack/nova/blob/d741f624c81baf89fc8b6b94a2bc20eb5355a818/nova/rc_fields.py
>>
>> -efried
> Makes sense, Eric. The obvious names would be ACCELERATOR_GPU and ACCELERATOR_FPGA. Do we just submit a patch to rc_fields.py?
> Thanks,
> Sundar

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From openstack at fried.cc  Thu May 31 13:54:13 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 31 May 2018 08:54:13 -0500
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <1527759276.19128.0@smtp.office365.com>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <1527759276.19128.0@smtp.office365.com>
Message-ID: <9cba3398-0972-61df-b143-999596775342@fried.cc>

This seems reasonable, but...

On 05/31/2018 04:34 AM, Balázs Gibizer wrote:
>
> On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza wrote:
>>
>> After considering the whole approach, discussing with a couple of folks over IRC, here is what I feel is the best approach for a seamless upgrade :
>>  - VGPU inventory will be kept on the root RP (for the first type) in Queens so that a compute service upgrade won't impact the DB
>>  - during Queens, operators can run a DB online migration script (like
-------------^^^^^^
Did you mean Rocky?
>> the ones we currently have in https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new resource provider for the first type and move the inventory and allocations to it.
>>  - it's the responsibility of the virt driver code to check whether a child RP with its name being the first type name already exists, to know whether to update the inventory against the root RP or the child RP.
>>
>> Does it work for folks ?
>
> +1 works for me
> gibi
>
>> PS : we already have the plumbing in place in nova-manage and we're still managing full Nova resources. I know we plan to move Placement out of the nova tree, but for the Rocky timeframe, I feel we can consider nova-manage as the best and quickest approach for the data upgrade.
>>
>> -Sylvain

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From naichuan.sun at citrix.com  Thu May 31 14:14:48 2018
From: naichuan.sun at citrix.com (Naichuan Sun)
Date: Thu, 31 May 2018 14:14:48 +0000
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <9cba3398-0972-61df-b143-999596775342@fried.cc>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <1527759276.19128.0@smtp.office365.com> <9cba3398-0972-61df-b143-999596775342@fried.cc>
Message-ID: <085ff254c81641c3925c62a99f1730ed@SINPEX02CL01.citrite.net>

I can do it on the XenServer side, although keeping the old inventory in the compute node RP looks weird to me (it just works for one case: upgrade)...
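To illustrate, the driver-side check Sylvain describes could look roughly like the sketch below, written against nova's ProviderTree helpers. This is only an assumption of how it might be wired up; the helper names (_build_vgpu_inventory, _vgpu_type_name) are invented for the example and are not real driver code.

```python
# Rough sketch, not real driver code: pick where VGPU inventory goes
# depending on whether the online data migration has already created
# the child RP named after the vGPU type.
def _report_vgpu_inventory(self, provider_tree, nodename):
    vgpu_inv = self._build_vgpu_inventory()   # hypothetical helper
    child_name = self._vgpu_type_name()       # hypothetical helper

    if provider_tree.exists(child_name):
        # Post-migration: the child RP exists, so report VGPU there.
        provider_tree.update_inventory(child_name, vgpu_inv)
    else:
        # Pre-migration: keep VGPU on the root (compute node) RP so a
        # plain service upgrade does not touch the DB.
        inventory = provider_tree.data(nodename).inventory
        inventory.update(vgpu_inv)
        provider_tree.update_inventory(nodename, inventory)
```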
-----Original Message-----
From: Eric Fried [mailto:openstack at fried.cc]
Sent: Thursday, May 31, 2018 9:54 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers

This seems reasonable, but...

On 05/31/2018 04:34 AM, Balázs Gibizer wrote:
>
> On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza wrote:
>>
>> After considering the whole approach, discussing with a couple of folks over IRC, here is what I feel is the best approach for a seamless upgrade :
>>  - VGPU inventory will be kept on the root RP (for the first type) in Queens so that a compute service upgrade won't impact the DB
>>  - during Queens, operators can run a DB online migration script (like
-------------^^^^^^
Did you mean Rocky?
>> the ones we currently have in https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new resource provider for the first type and move the inventory and allocations to it.
>>  - it's the responsibility of the virt driver code to check whether a child RP with its name being the first type name already exists, to know whether to update the inventory against the root RP or the child RP.
>>
>> Does it work for folks ?
>
> +1 works for me
> gibi
>
>> PS : we already have the plumbing in place in nova-manage and we're still managing full Nova resources. I know we plan to move Placement out of the nova tree, but for the Rocky timeframe, I feel we can consider nova-manage as the best and quickest approach for the data upgrade.
>>
>> -Sylvain

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From lbragstad at gmail.com  Thu May 31 14:15:53 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 31 May 2018 09:15:53 -0500
Subject: [openstack-dev] Questions about token scopes
In-Reply-To: <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com>
References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> <40b4e723-6915-7b01-04a3-7b96f39032ae@gmail.com>
Message-ID: 

On 05/30/2018 03:37 PM, Matt Riedemann wrote:
> On 5/30/2018 9:53 AM, Lance Bragstad wrote:
>> While scope isn't explicitly denoted by an attribute, it can be derived from the attributes of the token response.
>
> Yeah, this was confusing to me, which is why I reported it as a bug in the API reference documentation:
>
> https://bugs.launchpad.net/keystone/+bug/1774229
>
>>> * It looks like python-openstackclient doesn't allow specifying a scope when issuing a token, is that going to be added?
>> Yes, I have a patch up for it [6]. I wanted to get this in during Queens, but it missed the boat. I believe this and a new release of oslo.context are the only bits left in order for services to have everything they need to easily consume system-scoped tokens. Keystonemiddleware should know how to handle system-scoped tokens in front of each service [7].
>> The oslo.context library should be smart enough to handle system scope set by keystonemiddleware if context is built from environment variables [8]. Both keystoneauth [9] and python-keystoneclient [10] should have what they need to generate system-scoped tokens.
>>
>> That should be enough to allow the service to pass a request environment to oslo.context and use the context object to reason about the scope of the request, as opposed to trying to understand different token scope responses from keystone. We attempted to abstract that away into the context object.
>>
>> [6] https://review.openstack.org/#/c/524416/
>> [7] https://review.openstack.org/#/c/564072/
>> [8] https://review.openstack.org/#/c/530509/
>> [9] https://review.openstack.org/#/c/529665/
>> [10] https://review.openstack.org/#/c/524415/
>
> I think your reply in IRC was more what I was looking for:
>
> lbragstad    mriedem: if you install https://review.openstack.org/#/c/524416/5 locally with devstack and setup a clouds.yaml, ``openstack token issue --os-cloud devstack-system-admin`` should work    15:39
> lbragstad    http://paste.openstack.org/raw/722357/    15:39
>
> So users with the system role will need to create a token using that role to get the system-scoped token, as far as I understand. There is no --scope option on the 'openstack token issue' CLI.
>
>> Uhm, if I understand your question, it depends on how you define the scope types for those APIs. If you set them to system-scope, then an operator will need to use a system-scoped token in order to access those APIs iff the placement configuration file (placement.conf) contains [oslo.policy] enforce_scope = True. Otherwise, setting that option to false will log a warning to operators saying that someone is accessing a system-scoped API with a project-scoped token (e.g. education needs to happen).
>
> All placement APIs will be system scoped for now, so yeah I guess if operators enable scope enforcement they'll just have to learn how to deal with system-scope enforced APIs.
>
> Here is another random question:
>
> Do we have any CI jobs running devstack/tempest with scope enforcement enabled to see what blows up?

Yes and no. There is an effort to include CI testing of some sort, building on devstack, tempest, and patrole [0]. We actually have a specification that details how we plan to start testing these changes with an experimental job, once we get the correct RBAC behavior that we want [1]. If anyone has cycles or is interested in test coverage for this type of stuff, please don't hesitate to reach out. We could really use some help in this area and we have a pretty good plan in place.

[0] https://github.com/openstack/patrole
[1] https://review.openstack.org/#/c/464678/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From aschultz at redhat.com  Thu May 31 14:22:04 2018
From: aschultz at redhat.com (Alex Schultz)
Date: Thu, 31 May 2018 08:22:04 -0600
Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules
Message-ID: 

On Wed, May 30, 2018 at 3:18 PM, Remo Mattei wrote:
> Hello all,
> I have talked to several people about this and I would love to get this finalized once and for all.
> I have checked the OpenStack puppet modules, which are mostly developed by the Red Hat team. As of right now, TripleO is using a combo of Ansible and puppet to deploy, but in the next couple of releases, the plan is to move away from the puppet option.

So the OpenStack puppet modules are maintained by people other than Red Hat as well; however, we have been a major contributor since TripleO has relied on them for some time. That being said, as TripleO has migrated to containers built with Kolla, we've adapted our deployment mechanism to include Ansible and we really only use puppet for configuration generation. Our goal for TripleO is to eventually be fully containerized, which isn't something the puppet modules support today and I'm not sure is on the road map.

> So consequently, what will be the plan of TripleO and the puppet modules?

As TripleO moves forward, we may continue to support deployments via puppet modules, but the amount of testing that we'll be including upstream will mostly exercise external Ansible integrations (for example, ceph-ansible, openshift-ansible, etc) and Kolla containers. As of Queens, most of the services deployed via TripleO are deployed via containers and not on baremetal via puppet. We no longer support deploying OpenStack services on baremetal via the puppet modules and will likely be removing this support in the code in Stein. The end goal will likely be moving away from puppet modules within TripleO if we can solve the backwards compatibility and configuration generation via other mechanisms. We will likely recommend leveraging external Ansible role calls rather than including puppet modules, and using those to deploy services that are not inherently supported by TripleO. I can't really give a time frame as we are still working out the details, but it is likely that over the next several cycles we'll see a reduction in the dependence on puppet in TripleO and an increase in leveraging available Ansible roles.

From the Puppet OpenStack standpoint, others are stepping up to continue to ensure the modules are available, and I know I'll keep an eye on them for as long as TripleO leverages some of the functionality. The Puppet OpenStack modules are very stable, but I'm not sure, without additional community folks stepping up, that there will be support for newer functionality being added by the various OpenStack projects. I'm sure others can chime in here on their usage/plans for the Puppet OpenStack modules.

Hope that helps.

Thanks,
-Alex

> Thanks
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lbragstad at gmail.com  Thu May 31 14:24:46 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 31 May 2018 09:24:46 -0500
Subject: Re: [openstack-dev] Questions about token scopes
In-Reply-To: 
References: <61dae2da-e38b-ab3a-3921-6c2c8bd81796@gmail.com> 
Message-ID: <7468c1ee-03ea-dbfc-ad79-552a2708f410@gmail.com>

On 05/31/2018 12:09 AM, Ghanshyam Mann wrote:
> On Wed, May 30, 2018 at 11:53 PM, Lance Bragstad wrote:
>>
>> On 05/30/2018 08:47 AM, Matt Riedemann wrote:
>>> I know the keystone team has been doing a lot of work on scoped tokens and Lance has been trying to roll that out to other projects (like nova).
>>> In Rocky the nova team is adding granular policy rules to the placement API [1], which is a good opportunity to set scope on those rules as well.
>>>
>>> For now, we've just said everything is system scope since resources in placement, for the most part, are managed by "the system". But we do have some resources in placement which have project/user information in them, so could theoretically also be scoped to a project, like GET /usages [2].
> Just adding that this is the same for nova policy as well. As you might know, spec [1] tries to make nova policy more granular, but it is on hold because of the default roles work. We will do the policy rule split with better default values, like read-only for GET APIs.
>
> Along with that, like you mentioned about scope setting for placement policy rules, we need to do the same for nova policy. That can be done later or together with the nova policy granular spec.
>
> [1] https://review.openstack.org/#/c/547850/
>
>>> While going through this, I've been hammering Lance with questions but I had some more this morning and wanted to send them to the list to help spread the load and share the knowledge on working with scoped tokens in the other projects.
>> ++ good idea
>>
>>> So here goes with the random questions:
>>>
>>> * devstack has the admin project/user - does that by default get system scope tokens? I see the scope is part of the token create request [3] but it's optional, so is there a default value if not specified?
>> No, not necessarily. The keystone-manage bootstrap command is what bootstraps new deployments with the admin user, an admin role, a project to work in, etc. It also grants the newly created admin user the admin role on a project and the system. This functionality was added in Queens [0]. This should be backwards compatible and allow the admin user to get tokens scoped to whatever they had authorization on previously. The only thing they should notice is that they have another role assignment on something called the "system". That being said, they can start requesting system-scoped tokens from keystone. We have a document that tries to explain the differences in scopes and what they mean [1].
> Another related question is, will scope setting impact existing operators? I mean, when policy rules start setting scope, that might break existing operators, as their current token (say, project scoped) might not be able to authorize against a policy modified to set system scope.
>
> In that case, how are we going to avoid the upgrade break? One way could be to softly enforce scope for a cycle with a warning and then start enforcing it after one cycle (like we do for any policy rule change)? But not sure at this point.

Good question. This was the primary driver behind adding a new configuration option to the oslo.policy library called `enforce_scope` [0]. This lets operators turn off scope checking while they do a few things. They'll need to audit their users and give administrators of the deployment access to the system via a system role assignment (as opposed to the 'admin' role on some random project). They also need to ensure those people understand the concept of system scope. They might also send emails or notifications explaining the incoming changes and why they're being done, et cetera.
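To make that concrete, a scoped rule in a service's policy defaults might look something like the sketch below. The rule name and check string are illustrative only, not defaults that any project has actually merged:

```python
# Illustrative oslo.policy default with a scope type attached; the
# rule name and check string below are made up for the example.
from oslo_policy import policy

rule = policy.DocumentedRuleDefault(
    name='placement:usages:list',    # hypothetical rule name
    check_str='role:admin',
    description='List usages.',
    operations=[{'path': '/usages', 'method': 'GET'}],
    # Only system-scoped tokens satisfy this rule once operators set
    # [oslo_policy] enforce_scope = True; until then, a mismatch only
    # logs a warning.
    scope_types=['system'],
)
```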
Ideally, this should buy operators time to clean things up by reassessing their policy situation with the new defaults and scope types before enforcing those constraints. If `enforce_scope` is False, then a warning is logged during the enforcement check saying something along the lines of "someone used a token scoped to X to do something in Y".

[0] https://docs.openstack.org/oslo.policy/latest/configuration/index.html#oslo_policy.enforce_scope

>> [0] https://review.openstack.org/#/c/530410/
>> [1] https://docs.openstack.org/keystone/latest/admin/identity-tokens.html
>>
>>> * Why don't the token create and show APIs return the scope?
>> Good question. In a way, they do. If you look at a response when you authenticate for a token or validate a token, you should see an object contained within the token reference for the purpose of scope. For example, a project-scoped token will have a project object in the response [2]. A domain-scoped token will have a domain object in the response [3]. The same is true for system-scoped tokens [4]. Unscoped tokens do not have any of these objects present and do not contain a service catalog [5]. While scope isn't explicitly denoted by an attribute, it can be derived from the attributes of the token response.
>>
>> [2] http://paste.openstack.org/raw/722349/
>> [3] http://paste.openstack.org/raw/722351/
>> [4] http://paste.openstack.org/raw/722348/
>> [5] http://paste.openstack.org/raw/722350/
>>
>>> * It looks like python-openstackclient doesn't allow specifying a scope when issuing a token, is that going to be added?
>> Yes, I have a patch up for it [6]. I wanted to get this in during Queens, but it missed the boat. I believe this and a new release of oslo.context are the only bits left in order for services to have everything they need to easily consume system-scoped tokens. Keystonemiddleware should know how to handle system-scoped tokens in front of each service [7]. The oslo.context library should be smart enough to handle system scope set by keystonemiddleware if context is built from environment variables [8]. Both keystoneauth [9] and python-keystoneclient [10] should have what they need to generate system-scoped tokens.
>>
>> That should be enough to allow the service to pass a request environment to oslo.context and use the context object to reason about the scope of the request, as opposed to trying to understand different token scope responses from keystone. We attempted to abstract that away into the context object.
>>
>> [6] https://review.openstack.org/#/c/524416/
>> [7] https://review.openstack.org/#/c/564072/
>> [8] https://review.openstack.org/#/c/530509/
>> [9] https://review.openstack.org/#/c/529665/
>> [10] https://review.openstack.org/#/c/524415/
>>
>>> The reason I'm asking about OSC stuff is because we have the osc-placement plugin [4] which allows users with the admin role to work with resources in placement, which could be useful for things like fixing up incorrect or leaked allocations, i.e. fixing the fallout of a bug in nova. I'm wondering if we define all of the placement API rules as system scope and we're enforcing scope, will admins, as we know them today, continue to be able to use those APIs? Or will deployments just need to grow a system-scope admin project/user and per-project admin users, and then use the former for working with placement via the OSC plugin?
>> Uhm, if I understand your question, it depends on how you define the scope types for those APIs. If you set them to system-scope, then an operator will need to use a system-scoped token in order to access those APIs iff the placement configuration file (placement.conf) contains [oslo.policy] enforce_scope = True. Otherwise, setting that option to false will log a warning to operators saying that someone is accessing a system-scoped API with a project-scoped token (e.g. education needs to happen).
>>
>>> [1] https://review.openstack.org/#/q/topic:bp/granular-placement-policy+(status:open+OR+status:merged)
>>> [2] https://developer.openstack.org/api-ref/placement/#list-usages
>>> [3] https://developer.openstack.org/api-ref/identity/v3/index.html#password-authentication-with-scoped-authorization
>>> [4] https://docs.openstack.org/osc-placement/latest/index.html
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From jaypipes at gmail.com  Thu May 31 14:34:32 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Thu, 31 May 2018 10:34:32 -0400
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com>
Message-ID: 

On 05/29/2018 09:12 AM, Sylvain Bauza wrote:
> We could keep the old inventory in the root RP for the previous vGPU type already supported in Queens and just add other inventories for other vGPU types now supported. That looks possibly the simplest option, as the virt driver knows that.

What do you mean by "vGPU type"? Are you referring to the multiple GPU types stuff where specific virt drivers know how to handle different vGPU vendor types?

Or are you referring to a "non-nested VGPU inventory on the compute node provider" versus a "VGPU inventory on multiple child providers, each representing a different physical GPU (or physical GPU group in the case of Xen)"?

-jay

From jaypipes at gmail.com  Thu May 31 14:35:45 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Thu, 31 May 2018 10:35:45 -0400
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <1527678362.3825.3@smtp.office365.com>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com>
Message-ID: <5000e40e-2d69-244b-762b-b31ed357c38e@gmail.com>

On 05/30/2018 07:06 AM, Balázs Gibizer wrote:
> The nova-manage is another possible way, similar to my idea #c), but there I imagined the logic in placement-manage instead of nova-manage.

Please note there is no placement-manage CLI tool.
Best,
-jay

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sean.mcginnis at gmx.com  Thu May 31 14:41:05 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 31 May 2018 09:41:05 -0500
Subject: [openstack-dev] [release] Release countdown for week R-12, June 4-8
Message-ID: <20180531144105.GA866@sm-xps>

Welcome back to our weekly countdown email.

Development Focus
-----------------

The Rocky-2 milestone deadline is June 7th. Teams should be focused on implementing priority features.

General Information
-------------------

Membership freeze coincides with milestone 2 [0]. This means projects that have not done a release yet must do so for the next two milestones to be included in the Rocky release.

[0] https://releases.openstack.org/rocky/schedule.html#r-mf

The following libraries have not done a release yet in the rocky cycle:

automaton
blazar-nova
ceilometermiddleware
debtcollector
glance-store
heat-translator
kuryr
oslo.context
pycadf
requestsexceptions
stevedore
taskflow
python-aodhclient
python-barbicanclient
python-blazarclient
python-brick-cinderclient-ext
python-cinderclient
python-cloudkittyclient
python-congressclient
python-cyborgclient
python-designateclient
python-karborclient
python-magnumclient
python-masakariclient
python-muranoclient
python-octaviaclient
python-pankoclient
python-searchlightclient
python-senlinclient
python-solumclient
python-swiftclient
python-tricircleclient
python-vitrageclient
python-zaqarclient

For library-only projects, please be aware of the membership freeze mentioned above. I believe all of these use the cycle-with-intermediary release model, but especially for clients, it is good to get pending changes released early/often in the cycle to make sure there is enough time to address issues found by those that only use the released libraries.

Remember that there are client and non-client library freezes for the release starting mid-July. If there are any questions about preparing a release by the 7th, please come talk to us in #openstack-releases.

**Note for projects that publish to PyPI**

There was a recent change with PyPI where it now enforces valid RST formatting for package long descriptions. In most cases, the repo's README.rst gets pulled in as this long description. This means that there is now a need to ensure these README files are properly formatted and do not have errors that will prevent the upload of a package.

This would fail after all of the other release automation was complete, so to prevent this from happening we now have validation performed against repos when new releases are proposed to the openstack/releases repo. If you see the openstack-tox-validate job fail, this is a likely culprit.

See the note added to the project-team-guide for a recommendation on how to protect against this before release time:

https://docs.openstack.org/project-team-guide/project-setup/python.html#running-the-style-checks

Unfortunately the error message isn't too helpful, so if you see a failure due to this, the next step to help in identifying the cause may be to run doc8 against the README.rst file locally.
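If you'd rather catch this from a script, something along the following lines will flag most malformed-RST problems. Note this uses docutils directly and only approximates the rendering check PyPI performs, so treat it as a local smoke test rather than the authoritative validation:

```python
# Approximate local check for README.rst rendering problems using
# docutils; this is not the exact validation PyPI runs.
from docutils.core import publish_string

with open('README.rst') as f:
    source = f.read()

# halt_level=2 escalates any warning or worse into an exception.
publish_string(source, writer_name='null',
               settings_overrides={'halt_level': 2})
print('README.rst parsed cleanly')
```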
Upcoming Deadlines & Dates
--------------------------

Rocky-2 Milestone: June 7
Final non-client library release deadline: July 19
Final client library release deadline: July 26
Rocky-3 Milestone: July 26

-- 
Sean McGinnis (smcginnis)

From jaypipes at gmail.com  Thu May 31 15:00:20 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Thu, 31 May 2018 11:00:20 -0400
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com>
Message-ID: <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>

On 05/31/2018 05:10 AM, Sylvain Bauza wrote:
> After considering the whole approach, discussing with a couple of folks over IRC, here is what I feel is the best approach for a seamless upgrade :
>  - VGPU inventory will be kept on the root RP (for the first type) in Queens so that a compute service upgrade won't impact the DB
>  - during Queens, operators can run a DB online migration script (like the ones we currently have in https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new resource provider for the first type and move the inventory and allocations to it.
>  - it's the responsibility of the virt driver code to check whether a child RP with its name being the first type name already exists, to know whether to update the inventory against the root RP or the child RP.
>
> Does it work for folks ?

No, sorry, that doesn't work for me. It seems overly complex and fragile, especially considering that VGPUs are not movable anyway (no support for live migrating them). Same goes for CPU pinning, NUMA topologies, PCI passthrough devices, SR-IOV PF/VFs and all the other "must have" features that have been added to the virt driver over the last 5 years.

My feeling is that we should not attempt to "migrate" any allocations or inventories between root or child providers within a compute node, period.

The virt drivers should simply error out of update_provider_tree() if there are ANY existing VMs on the host AND the virt driver wishes to begin tracking resources with nested providers.

The upgrade operation should look like this:

1) Upgrade placement
2) Upgrade nova-scheduler
3) Start loop on compute nodes. For each compute node:
  3a) disable nova-compute service on node (to take it out of scheduling)
  3b) evacuate all existing VMs off of node
  3c) upgrade compute node (on restart, the compute node will see no VMs running on the node and will construct the provider tree inside update_provider_tree() with an appropriate set of child providers and inventories on those child providers)
  3d) enable nova-compute service on node

Which is virtually identical to the "normal" upgrade process whenever there are significant changes to the compute node -- such as upgrading libvirt or the kernel. Nested resource tracking is another such significant change and should be dealt with in a similar way, IMHO.

Best,
-jay

From Tim.Bell at cern.ch  Thu May 31 15:36:57 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Thu, 31 May 2018 15:36:57 +0000
Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules
Message-ID: <9300F696-8743-46DF-8E73-EC4A78DD12B2@cern.ch>

CERN uses these puppet modules too and contributes any missing functionality we need upstream.
Tim

From: Alex Schultz
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 31 May 2018 at 16:24
To: "OpenStack Development Mailing List (not for usage questions)"
Subject: Re: [openstack-dev] [tripleo][puppet] Hello all, puppet modules

On Wed, May 30, 2018 at 3:18 PM, Remo Mattei wrote:

Hello all,
I have talked to several people about this and I would love to get this finalized once and for all. I have checked the OpenStack puppet modules, which are mostly developed by the Red Hat team. As of right now, TripleO is using a combo of Ansible and puppet to deploy, but in the next couple of releases, the plan is to move away from the puppet option.

So the OpenStack puppet modules are maintained by people other than Red Hat as well; however, we have been a major contributor since TripleO has relied on them for some time. That being said, as TripleO has migrated to containers built with Kolla, we've adapted our deployment mechanism to include Ansible and we really only use puppet for configuration generation. Our goal for TripleO is to eventually be fully containerized, which isn't something the puppet modules support today and I'm not sure is on the road map.

So consequently, what will be the plan of TripleO and the puppet modules?

As TripleO moves forward, we may continue to support deployments via puppet modules, but the amount of testing that we'll be including upstream will mostly exercise external Ansible integrations (for example, ceph-ansible, openshift-ansible, etc) and Kolla containers. As of Queens, most of the services deployed via TripleO are deployed via containers and not on baremetal via puppet. We no longer support deploying OpenStack services on baremetal via the puppet modules and will likely be removing this support in the code in Stein. The end goal will likely be moving away from puppet modules within TripleO if we can solve the backwards compatibility and configuration generation via other mechanisms. We will likely recommend leveraging external Ansible role calls rather than including puppet modules, and using those to deploy services that are not inherently supported by TripleO. I can't really give a time frame as we are still working out the details, but it is likely that over the next several cycles we'll see a reduction in the dependence on puppet in TripleO and an increase in leveraging available Ansible roles.

From the Puppet OpenStack standpoint, others are stepping up to continue to ensure the modules are available, and I know I'll keep an eye on them for as long as TripleO leverages some of the functionality. The Puppet OpenStack modules are very stable, but I'm not sure, without additional community folks stepping up, that there will be support for newer functionality being added by the various OpenStack projects. I'm sure others can chime in here on their usage/plans for the Puppet OpenStack modules.

Hope that helps.

Thanks,
-Alex

Thanks

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sumitnaiksatam at gmail.com  Thu May 31 16:49:28 2018
From: sumitnaiksatam at gmail.com (Sumit Naiksatam)
Date: Thu, 31 May 2018 09:49:28 -0700
Subject: Re: [openstack-dev] Help required to install devstack with GBP
In-Reply-To: 
References: 
Message-ID: 

Hi, Sure we can help you. Could you please take a look at the neutron logs and let me know what exception you are seeing? Also, please let me know which branch you are trying to install.

Thanks,
~Sumit.

On Thu, May 31, 2018 at 1:52 AM, ., Alex Dominic Savio <alex.william at microfocus.com> wrote:

> Hi Experts,
>
> I have been trying to install devstack with GBP as per the instructions given on GitHub: https://github.com/openstack/group-based-policy
>
> This I am running on Ubuntu 16.x as well as 14.x, but both attempts were not successful. It fails stating “neutron is not started”.
>
> Can you please help me with this issue to get past it ?
>
> Thanks & Regards,
>
> *Alex Dominic Savio*
> Product Manager, ITOM-HCM
> Micro Focus
> Bagmane Tech Park
> Bangalore, India.
> (M)+91 9880634388
> alex.william at microfocus.com
> ------------------------------
>
> [image: cid:image003.jpg at 01D3ED74.F4E1E2F0]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 1373 bytes
Desc: not available
URL: 

From sean.mcginnis at gmx.com  Thu May 31 16:54:46 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Thu, 31 May 2018 11:54:46 -0500
Subject: [openstack-dev] [tc][all] CD tangent - was: A culture change (nitpicking)
In-Reply-To: <3a59cb5f-599d-4a89-40ec-e2610ef1d821@openstack.org>
References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <7489c0e7-de93-6305-89a0-167873f5e3ec@gmx.com> <1A3C52DFCD06494D8528644858247BF01C0D7F72@EX10MBOX03.pnnl.gov> <20180531010957.GA1354@zeong> <3a59cb5f-599d-4a89-40ec-e2610ef1d821@openstack.org>
Message-ID: <4b09edbd-62f3-2c99-78ac-9b2721191c7d@gmx.com>

On 05/31/2018 03:50 AM, Thierry Carrez wrote:
> Right... There might be a reasonable middle ground between "every commit on master must be backward-compatible" and "rip out all testing" that allows us to routinely revert broken feature commits (as long as they don't cross a release boundary).
>
> To be fair, I'm pretty sure that's already the case: we did revert feature commits on master in the past, therefore breaking backward compatibility if someone started to use that feature right away. It's the issue with implicit rules: everyone interprets them the way they want... So I think that could use some explicit clarification.
>
> [ This tangent should probably get its own thread to not disrupt the no-nitpicking discussion ]

Just one last one on this, then I'm hoping this tangent ends.

I think what Thierry said is exactly what Dims and I were saying. I'm not sure how that turned into the idea of supporting committing broken code.
The point (at least mine) was just that if HEAD~4 committed something that we realize was not right, we should not have the mindset that "someone might have deployed that broken behavior so we need to make sure we don't break them."

HEAD should always be deployable, just not treated like an official release that needs to be maintained.

From borne.mace at oracle.com  Thu May 31 17:02:27 2018
From: borne.mace at oracle.com (Borne Mace)
Date: Thu, 31 May 2018 10:02:27 -0700
Subject: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer
Message-ID: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com>

Greetings all,

I would like to propose the addition of Steve Noyes to the kolla-cli core reviewer team. Consider this nomination as my personal +1.

Steve has a long history with the kolla-cli and should be considered its co-creator, as probably half or more of the existing code was due to his efforts. He has now been working diligently since it was pushed upstream to improve the stability and testability of the cli and has the second most commits on the project.

The kolla core team consists of 19 people, and the kolla-cli team of 2, for a total of 21. Steve therefore requires a minimum of 11 votes (so just 10 more after my +1), with no veto -2 votes within a 7 day voting window to end on June 6th. Voting will be closed immediately on a veto or in the case of a unanimous vote.

As I'm not sure how active all of the 19 kolla cores are, your attention and timely vote is much appreciated. Thanks!

-- Borne

From dms at danplanet.com  Thu May 31 17:09:21 2018
From: dms at danplanet.com (Dan Smith)
Date: Thu, 31 May 2018 10:09:21 -0700
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> (Jay Pipes's message of "Thu, 31 May 2018 11:00:20 -0400")
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
Message-ID: 

> My feeling is that we should not attempt to "migrate" any allocations or inventories between root or child providers within a compute node, period.

While I agree this is the simplest approach, it does put a lot of responsibility on the operators to do work to sidestep this issue, which might not even apply to them (and knowing if it does might be difficult).

> The virt drivers should simply error out of update_provider_tree() if there are ANY existing VMs on the host AND the virt driver wishes to begin tracking resources with nested providers.
>
> The upgrade operation should look like this:
>
> 1) Upgrade placement
> 2) Upgrade nova-scheduler
> 3) Start loop on compute nodes. For each compute node:
>   3a) disable nova-compute service on node (to take it out of scheduling)
>   3b) evacuate all existing VMs off of node

You mean s/evacuate/cold migrate/ of course... :)

>   3c) upgrade compute node (on restart, the compute node will see no VMs running on the node and will construct the provider tree inside update_provider_tree() with an appropriate set of child providers and inventories on those child providers)
>   3d) enable nova-compute service on node
>
> Which is virtually identical to the "normal" upgrade process whenever there are significant changes to the compute node -- such as upgrading libvirt or the kernel.

Not necessarily.
It's totally legit (and I expect quite common) to just reboot the host to take kernel changes, bringing back all the instances that were there when it resumes. The "normal" case of moving things around slide-puzzle-style applies to live migration (which isn't an option here). I think people that can take downtime for the instances would rather not have to move things around for no reason if the instance has to get shut off anyway.

> Nested resource tracking is another such significant change and should be dealt with in a similar way, IMHO.

This basically says that for anyone to move to rocky, they will have to cold migrate every single instance in order to do that upgrade, right? I mean, anyone with two-socket machines or SRIOV NICs would end up with at least one level of nesting, correct? Forcing everyone to move everything to do an upgrade seems like a non-starter to me.

We also need to consider the case where people would be FFU'ing past rocky (i.e. never running rocky computes). We've previously said that we'd provide a way to push any needed transitions with everything offline to facilitate that case, so I think we need to implement that method anyway.

I kinda think we need to either:

1. Make everything perform the pivot on compute node start (which can be re-used by a CLI tool for the offline case)
2. Make everything default to non-nested inventory at first, and provide a way to migrate a compute node and its instances one at a time (in place) to roll through.

We can also document "or do the cold-migration slide puzzle thing" as an alternative for people that feel that's more reasonable.

I just think that forcing people to take down their data plane to work around our own data model is kinda evil and something we should be avoiding at this level of project maturity. What we're really saying is "we know how to translate A into B, but we require you to move many GBs of data over the network and take some downtime because it's easier for *us* than making it seamless."

--Dan

From mark.giles at oracle.com  Thu May 31 17:40:54 2018
From: mark.giles at oracle.com (Mark Giles)
Date: Thu, 31 May 2018 13:40:54 -0400
Subject: Re: [openstack-dev] [kolla][vote] Nominating Steve Noyes for kolla-cli core reviewer
In-Reply-To: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com>
References: <706e833a-9dad-6353-0f5c-f14382556df3@oracle.com>
Message-ID: 

+1

On May 31, 2018 at 1:06:43 PM, Borne Mace (borne.mace at oracle.com) wrote:

Greetings all,

I would like to propose the addition of Steve Noyes to the kolla-cli core reviewer team. Consider this nomination as my personal +1.

Steve has a long history with the kolla-cli and should be considered its co-creator, as probably half or more of the existing code was due to his efforts. He has now been working diligently since it was pushed upstream to improve the stability and testability of the cli and has the second most commits on the project.

The kolla core team consists of 19 people, and the kolla-cli team of 2, for a total of 21. Steve therefore requires a minimum of 11 votes (so just 10 more after my +1), with no veto -2 votes within a 7 day voting window to end on June 6th. Voting will be closed immediately on a veto or in the case of a unanimous vote.

As I'm not sure how active all of the 19 kolla cores are, your attention and timely vote is much appreciated. Thanks!
-- Borne __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Thu May 31 17:44:44 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 31 May 2018 10:44:44 -0700 (PDT) Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> Message-ID: On Thu, 31 May 2018, Dan Smith wrote: > I kinda think we need to either: > > 1. Make everything perform the pivot on compute node start (which can be > re-used by a CLI tool for the offline case) This sounds effectively like: validate my inventory and allocations at compute node start, correcting them as required (including the kind of migration stuff related to nested). Is that right? That's something I'd like to be the norm. It takes us back to a sort of self-healing compute node. Or am I missing something (forgive me, I've been on holiday). > I just think that forcing people to take down their data plane to work > around our own data model is kinda evil and something we should be > avoiding at this level of project maturity. What we're really saying is > "we know how to translate A into B, but we require you to move many GBs > of data over the network and take some downtime because it's easier for > *us* than making it seamless." If we can do it, I agree that being not evil is good. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From tpb at dyncloud.net Thu May 31 17:55:21 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 31 May 2018 13:55:21 -0400 Subject: [openstack-dev] Ceph multiattach support In-Reply-To: References: Message-ID: <20180531175521.6perud4hb7om55ms@barron.net> On 31/05/18 10:00 +0800, fengyd wrote: > Hi, > >I'm using Ceph for cinder backend. >Do you have any plan to support multiattach for Ceph backend? > >Thanks > >Yafeng Yafeng, Would you describe your use case for cinder multi-attach with ceph backend? I'd like to understand better whether manila (file share infrastructure as a service) with CephFS native or CephFS-NFS backends would (as Erik McCormick also suggested) meet your needs. -- Tom Barron From mrhillsman at gmail.com Thu May 31 18:03:20 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Thu, 31 May 2018 13:03:20 -0500 Subject: [openstack-dev] OpenLab Cross-community Impact Message-ID: Hi everyone, I know we have sent out quite a bit of information over the past few days with the OpenStack Summit and other updates recently. Additionally there are plenty of meetings we all attend. I just want to take time to point to something very significant in my opinion and again give big thanks to Chris, Dims, Liusheng, Chenrui, Zhuli, Joe (gophercloud), and anyone else contributing to OpenLab. 
A member of the release team working on the testing infrastructure for
Kubernetes did a shoutout to the team for the following:

(AishSundar) Shoutout to @dims and OpenStack team for quickly getting
their 1.11 Conformance results piped to CI runs and contributing results
to Conformance dashboard !
https://k8s-testgrid.appspot.com/sig-release-1.11-all#Conformance%20-%20OpenStack&show-stale-tests=

Here is why this is significant, and why those working on this whom I
previously mentioned should get recognition:

(hogepodge) OpenStack and GCE are the first two clouds that will release
block on conformance testing failures. Thanks @dims for building out the
test pipeline and @mrhillsman for leading the OpenLab efforts that are
reporting back to the test grid. @RuiChen for his contributions to the
testing effort. Amazing work for the last six months.

In other words, if the external cloud provider CI conformance tests we
do in OpenLab are not passing, it will be one of the signals used for
blocking the release. OpenStack and GCE are the first two clouds to
achieve this, and it is a significant accomplishment for the OpenLab
team and the OpenStack community overall regarding our relationship with
the Kubernetes community.

Thanks again Chris, Dims, Joe, Liusheng, Chenrui, and Zhuli for the work
you have done and continue to do in this space. Personally I hope we
take a moment to really consider this milestone and work to ensure
OpenLab's continued success as we embark on working on other
integrations. We started OpenLab hoping we could make a substantial
impact specifically for the ecosystem that builds on top of OpenStack,
and this is evidence we can and should do more.

--
Kind regards,

Melvin Hillsman
mrhillsman at gmail.com
mobile: (832) 264-2646
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cdent+os at anticdent.org  Thu May 31 18:13:35 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 31 May 2018 11:13:35 -0700 (PDT)
Subject: [openstack-dev] [tc] Organizational diversity tag
In-Reply-To: 
References: <31d5e78c-276c-3ac5-6b42-c20399b34a66@openstack.org>
Message-ID: 

On Tue, 29 May 2018, Samuel Cassiba wrote:

> The moniker of 'low-activity' does give the very real, negative perception
> that things are just barely hanging on. It conveys the subconscious,
> officiated statement (!!!whether or not this was intended!!!) that nobody
> in their right mind should consider using the subproject, let alone develop
> on or against it, for fear that it wind up some poor end-user's support
> nightmare.

Yeah. Which is really unfortunate because to some extent all
projects ought to be striving to be low activity in the sense of
mature, stable, (nearly) bug-free. If our metrics are biased towards
always committing then we are encouraging unfettered growth which
means we can never have any sense of complete-ness or done-ness in
any domain. It should be okay to say a sub-domain of activity is
done and move on to improving the wider domain.
--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From openstack at fried.cc  Thu May 31 18:26:46 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 31 May 2018 13:26:46 -0500
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
Message-ID: <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>

> 1. Make everything perform the pivot on compute node start (which can be
>    re-used by a CLI tool for the offline case)
> 2. Make everything default to non-nested inventory at first, and provide
>    a way to migrate a compute node and its instances one at a time (in
>    place) to roll through.

I agree that it sure would be nice to do ^ rather than requiring the
"slide puzzle" thing.

But how would this be accomplished, in light of the current "separation
of responsibilities" drawn at the virt driver interface, whereby the
virt driver isn't supposed to talk to placement directly, or know
anything about allocations?  Here's a first pass:

The virt driver, via the return value from update_provider_tree, tells
the resource tracker that "inventory of resource class A on provider B
has moved to provider C" for all applicable AxBxC.  E.g.

[ { 'from_resource_provider': <cn_rp_uuid>,
    'moved_resources': [VGPU: 4],
    'to_resource_provider': <gpu_rp1_uuid>
  },
  { 'from_resource_provider': <cn_rp_uuid>,
    'moved_resources': [VGPU: 4],
    'to_resource_provider': <gpu_rp2_uuid>
  },
  { 'from_resource_provider': <cn_rp_uuid>,
    'moved_resources': [
        SRIOV_NET_VF: 2,
        NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
        NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
    ],
    'to_resource_provider': <gpu_rp2_uuid>
  }
]

As today, the resource tracker takes the updated provider tree and
invokes [1] the report client method update_from_provider_tree [2] to
flush the changes to placement.  But now update_from_provider_tree also
accepts the return value from update_provider_tree and, for each "move":

- Creates provider C (as described in the provider_tree) if it doesn't
  already exist.
- Creates/updates provider C's inventory as described in the
  provider_tree (without yet updating provider B's inventory).  This
  ought to create the inventory of resource class A on provider C.
- Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
- Updates provider B's inventory.

(*There's a hole here: if we're splitting a glommed-together inventory
across multiple new child providers, as the VGPUs in the example, we
don't know which allocations to put where.  The virt driver should know
which instances own which specific inventory units, and would be able to
report that info within the data structure.  That's getting kinda close
to the virt driver mucking with allocations, but maybe it fits well
enough into this model to be acceptable?)

Note that the return value from update_provider_tree is optional, and
only used when the virt driver is indicating a "move" of this ilk.  If
it's None/[] then the RT/update_from_provider_tree flow is the same as
it is today.

If we can do it this way, we don't need a migration tool.  In fact, we
don't even need to restrict provider tree "reshaping" to release
boundaries.  As long as the virt driver understands its own data model
migrations and reports them properly via update_provider_tree, it can
shuffle its tree around whenever it wants.

Thoughts?
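To make the mechanics concrete, here's a very rough sketch of how the
new handling inside update_from_provider_tree might process one such
"move" entry.  Caveat: every helper name below is hypothetical -- none
of these methods exist in the report client today -- I'm treating
'moved_resources' as a mapping of resource class to amount, and I'm
hand-waving all the error/retry handling:

    def _apply_move(self, context, move, provider_tree):
        # All helper names here are made up for illustration only.
        source = move['from_resource_provider']
        dest = move['to_resource_provider']
        # Ensure provider C exists and carries the inventory described
        # in the (already updated) provider tree.
        self._ensure_provider(context, provider_tree.data(dest))
        self._set_inventory(context, dest,
                            provider_tree.data(dest).inventory)
        # Move allocations of the affected resource classes from B to C
        # *before* shrinking B's inventory, so we never ask placement to
        # drop capacity below current usage.
        for rc, amount in move['moved_resources'].items():
            self._move_allocations(context, source, dest, rc, amount)
        # Only now flush provider B's reduced inventory.
        self._set_inventory(context, source,
                            provider_tree.data(source).inventory)

The ordering matters: if we shrank B's inventory while allocations were
still pointing at it, placement would (rightly) reject the inventory
update for violating current usage.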
-efried

[1]
https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890
[2]
https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341

From openstack at fried.cc  Thu May 31 18:34:51 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 31 May 2018 13:34:51 -0500
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
 <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
Message-ID: 

Rats, typo correction below.

On 05/31/2018 01:26 PM, Eric Fried wrote:
>> 1. Make everything perform the pivot on compute node start (which can be
>> re-used by a CLI tool for the offline case)
>> 2. Make everything default to non-nested inventory at first, and provide
>> a way to migrate a compute node and its instances one at a time (in
>> place) to roll through.
> 
> I agree that it sure would be nice to do ^ rather than requiring the
> "slide puzzle" thing.
> 
> But how would this be accomplished, in light of the current "separation
> of responsibilities" drawn at the virt driver interface, whereby the
> virt driver isn't supposed to talk to placement directly, or know
> anything about allocations?  Here's a first pass:
> 
> The virt driver, via the return value from update_provider_tree, tells
> the resource tracker that "inventory of resource class A on provider B
> has moved to provider C" for all applicable AxBxC.  E.g.
> 
> [ { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': <gpu_rp1_uuid>
>   },
>   { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': <gpu_rp2_uuid>
>   },
>   { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [
>         SRIOV_NET_VF: 2,
>         NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
>         NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
>     ],
>     'to_resource_provider': <gpu_rp2_uuid>
-------------------------------^^^^^^^^^^^^
s/gpu_rp2_uuid/sriovnic_rp_uuid/ or similar.

>   }
> ]
> 
> As today, the resource tracker takes the updated provider tree and
> invokes [1] the report client method update_from_provider_tree [2] to
> flush the changes to placement.  But now update_from_provider_tree also
> accepts the return value from update_provider_tree and, for each "move":
> 
> - Creates provider C (as described in the provider_tree) if it doesn't
>   already exist.
> - Creates/updates provider C's inventory as described in the
>   provider_tree (without yet updating provider B's inventory).  This
>   ought to create the inventory of resource class A on provider C.
> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
> - Updates provider B's inventory.
> 
> (*There's a hole here: if we're splitting a glommed-together inventory
> across multiple new child providers, as the VGPUs in the example, we
> don't know which allocations to put where.  The virt driver should know
> which instances own which specific inventory units, and would be able to
> report that info within the data structure.  That's getting kinda close
> to the virt driver mucking with allocations, but maybe it fits well
> enough into this model to be acceptable?)
>
> Note that the return value from update_provider_tree is optional, and
> only used when the virt driver is indicating a "move" of this ilk.  If
> it's None/[] then the RT/update_from_provider_tree flow is the same as
> it is today.
>
> If we can do it this way, we don't need a migration tool.  In fact, we
> don't even need to restrict provider tree "reshaping" to release
> boundaries.  As long as the virt driver understands its own data model
> migrations and reports them properly via update_provider_tree, it can
> shuffle its tree around whenever it wants.
>
> Thoughts?
>
> -efried
>
> [1]
> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/compute/resource_tracker.py#L890
> [2]
> https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2fc34d7e8e1b/nova/scheduler/client/report.py#L1341
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From juliaashleykreger at gmail.com  Thu May 31 18:35:16 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 31 May 2018 14:35:16 -0400
Subject: [openstack-dev] [tc][all] A culture change (nitpicking)
Message-ID: 

Back to the topic of nitpicking!

I virtually sat down with Doug today and we hammered out the positive
aspects that we feel are the things we as a community want to see as
part of reviews coming out of this effort. The principles change[1] in
governance has been updated as a result.

I think we are at a point where we have to state high-level principles,
and then also update guidelines or other context-providing documentation
to reinforce some of the items covered in this discussion... not just to
educate new contributors, but to serve as a checkpoint for existing
reviewers when making the decision as to how to vote on a change set.

The question then becomes where would such guidelines or documentation
best fit? Should we explicitly detail the cause/effect that occurs?
Should we convey contributor perceptions, or maybe even just link to
this thread, as there has been a massive amount of feedback raising
valid cases, points, and frustrations?

Personally, I'd lean towards a blended approach, but the question of
where is one I'm unsure of.

Thoughts?

-Julia

[1]: https://review.openstack.org/#/c/570940/

From melwittt at gmail.com  Thu May 31 18:35:43 2018
From: melwittt at gmail.com (melanie witt)
Date: Thu, 31 May 2018 11:35:43 -0700
Subject: [openstack-dev] [nova] proposal to postpone nova-network core functionality removal to Stein
Message-ID: <29873b6f-8a3c-ae6e-0756-c90d2c52a306@gmail.com>

Hello Operators and Devs,

This cycle at the PTG, we had decided to start making some progress
toward removing nova-network [1] (thanks to those who have helped!) and
so far, we've landed some patches to extract common network utilities
from nova-network core functionality into separate utility modules. And
we've started proposing removal of nova-network REST APIs [2].

At the cells v2 sync with operators forum session at the summit [3], we
learned that CERN is in the middle of migrating from nova-network to
neutron and that holding off on removal of nova-network core
functionality until Stein would help them out a lot to have a safety net
as they continue progressing through the migration.
If we recall correctly, they did say that removal of the nova-network
REST APIs would not impact their migration and Surya Seetharaman is
double-checking about that and will get back to us. If so, we were
thinking we can go ahead and work on nova-network REST API removals this
cycle to make some progress while holding off on removing the core
functionality of nova-network until Stein.

I wanted to send this to the ML to let everyone know what we were
thinking about this and to receive any additional feedback folks might
have about this plan.

Thanks,
-melanie

[1] https://etherpad.openstack.org/p/nova-ptg-rocky L301
[2] https://review.openstack.org/567682
[3] https://etherpad.openstack.org/p/YVR18-cellsv2-migration-sync-with-operators L30

From cdent+os at anticdent.org  Thu May 31 18:43:59 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 31 May 2018 11:43:59 -0700 (PDT)
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
 <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
Message-ID: 

On Thu, 31 May 2018, Eric Fried wrote:
> But how would this be accomplished, in light of the current "separation
> of responsibilities" drawn at the virt driver interface, whereby the
> virt driver isn't supposed to talk to placement directly, or know
> anything about allocations? Here's a first pass:

For sake of discussion, how much (if any) easier would it be if we
got rid of this restriction?

> the resource tracker that "inventory of resource class A on provider B
> has moved to provider C" for all applicable AxBxC. E.g.

traits too?

> [ { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': <gpu_rp1_uuid>

[snip]

> If we can do it this way, we don't need a migration tool. In fact, we
> don't even need to restrict provider tree "reshaping" to release
> boundaries. As long as the virt driver understands its own data model
> migrations and reports them properly via update_provider_tree, it can
> shuffle its tree around whenever it wants.

Assuming the restriction is kept, your model seems at least worth
exploring. The fact that we are using what amounts to a DSL to pass
some additional instruction back from the virt driver feels squiffy
for some reason (probably because I'm not wed to the restriction),
but it is well-contained.

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From msm at redhat.com  Thu May 31 18:53:24 2018
From: msm at redhat.com (Michael McCune)
Date: Thu, 31 May 2018 14:53:24 -0400
Subject: [openstack-dev] [api] notes from api-sig forum meeting on graphql experiment
Message-ID: 

hi everybody,

i have added my notes to the etherpad[1] from summit.

briefly, we had a great meeting about creating a graphql experiment in
openstack and i tried to capture some of the questions and comments that
flew back and forth. there seems to be a good consensus that a proof of
concept will be created on the neutron server, most likely in an
experimental feature branch.
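to give a flavor of what this could look like, here is a purely
illustrative example -- the PoC hasn't defined a schema or an endpoint
yet, so every name, field, and URL below is made up -- of a client
asking for networks and their subnets in a single round trip instead of
chaining REST calls:

    import requests

    # hypothetical neutron graphql endpoint -- does not exist (yet!)
    QUERY = '''
    {
      networks(first: 2) {
        id
        name
        subnets { cidr ipVersion }
      }
    }
    '''
    resp = requests.post('http://controller:9696/graphql',
                         json={'query': QUERY},
                         headers={'X-Auth-Token': 'gAAAAA...'})
    print(resp.json())

the nice part is that the client names exactly the fields and nesting it
wants and gets just that back -- no fixed response shapes and no
follow-up GET per subnet.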
Gilles Dubreuil has volunteered to lead this effort (thank you Gilles!),
hopefully with some help from a few neutron savvy folks ;)

it is still very early in this experiment, but i think there was a good
feeling that if this works it could create some great opportunities for
improvement across the openstack ecosystem. i really hope it does!

peace o/

[1]: https://etherpad.openstack.org/p/YVR18-API-SIG-forum

From sbauza at redhat.com  Thu May 31 19:11:27 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 31 May 2018 21:11:27 +0200
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <9cba3398-0972-61df-b143-999596775342@fried.cc>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <1527759276.19128.0@smtp.office365.com>
 <9cba3398-0972-61df-b143-999596775342@fried.cc>
Message-ID: 

On Thu, May 31, 2018 at 3:54 PM, Eric Fried wrote:

> This seems reasonable, but...
>
> On 05/31/2018 04:34 AM, Balázs Gibizer wrote:
> >
> >
> > On Thu, May 31, 2018 at 11:10 AM, Sylvain Bauza
> wrote:
> >>>
> >>
> >> After considering the whole approach, discussing with a couple of
> >> folks over IRC, here is what I feel the best approach for a seamless
> >> upgrade :
> >> - VGPU inventory will be kept on root RP (for the first type) in
> >> Queens so that a compute service upgrade won't impact the DB
> >> - during Queens, operators can run a DB online migration script (like
> -------------^^^^^^
> Did you mean Rocky?

Oops, yeah of course. Queens > Rocky.

> >> the ones we currently have in
> >> https://github.com/openstack/nova/blob/c2f42b0/nova/cmd/manage.py#L375)
> that
> >> will create a new resource provider for the first type and move the
> >> inventory and allocations to it.
> >> - it's the responsibility of the virt driver code to check whether a
> >> child RP with its name being the first type name already exists to
> >> know whether to update the inventory against the root RP or the child
> RP.
> >>
> >> Does it work for folks?
> >
> > +1 works for me
> > gibi
> >
> >> PS : we already have the plumbing in place in nova-manage and we're
> >> still managing full Nova resources. I know we plan to move Placement
> >> out of the nova tree, but for the Rocky timeframe, I feel we can
> >> consider nova-manage as the best and quickest approach for the data
> >> upgrade.
> >>
> >> -Sylvain
> >>
> >
> >
> > ____________________________________________________________
> ______________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From openstack at fried.cc Thu May 31 19:13:06 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 31 May 2018 14:13:06 -0500 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> <4a867428-1203-63b7-9b74-86fda468047c@fried.cc> Message-ID: <5567847e-dfbd-15a5-bc2c-26c36713af8d@fried.cc> Chris- >> virt driver isn't supposed to talk to placement directly, or know >> anything about allocations? > > For sake of discussion, how much (if any) easier would it be if we > got rid of this restriction? At this point, having implemented the update_[from_]provider_tree flow as we have, it would probably make things harder. We still have to do the same steps, but any bits we wanted to let the virt driver handle would need some kind of weird callback dance. But even if we scrapped update_[from_]provider_tree and redesigned from first principles, virt drivers would have a lot of duplication of the logic that currently resides in update_from_provider_tree. So even though the restriction seems to make things awkward, having been embroiled in this code as I have, I'm actually seeing how it keeps things as clean and easy to reason about as can be expected for something that's inherently as complicated as this. >> the resource tracker that "inventory of resource class A on provider B >> have moved to provider C" for all applicable AxBxC.  E.g. > > traits too? The traits are part of the updated provider tree itself. The existing logic in update_from_provider_tree handles shuffling those around. I don't think the RT needs to be told about any specific trait movement in order to reason about moving allocations. Do you see something I'm missing there? > The fact that we are using what amounts to a DSL to pass > some additional instruction back from the virt driver feels squiffy Yeah, I don't disagree. The provider_tree object, and updating it via update_provider_tree, is kind of a DSL already. The list-of-dicts format is just a strawman; we could make it an object or whatever (not that that would make it less DSL-ish). Perhaps an OVO :P -efried . From sbauza at redhat.com Thu May 31 19:15:40 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 31 May 2018 21:15:40 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> Message-ID: On Thu, May 31, 2018 at 4:34 PM, Jay Pipes wrote: > On 05/29/2018 09:12 AM, Sylvain Bauza wrote: > >> We could keep the old inventory in the root RP for the previous vGPU type >> already supported in Queens and just add other inventories for other vGPU >> types now supported. That looks possibly the simpliest option as the virt >> driver knows that. >> > > What do you mean by "vGPU type"? Are you referring to the multiple GPU > types stuff where specific virt drivers know how to handle different vGPU > vendor types? Or are you referring to a "non-nested VGPU inventory on the > compute node provider" versus a "VGPU inventory on multiple child > providers, each representing a different physical GPU (or physical GPU > group in the case of Xen)"? 
> > I speak about a "vGPU type" because it's how we agreed to have multiple child RPs. See https://specs.openstack.org/openstack/nova-specs/specs/queens/implemented/add-support-for-vgpu.html#proposed-change For Xen, a vGPU type is a Xen GPU group. For libvirt, it's just a mdev type. Each pGPU can support multiple types. For the moment, we only support one type, but my spec ( https://review.openstack.org/#/c/557065/ ) explains more about that. -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu May 31 19:19:49 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 31 May 2018 21:19:49 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> Message-ID: On Thu, May 31, 2018 at 5:00 PM, Jay Pipes wrote: > On 05/31/2018 05:10 AM, Sylvain Bauza wrote: > >> After considering the whole approach, discussing with a couple of folks >> over IRC, here is what I feel the best approach for a seamless upgrade : >> - VGPU inventory will be kept on root RP (for the first type) in Queens >> so that a compute service upgrade won't impact the DB >> - during Queens, operators can run a DB online migration script (like >> the ones we currently have in https://github.com/openstack/n >> ova/blob/c2f42b0/nova/cmd/manage.py#L375) that will create a new >> resource provider for the first type and move the inventory and allocations >> to it. >> - it's the responsibility of the virt driver code to check whether a >> child RP with its name being the first type name already exists to know >> whether to update the inventory against the root RP or the child RP. >> >> Does it work for folks ? >> > > No, sorry, that doesn't work for me. It seems overly complex and fragile, > especially considering that VGPUs are not moveable anyway (no support for > live migrating them). Same goes for CPU pinning, NUMA topologies, PCI > passthrough devices, SR-IOV PF/VFs and all the other "must have" features > that have been added to the virt driver over the last 5 years. > > My feeling is that we should not attempt to "migrate" any allocations or > inventories between root or child providers within a compute node, period. > > I don't understand why you're talking of *moving* an instance. My concern was about upgrading a compute node to Rocky where some instances were already there, and using vGPUs. > The virt drivers should simply error out of update_provider_tree() if > there are ANY existing VMs on the host AND the virt driver wishes to begin > tracking resources with nested providers. > > The upgrade operation should look like this: > > 1) Upgrade placement > 2) Upgrade nova-scheduler > 3) start loop on compute nodes. 
for each compute node: > 3a) disable nova-compute service on node (to take it out of scheduling) > 3b) evacuate all existing VMs off of node > 3c) upgrade compute node (on restart, the compute node will see no > VMs running on the node and will construct the provider tree inside > update_provider_tree() with an appropriate set of child providers > and inventories on those child providers) > 3d) enable nova-compute service on node > > Which is virtually identical to the "normal" upgrade process whenever > there are significant changes to the compute node -- such as upgrading > libvirt or the kernel. Nested resource tracking is another such significant > change and should be dealt with in a similar way, IMHO. > > Upgrading to Rocky for vGPUs doesn't need to also upgrade libvirt or the kernel. So why operators should need to "evacuate" (I understood that as "migrate") instances if they don't need to upgrade their host OS ? Best, > -jay > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Thu May 31 19:35:01 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 31 May 2018 21:35:01 +0200 Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers In-Reply-To: References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp> <1527584511.6381.1@smtp.office365.com> <1527596481.3825.0@smtp.office365.com> <1527678362.3825.3@smtp.office365.com> <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com> Message-ID: On Thu, May 31, 2018 at 7:09 PM, Dan Smith wrote: > > My feeling is that we should not attempt to "migrate" any allocations > > or inventories between root or child providers within a compute node, > > period. > > While I agree this is the simplest approach, it does put a lot of > responsibility on the operators to do work to sidestep this issue, which > might not even apply to them (and knowing if it does might be > difficult). > > Shit, I missed the point why we were discussing about migrations. When you upgrade, you wanna move your workloads for upgrading your kernel and the likes. Gotcha. But, I assume that's not something mandatory for a single upgrade (say Queens>Rocky). In that case, you just wanna upgrade your compute without moving your instances. Or you notified your users about a maintenance and you know you have a minimal maintenance period for breaking them. In both cases, adding more steps for upgrading seems a tricky and dangerous path for those operators who are afraid of making a mistake. > > The virt drivers should simply error out of update_provider_tree() if > > there are ANY existing VMs on the host AND the virt driver wishes to > > begin tracking resources with nested providers. > > > > The upgrade operation should look like this: > > > > 1) Upgrade placement > > 2) Upgrade nova-scheduler > > 3) start loop on compute nodes. for each compute node: > > 3a) disable nova-compute service on node (to take it out of scheduling) > > 3b) evacuate all existing VMs off of node > > You mean s/evacuate/cold migrate/ of course... 
:) > > > 3c) upgrade compute node (on restart, the compute node will see no > > VMs running on the node and will construct the provider tree inside > > update_provider_tree() with an appropriate set of child providers > > and inventories on those child providers) > > 3d) enable nova-compute service on node > > > > Which is virtually identical to the "normal" upgrade process whenever > > there are significant changes to the compute node -- such as upgrading > > libvirt or the kernel. > > Not necessarily. It's totally legit (and I expect quite common) to just > reboot the host to take kernel changes, bringing back all the instances > that were there when it resumes. The "normal" case of moving things > around slide-puzzle-style applies to live migration (which isn't an > option here). I think people that can take downtime for the instances > would rather not have to move things around for no reason if the > instance has to get shut off anyway. > > Yeah exactly that. Accepting a downtime is fair, to the price to not have a long list of operations to do during that downtime period. > > Nested resource tracking is another such significant change and should > > be dealt with in a similar way, IMHO. > > This basically says that for anyone to move to rocky, they will have to > cold migrate every single instance in order to do that upgrade right? I > mean, anyone with two socket machines or SRIOV NICs would end up with at > least one level of nesting, correct? Forcing everyone to move everything > to do an upgrade seems like a non-starter to me. > > For the moment, we aren't providing NUMA topologies with nested RPs but once we do that, yeah, that would imply the above, which sounds harsh to hear from an operator perspective. > We also need to consider the case where people would be FFU'ing past > rocky (i.e. never running rocky computes). We've previously said that > we'd provide a way to push any needed transitions with everything > offline to facilitate that case, so I think we need to implement that > method anyway. > > I kinda think we need to either: > > 1. Make everything perform the pivot on compute node start (which can be > re-used by a CLI tool for the offline case) > That's another alternative I haven't explored yet. Thanks for the idea. We already reconcile the world when we restart the compute service by checking whether mediated devices exist, so that could be a good option actually. > 2. Make everything default to non-nested inventory at first, and provide > a way to migrate a compute node and its instances one at a time (in > place) to roll through. > > We could say that Rocky isn't supporting multiple vGPU types until you make the necessary DB migration that will create child RPs and the likes. That's yet another approach. We can also document "or do the cold-migration slide puzzle thing" as an > alternative for people that feel that's more reasonable. > > I just think that forcing people to take down their data plane to work > around our own data model is kinda evil and something we should be > avoiding at this level of project maturity. What we're really saying is > "we know how to translate A into B, but we require you to move many GBs > of data over the network and take some downtime because it's easier for > *us* than making it seamless." 
>
> --Dan
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jaypipes at gmail.com  Thu May 31 19:35:57 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Thu, 31 May 2018 15:35:57 -0400
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
Message-ID: 

On 05/31/2018 01:09 PM, Dan Smith wrote:
>> My feeling is that we should not attempt to "migrate" any allocations
>> or inventories between root or child providers within a compute node,
>> period.
> 
> While I agree this is the simplest approach, it does put a lot of
> responsibility on the operators to do work to sidestep this issue, which
> might not even apply to them (and knowing if it does might be
> difficult).

Perhaps, yes. Though the process I described is certainly not foreign to
operators. It is a safe and well-practiced approach.

>> The virt drivers should simply error out of update_provider_tree() if
>> there are ANY existing VMs on the host AND the virt driver wishes to
>> begin tracking resources with nested providers.
>>
>> The upgrade operation should look like this:
>>
>> 1) Upgrade placement
>> 2) Upgrade nova-scheduler
>> 3) start loop on compute nodes. for each compute node:
>> 3a) disable nova-compute service on node (to take it out of scheduling)
>> 3b) evacuate all existing VMs off of node
> 
> You mean s/evacuate/cold migrate/ of course... :)

I meant evacuate as in `nova host-evacuate-live` (with a fallback to
`nova host-servers-migrate` if live migration isn't possible).

>> 3c) upgrade compute node (on restart, the compute node will see no
>> VMs running on the node and will construct the provider tree inside
>> update_provider_tree() with an appropriate set of child providers
>> and inventories on those child providers)
>> 3d) enable nova-compute service on node
>>
>> Which is virtually identical to the "normal" upgrade process whenever
>> there are significant changes to the compute node -- such as upgrading
>> libvirt or the kernel.
> 
> Not necessarily. It's totally legit (and I expect quite common) to just
> reboot the host to take kernel changes, bringing back all the instances
> that were there when it resumes.

So, you're saying the normal process is to try upgrading the Linux
kernel and associated low-level libs, wait the requisite amount of time
that takes (can be a long time) and just hope that everything comes back
OK? That doesn't sound like any upgrade I've ever seen.

All upgrade procedures I have seen attempt to get the workloads off of
the compute host before trying anything major (and upgrading a linux
kernel or low-level lib like libvirt is a major thing IMHO).

> The "normal" case of moving things
> around slide-puzzle-style applies to live migration (which isn't an
> option here).
Sorry, I was saying that for all the lovely resources that have been
bolted onto Nova in the past 5 years (CPU pinning, NUMA topologies, PCI
passthrough, SR-IOV PF/VFs, vGPUs, etc.), if the workload uses *those*
resources, then live migration won't work and the admin would need to
fall back to nova host-servers-migrate. I wasn't saying that live
migration for all workloads/instances would not be a possibility.

> I think people that can take downtime for the instances would rather
> not have to move things around for no reason if the instance has to
> get shut off anyway.

Maybe. Not sure. But my line of thinking is to stick to a single,
already-known procedure since that is safe and well-practiced. Code that
we don't have to write means code that doesn't have new bugs that we'll
have to track down and fix.

I'm also thinking that we'd be tracking down and fixing those bugs while
trying to put out a fire that was caused by trying to auto-heal
everything at once on nova-compute startup and resulting in broken state
and an inability of the nova-compute service to start again, essentially
trapping instances on the failed host. ;)

>> Nested resource tracking is another such significant change and should
>> be dealt with in a similar way, IMHO.
> 
> This basically says that for anyone to move to rocky, they will have to
> cold migrate every single instance in order to do that upgrade, right?

No, sorry if I wasn't clear. They can live-migrate the instances off of
the to-be-upgraded compute host. They would only need to cold-migrate
instances that use the aforementioned non-movable resources.

> I kinda think we need to either:
> 
> 1. Make everything perform the pivot on compute node start (which can be
> re-used by a CLI tool for the offline case)
> 
> 2. Make everything default to non-nested inventory at first, and provide
> a way to migrate a compute node and its instances one at a time (in
> place) to roll through.

I would vote for Option #2 if it comes down to it.

If we are going to go through the hassle of writing a bunch of
transformation code in order to keep operator action as low as possible,
I would prefer to consolidate all of this code into the nova-manage (or
nova-status) tool and put some sort of attribute/marker on each compute
node record to indicate whether a "heal" operation has occurred for that
compute node. Kinda like what Matt's been playing with for the
heal_allocations stuff.

At least in that case, we'd have all the transform/heal code in a single
place and we wouldn't need to have much, if any, code in the compute
manager, resource tracker or "scheduler" report client.

Someone (maybe Gibi?) on this thread had mentioned having the virt
driver (in update_provider_tree) do the whole set reserved = total thing
when first attempting to create the child providers. That would work to
prevent the scheduler from attempting to place workloads on those child
providers, but we would still need some marker on the compute node to
indicate to the nova-manage heal_nested_providers (or whatever) command
that the compute node has had its provider tree validated/healed, right?

> We can also document "or do the cold-migration slide puzzle thing" as an
> alternative for people that feel that's more reasonable.
> 
> I just think that forcing people to take down their data plane to work
> around our own data model is kinda evil and something we should be
> avoiding at this level of project maturity.
The use of the word "evil" is a little, well, brutal, to describe
something I'm proposing that would just be more work for operators but
(again, IMHO) be the safest proven method for solving this problem. :)

Best,
-jay

From sbauza at redhat.com  Thu May 31 19:36:57 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 31 May 2018 21:36:57 +0200
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: 
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
Message-ID: 

On Thu, May 31, 2018 at 7:44 PM, Chris Dent wrote:

> On Thu, 31 May 2018, Dan Smith wrote:
>
> I kinda think we need to either:
>>
>> 1. Make everything perform the pivot on compute node start (which can be
>> re-used by a CLI tool for the offline case)
>>
>
> This sounds effectively like: validate my inventory and allocations
> at compute node start, correcting them as required (including the
> kind of migration stuff related to nested). Is that right?
>
> That's something I'd like to be the norm. It takes us back to a sort
> of self-healing compute node.
>
> Or am I missing something (forgive me, I've been on holiday).
>

I think I understand the same as you. And I think it's actually the best
approach. Wow, Dan, you saved my life again. Should I call you Mitch
Buchannon?

> I just think that forcing people to take down their data plane to work
>> around our own data model is kinda evil and something we should be
>> avoiding at this level of project maturity. What we're really saying is
>> "we know how to translate A into B, but we require you to move many GBs
>> of data over the network and take some downtime because it's easier for
>> *us* than making it seamless."
>>
>
> If we can do it, I agree that being not evil is good.
>
> --
> Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
> freenode: cdent                                         tw: @anticdent
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sbauza at redhat.com  Thu May 31 19:41:46 2018
From: sbauza at redhat.com (Sylvain Bauza)
Date: Thu, 31 May 2018 21:41:46 +0200
Subject: [openstack-dev] [nova] [placement] Upgrade concerns with nested Resource Providers
In-Reply-To: <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
References: <8eefd93a-abbf-1436-07a3-d18223ed8fa8@lab.ntt.co.jp>
 <1527584511.6381.1@smtp.office365.com>
 <1527596481.3825.0@smtp.office365.com>
 <1527678362.3825.3@smtp.office365.com>
 <5cccaa5b-45f6-cc0e-2b63-afdb271de2fb@gmail.com>
 <4a867428-1203-63b7-9b74-86fda468047c@fried.cc>
Message-ID: 

On Thu, May 31, 2018 at 8:26 PM, Eric Fried wrote:

> > 1. Make everything perform the pivot on compute node start (which can be
> > re-used by a CLI tool for the offline case)
> > 2. Make everything default to non-nested inventory at first, and provide
> > a way to migrate a compute node and its instances one at a time (in
> > place) to roll through.
>
> I agree that it sure would be nice to do ^ rather than requiring the
> "slide puzzle" thing.
>
> But how would this be accomplished, in light of the current "separation
> of responsibilities" drawn at the virt driver interface, whereby the
> virt driver isn't supposed to talk to placement directly, or know
> anything about allocations? Here's a first pass:
>

What we usually do is implement, either at the compute service level or
at the virt driver level, some init_host() method that will reconcile
what you want.
For example, we could just imagine a non-virt specific method (and I
like that because it's non-virt specific) - ie. called by compute's
init_host() - that would look up the compute root RP inventories and see
whether one or more inventories tied to specific resource classes have
to be moved from the root RP and be attached to a child RP.
The only subtlety that would require a virt-specific update would be the
name of the child RP (as both Xen and libvirt plan to use the child RP
name as the vGPU type identifier), but that's an implementation detail
that a possible virt driver update by the resource tracker could
reconcile.

> The virt driver, via the return value from update_provider_tree, tells
> the resource tracker that "inventory of resource class A on provider B
> has moved to provider C" for all applicable AxBxC. E.g.
>
> [ { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': <gpu_rp1_uuid>
>   },
>   { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [VGPU: 4],
>     'to_resource_provider': <gpu_rp2_uuid>
>   },
>   { 'from_resource_provider': <cn_rp_uuid>,
>     'moved_resources': [
>         SRIOV_NET_VF: 2,
>         NET_BANDWIDTH_EGRESS_KILOBITS_PER_SECOND: 1000,
>         NET_BANDWIDTH_INGRESS_KILOBITS_PER_SECOND: 1000,
>     ],
>     'to_resource_provider': <gpu_rp2_uuid>
>   }
> ]
>
> As today, the resource tracker takes the updated provider tree and
> invokes [1] the report client method update_from_provider_tree [2] to
> flush the changes to placement. But now update_from_provider_tree also
> accepts the return value from update_provider_tree and, for each "move":
>
> - Creates provider C (as described in the provider_tree) if it doesn't
> already exist.
> - Creates/updates provider C's inventory as described in the
> provider_tree (without yet updating provider B's inventory). This ought
> to create the inventory of resource class A on provider C.
> - Discovers allocations of rc A on rp B and POSTs to move them to rp C*.
> - Updates provider B's inventory.
>
> (*There's a hole here: if we're splitting a glommed-together inventory
> across multiple new child providers, as the VGPUs in the example, we
> don't know which allocations to put where. The virt driver should know
> which instances own which specific inventory units, and would be able to
> report that info within the data structure. That's getting kinda close
> to the virt driver mucking with allocations, but maybe it fits well
> enough into this model to be acceptable?)
>
> Note that the return value from update_provider_tree is optional, and
> only used when the virt driver is indicating a "move" of this ilk. If
> it's None/[] then the RT/update_from_provider_tree flow is the same as
> it is today.
>
> If we can do it this way, we don't need a migration tool. In fact, we
> don't even need to restrict provider tree "reshaping" to release
> boundaries. As long as the virt driver understands its own data model
> migrations and reports them properly via update_provider_tree, it can
> shuffle its tree around whenever it wants.
>
> Thoughts?
> > -efried > > [1] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2f > c34d7e8e1b/nova/compute/resource_tracker.py#L890 > [2] > https://github.com/openstack/nova/blob/8753c9a38667f984d385b4783c3c2f > c34d7e8e1b/nova/scheduler/client/report.py#L1341 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Thu May 31 20:04:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 31 May 2018 15:04:53 -0500 Subject: [openstack-dev] [nova] proposal to postpone nova-network core functionality removal to Stein In-Reply-To: <29873b6f-8a3c-ae6e-0756-c90d2c52a306@gmail.com> References: <29873b6f-8a3c-ae6e-0756-c90d2c52a306@gmail.com> Message-ID: <1391ee64-90f7-9414-9168-3a4caf495555@gmail.com> On 5/31/2018 1:35 PM, melanie witt wrote: > > This cycle at the PTG, we had decided to start making some progress > toward removing nova-network [1] (thanks to those who have helped!) and > so far, we've landed some patches to extract common network utilities > from nova-network core functionality into separate utility modules. And > we've started proposing removal of nova-network REST APIs [2]. > > At the cells v2 sync with operators forum session at the summit [3], we > learned that CERN is in the middle of migrating from nova-network to > neutron and that holding off on removal of nova-network core > functionality until Stein would help them out a lot to have a safety net > as they continue progressing through the migration. > > If we recall correctly, they did say that removal of the nova-network > REST APIs would not impact their migration and Surya Seetharaman is > double-checking about that and will get back to us. If so, we were > thinking we can go ahead and work on nova-network REST API removals this > cycle to make some progress while holding off on removing the core > functionality of nova-network until Stein. > > I wanted to send this to the ML to let everyone know what we were > thinking about this and to receive any additional feedback folks might > have about this plan. > > Thanks, > -melanie > > [1] https://etherpad.openstack.org/p/nova-ptg-rocky L301 > [2] https://review.openstack.org/567682 > [3] > https://etherpad.openstack.org/p/YVR18-cellsv2-migration-sync-with-operators > L30 As a reminder, this is the etherpad I started to document the nova-net specific compute REST APIs which are candidates for removal: https://etherpad.openstack.org/p/nova-network-removal-rocky -- Thanks, Matt From mriedemos at gmail.com Thu May 31 20:06:05 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 31 May 2018 15:06:05 -0500 Subject: [openstack-dev] [nova] proposal to postpone nova-network core functionality removal to Stein In-Reply-To: <1391ee64-90f7-9414-9168-3a4caf495555@gmail.com> References: <29873b6f-8a3c-ae6e-0756-c90d2c52a306@gmail.com> <1391ee64-90f7-9414-9168-3a4caf495555@gmail.com> Message-ID: +openstack-operators On 5/31/2018 3:04 PM, Matt Riedemann wrote: > On 5/31/2018 1:35 PM, melanie witt wrote: >> >> This cycle at the PTG, we had decided to start making some progress >> toward removing nova-network [1] (thanks to those who have helped!) 
>> and so far, we've landed some patches to extract common network
>> utilities from nova-network core functionality into separate utility
>> modules. And we've started proposing removal of nova-network REST APIs
>> [2].
>>
>> At the cells v2 sync with operators forum session at the summit [3],
>> we learned that CERN is in the middle of migrating from nova-network
>> to neutron and that holding off on removal of nova-network core
>> functionality until Stein would help them out a lot to have a safety
>> net as they continue progressing through the migration.
>>
>> If we recall correctly, they did say that removal of the nova-network
>> REST APIs would not impact their migration and Surya Seetharaman is
>> double-checking about that and will get back to us. If so, we were
>> thinking we can go ahead and work on nova-network REST API removals
>> this cycle to make some progress while holding off on removing the
>> core functionality of nova-network until Stein.
>>
>> I wanted to send this to the ML to let everyone know what we were
>> thinking about this and to receive any additional feedback folks might
>> have about this plan.
>>
>> Thanks,
>> -melanie
>>
>> [1] https://etherpad.openstack.org/p/nova-ptg-rocky L301
>> [2] https://review.openstack.org/567682
>> [3]
>> https://etherpad.openstack.org/p/YVR18-cellsv2-migration-sync-with-operators
>> L30
>
> As a reminder, this is the etherpad I started to document the nova-net
> specific compute REST APIs which are candidates for removal:
>
> https://etherpad.openstack.org/p/nova-network-removal-rocky
>

-- 

Thanks,

Matt

From jdennis at redhat.com  Thu May 31 20:49:13 2018
From: jdennis at redhat.com (John Dennis)
Date: Thu, 31 May 2018 16:49:13 -0400
Subject: [openstack-dev] [tc][all] A culture change (nitpicking)
In-Reply-To: <20180531002300.5uff6i6mmot4lq72@yuggoth.org>
References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com>
 <20180531002300.5uff6i6mmot4lq72@yuggoth.org>
Message-ID: 

On 05/30/2018 08:23 PM, Jeremy Stanley wrote:
> I think this is orthogonal to the thread. The idea is that we should
> avoid nettling contributors over minor imperfections in their
> submissions (grammatical, spelling or typographical errors in code
> comments and documentation, mild inefficiencies in implementations,
> et cetera). Clearly we shouldn't merge broken features, changes
> which fail tests/linters, and so on. For me the rule of thumb is,
> "will the software be better or worse if this is merged?" It's not
> about perfection or imperfection, it's about incremental
> improvement. If a proposed change is an improvement, that's enough.
> If it's not perfect... well, that's just opportunity for more
> improvement later.

I appreciate the sentiment concerning accepting any improvement, yet on
the other hand waiting for improvements to the patch to occur later is
folly; it won't happen.

Those of us familiar with working with large bodies of code from
multiple authors spanning an extended time period will tell you it's
very confusing when it's obvious most of the code follows certain
conventions but there are odd exceptions (often without comments). This
inevitably leads to investing a lot of time trying to understand why the
exception exists, because "clearly it's there for a reason and I'm just
missing the rationale." At that point the reason for the inconsistency
is lost.

At the end of the day it is more important to keep the code base clean
and consistent for those that follow than it is to coddle in the near
term.
-- John Dennis From fungi at yuggoth.org Thu May 31 20:55:17 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 31 May 2018 20:55:17 +0000 Subject: [openstack-dev] [tc][all] A culture change (nitpicking) In-Reply-To: References: <3424d691-9792-afde-dce9-4eca7601ae4f@redhat.com> <20180531002300.5uff6i6mmot4lq72@yuggoth.org> Message-ID: <20180531205517.xcqg7cfikswxqntn@yuggoth.org> On 2018-05-31 16:49:13 -0400 (-0400), John Dennis wrote: > On 05/30/2018 08:23 PM, Jeremy Stanley wrote: > > I think this is orthogonal to the thread. The idea is that we should > > avoid nettling contributors over minor imperfections in their > > submissions (grammatical, spelling or typographical errors in code > > comments and documentation, mild inefficiencies in implementations, > > et cetera). Clearly we shouldn't merge broken features, changes > > which fail tests/linters, and so on. For me the rule of thumb is, > > "will the software be better or worse if this is merged?" It's not > > about perfection or imperfection, it's about incremental > > improvement. If a proposed change is an improvement, that's enough. > > If it's not perfect... well, that's just opportunity for more > > improvement later. > > I appreciate the sentiment concerning accepting any improvement yet on the > other hand waiting for improvements to the patch to occur later is folly, it > won't happen. > > Those of us familiar with working with large bodies of code from multiple > authors spanning an extended time period will tell you it's very confusing > when it's obvious most of the code follows certain conventions but there are > odd exceptions (often without comments). This inevitably leads to investing > a lot of time trying to understand why the exception exists because "clearly > it's there for a reason and I'm just missing the rationale" At that point > the reason for the inconsistency is lost. > > At the end of the day it is more important to keep the code base clean and > consistent for those that follow than it is to coddle in the near term. Sure, I suppose it comes down to your definition of "improvement." I don't consider a change proposing incomplete or unmaintainable code to be an improvement. On the other hand I think it's fine to approve changes which are "good enough" even if there's room for improvement, so long as they're "good enough" that you're fine with them possibly never being improved on due to shifts in priorities. I'm certainly not suggesting that it's a good idea to merge technical debt with the expectation that someone will find time to solve it later (any more than it's okay to merge obvious bugs in hopes someone will come along and fix them for you). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Thu May 31 20:59:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 31 May 2018 15:59:43 -0500 Subject: [openstack-dev] Forum Recap - Stein Release Goals Message-ID: <20180531205942.GA18176@sm-xps> Here's my attempt to recap the goal selection discussion we had last week at the Forum. Feel free to correct any misstatements and continue the discussion. 
For reference, here's the etherpad from the discussion:

https://etherpad.openstack.org/p/YVR-S-release-goals

Overall Goal Discussion
=======================
We started off by discussing the reason for having the release cycle
goals and what we should actually be trying to achieve with them. Some
expressed concerns that the current Rocky goal selections were not the
right things to be focusing on. The hope with this first part of the
session was to come to some sort of agreement, or at least a common
understanding, that would inform our selections for Stein.

Some thought the goals should be entirely operator facing, meaning
things that bring an obvious and direct improvement for operators. At
least so far, our goal selection has mostly tried to pick one goal that
is a visible thing like that, while the other is more of a technical
debt cleanup. The idea is that the tech debt goals keep us in a good
and healthy position to continue addressing operator and user needs
more easily.

There was also a desire to make goals more "fully baked" before
selecting them. This would mean having the necessary changes well
documented, with example patches teams can refer to as a guide for
figuring out what needs to be done in their own repos.

There was also a desire to make these goals something that can generate
some excitement and serve as more of a marketing message. Things like
"OpenStack services now support live configuration changes" vs.
"OpenStack got rid of a testing library that no one has heard of".

And I almost missed it, but there was a great suggestion to have a
#goals channel for folks to go to for help and to discuss goal
implementation. I really like this idea and will bring it up in the
next TC office hour to see if we can get something official set up.

Stein Goals
===========
We ended up with only 10-15 minutes to actually discuss some ideas for
goal selection for Stein. This was expected and planned. It will take
some further discussion and thought before I think we are ready to
actually pick some goals.

Some of the more popular ones brought up in the session were:

- Cold upgrade support
- Python 3 first
- Addition of "upgrade check" CLIs

We were also able to already identify some possible goals for the T
cycle:

- Move all CLIs to python-openstackclient
- Adopt a larger set of default roles

We've been collecting a "goal backlog" with these and other ideas here:

https://etherpad.openstack.org/p/community-goals

---
Sean (smcginnis)

From corvus at inaugust.com  Thu May 31 21:23:30 2018
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 31 May 2018 14:23:30 -0700
Subject: [openstack-dev] Winterscale: a proposal regarding the project
 infrastructure
In-Reply-To: (Joshua Hesketh's message of "Thu, 31 May 2018 11:40:21
 +1000")
References: <87o9gxdsb9.fsf@meyer.lemoncheese.net>
Message-ID: <87r2lrsenh.fsf@meyer.lemoncheese.net>

Joshua Hesketh writes:

> So the "winterscale infrastructure council"'s purview is quite limited
> in scope to just govern the services provided?
>
> If so, would you foresee a need to maintain some kind of
> "Infrastructure council" as it exists at the moment to be the
> technical design body?

For the foreseeable future, I think the "winterscale infrastructure
team" can probably handle that. If it starts to sprawl again, we can
make a new body.

> Specifically, wouldn't we still want somewhere for the "winterscale
> infrastructure team" to be represented, and would that expand to any
> infrastructure-related core teams?
Can you elaborate on this? I'm not following.

-Jim

From lbragstad at gmail.com  Thu May 31 21:58:43 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Thu, 31 May 2018 16:58:43 -0500
Subject: [openstack-dev] [keystone] failing documentation jobs
Message-ID: 

Hi all,

If you've been trying to write documentation patches, you may have
noticed them tripping over unrelated errors when building the docs. We
have a bug open detailing why this happened [0] and a fix working its
way through the gate [1]. The docs job should be back up and running
soon.

[0] https://bugs.launchpad.net/keystone/+bug/1774508
[1] https://review.openstack.org/#/c/571369/
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL: 

From curt.moore at garmin.com  Thu May 31 22:14:42 2018
From: curt.moore at garmin.com (Moore, Curt)
Date: Thu, 31 May 2018 22:14:42 +0000
Subject: [openstack-dev] [nova][glance] Deprecation of
 nova.image.download.modules extension point
Message-ID: <6992a8851a8349eeb194664c267a1a63@garmin.com>

Hello.

We recently upgraded from Liberty to Pike and, looking ahead to the
code in Queens, noticed the image download deprecation notice with
instructions to post here if this interface was in use. As such, I'd
like to explain our use case, see if there is a better way of
accomplishing our goal, or else lobby for the "un-deprecation" of this
extension point.

As with many installations, we are using Ceph for both our Glance image
store and our VM instance disks. In the normal workflow, when both
Glance and libvirt are configured to use Ceph, libvirt reacts to the
direct_url field on the Glance image and performs an in-place clone of
the RAW disk image from the images pool into the vms pool, all within
Ceph. The snapshot creation process is very fast and thinly
provisioned, as it's a COW snapshot.

This underlying workflow itself works great; the issue is with the
performance of the VM's disk within Ceph, especially as the number of
nodes within the cluster grows. We have found, especially with Windows
VMs (largely as a result of I/O for the Windows pagefile), that the
performance of the Ceph cluster as a whole takes a very large hit in
keeping up with all of this I/O thrashing, especially when Windows is
booting. This is not the case with Linux VMs, as they do not use swap
as frequently as Windows nodes do their pagefiles. Windows can be run
without a pagefile, but that leads to other oddities within Windows.

I should also mention that in our case the nodes themselves are
ephemeral and we do not care about live migration, etc.; we just want
raw performance. As an aside on our Ceph setup, without getting into
too many details: we have very fast SSD-based Ceph nodes for this pool
(separate crush root, SSDs for both OSDs and journals, 2 replicas),
interconnected on the same switch backplane, each with bonded 10Gb
uplinks to the switch. Our Nova nodes are within the same datacenter
(and also have bonded 10Gb uplinks to their switches) but are
distributed across different switches. We could move the Nova nodes to
the same switch as the Ceph nodes, but rearranging that many servers
to make space is a larger logistical challenge.

Back to our use case: in order to isolate this heavy I/O, a subset of
our compute nodes have a local SSD and are set to use qcow2 images
instead of rbd, so that libvirt will pull the image down from Glance
into the node's local image cache and run the VM from the local SSD.
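To make the split concrete, a minimal sketch of the relevant nova.conf
settings on the two classes of compute nodes might look like the
following (pool names and paths are illustrative and deployment
specific; the Glance side also needs show_image_direct_url enabled for
the in-place COW clone described above to happen):

  # Ceph-backed compute nodes: boot disks are COW clones inside Ceph
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms
  images_rbd_ceph_conf = /etc/ceph/ceph.conf

  # SSD-equipped compute nodes: the image is fetched from Glance into
  # the node-local image cache and the VM runs from a local qcow2 file
  [libvirt]
  images_type = qcow2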
This allows Windows VMs to boot and perform their initial
cloudbase-init setup/reboot within ~20 sec vs. 4-5 min, regardless of
overall Ceph cluster load. Additionally, this prevents us from
"wasting" IOPS and instead keeps them local to the Nova node,
reclaiming the network bandwidth and Ceph IOPS for use by Cinder
volumes. This is essentially the use case outlined in the "Do designate
some non-Ceph compute hosts with low-latency local storage" section
here:

https://ceph.com/planet/the-dos-and-donts-for-ceph-for-openstack/

The challenge is that the Glance image transfer is _glacially slow_
when using the Glance HTTP API (~30 min for a 50GB Windows image; it's
Windows, so it's huge with all of the necessary tools installed). If
libvirt can instead perform an RBD export on the image using the image
download functionality, it is able to download the same image in ~30
sec. We have code performing this direct download from Glance over RBD
and it works great in our use case; it is very similar to the code in
this older patch (a rough sketch of the streaming core appears below,
after this message):

https://review.openstack.org/#/c/44321/

We could look at attaching an additional ephemeral disk to the instance
and having cloudbase-init use it as the pagefile, but it appears that
if libvirt is using rbd for its images_type, _all_ disks must then come
from Ceph; there is no way at present to run the VM image from Ceph
while mapping in an ephemeral disk from node-local storage. Even so,
this would still "waste" Ceph IOPS on the VM disk itself, which could
be better used for other purposes.

Based on what I have explained about our use case, is there a
better/different way to accomplish the same goal without using the
deprecated image download functionality? If not, can we work to
"un-deprecate" the download extension point? Should I work to get the
code for this RBD download into the upstream repository?

Thanks,
-Curt

________________________________

CONFIDENTIALITY NOTICE: This email and any attachments are for the sole
use of the intended recipient(s) and contain information that may be
Garmin confidential and/or Garmin legally privileged. If you have
received this email in error, please notify the sender by reply email
and delete the message. Any disclosure, copying, distribution or use of
this communication (including attachments) by someone other than the
intended recipient is prohibited. Thank you.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
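As referenced in the last message above, a rough sketch of the RBD
streaming core behind such a download handler, using the python
rados/rbd bindings. The function name, parameters, and its wiring into
the deprecated nova.image.download.modules entry point are illustrative
assumptions, not code from the linked patch:

  import rados
  import rbd

  CHUNK_SIZE = 8 * 1024 * 1024  # stream the image in 8 MiB reads

  def copy_rbd_image_to_file(pool, name, snapshot, dst_path,
                             conffile='/etc/ceph/ceph.conf',
                             rados_id='glance'):
      """Stream a Glance-owned RBD image to a local file.

      pool/name/snapshot would be parsed out of the image's
      direct_url, e.g. rbd://<fsid>/<pool>/<image id>/<snapshot>.
      """
      cluster = rados.Rados(conffile=conffile, rados_id=rados_id)
      cluster.connect()
      try:
          ioctx = cluster.open_ioctx(pool)
          try:
              with rbd.Image(ioctx, name, snapshot=snapshot) as image:
                  with open(dst_path, 'wb') as dst:
                      size = image.size()
                      offset = 0
                      while offset < size:
                          # read() takes an offset and a length, so the
                          # final read is clamped to the image size
                          chunk = image.read(
                              offset, min(CHUNK_SIZE, size - offset))
                          dst.write(chunk)
                          offset += len(chunk)
          finally:
              ioctx.close()
      finally:
          cluster.shutdown()

Reading in bounded chunks keeps memory use flat for multi-GB images;
the speedup itself comes from the compute node talking to the Ceph
OSDs directly rather than proxying every byte through the Glance API
service.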