From openstack at fried.cc Sun Apr 1 00:20:24 2018
From: openstack at fried.cc (Eric Fried)
Date: Sat, 31 Mar 2018 19:20:24 -0500
Subject: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto
In-Reply-To: <20180331232401.hp5j4iommgw7tj3j@gentoo.org>
References: <20180331232401.hp5j4iommgw7tj3j@gentoo.org>
Message-ID: <91e24aaf-0f0e-dcfb-2ce2-16f7841e893a@fried.cc>

Mr. Fire-

> nova-powervm: no open reviews
>  - in test-requirements, but not actually used?
>  - made https://review.openstack.org/558091 for it

Thanks for that. It passed all our tests; we should merge it early next week.

-efried

From zhang.lei.fly at gmail.com Sun Apr 1 02:36:40 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Sun, 1 Apr 2018 10:36:40 +0800
Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo] propose retire kolla-kubernetes project
In-Reply-To: <20180331231642.liyindpxke5t4qm5@yuggoth.org>
References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> <20180331231642.liyindpxke5t4qm5@yuggoth.org>
Message-ID:

Thanks, everyone. This thread has drifted a little off-topic, so first let us get back to the subject of this mail.

**kolla-kubernetes**

The root issue for kolla-kubernetes is that it has no active contributors. If more people were interested in this project, I would be glad to give it more time. But leaving it under Kolla governance may not be good for its growth, because it is a **totally different** technical stack from kolla-ansible. So moving it to TC governance should be the best solution.

**the kolla and kolla-ansible split**

kolla (the container images) is widely used by multiple projects (TripleO, OSH), and I have also heard that some internal projects are using it. kolla and kolla-ansible are well decoupled: the usage, and the API kolla provides, have always been stable and backward compatible, and kolla images are used in many production environments through different deployment tools. So kolla (containers) has earned the claim that it "provides production-ready containers". That should not be viewed negatively just because kolla and kolla-ansible are under the same team governance.

A team split would let people focus on one thing and make each part look better. But we already have two core teams, kolla-core and kolla-ansible-core, and anyone is welcome to join either of them. In fact, though, the members of these two teams are almost the same. If we split the team now, all we gain is chaos and management overhead. I think the proper time to split is when the membership of the kolla-core and kolla-ansible-core teams has actually diverged (by 50%, maybe?).

On Sun, Apr 1, 2018 at 7:16 AM, Jeremy Stanley wrote:
> On 2018-03-31 22:07:03 +0000 (+0000), Steven Dake (stdake) wrote:
> [...]
> > The problems raised in this thread (tension - tight coupling -
> > second class citizens - stratification) were predicted early on -
> > prior to Kolla 1.0. That prediction led to the creation of a
> > technical solution - the Kolla API. This API permits anyone to
> > reuse the containers as they see fit if they conform their
> > implementation to the API. The API is not specifically tied to
> > the Ansible deployment technology. Instead the API is tied to the
> > varying requirements that various deployment teams have had in the
> > past around generalized requirements for making container
> > lifecycle management a reality while running OpenStack services
> > and their dependencies inside containers.
> [...]
>
> Thanks!
> That's where my fuzzy thought process was leading. The existence
> of a stable API guarantee, rather than treating the API as "whatever
> kolla-ansible does", significantly increases the chances of other
> projects being able to rely on kolla's images in the long term.
> --
> Jeremy Stanley

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

From sundar.nadathur at intel.com Sun Apr 1 02:38:14 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Sat, 31 Mar 2018 19:38:14 -0700
Subject: [openstack-dev] [nova] [cyborg] Race condition in the Cyborg/Nova flow
In-Reply-To: <7475d530-9800-35bc-711d-3ba91b71a7d1@fried.cc>
References: <42368ae5-3fbe-cb2b-8ba4-71736740b1b3@intel.com> <11e51bc9-cc4a-27e1-29f1-3a4c04ce733d@fried.cc> <13e666d6-2e3f-0605-244d-e180d7424eee@fried.cc> <7475d530-9800-35bc-711d-3ba91b71a7d1@fried.cc>
Message-ID:

Hi Eric and all,

Thank you very much for considering my concerns and coming back with an improved solution. I am glad that no blood was shed in the process.

I took this proposal and worked out its details, as I understand them, in this etherpad:

     https://etherpad.openstack.org/p/Cyborg-Nova-Multifunction

The intention of this detailed scheme is to include GPUs, FPGAs and all devices, but the focus may be more on FPGAs. The scheme at first keeps the restriction that a multi-function device cannot be reprogrammed but, in the last section, explores which part of the sky will fall down if we do allow that. Maybe we'll get through this with tears but no blood!

Have a good rest of the weekend.

Regards,
Sundar

On 3/29/2018 9:43 AM, Eric Fried wrote:
> We discussed this on IRC [1], hangout, and etherpad [2]. Here is the
> summary, which we mostly seem to agree on:
>
> There are two different classes of device we're talking about
> modeling/managing. (We don't know the real nomenclature, so forgive
> errors in that regard.)
>
> ==> Fully dynamic: You can program one region with one function, and
> then still program a different region with a different function, etc.
>
> ==> Single program: Once you program the card with a function, *all* its
> virtual slots are *only* capable of that function until the card is
> reprogrammed. And while any slot is in use, you can't reprogram. This
> is Sundar's FPGA use case. It is also Sylvain's VGPU use case.
>
> The "fully dynamic" case is straightforward (in the sense of being what
> placement was architected to handle).
> * Model the PF/region as a resource provider.
> * The RP has inventory of some generic resource class (e.g. "VGPU",
>   "SRIOV_NET_VF", "FPGA_FUNCTION"). Allocations consume that inventory,
>   plain and simple.
> * As a region gets programmed dynamically, it's acceptable for the thing
>   doing the programming to set a trait indicating that that function is in
>   play. (Sundar, this is the thing I originally said would get
>   resistance; but we've agreed it's okay. No blood was shed :)
> * Requests *may* use preferred traits to help them land on a card that
>   already has their function flashed on it. (Prerequisite: preferred
>   traits, which can be implemented in placement. Candidates with the most
>   preferred traits get sorted highest.)
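>
> As a concrete illustration (a sketch only -- the trait name here is
> made up), a boot request for one such function could ask placement for:
>
>     GET /allocation_candidates?resources=FPGA_FUNCTION:1
>         &required=CUSTOM_FUNCTION_X_PROGRAMMED
>
> where "required" would become "preferred" once that feature exists, so
> candidates without the trait are still returned, just sorted lower.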
>
> The "single program" case needs to be handled more like what Alex
> describes below. TL;DR: We do *not* support dynamic programming,
> traiting, or inventorying at instance boot time - it all has to be done
> "up front".
> * The PFs can be initially modeled as "empty" resource providers. Or
>   maybe not at all. Either way, *they can not be deployed* in this state.
> * An operator or admin (via a CLI, config file, agent like blazar or
>   cyborg, etc.) preprograms the PF to have the specific desired
>   function/configuration.
>   * This may be cyborg/blazar pre-programming devices to maintain an
>     available set of each function
>   * This may be in response to a user requesting some function, which
>     causes a new image to be laid down on a device so it will be available
>     for scheduling
>   * This may be a human doing it at cloud-build time
> * This results in the resource provider being (created and) set up with
>   the inventory and traits appropriate to that function.
> * Now deploys can happen, using required traits representing the desired
>   function.
>
> -efried
>
> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-03-29.log.html#t2018-03-29T12:52:56
> [2] https://etherpad.openstack.org/p/placement-dynamic-traiting
>
> On 03/29/2018 07:38 AM, Alex Xu wrote:
>> Agreed. Whether we tweak inventories or traits, neither works on its own.
>>
>> As with VGPU, we can support a pre-programmed mode for a multiple-function
>> region, where each region supports only one function type.
>>
>> There are two reasons why Cyborg has a filter:
>> * it records the usage of functions in a region
>> * it records which function is programmed
>>
>> For #1, each region provides multiple functions, and each function can be
>> assigned to a VM. So we should create a ResourceProvider for the region,
>> with the function as the resource class. That is similar to an SR-IOV
>> device: the region (the PF) provides functions (VFs).
>>
>> For #2, we should use traits to distinguish the function types.
>>
>> Then we no longer keep any inventory info in Cyborg, we don't need a
>> filter in Cyborg either, and there is no race condition anymore.
>>
>> 2018-03-29 2:48 GMT+08:00 Eric Fried:
>>
>>     Sundar-
>>
>>     We're running across this issue in several places right now. One
>>     thing that's definitely not going to get traction is
>>     automatically/implicitly tweaking inventory in one resource class when
>>     an allocation is made on a different resource class (whether in the
>>     same or different RPs).
>>
>>     Slightly less of a nonstarter, but still likely to get significant
>>     push-back, is the idea of tweaking traits on the fly. For example, your
>>     vGPU case might be modeled as:
>>
>>     PGPU_RP: {
>>       inventory: {
>>           CUSTOM_VGPU_TYPE_A: 2,
>>           CUSTOM_VGPU_TYPE_B: 4,
>>       }
>>       traits: [
>>           CUSTOM_VGPU_TYPE_A_CAPABLE,
>>           CUSTOM_VGPU_TYPE_B_CAPABLE,
>>       ]
>>     }
>>
>>     The request would come in for
>>     resources=CUSTOM_VGPU_TYPE_A:1&required=CUSTOM_VGPU_TYPE_A_CAPABLE,
>>     resulting in an allocation of CUSTOM_VGPU_TYPE_A:1. Now while you're
>>     processing that, you would *remove* CUSTOM_VGPU_TYPE_B_CAPABLE from the
>>     PGPU_RP. So it doesn't matter that there's still inventory of
>>     CUSTOM_VGPU_TYPE_B:4, because a request including
>>     required=CUSTOM_VGPU_TYPE_B_CAPABLE won't be satisfied by this RP.
>>     There's of course a window between when the initial allocation is made
>>     and when you tweak the trait list.
>>     In that case you'll just have to
>>     fail the loser. This would be like any other failure in e.g. the spawn
>>     process; it would bubble up, the allocation would be removed; retries
>>     might happen or whatever.
>>
>>     Like I said, you're likely to get a lot of resistance to this idea as
>>     well. (Though TBH, I'm not sure how we can stop you beyond -1'ing your
>>     patches; there's nothing about placement that disallows it.)
>>
>>     The simple-but-inefficient solution is simply that we'd still be able
>>     to make allocations for vGPU type B, but you would have to fail right
>>     away when it came down to cyborg to attach the resource. Which is code
>>     you pretty much have to write anyway. It's an improvement if cyborg
>>     gets to be involved in the post-get-allocation-candidates
>>     weighing/filtering step, because you can do that check at that point to
>>     help filter out the candidates that would fail. Of course there's still
>>     a race condition there, but it's no different than for any other
>>     resource.
>>
>>     efried
>>
>>     On 03/28/2018 12:27 PM, Nadathur, Sundar wrote:
>> > Hi Eric and all,
>> >     I should have clarified that this race condition happens only for
>> > the case of devices with multiple functions. There is a prior thread
>> > about it. I was trying to get a solution within Cyborg, but that faces
>> > this race condition as well.
>> >
>> > IIUC, this situation is somewhat similar to the issue with vGPU types
>> > (thanks to Alex Xu for pointing this out). In the latter case, we could
>> > start with an inventory of (vgpu-type-a: 2; vgpu-type-b: 4). But, after
>> > consuming a unit of vGPU-type-a, ideally the inventory should change
>> > to: (vgpu-type-a: 1; vgpu-type-b: 0). With multi-function accelerators,
>> > we start with an RP inventory of (region-type-A: 1, function-X: 4). But,
>> > after consuming a unit of that function, ideally the inventory should
>> > change to: (region-type-A: 0, function-X: 3).
>> >
>> > I understand that this approach is controversial :) Also, one difference
>> > from the vGPU case is that the number and count of vGPU types is static,
>> > whereas with FPGAs, one could reprogram it to result in more or fewer
>> > functions. That said, we could hopefully keep this analogy in mind for
>> > future discussions.
>> >
>> > We probably will not support multi-function accelerators in Rocky. This
>> > discussion is for the longer term.
>> >
>> > Regards,
>> > Sundar
>> >
>> > On 3/23/2018 12:44 PM, Eric Fried wrote:
>> >> Sundar-
>> >>
>> >>      First thought is to simplify by NOT keeping inventory information in
>> >> the cyborg db at all. The provider record in the placement service
>> >> already knows the device (the provider ID, which you can look up in the
>> >> cyborg db), the host (the root_provider_uuid of the provider representing
>> >> the device) and the inventory, and (I hope) you'll be augmenting it with
>> >> traits indicating what functions it's capable of. That way, you'll
>> >> always get allocation candidates with devices that *can* load the
>> >> desired function; now you just have to engage your weigher to prioritize
>> >> the ones that already have it loaded so you can prefer those.
>> >>
>> >>      Am I missing something?
>> >>
>> >>              efried
>> >>
>> >> On 03/22/2018 11:27 PM, Nadathur, Sundar wrote:
>> >>> Hi all,
>> >>>     There seems to be a possibility of a race condition in the
>> >>> Cyborg/Nova flow.
>> >>> Apologies for missing this earlier. (You can refer to
>> >>> the proposed Cyborg/Nova spec for details.)
>> >>>
>> >>> Consider the scenario where the flavor specifies a resource class for a
>> >>> device type, and also specifies a function (e.g. encrypt) in the extra
>> >>> specs. The Nova scheduler would only track the device type as a
>> >>> resource, and Cyborg needs to track the availability of functions.
>> >>> Further, to keep it simple, say all the functions exist all the time (no
>> >>> reprogramming involved).
>> >>>
>> >>> To recap, here is the scheduler flow for this case:
>> >>>
>> >>>   * A request spec with a flavor comes to Nova conductor/scheduler. The
>> >>>     flavor has a device type as a resource class, and a function in the
>> >>>     extra specs.
>> >>>   * Placement API returns the list of RPs (compute nodes) which contain
>> >>>     the requested device types (but not necessarily the function).
>> >>>   * Cyborg will provide a custom filter which queries the Cyborg DB. This
>> >>>     needs to check which hosts contain the needed function, and filter
>> >>>     out the rest.
>> >>>   * The scheduler selects one node from the filtered list, and the
>> >>>     request goes to the compute node.
>> >>>
>> >>> For the filter to work, the Cyborg DB needs to maintain a table with
>> >>> triples of (host, function type, #free units). The filter checks if a
>> >>> given host has one or more free units of the requested function type.
>> >>> But, to keep the #free units up to date, Cyborg on the selected compute
>> >>> node needs to notify the Cyborg API to decrement the #free units when an
>> >>> instance is spawned, and to increment them when resources are released.
>> >>>
>> >>> Therein lies the catch: this loop from the compute node to the
>> >>> controller is susceptible to race conditions. For example, if two
>> >>> simultaneous requests each ask for function A, and there is only one
>> >>> unit of that available, the Cyborg filter will approve both, both may
>> >>> land on the same host, and one will fail. This is because Cyborg on the
>> >>> controller does not decrement resource usage due to one request before
>> >>> processing the next request.
>> >>>
>> >>> This is similar to a previous Nova scheduling issue. That was solved by
>> >>> having the scheduler claim a resource in Placement for the selected
>> >>> node. I don't see an analog for Cyborg, since it would not know which
>> >>> node is selected.
>> >>>
>> >>> Thanks in advance for suggestions and solutions.
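>> >>>
>> >>> (For illustration, that table could be as simple as this sketch --
>> >>> the names are made up:
>> >>>
>> >>>     CREATE TABLE function_inventory (
>> >>>         host VARCHAR(255) NOT NULL,
>> >>>         function_type VARCHAR(64) NOT NULL,
>> >>>         free_units INT NOT NULL,
>> >>>         PRIMARY KEY (host, function_type)
>> >>>     );
>> >>>
>> >>> with the filter keeping hosts WHERE free_units > 0 for the requested
>> >>> function_type. The race is two requests reading the same row before
>> >>> either decrement lands.)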
>> >>>
>> >>> Regards,
>> >>> Sundar

From tony at bakeyournoodle.com Sun Apr 1 03:54:25 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Sun, 1 Apr 2018 13:54:25 +1000
Subject: [openstack-dev] [midonet][Openstack-stable-maint] Stable check of openstack/networking-midonet failed
In-Reply-To: References: Message-ID: <20180401035425.GC4343@thor.bakeyournoodle.com>

On Sat, Mar 31, 2018 at 06:17:07AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote:
> Build failed.
>
> - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/ocata/build-openstack-sphinx-docs/2f351df/html/ : SUCCESS in 6m 25s
> - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/ocata/openstack-tox-py27/c558974/ : FAILURE in 14m 22s

I'm not sure what's going on here, but the networking-midonet periodic-stable jobs have been failing like this for close to a week. Can someone from that team take a look?

Yours Tony.
From tony at bakeyournoodle.com Sun Apr 1 03:55:08 2018
From: tony at bakeyournoodle.com (Tony Breeds)
Date: Sun, 1 Apr 2018 13:55:08 +1000
Subject: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed
In-Reply-To: References: Message-ID: <20180401035507.GD4343@thor.bakeyournoodle.com>

On Sat, Mar 31, 2018 at 06:17:41AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote:
> Build failed.
>
> - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/build-openstack-sphinx-docs/b20c665/html/ : SUCCESS in 5m 48s
> - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/75db3fe/ : FAILURE in 11m 49s

I'm not sure what's going on here, but as with stable/ocata the networking-midonet periodic-stable jobs have been failing like this for close to a week. Can someone from that team take a look?

Yours Tony.

From prometheanfire at gentoo.org Sun Apr 1 03:57:41 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Sat, 31 Mar 2018 22:57:41 -0500
Subject: [openstack-dev] [requirements] Our job is done, time to close up shop.
Message-ID: <20180401035741.r7vlk6556habbris@gentoo.org>

The requirements project had a good run, but things seem to be winding down. We only break openstack a couple times a cycle now, and that's just not acceptable. The graph must go up and to the right. So, it's time for the requirements project to close up shop. So long and thanks for all the fish.

--
Matthew Thode (prometheanfire)

From jaypipes at gmail.com Sun Apr 1 14:18:09 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Sun, 1 Apr 2018 10:18:09 -0400
Subject: Re: [openstack-dev] [requirements] Our job is done, time to close up shop.
In-Reply-To: <20180401035741.r7vlk6556habbris@gentoo.org>
References: <20180401035741.r7vlk6556habbris@gentoo.org>
Message-ID: <1d74ea69-9dc9-3be5-bd37-55cd13dc6c2b@gmail.com>

On 03/31/2018 11:57 PM, Matthew Thode wrote:
> The requirements project had a good run, but things seem to be winding
> down. We only break openstack a couple times a cycle now, and that's
> just not acceptable. The graph must go up and to the right. So, it's
> time for the requirements project to close up shop. So long and thanks
> for all the fish.

Completely agreed. The requirements project should really aim to break things *weekly* (or daily, but not Sundays or 1sts of April), otherwise I see no real value in the project at all.

All the best,
-jay

From dmsimard at redhat.com Sun Apr 1 22:27:23 2018
From: dmsimard at redhat.com (David Moreau Simard)
Date: Sun, 1 Apr 2018 18:27:23 -0400
Subject: Re: [openstack-dev] [requirements] Our job is done, time to close up shop.
In-Reply-To: <20180401035741.r7vlk6556habbris@gentoo.org>
References: <20180401035741.r7vlk6556habbris@gentoo.org>
Message-ID:

The requirements project isn't even required.

Oh, the irony.
David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]

On Sat, Mar 31, 2018 at 11:57 PM, Matthew Thode wrote:
> The requirements project had a good run, but things seem to be winding
> down. [...]

From delightwook at ssu.ac.kr Mon Apr 2 01:51:48 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Mon, 2 Apr 2018 10:51:48 +0900
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com>
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com>
Message-ID: <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr>

Hello Ifat,

Thank you for the reply. :) This is only my opinion, so if I'm wrong we can change the implementation part at any time (even if it differs from my initial intention).

The same security issues arise as you say. But today, Vitrage does not call external APIs: the Vitrage-dashboard uses the Vitrageclient library for its Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow:

Vitrage-dashboard requests checks using the Vitrageclient library.
-> Vitrage receives the API call.
-> api/controllers/v1/checks.py is called.
-> The checks service is called.

Following this flow, passing through the Vitrage API serves only data passing and function calls; I think Vitrage does not need to call external APIs.
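To make this concrete, here is a rough sketch of what such a controller could look like, assuming Pecan and the RPC pattern used by the existing Vitrage API controllers (the file, method and parameter names are all hypothetical):

    # vitrage/api/controllers/v1/checks.py (hypothetical sketch)
    import pecan
    from pecan import rest


    class ChecksController(rest.RestController):

        @pecan.expose('json')
        def post(self, **kwargs):
            # Forward the request to the checks service over RPC and
            # return its result to vitrage-dashboard unchanged.
            return pecan.request.client.call(
                pecan.request.context,
                'run_check',
                target_id=kwargs.get('target_id'),
                check_type=kwargs.get('check_type'))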
If you do not want to go through the Vitrage API, we need to add a function for the check action to the Vitrage-dashboard itself, and write code to call that function. If I am wrong about any of this, please tell me anytime. :)

Thank you.
Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Sunday, April 1, 2018 3:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I understand your concern about the security issue. But how would that be different if the API call is passed through the Vitrage API? The authentication from vitrage-dashboard to the Vitrage API will work, but then Vitrage will call an external API and you'll have the same security issue, right? I don't understand the difference between calling the external component from vitrage-dashboard and calling it from Vitrage.

Best regards,
Ifat.

From: MinWookKim
Date: Thursday, 29 March 2018 at 14:51
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :) I wrote my opinion on your comments below.

Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly?

Authentication issues: I think the check component is a separate, API-based component. If the check component has its own API address, separate from Vitrage, in order to receive requests from the Vitrage-dashboard, then the Vitrage-dashboard needs to know that address. As a result, its request/response path would be open to anyone, outside the authentication that OpenStack provides between the Vitrage-dashboard and Vitrage; it would be reachable not only through the Vitrage-dashboard but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system just for the check component.) The problem is that anyone who knows the check component's API address could make hosts and VMs execute system commands.

what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results?

If the window is closed before the check finishes, the user cannot see the result. To solve this, I think temporarily saving a list of recent results is a solution: by storing a short temporary list (for example, up to 10 entries), the user can see previous results, and the user could also empty the list. What do you think?

Thank you.
Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Date: Thursday, 29 March 2018 at 10:25
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team,

I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows:

Vitrage-dashboard (action-list panel) -> Vitrage-api -> check component

Last time I mentioned an api-handler, but it would be better to call the check component directly from the Vitrage API without one. I hope this helps you understand.

Thank you.
Best Regards,
Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :) This proposal is one that we expect to be useful from a user's perspective; from an administrator's point of view, we need an implementation that minimizes the overhead the proposal introduces. The answers to your questions are:

• I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks.

Yes, that's right.
We do not need to save the results in Vitrage, because we only need to view them. It would also be possible to implement the function directly in the Vitrage-dashboard, separately from Vitrage, like the add-action-list panel, but that does not seem to be enough to implement all of the functionality. If you do not mind, we would have the following flow:

1. The user requests the check action from the vitrage-dashboard (add-action-list panel).
2. The check component is called through Vitrage's API handler.
3. The check component executes the command and returns the result.

This is only my opinion, so please tell us if any part is unnecessary. :)

• Do you expect the user to click an entity, select an action to run (e.g. 'P2P check'), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result?

My idea was to select the task, wait for the results in the open panel, and then see them instantly in that panel. If we switch to another menu before the check is complete, we will not be able to see the results. Parallel checking is a real concern (it can cause excessive overhead). For earlier results, it may be okay to save them temporarily until we close the panel; the user can then see previous results through that temporarily saved list.

• Any thoughts of what component will implement those checks? Or maybe these will be just scripts?

I think I would implement a separate component to handle these requests.

• It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2.

That would be really good. It would be very useful if the entity graph raised an alarm based on the check result; I think we will be able to talk about that part in detail later.

These answers are my opinions and assumptions. If you think my implementation is wrong, or inefficient, please do not hesitate to tell me.

Thanks.
Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user's perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I'm trying to think what could be the best way to execute such actions from Vitrage.

[...]
Best Regards,
Ifat

From: MinWookKim
Date: Tuesday, 27 March 2018 at 14:45
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team,

I am currently working on the Vitrage-dashboard proposal 'Add action list panel for entity click action' (https://review.openstack.org/#/c/531141/), and I would like to make a new proposal based on that action list panel.

The new proposal is to provide multidimensional analysis capabilities across the entities that make up the infrastructure in the entity graph. Vitrage's entity graph allows us to efficiently monitor alarms from various monitoring tools. Currently, when there is a problem with a VM or host, or when we want to check their status, we need to access the console of each VM and host individually. This causes unnecessary work as the number of VMs and hosts increases.

My suggestion is that when we have a large number of VMs and hosts, we should not need to connect directly to each console to enter system commands. Instead, through this proposal, we can send a system command to the VMs and hosts in the cloud and simply check the results. I have written some use cases to explain the function efficiently.

From an implementation perspective, the goals of the proposal are:

1. To execute commands without installing any agent/client that could put load on the VMs and hosts.
2. To provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts.
3. To make the results easy to grasp at a glance.
4. To implement a component that can support many additional scenarios in plug-in form.

I would be happy if you could comment on the proposal or ask questions.

Thanks.
Best Regards,
Minwook.

From liu.xuefeng1 at zte.com.cn Mon Apr 2 03:53:38 2018
From: liu.xuefeng1 at zte.com.cn (liu.xuefeng1 at zte.com.cn)
Date: Mon, 2 Apr 2018 11:53:38 +0800 (CST)
Subject: [openstack-dev] [senlin] Senlin meeting on April 3rd 2018 is cancelled
Message-ID: <201804021153380356907@zte.com.cn>

Hi,

The Senlin project meeting on April 3rd is cancelled. If you need to talk, please go to the #senlin channel and ask the team. We will meet again on Tuesday, April 10.

From dmellado at redhat.com Mon Apr 2 06:48:31 2018
From: dmellado at redhat.com (Daniel Mellado)
Date: Mon, 2 Apr 2018 08:48:31 +0200
Subject: [openstack-dev] [kuryr] Cancelling this week's kuryr meeting
Message-ID:

Hi everyone!

As a lot of people are out of office due to Easter, today's meeting is cancelled. If anything urgent comes up, feel free to use #openstack-kuryr.

Happy Easter!
From gmann at ghanshyammann.com Mon Apr 2 07:42:36 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 2 Apr 2018 16:42:36 +0900
Subject: Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT
In-Reply-To: <87woxwymr1.fsf@meyer.lemoncheese.net>
References: <87woxwymr1.fsf@meyer.lemoncheese.net>
Message-ID:

On Thu, Mar 29, 2018 at 5:21 AM, James E. Blair wrote:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior. This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
>
> The change is at https://review.openstack.org/549252
>
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly. Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
>
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi. It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
>
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not. So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead. In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
>
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects". Considering that, the
> LIBS_FROM_GIT variable is redundant. So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
>
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
>
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git. Now devstack verifies that every package in
> LIBS_FROM_GIT is installed. However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed. So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
>
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed. I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
>
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
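>
> To make that concrete, a new-style job under this scheme could be as
> simple as the following sketch (the job name and project are only
> examples):
>
>     - job:
>         name: example-devstack-cinderclient
>         parent: devstack
>         required-projects:
>           - openstack/python-cinderclient
>
> and LIBS_FROM_GIT would then be generated to contain
> python-cinderclient, with no explicit setting needed.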
>
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
>
>   https://review.openstack.org/548331
>
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work. Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.

+1, thanks Jim. The idea looks good to me as long as it still works for non-zuulv3 users. I'll check the patch.

-gmann

> -Jim

From hreddy at adaranetworks.com Mon Apr 2 08:31:04 2018
From: hreddy at adaranetworks.com (Hanumantha Reddy)
Date: Mon, 2 Apr 2018 01:31:04 -0700
Subject: [openstack-dev] [vitrage]
In-Reply-To: References: Message-ID:

From geguileo at redhat.com Mon Apr 2 11:59:59 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Mon, 2 Apr 2018 13:59:59 +0200
Subject: Re: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To: <20180329142813.GA25762@sm-xps>
References: <20180329142813.GA25762@sm-xps>
Message-ID: <20180402115959.3y3j6ytab6ruorrg@localhost>

On 29/03, Sean McGinnis wrote:
> > This is the spec [0] about rebuilding a volume-backed server.
> > The question raised in the spec is about how to handle the root volume.
> > Finally, in the Nova team, we think that the cleanest / best solution to
> > this is to add a volume action API to cinder for re-imaging the volume.
> > Once that is available in a new cinder v3 microversion, nova can use it.
> > The reason I
> > ...
> > So the Nova team wants Cinder to provide the re-image API. But I see a
> > spec about volume revert by snapshot [1], which would suit the rebuild
> > operation well. In short, I have two ideas: one is to change the volume
> > revert by snapshot spec into a re-image spec, so that it can not only
> > revert the volume from a snapshot but also re-image a volume whose
> > image size is greater than 0; the other is to add a separate re-image
> > spec that only covers re-imaging a volume whose image size is greater
> > than 0.
>
> I do not think changing the revert to snapshot implementation is appropriate
> here. There may be some cases where this can get the desired result, but there
> is no guarantee that there is a snapshot of the volume's base image state to
> revert to. It also would not make sense to overload this functionality to
> "revert to snapshot if you can, otherwise do all this other stuff instead."
>
> This would need to be a new API (microversioned) to add a reimage call. I
> wouldn't expect implementation to be too difficult as we already have that
> functionality for new volumes. We would just need to figure out the most
> appropriate way to take an already in-use volume, detach it, rewrite the
> image, then reattach it.

Hi,

The implementation may be more complex than we think, as we have four create-volume-from-image mechanisms to consider:

- When Glance is using Cinder as a backend
- When using Glance image locations to do cloning
- When using the Cinder cache and we do cloning
- The basic case, where we download the image, attach the volume, and copy the data
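(As an aside, the user-facing half of what Sean describes is probably the easy part. A hypothetical sketch of the call -- neither the action name nor a microversion for it exists today:

    POST /v3/{project_id}/volumes/{volume_id}/action
    {"os-reimage": {"image_id": "<new-image-uuid>"}}

The four mechanisms above are where the real complexity lives.)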
The only simple, yet efficient, solution I can see is calling the driver's delete volume method (without soft-deleting it from the DB), clearing the volume's image metadata in the DB, and then running the create-volume-from-image flow with the same volume information but the new image metadata.

I can only see one benefit to implementing this feature in Cinder versus doing it in Nova, and that is that we can preserve the volume's UUID, but I don't think this is even relevant for this use case, so why is it better to implement this in Cinder than in Nova?

Cheers,
Gorka.

> Ideally, from my perspective, Nova would take care of the detach/attach portion
> and Cinder would only need to take care of imaging the volume.
>
> Sean

From jim at jimrollenhagen.com Mon Apr 2 12:26:25 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Mon, 2 Apr 2018 08:26:25 -0400
Subject: Re: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto
In-Reply-To: <20180331232401.hp5j4iommgw7tj3j@gentoo.org>
References: <20180331232401.hp5j4iommgw7tj3j@gentoo.org>
Message-ID:

On Sat, Mar 31, 2018 at 7:24 PM, Matthew Thode wrote:
> Here's the current status. I'd like to ask the projects what's keeping
> them from removing pycrypto in favor of a maintained library.
>
> pyghmi:
> - (merge conflict) https://review.openstack.org/#/c/331828
> - (merge conflict) https://review.openstack.org/#/c/545465
> - (doesn't change the import) https://review.openstack.org/#/c/545182

Looks like py26 support might be a blocker here. While we've brought pyghmi into the ironic project, it's still a project mostly built and maintained by Jarrod, and he has customers outside of OpenStack that depend on it. The ironic team will have to discuss this with Jarrod and find a good path forward.

My initial thought is that we need to move forward on this, so perhaps we can release this change as a major version, and keep a py26 branch that can be released on the previous minor version for the people that need this on 2.6.

Thoughts?

// jim

From m.andre at redhat.com Mon Apr 2 12:59:38 2018
From: m.andre at redhat.com (Martin André)
Date: Mon, 2 Apr 2018 14:59:38 +0200
Subject: Re: [openstack-dev] [kolla][tc][openstack-helm][tripleo] propose retire kolla-kubernetes project
In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org>
Message-ID:

On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) wrote:
>
> On March 31, 2018 at 12:35:31 PM, Jeremy Stanley (fungi at yuggoth.org) wrote:
>
> On 2018-03-31 18:06:07 +0000 (+0000), Steven Dake (stdake) wrote:
>> I appreciate your personal interest in attempting to clarify the
>> Kolla mission statement.
>>
>> The change in the Kolla mission statement you propose is
>> unnecessary.
> [...]
>
> I should probably have been more clear. The Kolla mission statement
> right now says that the Kolla team produces two things: containers
> and deployment tools.
> This may make it challenging for the team to
> avoid tightly coupling their deployment tooling and images, creating
> a stratification of first-class (those created by the Kolla team)
> and second-class (those created by anyone else) support for
> deployment tools using those images.
>
> The problems raised in this thread (tension - tight coupling - second class
> citizens - stratification) were predicted early on - prior to Kolla 1.0.
> That prediction led to the creation of a technical solution - the Kolla API.
> [...]
>
> Is the intent to provide "a container-oriented deployment solution
> and the container images it uses" (kolla-ansible as first-class
> supported deployment engine for these images) or "container images
> for use by arbitrary deployment solutions, along with an example
> deployment solution for use with them" (kolla-ansible on equal
> footing with competing systems that make use of the same images)?
>
> My viewpoint is that all deployment projects are already on an equal
> footing when using Kolla containers.

While I acknowledge Kolla reviewers are doing a very good job of treating all incoming reviews equally, we can't realistically state these projects stand on an equal footing today. At the very least we need to have kolla changes _gating_ on TripleO and OSH jobs before we can say so. Of course, I'm not saying other kolla devs are opposed to adding more CI jobs to kolla; I'm pretty sure they would welcome the changes if someone volunteered for it. But right now, when I'm approving a kolla patch I can only say with confidence that it does not break kolla-ansible. In that sense, kolla_ansible is special.

> I would invite the TripleO team who did integration with the Kolla API to
> provide their thoughts.

The Kolla API is stable and incredibly useful... it's also undocumented. I have a stub for a documentation change that's been collecting dust on my hard drive for months; maybe it's time I brush it up and finally submit it. Today, unless you're a kolla developer yourself, it's difficult to understand how to use the API; not the most user-friendly.

Another thing that comes for free with Kolla: the extend_start.sh scripts are, for the most part, only useful in the context of kolla_ansible. For instance, they hardcode the log dir path to /var/log/kolla and change file groups to 'kolla'. In TripleO, we've chosen to not depend on the extend_start.sh scripts whenever possible, for this exact reason.

The other critical kolla feature we're making extensive use of in TripleO is the ability to customize the images in any imaginable way thanks to the template override mechanism. There would be no containerized deployments via TripleO without it.

Kolla is a great framework for building container images for OpenStack services that any project can consume. We could do a better job of advertising it.
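For those who haven't used it, a template override is a small Jinja2 file layered on top of the stock image templates at build time -- something along these lines (a sketch; the package is just an example, and block names vary per image):

    {% extends parent_template %}

    {% block base_footer %}
    RUN yum -y install some-extra-package
    {% endblock %}

built with:

    kolla-build --template-override my-override.j2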
I guess bringing kolla and kolla-kubernetes under separate governance (even if the team remains mostly the same) is one way to enforce the independence of the kolla-the-images project and recognize that people may be interested in the images but not the deployment tools.

One last thought: would you imagine a kolla PTL who is not heavily invested in kolla_ansible?

Martin

> I haven't kept up with OSH development, but perhaps that team could provide
> their viewpoint as well.
>
> Cheers
> -steve

From ifat.afek at nokia.com Mon Apr 2 13:21:48 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Mon, 2 Apr 2018 13:21:48 +0000
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr>
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr>
Message-ID:

Hi Minwook,

Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks?

If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service.

Some technical details:

- The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But that can be done easily.
- A relevant Zabbix API is history.get [1]
- If you are not using Zabbix, I assume that other monitoring tools have similar capabilities
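For example, fetching the last few values of an item is a single JSON-RPC call (a sketch; the item id and auth token are placeholders):

    {
        "jsonrpc": "2.0",
        "method": "history.get",
        "params": {
            "output": "extend",
            "itemids": "23296",
            "history": 0,
            "sortfield": "clock",
            "sortorder": "DESC",
            "limit": 10
        },
        "auth": "<auth token>",
        "id": 1
    }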
-> api / controllers / v1 / checks.py is called. -> checks service is called. In accordance with the above flow, the purpose of passing through the Vitrage API is data passing and function calls. I think Vitrage does not need to call external APIs. If we do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call that function. If I am wrong, please tell me anytime. :) Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through the Vitrage API? The authentication from vitrage-dashboard to the Vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand the difference between calling the external component from vitrage-dashboard and calling it from Vitrage. Best regards, Ifat.

From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has an API address separate from Vitrage in order to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the API address of the check component. This can result in a request/response channel that is open to anyone, regardless of the authentication OpenStack provides between the Vitrage-dashboard and the check component's request/response procedure. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone learns the API address of the check component, which could allow them to execute system commands on the hosts and VMs. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user cannot see the result. To solve this problem, I think temporarily saving a list of recent results is a solution. By storing a temporary list (for example, up to 10 entries), the user can see previous results, and it would also be possible for the user to empty the list. How is it? Thank you. Best Regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat.

From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component. Last time I mentioned an api-handler, but it would be better to call the check component directly from Vitrage-api without having to use it. I hope this helps you understand. Thank you. Best Regards, Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This is a proposal that we expect to be useful from a user perspective. From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save the results in Vitrage because we just need to check them. It would be possible to implement the function directly in the Vitrage-dashboard, separately from Vitrage, like the add-action-list panel, but that does not seem sufficient to implement all the functions. If you do not mind, we would have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through Vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see them in the panel. If we switch to another menu before the check is complete, we will not be able to see the results. Parallel checking is indeed an issue (it can cause excessive overhead). For earlier results, it may be okay to keep them temporarily in the open panel until we exit the panel; we can see the previous results through the temporarily saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I would implement a separate component to handle the requests. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. That is expected to be really good. It would be very useful if the Entity-Graph generated an alarm based on the check result. I think we will be able to talk about that part in detail later.

My answers are my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. · I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. · Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? · Any thoughts of what component will implement those checks? Or maybe these will be just scripts? · It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat

From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal ‘Add action list panel for entity click action’ (https://review.openstack.org/#/c/531141/). I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities for the entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with a VM or host, or when we want to check its status, we need to access the console of each VM and host individually. This causes unnecessary work as the number of VMs and hosts increases. My new suggestion is that if we have a large number of VMs and hosts, we should not need to connect directly to each VM or host console to enter system commands. Instead, through this proposal, we can send a system command to the VMs and hosts in the cloud and simply check the results. I have written some use-cases for an efficient explanation of the function. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any agent/client that could put load on the VM or host. 2. To provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts. 3. To make it possible to grasp the results at a glance. 4. To implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook.

-------------- next part -------------- An HTML attachment was scrubbed... URL:
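[For concreteness, a minimal sketch of the pass-through flow discussed above (vitrage-dashboard -> vitrage-api -> check component). The endpoint URL, function names and payload fields are invented for illustration only; this is not actual Vitrage code.]

```python
# Hypothetical handler logic inside api/controllers/v1/checks.py: the request
# has already passed Keystone authentication by the time it reaches
# vitrage-api, so the check component can stay on a private, internal-only
# endpoint and needs no authentication system of its own.
import requests

CHECK_SERVICE_URL = 'http://127.0.0.1:8999/v1/checks'  # assumed internal-only address

def run_check(ctx, check_type, target_id):
    """Relay a check request (e.g. 'p2p') to the external check component."""
    payload = {
        'type': check_type,        # which check plug-in to run
        'target': target_id,       # the VM or host to check
        'project': ctx['tenant'],  # propagated from the Keystone context
    }
    resp = requests.post(CHECK_SERVICE_URL, json=payload, timeout=60)
    resp.raise_for_status()
    # The JSON result is handed straight back to the action-list panel for
    # display; nothing is stored in Vitrage itself.
    return resp.json()
```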
From alee at redhat.com Mon Apr 2 13:49:48 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 02 Apr 2018 09:49:48 -0400 Subject: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto In-Reply-To: <20180331232401.hp5j4iommgw7tj3j@gentoo.org> References: <20180331232401.hp5j4iommgw7tj3j@gentoo.org> Message-ID: <1522676988.9232.41.camel@redhat.com>

On Sat, 2018-03-31 at 18:24 -0500, Matthew Thode wrote: > Here's the current status. I'd like to ask the projects what's > keeping > them from removing pycrypto in favor of a maintained library. > > Open reviews > barbican: > - (merge conflict) https://review.openstack.org/#/c/458196 > - (merge conflict) https://review.openstack.org/#/c/544873

There is still some pycrypto in the Dogtag plugin, which needs to be switched out to cryptography. I'm aware of what needs to be done and plan to get to it in this release.

> nova-powervm: no open reviews > - in test-requirements, but not actually used? > - made https://review.openstack.org/558091 for it > pyghmi: > - (merge conflict) https://review.openstack.org/#/c/331828 > - (merge conflict) https://review.openstack.org/#/c/545465 > - (doesn't change the import) https://review.openstack.org/#/c/545182 > solum: no open reviews > - looks like only a couple of functions need changing > trove: no open reviews > - mostly uses the random feature > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From alee at redhat.com Mon Apr 2 13:54:51 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 02 Apr 2018 09:54:51 -0400 Subject: [openstack-dev] [barbican] [Fwd: Barbican is Eligible to Migrate!] References: Message-ID: <1522677291.9232.46.camel@redhat.com> Hey Barbicaneers, Kendall has provided us a test migration to storyboard, and Barbican has apparently migrated smoothly. You can see the test instance in his email (forwarded below). The correct URL is actually https://storyboard-dev.openstack.org/#!/project/286 Any objections/concerns about doing the migration? Ade

-------------- next part -------------- An embedded message was scrubbed... From: Kendall Nelson Subject: Barbican is Eligible to Migrate! Date: Thu, 29 Mar 2018 17:46:01 +0000 Size: 7547 URL:

From stdake at cisco.com Mon Apr 2 14:38:07 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Mon, 2 Apr 2018 14:38:07 +0000 Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> Message-ID:

On April 2, 2018 at 6:00:15 AM, Martin André (m.andre at redhat.com) wrote: On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) wrote: > My viewpoint is as all deployments projects are already on an equal footing > when using Kolla containers. While I acknowledge Kolla reviewers are doing a very good job at treating all incoming reviews equally, we can't realistically state these projects stand on an equal footing today. At the very least we need to have kolla changes _gating_ on TripleO and OSH jobs before we can say so.
Of course, I'm not saying other kolla devs are opposed to adding more CI jobs to kolla, I'm pretty sure they would welcome the changes if someone volunteers for it, but right now when I'm approving a kolla patch I can only say with confidence that it does not break kolla-ansible. In that sense, kolla_ansible is special.

Martin, Personally I think all OpenStack projects that have a dependency or inverse dependency should cross-gate. For example, Nova should gate on kolla-ansible, and at one point I think they agreed to this, if we submitted gate work to do so. We never did that. Nobody from TripleO or OSH has submitted gates for Kolla. Submit them and they will follow the standard mechanism used in OpenStack experimental->non-voting->voting (if people are on-call to resolve problems). I don't think gating is relevant to equal footing. TripleO for the moment has chosen to gate on their own image builds, which is fine. If the gating should be enhanced, write the gates :) Here is a simple definition from the internet: "with the same rights and conditions as someone you are competing with" Does that mean if you want to split the kolla repo into 40+ repos for each separate project, the core team will do that? No. Does that mean if there is a reasonable addition to the API the patch would merge? Yes. That's right, deployment tools compete, but they also cooperate and collaborate. The containers (at least from my perspective) are an area where Kolla has chosen to collaborate. FWIW I also think we have chosen to collaborate a bit in areas we compete (the deployment tooling itself). It's a very complex topic. Splitting the governance and PTLs doesn't change the makeup of the core review team who ultimately makes the decision about what is reasonable.

> I would invite the TripleO team who did integration with the Kolla API to > provide their thoughts. The Kolla API is stable and incredibly useful... it's also undocumented. I have a stub for a documentation change that's been collecting dust on my hard drive for months, maybe it's time I brush it

Most of Kolla unfortunately is undocumented. The API is simple and straightforward enough that TripleO, OSH, and several proprietary vendors (the ones Jeffrey mentioned) have managed to implement deployment tooling that consumes the API. Documentation for any part of Kolla would be highly valued - IMO it is the Kolla project's biggest weakness.

up and finally submit it. Today unless you're a kolla developer yourself, it's difficult to understand how to use the API, not the most user friendly. Another thing that comes for free with Kolla: the extend_start.sh scripts are for the most part only useful in the context of kolla_ansible. For instance, hardcoding the path for log dirs to /var/log/kolla and changing groups to 'kolla'. In TripleO, we've chosen to not depend on the extend_start.sh scripts whenever possible for this exact reason.

I don't disagree. I was never fond of extend_start, and thought any special operations it provided belong in the API itself. This is why there are mkdir operations and chmod/chown -R operations in the API. The JSON blob handed to the API during runtime is where the API begins and ends. The implementation (what set_cfg.py does with start.sh and extend_start.sh) is not part of the API but part of the API implementation. I don't think I said anywhere the API is perfectly implemented. I'm not sure I've ever seen this mythical perfection thing in an API anyway :) Patches are welcome to improve the API to make it more general, as long as they maintain backward compatibility.

The other critical kolla feature we're making extensive use of in TripleO is the ability to customize the image in any imaginable way thanks to the template override mechanism. There would be no containerized deployments via TripleO without it.

We knew people would find creative ways to use the plugin templating technology, and help drive adoption of Kolla as a standard...

Kolla is a great framework for building container images for OpenStack services any project can consume. We could do a better job at advertising it. I guess bringing kolla and kolla-kubernetes under separate governance (even if the team remains mostly the same) is one way to enforce the independence of kolla-the-images project and recognize people may be interested in the images but not the deployment tools. One last thought. Would you imagine a kolla PTL who is not heavily invested in kolla_ansible?

Do you mean to imply a conflict of interest? I guess I don't understand the statement. Would you clarify please?

Martin > I haven't kept up with OSH development, but perhaps that team could provide > their viewpoint as well. > > > Cheers > > -steve > > > -- > Jeremy Stanley > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:

From prometheanfire at gentoo.org Mon Apr 2 15:06:35 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Mon, 2 Apr 2018 10:06:35 -0500 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> Message-ID: <20180402150635.5d4jbbnzry2biowu@gentoo.org>

On 18-03-31 15:00:27, Jeremy Stanley wrote: > According to a notice[1] posted to the pypa-announce and > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 > is expected to be released in two weeks (over the April 14/15 > weekend). We know it's at least going to start breaking[2] DevStack > and we need to come up with a plan for addressing that, but we don't > know how much more widespread the problem might end up being so > encourage everyone to try it out now where they can. > I'd like to suggest locking down pip/setuptools/wheel like openstack ansible is doing in https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt We could maintain it as a separate constraints file (or infra could maintain it, doesn't matter). The file would only be used for the initial get-pip install.
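[For illustration, such a pin file might look roughly like the OSA file linked above; the exact versions are placeholders, not recommendations.]

```
# Hypothetical global-requirement-pins.txt, modelled on the OSA example;
# versions here are illustrative only.
pip==9.0.3
setuptools==39.0.1
wheel==0.30.0

# Possible bootstrap usage, assuming get-pip.py keeps forwarding its extra
# arguments through to 'pip install':
#   python get-pip.py -c global-requirement-pins.txt pip setuptools wheel
```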
-- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Mon Apr 2 15:24:06 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 2 Apr 2018 09:24:06 -0600 Subject: [openstack-dev] [tripleo] Blueprints for Rocky In-Reply-To: References: Message-ID: On Tue, Mar 13, 2018 at 7:58 AM, Alex Schultz wrote: > Hey everyone, > > So we currently have 63 blueprints for currently targeted for > Rocky[0]. Please make sure that any blueprints you are interested in > delivering have an assignee set and have been approved. I would like > to have the ones we plan on delivering for Rocky to be updated by > April 3, 2018. Any blueprints that have not been updated will be > moved out to the next cycle after this date. > Reminder this is tomorrow. I'll be going through the blueprints and moving them out this week. > Thanks, > -Alex > > [0] https://blueprints.launchpad.net/tripleo/rocky From cboylan at sapwetik.org Mon Apr 2 16:13:57 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 02 Apr 2018 09:13:57 -0700 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <20180402150635.5d4jbbnzry2biowu@gentoo.org> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> Message-ID: <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote: > On 18-03-31 15:00:27, Jeremy Stanley wrote: > > According to a notice[1] posted to the pypa-announce and > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 > > is expected to be released in two weeks (over the April 14/15 > > weekend). We know it's at least going to start breaking[2] DevStack > > and we need to come up with a plan for addressing that, but we don't > > know how much more widespread the problem might end up being so > > encourage everyone to try it out now where they can. > > > > I'd like to suggest locking down pip/setuptools/wheel like openstack > ansible is doing in > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt > > We could maintain it as a separate constraints file (or infra could > maintian it, doesn't mater). The file would only be used for the > initial get-pip install. In the past we've done our best to avoid pinning these tools because 1) we've told people they should use latest for openstack to work and 2) it is really difficult to actually control what versions of these tools end up on your systems if not latest. I would strongly push towards addressing the distutils package deletion problem that we've run into with pip10 instead. One of the approaches thrown out that pabelanger is working on is to use a common virtualenv for devstack and avoid the system package conflict entirely. 
Clark From m.andre at redhat.com Mon Apr 2 17:12:29 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Mon, 2 Apr 2018 19:12:29 +0200 Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> Message-ID: On Mon, Apr 2, 2018 at 4:38 PM, Steven Dake (stdake) wrote: > > > > On April 2, 2018 at 6:00:15 AM, Martin André (m.andre at redhat.com) wrote: > > On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) > wrote: >> My viewpoint is as all deployments projects are already on an equal >> footing >> when using Kolla containers. > > While I acknowledge Kolla reviewers are doing a very good job at > treating all incoming reviews equally, we can't realistically state > these projects stand on an equal footing today. > > > At the very least we need to have kolla changes _gating_ on TripleO > and OSH jobs before we can say so. Of course, I'm not saying other > kolla devs are opposed to adding more CI jobs to kolla, I'm pretty > sure they would welcome the changes if someone volunteers for it, but > right now when I'm approving a kolla patches I can only say with > confidence that it does not break kolla-ansible. In that sense, > kolla_ansible is special. > > Martin, > > Personally I think all of OpenStack projects that have a dependency or > inverse dependency should cross-gate. For example, Nova should gate on > kolla-ansible, and at one point I think they agreed to this, if we submitted > gate work to do so. We never did that. > > Nobody from TripleO or OSH has submitted gates for Kolla. Submit them and > they will follow the standard mechanism used in OpenStack > experimental->non-voting->voting (if people are on-call to resolve > problems). I don't think gating is relevant to equal footing. TripleO for > the moment has chosen to gate on their own image builds, which is fine. If > the gating should be enhanced, write the gates :) > > Here is a simple definition from the internet: > > "with the same rights and conditions as someone you are competing with" > > Does that mean if you want to split the kolla repo into 40+ repos for each > separate project, the core team will do that? No. Does that mean if there > is a reasonable addition to the API the patch would merge? Yes. > > Thats right, deployment tools compete, but they also cooperate and > collaborate. The containers (atleast from my perspective) are an area where > Kolla has chosen to collaborate. FWIW I also think we have chosen to > collobrate a bit in areas we compete (the deployment tooling itself). Its a > very complex topic. Splitting the governance and PTLs doesn't change the > makeup of the core review team who ultimately makes the decision about what > is reasonable. Collaboration is good, there is no question about it. I suppose the question we need to answer is "would splitting kolla and kolla-ansible further benefit kolla and the projects that consume it?". I believe if you look at it from this angle maybe you'll find areas that are neglected because they are lower priority for kolla-ansible developers. >> I would invite the TripleO team who did integration with the Kolla API to >> provide their thoughts. > > The Kolla API is stable and incredibly useful... it's also > undocumented. 
I have a stub for a documentation change that's been > collecting dust on my hard drive for month, maybe it's time I brush it > > Most of Kolla unfortunately is undocumented. The API is simple and > straightforward enough that TripleO, OSH, and several proprietary vendors > (the ones Jeffrey mentioned) have managed to implement deployment tooling > that consume the API. Documentation for any part of Kolla would be highly > valued - IMO it is the Kolla project's biggest weakness. > > > up and finally submit it. Today unless you're a kolla developer > yourself, it's difficult to understand how to use the API, not the > most user friendly. > > Another thing that comes for free with Kolla, the extend_start.sh > scripts are for the most part only useful in the context of > kolla_ansible. For instance, hardcoding path for log dirs to > /var/log/kolla and changing groups to 'kolla'. > In TripleO, we've chosen to not depend on the extend_start.sh scripts > whenever possible for this exact reason. > > I don't disagree. I was never fond of extend_start, and thought any special > operations it provided belong in the API itself. This is why there are > mkdir operations and chmod/chown -R operations in the API. The JSON blob > handed to the API during runtime is where the API begins and ends. The > implementation (what set_cfg.py does with start.sh and extend_start.sh) are > not part of the API but part of the API implementation. One could argue that the environment variables we pass to the containers to control what extend_start.sh does are also part of the API. That's not my point. There is a lot of cruft in these scripts that remain from the days where kolla-ansible was the only consumer of kolla images. > I don't think I said anywhere the API is perfectly implemented. I'm not > sure I've ever seen this mythical perfection thing in an API anyway :) > > Patches are welcome to improve the API to make it more general, as long as > they maintain backward compatibility. > > > > The other critical kolla feature we're making extensive use of in > TripleO is the ability to customize the image in any imaginable way > thanks to the template override mechanism. There would be no > containerized deployments via TripleO without it. > > > We knew people would find creative ways to use the plugin templating > technology, and help drive adoption of Kolla as a standard... > > Kolla is a great framework for building container images for OpenStack > services any project can consume. We could do a better job at > advertising it. I guess bringing kolla and kolla-kubernetes under > separate governance (even it the team remains mostly the same) is one > way to enforce the independence of kolla-the-images project and > recognize people may be interested in the images but not the > deployment tools. > > One last though. Would you imagine a kolla PTL who is not heavily > invested in kolla_ansible? > > > Do you mean to imply a conflict of interest? I guess I don't understand the > statement. Would you clarify please? All I'm saying is that we can't truly claim we've fully decoupled Kolla and Kolla-ansible until we're ready to accept someone who is not a dedicated contributor to kolla-ansible as kolla PTL. Until then, some might rightfully say kolla-ansible is driving the kolla project. It's OK, maybe as the kolla community that's what we want, but we can't legitimately say all consumers are on an equal footing. 
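[For readers who have not seen it, the JSON blob referred to earlier in this thread is, roughly, the per-container config.json that kolla's set_configs.py consumes at startup; the command, paths and ownership values below are illustrative, not taken from any particular deployment.]

```json
{
    "command": "/usr/sbin/httpd -DFOREGROUND",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/keystone.conf",
            "dest": "/etc/keystone/keystone.conf",
            "owner": "keystone",
            "perm": "0600"
        }
    ],
    "permissions": [
        {
            "path": "/var/log/kolla/keystone",
            "owner": "keystone:keystone",
            "recurse": true
        }
    ]
}
```

[The "permissions" entry with "recurse" is the chown/chmod -R style operation mentioned above; deployment tools other than kolla-ansible hand a blob like this to the container at runtime.]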
Martin From aschultz at redhat.com Mon Apr 2 17:18:06 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 2 Apr 2018 11:18:06 -0600 Subject: [openstack-dev] [TripleO] Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO In-Reply-To: References: Message-ID: On Thu, Mar 29, 2018 at 11:32 AM, David Moreau Simard wrote: > Nice! > > I don't have a strong opinion > about this but what I might recommend would be to chat with the > openshift-ansible [1] and the kolla-ansible [2] folks. > > I'm happy to do the introductions if necessary ! > > Their models, requirements or context might be different than ours but at > the end of the day, it's a set of Ansible roles and playbooks to install > something. > It would be a good idea just to informally chat about the reasons why their > things are set up the way they are, what are the pros, cons.. or their > challenges. > > I'm not saying we should structure our things like theirs. > What I'm trying to say is that they've surely learned a lot over the years > these projects have existed and it's surely worthwhile to chat with them so > we don't repeat some of the same mistakes. > > Generally just draw from their experience, learn from their conclusions and > take that into account before committing to any particular model we'd like > to have in TripleO ? Yea it'd probably be a good idea to check with them on some of their structure choices. I think we do not necessarily want to use a similar structure to those based on our experiences with oooq, openstack-puppet-modules, etc. I think this first iteration to get some of the upgrade tasks out of the various */services/*.yaml will help us build out a decent structure that might be reusable. I did notice that kolla-ansible has a main.yaml[0] that might be interesting for us to consider when we start using the ansible roles directly rather than importing the tasks themselves. What I'd really like for us to work on is better cookiecutter/testing structure for ansible roles themselves so we stop just merging ansible bits that are only tested via full deployment tests (which we may not even run). As much as I hate rspec puppet tests, it is really nice for testing the logic without having to do an actual deployment. Thanks, -Alex [0] https://git.openstack.org/cgit/openstack/kolla-ansible/tree/ansible/roles/keystone/tasks/main.yml > > [1]: https://github.com/openshift/openshift-ansible > [2]: https://github.com/openstack/kolla-ansible > > David Moreau Simard > Senior Software Engineer | Openstack RDO > > dmsimard = [irc, github, twitter] > > On Thu, Mar 29, 2018, 12:34 PM David Peacock, wrote: >> >> Hi everyone, >> >> During the recent PTG in Dublin, it was decided that we'd prototype a way >> forward with Ansible tasks in TripleO that adhere to Ansible best practises, >> creating dedicated roles with unique git repositories and RPM packaging per >> role. >> >> With a view to moving in this direction, a couple of us on the TripleO >> team have begun developing tooling to facilitate this. Initially we've >> worked on a tool [0] to extract Ansible tasks lists from >> tripleo-heat-templates and move them into new formally structured Ansible >> roles. 
>> >> An example with the existing keystone docker service [1]: >> >> The upgrade_tasks block will become: >> >> ``` >> upgrade_tasks: >> - import_role: >> name: tripleo-role-keystone >> tasks_from: upgrade.yaml >> ``` >> >> The fast_forward_upgrade_tasks block will become: >> >> ``` >> fast_forward_upgrade_tasks: >> - import_role: >> name: tripleo-role-keystone >> tasks_from: fast_forward_upgrade.yaml >> ``` >> >> And this role [2] will be structured: >> >> ``` >> tripleo-role-keystone/ >> └── tasks >> ├── fast_forward_upgrade.yaml >> ├── main.yaml >> └── upgrade.yaml >> ``` >> >> We'd love to hear any feedback from the community as we move towards this. >> >> Thank you, >> David Peacock >> >> [0] >> https://github.com/davidjpeacock/openstack-role-extract/blob/master/role-extractor-creator.py >> [1] >> https://github.com/openstack/tripleo-heat-templates/blob/master/docker/services/keystone.yaml >> [2] https://github.com/davidjpeacock/tripleo-role-keystone >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kennelson11 at gmail.com Mon Apr 2 17:19:31 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 02 Apr 2018 17:19:31 +0000 Subject: [openstack-dev] [barbican] [Fwd: Barbican is Eligible to Migrate!] In-Reply-To: <1522677291.9232.46.camel@redhat.com> References: <1522677291.9232.46.camel@redhat.com> Message-ID: https://storyboard-dev.openstack.org/#!/project_group/27 shows the project group that has all the barbican repos represented for tracking issues and new features against. https://storyboard-dev.openstack.org/#!/project/286 shows items specifically related to the main barbican repo- its where the majority of things exist right now after the migration. Let me know if you have any questions! -Kendall (diablo_rojo) On Mon, Apr 2, 2018 at 6:54 AM Ade Lee wrote: > Hey Barbicaneers, > > Kendall has provided us a test migration to storyboard, and Barbican > has apparently migrated smoothly. You can see the test instance in his > email (forwarded below). The correct URL is actually https://storyboar > d-dev.openstack.org/#!/project/286 > > Any objections/ concerns about doing the migration? > > Ade > > > ---------- Forwarded message ---------- > From: Kendall Nelson > To: Ade Lee > Cc: > Bcc: > Date: Thu, 29 Mar 2018 17:46:01 +0000 > Subject: Barbican is Eligible to Migrate! > Hello! > > Long story short, hopefully you are aware that projects are in the process > of migrating to StoryBoard. I've been working on another round of test > migrations this week and Barbican test migrated without issue! > > If you would be willing to start the conversation with your team, we would > love to migrate the project at your earliest convenience. The general > migration process is outlined here[1]. > > This blog has several posts related to why we are migrating, how Launchpad > maps to StoryBoard, etc. [2] > > If you have any questions please let me know! Or feel free to ask in the > #storyboard channel. 
> > If you are interested in seeing what the result of Barbican's test > migration looks like you can see the result here[3]. I have a project group > (named barbican) set up with the repos barbican has (based off those listed > in projects.yaml in governance) and then ran the import from your Launchpad > projects to migrate the bugs over. I only found three Launchpad projects > (the python-barbicanclient, barbican, and castellan-ui which had nothing in > it) associated with Barbican so if I missed any, please let me know and I > can migrate them as well. > > Hope to hear from you soon! > > -Kendall (diablo_rojo) > > [1] https://docs.openstack.org/infra/storyboard/migration.html > [2] https://storyboard-blog.io/ > [3] https://storyboard-dev.openstack.org/#!/project_group/27 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jalnliu at lbl.gov Mon Apr 2 18:46:01 2018 From: jalnliu at lbl.gov (Jialin Liu) Date: Mon, 2 Apr 2018 11:46:01 -0700 Subject: [openstack-dev] container name in swift Message-ID: Hi, Can a container name in openstack swift contains / ? e.g., abc/111/mycontainer Best, Jialin -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at not.mn Mon Apr 2 18:56:56 2018 From: me at not.mn (John Dickinson) Date: Mon, 02 Apr 2018 11:56:56 -0700 Subject: [openstack-dev] container name in swift In-Reply-To: References: Message-ID: <6E3BBB3C-7FFB-4BF7-8F1B-DF0928919569@not.mn> no On 2 Apr 2018, at 11:46, Jialin Liu wrote: > Hi, > Can a container name in openstack swift contains / ? > e.g., > abc/111/mycontainer > > > Best, > Jialin > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 801 bytes Desc: OpenPGP digital signature URL: From jalnliu at lbl.gov Mon Apr 2 20:00:07 2018 From: jalnliu at lbl.gov (Jialin Liu) Date: Mon, 2 Apr 2018 13:00:07 -0700 Subject: [openstack-dev] container name in swift In-Reply-To: <6E3BBB3C-7FFB-4BF7-8F1B-DF0928919569@not.mn> References: <6E3BBB3C-7FFB-4BF7-8F1B-DF0928919569@not.mn> Message-ID: Hi John, What is allowed in container name, but not in object name? I need a way to distinguish their name.. Best, Jialin On Mon, Apr 2, 2018 at 11:56 AM, John Dickinson wrote: > no > > On 2 Apr 2018, at 11:46, Jialin Liu wrote: > > > Hi, > > Can a container name in openstack swift contains / ? 
> > e.g., > > abc/111/mycontainer > > > > > > Best, > > Jialin > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL:

From iurygregory at gmail.com Mon Apr 2 20:17:44 2018 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 2 Apr 2018 17:17:44 -0300 Subject: [openstack-dev] container name in swift In-Reply-To: References: <6E3BBB3C-7FFB-4BF7-8F1B-DF0928919569@not.mn> Message-ID:

According to the Swift doc [1]:
- Length of container names: 256 bytes. Cannot contain the / character.
- Length of object names: 1024 bytes. By default, there are no character restrictions.

[1] https://docs.openstack.org/swift/latest/api/object_api_v1_overview.html

2018-04-02 17:00 GMT-03:00 Jialin Liu : > Hi John, > What is allowed in container name, but not in object name? > I need a way to distinguish their name.. > > Best, > Jialin > > On Mon, Apr 2, 2018 at 11:56 AM, John Dickinson wrote: > >> no >> >> On 2 Apr 2018, at 11:46, Jialin Liu wrote: >> >> > Hi, >> > Can a container name in openstack swift contains / ? >> > e.g., >> > abc/111/mycontainer >> > >> > >> > Best, >> > Jialin >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >

-- Att[]'s Iury Gregory Melo Ferreira MSc in Computer Science at UFCG Part of the puppet-manager-core team in OpenStack Social: https://www.linkedin.com/in/iurygregory E-mail: iurygregory at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL:

From openstack at nemebean.com Mon Apr 2 20:54:14 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 2 Apr 2018 15:54:14 -0500 Subject: [openstack-dev] [oslo] proposing Ken Giusti for oslo-core In-Reply-To: <1522079471-sup-7587@lrrr.local> References: <1522079471-sup-7587@lrrr.local> Message-ID: <351ccb3c-b32e-a08c-f8eb-aace60e5852f@nemebean.com> It's been a week and no nacks, so welcome to oslo-core, Ken! -Ben On 03/26/2018 10:52 AM, Doug Hellmann wrote: > Ken has been managing oslo.messaging for ages now but his participation > in the team has gone far beyond that single library.
He regularly > attends meetings, including the PTG, and has provided input into several > of our team decisions recently. > > I think it's time we make him a full member of the oslo-core group. > > Please respond here with a +1 or -1 to indicate your opinion. > > Thanks, > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From chris at openstack.org Mon Apr 2 20:59:29 2018 From: chris at openstack.org (Chris Hoge) Date: Mon, 2 Apr 2018 13:59:29 -0700 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper Message-ID: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Hi everyone, In advance of the Vancouver Summit, I'm leading an effort to publish a community produced white-paper on OpenStack and container integrations. This has come out of a need to develop materials, both short and long form, to help explain how OpenStack interacts with container technologies across the entire stack, from infrastructure to application. The rough outline of the white-paper proposes an entire technology stack and discuss deployment and usage strategies at every level. The white-paper will focus on existing technologies, and how they are being used in production today across our community. Beginning at the hardware layer, we have the following outline (which may be inverted for clarity): * OpenStack Ironic for managing bare metal deployments. * Container-based deployment tools for installing and managing OpenStack * Kolla containers and Kolla-Ansible * Loci containers and OpenStack Helm * OpenStack-hosted APIs for managing container application infrastructure. * Magnum * Zun * Community-driven integration of Kubernetes and OpenStack with K8s Cloud Provider OpenStack * Projects that can stand alone in integrations with Kubernetes and other cloud technology * Cinder * Neutron with Kuryr and Calico integrations * Keystone authentication and authorization I'm looking for volunteers to help produce the content for these sections (and any others we may uncover to be useful) for presenting a complete picture of OpenStack and container integrations. If you're involved with one of these projects, or are using any of these tools in production, it would be fantastic to get your input in producing the appropriate section. We especially want real-world deployments to use as small case studies to inform the work. During the process of creating the white-paper, we will be working with a technical writer and the Foundation design team to produce a document that is consistent in voice, has accurate and informative graphics that can be used to illustrate the major points and themes of the white-paper, and that can be used as stand-alone media for conferences and presentations. Over the next week, I'll be reaching out to individuals and inviting them to collaborate. This is also a general invitation to collaborate, and if you'd like to help out with a section please reach out to me here, on the K8s #sig-openstack Slack channel, or at my work e-mail, chris at openstack.org. Starting next week, we'll work out a schedule for producing and delivering the white-paper by the Vancouver Summit. We are very short on time, so we will have to be focused to quickly produce high-quality content. Thanks in advance to everyone who participates in writing this document. 
I'm looking forward to working with you in the coming weeks to publish this important resource for clearly describing the multitude of interactions between these complementary technologies. -Chris Hoge K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead From jalnliu at lbl.gov Mon Apr 2 21:12:41 2018 From: jalnliu at lbl.gov (Jialin Liu) Date: Mon, 2 Apr 2018 14:12:41 -0700 Subject: [openstack-dev] container name in swift In-Reply-To: References: <6E3BBB3C-7FFB-4BF7-8F1B-DF0928919569@not.mn> Message-ID: Thanks Iury and John. Best, Jialin On Mon, Apr 2, 2018 at 1:17 PM, Iury Gregory wrote: > According to Swift doc[1] > > Length of container names 256 bytes Cannot contain the / character. > Length of object names 1024 bytes By default, there are no character > restrictions. > > [1] https://docs.openstack.org/swift/latest/api/object_api_ > v1_overview.html > > > > 2018-04-02 17:00 GMT-03:00 Jialin Liu : > >> Hi John, >> What is allowed in container name, but not in object name? >> I need a way to distinguish their name.. >> >> Best, >> Jialin >> >> On Mon, Apr 2, 2018 at 11:56 AM, John Dickinson wrote: >> >>> no >>> >>> On 2 Apr 2018, at 11:46, Jialin Liu wrote: >>> >>> > Hi, >>> > Can a container name in openstack swift contains / ? >>> > e.g., >>> > abc/111/mycontainer >>> > >>> > >>> > Best, >>> > Jialin >>> > ____________________________________________________________ >>> ______________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > > > *Att[]'sIury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > > *Part of the puppet-manager-core team in OpenStack* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Apr 2 21:14:03 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 2 Apr 2018 16:14:03 -0500 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper In-Reply-To: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Message-ID: Chris, Can't help with content but can volunteer as an editor. Amy (spotz) On Mon, Apr 2, 2018 at 3:59 PM, Chris Hoge wrote: > Hi everyone, > > In advance of the Vancouver Summit, I'm leading an effort to publish a > community produced white-paper on OpenStack and container integrations. 
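[Following up the swift naming thread above, a short python-swiftclient sketch of the distinction; the auth values are placeholders. In the object API path /v1/AUTH_account/container/object, the first segment after the account is always the container and everything after it is the object name, so a name like abc/111/mycontainer can only be modelled as container 'abc' plus a pseudo-hierarchical object name.]

```python
# Placeholder credentials; any auth method python-swiftclient supports works.
from swiftclient import client as swift

conn = swift.Connection(authurl='http://keystone:5000/v3',
                        user='demo', key='secret', auth_version='3')

conn.put_container('abc')  # valid: container names may not contain '/'
conn.put_object('abc', '111/mycontainer/data.bin',  # '/' is fine in object names
                contents=b'payload')
```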
> This has come out of a need to develop materials, both short and long > form, to help explain how OpenStack interacts with container > technologies across the entire stack, from infrastructure to > application. The rough outline of the white-paper proposes an entire > technology stack and discuss deployment and usage strategies at every > level. The white-paper will focus on existing technologies, and how they > are being used in production today across our community. Beginning at > the hardware layer, we have the following outline (which may be inverted > for clarity): > > * OpenStack Ironic for managing bare metal deployments. > * Container-based deployment tools for installing and managing OpenStack > * Kolla containers and Kolla-Ansible > * Loci containers and OpenStack Helm > * OpenStack-hosted APIs for managing container application > infrastructure. > * Magnum > * Zun > * Community-driven integration of Kubernetes and OpenStack with K8s > Cloud Provider OpenStack > * Projects that can stand alone in integrations with Kubernetes and > other cloud technology > * Cinder > * Neutron with Kuryr and Calico integrations > * Keystone authentication and authorization > > I'm looking for volunteers to help produce the content for these sections > (and any others we may uncover to be useful) for presenting a complete > picture of OpenStack and container integrations. If you're involved with > one of these projects, or are using any of these tools in > production, it would be fantastic to get your input in producing the > appropriate section. We especially want real-world deployments to use as > small case studies to inform the work. > > During the process of creating the white-paper, we will be working with a > technical writer and the Foundation design team to produce a document that > is consistent in voice, has accurate and informative graphics that > can be used to illustrate the major points and themes of the white-paper, > and that can be used as stand-alone media for conferences and > presentations. > > Over the next week, I'll be reaching out to individuals and inviting them > to collaborate. This is also a general invitation to collaborate, and if > you'd like to help out with a section please reach out to me here, on the > K8s #sig-openstack Slack channel, or at my work e-mail, > chris at openstack.org. > Starting next week, we'll work out a schedule for producing and delivering > the white-paper by the Vancouver Summit. We are very short on time, so > we will have to be focused to quickly produce high-quality content. > > Thanks in advance to everyone who participates in writing this > document. I'm looking forward to working with you in the coming weeks to > publish this important resource for clearly describing the multitude of > interactions between these complementary technologies. > > -Chris Hoge > K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hongbin034 at gmail.com Mon Apr 2 21:30:06 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 2 Apr 2018 17:30:06 -0400 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper In-Reply-To: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Message-ID: Hi Chris, I can help with the Zun session. Best regards, Hongbin On Mon, Apr 2, 2018 at 4:59 PM, Chris Hoge wrote: > Hi everyone, > > In advance of the Vancouver Summit, I'm leading an effort to publish a > community produced white-paper on OpenStack and container integrations. > This has come out of a need to develop materials, both short and long > form, to help explain how OpenStack interacts with container > technologies across the entire stack, from infrastructure to > application. The rough outline of the white-paper proposes an entire > technology stack and discuss deployment and usage strategies at every > level. The white-paper will focus on existing technologies, and how they > are being used in production today across our community. Beginning at > the hardware layer, we have the following outline (which may be inverted > for clarity): > > * OpenStack Ironic for managing bare metal deployments. > * Container-based deployment tools for installing and managing OpenStack > * Kolla containers and Kolla-Ansible > * Loci containers and OpenStack Helm > * OpenStack-hosted APIs for managing container application > infrastructure. > * Magnum > * Zun > * Community-driven integration of Kubernetes and OpenStack with K8s > Cloud Provider OpenStack > * Projects that can stand alone in integrations with Kubernetes and > other cloud technology > * Cinder > * Neutron with Kuryr and Calico integrations > * Keystone authentication and authorization > > I'm looking for volunteers to help produce the content for these sections > (and any others we may uncover to be useful) for presenting a complete > picture of OpenStack and container integrations. If you're involved with > one of these projects, or are using any of these tools in > production, it would be fantastic to get your input in producing the > appropriate section. We especially want real-world deployments to use as > small case studies to inform the work. > > During the process of creating the white-paper, we will be working with a > technical writer and the Foundation design team to produce a document that > is consistent in voice, has accurate and informative graphics that > can be used to illustrate the major points and themes of the white-paper, > and that can be used as stand-alone media for conferences and > presentations. > > Over the next week, I'll be reaching out to individuals and inviting them > to collaborate. This is also a general invitation to collaborate, and if > you'd like to help out with a section please reach out to me here, on the > K8s #sig-openstack Slack channel, or at my work e-mail, > chris at openstack.org. > Starting next week, we'll work out a schedule for producing and delivering > the white-paper by the Vancouver Summit. We are very short on time, so > we will have to be focused to quickly produce high-quality content. > > Thanks in advance to everyone who participates in writing this > document. I'm looking forward to working with you in the coming weeks to > publish this important resource for clearly describing the multitude of > interactions between these complementary technologies. 
> > -Chris Hoge > K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at openstack.org Mon Apr 2 21:55:00 2018 From: mark at openstack.org (Mark Collier) Date: Mon, 2 Apr 2018 16:55:00 -0500 Subject: [openstack-dev] Last chance Vancouver Summit Early Birds! Message-ID: <32721DA6-2332-40A2-B5D6-24B6B9B2D2CA@openstack.org> Hey Stackers, You’ve got TWO DAYS left to snag an early bird ticket, which is $699 for a full access, week-long pass. That’s four days of 300+ sessions and workshops on OpenStack, containers, edge, CI/CD and HPC/GPU/AI in Vancouver May 21-24th. The OpenStack Summit is my favorite place to meet and learn from smart, driven, funny people from all over the world. Will you join me in Vancouver May 21-24? OpenStack.org/summit has the details. Who else will you meet in Vancouver? - An OpenStack developer to discuss the future of the software? - A Kubernetes expert in one of more than 60 sessions about Kubernetes? - A Foundation member who can help you learn how to contribute code upstream at the Upstream Institute? - Other enterprises & service providers running OpenStack at scale like JPMorgan Chase, Progressive Insurance, Google, Target, Walmart, Yahoo!, China Mobile, AT&T, Verizon, China Railway, and Yahoo! Japan? - Your next employee… or employer? Key links: Register: openstack.org/summit (Early bird pricing ends April 4 at 11:59pm Pacific Time / April 5 6:59 UTC) Full Schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule/#day=2018-05-21 Hotel Discounts: https://www.openstack.org/summit/vancouver-2018/travel/ Sponsor: https://www.openstack.org/summit/vancouver-2018/sponsors/ Code of Conduct: https://www.openstack.org/summit/vancouver-2018/code-of-conduct/ See you at the Summit! Mark twitter.com/sparkycollier -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Mon Apr 2 22:28:15 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Mon, 2 Apr 2018 18:28:15 -0400 Subject: [openstack-dev] [glance] python-glanceclient release status Message-ID: These need to be reviewed in master: - https://review.openstack.org/#/c/555550/ - https://review.openstack.org/#/c/556292/ Backports needing review: - https://review.openstack.org/#/c/555436/ From zbitter at redhat.com Mon Apr 2 23:41:04 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 2 Apr 2018 19:41:04 -0400 Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc In-Reply-To: <1522247496.4003.31.camel@redhat.com> References: <1522247496.4003.31.camel@redhat.com> Message-ID: <7c7d3421-9bfb-fa6e-69fe-6e3baea762cf@redhat.com> On 28/03/18 10:31, Stephen Finucane wrote: > As noted last week [1], we're trying to move away from pbr's autodoc > feature as part of the new docs PTI. To that end, I've created > sphinxcontrib-apidoc, which should do what pbr was previously doing for > us by via a Sphinx extension. > > https://pypi.org/project/sphinxcontrib-apidoc/ > > This works by reading some configuration from your documentation's > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no > longer need pbr to do this for. 
> > I have pushed version 0.1.0 to PyPi already but before I add this to > global requirements, I'd like to ensure things are working as expected. > smcginnis was kind enough to test this out on glance and it seemed to > work for him but I'd appreciate additional data points. The > configuration steps for this extension are provided in the above link. > To test this yourself, you simply need to do the following: > > 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or > doc/requirements.txt file > 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]' > configuration from 'setup.cfg' > 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build' > 4. Run 'tox -e docs' > 5. Profit? > > Be sure to let me know if anyone encounters issues. If not, I'll be > pushing for this to be included in global requirements so we can start > the migration. Thanks Stephen! I tried it out with no problems: https://review.openstack.org/558262 However, there are a couple of differences compared to how pbr did things. 1) pbr can generate an 'autoindex' file with a flat list of modules (this appears to be configurable with the autodoc_index_modules option), but apidoc only generates a 'modules' file with a hierarchical list of modules. This is easy to work around, but I guess it needs to be added to the instructions to check that you're not relying on it. 2) pbr generates a page per module; this plugin generates a page per package. This results in waaaay too much information on a page to be able to navigate it comfortably IMHO. To the point where it's easier to read the code. (It also breaks existing links, if you care about that kind of thing.) I sent you a PR to add an option to pass --separate: https://github.com/sphinx-contrib/apidoc/pull/1 cheers, Zane. From zhipengh512 at gmail.com Tue Apr 3 00:13:01 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 3 Apr 2018 08:13:01 +0800 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper In-Reply-To: References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Message-ID: Hi Chris, If it is possible to add Cyborg under "Projects that can stand alone in integrations with Kubernetes and other cloud technology" section, I would like to help on that content. On Tue, Apr 3, 2018 at 5:30 AM, Hongbin Lu wrote: > Hi Chris, > > I can help with the Zun session. > > Best regards, > Hongbin > > On Mon, Apr 2, 2018 at 4:59 PM, Chris Hoge wrote: > >> Hi everyone, >> >> In advance of the Vancouver Summit, I'm leading an effort to publish a >> community produced white-paper on OpenStack and container integrations. >> This has come out of a need to develop materials, both short and long >> form, to help explain how OpenStack interacts with container >> technologies across the entire stack, from infrastructure to >> application. The rough outline of the white-paper proposes an entire >> technology stack and discuss deployment and usage strategies at every >> level. The white-paper will focus on existing technologies, and how they >> are being used in production today across our community. Beginning at >> the hardware layer, we have the following outline (which may be inverted >> for clarity): >> >> * OpenStack Ironic for managing bare metal deployments. >> * Container-based deployment tools for installing and managing OpenStack >> * Kolla containers and Kolla-Ansible >> * Loci containers and OpenStack Helm >> * OpenStack-hosted APIs for managing container application >> infrastructure. 
>> * Magnum >> * Zun >> * Community-driven integration of Kubernetes and OpenStack with K8s >> Cloud Provider OpenStack >> * Projects that can stand alone in integrations with Kubernetes and >> other cloud technology >> * Cinder >> * Neutron with Kuryr and Calico integrations >> * Keystone authentication and authorization >> >> I'm looking for volunteers to help produce the content for these sections >> (and any others we may uncover to be useful) for presenting a complete >> picture of OpenStack and container integrations. If you're involved with >> one of these projects, or are using any of these tools in >> production, it would be fantastic to get your input in producing the >> appropriate section. We especially want real-world deployments to use as >> small case studies to inform the work. >> >> During the process of creating the white-paper, we will be working with a >> technical writer and the Foundation design team to produce a document that >> is consistent in voice, has accurate and informative graphics that >> can be used to illustrate the major points and themes of the white-paper, >> and that can be used as stand-alone media for conferences and >> presentations. >> >> Over the next week, I'll be reaching out to individuals and inviting them >> to collaborate. This is also a general invitation to collaborate, and if >> you'd like to help out with a section please reach out to me here, on the >> K8s #sig-openstack Slack channel, or at my work e-mail, >> chris at openstack.org. >> Starting next week, we'll work out a schedule for producing and delivering >> the white-paper by the Vancouver Summit. We are very short on time, so >> we will have to be focused to quickly produce high-quality content. >> >> Thanks in advance to everyone who participates in writing this >> document. I'm looking forward to working with you in the coming weeks to >> publish this important resource for clearly describing the multitude of >> interactions between these complementary technologies. >> >> -Chris Hoge >> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead >> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dprince at redhat.com  Tue Apr  3 01:05:53 2018
From: dprince at redhat.com (Dan Prince)
Date: Mon, 2 Apr 2018 21:05:53 -0400
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
In-Reply-To: 
References: 
Message-ID: 

On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote:
> Greeting folks,
>
> During the last PTG we spent time discussing some ideas around an All-In-One
> installer, using 100% of the TripleO bits to deploy a single node OpenStack
> very similar with what we have today with the containerized undercloud and
> what we also have with other tools like Packstack or Devstack.
>
> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>
> One of the problems that we're trying to solve here is to give a simple tool
> for developers so they can both easily and quickly deploy an OpenStack for
> their needs.
>
> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly
> and without complexity, reproducing the same exact same tooling as TripleO
> is using."
> "As a Neutron developer, I need to develop a feature in Neutron and test it
> with TripleO in my local env."
> "As a TripleO dev, I need to implement a new service and test its deployment
> in my local env."
> "As a developer, I need to reproduce a bug in TripleO CI that blocks the
> production chain, quickly and simply."
>
> Probably more use cases, but to me that's what came into my mind now.
>
> Dan kicked-off a doc patch a month ago:
> https://review.openstack.org/#/c/547038/
> And I just went ahead and proposed a blueprint:
> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
> So hopefully we can start prototyping something during Rocky.

I've actually started hacking a bit here:

https://github.com/dprince/talon

Very early and I haven't committed everything yet. (I probably wouldn't have announced it to the list yet, but it might help some understand the use case.) I'm running this on my laptop to develop TripleO containers with no extra VM involved.

P.S. We should call it Talon!

Dan

> Before talking about the actual implementation, I would like to gather
> feedback from people interested by the use-cases. If you recognize yourself
> in these use-cases and you're not using TripleO today to test your things
> because it's too complex to deploy, we want to hear from you.
> I want to see feedback (positive or negative) about this idea. We need to
> gather ideas, use cases, needs, before we go design a prototype in Rocky.

Sorry dude. Already prototyping :)

> Thanks everyone who'll be involved,
> --
> Emilien Macchi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

From zhaochao1984 at gmail.com  Tue Apr  3 02:03:47 2018
From: zhaochao1984 at gmail.com (赵超)
Date: Tue, 3 Apr 2018 10:03:47 +0800
Subject: [openstack-dev] [trove] Trove weekly meeting on April 4th, 2018 is cancelled
Message-ID: 

Hi,

Sorry for forgetting to discuss this at the last meeting. As we here in China will be on vacation for the Qingming Festival from April 5th, some core members may not be able to attend, so let's skip this week's meeting; the next one will be on Wednesday, April 11th, 2018.

-- 
To be free as in freedom.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From delightwook at ssu.ac.kr  Tue Apr  3 02:36:26 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Tue, 3 Apr 2018 11:36:26 +0900
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: 
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr>
Message-ID: <000a01d3caf4$90584010$b108c030$@ssu.ac.kr>

Hello Ifat,

I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus, but there are some limitations we have to think about. We also need to think about targets, scope, and so on.

The reason I do not see tools like Zabbix, Nagios, and Prometheus as the way to run these checks is that we would need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host, but the scope of the idea, as I see it, includes the VM's interior. Therefore, configuring the agent automatically inside the VM may not be easy (although we can use parameters like user-data).

If we exclude VM-internal checks from the scope, we can simply perform a check via Zabbix (for example, through Zabbix's remote commands and history). On the other hand, if we include the inside of the VMs in the scope and configure an agent in each of them, we get a rather constant overhead: the check service may incur a temporary overhead, but the agent configuration causes a constant one. And the Zabbix history could become yet another task for Vitrage to manage.

If we configure the agents ourselves and exclude the VM-internal checks, we can provide the functionality with simple code. What do you think?

Thank you.
Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Monday, April 2, 2018 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service.

Some technical details:

* The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But can be done easily.
* A relevant Zabbix API is history.get [1]
* If you are not using Zabbix, I assume that other monitoring tools have similar capabilities

What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user is viewing the data in the format that you suggested?

[1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get

Thanks,
Ifat

From: MinWookKim < delightwook at ssu.ac.kr>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org>
Date: Monday, 2 April 2018 at 4:51
To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thank you for the reply.
:) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention) The same security issues arise as you say. But now Vitrage does not call external APIs. The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API. -> api / controllers / v1 / checks.py is called. -> checks service is called. In accordance with the above flow, passing through the Vitrage API is the purpose of data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I think wrong, please tell me anytime. :) Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate api address from the vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the api address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address for the check component, which can cause the host and VM to execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user can not check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. 
By storing temporary lists (for example, up to 10), the user can see the previous results and think that it is also possible to empty the list by the user. how is it? Thank you. Best Regrads, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component The last time I mentioned it as api-handler, it would be better to call the check component directly from Vitarge-api without having to use it. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [ mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This proposal is a proposal that we expect to be useful from a user perspective. >From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save it to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage like add-action-list panel, but it seems that it is not enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see it in the panel. If we switch to another menu before the scan is complete, we will not be able to see the results. Parallel checking is a matter of fact. 
(This can cause excessive overhead.) For earlier results, it may be okay to temporarily save the open panel until we exit the panel. We can see the previous results through the temporary saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I implement a separate component to request it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. It is expected to be really good. It would be very useful if an Entity-Graph generates an alarm based on the check result. I think that part will be able to talk in detail later. My answer is my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. * I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. * Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? * Any thoughts of what component will implement those checks? Or maybe these will be just scripts? * It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Tuesday, 27 March 2018 at 14:45 To: " openstack-dev at lists.openstack.org" < openstack-dev at lists.openstack.org> Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. ( https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. 
My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. >From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 41182 bytes Desc: not available URL: From pete at port.direct Tue Apr 3 04:38:23 2018 From: pete at port.direct (Pete Birley) Date: Mon, 2 Apr 2018 23:38:23 -0500 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper In-Reply-To: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Message-ID: Chris, I'd be happy to help out where I can, mostly related to OSH and LOCI. One thing we should make clear is that both of these projects are agnostic to each other: we gate OSH with both LOCI and kolla images, and conversely LOCI has uses far beyond just OSH. Pete On Monday, April 2, 2018, Chris Hoge wrote: > Hi everyone, > > In advance of the Vancouver Summit, I'm leading an effort to publish a > community produced white-paper on OpenStack and container integrations. > This has come out of a need to develop materials, both short and long > form, to help explain how OpenStack interacts with container > technologies across the entire stack, from infrastructure to > application. The rough outline of the white-paper proposes an entire > technology stack and discuss deployment and usage strategies at every > level. The white-paper will focus on existing technologies, and how they > are being used in production today across our community. Beginning at > the hardware layer, we have the following outline (which may be inverted > for clarity): > > * OpenStack Ironic for managing bare metal deployments. > * Container-based deployment tools for installing and managing OpenStack > * Kolla containers and Kolla-Ansible > * Loci containers and OpenStack Helm > * OpenStack-hosted APIs for managing container application > infrastructure. > * Magnum > * Zun > * Community-driven integration of Kubernetes and OpenStack with K8s > Cloud Provider OpenStack > * Projects that can stand alone in integrations with Kubernetes and > other cloud technology > * Cinder > * Neutron with Kuryr and Calico integrations > * Keystone authentication and authorization > > I'm looking for volunteers to help produce the content for these sections > (and any others we may uncover to be useful) for presenting a complete > picture of OpenStack and container integrations. If you're involved with > one of these projects, or are using any of these tools in > production, it would be fantastic to get your input in producing the > appropriate section. We especially want real-world deployments to use as > small case studies to inform the work. 
> During the process of creating the white-paper, we will be working with a
> technical writer and the Foundation design team to produce a document that
> is consistent in voice, has accurate and informative graphics that
> can be used to illustrate the major points and themes of the white-paper,
> and that can be used as stand-alone media for conferences and
> presentations.
>
> Over the next week, I'll be reaching out to individuals and inviting them
> to collaborate. This is also a general invitation to collaborate, and if
> you'd like to help out with a section please reach out to me here, on the
> K8s #sig-openstack Slack channel, or at my work e-mail,
> chris at openstack.org.
> Starting next week, we'll work out a schedule for producing and delivering
> the white-paper by the Vancouver Summit. We are very short on time, so
> we will have to be focused to quickly produce high-quality content.
>
> Thanks in advance to everyone who participates in writing this
> document. I'm looking forward to working with you in the coming weeks to
> publish this important resource for clearly describing the multitude of
> interactions between these complementary technologies.
>
> -Chris Hoge
> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

-- 

Pete Birley / Director
pete at port.direct / +447446862551

*PORT.*DIRECT
United Kingdom
https://port.direct

This e-mail message may contain confidential or legally privileged information and is intended only for the use of the intended recipient(s). Any unauthorized disclosure, dissemination, distribution, copying or the taking of any action in reliance on the information herein is prohibited. E-mails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, or contain viruses. Anyone who communicates with us by e-mail is deemed to have accepted these risks. Port.direct is not responsible for errors or omissions in this message and denies any responsibility for any damage arising from the use of e-mail. Any opinion and other statement contained in this message and any attachment are solely those of the author and do not necessarily represent those of the company.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bluejay.ahn at gmail.com  Tue Apr  3 06:38:48 2018
From: bluejay.ahn at gmail.com (Jaesuk Ahn)
Date: Tue, 03 Apr 2018 06:38:48 +0000
Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper
In-Reply-To: 
References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org>
Message-ID: 

Hi Chris,

I can probably help with proof-reading and with creating some content for the openstack-helm part.

As Pete pointed out, LOCI and OpenStack-Helm (OSH) are agnostic to each other. OSH works well with both kolla images and loci images. IMHO, the following categorization might better capture the nature of these projects. Just a suggestion.

* OpenStack containerization tools
  * Kolla
  * Loci
* Container-based deployment tools for installing and managing OpenStack
  * Kolla-Ansible
  * OpenStack Helm

On Tue, Apr 3, 2018 at 10:08 AM Pete Birley wrote:

> Chris,
>
> I'd be happy to help out where I can, mostly related to OSH and LOCI.
One > thing we should make clear is that both of these projects are agnostic to > each other: we gate OSH with both LOCI and kolla images, and conversely > LOCI has uses far beyond just OSH. > > Pete > > On Monday, April 2, 2018, Chris Hoge wrote: > >> Hi everyone, >> >> In advance of the Vancouver Summit, I'm leading an effort to publish a >> community produced white-paper on OpenStack and container integrations. >> This has come out of a need to develop materials, both short and long >> form, to help explain how OpenStack interacts with container >> technologies across the entire stack, from infrastructure to >> application. The rough outline of the white-paper proposes an entire >> technology stack and discuss deployment and usage strategies at every >> level. The white-paper will focus on existing technologies, and how they >> are being used in production today across our community. Beginning at >> the hardware layer, we have the following outline (which may be inverted >> for clarity): >> >> * OpenStack Ironic for managing bare metal deployments. >> * Container-based deployment tools for installing and managing OpenStack >> * Kolla containers and Kolla-Ansible >> * Loci containers and OpenStack Helm >> * OpenStack-hosted APIs for managing container application >> infrastructure. >> * Magnum >> * Zun >> * Community-driven integration of Kubernetes and OpenStack with K8s >> Cloud Provider OpenStack >> * Projects that can stand alone in integrations with Kubernetes and >> other cloud technology >> * Cinder >> * Neutron with Kuryr and Calico integrations >> * Keystone authentication and authorization >> >> I'm looking for volunteers to help produce the content for these sections >> (and any others we may uncover to be useful) for presenting a complete >> picture of OpenStack and container integrations. If you're involved with >> one of these projects, or are using any of these tools in >> production, it would be fantastic to get your input in producing the >> appropriate section. We especially want real-world deployments to use as >> small case studies to inform the work. >> >> During the process of creating the white-paper, we will be working with a >> technical writer and the Foundation design team to produce a document that >> is consistent in voice, has accurate and informative graphics that >> can be used to illustrate the major points and themes of the white-paper, >> and that can be used as stand-alone media for conferences and >> presentations. >> >> Over the next week, I'll be reaching out to individuals and inviting them >> to collaborate. This is also a general invitation to collaborate, and if >> you'd like to help out with a section please reach out to me here, on the >> K8s #sig-openstack Slack channel, or at my work e-mail, >> chris at openstack.org. >> Starting next week, we'll work out a schedule for producing and delivering >> the white-paper by the Vancouver Summit. We are very short on time, so >> we will have to be focused to quickly produce high-quality content. >> >> Thanks in advance to everyone who participates in writing this >> document. I'm looking forward to working with you in the coming weeks to >> publish this important resource for clearly describing the multitude of >> interactions between these complementary technologies. 
>> >> -Chris Hoge >> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- > > [image: Port.direct] > > Pete Birley / Director > pete at port.direct / +447446862551 <+44%207446%20862551> > > *PORT.*DIRECT > United Kingdom > https://port.direct > > This e-mail message may contain confidential or legally privileged > information and is intended only for the use of the intended recipient(s). > Any unauthorized disclosure, dissemination, distribution, copying or the > taking of any action in reliance on the information herein is prohibited. > E-mails are not secure and cannot be guaranteed to be error free as they > can be intercepted, amended, or contain viruses. Anyone who communicates > with us by e-mail is deemed to have accepted these risks. Port.direct is > not responsible for errors or omissions in this message and denies any > responsibility for any damage arising from the use of e-mail. Any opinion > and other statement contained in this message and any attachment are solely > those of the author and do not necessarily represent those of the company. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Jaesuk Ahn, Team Lead Virtualization SW Lab, SW R&D Center SK Telecom -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Apr 3 10:07:42 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 3 Apr 2018 12:07:42 +0200 Subject: [openstack-dev] [os-upstream-institute] Call before the Vancouver training - ACTION NEEDED Message-ID: Hi Training Team, Our next training in Vancouver[1] is quickly approaching and we still have a lot of work to do. In order to sync up I created a Doodle poll[2] with hours that are somewhat inconvenient, but can work around the globe. Please respond to the poll so we can setup a call to check on where we are and do last minute changes if needed. In the meantime we are moving content over from the training-guides slides to the Contributor Guide[3], please pick a task and help out! We also need to work on the exercises to keep the training interactive and hands on. If you have ideas please respond to this thread, jump on our IRC channel (#openstack-upstream-institute) or propose a patch to the training guides repository. :) Let me know if you have any questions. Thanks, Ildikó (IRC: ildikov) [1] https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=Upstream+Institute [2] https://doodle.com/poll/i894hhd7bfukmm7p [3] https://storyboard.openstack.org/#!/project/913 From tbechtold at suse.com Tue Apr 3 10:10:43 2018 From: tbechtold at suse.com (Thomas Bechtold) Date: Tue, 3 Apr 2018 12:10:43 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Solar" release In-Reply-To: <20180330142643.ff3czxy35khmjakx@eukaryote> References: <20180330142643.ff3czxy35khmjakx@eukaryote> Message-ID: <50df1384-0a2c-d832-8b7d-7d5f8877bf1b@suse.com> Hey, On 30.03.2018 16:26, Kashyap Chamarthy wrote: [...] 
> Taking the DistroSupportMatrix into picture, for the sake of discussion,
> how about the following NEXT_MIN versions for "Solar" release:
>
> (a) libvirt: 3.2.0 (released on 23-Feb-2017)
[...]
>
> (b) QEMU: 2.9.0 (released on 20-Apr-2017)
[...]

Works both for openSUSE and SLES.

Best,
Tom

From huan.xiong at hxt-semitech.com  Tue Apr  3 10:25:29 2018
From: huan.xiong at hxt-semitech.com (Xiong, Huan)
Date: Tue, 3 Apr 2018 10:25:29 +0000
Subject: [openstack-dev] [novaclient] invoking methods on the same client object in different threads caused malformed requests
Message-ID: <1fb56ae6b328402fb3dd58dde67c2002@HXTBJIDCEMVIW02.hxtcorp.net>

Hi,

I'm using a cloud benchmarking tool [1], which creates a *single* nova client object in the main thread and invokes methods on that object from different worker threads. I found that it generated malformed requests at random (my system has python-novaclient 10.1.0 installed).

The root cause is that some methods in novaclient (e.g., those in images.py and networks.py) change the client object's service_type. Since all threads share a single client object, the change caused other threads to generate malformed requests, hence the failures.

I wonder whether this is a known issue for novaclient, or whether the above approach is simply not supported?

Thanks,

rayx

[1] https://github.com/ibmcb/cbtool

From sfinucan at redhat.com  Tue Apr  3 10:28:25 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Tue, 03 Apr 2018 11:28:25 +0100
Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc
In-Reply-To: <7c7d3421-9bfb-fa6e-69fe-6e3baea762cf@redhat.com>
References: <1522247496.4003.31.camel@redhat.com> <7c7d3421-9bfb-fa6e-69fe-6e3baea762cf@redhat.com>
Message-ID: <1522751305.3618.17.camel@redhat.com>

On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote:
> On 28/03/18 10:31, Stephen Finucane wrote:
> > As noted last week [1], we're trying to move away from pbr's autodoc
> > feature as part of the new docs PTI. To that end, I've created
> > sphinxcontrib-apidoc, which should do what pbr was previously doing for
> > us by via a Sphinx extension.
> > 
> > https://pypi.org/project/sphinxcontrib-apidoc/
> > 
> > This works by reading some configuration from your documentation's
> > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no
> > longer need pbr to do this for.
> > 
> > I have pushed version 0.1.0 to PyPi already but before I add this to
> > global requirements, I'd like to ensure things are working as expected.
> > smcginnis was kind enough to test this out on glance and it seemed to
> > work for him but I'd appreciate additional data points. The
> > configuration steps for this extension are provided in the above link.
> > To test this yourself, you simply need to do the following:
> > 
> > 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or
> > doc/requirements.txt file
> > 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]'
> > configuration from 'setup.cfg'
> > 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build'
> > 4. Run 'tox -e docs'
> > 5. Profit?
> > 
> > Be sure to let me know if anyone encounters issues. If not, I'll be
> > pushing for this to be included in global requirements so we can start
> > the migration.
> 
> Thanks Stephen! I tried it out with no problems:
> 
> https://review.openstack.org/558262
> 
> However, there are a couple of differences compared to how pbr did things.
> > 1) pbr can generate an 'autoindex' file with a flat list of modules > (this appears to be configurable with the autodoc_index_modules option), > but apidoc only generates a 'modules' file with a hierarchical list of > modules. This is easy to work around, but I guess it needs to be added > to the instructions to check that you're not relying on it. Yup, smcginnis and I discussed this at some point. PBR has two different ways of generating API documentation: 'autodoc_tree', which is based on 'sphinx-apidoc', and 'autodoc', which is custom (and presumably legacy). This extension replaces the former of those but, as you note below, it seems 'sphinx-apidoc' can be wrangled into generating something approaching the latter. > 2) pbr generates a page per module; this plugin generates a page per > package. This results in waaaay too much information on a page to be > able to navigate it comfortably IMHO. To the point where it's easier to > read the code. (It also breaks existing links, if you care about that > kind of thing.) I sent you a PR to add an option to pass --separate: > > https://github.com/sphinx-contrib/apidoc/pull/1 Thanks for that. I've merged it and will use it as the basis of a 0.2.0 release assuming nothing else pops up in the next day or two. I'm not sure what we can do about the broken links though - maybe use the redirect infrastructure to just send everyone to the new place? I guess I can add this to the guide I'm adding to the README on migrating from pbr. Cheers, Stephen From ifat.afek at nokia.com Tue Apr 3 11:30:50 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 3 Apr 2018 11:30:50 +0000 Subject: [openstack-dev] [Vitrage] New proposal for analysis. In-Reply-To: <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> Message-ID: Hi Minwook, Thanks for the explanation, I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense. However, I don’t want to re-invent the wheel and add to Vitrage functionality that already exists in other projects. How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I’m not so sure about the UI part, we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc. What do you think? Ifat From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 3 April 2018 at 5:36 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about it. We also need to think about targets, scope, and so on. The reason I do not think of tools like Zabbix, Nagios, and Prometheus as a tool to run checks is because we need to configure an agent or an exporter. 
I think it is not hard to configure an agent for monitoring objects such as a physical host. But the scope of the idea I think involves the VM's interior. Therefore, configuring the agent automatically inside the VM may not be easy. (although we can use parameters like user-data) If we exclude VM internal checks from scope, we can simply perform a check via Zabbix. (Like Zabbix's remote command, history) On the other hand, if we include the inside of a VM in a scope, and configure each of them, we have a rather constant overhead. The check service may incur temporary overhead, but the agent configuration can cause constant overhead. And Zabbix history can be another task for Vitrage. If we configure the agents themselves and exclude the VM's internal checks, we can provide functionality with simple code. how is it? Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Monday, April 2, 2018 10:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service. Some technical details: · The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But can be done easily. · A relevant Zabbix API is history.get [1] · If you are not using Zabbix, I assume that other monitoring tools have similar capabilities What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user is viewing the data in the format that you suggested? [1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, 2 April 2018 at 4:51 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thank you for the reply. :) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention) The same security issues arise as you say. But now Vitrage does not call external APIs. The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API. -> api / controllers / v1 / checks.py is called. -> checks service is called. In accordance with the above flow, passing through the Vitrage API is the purpose of data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I think wrong, please tell me anytime. :) Thank you. Best regards, Minwook. 
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate api address from the vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the api address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address for the check component, which can cause the host and VM to execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user can not check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing temporary lists (for example, up to 10), the user can see the previous results and think that it is also possible to empty the list by the user. how is it? Thank you. Best Regrads, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. 
The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component The last time I mentioned it as api-handler, it would be better to call the check component directly from Vitarge-api without having to use it. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This proposal is a proposal that we expect to be useful from a user perspective. From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save it to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage like add-action-list panel, but it seems that it is not enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see it in the panel. If we switch to another menu before the scan is complete, we will not be able to see the results. Parallel checking is a matter of fact. (This can cause excessive overhead.) For earlier results, it may be okay to temporarily save the open panel until we exit the panel. We can see the previous results through the temporary saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I implement a separate component to request it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. It is expected to be really good. It would be very useful if an Entity-Graph generates an alarm based on the check result. I think that part will be able to talk in detail later. My answer is my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. 
I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. · I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. · Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? · Any thoughts of what component will implement those checks? Or maybe these will be just scripts? · It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. (https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... 
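To make the Mistral suggestion above more concrete, here is a minimal sketch of how such a check could be driven end-to-end with python-mistralclient. Everything in it is a placeholder rather than an existing Vitrage or Mistral API: the 'vitrage_check' workflow (which would wrap the actual check script and return its result as workflow output), its input parameters, and the credentials would all have to be prepared in advance.

    import json
    import time

    from mistralclient.api import client as mistral_client

    # Hypothetical placeholders: adjust the credentials and auth_url
    # to the target cloud.
    mistral = mistral_client.client(
        username='admin',
        api_key='password',
        project_name='admin',
        auth_url='http://controller:5000/v3')

    # Trigger the (hypothetical) pre-registered check workflow for an
    # entity selected in the entity graph.
    execution = mistral.executions.create(
        'vitrage_check',
        workflow_input={'host': 'compute-0', 'check': 'p2p'})

    # Poll until the workflow finishes, then read its output for display.
    while execution.state in ('IDLE', 'RUNNING'):
        time.sleep(2)
        execution = mistral.executions.get(execution.id)

    print(execution.state, json.loads(execution.output))

The same trigger-and-poll pattern could back a results panel in vitrage-dashboard, so no new check service or API would be needed.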
URL: From elod.illes at ericsson.com Tue Apr 3 12:05:35 2018 From: elod.illes at ericsson.com (Előd Illés) Date: Tue, 3 Apr 2018 14:05:35 +0200 Subject: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed In-Reply-To: <20180401035507.GD4343@thor.bakeyournoodle.com> References: <20180401035507.GD4343@thor.bakeyournoodle.com> Message-ID: <144369c3-204e-fcf7-9265-855f952bdb02@ericsson.com> Hi, These patches probably solve the issue, if someone could review them: https://review.openstack.org/#/c/557005/ and https://review.openstack.org/#/c/557006/ Thanks, Előd On 2018-04-01 05:55, Tony Breeds wrote: > On Sat, Mar 31, 2018 at 06:17:41AM +0000, A mailing list for the OpenStack Stable Branch test reports. wrote: >> Build failed. >> >> - build-openstack-sphinx-docs http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/build-openstack-sphinx-docs/b20c665/html/ : SUCCESS in 5m 48s >> - openstack-tox-py27 http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/pike/openstack-tox-py27/75db3fe/ : FAILURE in 11m 49s > > > I'm not sure what's going on here but as with stable/ocata the > networking-midonet periodic-stable jobs have been failing like this for > close to a week. > > Can someone from that team take a look? > > Yours Tony. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From delightwook at ssu.ac.kr Tue Apr 3 12:19:18 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Tue, 3 Apr 2018 21:19:18 +0900 Subject: [openstack-dev] [Vitrage] New proposal for analysis. In-Reply-To: References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> Message-ID: <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr> Hello Ifat, Thanks for your reply. Your comments have been a great help to the proposal. (Sorry, I had not realized we could use Mistral.) If we use a Mistral workflow for the proposal, we can get better results (in both performance and code conciseness). Also, with a Mistral workflow we will not need to write any unnecessary code. Since I do not know Mistral well yet, I think it would be better to settle on the most efficient design, including Mistral, after I have studied it. If we run a check through a Mistral workflow, how about providing users with a choice of tools that are capable of performing the checks? We can get the check results through Mistral and those tools, but I think we still need at least minimal functionality to manage them. What do you think? I attached a picture of a simple UI that I implemented. I hope it helps you understand. (The parameters and content have no meaning; they are just a simple example.) : ) Thanks. Best regards, Minwook. 
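For illustration only, launching such a check as a Mistral workflow execution and polling for its output might look roughly like the sketch below, assuming the python-mistralclient library (the constructor arguments, workflow name and inputs are made-up assumptions, not a tested implementation):

    # Illustrative sketch: run a prepared 'check' workflow in Mistral
    # and wait for its result. 'vitrage_p2p_check' and all connection
    # parameters are hypothetical placeholders.
    import time

    from mistralclient.api import client as mistral_client

    mistral = mistral_client.client(
        auth_url='http://controller:5000/v3',
        username='admin',
        api_key='secret',
        project_name='admin')

    execution = mistral.executions.create(
        'vitrage_p2p_check',
        workflow_input={'host': 'compute-0', 'target': 'compute-1'})

    # Poll until the workflow reaches a terminal state, then read the
    # output it published (a JSON string).
    while execution.state not in ('SUCCESS', 'ERROR'):
        time.sleep(2)
        execution = mistral.executions.get(execution.id)

    print(execution.state)
    print(execution.output)

A panel in vitrage-dashboard could keep the execution id and re-fetch the result later, which would also cover the "user closed the window" case discussed elsewhere in the thread.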
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 3, 2018 8:31 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thanks for the explanation. I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense. However, I don’t want to re-invent the wheel and add to Vitrage functionality that already exists in other projects. How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I’m not so sure about the UI part; we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc. What do you think? Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 3 April 2018 at 5:36 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about them. We also need to think about targets, scope, and so on. The reason I do not see tools like Zabbix, Nagios, and Prometheus as the way to run these checks is that we would need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host. But I think the scope of the idea includes the VM's interior. Therefore, configuring the agent automatically inside the VM may not be easy (although we could use parameters like user-data). If we exclude VM-internal checks from the scope, we can simply perform a check via Zabbix (for example, Zabbix's remote commands and history). On the other hand, if we include the inside of the VMs in the scope and configure an agent in each of them, we get a rather constant overhead. The check service may incur temporary overhead, but the agent configuration causes constant overhead. And Zabbix history could become another task for Vitrage. If we configure the agents ourselves and exclude the VM-internal checks, we can provide the functionality with simple code. How does that sound? Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Monday, April 2, 2018 10:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service. Some technical details: * The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But it can be done easily. * A relevant Zabbix API is history.get [1] (see the sketch below) * If you are not using Zabbix, I assume that other monitoring tools have similar capabilities. What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user in viewing the data in the format that you suggested? [1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get Thanks, Ifat 
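For illustration only, here is a minimal sketch of what pulling the latest samples for one item via history.get could look like, assuming the third-party pyzabbix client library (the server URL, credentials, item id and value type below are made-up placeholders):

    # Illustrative sketch: fetch recent history for one Zabbix item,
    # roughly what a "show the relevant metrics" action would need.
    # Assumptions: pyzabbix is available; URL/credentials/itemid are fake.
    from pyzabbix import ZabbixAPI

    zapi = ZabbixAPI('http://zabbix.example.com/zabbix')
    zapi.login('Admin', 'zabbix')

    # history=0 requests numeric (float) values; see the history.get
    # reference linked as [1] above for the other value types.
    samples = zapi.history.get(itemids='23296',
                               history=0,
                               sortfield='clock',
                               sortorder='DESC',
                               limit=10)
    for sample in samples:
        print(sample['clock'], sample['value'])

The item id here would be exactly the extra data kept on the alarm vertex, as mentioned above.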
From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Monday, 2 April 2018 at 4:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thank you for the reply. :) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention) The same security issues arise, as you say. But today, Vitrage does not call external APIs. The Vitrage-dashboard uses the Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API call. -> api / controllers / v1 / checks.py is called. -> the checks service is called. In the flow above, the Vitrage API serves only to pass data and invoke the check functionality. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I am wrong, please tell me anytime. :) Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through the Vitrage API? The authentication from vitrage-dashboard to the vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I have written my opinion on your comments. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate, API-based component. In my opinion, if the check component has an api address separate from Vitrage in order to receive requests from the Vitrage-dashboard, then the Vitrage-dashboard needs to know the api address of the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of the check component. 
This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address of the check component, which could let them make hosts and VMs execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user cannot see the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing a temporary list (for example, up to 10 entries), the user can review previous results, and it should also be possible for the user to clear the list. How does that sound? Thank you. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component Last time I described this as going through an api-handler, but it would be better to call the check component directly from the Vitrage-api without one. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [ mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This is a proposal that we expect to be useful from a user's perspective. From a manager's point of view, we need an implementation that minimizes the overhead the proposal incurs. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save the results in Vitrage because we just need to check them. However, while it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage, like the add-action-list panel, that does not seem to be enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. 
The check component executes the command and returns the result. Because this is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see a previous result again? My idea was to select the task, wait for the results in an open panel, and then see them instantly in the panel. If we switch to another menu before the check is complete, we will not be able to see the results. Parallel checking is a real concern. (It can cause excessive overhead.) For earlier results, it may be enough to save them temporarily while the panel is open. We can then see the previous results through the temporarily saved list. • Any thoughts on what component will implement those checks? Or maybe these will be just scripts? I think I will implement a separate component to handle these requests. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. That would be really good. It would be very useful if the Entity-Graph generated an alarm based on the check result. I think we can discuss that part in detail later. My answers are my own opinions and assumptions. If you think my implementation is wrong or inefficient, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what would be the best way to execute such actions from Vitrage. * I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks. * Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see a previous result again? * Any thoughts on what component will implement those checks? Or maybe these will be just scripts? * It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Tuesday, 27 March 2018 at 14:45 To: " openstack-dev at lists.openstack.org" < openstack-dev at lists.openstack.org> Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. 
( https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities for the entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. Currently, when there is a problem with a VM or host, or when we want to check its status, we need to access the console of each VM and host individually. This causes unnecessary work as the number of VMs and hosts increases. My suggestion is that, with a large number of VMs and hosts, we should not need to connect directly to each VM or host console to enter system commands. Instead, through this proposal, we can send a system command to the VMs and hosts in the cloud and simply check the results. I have written some use-cases to explain the function efficiently. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any agent/client that could put load on the VMs and hosts. 2. To provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts. 3. To make it possible to grasp the results at a glance. 4. To implement a component that can support many additional scenarios in a plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 119425 bytes Desc: not available URL: From james.slagle at gmail.com Tue Apr 3 13:23:53 2018 From: james.slagle at gmail.com (James Slagle) Date: Tue, 3 Apr 2018 09:23:53 -0400 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince wrote: > On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote: >> Greeting folks, >> >> During the last PTG we spent time discussing some ideas around an All-In-One >> installer, using 100% of the TripleO bits to deploy a single node OpenStack >> very similar with what we have today with the containerized undercloud and >> what we also have with other tools like Packstack or Devstack. >> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one >> >> One of the problems that we're trying to solve here is to give a simple tool >> for developers so they can both easily and quickly deploy an OpenStack for >> their needs. >> >> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly >> and without complexity, reproducing the same exact same tooling as TripleO >> is using." >> "As a Neutron developer, I need to develop a feature in Neutron and test it >> with TripleO in my local env." >> "As a TripleO dev, I need to implement a new service and test its deployment >> in my local env." >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the >> production chain, quickly and simply." >> >> Probably more use cases, but to me that's what came into my mind now. >> >> Dan kicked-off a doc patch a month ago: >> https://review.openstack.org/#/c/547038/ >> And I just went ahead and proposed a blueprint: >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one >> So hopefully we can start prototyping something during Rocky. 
> > I've actually started hacking a bit here: > > https://github.com/dprince/talon > > Very early and I haven't committed everything yet. (Probably wouldn't > have announced it to the list yet but it might help some understand > the use case). > > I'm running this on my laptop to develop TripleO containers with no > extra VM involved. > > P.S. We should call it Talon! > > Dan > >> >> Before talking about the actual implementation, I would like to gather >> feedback from people interested by the use-cases. If you recognize yourself >> in these use-cases and you're not using TripleO today to test your things >> because it's too complex to deploy, we want to hear from you. >> I want to see feedback (positive or negative) about this idea. We need to >> gather ideas, use cases, needs, before we go design a prototype in Rocky. > > Sorry dude. Already prototyping :) A related use case to all this work that takes it a step further: I think it would be useful if we could eventually further break down "openstack undercloud deploy" into just the pieces needed to: - start an ephemeral Heat container - create the Heat stack passing all requested -e's - run config-download and save the output Essentially removing the undercloud specific logic (or all-in-one specific logic in this case) from "openstack undercloud deploy" and resulting in a generic way to create the config-download playbooks for any given TripleO stack (openstack tripleo deploy?). This would be possible when using deployed-server, noop'ing Neutron networks, and using fixed IP's as those are the only OpenStack resources actually created by Heat when using a full undercloud. This would allow one to consume the ansible playbooks for a multinode overcloud using an ephemeral Heat. The same generic tooling could then be used to deploy an actual undercloud, any all-in-one configuration, or any overcloud configuration. -- -- James Slagle -- From jpena at redhat.com Tue Apr 3 14:00:03 2018 From: jpena at redhat.com (Javier Pena) Date: Tue, 3 Apr 2018 10:00:03 -0400 (EDT) Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> > Greeting folks, > > During the last PTG we spent time discussing some ideas around an All-In-One > installer, using 100% of the TripleO bits to deploy a single node OpenStack > very similar with what we have today with the containerized undercloud and > what we also have with other tools like Packstack or Devstack. > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > I'm really +1 to this. And as a Packstack developer, I'd love to see this as a mid-term Packstack replacement. So let's dive into the details. > One of the problems that we're trying to solve here is to give a simple tool > for developers so they can both easily and quickly deploy an OpenStack for > their needs. > > "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and > without complexity, reproducing the same exact same tooling as TripleO is > using." > "As a Neutron developer, I need to develop a feature in Neutron and test it > with TripleO in my local env." > "As a TripleO dev, I need to implement a new service and test its deployment > in my local env." > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > production chain, quickly and simply." 
> "As a packager, I want an easy/low overhead way to test updated packages with TripleO bits, so I can make sure they will not break any automation". > Probably more use cases, but to me that's what came into my mind now. > > Dan kicked-off a doc patch a month ago: > https://review.openstack.org/#/c/547038/ > And I just went ahead and proposed a blueprint: > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > So hopefully we can start prototyping something during Rocky. > > Before talking about the actual implementation, I would like to gather > feedback from people interested by the use-cases. If you recognize yourself > in these use-cases and you're not using TripleO today to test your things > because it's too complex to deploy, we want to hear from you. > I want to see feedback (positive or negative) about this idea. We need to > gather ideas, use cases, needs, before we go design a prototype in Rocky. > I would like to offer help with initial testing once there is something in the repos, so count me in! Regards, Javier > Thanks everyone who'll be involved, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From tenobreg at redhat.com Tue Apr 3 14:12:58 2018 From: tenobreg at redhat.com (Telles Nobrega) Date: Tue, 03 Apr 2018 14:12:58 +0000 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> Message-ID: I'd really love to this going forward, I fit perfectly on the category that I usually don't test stuff on tripleO because it can get too complex and it will take a lot of time to deploy, so this seems like a perfect solution for that. Thanks for putting this forward. On Tue, Apr 3, 2018 at 11:00 AM Javier Pena wrote: > > > Greeting folks, > > > > During the last PTG we spent time discussing some ideas around an > All-In-One > > installer, using 100% of the TripleO bits to deploy a single node > OpenStack > > very similar with what we have today with the containerized undercloud > and > > what we also have with other tools like Packstack or Devstack. > > > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > > > > I'm really +1 to this. And as a Packstack developer, I'd love to see this > as a > mid-term Packstack replacement. So let's dive into the details. > > > One of the problems that we're trying to solve here is to give a simple > tool > > for developers so they can both easily and quickly deploy an OpenStack > for > > their needs. > > > > "As a developer, I need to deploy OpenStack in a VM on my laptop, > quickly and > > without complexity, reproducing the same exact same tooling as TripleO is > > using." > > "As a Neutron developer, I need to develop a feature in Neutron and test > it > > with TripleO in my local env." > > "As a TripleO dev, I need to implement a new service and test its > deployment > > in my local env." > > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > > production chain, quickly and simply." > > > > "As a packager, I want an easy/low overhead way to test updated packages > with TripleO bits, so I can make sure they will not break any automation". 
> > > Probably more use cases, but to me that's what came into my mind now. > > > > Dan kicked-off a doc patch a month ago: > > https://review.openstack.org/#/c/547038/ > > And I just went ahead and proposed a blueprint: > > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > > So hopefully we can start prototyping something during Rocky. > > > > Before talking about the actual implementation, I would like to gather > > feedback from people interested by the use-cases. If you recognize > yourself > > in these use-cases and you're not using TripleO today to test your things > > because it's too complex to deploy, we want to hear from you. > > I want to see feedback (positive or negative) about this idea. We need to > > gather ideas, use cases, needs, before we go design a prototype in Rocky. > > > > I would like to offer help with initial testing once there is something in > the repos, so count me in! > > Regards, > Javier > > > Thanks everyone who'll be involved, > > -- > > Emilien Macchi > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- TELLES NOBREGA SOFTWARE ENGINEER Red Hat Brasil Av. Brg. Faria Lima, 3900 - 8º andar - Itaim Bibi, São Paulo tenobreg at redhat.com TRIED. TESTED. TRUSTED. Red Hat é reconhecida entre as melhores empresas para trabalhar no Brasil pelo Great Place to Work. -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Apr 3 14:59:54 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 03 Apr 2018 14:59:54 +0000 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> Message-ID: On Tue, 3 Apr 2018 at 10:00 Javier Pena wrote: > > > Greeting folks, > > > > During the last PTG we spent time discussing some ideas around an > All-In-One > > installer, using 100% of the TripleO bits to deploy a single node > OpenStack > > very similar with what we have today with the containerized undercloud > and > > what we also have with other tools like Packstack or Devstack. > > > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > > > > I'm really +1 to this. And as a Packstack developer, I'd love to see this > as a > mid-term Packstack replacement. So let's dive into the details. > > > One of the problems that we're trying to solve here is to give a simple > tool > > for developers so they can both easily and quickly deploy an OpenStack > for > > their needs. > > > > "As a developer, I need to deploy OpenStack in a VM on my laptop, > quickly and > > without complexity, reproducing the same exact same tooling as TripleO is > > using." > > "As a Neutron developer, I need to develop a feature in Neutron and test > it > > with TripleO in my local env." > > "As a TripleO dev, I need to implement a new service and test its > deployment > > in my local env." 
> > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > > production chain, quickly and simply." > > > > "As a packager, I want an easy/low overhead way to test updated packages > with TripleO bits, so I can make sure they will not break any automation". > I suspect we need to not only update packages, but also update containers, wdyt? > > > Probably more use cases, but to me that's what came into my mind now. > > > > Dan kicked-off a doc patch a month ago: > > https://review.openstack.org/#/c/547038/ > > And I just went ahead and proposed a blueprint: > > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > > So hopefully we can start prototyping something during Rocky. > > > > Before talking about the actual implementation, I would like to gather > > feedback from people interested by the use-cases. If you recognize > yourself > > in these use-cases and you're not using TripleO today to test your things > > because it's too complex to deploy, we want to hear from you. > > I want to see feedback (positive or negative) about this idea. We need to > > gather ideas, use cases, needs, before we go design a prototype in Rocky. > > > > I would like to offer help with initial testing once there is something in > the repos, so count me in! > > Regards, > Javier > > > Thanks everyone who'll be involved, > > -- > > Emilien Macchi > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbayer at redhat.com Tue Apr 3 15:07:16 2018 From: mbayer at redhat.com (Michael Bayer) Date: Tue, 3 Apr 2018 11:07:16 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? Message-ID: The MySQL / MariaDB variants we use nowadays default to innodb_file_per_table=ON and we also set this flag to ON in installer tools like TripleO. The reason we like file per table is so that we don't grow an enormous ibdata file that can't be shrunk without rebuilding the database. Instead, we have lots of little .ibd datafiles for each table throughout each openstack database. But now we have the issue that these files also can benefit from periodic optimization which can shrink them and also have a beneficial effect on performance. The OPTIMIZE TABLE statement achieves this, but as would be expected it itself can lock tables for potentially a long time. Googling around reveals a lot of controversy, as various users and publications suggest that OPTIMIZE is never needed and would have only a negligible effect on performance. However here we seek to use OPTIMIZE so that we can reclaim disk space on tables that have lots of DELETE activity, such as keystone "token" and ceilometer "sample". Questions for the group: 1. is OPTIMIZE table worthwhile to be run for tables where the datafile has grown much larger than the number of rows we have in the table? 2. from people's production experience how safe is it to run OPTIMIZE, e.g. how long is it locking tables, etc. 3. 
is there a heuristic we can use to measure when we might run this - e.g. my plan is we measure the size in bytes of each row in a table and then compare that in some ratio to the size of the corresponding .ibd file, and if the .ibd file is N times larger than the logical data size we run OPTIMIZE? 4. I'd like to propose that this job of scanning table datafile sizes in ratio to logical data sizes, then running OPTIMIZE, be a utility script that is delivered via oslo.db, and would run for all innodb tables within a target MySQL/MariaDB server generically. That is, I really *don't* want this to be a script that Keystone, Nova, Ceilometer etc. are all maintaining and delivering themselves. This should be done as a generic pass on a whole database (noting, again, we are only running it for very specific InnoDB tables that we observe have a poor logical/physical size ratio). 5. for Galera this gets more tricky, as we might want to run OPTIMIZE on individual nodes directly. The script at [1] illustrates how to run this on individual nodes one at a time. More succinctly, the Q is: a. OPTIMIZE, yes or no? b. oslo.db script to run generically, yes or no? thanks for your thoughts! [1] https://github.com/deimosfr/galera_innoptimizer From jaypipes at gmail.com Tue Apr 3 15:13:30 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Apr 2018 11:13:30 -0400 Subject: [openstack-dev] [nova][placement] Consumer generations (allowing multiple clients to allocate for an instance) Message-ID: <7e407bc4-5edd-b306-52aa-69f7e961ba14@gmail.com> Stackers, Today, a few of us had a chat to discuss changes to the Placement REST API [1] that will allow multiple clients to safely update a single consumer's set of resource allocations. This email is to summarize the decisions coming out of that chat. Note that Ed is currently updating the following nova-spec: https://review.openstack.org/#/c/556971/ The decisions made were as follows: 1) The GET /allocations/{consumer_uuid} REST API endpoint will now have a required consumer_generation field in the response. This will be an integer value. 2) The PUT /allocations/{consumer_uuid} REST API endpoint will have a new consumer_generation required field in the request payload. 3) Callers to PUT /allocations/{consumer_uuid} that believe they are the first caller to set allocations for the consumer will set consumer_generation to None. 4) If consumer_generation is None in the request to PUT /allocations/{consumer_uuid} and the placement service notes that allocations already exist for that consumer, a 409 Conflict will be returned. The caller will then need to GET /allocations/{consumer_uuid} to retrieve the consumer's current generation and allocations, merge its new resources into those allocations, and retry PUT /allocations/{consumer_uuid}, passing the merged allocation set and consumer generation (see the sketch below). 5) The POST /allocations REST API endpoint is currently only used by nova when performing migrate or resize operations for a virtual machine. The POST /allocations REST API request payload will contain a new required consumer_generation field in each top-level dict element corresponding to the allocations to overwrite for one or more consumers. (the migrate/resize code paths use multiple consumer UUIDs to identify the resources that are allocated to the source and destination hosts) 6) The HTTP response codes for both PUT /allocations/{consumer_uuid} and POST /allocations will continue to be 204 No Content. 
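To make the client-side flow of points 1-4 concrete, here is a minimal sketch using the requests library (the placement root URL, token and microversion header are placeholders, and the merge shown is naive; it overwrites whole providers rather than merging per-resource-class amounts):

    # Illustrative sketch of the PUT /allocations/{consumer_uuid}
    # retry-on-conflict flow described above. URL, token and
    # microversion are placeholders, not real values.
    import requests

    PLACEMENT = 'http://placement.example.com'
    HEADERS = {'X-Auth-Token': 'TOKEN',
               'OpenStack-API-Version': 'placement 1.X'}  # assumed

    def put_allocations(consumer_uuid, new_allocations):
        url = '%s/allocations/%s' % (PLACEMENT, consumer_uuid)
        # First attempt: assume we are the first writer (point 3).
        payload = {'allocations': new_allocations,
                   'consumer_generation': None}
        resp = requests.put(url, json=payload, headers=HEADERS)
        if resp.status_code == 409:
            # Allocations already exist (point 4): fetch the current
            # allocations and generation, merge, and retry.
            current = requests.get(url, headers=HEADERS).json()
            merged = dict(current['allocations'])
            merged.update(new_allocations)  # naive merge
            payload = {'allocations': merged,
                       'consumer_generation':
                           current['consumer_generation']}
            resp = requests.put(url, json=payload, headers=HEADERS)
        resp.raise_for_status()  # expect 204 No Content (point 6)

If yet another writer updates the consumer between the GET and the retried PUT, the generation will no longer match and the service can return 409 again, so a real client would loop rather than retry just once.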
Thanks, -jay [1] https://docs.openstack.org/nova/latest/user/placement.html From jpena at redhat.com Tue Apr 3 15:36:31 2018 From: jpena at redhat.com (Javier Pena) Date: Tue, 3 Apr 2018 11:36:31 -0400 (EDT) Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com> Message-ID: <1413179443.14139196.1522769791104.JavaMail.zimbra@redhat.com> ----- Original Message ----- > On Tue, 3 Apr 2018 at 10:00 Javier Pena < jpena at redhat.com > wrote: > > > Greeting folks, > > > > > > > > During the last PTG we spent time discussing some ideas around an > > > All-In-One > > > > installer, using 100% of the TripleO bits to deploy a single node > > > OpenStack > > > > very similar with what we have today with the containerized undercloud > > > and > > > > what we also have with other tools like Packstack or Devstack. > > > > > > > > https://etherpad.openstack.org/p/tripleo-rocky-all-in-one > > > > > > > I'm really +1 to this. And as a Packstack developer, I'd love to see this > > as > > a > > > mid-term Packstack replacement. So let's dive into the details. > > > > One of the problems that we're trying to solve here is to give a simple > > > tool > > > > for developers so they can both easily and quickly deploy an OpenStack > > > for > > > > their needs. > > > > > > > > "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly > > > and > > > > without complexity, reproducing the same exact same tooling as TripleO is > > > > using." > > > > "As a Neutron developer, I need to develop a feature in Neutron and test > > > it > > > > with TripleO in my local env." > > > > "As a TripleO dev, I need to implement a new service and test its > > > deployment > > > > in my local env." > > > > "As a developer, I need to reproduce a bug in TripleO CI that blocks the > > > > production chain, quickly and simply." > > > > > > > "As a packager, I want an easy/low overhead way to test updated packages > > with > > TripleO bits, so I can make sure they will not break any automation". > > I suspect we need to not only update packages, but also update containers, > wdyt? I'm being implementation-agnostic in my requirement on purpose :). It could be either a new container including the updates, or updating the existing container with the new packages. > > > Probably more use cases, but to me that's what came into my mind now. > > > > > > > > Dan kicked-off a doc patch a month ago: > > > > https://review.openstack.org/#/c/547038/ > > > > And I just went ahead and proposed a blueprint: > > > > https://blueprints.launchpad.net/tripleo/+spec/all-in-one > > > > So hopefully we can start prototyping something during Rocky. > > > > > > > > Before talking about the actual implementation, I would like to gather > > > > feedback from people interested by the use-cases. If you recognize > > > yourself > > > > in these use-cases and you're not using TripleO today to test your things > > > > because it's too complex to deploy, we want to hear from you. > > > > I want to see feedback (positive or negative) about this idea. We need to > > > > gather ideas, use cases, needs, before we go design a prototype in Rocky. > > > > > > > I would like to offer help with initial testing once there is something in > > the repos, so count me in! 
> > > Regards, > > > Javier > > > > Thanks everyone who'll be involved, > > > > -- > > > > Emilien Macchi > > > > > > > > __________________________________________________________________________ > > > > OpenStack Development Mailing List (not for usage questions) > > > > Unsubscribe: > > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Apr 3 15:41:15 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Apr 2018 11:41:15 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? In-Reply-To: References: Message-ID: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> On 04/03/2018 11:07 AM, Michael Bayer wrote: > The MySQL / MariaDB variants we use nowadays default to > innodb_file_per_table=ON and we also set this flag to ON in installer > tools like TripleO. The reason we like file per table is so that > we don't grow an enormous ibdata file that can't be shrunk without > rebuilding the database. Instead, we have lots of little .ibd > datafiles for each table throughout each openstack database. > > But now we have the issue that these files also can benefit from > periodic optimization which can shrink them and also have a beneficial > effect on performance. The OPTIMIZE TABLE statement achieves this, > but as would be expected it itself can lock tables for potentially a > long time. Googling around reveals a lot of controversy, as various > users and publications suggest that OPTIMIZE is never needed and would > have only a negligible effect on performance. However here we seek > to use OPTIMIZE so that we can reclaim disk space on tables that have > lots of DELETE activity, such as keystone "token" and ceilometer > "sample". > > Questions for the group: > > 1. is OPTIMIZE table worthwhile to be run for tables where the > datafile has grown much larger than the number of rows we have in the > table? Possibly, though it's questionable to use MySQL/InnoDB for storing transient data that is deleted often like ceilometer samples and keystone tokens. A much better solution is to use RDBMS partitioning so you can simply ALTER TABLE .. DROP PARTITION those partitions that are no longer relevant (and don't even bother DELETEing individual rows) or, in the case of Ceilometer samples, don't use a traditional RDBMS for timeseries data at all... But since that is unfortunately already the case, yes it is probably a good idea to OPTIMIZE TABLE on those tables. > 2. from people's production experience how safe is it to run OPTIMIZE, > e.g. how long is it locking tables, etc. Is it safe? Yes. Does it lock the entire table for the duration of the operation? No. 
It uses online DDL operations: https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB tables. > 3. is there a heuristic we can use to measure when we might run this > - e.g. my plan is we measure the size in bytes of each row in a table > and then compare that in some ratio to the size of the corresponding > .ibd file, and if the .ibd file is N times larger than the logical data > size we run OPTIMIZE? I don't believe so, no. Most recommendations I see are to simply run OPTIMIZE TABLE in a cron job on each table periodically. > 4. I'd like to propose that this job of scanning table datafile sizes in > ratio to logical data sizes, then running OPTIMIZE, be a utility > script that is delivered via oslo.db, and would run for all innodb > tables within a target MySQL/MariaDB server generically. That is, I > really *don't* want this to be a script that Keystone, Nova, Ceilometer > etc. are all maintaining and delivering themselves. This should be done > as a generic pass on a whole database (noting, again, we are only > running it for very specific InnoDB tables that we observe have a poor > logical/physical size ratio). I don't believe this should be in oslo.db. This is strictly the purview of deployment tools and should stay there, IMHO. > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE > on individual nodes directly. The script at [1] illustrates how to > run this on individual nodes one at a time. > > More succinctly, the Q is: > > a. OPTIMIZE, yes or no? Yes. > b. oslo.db script to run generically, yes or no? No. Just have Triple-O install galera_innoptimizer and run it in a cron job. Best, -jay > thanks for your thoughts! > > > > [1] https://github.com/deimosfr/galera_innoptimizer > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From chris.friesen at windriver.com Tue Apr 3 15:44:13 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 3 Apr 2018 09:44:13 -0600 Subject: [openstack-dev] [novaclient] invoking methods on the same client object in different threads caused malformed requests In-Reply-To: <1fb56ae6b328402fb3dd58dde67c2002@HXTBJIDCEMVIW02.hxtcorp.net> References: <1fb56ae6b328402fb3dd58dde67c2002@HXTBJIDCEMVIW02.hxtcorp.net> Message-ID: <5AC3A14D.9060908@windriver.com> On 04/03/2018 04:25 AM, Xiong, Huan wrote: > Hi, > > I'm using a cloud benchmarking tool [1], which creates a *single* nova > client object in the main thread and invokes methods on that object in different > worker threads. I found it generated malformed requests at random (my > system has python-novaclient 10.1.0 installed). The root cause was that > some methods in novaclient (e.g., those in images.py and networks.py) > changed the client object's service_type. Since all threads shared a single > client object, the change caused other threads to generate malformed requests > and hence the failure. > > I wonder if this is a known issue for novaclient, or if the above approach is > not supported? In general, unless something says it is thread-safe, you should assume it is not. Chris From mbayer at redhat.com Tue Apr 3 15:51:30 2018 From: mbayer at redhat.com (Michael Bayer) Date: Tue, 3 Apr 2018 11:51:30 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? 
In-Reply-To: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> Message-ID: On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes wrote: > On 04/03/2018 11:07 AM, Michael Bayer wrote: >> > > Yes. > >> b. oslo.db script to run generically, yes or no? > > > No. Just have Triple-O install galera_innoptimizer and run it in a cron job. OK, here are the issues I have with galera_innoptimizer: 1. only runs on Galera. This should work on a non-galera db as well 2. hardcoded to MySQLdb / mysqlclient. We don't install that driver anymore. 3. is just running OPTIMIZE on every table across the board, and at best you can give it a list of tables. I was hoping to not add more hardcoded cross-dependencies to tripleo, as this means individual projects would need to affect how the script is run which means we have to again start shipping individual per-app crons that require eternal babysitting. What failures do you foresee if I tried to make it compare the logical data size to the physical file size? since I'm going here for file size optimization only. or just too complicated / brittle ? > > Best, > -jay > >> thanks for your thoughts! >> >> >> >> [1] https://github.com/deimosfr/galera_innoptimizer >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From zbitter at redhat.com Tue Apr 3 16:04:05 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 3 Apr 2018 12:04:05 -0400 Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc In-Reply-To: <1522751305.3618.17.camel@redhat.com> References: <1522247496.4003.31.camel@redhat.com> <7c7d3421-9bfb-fa6e-69fe-6e3baea762cf@redhat.com> <1522751305.3618.17.camel@redhat.com> Message-ID: On 03/04/18 06:28, Stephen Finucane wrote: > On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote: >> On 28/03/18 10:31, Stephen Finucane wrote: >>> As noted last week [1], we're trying to move away from pbr's autodoc >>> feature as part of the new docs PTI. To that end, I've created >>> sphinxcontrib-apidoc, which should do what pbr was previously doing for >>> us by via a Sphinx extension. >>> >>> https://pypi.org/project/sphinxcontrib-apidoc/ >>> >>> This works by reading some configuration from your documentation's >>> 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no >>> longer need pbr to do this for. >>> >>> I have pushed version 0.1.0 to PyPi already but before I add this to >>> global requirements, I'd like to ensure things are working as expected. >>> smcginnis was kind enough to test this out on glance and it seemed to >>> work for him but I'd appreciate additional data points. The >>> configuration steps for this extension are provided in the above link. >>> To test this yourself, you simply need to do the following: >>> >>> 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or >>> doc/requirements.txt file >>> 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]' >>> configuration from 'setup.cfg' >>> 3. 
Replace 'python setup.py build_sphinx' with a call to 'sphinx-build' >>> 4. Run 'tox -e docs' >>> 5. Profit? >>> >>> Be sure to let me know if anyone encounters issues. If not, I'll be >>> pushing for this to be included in global requirements so we can start >>> the migration. >> >> Thanks Stephen! I tried it out with no problems: >> >> https://review.openstack.org/558262 >> >> However, there are a couple of differences compared to how pbr did things. >> >> 1) pbr can generate an 'autoindex' file with a flat list of modules >> (this appears to be configurable with the autodoc_index_modules option), >> but apidoc only generates a 'modules' file with a hierarchical list of >> modules. This is easy to work around, but I guess it needs to be added >> to the instructions to check that you're not relying on it. > > Yup, smcginnis and I discussed this at some point. PBR has two > different ways of generating API documentation: 'autodoc_tree', which > is based on 'sphinx-apidoc', and 'autodoc', which is custom (and > presumably legacy). This extension replaces the former of those but, as > you note below, it seems 'sphinx-apidoc' can be wrangled into > generating something approaching the latter. That explains quite a lot that was confusing me :D >> 2) pbr generates a page per module; this plugin generates a page per >> package. This results in waaaay too much information on a page to be >> able to navigate it comfortably IMHO. To the point where it's easier to >> read the code. (It also breaks existing links, if you care about that >> kind of thing.) I sent you a PR to add an option to pass --separate: >> >> https://github.com/sphinx-contrib/apidoc/pull/1 > > Thanks for that. I've merged it and will use it as the basis of a 0.2.0 > release assuming nothing else pops up in the next day or two. Thanks! > I'm not > sure what we can do about the broken links though - maybe use the > redirect infrastructure to just send everyone to the new place? I guess > I can add this to the guide I'm adding to the README on migrating from > pbr. No links break if you use the apidoc_separate_modules=True option, so I would recommend any projects currently generating a page per module (i.e. using 'autodoc' instead of 'autodoc_tree') should enable that option to keep continuity. cheers, Zane. From jaypipes at gmail.com Tue Apr 3 16:13:00 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 3 Apr 2018 12:13:00 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? In-Reply-To: References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> Message-ID: On 04/03/2018 11:51 AM, Michael Bayer wrote: > On Tue, Apr 3, 2018 at 11:41 AM, Jay Pipes wrote: >> On 04/03/2018 11:07 AM, Michael Bayer wrote: >>> >> >> Yes. >> >>> b. oslo.db script to run generically, yes or no? >> >> >> No. Just have Triple-O install galera_innoptimizer and run it in a cron job. > > OK, here are the issues I have with galera_innoptimizer: > > 1. only runs on Galera. This should work on a non-galera db as well To recap what we just discussed on IRC... it's not necessary to do this for non-galera DBs because non-galera DBs don't use manual locking for OPTIMIZE TABLE (MySQL 5.7 online DDL changes ensure OPTIMIZE TABLE for InnoDB is a non-locking operation). Galera enforces a strict ordering with its total order isolation mode by default for DDL operations, which is what the galera_innoptimizer thing is doing: turning off that total order isolation temporarily and executing optimize table, then turning on total order isolation again. > 2. 
hardcoded to MySQLdb / mysqlclient. We don't install that driver anymore. > > 3. is just running OPTIMIZE on every table across the board, and at > best you can give it a list of tables. I was hoping to not add more > hardcoded cross-dependencies to tripleo, as this means individual > projects would need to affect how the script is run which means we > have to again start shipping individual per-app crons that require > eternal babysitting. I have no issues with you creating a better tool :) Just not in oslo.db... > What failures do you foresee if I tried to make it compare the logical > data size to the physical file size? since I'm going here for file > size optimization only. or just too complicated / brittle ? Yeah, you are prematurely optimizing (pun intended). No need. Just run OPTIMIZE TABLE every day on all tables in a cron job. With modern MySQL, there's really not an issue with that. Best, -jay From sfinucan at redhat.com Tue Apr 3 16:17:22 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 03 Apr 2018 17:17:22 +0100 Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc In-Reply-To: References: <1522247496.4003.31.camel@redhat.com> <7c7d3421-9bfb-fa6e-69fe-6e3baea762cf@redhat.com> <1522751305.3618.17.camel@redhat.com> Message-ID: <1522772242.3618.31.camel@redhat.com> On Tue, 2018-04-03 at 12:04 -0400, Zane Bitter wrote: > On 03/04/18 06:28, Stephen Finucane wrote: > > On Mon, 2018-04-02 at 19:41 -0400, Zane Bitter wrote: > > > On 28/03/18 10:31, Stephen Finucane wrote: > > > > As noted last week [1], we're trying to move away from pbr's autodoc > > > > feature as part of the new docs PTI. To that end, I've created > > > > sphinxcontrib-apidoc, which should do what pbr was previously doing for > > > > us by via a Sphinx extension. > > > > > > > > https://pypi.org/project/sphinxcontrib-apidoc/ > > > > > > > > This works by reading some configuration from your documentation's > > > > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no > > > > longer need pbr to do this for. > > > > > > > > I have pushed version 0.1.0 to PyPi already but before I add this to > > > > global requirements, I'd like to ensure things are working as expected. > > > > smcginnis was kind enough to test this out on glance and it seemed to > > > > work for him but I'd appreciate additional data points. The > > > > configuration steps for this extension are provided in the above link. > > > > To test this yourself, you simply need to do the following: > > > > > > > > 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or > > > > doc/requirements.txt file > > > > 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]' > > > > configuration from 'setup.cfg' > > > > 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build' > > > > 4. Run 'tox -e docs' > > > > 5. Profit? > > > > > > > > Be sure to let me know if anyone encounters issues. If not, I'll be > > > > pushing for this to be included in global requirements so we can start > > > > the migration. > > > > > > Thanks Stephen! I tried it out with no problems: > > > > > > https://review.openstack.org/558262 > > > > > > However, there are a couple of differences compared to how pbr did things. > > > > > > 1) pbr can generate an 'autoindex' file with a flat list of modules > > > (this appears to be configurable with the autodoc_index_modules option), > > > but apidoc only generates a 'modules' file with a hierarchical list of > > > modules. 
This is easy to work around, but I guess it needs to be added > > > to the instructions to check that you're not relying on it. > > > > Yup, smcginnis and I discussed this at some point. PBR has two > > different ways of generating API documentation: 'autodoc_tree', which > > is based on 'sphinx-apidoc', and 'autodoc', which is custom (and > > presumably legacy). This extension replaces the former of those but, as > > you note below, it seems 'sphinx-apidoc' can be wrangled into > > generating something approaching the latter. > > That explains quite a lot that was confusing me :D > > > > 2) pbr generates a page per module; this plugin generates a page per > > > package. This results in waaaay too much information on a page to be > > > able to navigate it comfortably IMHO. To the point where it's easier to > > > read the code. (It also breaks existing links, if you care about that > > > kind of thing.) I sent you a PR to add an option to pass --separate: > > > > > > https://github.com/sphinx-contrib/apidoc/pull/1 > > > > Thanks for that. I've merged it and will use it as the basis of a 0.2.0 > > release assuming nothing else pops up in the next day or two. > > Thanks! > > > I'm not sure what we can do about the broken links though - maybe use the > > redirect infrastructure to just send everyone to the new place? I guess > > I can add this to the guide I'm adding to the README on migrating from > > pbr. > > No links break if you use the apidoc_separate_modules=True option, so I > would recommend any projects currently generating a page per module > (i.e. using 'autodoc' instead of 'autodoc_tree') should enable that > option to keep continuity. Fancy taking a look at [1], in that case? This should clarify everything. [1] https://github.com/sphinx-contrib/apidoc/pull/3 Stephen From cdent+os at anticdent.org Tue Apr 3 16:40:32 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 3 Apr 2018 17:40:32 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-14 Message-ID: html: https://anticdent.org/tc-report-18-14.html If the [logs of #openstack-tc](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/index.html) are any indicator of reality (they are not), then the only things that happened in the past week are that the next OpenStack release got a name, and the TC talked about how to evaluate projects applying to be official. # Stein Yes, the people have spoken and their voices were almost heard. The first choice for the name of the "S" release of OpenStack, "Solar", foundered at the desk of legal and "Stein" won the day and there was much emojifying: 🍺. >From "Rocky" comes...another rock. Not ein Maß. Presumably such details will not limit the rejoicing. Associated [chatter](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-29.log.html#t2018-03-29T19:10:52). # Official Projects The [application of Adjutant](https://review.openstack.org/#/c/553643/) continues to drive some discussion, both on the review and in IRC. On [Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-28.log.html#t2018-03-28T12:04:06) I dropped a wall of text on the review, expressing my doubt and confusion over what rules we are supposed to be using when evaluating applicants. Then at [Thursday's office hour](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-03-29.log.html#t2018-03-29T15:04:56) the discussion picked up with a larger group. 
There were at least three different threads of conversation happening at once: * comments related to the general topics I raised * evaluating Adjutant itself in terms of its impact on OpenStack * trying to get (and encourage the getting of) input from real operators about their thoughts on the usefulness of Adjutant (or something like it) The last was an effort to stop speculating, which is something we do too much. The second was an effort to not be moving the goalposts in the middle of an application, despite the confusion. The first had a lot of ideas, but none were resolved (and there's a pattern there) so there's a plan to have a session about it at the Forum. If you look at the [planning etherpad](https://etherpad.openstack.org/p/YVR-forum-TC-sessions) you'll see that there are two different topics related to project applications: one is for Adjutant specifically, in case things aren't resolved by then (we hope they will be); the other is a general session on really trying to dig deep into the questions and figure out what we're trying to do and be when we say "official". These are separate sessions very much on purpose. The questions reach into the core of what OpenStack is, so it ought to be an interesting session. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From dmsimard at redhat.com Tue Apr 3 17:00:47 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Tue, 3 Apr 2018 13:00:47 -0400 Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports In-Reply-To: References: <20180329231235.GA15222@localhost.localdomain> Message-ID: On Thu, Mar 29, 2018 at 9:05 PM, Jeffrey Zhang wrote: > cool. kolla will try to implement it. Cool ! For reference, openstack-ansible already retooled their log collection to copy the database instead of generating the report [1]. [1]: https://review.openstack.org/#/c/557921/ David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] From dprince at redhat.com Tue Apr 3 17:18:21 2018 From: dprince at redhat.com (Dan Prince) Date: Tue, 3 Apr 2018 13:18:21 -0400 Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Tue, Apr 3, 2018 at 9:23 AM, James Slagle wrote: > On Mon, Apr 2, 2018 at 9:05 PM, Dan Prince wrote: >> On Thu, Mar 29, 2018 at 5:32 PM, Emilien Macchi wrote: >>> Greeting folks, >>> >>> During the last PTG we spent time discussing some ideas around an All-In-One >>> installer, using 100% of the TripleO bits to deploy a single node OpenStack >>> very similar with what we have today with the containerized undercloud and >>> what we also have with other tools like Packstack or Devstack. >>> >>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one >>> >>> One of the problems that we're trying to solve here is to give a simple tool >>> for developers so they can both easily and quickly deploy an OpenStack for >>> their needs. >>> >>> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly >>> and without complexity, reproducing the same exact same tooling as TripleO >>> is using." >>> "As a Neutron developer, I need to develop a feature in Neutron and test it >>> with TripleO in my local env." >>> "As a TripleO dev, I need to implement a new service and test its deployment >>> in my local env." >>> "As a developer, I need to reproduce a bug in TripleO CI that blocks the >>> production chain, quickly and simply." 
>>>
>>> Probably more use cases, but to me that's what came into my mind now.
>>>
>>> Dan kicked-off a doc patch a month ago:
>>> https://review.openstack.org/#/c/547038/
>>> And I just went ahead and proposed a blueprint:
>>> https://blueprints.launchpad.net/tripleo/+spec/all-in-one
>>> So hopefully we can start prototyping something during Rocky.
>>
>> I've actually started hacking a bit here:
>>
>> https://github.com/dprince/talon
>>
>> Very early and I haven't committed everything yet. (Probably wouldn't
>> have announced it to the list yet but it might help some understand
>> the use case).
>>
>> I'm running this on my laptop to develop TripleO containers with no
>> extra VM involved.
>>
>> P.S. We should call it Talon!
>>
>> Dan
>>
>>>
>>> Before talking about the actual implementation, I would like to gather
>>> feedback from people interested by the use-cases. If you recognize yourself
>>> in these use-cases and you're not using TripleO today to test your things
>>> because it's too complex to deploy, we want to hear from you.
>>> I want to see feedback (positive or negative) about this idea. We need to
>>> gather ideas, use cases, needs, before we go design a prototype in Rocky.
>>
>> Sorry dude. Already prototyping :)
>
> A related use case to all this work that takes it a step further:
>
> I think it would be useful if we could eventually further break down
> "openstack undercloud deploy" into just the pieces needed to:
>
> - start an ephemeral Heat container
> - create the Heat stack passing all requested -e's
> - run config-download and save the output

Yes! This is pretty similar to what we outlined at the PTG here [1]
(lines 21-23). The high-level workflow is already possible now if you
use the new --output-only option to config download [2], and it is
exactly what I was doing with the Talon prototype. Essentially I am
trying to take it as far as possible with our existing commands and
then bring that to the group as a "how do we want to package this
better?" discussion.

One difference I've taken is that instead of using a Heat container I
use a python-tripleoclient container (which I aim to push to Kolla if
I can whittle it down). This has the benefit of letting you do
everything in a single container. I also needed a few other
cherry-picks [3] to pull it off, to do things like make
docker-puppet.py consume puppet-tripleo from within the container
instead of bind mounting it from the host, and to disable puppet from
running on the host machine entirely (something I do not want on my
laptop).

The nice thing about all of this is that you end up with a
self-contained 'Heat template -> Ansible' generator that can translate
a set of heat templates into ansible playbooks which you then just
run.

What it does highlight, however, is that there are still some
dependencies that must be on each host in order for our Ansible
playbooks to work. Things like paunch, and most of the heat-agent
hooks, still need to be on each host OS or the resulting playbooks
won't work. Continuing the work to convert things to pure Ansible
without requiring any heat-agents to be installed would make things
even nicer, I think. But as it stands today it is already a nice way
to hack on tripleo-heat-templates in a very tight loop. No VMs or
quickstart required.
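For anyone who wants to poke at that loop by hand, it is roughly the
following -- a sketch only, with an illustrative stack name and paths,
so double-check the exact flags against your client's --help:

    # Render the config-download playbooks for an existing Heat stack
    # ("overcloud" is just the default stack name here).
    openstack overcloud config download \
        --name overcloud \
        --config-dir ~/overcloud-config

    # Then iterate by running the generated playbooks directly; the
    # deploy_steps_playbook.yaml entry point is part of the
    # config-download output, and the dynamic inventory script is
    # assumed to be installed on the undercloud.
    cd ~/overcloud-config
    ansible-playbook -i /usr/bin/tripleo-ansible-inventory \
        deploy_steps_playbook.yaml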
Dan [1] https://etherpad.openstack.org/p/tripleo-rocky-all-in-one [2] http://git.openstack.org/cgit/openstack/python-tripleoclient/commit/?id=50a093247742be896bbbeb91408eeaf0362b5085 [3] https://github.com/dprince/talon/blob/master/containers/tripleoclient/tripleoclient.sh#L31 > > Essentially removing the undercloud specific logic (or all-in-one > specific logic in this case) from "openstack undercloud deploy" and > resulting in a generic way to create the config-download playbooks for > any given TripleO stack (openstack tripleo depoy?). This would be > possible when using deployed-server, noop'ing Neutron networks, and > using fixed IP's as those are the only OpenStack resources actually > created by Heat when using a full undercloud. > > This would allow one to consume the ansible playbooks for a multinode > overcloud using an ephemeral Heat. > > The same generic tooling could then be used to deploy an actual > undercloud, any all-in-one configuration, or any overcloud > configuration. > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Tue Apr 3 17:23:04 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 3 Apr 2018 12:23:04 -0500 Subject: [openstack-dev] [All] New PBR release coming soon In-Reply-To: References: Message-ID: The new pbr version is now in upper-constraints, so it should be getting exercised in ci going forward. Please report any issues to #openstack-oslo. On 03/26/2018 11:56 AM, Ben Nemec wrote: > Hi, > > Since this will potentially affect the majority of OpenStack projects, I > wanted to give everyone some advance notice.  PBR[1] hasn't been > released since last summer, and as a result none of the bug fixes or new > features that have gone in since then are available to users.  Because > of some feature removals that have happened, this will be a major > release and due to the number of changes since the last release there's > a higher probability of issues. > > We want to get this potentially painful release out of the way early in > the cycle and then resume regular releases going forward.  If you know > of any reason we shouldn't do this right now please respond ASAP. > > Thanks. > > -Ben > > 1: https://docs.openstack.org/pbr/latest/ From doug at doughellmann.com Tue Apr 3 17:44:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Apr 2018 13:44:19 -0400 Subject: [openstack-dev] [nova][oslo] what to do with problematic mocking in nova unit tests In-Reply-To: References: <1522257468-sup-81@lrrr.local> <4C87D8A7-9A50-4141-A667-6F7B1425B6E3@doughellmann.com> Message-ID: <1522777325-sup-3403@lrrr.local> Excerpts from Eric Fried's message of 2018-03-31 16:12:22 -0500: > Hi Doug, I made this [2] for you. I tested it locally with oslo.config > master, and whereas I started off with a slightly different set of > errors than you show at [1], they were in the same suites. Since I > didn't want to tox the world locally, I went ahead and added a > Depends-On from [3]. Let's see how it plays out. > > >> [1] > http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881 > [2] https://review.openstack.org/#/c/558084/ > [3] https://review.openstack.org/#/c/557012/ > > -efried Thanks, Eric! 
That looks like it should do the trick. I'll give it a try.

Doug

> > On 03/30/2018 06:35 AM, Doug Hellmann wrote:
> > Anyone?
> >
> >> On Mar 28, 2018, at 1:26 PM, Doug Hellmann wrote:
> >>
> >> In the course of preparing the next release of oslo.config, Ben noticed
> >> that nova's unit tests fail with oslo.config master [1].
> >>
> >> The underlying issue is that the tests mock things that oslo.config
> >> is now calling as part of determining where options are being set
> >> in code. This isn't an API change in oslo.config, and it is all
> >> transparent for normal uses of the library. But the mocks replace
> >> os.path.exists() and open() for the entire duration of a test
> >> function (not just for the isolated application code being tested),
> >> and so the library behavior change surfaces as a test error.
> >>
> >> I'm not really in a position to go through and clean up the use of
> >> mocks in those (and other?) tests myself, and I would like to not
> >> have to revert the feature work in oslo.config, especially since
> >> we did it for the placement API stuff for the nova team.
> >>
> >> I'm looking for ideas about what to do.
> >>
> >> Doug
> >>
> >> [1] http://logs.openstack.org/12/557012/1/check/cross-nova-py27/37b2a7c/job-output.txt.gz#_2018-03-27_21_41_09_883881
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dprince at redhat.com Tue Apr 3 17:53:05 2018
From: dprince at redhat.com (Dan Prince)
Date: Tue, 3 Apr 2018 13:53:05 -0400
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
In-Reply-To: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com>
References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com>
Message-ID: 

On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
>
>> Greeting folks,
>>
>> During the last PTG we spent time discussing some ideas around an All-In-One
>> installer, using 100% of the TripleO bits to deploy a single node OpenStack
>> very similar with what we have today with the containerized undercloud and
>> what we also have with other tools like Packstack or Devstack.
>>
>> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
>>
>
> I'm really +1 to this. And as a Packstack developer, I'd love to see this as a
> mid-term Packstack replacement. So let's dive into the details.

Curious on this one actually, do you see a need for continued
baremetal support? Today we support both baremetal and containers.
Perhaps "support" is a strong word. We support both in terms of
installation but only containers now have fully supported upgrades.

The interfaces we have today still support baremetal and containers
but there were some suggestions about getting rid of baremetal support
and only having containers. If we were to remove baremetal support
though, could we keep the Packstack case intact by just using
containers instead?
Dan > >> One of the problems that we're trying to solve here is to give a simple tool >> for developers so they can both easily and quickly deploy an OpenStack for >> their needs. >> >> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and >> without complexity, reproducing the same exact same tooling as TripleO is >> using." >> "As a Neutron developer, I need to develop a feature in Neutron and test it >> with TripleO in my local env." >> "As a TripleO dev, I need to implement a new service and test its deployment >> in my local env." >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the >> production chain, quickly and simply." >> > > "As a packager, I want an easy/low overhead way to test updated packages with TripleO bits, so I can make sure they will not break any automation". > >> Probably more use cases, but to me that's what came into my mind now. >> >> Dan kicked-off a doc patch a month ago: >> https://review.openstack.org/#/c/547038/ >> And I just went ahead and proposed a blueprint: >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one >> So hopefully we can start prototyping something during Rocky. >> >> Before talking about the actual implementation, I would like to gather >> feedback from people interested by the use-cases. If you recognize yourself >> in these use-cases and you're not using TripleO today to test your things >> because it's too complex to deploy, we want to hear from you. >> I want to see feedback (positive or negative) about this idea. We need to >> gather ideas, use cases, needs, before we go design a prototype in Rocky. >> > > I would like to offer help with initial testing once there is something in the repos, so count me in! > > Regards, > Javier > >> Thanks everyone who'll be involved, >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Tue Apr 3 19:20:59 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 3 Apr 2018 12:20:59 -0700 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core In-Reply-To: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> Message-ID: <0851552f-1d77-7f73-9123-da12d10aa8ac@gmail.com> On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote: > Howdy everyone, > > I'd like to propose that we add Eric Fried to the nova-core team. > > Eric has been instrumental to the placement effort with his work on > nested resource providers and has been actively contributing to many > other areas of openstack [0] like project-config, gerritbot, > keystoneauth, devstack, os-loganalyze, and so on. > > He's an active reviewer in nova [1] and elsewhere in openstack and > reviews in-depth, asking questions and catching issues in patches and > working with authors to help get code into merge-ready state. These are > qualities I look for in a potential core reviewer. 
> > In addition to all that, Eric is an active participant in the project in > general, helping people with questions in the #openstack-nova IRC > channel, contributing to design discussions, helping to write up > outcomes of discussions, reporting bugs, fixing bugs, and writing tests. > His contributions help to maintain and increase the health of our project. > > To the existing core team members, please respond with your comments, > +1s, or objections within one week. > > Cheers, > -melanie > > [0] https://review.openstack.org/#/q/owner:efried > [1] http://stackalytics.com/report/contribution/nova/90 Thanks to everyone who responded with their feedback. It's been one week and we have had more than enough +1s, so I've added Eric to the team. Welcome Eric! Best, -melanie From openstack at fried.cc Tue Apr 3 19:32:03 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 3 Apr 2018 14:32:03 -0500 Subject: [openstack-dev] [nova] Proposing Eric Fried for nova-core In-Reply-To: <0851552f-1d77-7f73-9123-da12d10aa8ac@gmail.com> References: <5d5be2ad-9547-7579-a62b-328df2efd6c0@gmail.com> <0851552f-1d77-7f73-9123-da12d10aa8ac@gmail.com> Message-ID: <242888ab-8981-8b72-0ad9-0901fc5cb543@fried.cc> Thank you Melanie for the complimentary nomination, to the cores for welcoming me into the fold, and especially to all (cores and non, Nova and otherwise) who have mentored me along the way thus far. I hope to live up to your example and continue to pay it forward. -efried On 04/03/2018 02:20 PM, melanie witt wrote: > On Mon, 26 Mar 2018 19:00:06 -0700, Melanie Witt wrote: >> Howdy everyone, >> >> I'd like to propose that we add Eric Fried to the nova-core team. >> >> Eric has been instrumental to the placement effort with his work on >> nested resource providers and has been actively contributing to many >> other areas of openstack [0] like project-config, gerritbot, >> keystoneauth, devstack, os-loganalyze, and so on. >> >> He's an active reviewer in nova [1] and elsewhere in openstack and >> reviews in-depth, asking questions and catching issues in patches and >> working with authors to help get code into merge-ready state. These are >> qualities I look for in a potential core reviewer. >> >> In addition to all that, Eric is an active participant in the project in >> general, helping people with questions in the #openstack-nova IRC >> channel, contributing to design discussions, helping to write up >> outcomes of discussions, reporting bugs, fixing bugs, and writing tests. >> His contributions help to maintain and increase the health of our >> project. >> >> To the existing core team members, please respond with your comments, >> +1s, or objections within one week. >> >> Cheers, >> -melanie >> >> [0] https://review.openstack.org/#/q/owner:efried >> [1] http://stackalytics.com/report/contribution/nova/90 > > Thanks to everyone who responded with their feedback. It's been one week > and we have had more than enough +1s, so I've added Eric to the team. > > Welcome Eric! 
>
> Best,
> -melanie
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From whayutin at redhat.com Tue Apr 3 19:57:20 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Tue, 03 Apr 2018 19:57:20 +0000
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
In-Reply-To: 
References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com>
Message-ID: 

On Tue, 3 Apr 2018 at 13:53 Dan Prince wrote:

> On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
> >
> >> Greeting folks,
> >>
> >> During the last PTG we spent time discussing some ideas around an
> >> All-In-One installer, using 100% of the TripleO bits to deploy a
> >> single node OpenStack very similar with what we have today with the
> >> containerized undercloud and what we also have with other tools like
> >> Packstack or Devstack.
> >>
> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >>
> >
> > I'm really +1 to this. And as a Packstack developer, I'd love to see
> > this as a mid-term Packstack replacement. So let's dive into the
> > details.
>
> Curious on this one actually, do you see a need for continued
> baremetal support? Today we support both baremetal and containers.
> Perhaps "support" is a strong word. We support both in terms of
> installation but only containers now have fully supported upgrades.
>
> The interfaces we have today still support baremetal and containers
> but there were some suggestions about getting rid of baremetal support
> and only having containers. If we were to remove baremetal support
> though, could we keep the Packstack case intact by just using
> containers instead?
>
> Dan

Hey, a couple of thoughts:

1. I've added this topic to the RDO meeting tomorrow.

2. Just a thought: the "elf owl" is the world's smallest owl [1], at
least according to the internets. Maybe the all-in-one could be
nicknamed tripleo elf? Talon is cool too.

3. From a CI perspective, I see this being very helpful with:

a: faster run times generally, but especially for upgrade tests. It
may be possible to have upgrades gating tripleo projects again.
b: enabling more packaging tests to be done with TripleO
c: If developers dig it, we have a better chance at getting TripleO
into other projects' check jobs / third party jobs where current
requirements and run times are prohibitive.
d: Generally speaking, replacing packstack / devstack in devel and CI
workflows where they still exist.
e: Improved utilization of our resources in RDO-Cloud

It would be interesting to me to see more design and a little more
thought put into the potential use cases before we get far along.
Looks like there is a good start to that here [2]. I'll add some
comments with the potential use cases for CI.

/me is very happy to see this moving! Thanks all

[1] https://en.wikipedia.org/wiki/Elf_owl
[2] https://review.openstack.org/#/c/547038/1/doc/source/install/advanced_deployment/all_in_one.rst

>
> >> One of the problems that we're trying to solve here is to give a
> >> simple tool for developers so they can both easily and quickly deploy
> >> an OpenStack for their needs.
> >>
> >> "As a developer, I need to deploy OpenStack in a VM on my laptop,
> >> quickly and without complexity, reproducing the same exact same
> >> tooling as TripleO is using."
> >> "As a Neutron developer, I need to develop a feature in Neutron and > test it > >> with TripleO in my local env." > >> "As a TripleO dev, I need to implement a new service and test its > deployment > >> in my local env." > >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the > >> production chain, quickly and simply." > >> > > > > "As a packager, I want an easy/low overhead way to test updated packages > with TripleO bits, so I can make sure they will not break any automation". > > > >> Probably more use cases, but to me that's what came into my mind now. > >> > >> Dan kicked-off a doc patch a month ago: > >> https://review.openstack.org/#/c/547038/ > >> And I just went ahead and proposed a blueprint: > >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one > >> So hopefully we can start prototyping something during Rocky. > >> > >> Before talking about the actual implementation, I would like to gather > >> feedback from people interested by the use-cases. If you recognize > yourself > >> in these use-cases and you're not using TripleO today to test your > things > >> because it's too complex to deploy, we want to hear from you. > >> I want to see feedback (positive or negative) about this idea. We need > to > >> gather ideas, use cases, needs, before we go design a prototype in > Rocky. > >> > > > > I would like to offer help with initial testing once there is something > in the repos, so count me in! > > > > Regards, > > Javier > > > >> Thanks everyone who'll be involved, > >> -- > >> Emilien Macchi > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Tue Apr 3 20:15:30 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 3 Apr 2018 16:15:30 -0400 Subject: [openstack-dev] Gerrit server replacement scheduled for May 2nd 2018 Message-ID: <20180403201530.GA7899@localhost.localdomain> Hello from Infra. It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. 
The new IP addresses will be as follows:

IPv4 -> 104.130.246.32
IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229

They will replace these current production IP addresses:

IPv4 -> 104.130.246.91
IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525

We understand that some users may be running from egress-filtered
networks with port 29418/tcp explicitly allowed to the current
review.openstack.org IP addresses, and so are providing this
information as far in advance as we can to allow them time to update
their firewalls accordingly.

Note that some users dealing with egress filtering may find it easier
to switch their local configuration to use Gerrit's REST API via HTTPS
instead, and the current release of git-review has support for that
workflow as well.

http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html

We will follow up with final confirmation in subsequent announcements.

Thanks,
Paul

From sundar.nadathur at intel.com Tue Apr 3 20:54:36 2018
From: sundar.nadathur at intel.com (Nadathur, Sundar)
Date: Tue, 3 Apr 2018 13:54:36 -0700
Subject: [openstack-dev] [cyborg] Cyborg/Nova scheduling spec
Message-ID: 

Thanks to everybody who has commented on the Cyborg/Nova scheduling
spec (https://review.openstack.org/#/c/554717/). As you may have noted,
some issues were raised (*1), discussed (*2) and a potential solution
was offered (*3).

I have tried to synthesize the new solution from the Nova team here:

     https://etherpad.openstack.org/p/Cyborg-Nova-Multifunction

This simplifies Cyborg design/implementation, by having the weigher
use Placement info (no queries or extra info in Cyborg DB), and by
opening the possibility of removing the weigher altogether if/when
Nova supports preferred traits.

Please review it. Once that is done, I'll post an update that includes
the new scheme and addresses any applicable comments in the current
spec. Thank you very much!

(*1) http://lists.openstack.org/pipermail/openstack-dev/2018-March/128685.html
(*2) http://lists.openstack.org/pipermail/openstack-dev/2018-March/128840.html, 128889.html, etc.
(*3) http://lists.openstack.org/pipermail/openstack-dev/2018-March/128888.html

Regards,
Sundar

From cboylan at sapwetik.org Tue Apr 3 21:11:34 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Tue, 03 Apr 2018 14:11:34 -0700
Subject: [openstack-dev] [all] A quick note on recent IRC trolling/vandalism
Message-ID: <1522789894.2364369.1325470248.3DBC8014@webmail.messagingengine.com>

Hello everyone,

During the recent holiday weekend some of our channels experienced some
IRC trolling/vandalism. In particular the meetbot was used to start
meetings titled 'maintenance' which updated the channel topic to
'maintenance'. The individual or bot doing this then used this as the
pretense for claiming the channel was to undergo maintenance and
everyone should leave. This is one of the risks of using public
communications channels, anyone can show up and abuse them.

In an effort to make it more clear as to what is trolling and what
isn't, here are the bots we currently operate:

- Meetbot ("openstack") to handle IRC meetings and log channels on
  eavesdrop.openstack.org
- Statusbot ("openstackstatus") to notify channels about service
  outages and update topic accordingly
- Gerritbot ("openstackgerrit") to notify channels about code review
  updates

Should the Infra team need to notify of pending maintenance work, that
notification will come via the statusbot and not the meetbot.
The number of individuals that can set topics via statusbot is limited
to a small number of IRC operators. If you have any questions you can
reach out either in the #openstack-infra channel or to any channel
operator directly and ask them. To get a list of channel operators run
`/msg chanserv access #channel-name list`. Finally any user can end a
meeting that meetbot started after one hour (by issuing a #endmeeting
command). So you should feel free to clean those up yourself if you
are able.

If the Freenode staff needs to perform maintenance or otherwise make
announcements, they tend to send special messages directly to clients
so you will see messages from them in your IRC client's status
channel. Should you have any questions for Freenode you can find
freenode operators in the #freenode channel.

As a final note the infra team has an approved spec for improving our
IRC bot tooling,
http://specs.openstack.org/openstack-infra/infra-specs/specs/irc.html.
Implementing this spec is going to be a prerequisite for implementing
smarter automated responses to problems like this and it needs
volunteers. If you think this might be interesting to you definitely
reach out.

Thank you for your patience,
Clark

From mikal at stillhq.com Tue Apr 3 21:54:59 2018
From: mikal at stillhq.com (Michael Still)
Date: Wed, 4 Apr 2018 07:54:59 +1000
Subject: [openstack-dev] [nova] pep8 failures on master
Message-ID: 

Thanks to jichenjc for fixing the pep8 failures I was seeing on master.
I'd decided they were specific to my local dev environment given no one
else was seeing them.

As I said in the patch that fixed the issue [1], I think it's worth
exploring how these got through the gate in the first place. There is
nothing in the patch which stops us from ending up here again, and no
real explanation for what caused the issue in the first place.

Discuss.

Michael

1: https://review.openstack.org/#/c/557633
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kennelson11 at gmail.com Tue Apr 3 22:03:46 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 03 Apr 2018 22:03:46 +0000
Subject: [openstack-dev] [First Contact] Meeting tonight/tomorrow/today (Depends on your perspective)
Message-ID: 

Hello!

Another meeting tonight late/tomorrow depending on where in the world
you live :) 0800 UTC Wednesday. Here is the agenda if you have anything
to add [1]. Or if you want to add your name to the ping list it is
there as well!

See you all soon!

-Kendall (diablo_rojo)

[1] https://wiki.openstack.org/wiki/First_Contact_SIG#Meeting_Agenda
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From doug at doughellmann.com Tue Apr 3 22:08:11 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Tue, 03 Apr 2018 18:08:11 -0400
Subject: [openstack-dev] [nova] pep8 failures on master
In-Reply-To: 
References: 
Message-ID: <1522793202-sup-9133@lrrr.local>

Excerpts from Michael Still's message of 2018-04-04 07:54:59 +1000:
> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. I'd
> decided they were specific to my local dev environment given no one else
> was seeing them.
>
> As I said in the patch that fixed the issue [1], I think it's worth
> exploring how these got through the gate in the first place. There is
> nothing in the patch which stops us from ending up here again, and no real
> explanation for what caused the issue in the first place.
>
> Discuss.
> > Michael > > > 1: https://review.openstack.org/#/c/557633 Were you running pep8 with python 3 locally (that might happen if tox is installed under python 3 so the default base-python is python3 instead of just python)? There are some different defaults in flake8 based on the version of Python, but I don't know if those 2 specific errors are among that set. Doug From rosmaita.fossdev at gmail.com Tue Apr 3 22:18:08 2018 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 3 Apr 2018 18:18:08 -0400 Subject: [openstack-dev] [glance] python-glanceclient release status In-Reply-To: References: Message-ID: On Mon, Apr 2, 2018 at 6:28 PM, Brian Rosmaita wrote: > These need to be reviewed in master: > - https://review.openstack.org/#/c/555550/ > - https://review.openstack.org/#/c/556292/ Thanks for the reviews. The requested changes have been made and Zuul has given a +1, so ready for reviews again! > Backports needing review: > - https://review.openstack.org/#/c/555436/ This has a +2 from Sean; it's up to Erno now. cheers, brian From klmitch at mit.edu Tue Apr 3 22:18:58 2018 From: klmitch at mit.edu (Kevin L. Mitchell) Date: Tue, 03 Apr 2018 17:18:58 -0500 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: References: Message-ID: <1522793938.8549.7.camel@mit.edu> On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote: > Thanks to jichenjc for fixing the pep8 failures I was seeing on > master. I'd decided they were specific to my local dev environment > given no one else was seeing them. > > As I said in the patch that fixed the issue [1], I think its worth > exploring how these got through the gate in the first place. There is > nothing in the patch which stops us from ending up here again, and no > real explanation for what caused the issue in the first place. While there was no discussion in the patch, the topic of the patch hints at the cause: "fix_pep8_py3". These were probably pep8 errors that would only occur if pep8 was running under Python 3 and not Python 2. The first error was fixed by removing a debugging print that was formatted as "print (…)", which would satisfy pep8 under Python 2—since 'print' is a statement—but not under Python 3, where it's a function. The second error was in a clause protected by six.PY2, and was caused by "unicode" being missing in Python 3; the solution jichenjc chose there was to disable the pep8 check for that line. The only way I can imagine stopping these errors in the future would be to double-up on the pep8 check: have the gate run pep8 under both Python 2 and Python 3. -- Kevin L. Mitchell -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 220 bytes Desc: This is a digitally signed message part URL: From mikal at stillhq.com Tue Apr 3 22:23:10 2018 From: mikal at stillhq.com (Michael Still) Date: Tue, 03 Apr 2018 22:23:10 +0000 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: <1522793938.8549.7.camel@mit.edu> References: <1522793938.8549.7.camel@mit.edu> Message-ID: I think the bit I am lost on is the concept of running pep8 "under" a version of python. Is this an artifact of what version of pep8 I have installed somehow? If the py3 pep8 is stricter, couldn't we just move to only that one? Michael On Wed., 4 Apr. 2018, 8:19 am Kevin L. Mitchell, wrote: > On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote: > > Thanks to jichenjc for fixing the pep8 failures I was seeing on > > master. 
I'd decided they were specific to my local dev environment > > given no one else was seeing them. > > > > As I said in the patch that fixed the issue [1], I think its worth > > exploring how these got through the gate in the first place. There is > > nothing in the patch which stops us from ending up here again, and no > > real explanation for what caused the issue in the first place. > > While there was no discussion in the patch, the topic of the patch > hints at the cause: "fix_pep8_py3". These were probably pep8 errors > that would only occur if pep8 was running under Python 3 and not Python > 2. The first error was fixed by removing a debugging print that was > formatted as "print (…)", which would satisfy pep8 under Python 2—since > 'print' is a statement—but not under Python 3, where it's a function. > The second error was in a clause protected by six.PY2, and was caused > by "unicode" being missing in Python 3; the solution jichenjc chose > there was to disable the pep8 check for that line. > > The only way I can imagine stopping these errors in the future would be > to double-up on the pep8 check: have the gate run pep8 under both > Python 2 and Python 3. > -- > Kevin L. Mitchell >__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Apr 3 22:26:17 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 3 Apr 2018 15:26:17 -0700 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: References: Message-ID: <66c73122-d64d-b98b-b528-847cb059506f@gmail.com> On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote: > Thanks to jichenjc for fixing the pep8 failures I was seeing on master. > I'd decided they were specific to my local dev environment given no one > else was seeing them. > > As I said in the patch that fixed the issue [1], I think its worth > exploring how these got through the gate in the first place. There is > nothing in the patch which stops us from ending up here again, and no > real explanation for what caused the issue in the first place. > > Discuss. > > Michael > > > 1: https://review.openstack.org/#/c/557633 I think by default, infra runs jobs with python2. This is the job definition for openstack-tox-pep8 [0] which says it "Uses tox with the ``pep8`` environment." And in our tox.ini [1], we don't specify the basepython version. I contrasted the openstack-tox-pep8 job definition with the tempest-full-py3 job definition [2] and it sets the USE_PYTHON3=True variable for devstack. So, I think we're not gating the pep8 job for python3, only python2, and that's how the problems got through the gate in the first place. I'm not sure what the best way is to fix it -- whether we should be looking at adding a base openstack-tox-pep8-py3 job to openstack-zuul-jobs that sets USE_PYTHON3=True or if we need to instead change something in our tox.ini or what. 
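(If we went the tox.ini route, I assume the pin itself would be a
one-liner along these lines -- an untested sketch, with the rest of
the env left exactly as it is today:

    [testenv:pep8]
    basepython = python3

but that still leaves the question of whether we would want a python2
run as well.)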
-melanie [0] https://github.com/openstack-infra/openstack-zuul-jobs/blob/6a48004/zuul.d/jobs.yaml#L399 [1] https://github.com/openstack/nova/blob/master/tox.ini#L47 [2] https://github.com/openstack/tempest/blob/master/.zuul.yaml#L61-L74 From melwittt at gmail.com Tue Apr 3 22:30:07 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 3 Apr 2018 15:30:07 -0700 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: <66c73122-d64d-b98b-b528-847cb059506f@gmail.com> References: <66c73122-d64d-b98b-b528-847cb059506f@gmail.com> Message-ID: <17a2cd21-ebda-253a-cf8e-ddf5f539afe1@gmail.com> On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote: > On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote: >> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. >> I'd decided they were specific to my local dev environment given no one >> else was seeing them. >> >> As I said in the patch that fixed the issue [1], I think its worth >> exploring how these got through the gate in the first place. There is >> nothing in the patch which stops us from ending up here again, and no >> real explanation for what caused the issue in the first place. >> >> Discuss. >> >> Michael >> >> >> 1: https://review.openstack.org/#/c/557633 > > I think by default, infra runs jobs with python2. This is the job > definition for openstack-tox-pep8 [0] which says it "Uses tox with the > ``pep8`` environment." And in our tox.ini [1], we don't specify the > basepython version. I contrasted the openstack-tox-pep8 job definition > with the tempest-full-py3 job definition [2] and it sets the > USE_PYTHON3=True variable for devstack. Re-reading this after I sent it (of course), I realize USE_PYTHON3 in devstack isn't relevant to the pep8 run since devstack isn't used. So, I'm not sure what we can do to run both python2 and python3 versions of the pep8 check considering that the openstack-tox-pep8 job runs tox with the "pep8" environment only (and we can't just add another "pep8-py3" environment and have it run it). > So, I think we're not gating the pep8 job for python3, only python2, and > that's how the problems got through the gate in the first place. I'm not > sure what the best way is to fix it -- whether we should be looking at > adding a base openstack-tox-pep8-py3 job to openstack-zuul-jobs that > sets USE_PYTHON3=True or if we need to instead change something in our > tox.ini or what. > > -melanie > > [0] > https://github.com/openstack-infra/openstack-zuul-jobs/blob/6a48004/zuul.d/jobs.yaml#L399 > [1] https://github.com/openstack/nova/blob/master/tox.ini#L47 > [2] https://github.com/openstack/tempest/blob/master/.zuul.yaml#L61-L74 From doug at doughellmann.com Tue Apr 3 22:52:19 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Apr 2018 18:52:19 -0400 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: References: <1522793938.8549.7.camel@mit.edu> Message-ID: <1522795646-sup-606@lrrr.local> Excerpts from Michael Still's message of 2018-04-03 22:23:10 +0000: > I think the bit I am lost on is the concept of running pep8 "under" a > version of python. Is this an artifact of what version of pep8 I have > installed somehow? > > If the py3 pep8 is stricter, couldn't we just move to only that one? It's the same code, but that code is installed into the python3 interpreter's site-packages directory and the console script indicates that it should execute the python3 interpreter to run the script, then some checks are added or changed. 
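To make that concrete, here is the kind of code a python2 flake8 run
accepts but a python3 run rejects -- a sketch of the two nova failures
as Kevin described them earlier in the thread; I believe the codes
involved are pycodestyle's E211 and pyflakes' F821, but double-check
against your flake8 version:

    import six

    value = 'abc'

    # Clean under python2: 'print' is a keyword there, so the space
    # before '(' is allowed. Under python3 'print' is a plain function
    # and this is flagged as E211 (whitespace before parenthesis).
    print ('debugging')

    if six.PY2:
        # Fine under python2; under python3 pyflakes has no 'unicode'
        # builtin and reports F821 (undefined name), even though this
        # branch can never run there.
        text = unicode(value)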
Tox assumes if you don't specify otherwise that it should use the interpreter it's running under to create any virtualenvs used for tests. On most systems that default is still python2, but it is possible to install tox under python3 and then the default is python3. You can set basepython=python3 in tox.ini under the pep8 section to force the use of python3 [1] and remove the ambiguity. That's something we're going to need to do as we transition to python 3 anyway, because at some point the "default" python in CI will be python 3 and we're going to want to ensure that developers working on their local system see the same behavior. Doug [1] https://tox.readthedocs.io/en/latest/config.html#confval-basepython=NAME-OR-PATH > > Michael > > On Wed., 4 Apr. 2018, 8:19 am Kevin L. Mitchell, wrote: > > > On Wed, 2018-04-04 at 07:54 +1000, Michael Still wrote: > > > Thanks to jichenjc for fixing the pep8 failures I was seeing on > > > master. I'd decided they were specific to my local dev environment > > > given no one else was seeing them. > > > > > > As I said in the patch that fixed the issue [1], I think its worth > > > exploring how these got through the gate in the first place. There is > > > nothing in the patch which stops us from ending up here again, and no > > > real explanation for what caused the issue in the first place. > > > > While there was no discussion in the patch, the topic of the patch > > hints at the cause: "fix_pep8_py3". These were probably pep8 errors > > that would only occur if pep8 was running under Python 3 and not Python > > 2. The first error was fixed by removing a debugging print that was > > formatted as "print (…)", which would satisfy pep8 under Python 2—since > > 'print' is a statement—but not under Python 3, where it's a function. > > The second error was in a clause protected by six.PY2, and was caused > > by "unicode" being missing in Python 3; the solution jichenjc chose > > there was to disable the pep8 check for that line. > > > > The only way I can imagine stopping these errors in the future would be > > to double-up on the pep8 check: have the gate run pep8 under both > > Python 2 and Python 3. > > -- > > Kevin L. Mitchell > >__________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From doug at doughellmann.com Tue Apr 3 22:53:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 03 Apr 2018 18:53:33 -0400 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: <17a2cd21-ebda-253a-cf8e-ddf5f539afe1@gmail.com> References: <66c73122-d64d-b98b-b528-847cb059506f@gmail.com> <17a2cd21-ebda-253a-cf8e-ddf5f539afe1@gmail.com> Message-ID: <1522795957-sup-1754@lrrr.local> Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700: > On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote: > > On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote: > >> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. > >> I'd decided they were specific to my local dev environment given no one > >> else was seeing them. > >> > >> As I said in the patch that fixed the issue [1], I think its worth > >> exploring how these got through the gate in the first place. 
There is > >> nothing in the patch which stops us from ending up here again, and no > >> real explanation for what caused the issue in the first place. > >> > >> Discuss. > >> > >> Michael > >> > >> > >> 1: https://review.openstack.org/#/c/557633 > > > > I think by default, infra runs jobs with python2. This is the job > > definition for openstack-tox-pep8 [0] which says it "Uses tox with the > > ``pep8`` environment." And in our tox.ini [1], we don't specify the > > basepython version. I contrasted the openstack-tox-pep8 job definition > > with the tempest-full-py3 job definition [2] and it sets the > > USE_PYTHON3=True variable for devstack. > > Re-reading this after I sent it (of course), I realize USE_PYTHON3 in > devstack isn't relevant to the pep8 run since devstack isn't used. So, > I'm not sure what we can do to run both python2 and python3 versions of > the pep8 check considering that the openstack-tox-pep8 job runs tox with > the "pep8" environment only (and we can't just add another "pep8-py3" > environment and have it run it). The python3 settings are more strict, and all of our code should be at least importable under python3 now, so I think if we just convert those jobs to run under 3 we should be good to go. Doug From emilien at redhat.com Tue Apr 3 23:07:21 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 3 Apr 2018 16:07:21 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 15th Edition Message-ID: Note: this is the fifteenth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128784.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Deadline for Rocky blueprints submission was today. From now, new blueprints should target Stein. +--> Migration to Storyboard made progress (See UI updates). +--> Rocky milestone 1 is in 2 weeks! +------------------------------+ | Continuous Integration | +------------------------------+ +--> We're currently having serious issues with OVB CI jobs, see https://bugs.launchpad.net/tripleo/+bug/1757556 +--> Rover is Arx and Ruck is Rafael. Please let them know any new CI issue. +--> Master promotion is 5 days, Queens is 5 days, Pike is 10 days and Ocata is 10 days. +--> team is working on helping the upgrade squad with upstream upgrade ci and logging +--> tempest squad is still working on containerizing tempest https://trello.com/c/066JFJjf/537-epic-containerize-tempest +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP +-------------+ | Upgrades | +-------------+ +--> Progress on FFU CLI in tripleoclient +--> Work on CI jobs for undercloud upgrades +--> Need reviews, see etherpad +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Working on cleaning up some technical debt with masquerading +--> Still working on OVB fs001 switch to containerized undercloud, slowed down by CI issues +--> fs010 was switched to deploy a containerized undercloud (multinode-containers) +--> Investigations around an All-In-One installer, see mailing-list. 
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> Prototyping dedicated roles with unique repositories for Ansible tasks in TripleO (see mailing-list) +--> Migrating ceph & octavia to use external_deploy_tasks +--> Work in progress for inventory improvements +--> UI support is still work in progress, see etherpad. +--> More: https://etherpad.openstack.org/p/tripleo-config-download- squad-status +--------------+ | Integration | +--------------+ +--> No updates this week. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> All bugs tagged with "ui" and "ux" are now part of Storyboard: https://storyboard.openstack.org/#!/project/964 +--> UI developers should now use Storyboard instead of Launchpad. A guide is provided here: https://docs.openstack.org/infra/storyboard/gui/manual.html +--> The team is focused on config-download integration +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Evaluating OpenShift on OpenStack validations +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Routed networks can now be configured when the undercloud is containerized. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> Rocky planning is still in progress. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> Discussions around Public TLS by default and Secret Management Audit. +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ The weekly owl fact is sponsored by Wes: the smallest owl is named the "Elf" owl. The mean body weight of this species is 40 g (1.4 oz). These tiny owls are 12.5 to 14.5 cm (4.9 to 5.7 in) long and have a wingspan of about 27 cm (10.5 in). Source: https://en.wikipedia.org/wiki/Elf_owl It was brought during the All-In-One installer discussion, where this name could be use since we're looking for something tiny and lightweight. Thanks all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Apr 3 23:16:35 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 3 Apr 2018 16:16:35 -0700 Subject: [openstack-dev] [nova] pep8 failures on master In-Reply-To: <1522795957-sup-1754@lrrr.local> References: <66c73122-d64d-b98b-b528-847cb059506f@gmail.com> <17a2cd21-ebda-253a-cf8e-ddf5f539afe1@gmail.com> <1522795957-sup-1754@lrrr.local> Message-ID: <849303fc-20b1-e262-549e-c7b6edea66f9@gmail.com> On Tue, 03 Apr 2018 18:53:33 -0400, Doug Hellmann wrote: > Excerpts from melanie witt's message of 2018-04-03 15:30:07 -0700: >> On Tue, 3 Apr 2018 15:26:17 -0700, Melanie Witt wrote: >>> On Wed, 4 Apr 2018 07:54:59 +1000, Michael Still wrote: >>>> Thanks to jichenjc for fixing the pep8 failures I was seeing on master. >>>> I'd decided they were specific to my local dev environment given no one >>>> else was seeing them. >>>> >>>> As I said in the patch that fixed the issue [1], I think its worth >>>> exploring how these got through the gate in the first place. 
Best,
-melanie

From yumeng_bao at yahoo.com Wed Apr 4 02:36:04 2018
From: yumeng_bao at yahoo.com (yumeng bao)
Date: Wed, 4 Apr 2018 02:36:04 +0000 (UTC)
Subject: [openstack-dev] [cyborg] High Precision Time Synchronization Card Use Case Summary
References: <831510546.1362584.1522809364949.ref@mail.yahoo.com>
Message-ID: <831510546.1362584.1522809364949@mail.yahoo.com>

Hi team,

In our last weekly meeting, the High Precision Time Synchronization Card use case was first introduced. The following link has a summary/description of this use case. Please take a look and don't hesitate to ask any questions. :)

https://etherpad.openstack.org/p/clock-driver

Regards,
Yumeng

From xinni.ge1990 at gmail.com Wed Apr 4 05:34:43 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Wed, 4 Apr 2018 14:34:43 +0900
Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules
In-Reply-To:
References:
Message-ID:

Hi Ivan and other Horizon team members,

Thanks for adding us into the xstatic-core group. But I still need your opinion and help to release the newly-added xstatic packages to the pypi index.

The current `xstatic-core` group doesn't have the permission to PUSH SIGNED TAG, and I cannot release the first non-trivial version.
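(For reference, the concrete step that this permission gates is the signed-tag push that a release starts from -- roughly the following, where the remote name and the version number are just examples from a typical git-review setup:

    git tag -s 1.0.0 -m "xstatic-angular-material 1.0.0"
    git push gerrit 1.0.0

As far as I understand, the pushed tag is what triggers the job that publishes to pypi, so without PUSH SIGNED TAG the push itself is rejected.)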
If I (or maybe Kaz) could be added into xstatic-release group, we can release all the 8 packages by ourselves. Or, we are very appreciate if any member of xstatic-release could help to do it. Just for your quick access, here is the link of access permission page of one xstatic package. https://review.openstack.org/#/admin/projects/openstack/xstatic-angular-material,access -- Best Regards, Xinni On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara wrote: > Hi Ivan, > > > Thank you very much. > I've confirmed that all of us have been added to xstatic-core. > > As discussed, we will focus on the followings what we added for > heat-dashboard, will not touch other xstatic repos as core. > > xstatic-angular-material > xstatic-angular-notify > xstatic-angular-uuid > xstatic-angular-vis > xstatic-filesaver > xstatic-js-yaml > xstatic-json2yaml > xstatic-vis > > Regards, > Kaz > > 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : > > Hi Kuz, > > > > Don't worry, we're on the same page with you. I added both you, Xinni and > > Keichii to the xstatic-core group. Thank you for your contributions! > > > > Regards, > > Ivan Kolodyazhny, > > http://blog.e0ne.info/ > > > > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara > wrote: > >> > >> Hi Ivan & Horizon folks > >> > >> > >> AFAIK, Horizon team had conclusion that you will add the specific > >> members to xstatic-core, correct ? > >> Can I ask you to add the following members ? > >> # All of tree are heat-dashboard core. > >> > >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself > >> Xinni Ge / xinni.ge1990 at gmail.com > >> Keiichi Hikita / keiichi.hikita at gmail.com > >> > >> Please give me a shout, if we are not on same page or any concern. > >> > >> Regards, > >> Kaz > >> > >> > >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : > >> > Hi Ivan, Akihiro, > >> > > >> > > >> > Thanks for your kind arrangement. > >> > Looking forward to hearing your decision soon. > >> > > >> > Regards, > >> > Kaz > >> > > >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : > >> >> HI Team, > >> >> > >> >> From my perspective, I'm OK both with #2 and #3 options. I agree that > >> >> #4 > >> >> could be too complicated for us. Anyway, we've got this topic on the > >> >> meeting > >> >> agenda [1] so we'll discuss it there too. I'll share our decision > after > >> >> the > >> >> meeting. > >> >> > >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon > >> >> > >> >> > >> >> > >> >> Regards, > >> >> Ivan Kolodyazhny, > >> >> http://blog.e0ne.info/ > >> >> > >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > >> >> wrote: > >> >>> > >> >>> Hi Kaz and Ivan, > >> >>> > >> >>> Yeah, it is worth discussed officially in the horizon team meeting > or > >> >>> the > >> >>> mailing list thread to get a consensus. > >> >>> Hopefully you can add this topic to the horizon meeting agenda. > >> >>> > >> >>> After sending the previous mail, I noticed anther option. I see > there > >> >>> are > >> >>> several options now. > >> >>> (1) Keep xstatic-core and horizon-core same. > >> >>> (2) Add specific members to xstatic-core > >> >>> (3) Add specific horizon-plugin core to xstatic-core > >> >>> (4) Split core membership into per-repo basis (perhaps too > >> >>> complicated!!) > >> >>> > >> >>> My current vote is (2) as xstatic-core needs to understand what is > >> >>> xstatic > >> >>> and how it is maintained. 
> >> >>> > >> >>> Thanks, > >> >>> Akihiro > >> >>> > >> >>> > >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : > >> >>>> > >> >>>> Hi Akihiro, > >> >>>> > >> >>>> > >> >>>> Thanks for your comment. > >> >>>> The background of my request to add us to xstatic-core comes from > >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard > discussion. > >> >>>> > >> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg- > rocky-discussion > >> >>>> Line135, "we can share ownership if needed - e0ne" > >> >>>> > >> >>>> Just in case, could you guys confirm unified opinion on this matter > >> >>>> as > >> >>>> Horizon team ? > >> >>>> > >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core > >> >>>> because it's easier & smoother to manage what we are taking for > >> >>>> heat-dashboard. > >> >>>> On the other hand, I can understand what Akihiro you are saying, > the > >> >>>> newly added repos belong to Horizon project & being managed by not > >> >>>> Horizon core is not consistent. > >> >>>> Also having exception might make unexpected confusion in near > future. > >> >>>> > >> >>>> Eventually we will follow your opinion, let me hear Horizon team's > >> >>>> conclusion. > >> >>>> > >> >>>> Regards, > >> >>>> Kaz > >> >>>> > >> >>>> > >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : > >> >>>> > Hi Kaz, > >> >>>> > > >> >>>> > These repositories are under horizon project. It looks better to > >> >>>> > keep > >> >>>> > the > >> >>>> > current core team. > >> >>>> > It potentially brings some confusion if we treat some horizon > >> >>>> > plugin > >> >>>> > team > >> >>>> > specially. > >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it > >> >>>> > would > >> >>>> > work > >> >>>> > without problem even if only horizon-core can approve xstatic > >> >>>> > reviews. > >> >>>> > > >> >>>> > > >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara : > >> >>>> >> > >> >>>> >> Hi Ivan, Horizon folks, > >> >>>> >> > >> >>>> >> > >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been > >> >>>> >> landed. > >> >>>> >> > >> >>>> >> In project-config for them, I've set same acl-config as the > >> >>>> >> existing > >> >>>> >> xstatic repos. > >> >>>> >> It means only "xstatic-core" can manage the newly created repos > on > >> >>>> >> gerrit. > >> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" > >> >>>> >> like as > >> >>>> >> what horizon-core is doing ? > >> >>>> >> > >> >>>> >> xstatic-core > >> >>>> >> https://review.openstack.org/#/admin/groups/385,members > >> >>>> >> > >> >>>> >> heat-dashboard-core > >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members > >> >>>> >> > >> >>>> >> Of course, we will surely touch only what we made, just would > like > >> >>>> >> to > >> >>>> >> manage them smoothly by ourselves. > >> >>>> >> In case we need to touch the other ones, will ask Horizon team > for > >> >>>> >> help. > >> >>>> >> > >> >>>> >> Thanks in advance. > >> >>>> >> > >> >>>> >> Regards, > >> >>>> >> Kaz > >> >>>> >> > >> >>>> >> > >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : > >> >>>> >> > Hi Horizon Team, > >> >>>> >> > > >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin > >> >>>> >> > option, > >> >>>> >> > and submitted a patch for it. > >> >>>> >> > Could you please help to review the patch. 
> >> >>>> >> > > >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 > >> >>>> >> > https://review.openstack.org/#/c/552259/ > >> >>>> >> > > >> >>>> >> > Thank you very much. > >> >>>> >> > > >> >>>> >> > Best Regards, > >> >>>> >> > Xinni > >> >>>> >> > > >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny > >> >>>> >> > > >> >>>> >> > wrote: > >> >>>> >> >> > >> >>>> >> >> Hi Kaz, > >> >>>> >> >> > >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these > patches > >> >>>> >> >> > >> >>>> >> >> Regards, > >> >>>> >> >> Ivan Kolodyazhny, > >> >>>> >> >> http://blog.e0ne.info/ > >> >>>> >> >> > >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara > >> >>>> >> >> > >> >>>> >> >> wrote: > >> >>>> >> >>> > >> >>>> >> >>> Hi Ivan & Horizon folks, > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> Now we are submitting a couple of patches to have the new > >> >>>> >> >>> xstatic > >> >>>> >> >>> modules. > >> >>>> >> >>> Let me request you to have review the following patches. > >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. > >> >>>> >> >>> > >> >>>> >> >>> project-config > >> >>>> >> >>> https://review.openstack.org/#/c/551978/ > >> >>>> >> >>> > >> >>>> >> >>> governance > >> >>>> >> >>> https://review.openstack.org/#/c/551980/ > >> >>>> >> >>> > >> >>>> >> >>> Thanks in advance:) > >> >>>> >> >>> > >> >>>> >> >>> Regards, > >> >>>> >> >>> Kaz > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski > >> >>>> >> >>> : > >> >>>> >> >>> > Yes, please do that. We can then discuss in the review > about > >> >>>> >> >>> > technical > >> >>>> >> >>> > details. > >> >>>> >> >>> > > >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge > >> >>>> >> >>> > > >> >>>> >> >>> > wrote: > >> >>>> >> >>> >> > >> >>>> >> >>> >> Hi, Akihiro > >> >>>> >> >>> >> > >> >>>> >> >>> >> Thanks for the quick reply. > >> >>>> >> >>> >> > >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES > should > >> >>>> >> >>> >> not > >> >>>> >> >>> >> be > >> >>>> >> >>> >> modified. > >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, > >> >>>> >> >>> >> and I think maybe there could be one option like > >> >>>> >> >>> >> ADD_XSTATIC_MODULES. > >> >>>> >> >>> >> This option adds the plugin's xstatic files in > >> >>>> >> >>> >> STATICFILES_DIRS. > >> >>>> >> >>> >> I am considering to add a bug report to describe it at > >> >>>> >> >>> >> first, > >> >>>> >> >>> >> and > >> >>>> >> >>> >> give > >> >>>> >> >>> >> a > >> >>>> >> >>> >> patch later maybe. > >> >>>> >> >>> >> Is that ok with the Horizon team? > >> >>>> >> >>> >> > >> >>>> >> >>> >> Best Regards. > >> >>>> >> >>> >> Xinni > >> >>>> >> >>> >> > >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki > >> >>>> >> >>> >> > >> >>>> >> >>> >> wrote: > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> Hi Xinni, > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge > >> >>>> >> >>> >>> : > >> >>>> >> >>> >>> > Hello Horizon Team, > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > I would like to hear about your opinions about how to > >> >>>> >> >>> >>> > add > >> >>>> >> >>> >>> > new > >> >>>> >> >>> >>> > xstatic > >> >>>> >> >>> >>> > modules to horizon settings. 
> >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party files > >> >>>> >> >>> >>> > issue, > >> >>>> >> >>> >>> > thanks > >> >>>> >> >>> >>> > for > >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them > and > >> >>>> >> >>> >>> > referencing as > >> >>>> >> >>> >>> > new > >> >>>> >> >>> >>> > xstatic-* libs. > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> Thanks for moving this forward. > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded as > >> >>>> >> >>> >>> > openstack > >> >>>> >> >>> >>> > official > >> >>>> >> >>> >>> > repos yet) in our development environment now, but > >> >>>> >> >>> >>> > hesitate > >> >>>> >> >>> >>> > to > >> >>>> >> >>> >>> > decide > >> >>>> >> >>> >>> > how to > >> >>>> >> >>> >>> > add the new installed xstatic lib path to > >> >>>> >> >>> >>> > STATICFILES_DIRS > >> >>>> >> >>> >>> > in > >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static files > >> >>>> >> >>> >>> > could > >> >>>> >> >>> >>> > be > >> >>>> >> >>> >>> > automatically > >> >>>> >> >>> >>> > collected by *collectstatic* process. > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in > >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the > relevant > >> >>>> >> >>> >>> > static > >> >>>> >> >>> >>> > fils > >> >>>> >> >>> >>> > are > >> >>>> >> >>> >>> > added > >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon > plugin > >> >>>> >> >>> >>> > dashboard. > >> >>>> >> >>> >>> > We may want new plugin setting keywords ( something > >> >>>> >> >>> >>> > similar > >> >>>> >> >>> >>> > to > >> >>>> >> >>> >>> > ADD_JS_FILES) > >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update > >> >>>> >> >>> >>> > STATICFILES_DIRS). > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add > xstatic > >> >>>> >> >>> >>> modules > >> >>>> >> >>> >>> through horizon plugin settings. I don't think it is a > >> >>>> >> >>> >>> good > >> >>>> >> >>> >>> idea > >> >>>> >> >>> >>> to > >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on horizon > >> >>>> >> >>> >>> plugin > >> >>>> >> >>> >>> usages. It makes difficult to track why and where a > >> >>>> >> >>> >>> xstatic > >> >>>> >> >>> >>> module > >> >>>> >> >>> >>> in > >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. > >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so > horizon > >> >>>> >> >>> >>> code > >> >>>> >> >>> >>> to > >> >>>> >> >>> >>> handle plugin settings should merge multiple entries to > a > >> >>>> >> >>> >>> single > >> >>>> >> >>> >>> one > >> >>>> >> >>> >>> hopefully. > >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. 
> >> >>>> >> >>> >>> > >> >>>> >> >>> >>> Akihiro > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you > >> >>>> >> >>> >>> > guys, > >> >>>> >> >>> >>> > and > >> >>>> >> >>> >>> > Best Regards, > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > Xinni Ge > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > ______________________________ > ____________________________________________ > >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage > >> >>>> >> >>> >>> > questions) > >> >>>> >> >>> >>> > Unsubscribe: > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >>> >>> > > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> ______________________________ > ____________________________________________ > >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage > >> >>>> >> >>> >>> questions) > >> >>>> >> >>> >>> Unsubscribe: > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> > >> >>>> >> >>> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> -- > >> >>>> >> >>> >> 葛馨霓 Xinni Ge > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> ______________________________ > ____________________________________________ > >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage > >> >>>> >> >>> >> questions) > >> >>>> >> >>> >> Unsubscribe: > >> >>>> >> >>> >> > >> >>>> >> >>> >> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >>> >> > >> >>>> >> >>> >> > >> >>>> >> >>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >>> >> > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > ______________________________ > ____________________________________________ > >> >>>> >> >>> > OpenStack Development Mailing List (not for usage > questions) > >> >>>> >> >>> > Unsubscribe: > >> >>>> >> >>> > > >> >>>> >> >>> > OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >>> > > >> >>>> >> >>> > > >> >>>> >> >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >>> > > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> > >> >>>> >> >>> ______________________________ > ____________________________________________ > >> >>>> >> >>> OpenStack Development Mailing List (not for usage questions) > >> >>>> >> >>> Unsubscribe: > >> >>>> >> >>> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >>> > >> >>>> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >> > >> >>>> >> >> > >> >>>> >> >> > >> >>>> >> >> > >> >>>> >> >> > >> >>>> >> >> > >> >>>> >> >> 
____________________________________________________________ > ______________ > >> >>>> >> >> OpenStack Development Mailing List (not for usage questions) > >> >>>> >> >> Unsubscribe: > >> >>>> >> >> OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> >>>> >> >> > >> >>>> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> >> > >> >>>> >> > > >> >>>> >> > > >> >>>> >> > > >> >>>> >> > -- > >> >>>> >> > 葛馨霓 Xinni Ge > >> >>>> >> > > >> >>>> >> > > >> >>>> >> > > >> >>>> >> > > >> >>>> >> > ____________________________________________________________ > ______________ > >> >>>> >> > OpenStack Development Mailing List (not for usage questions) > >> >>>> >> > Unsubscribe: > >> >>>> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >>>> >> > > >> >>>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> >> > > >> >>>> >> > >> >>>> >> > >> >>>> >> > >> >>>> >> ____________________________________________________________ > ______________ > >> >>>> >> OpenStack Development Mailing List (not for usage questions) > >> >>>> >> Unsubscribe: > >> >>>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > > >> >>>> > ____________________________________________________________ > ______________ > >> >>>> > OpenStack Development Mailing List (not for usage questions) > >> >>>> > Unsubscribe: > >> >>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/ > openstack-dev > >> >>>> > > >> >>>> > >> >>>> > >> >>>> > >> >>>> ____________________________________________________________ > ______________ > >> >>>> OpenStack Development Mailing List (not for usage questions) > >> >>>> Unsubscribe: > >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >>> > >> >>> > >> >>> > >> >>> > >> >>> ____________________________________________________________ > ______________ > >> >>> OpenStack Development Mailing List (not for usage questions) > >> >>> Unsubscribe: > >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >>> > >> >> > >> >> > >> >> > >> >> ____________________________________________________________ > ______________ > >> >> OpenStack Development Mailing List (not for usage questions) > >> >> Unsubscribe: > >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack 
Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- 葛馨霓 Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Wed Apr 4 05:48:28 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 4 Apr 2018 08:48:28 +0300 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Xinni, Please, send me a list of packages which should be released. In general, release-* groups are different from core-*. We should discuss how to go forward with it Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Apr 4, 2018 at 8:34 AM, Xinni Ge wrote: > Hi Ivan and other Horizon team member, > > Thanks for adding us into xstatic-core group. > But I still need your opinion and help to release the newly-added xstatic > packages to pypi index. > > Current `xstatic-core` group doesn't have the permission to PUSH SIGNED > TAG, and I cannot release the first non-trivial version. > > If I (or maybe Kaz) could be added into xstatic-release group, we can > release all the 8 packages by ourselves. > > Or, we are very appreciate if any member of xstatic-release could help to > do it. > > Just for your quick access, here is the link of access permission page of > one xstatic package. > https://review.openstack.org/#/admin/projects/openstack/ > xstatic-angular-material,access > > -- > Best Regards, > Xinni > > On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara > wrote: > >> Hi Ivan, >> >> >> Thank you very much. >> I've confirmed that all of us have been added to xstatic-core. >> >> As discussed, we will focus on the followings what we added for >> heat-dashboard, will not touch other xstatic repos as core. >> >> xstatic-angular-material >> xstatic-angular-notify >> xstatic-angular-uuid >> xstatic-angular-vis >> xstatic-filesaver >> xstatic-js-yaml >> xstatic-json2yaml >> xstatic-vis >> >> Regards, >> Kaz >> >> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >> > Hi Kuz, >> > >> > Don't worry, we're on the same page with you. I added both you, Xinni >> and >> > Keichii to the xstatic-core group. Thank you for your contributions! >> > >> > Regards, >> > Ivan Kolodyazhny, >> > http://blog.e0ne.info/ >> > >> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >> wrote: >> >> >> >> Hi Ivan & Horizon folks >> >> >> >> >> >> AFAIK, Horizon team had conclusion that you will add the specific >> >> members to xstatic-core, correct ? >> >> Can I ask you to add the following members ? >> >> # All of tree are heat-dashboard core. >> >> >> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >> >> Xinni Ge / xinni.ge1990 at gmail.com >> >> Keiichi Hikita / keiichi.hikita at gmail.com >> >> >> >> Please give me a shout, if we are not on same page or any concern. >> >> >> >> Regards, >> >> Kaz >> >> >> >> >> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >> >> > Hi Ivan, Akihiro, >> >> > >> >> > >> >> > Thanks for your kind arrangement. >> >> > Looking forward to hearing your decision soon. >> >> > >> >> > Regards, >> >> > Kaz >> >> > >> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >> >> >> HI Team, >> >> >> >> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >> that >> >> >> #4 >> >> >> could be too complicated for us. Anyway, we've got this topic on the >> >> >> meeting >> >> >> agenda [1] so we'll discuss it there too. 
I'll share our decision >> after >> >> >> the >> >> >> meeting. >> >> >> >> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >> >> >> >> >> >> >> >> >> >> >> >> Regards, >> >> >> Ivan Kolodyazhny, >> >> >> http://blog.e0ne.info/ >> >> >> >> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > > >> >> >> wrote: >> >> >>> >> >> >>> Hi Kaz and Ivan, >> >> >>> >> >> >>> Yeah, it is worth discussed officially in the horizon team meeting >> or >> >> >>> the >> >> >>> mailing list thread to get a consensus. >> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >> >> >>> >> >> >>> After sending the previous mail, I noticed anther option. I see >> there >> >> >>> are >> >> >>> several options now. >> >> >>> (1) Keep xstatic-core and horizon-core same. >> >> >>> (2) Add specific members to xstatic-core >> >> >>> (3) Add specific horizon-plugin core to xstatic-core >> >> >>> (4) Split core membership into per-repo basis (perhaps too >> >> >>> complicated!!) >> >> >>> >> >> >>> My current vote is (2) as xstatic-core needs to understand what is >> >> >>> xstatic >> >> >>> and how it is maintained. >> >> >>> >> >> >>> Thanks, >> >> >>> Akihiro >> >> >>> >> >> >>> >> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >> >> >>>> >> >> >>>> Hi Akihiro, >> >> >>>> >> >> >>>> >> >> >>>> Thanks for your comment. >> >> >>>> The background of my request to add us to xstatic-core comes from >> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >> discussion. >> >> >>>> >> >> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky- >> discussion >> >> >>>> Line135, "we can share ownership if needed - e0ne" >> >> >>>> >> >> >>>> Just in case, could you guys confirm unified opinion on this >> matter >> >> >>>> as >> >> >>>> Horizon team ? >> >> >>>> >> >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >> >> >>>> because it's easier & smoother to manage what we are taking for >> >> >>>> heat-dashboard. >> >> >>>> On the other hand, I can understand what Akihiro you are saying, >> the >> >> >>>> newly added repos belong to Horizon project & being managed by not >> >> >>>> Horizon core is not consistent. >> >> >>>> Also having exception might make unexpected confusion in near >> future. >> >> >>>> >> >> >>>> Eventually we will follow your opinion, let me hear Horizon team's >> >> >>>> conclusion. >> >> >>>> >> >> >>>> Regards, >> >> >>>> Kaz >> >> >>>> >> >> >>>> >> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >> >> >>>> > Hi Kaz, >> >> >>>> > >> >> >>>> > These repositories are under horizon project. It looks better to >> >> >>>> > keep >> >> >>>> > the >> >> >>>> > current core team. >> >> >>>> > It potentially brings some confusion if we treat some horizon >> >> >>>> > plugin >> >> >>>> > team >> >> >>>> > specially. >> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it >> >> >>>> > would >> >> >>>> > work >> >> >>>> > without problem even if only horizon-core can approve xstatic >> >> >>>> > reviews. >> >> >>>> > >> >> >>>> > >> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara > >: >> >> >>>> >> >> >> >>>> >> Hi Ivan, Horizon folks, >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >> >> >>>> >> landed. >> >> >>>> >> >> >> >>>> >> In project-config for them, I've set same acl-config as the >> >> >>>> >> existing >> >> >>>> >> xstatic repos. >> >> >>>> >> It means only "xstatic-core" can manage the newly created >> repos on >> >> >>>> >> gerrit. 
>> >> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" >> >> >>>> >> like as >> >> >>>> >> what horizon-core is doing ? >> >> >>>> >> >> >> >>>> >> xstatic-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >> >> >>>> >> >> >> >>>> >> heat-dashboard-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >> >> >>>> >> >> >> >>>> >> Of course, we will surely touch only what we made, just would >> like >> >> >>>> >> to >> >> >>>> >> manage them smoothly by ourselves. >> >> >>>> >> In case we need to touch the other ones, will ask Horizon team >> for >> >> >>>> >> help. >> >> >>>> >> >> >> >>>> >> Thanks in advance. >> >> >>>> >> >> >> >>>> >> Regards, >> >> >>>> >> Kaz >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >> >> >>>> >> > Hi Horizon Team, >> >> >>>> >> > >> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin >> >> >>>> >> > option, >> >> >>>> >> > and submitted a patch for it. >> >> >>>> >> > Could you please help to review the patch. >> >> >>>> >> > >> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >> >> >>>> >> > https://review.openstack.org/#/c/552259/ >> >> >>>> >> > >> >> >>>> >> > Thank you very much. >> >> >>>> >> > >> >> >>>> >> > Best Regards, >> >> >>>> >> > Xinni >> >> >>>> >> > >> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >> >> >>>> >> > >> >> >>>> >> > wrote: >> >> >>>> >> >> >> >> >>>> >> >> Hi Kaz, >> >> >>>> >> >> >> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >> patches >> >> >>>> >> >> >> >> >>>> >> >> Regards, >> >> >>>> >> >> Ivan Kolodyazhny, >> >> >>>> >> >> http://blog.e0ne.info/ >> >> >>>> >> >> >> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >> >> >>>> >> >> >> >> >>>> >> >> wrote: >> >> >>>> >> >>> >> >> >>>> >> >>> Hi Ivan & Horizon folks, >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> Now we are submitting a couple of patches to have the new >> >> >>>> >> >>> xstatic >> >> >>>> >> >>> modules. >> >> >>>> >> >>> Let me request you to have review the following patches. >> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. >> >> >>>> >> >>> >> >> >>>> >> >>> project-config >> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >> >> >>>> >> >>> >> >> >>>> >> >>> governance >> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >> >> >>>> >> >>> >> >> >>>> >> >>> Thanks in advance:) >> >> >>>> >> >>> >> >> >>>> >> >>> Regards, >> >> >>>> >> >>> Kaz >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >> >> >>>> >> >>> : >> >> >>>> >> >>> > Yes, please do that. We can then discuss in the review >> about >> >> >>>> >> >>> > technical >> >> >>>> >> >>> > details. >> >> >>>> >> >>> > >> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >> >> >>>> >> >>> > >> >> >>>> >> >>> > wrote: >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Hi, Akihiro >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Thanks for the quick reply. >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >> should >> >> >>>> >> >>> >> not >> >> >>>> >> >>> >> be >> >> >>>> >> >>> >> modified. >> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >> >> >>>> >> >>> >> and I think maybe there could be one option like >> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >> >> >>>> >> >>> >> STATICFILES_DIRS. 
>> >> >>>> >> >>> >> I am considering to add a bug report to describe it at >> >> >>>> >> >>> >> first, >> >> >>>> >> >>> >> and >> >> >>>> >> >>> >> give >> >> >>>> >> >>> >> a >> >> >>>> >> >>> >> patch later maybe. >> >> >>>> >> >>> >> Is that ok with the Horizon team? >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Best Regards. >> >> >>>> >> >>> >> Xinni >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> wrote: >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Hi Xinni, >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >> >> >>>> >> >>> >>> : >> >> >>>> >> >>> >>> > Hello Horizon Team, >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > I would like to hear about your opinions about how to >> >> >>>> >> >>> >>> > add >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic >> >> >>>> >> >>> >>> > modules to horizon settings. >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >> files >> >> >>>> >> >>> >>> > issue, >> >> >>>> >> >>> >>> > thanks >> >> >>>> >> >>> >>> > for >> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them >> and >> >> >>>> >> >>> >>> > referencing as >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic-* libs. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Thanks for moving this forward. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded >> as >> >> >>>> >> >>> >>> > openstack >> >> >>>> >> >>> >>> > official >> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >> >> >>>> >> >>> >>> > hesitate >> >> >>>> >> >>> >>> > to >> >> >>>> >> >>> >>> > decide >> >> >>>> >> >>> >>> > how to >> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >> >> >>>> >> >>> >>> > STATICFILES_DIRS >> >> >>>> >> >>> >>> > in >> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static files >> >> >>>> >> >>> >>> > could >> >> >>>> >> >>> >>> > be >> >> >>>> >> >>> >>> > automatically >> >> >>>> >> >>> >>> > collected by *collectstatic* process. >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Currently Horizon defines BASE_XSTATIC_MODULES in >> >> >>>> >> >>> >>> > openstack_dashboard/utils/settings.py and the >> relevant >> >> >>>> >> >>> >>> > static >> >> >>>> >> >>> >>> > fils >> >> >>>> >> >>> >>> > are >> >> >>>> >> >>> >>> > added >> >> >>>> >> >>> >>> > to STATICFILES_DIRS before it updates any Horizon >> plugin >> >> >>>> >> >>> >>> > dashboard. >> >> >>>> >> >>> >>> > We may want new plugin setting keywords ( something >> >> >>>> >> >>> >>> > similar >> >> >>>> >> >>> >>> > to >> >> >>>> >> >>> >>> > ADD_JS_FILES) >> >> >>>> >> >>> >>> > to update horizon XSTATIC_MODULES (or directly update >> >> >>>> >> >>> >>> > STATICFILES_DIRS). >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> IMHO it is better to allow horizon plugins to add >> xstatic >> >> >>>> >> >>> >>> modules >> >> >>>> >> >>> >>> through horizon plugin settings. I don't think it is a >> >> >>>> >> >>> >>> good >> >> >>>> >> >>> >>> idea >> >> >>>> >> >>> >>> to >> >> >>>> >> >>> >>> add a new entry in BASE_XSTATIC_MODULES based on >> horizon >> >> >>>> >> >>> >>> plugin >> >> >>>> >> >>> >>> usages. It makes difficult to track why and where a >> >> >>>> >> >>> >>> xstatic >> >> >>>> >> >>> >>> module >> >> >>>> >> >>> >>> in >> >> >>>> >> >>> >>> BASE_XSTATIC_MODULES is used. 
>> >> >>>> >> >>> >>> Multiple horizon plugins can add a same entry, so >> horizon >> >> >>>> >> >>> >>> code >> >> >>>> >> >>> >>> to >> >> >>>> >> >>> >>> handle plugin settings should merge multiple entries >> to a >> >> >>>> >> >>> >>> single >> >> >>>> >> >>> >>> one >> >> >>>> >> >>> >>> hopefully. >> >> >>>> >> >>> >>> My vote is to enhance the horizon plugin settings. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Akihiro >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Looking forward to hearing any suggestions from you >> >> >>>> >> >>> >>> > guys, >> >> >>>> >> >>> >>> > and >> >> >>>> >> >>> >>> > Best Regards, >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > Xinni Ge >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > ______________________________ >> ____________________________________________ >> >> >>>> >> >>> >>> > OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >>> > questions) >> >> >>>> >> >>> >>> > Unsubscribe: >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> ______________________________ >> ____________________________________________ >> >> >>>> >> >>> >>> OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >>> questions) >> >> >>>> >> >>> >>> Unsubscribe: >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> -- >> >> >>>> >> >>> >> 葛馨霓 Xinni Ge >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> ______________________________ >> ____________________________________________ >> >> >>>> >> >>> >> OpenStack Development Mailing List (not for usage >> >> >>>> >> >>> >> questions) >> >> >>>> >> >>> >> Unsubscribe: >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> >> >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > ______________________________ >> ____________________________________________ >> >> >>>> >> >>> > OpenStack Development Mailing List (not for usage >> questions) >> >> >>>> >> >>> > Unsubscribe: >> >> >>>> >> >>> > >> >> >>>> >> >>> > OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >>> > >> >> >>>> >> >>> > >> >> >>>> >> >>> > http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >>> > >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> ______________________________ >> 
____________________________________________ >> >> >>>> >> >>> OpenStack Development Mailing List (not for usage >> questions) >> >> >>>> >> >>> Unsubscribe: >> >> >>>> >> >>> OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >>> >> >> >>>> >> >>> http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> >> >> >>>> >> >> ______________________________ >> ____________________________________________ >> >> >>>> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >>>> >> >> Unsubscribe: >> >> >>>> >> >> OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> >>>> >> >> >> >> >>>> >> >> http://lists.openstack.org/cgi >> -bin/mailman/listinfo/openstack-dev >> >> >>>> >> >> >> >> >>>> >> > >> >> >>>> >> > >> >> >>>> >> > >> >> >>>> >> > -- >> >> >>>> >> > 葛馨霓 Xinni Ge >> >> >>>> >> > >> >> >>>> >> > >> >> >>>> >> > >> >> >>>> >> > >> >> >>>> >> > ____________________________________________________________ >> ______________ >> >> >>>> >> > OpenStack Development Mailing List (not for usage questions) >> >> >>>> >> > Unsubscribe: >> >> >>>> >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> >> >>>> >> > >> >> >>>> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>>> >> > >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> ____________________________________________________________ >> ______________ >> >> >>>> >> OpenStack Development Mailing List (not for usage questions) >> >> >>>> >> Unsubscribe: >> >> >>>> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > >> >> >>>> > ____________________________________________________________ >> ______________ >> >> >>>> > OpenStack Development Mailing List (not for usage questions) >> >> >>>> > Unsubscribe: >> >> >>>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstac >> k-dev >> >> >>>> > >> >> >>>> >> >> >>>> >> >> >>>> >> >> >>>> ____________________________________________________________ >> ______________ >> >> >>>> OpenStack Development Mailing List (not for usage questions) >> >> >>>> Unsubscribe: >> >> >>>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>> >> >> >>> >> >> >>> >> >> >>> >> >> >>> ____________________________________________________________ >> ______________ >> >> >>> OpenStack Development Mailing List (not for usage questions) >> >> >>> Unsubscribe: >> >> >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >>> >> >> >> >> >> >> >> >> >> >> >> >> ____________________________________________________________ >> ______________ >> >> >> OpenStack Development Mailing List (not for usage questions) >> >> >> Unsubscribe: >> >> >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> >> >> >> ____________________________________________________________ >> ______________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> 
Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > ____________________________________________________________ >> ______________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.op >> enstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > 葛馨霓 Xinni Ge > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Apr 4 05:55:38 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 4 Apr 2018 14:55:38 +0900 Subject: [openstack-dev] [horizon] [heat-dashboard] Horizon plugin settings for new xstatic modules In-Reply-To: References: Message-ID: Hi Xinni, There is no need that you push a tag manually for official deliverables. You can propose a patch to openstack/releases repository. Horizon PTL or release liaison (at now both Ivan) can confirm it and the release team will approve it. Once it is approved, a release tag will be added and a deliverable will be published automatically by the infra script (if you've setup project-config appropriately). Akihiro 2018-04-04 14:34 GMT+09:00 Xinni Ge : > Hi Ivan and other Horizon team member, > > Thanks for adding us into xstatic-core group. > But I still need your opinion and help to release the newly-added xstatic > packages to pypi index. > > Current `xstatic-core` group doesn't have the permission to PUSH SIGNED > TAG, and I cannot release the first non-trivial version. > > If I (or maybe Kaz) could be added into xstatic-release group, we can > release all the 8 packages by ourselves. > > Or, we are very appreciate if any member of xstatic-release could help to > do it. > > Just for your quick access, here is the link of access permission page of > one xstatic package. > https://review.openstack.org/#/admin/projects/openstack/ > xstatic-angular-material,access > > -- > Best Regards, > Xinni > > On Thu, Mar 29, 2018 at 9:59 AM, Kaz Shinohara > wrote: > >> Hi Ivan, >> >> >> Thank you very much. >> I've confirmed that all of us have been added to xstatic-core. >> >> As discussed, we will focus on the followings what we added for >> heat-dashboard, will not touch other xstatic repos as core. >> >> xstatic-angular-material >> xstatic-angular-notify >> xstatic-angular-uuid >> xstatic-angular-vis >> xstatic-filesaver >> xstatic-js-yaml >> xstatic-json2yaml >> xstatic-vis >> >> Regards, >> Kaz >> >> 2018-03-29 5:40 GMT+09:00 Ivan Kolodyazhny : >> > Hi Kuz, >> > >> > Don't worry, we're on the same page with you. I added both you, Xinni >> and >> > Keichii to the xstatic-core group. Thank you for your contributions! 
>> > >> > Regards, >> > Ivan Kolodyazhny, >> > http://blog.e0ne.info/ >> > >> > On Wed, Mar 28, 2018 at 5:18 PM, Kaz Shinohara >> wrote: >> >> >> >> Hi Ivan & Horizon folks >> >> >> >> >> >> AFAIK, Horizon team had conclusion that you will add the specific >> >> members to xstatic-core, correct ? >> >> Can I ask you to add the following members ? >> >> # All of tree are heat-dashboard core. >> >> >> >> Kazunori Shinohara / ksnhr.tech at gmail.com #myself >> >> Xinni Ge / xinni.ge1990 at gmail.com >> >> Keiichi Hikita / keiichi.hikita at gmail.com >> >> >> >> Please give me a shout, if we are not on same page or any concern. >> >> >> >> Regards, >> >> Kaz >> >> >> >> >> >> 2018-03-21 22:29 GMT+09:00 Kaz Shinohara : >> >> > Hi Ivan, Akihiro, >> >> > >> >> > >> >> > Thanks for your kind arrangement. >> >> > Looking forward to hearing your decision soon. >> >> > >> >> > Regards, >> >> > Kaz >> >> > >> >> > 2018-03-21 21:43 GMT+09:00 Ivan Kolodyazhny : >> >> >> HI Team, >> >> >> >> >> >> From my perspective, I'm OK both with #2 and #3 options. I agree >> that >> >> >> #4 >> >> >> could be too complicated for us. Anyway, we've got this topic on the >> >> >> meeting >> >> >> agenda [1] so we'll discuss it there too. I'll share our decision >> after >> >> >> the >> >> >> meeting. >> >> >> >> >> >> [1] https://wiki.openstack.org/wiki/Meetings/Horizon >> >> >> >> >> >> >> >> >> >> >> >> Regards, >> >> >> Ivan Kolodyazhny, >> >> >> http://blog.e0ne.info/ >> >> >> >> >> >> On Tue, Mar 20, 2018 at 10:45 AM, Akihiro Motoki > > >> >> >> wrote: >> >> >>> >> >> >>> Hi Kaz and Ivan, >> >> >>> >> >> >>> Yeah, it is worth discussed officially in the horizon team meeting >> or >> >> >>> the >> >> >>> mailing list thread to get a consensus. >> >> >>> Hopefully you can add this topic to the horizon meeting agenda. >> >> >>> >> >> >>> After sending the previous mail, I noticed anther option. I see >> there >> >> >>> are >> >> >>> several options now. >> >> >>> (1) Keep xstatic-core and horizon-core same. >> >> >>> (2) Add specific members to xstatic-core >> >> >>> (3) Add specific horizon-plugin core to xstatic-core >> >> >>> (4) Split core membership into per-repo basis (perhaps too >> >> >>> complicated!!) >> >> >>> >> >> >>> My current vote is (2) as xstatic-core needs to understand what is >> >> >>> xstatic >> >> >>> and how it is maintained. >> >> >>> >> >> >>> Thanks, >> >> >>> Akihiro >> >> >>> >> >> >>> >> >> >>> 2018-03-20 17:17 GMT+09:00 Kaz Shinohara : >> >> >>>> >> >> >>>> Hi Akihiro, >> >> >>>> >> >> >>>> >> >> >>>> Thanks for your comment. >> >> >>>> The background of my request to add us to xstatic-core comes from >> >> >>>> Ivan's comment in last PTG's etherpad for heat-dashboard >> discussion. >> >> >>>> >> >> >>>> https://etherpad.openstack.org/p/heat-dashboard-ptg-rocky- >> discussion >> >> >>>> Line135, "we can share ownership if needed - e0ne" >> >> >>>> >> >> >>>> Just in case, could you guys confirm unified opinion on this >> matter >> >> >>>> as >> >> >>>> Horizon team ? >> >> >>>> >> >> >>>> Frankly speaking I'm feeling the benefit to make us xstatic-core >> >> >>>> because it's easier & smoother to manage what we are taking for >> >> >>>> heat-dashboard. >> >> >>>> On the other hand, I can understand what Akihiro you are saying, >> the >> >> >>>> newly added repos belong to Horizon project & being managed by not >> >> >>>> Horizon core is not consistent. >> >> >>>> Also having exception might make unexpected confusion in near >> future. 
>> >> >>>> >> >> >>>> Eventually we will follow your opinion, let me hear Horizon team's >> >> >>>> conclusion. >> >> >>>> >> >> >>>> Regards, >> >> >>>> Kaz >> >> >>>> >> >> >>>> >> >> >>>> 2018-03-20 12:58 GMT+09:00 Akihiro Motoki : >> >> >>>> > Hi Kaz, >> >> >>>> > >> >> >>>> > These repositories are under horizon project. It looks better to >> >> >>>> > keep >> >> >>>> > the >> >> >>>> > current core team. >> >> >>>> > It potentially brings some confusion if we treat some horizon >> >> >>>> > plugin >> >> >>>> > team >> >> >>>> > specially. >> >> >>>> > Reviewing xstatic repos would be a small burden, wo I think it >> >> >>>> > would >> >> >>>> > work >> >> >>>> > without problem even if only horizon-core can approve xstatic >> >> >>>> > reviews. >> >> >>>> > >> >> >>>> > >> >> >>>> > 2018-03-20 10:02 GMT+09:00 Kaz Shinohara > >: >> >> >>>> >> >> >> >>>> >> Hi Ivan, Horizon folks, >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> Now totally 8 xstatic-** repos for heat-dashboard have been >> >> >>>> >> landed. >> >> >>>> >> >> >> >>>> >> In project-config for them, I've set same acl-config as the >> >> >>>> >> existing >> >> >>>> >> xstatic repos. >> >> >>>> >> It means only "xstatic-core" can manage the newly created >> repos on >> >> >>>> >> gerrit. >> >> >>>> >> Could you kindly add "heat-dashboard-core" into "xstatic-core" >> >> >>>> >> like as >> >> >>>> >> what horizon-core is doing ? >> >> >>>> >> >> >> >>>> >> xstatic-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/385,members >> >> >>>> >> >> >> >>>> >> heat-dashboard-core >> >> >>>> >> https://review.openstack.org/#/admin/groups/1844,members >> >> >>>> >> >> >> >>>> >> Of course, we will surely touch only what we made, just would >> like >> >> >>>> >> to >> >> >>>> >> manage them smoothly by ourselves. >> >> >>>> >> In case we need to touch the other ones, will ask Horizon team >> for >> >> >>>> >> help. >> >> >>>> >> >> >> >>>> >> Thanks in advance. >> >> >>>> >> >> >> >>>> >> Regards, >> >> >>>> >> Kaz >> >> >>>> >> >> >> >>>> >> >> >> >>>> >> 2018-03-14 15:12 GMT+09:00 Xinni Ge : >> >> >>>> >> > Hi Horizon Team, >> >> >>>> >> > >> >> >>>> >> > I reported a bug about lack of ``ADD_XSTATIC_MODULES`` plugin >> >> >>>> >> > option, >> >> >>>> >> > and submitted a patch for it. >> >> >>>> >> > Could you please help to review the patch. >> >> >>>> >> > >> >> >>>> >> > https://bugs.launchpad.net/horizon/+bug/1755339 >> >> >>>> >> > https://review.openstack.org/#/c/552259/ >> >> >>>> >> > >> >> >>>> >> > Thank you very much. >> >> >>>> >> > >> >> >>>> >> > Best Regards, >> >> >>>> >> > Xinni >> >> >>>> >> > >> >> >>>> >> > On Tue, Mar 13, 2018 at 6:41 PM, Ivan Kolodyazhny >> >> >>>> >> > >> >> >>>> >> > wrote: >> >> >>>> >> >> >> >> >>>> >> >> Hi Kaz, >> >> >>>> >> >> >> >> >>>> >> >> Thanks for cleaning this up. I put +1 on both of these >> patches >> >> >>>> >> >> >> >> >>>> >> >> Regards, >> >> >>>> >> >> Ivan Kolodyazhny, >> >> >>>> >> >> http://blog.e0ne.info/ >> >> >>>> >> >> >> >> >>>> >> >> On Tue, Mar 13, 2018 at 4:48 AM, Kaz Shinohara >> >> >>>> >> >> >> >> >>>> >> >> wrote: >> >> >>>> >> >>> >> >> >>>> >> >>> Hi Ivan & Horizon folks, >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> Now we are submitting a couple of patches to have the new >> >> >>>> >> >>> xstatic >> >> >>>> >> >>> modules. >> >> >>>> >> >>> Let me request you to have review the following patches. >> >> >>>> >> >>> We need Horizon PTL's +1 to move these forward. 
>> >> >>>> >> >>> >> >> >>>> >> >>> project-config >> >> >>>> >> >>> https://review.openstack.org/#/c/551978/ >> >> >>>> >> >>> >> >> >>>> >> >>> governance >> >> >>>> >> >>> https://review.openstack.org/#/c/551980/ >> >> >>>> >> >>> >> >> >>>> >> >>> Thanks in advance:) >> >> >>>> >> >>> >> >> >>>> >> >>> Regards, >> >> >>>> >> >>> Kaz >> >> >>>> >> >>> >> >> >>>> >> >>> >> >> >>>> >> >>> 2018-03-12 20:00 GMT+09:00 Radomir Dopieralski >> >> >>>> >> >>> : >> >> >>>> >> >>> > Yes, please do that. We can then discuss in the review >> about >> >> >>>> >> >>> > technical >> >> >>>> >> >>> > details. >> >> >>>> >> >>> > >> >> >>>> >> >>> > On Mon, Mar 12, 2018 at 2:54 AM, Xinni Ge >> >> >>>> >> >>> > >> >> >>>> >> >>> > wrote: >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Hi, Akihiro >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Thanks for the quick reply. >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> I agree with your opinion that BASE_XSTATIC_MODULES >> should >> >> >>>> >> >>> >> not >> >> >>>> >> >>> >> be >> >> >>>> >> >>> >> modified. >> >> >>>> >> >>> >> It is much better to enhance horizon plugin settings, >> >> >>>> >> >>> >> and I think maybe there could be one option like >> >> >>>> >> >>> >> ADD_XSTATIC_MODULES. >> >> >>>> >> >>> >> This option adds the plugin's xstatic files in >> >> >>>> >> >>> >> STATICFILES_DIRS. >> >> >>>> >> >>> >> I am considering to add a bug report to describe it at >> >> >>>> >> >>> >> first, >> >> >>>> >> >>> >> and >> >> >>>> >> >>> >> give >> >> >>>> >> >>> >> a >> >> >>>> >> >>> >> patch later maybe. >> >> >>>> >> >>> >> Is that ok with the Horizon team? >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> Best Regards. >> >> >>>> >> >>> >> Xinni >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> On Fri, Mar 9, 2018 at 11:47 PM, Akihiro Motoki >> >> >>>> >> >>> >> >> >> >>>> >> >>> >> wrote: >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Hi Xinni, >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> 2018-03-09 12:05 GMT+09:00 Xinni Ge >> >> >>>> >> >>> >>> : >> >> >>>> >> >>> >>> > Hello Horizon Team, >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > I would like to hear about your opinions about how to >> >> >>>> >> >>> >>> > add >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic >> >> >>>> >> >>> >>> > modules to horizon settings. >> >> >>>> >> >>> >>> > >> >> >>>> >> >>> >>> > As for Heat-dashboard project embedded 3rd-party >> files >> >> >>>> >> >>> >>> > issue, >> >> >>>> >> >>> >>> > thanks >> >> >>>> >> >>> >>> > for >> >> >>>> >> >>> >>> > your advices in Dublin PTG, we are now removing them >> and >> >> >>>> >> >>> >>> > referencing as >> >> >>>> >> >>> >>> > new >> >> >>>> >> >>> >>> > xstatic-* libs. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> Thanks for moving this forward. >> >> >>>> >> >>> >>> >> >> >>>> >> >>> >>> > So we installed the new xstatic files (not uploaded >> as >> >> >>>> >> >>> >>> > openstack >> >> >>>> >> >>> >>> > official >> >> >>>> >> >>> >>> > repos yet) in our development environment now, but >> >> >>>> >> >>> >>> > hesitate >> >> >>>> >> >>> >>> > to >> >> >>>> >> >>> >>> > decide >> >> >>>> >> >>> >>> > how to >> >> >>>> >> >>> >>> > add the new installed xstatic lib path to >> >> >>>> >> >>> >>> > STATICFILES_DIRS >> >> >>>> >> >>> >>> > in >> >> >>>> >> >>> >>> > openstack_dashboard.settings so that the static files >> >> >>>> >> >>> >>> > could >> >> >>>> >> >>> >>> > be >> >> >>>> >> >>> >>> > automatically >> >> >>>> >> >>> >>> > collected by *collectstatic* process. 
>>>>>>>>>>
>>>>>>>>>> Currently Horizon defines BASE_XSTATIC_MODULES in
>>>>>>>>>> openstack_dashboard/utils/settings.py and the relevant static
>>>>>>>>>> files are added to STATICFILES_DIRS before it updates any
>>>>>>>>>> Horizon plugin dashboard. We may want new plugin setting
>>>>>>>>>> keywords (something similar to ADD_JS_FILES) to update horizon
>>>>>>>>>> XSTATIC_MODULES (or directly update STATICFILES_DIRS).
>>>>>>>>>
>>>>>>>>> IMHO it is better to allow horizon plugins to add xstatic
>>>>>>>>> modules through the horizon plugin settings. I don't think it is
>>>>>>>>> a good idea to add a new entry in BASE_XSTATIC_MODULES based on
>>>>>>>>> horizon plugin usage; it makes it difficult to track why and
>>>>>>>>> where an xstatic module in BASE_XSTATIC_MODULES is used.
>>>>>>>>> Multiple horizon plugins can add the same entry, so the horizon
>>>>>>>>> code that handles plugin settings should hopefully merge
>>>>>>>>> multiple entries into a single one.
>>>>>>>>> My vote is to enhance the horizon plugin settings.
>>>>>>>>>
>>>>>>>>> Akihiro
>>>>>>>>>
>>>>>>>>>> Looking forward to hearing any suggestions from you guys, and
>>>>>>>>>> Best Regards,
>>>>>>>>>>
>>>>>>>>>> Xinni Ge
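To make the option discussed above concrete: a sketch of what an
ADD_XSTATIC_MODULES entry in a horizon plugin's "enabled" file could look
like. The option name comes from the thread; the tuple format simply
mirrors the existing BASE_XSTATIC_MODULES convention, and the file path,
package and file names below are made up, so treat the details as
assumptions until the patch under review settles them:

  # e.g. in a hypothetical heat_dashboard/enabled/_9000_settings.py
  ADD_XSTATIC_MODULES = [
      # (xstatic package, list of files to load from it)
      ('xstatic.pkg.some_lib', ['some-lib.js']),
  ]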
>
> --
> 葛馨霓 Xinni Ge

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hejianle at unitedstack.com  Wed Apr 4 06:52:31 2018
From: hejianle at unitedstack.com (=?utf-8?B?5L2V5YGl5LmQ?=)
Date: Wed, 4 Apr 2018 14:52:31 +0800
Subject: [openstack-dev] [keystone] Could keystone to keystone federation
 be deployed on Centos?
Message-ID: 

Hi all,

Can keystone to keystone federation be deployed on CentOS? I notice that
all the documentation describes deployment on Ubuntu. If it can, are there
any documents about deploying k2k on CentOS?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ifat.afek at nokia.com  Wed Apr 4 07:21:14 2018
From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava))
Date: Wed, 4 Apr 2018 07:21:14 +0000
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr>
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr>
 <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr>
 <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr>
 <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr>
 <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com>
 <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr>
 <000a01d3caf4$90584010$b108c030$@ssu.ac.kr>
 <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr>
Message-ID: 

Hi Minwook,

I discussed this issue with a Mistral contributor. Mistral has a long list
of actions that can be used. Specifically, you can use the std.ssh action
to execute shell scripts. Some useful commands:

  mistral action-list
  mistral action-get <action>

I'm not sure about the output of std.ssh, and whether you can get it from
the action. I suggest you try it and see how it works. The action is
implemented here:
https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py

If std.ssh does not suit your needs, you also have the option to implement
and run your own action in Mistral (either as an ssh action or as Python
code).

And BTW, it is not related to your current use case, but we can also add
Vitrage actions to Mistral, so the user can access Vitrage information
(get topology, get alarms) from Mistral workflows.

Best regards,
Ifat
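For reference, a quick way to poke at std.ssh before writing a full
workflow is to run the action directly from the CLI. A minimal sketch --
the host, user and script path below are made up, and it assumes the
executor can already reach the target over key-based SSH:

  # run a one-off check on a remote host through Mistral's std.ssh action
  mistral run-action std.ssh '{"cmd": "bash /opt/checks/p2p_check.sh", "host": "192.0.2.10", "username": "centos"}'

The JSON result printed by run-action should contain the command's output.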
From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 3 April 2018 at 15:19
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. Your comments have been a great help to the
proposal (sorry, I did not think we could use Mistral).

If we use a Mistral workflow for the proposal, we can get better results
(good results on performance and code conciseness). Also, if we use a
Mistral workflow, we do not need to write any unnecessary code. Since I
don't know Mistral yet, I think it would be better to get a grasp of it
first and then settle on the most efficient design, possibly including
Mistral.

If we run a check through a Mistral workflow, how about providing users
with a choice of tools that have the capability to perform checks? We can
get the results of the check through Mistral and the tools, but I think we
need at least a minimal way to manage them. What do you think?

I attached a picture of the actual UI that I simply implemented. I hope it
helps you understand. (The parameters and content have no meaning and are
a simple example.) : )

Thanks.

Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Tuesday, April 3, 2018 8:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thanks for the explanation, I understand the reasons for not running these
checks on a regular basis in Zabbix or other monitoring tools. It makes
sense. However, I don't want to re-invent the wheel and add to Vitrage
functionality that already exists in other projects.

How about using Mistral for the purpose of manually running these extra
checks? If you prepare the script/agent in advance, as well as the Mistral
workflow, I believe that Mistral can successfully execute the check and
return the results.

I'm not so sure about the UI part, we will have to figure out how and
where the user can see the output. But it will save a lot of effort around
managing the checks, running a new service, supporting a new API, etc.

What do you think?

Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 3 April 2018 at 5:36
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

I also thought about several scenarios that use monitoring tools like
Zabbix, Nagios, and Prometheus. But there are some limitations, so we have
to think about it. We also need to think about targets, scope, and so on.

The reason I do not see tools like Zabbix, Nagios, and Prometheus as a way
to run the checks is that we would need to configure an agent or an
exporter. I think it is not hard to configure an agent for monitored
objects such as a physical host, but the scope of the idea, I think,
includes the VM's interior. Therefore, configuring the agent automatically
inside the VM may not be easy (although we can use parameters like
user-data).

If we exclude VM-internal checks from the scope, we can simply perform a
check via Zabbix (like Zabbix's remote command and history). On the other
hand, if we include the inside of the VMs in the scope and configure each
of them, we have a rather constant overhead. The check service may incur
temporary overhead, but the agent configuration causes constant overhead.
And Zabbix history can be another task for Vitrage.

If we configure the agents ourselves and exclude the VMs' internal checks,
we can provide the functionality with simple code. How does that sound?

Thank you.

Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Monday, April 2, 2018 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thinking about it again, writing a new service for these checks might be
an unnecessary overhead. Have you considered using an existing tool, like
Zabbix, for running such checks?

If you use Zabbix, you can define new triggers that run the new checks,
and whenever needed the user can ask to open Zabbix and show the relevant
metrics. The format will not be exactly the same as in your example, but
it will save a lot of work and spare you the need to write and manage a
new service.

Some technical details:

· The current information that Vitrage stores is not enough for opening
the right Zabbix page. We will need to keep a little more data, like the
item id, on the alarm vertex. But it can be done easily.
· A relevant Zabbix API is history.get [1]
· If you are not using Zabbix, I assume that other monitoring tools have
similar capabilities

What do you think? Do you think it can work with your scenario? Or do you
see a benefit to the user in viewing the data in the format that you
suggested?

[1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get

Thanks,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 2 April 2018 at 4:51
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thank you for the reply. :) This is my opinion only, so if I'm wrong, we
can change the implementation part at any time (even if it differs from my
initial intention).

The same security issues arise as you say. But right now Vitrage does not
call external APIs. The Vitrage-dashboard uses the Vitrageclient library
for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it
would have the following flow:
Vitrage-dashboard requests checks using the Vitrageclient library ->
Vitrage receives the API call -> api/controllers/v1/checks.py is called ->
the checks service is called.

Following this flow, the point of passing through the Vitrage API is data
passing and function calls. I think Vitrage does not need to call external
APIs.

If you do not want to go through the Vitrage API, we need to create a
function for the check action in the Vitrage-dashboard, and write code to
call that function. If I'm wrong, please tell me anytime. :)

Thank you.

Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Sunday, April 1, 2018 3:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I understand your concern about the security issue. But how would that be
different if the API call passed through the Vitrage API? The
authentication from vitrage-dashboard to the Vitrage API will work, but
then Vitrage will call an external API and you'll have the same security
issue, right? I don't understand the difference between calling the
external component from vitrage-dashboard and calling it from vitrage.

Best regards,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 14:51
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. : ) I wrote my opinion on your comments.

"Why do you think the request should pass through the Vitrage API? Why
can't vitrage-dashboard call the check component directly?"

Authentication issues: I think the check component is a separate
component, based on an API. In my opinion, if the check component has an
API address separate from Vitrage in order to receive requests from the
Vitrage-dashboard, then the Vitrage-dashboard needs to know that API
address. This can result in a request/response path that is open to
anyone, regardless of the authentication that openstack provides, between
the Vitrage-dashboard and the check component. That is possible not only
through the Vitrage-dashboard, but also with simple commands such as curl.
(I think it is unnecessary to implement a separate authentication system
for the check component.) In short, anyone who knows the API address of
the check component could make hosts and VMs execute system commands.

"what should happen if the user closes the check window before the checks
are over? I assume that the checks will finish, but the user won't be able
to see the results?"

If the window is closed before the check is finished, the user cannot see
the result. To solve this problem, I think temporarily saving a list of
recent results is a solution. By storing a temporary list (for example, up
to 10 entries), the user can see the previous results; it should also be
possible for the user to empty the list. How does that sound?

Thank you.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why
can't vitrage-dashboard call the check component directly?
And another question: what should happen if the user closes the check
window before the checks are over? I assume that the checks will finish,
but the user won't be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team.

I would like to explain more about the implementation part of the mail I
sent last time. The flow is as follows:

Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component

Last time I mentioned an api-handler, but it would be better to call the
check component directly from Vitrage-api without it. I hope this helps
you understand.

Thank you.

Best Regards,
Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. : )

This is a proposal that we expect to be useful from a user perspective.
From a manager's point of view, we need an implementation that minimizes
the overhead incurred by the proposal. The answers to some of your
questions are:

• I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.

Yes, that's right. We do not need to save the results in Vitrage because
we just need to look at them. It would also be possible to implement the
function directly in Vitrage-dashboard, separately from Vitrage, like the
add-action-list panel, but that does not seem to be enough to implement
all the functions. If you do not mind, we would have the following flow:

1. The user requests the check action from the vitrage-dashboard
   (add-action-list-panel).
2. The check component is called through Vitrage's API handler.
3. The check component executes the command and returns the result.

Because this is my opinion only, please tell us if there is an unnecessary
part. :)

• Do you expect the user to click an entity, select an action to run
(e.g. 'P2P check'), and wait by the open panel for the results? What if
the user switches to another menu before the check is done? What if the
user asks to run an additional check in parallel? What if the user wants
to see a previous result again?

My idea was to select the task, wait for the results in an open panel, and
then see them instantly in the panel. If we switch to another menu before
the scan is complete, we will not be able to see the results. Parallel
checking is indeed an issue (it can cause excessive overhead). For earlier
results, it may be okay to save them temporarily until we exit the panel;
we can then see the previous results through the temporarily saved ones.

• Any thoughts on what component will implement those checks? Or maybe
these will be just scripts?

I think I will implement a separate component to handle them.

• It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

It is expected to be really good.
It would be very useful if the entity graph generated an alarm based on
the check result. I think we can talk about that part in detail later.

My answers are my own opinions and assumptions. If you think my
implementation is wrong, or inefficient, please do not hesitate to tell
me.

Thanks.

Best Regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user's perspective, these are very good ideas. I have
some questions regarding the UX and the implementation, since I'm trying
to think what could be the best way to execute such actions from Vitrage.

· I assume that these checks will not be implemented in Vitrage, and the
results will not be stored in Vitrage, right? Vitrage's role is to be a
place where it is easy and intuitive for the user to execute external
actions/checks.
· Do you expect the user to click an entity, select an action to run
(e.g. 'P2P check'), and wait by the open panel for the results? What if
the user switches to another menu before the check is done? What if the
user asks to run an additional check in parallel? What if the user wants
to see a previous result again?
· Any thoughts on what component will implement those checks? Or maybe
these will be just scripts?
· It could be nice if, as a result of an action check, a new alarm will be
raised in Vitrage. A specific alarm with the additional details that were
found. However, it might not be trivial to implement it. We could think
about it as phase #2.

Best Regards,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 27 March 2018 at 14:45
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team.

I am currently working on the Vitrage-Dashboard proposal for the 'Add
action list panel for entity click action'
(https://review.openstack.org/#/c/531141/), and I would like to make a new
proposal based on that action list panel.

The new proposal is to provide multidimensional analysis capabilities
across the entities that make up the infrastructure in the entity graph.

Vitrage's entity graph allows us to efficiently monitor alarms from
various monitoring tools. Currently, when there is a problem with a VM or
host, or when we want to check its status, we need to access the console
of each VM and host individually. This causes unnecessary work as the
number of VMs and hosts increases.

My suggestion is that with a large number of VMs and hosts, we should not
need to connect to each VM or host console directly to enter system
commands. Instead, through this proposal, we can send a system command to
the VMs and hosts in the cloud and just check the results. I have written
some use-cases to explain the function.

From an implementation perspective, the goals of the proposal are:

1. To execute commands without installing any agent/client that could
   cause load on the VMs and hosts.
2. To provide a simple UI so that users or administrators can get the
   desired information from multiple VMs and hosts.
3. To make the results easy to grasp at a glance.
4. To implement a component that can support many additional scenarios in
   a plug-in format.
I would be happy if you could comment on the proposal or ask questions.

Thanks.

Best Regards,
Minwook.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dougal at redhat.com  Wed Apr 4 07:32:23 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Wed, 4 Apr 2018 08:32:23 +0100
Subject: [openstack-dev] [mistral] Bug Triage on Friday
Message-ID: 

Hey all,

During the office hour on Friday, and maybe some time after it, I am going
to do some Mistral bug triage, planning and general tidying up on
Launchpad and Gerrit. If you are able and want to join, please do!

The slot is 8AM UTC - 9AM UTC. I'll be in #openstack-mistral.

I hope to do these regularly at office hours, giving me time to do some
triage unless something else comes up to discuss.

For convenience I have created a calendar with the Mistral office hours;
it is just on my personal Google Calendar until I find a better place for
it. If you would find it useful, you can add it here:

iCal link:
https://calendar.google.com/calendar/ical/dougalmatthews.com_qmk1aiaao3b5ci30dp7t7e17es%40group.calendar.google.com/public/basic.ics

Google Link:
https://calendar.google.com/calendar?cid=ZG91Z2FsbWF0dGhld3MuY29tX3FtazFhaWFhbzNiNWNpMzBkcDd0N2UxN2VzQGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com  Wed Apr 4 08:21:10 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 4 Apr 2018 16:21:10 +0800
Subject: [openstack-dev] [cyborg] Weekly Team Meeting 2018.04.04
Message-ID: 

Hi Team,

As usual, the team meeting starts at UTC 1400 in #openstack-cyborg.
Initial agenda as follows:

1. Status report from subteams
2. Critical spec patch review

Please feel free to suggest more topics.

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From geguileo at redhat.com  Wed Apr 4 09:00:26 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 4 Apr 2018 11:00:26 +0200
Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?
In-Reply-To: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com>
References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com>
Message-ID: <20180404090026.xl22i4kyplurq36z@localhost>

On 03/04, Jay Pipes wrote:
> On 04/03/2018 11:07 AM, Michael Bayer wrote:
> > The MySQL / MariaDB variants we use nowadays default to
> > innodb_file_per_table=ON and we also set this flag to ON in installer
> > tools like TripleO. The reason we like file-per-table is so that we
> > don't grow an enormous ibdata file that can't be shrunk without
> > rebuilding the database. Instead, we have lots of little .ibd
> > datafiles for each table throughout each openstack database.
> >
> > But now we have the issue that these files can also benefit from
> > periodic optimization, which can shrink them and also have a
> > beneficial effect on performance. The OPTIMIZE TABLE statement
> > achieves this, but as would be expected it can itself lock tables for
> > potentially a long time.
> > Googling around reveals a lot of controversy, as various users and
> > publications suggest that OPTIMIZE is never needed and would have
> > only a negligible effect on performance. However, here we seek to use
> > OPTIMIZE so that we can reclaim disk space on tables that have lots
> > of DELETE activity, such as keystone "token" and ceilometer "sample".
> >
> > Questions for the group:
> >
> > 1. is OPTIMIZE TABLE worthwhile to be run for tables where the
> > datafile has grown much larger than the number of rows we have in the
> > table?
>
> Possibly, though it's questionable to use MySQL/InnoDB for storing
> transient data that is deleted often, like ceilometer samples and
> keystone tokens. A much better solution is to use RDBMS partitioning so
> you can simply ALTER TABLE .. DROP PARTITION those partitions that are
> no longer relevant (and don't even bother DELETEing individual rows)
> or, in the case of Ceilometer samples, don't use a traditional RDBMS
> for timeseries data at all...
>
> But since that is unfortunately already the case, yes it is probably a
> good idea to OPTIMIZE TABLE on those tables.
>
> > 2. from people's production experience how safe is it to run
> > OPTIMIZE, e.g. how long is it locking tables, etc.
>
> Is it safe? Yes.
>
> Does it lock the entire table for the duration of the operation? No. It
> uses online DDL operations:
>
> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html
>
> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for
> InnoDB tables.
>
> > 3. is there a heuristic we can use to measure when we might run this
> > -- e.g. my plan is we measure the size in bytes of each row in a
> > table and then compare that in some ratio to the size of the
> > corresponding .ibd file; if the .ibd file is N times larger than the
> > logical data size, we run OPTIMIZE?
>
> I don't believe so, no. Most recommendations I see are to simply run
> OPTIMIZE TABLE in a cron job on each table periodically.
>
> > 4. I'd like to propose this job of scanning table datafile sizes in
> > ratio to logical data sizes, then running OPTIMIZE, be a utility
> > script that is delivered via oslo.db, and would run for all innodb
> > tables within a target MySQL/MariaDB server generically. That is, I
> > really *dont* want this to be a script that Keystone, Nova,
> > Ceilometer etc. are all maintaining and delivering themselves. This
> > should be done as a generic pass on a whole database (noting, again,
> > we are only running it for very specific InnoDB tables that we
> > observe have a poor logical/physical size ratio).
>
> I don't believe this should be in oslo.db. This is strictly the purview
> of deployment tools and should stay there, IMHO.

Hi,

As far as I know most projects do "soft deletes", where we just flag the
rows as deleted and don't remove them from the DB, so it's only when we
use a management tool and run the "purge" command that we actually remove
these rows.

Since running the optimize without purging would be meaningless, I'm
wondering if we should trigger the OPTIMIZE also within the purging code.
This way we could avoid ineffective runs of the optimize command when no
purge has happened, and even when we do the optimization we could skip
the ratio calculation altogether for tables where no rows have been
deleted (the ratio hasn't changed).

Ideally the ratio calculation and optimization code would be provided by
oslo.db to reduce code duplication between projects.
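For illustration, the kind of ratio check Mike describes in item 3 could
start as simple as the sketch below -- made-up thresholds, and note that
DATA_FREE in information_schema is only an estimate of reclaimable space:

  # find InnoDB tables whose datafile looks mostly reclaimable, then
  # optimize them one at a time (the 25% ratio and 256MB floor are
  # arbitrary values for illustration)
  mysql -N -B -e "SELECT table_schema, table_name
                  FROM information_schema.tables
                  WHERE engine = 'InnoDB'
                    AND data_free > 268435456
                    AND data_free > 0.25 * (data_length + index_length)" |
  while read -r schema table; do
      mysql -e "OPTIMIZE TABLE \`$schema\`.\`$table\`"
  done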
Cheers,
Gorka.

> > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE
> > on individual nodes directly. The script at [1] illustrates how to
> > run this on individual nodes one at a time.
> >
> > More succinctly, the Q is:
> >
> > a. OPTIMIZE, yes or no?
>
> Yes.
>
> > b. oslo.db script to run generically, yes or no?
>
> No. Just have Triple-O install galera_innoptimizer and run it in a cron
> job.
>
> Best,
> -jay
>
> > thanks for your thoughts!
> >
> > [1] https://github.com/deimosfr/galera_innoptimizer

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From pkovar at redhat.com  Wed Apr 4 12:10:58 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 4 Apr 2018 14:10:58 +0200
Subject: [openstack-dev] [docs] Documentation meeting today
Message-ID: <20180404141058.ff8028a3bccd39026376b502@redhat.com>

Hi all,

The docs meeting will continue today at 16:00 UTC in #openstack-doc, as
scheduled. For more details, see the meeting page:

  https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

Cheers,
pk

From kchamart at redhat.com  Wed Apr 4 08:45:07 2018
From: kchamart at redhat.com (Kashyap Chamarthy)
Date: Wed, 4 Apr 2018 10:45:07 +0200
Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for
 "Stein" release
In-Reply-To: <20180331140929.r5kj3qyrefvsovwf@eukaryote>
References: <20180330142643.ff3czxy35khmjakx@eukaryote>
 <20180331140929.r5kj3qyrefvsovwf@eukaryote>
Message-ID: <20180404084507.GA18076@paraplu>

On Sat, Mar 31, 2018 at 04:09:29PM +0200, Kashyap Chamarthy wrote:

> [Meta comment: corrected the email subject: "Solar" --> "Stein"]

Here's a change to get the discussion rolling:

  https://review.openstack.org/#/c/558171/ -- [RFC] Pick next minimum
  libvirt / QEMU versions for "Stein"

> On Fri, Mar 30, 2018 at 04:26:43PM +0200, Kashyap Chamarthy wrote:

[...]

> > Taking the DistroSupportMatrix into the picture, for the sake of
> > discussion, how about the following NEXT_MIN versions for the "Solar"
> > release:
> >
> > (a) libvirt: 3.2.0 (released on 23-Feb-2017)
> >
> > This satisfies most distributions, but will affect Debian "Stretch",
> > as they only have 3.0.0 in the stable branch -- I've checked their
> > repositories[3][4]. Although the latest update for the stable
> > release "Stretch (9.4)" was released only on 10-March-2018, I don't
> > think they increment libvirt and QEMU versions in stable. Is there
> > another way for "Stretch (9.4)" users to get the relevant versions
> > from elsewhere?

I've learned that there's Debian 'stretch-backports'[0], which might
provide (but doesn't yet) a newer stable version.

> > (b) QEMU: 2.9.0 (released on 20-Apr-2017)
> >
> > This too satisfies most distributions, but will affect Oracle Linux
> > -- which seems to ship QEMU 1.5.3 (released in August 2013) with
> > their "7", from the Wiki. And it will also affect Debian "Stretch",
> > as it only has 2.8.0.
> >
> > Can folks chime in here?
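(For concreteness: in Nova, these thresholds are the advisory version
tuples in nova/virt/libvirt/driver.py -- the constant names below are
from the existing driver code, and the values come from the proposal
above. Nova only warns about hosts below the NEXT_MIN_* values until a
later release promotes them to the hard MIN_* values.)

  NEXT_MIN_LIBVIRT_VERSION = (3, 2, 0)
  NEXT_MIN_QEMU_VERSION = (2, 9, 0)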
Answering my own questions about Debian --

From looking at the Debian Archive[1][2], these are the versions in
'Stretch' (the current stable release) and in the upcoming 'Buster'
release:

  libvirt | 3.0.0-4+deb9u2      | stretch
  libvirt | 4.1.0-2             | buster
  qemu    | 1:2.8+dfsg-6+deb9u3 | stretch
  qemu    | 1:2.11+dfsg-1       | buster

I also talked on the #debian-backports IRC channel on the OFTC network,
where I asked:

  "What I'm essentially looking for is: How can 'stretch' users get
  libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository? They
  are proposed to be the least common denominator versions across
  distributions."

And two people said the versions from 'Buster' could then be backported
to 'stretch-backports'. The process for that is to "ask the maintainer of
those packages and Cc the backports mailing list."

Any takers?

[0] https://packages.debian.org/stretch-backports/
[1] https://qa.debian.org/madison.php?package=libvirt
[2] https://qa.debian.org/madison.php?package=qemu

--
/kashyap

From dprince at redhat.com  Wed Apr 4 12:39:00 2018
From: dprince at redhat.com (Dan Prince)
Date: Wed, 4 Apr 2018 08:39:00 -0400
Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
Message-ID: 

Kind of a support question but figured I'd ask here in case there are
suggestions for workarounds for specific machines.

Setting up a new rack of mixed machines this week and hit this issue
with HP machines using the ipmi power driver for Ironic. Curious if
anyone else has seen this before? The same commands work great with my
Dell boxes!

-----

[root at localhost ~]# cat x.sh
set -x
# this is how Ironic sends its IPMI commands; it fails
echo -n password > /tmp/tmprmdOOv
ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv power status

# this works great
ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status

[root at localhost ~]# bash x.sh
+ echo -n password
+ ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv power status
Error: Unable to establish IPMI v2 / RMCP+ session
+ ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status
Chassis Power is on

Dan

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From supadhya at redhat.com  Wed Apr 4 12:52:08 2018
From: supadhya at redhat.com (Sanjay Upadhyay)
Date: Wed, 4 Apr 2018 18:22:08 +0530
Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
In-Reply-To: 
References: 
Message-ID: 

On Wed, Apr 4, 2018 at 6:09 PM, Dan Prince wrote:
> Kind of a support question but figured I'd ask here in case there are
> suggestions for workarounds for specific machines.
>
> Setting up a new rack of mixed machines this week and hit this issue
> with HP machines using the ipmi power driver for Ironic. Curious if
> anyone else has seen this before? The same commands work great with my
> Dell boxes!

Are you using ILO Drivers?
https://docs.openstack.org/ironic/latest/admin/drivers/ilo.html
/sanjay

> [...]

--
Sanjay Upadhyay
IRC #saneax

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From noama at mellanox.com  Wed Apr 4 13:00:20 2018
From: noama at mellanox.com (Noam Angel)
Date: Wed, 4 Apr 2018 13:00:20 +0000
Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
In-Reply-To: 
References: 
Message-ID: 

Hi,

First check you can ping the BMC. Then open a browser and log in. Make
sure IPMI is enabled. Make sure the user has admin permissions, or another
role with reboot capabilities. Then check again.

Get Outlook for Android

________________________________
From: Dan Prince
Sent: Wednesday, April 4, 2018 3:39:00 PM
To: List, OpenStack
Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

Kind of a support question but figured I'd ask here in case there are
suggestions for workarounds for specific machines.

Setting up a new rack of mixed machines this week and hit this issue
with HP machines using the ipmi power driver for Ironic. Curious if
anyone else has seen this before? The same commands work great with my
Dell boxes!
[...]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dprince at redhat.com  Wed Apr 4 13:01:08 2018
From: dprince at redhat.com (Dan Prince)
Date: Wed, 4 Apr 2018 09:01:08 -0400
Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
In-Reply-To: 
References: 
Message-ID: 

On Wed, Apr 4, 2018 at 8:52 AM, Sanjay Upadhyay wrote:
>
> On Wed, Apr 4, 2018 at 6:09 PM, Dan Prince wrote:
>>
>> Kind of a support question but figured I'd ask here in case there are
>> suggestions for workarounds for specific machines.
>>
>> Setting up a new rack of mixed machines this week and hit this issue
>> with HP machines using the ipmi power driver for Ironic. Curious if
>> anyone else has seen this before? The same commands work great with my
>> Dell boxes!
>
> Are you using ILO Drivers?
> https://docs.openstack.org/ironic/latest/admin/drivers/ilo.html
> /sanjay

No. I was using the ipmi driver. Trying to keep things simple.
Dan >> >> ----- >> >> [root at localhost ~]# cat x.sh >> set -x >> # this is how Ironic sends its IPMI commands it fails >> echo -n password > /tmp/tmprmdOOv >> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >> power status >> >> # this works great >> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >> status >> >> [root at localhost ~]# bash x.sh >> + echo -n password >> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >> power status >> Error: Unable to establish IPMI v2 / RMCP+ session >> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >> status >> Chassis Power is on >> >> Dan >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Sanjay Upadhyay > IRC #saneax > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dprince at redhat.com Wed Apr 4 13:07:39 2018 From: dprince at redhat.com (Dan Prince) Date: Wed, 4 Apr 2018 09:07:39 -0400 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: Message-ID: On Wed, Apr 4, 2018 at 9:00 AM, Noam Angel wrote: > Hi, > > First check you can ping the. > Then open a browser and login. > Make sure ipmi enabled. > Make sure user has permissions for admin or other role with reboot > capabilities. > Check again Hi, yeah. So like I mention in my initial email IPMI is working great with a password like this: ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status It just fails when Ironic sends the similar command with a password file. It appears that the password file is the issue. Tried it with and without newlines even and no success. Dan > > Get Outlook for Android > > ________________________________ > From: Dan Prince > Sent: Wednesday, April 4, 2018 3:39:00 PM > To: List, OpenStack > Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines > > Kind of a support question but figured I'd ask here in case there are > suggestions for workarounds for specific machines. > > Setting up a new rack of mixed machines this week and hit this issue > with HP machines using the ipmi power driver for Ironic. Curious if > anyone else has seen this before? The same commands work great with my > Dell boxes! 
Dan

> [...]

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From waleedm at mellanox.com  Wed Apr 4 13:09:30 2018
From: waleedm at mellanox.com (Waleed Musa)
Date: Wed, 4 Apr 2018 13:09:30 +0000
Subject: [openstack-dev] Openstack Deployment issue - Tripleo
Message-ID: 

Hi guys,

We have a problem deploying TripleO: the network configuration is not
propagated to the ComputeSriov nodes. I've included my own network.yaml,
which looks like this:

resource_registry:
  OS::TripleO::ComputeSriov::Net::SoftwareConfig: ./computesriov.yaml
  OS::TripleO::Controller::Net::SoftwareConfig: ./controller.yaml

In computesriov.yaml I only edited the interface name, and controller.yaml
I kept as the default that comes with the single-NIC VLAN setup, because
the controller is a VM. The controller picked up the network configuration
and passed all the deployment steps, but the network configuration is not
propagated to the ComputeSriov nodes.

The deployment command is as follows:
The deployment command is as following: openstack overcloud deploy \ --templates /usr/share/openstack-tripleo-heat-templates \ -r ~/roles_data_new.yaml \ --libvirt-type kvm --control-flavor oooq_control --compute-flavor oooq_compute --ceph-storage-flavor oooq_ceph --block-storage-flavor oooq_blockstorage --swift-storage-flavor oooq_objectstorage --timeout 90 -e /home/stack/cloud-names.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/docker.yaml -e /home/stack/containers-default-parameters.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml -e /home/stack/network-environment.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml -e /home/stack/enable-tls.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml -e /home/stack/inject-trust-anchor.yaml -e /usr/share/openstack-tripleo-heat-templates/environments/disable-telemetry.yaml --validation-warnings-fatal -e /usr/share/openstack-tripleo-heat-templates/environments/config-download-environment.yaml --config-download --verbose --ntp-server pool.ntp.org \ -e ~/nic_configs/network.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/host-config-and-reboot.yaml \ -e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/neutron-opendaylight-hw-offload.yaml \ ${DEPLOY_ENV_YAML:+-e $DEPLOY_ENV_YAML} "$@" && status_code=0 || status_code=$? Can you advice what is causing this issue, this just has happened after installing the latest queens. Regards Waleed Mousa SW Engineer at Mellanox -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Apr 4 13:58:48 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 04 Apr 2018 09:58:48 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1522276901-sup-6868@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1522276901-sup-6868@lrrr.local> Message-ID: <1522850139-sup-8937@lrrr.local> Excerpts from Doug Hellmann's message of 2018-03-28 18:53:03 -0400: > > Because we had some communication issues and did a few steps out > of order, when this patch lands projects that have approved > bot-proposed requirements updates may find that their requirements > and lower-constraints files no longer match, which may lead to job > failures. It should be easy enough to fix the problems by making > the values in the constraints files match the values in the > requirements files (by editing either set of files, depending on > what is appropriate). I apologize for any inconvenience this causes. In part because of this, and in part because of some issues calculating the initial set of lower-constraints, we have several projects where their lower-constraints don't match the lower bounds in the requirements file(s). Now that the check job has been updated with the new rules, this is preventing us from landing the patches to add the lower-constraints test job (so those rules are working!). I've prepared a script to help fix up the lower-constraints.txt based on values in requirements.txt and test-requirements.txt. That's not everything, but it should make it easier to fix the rest. See https://review.openstack.org/#/c/558610/ for the script. 
Doug

From jdennis at redhat.com  Wed Apr 4 14:16:10 2018
From: jdennis at redhat.com (John Dennis)
Date: Wed, 4 Apr 2018 10:16:10 -0400
Subject: [openstack-dev] [keystone] Could keystone to keystone federation
 be deployed on Centos?
In-Reply-To: 
References: 
Message-ID: <30775e96-dcfb-7681-385a-3f418226fb76@redhat.com>

On 04/04/2018 02:52 AM, 何健乐 wrote:
> Hi all,
> Can keystone to keystone federation be deployed on CentOS? I notice
> that all the documentation describes deployment on Ubuntu. If it can,
> are there any documents about deploying k2k on CentOS?

Yes, k2k should work on CentOS; there is nothing OS-specific in the
implementation. There is OpenStack documentation on setting up federation
and k2k. If there are deficiencies in the doc it would be helpful to
point them out so we can remedy that situation.

If you need more information on setting up mod_auth_mellon you might want
to check out the Mellon User Guide I recently wrote and contributed to
upstream Mellon (it's not part of the OpenStack doc as it's more of a
SAML SP setup guide, not an OpenStack federation guide):

https://github.com/UNINETT/mod_auth_mellon/blob/master/doc/user_guide/mellon_user_guide.adoc

--
John

From ramamani.yeleswarapu at intel.com  Wed Apr 4 14:36:34 2018
From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani)
Date: Wed, 4 Apr 2018 14:36:34 +0000
Subject: [openstack-dev] [ironic] this week's priorities and subteam reports
Message-ID: 

Hi,

We are glad to present this week's priorities and subteam report for
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0]
and formatted.

This Week's Priorities (as of the weekly ironic meeting)
========================================================

Weekly priorities
-----------------
- Remaining Rescue patches
  - https://review.openstack.org/#/c/546919/
    - Fix a bug for unrescuing with a whole disk image
    - better fix: https://review.openstack.org/#/c/499050/ - Fix
      ``agent`` deploy interface to call ``boot.prepare_instance``
      (updated)
  - https://review.openstack.org/#/c/528699/ - Tempest tests with nova
    (This can land after the nova work is done, but it should be ready,
    to get the nova patch reviewed.)
- Management interface boot_mode change
  - https://review.openstack.org/#/c/526773/
- Bios interface support
  - https://review.openstack.org/#/c/511162/
  - https://review.openstack.org/#/c/528609/
  - db api - https://review.openstack.org/#/c/511402/
- RefArch Guide
  - https://review.openstack.org/#/c/556986/

Vendor priorities
-----------------
cisco-ucs: Patches in the works for an SDK update, but not posted yet;
  currently rebuilding third-party CI infra after a disaster...
idrac: RFE and the first several patches for adding UEFI support will be
  posted by Tuesday, 1/9
ilo: https://review.openstack.org/#/c/530838/ - OOB RAID spec for iLO5
irmc: None - a few items are in progress
oneview: None at this time - no subteam at present.
xclarity: None at this time - no subteam at present.

Subproject priorities
---------------------
bifrost:
ironic-inspector (or its client):
networking-baremetal:
networking-generic-switch:
sushy and the redfish driver:

Bugs (dtantsur, vdrok, TheJulia)
--------------------------------
- (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he
  will update the tool that generates these stats.
- Stats (diff between 12 Mar 2018 and 19 Mar 2018)
    - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2)
    - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete
    - Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high
- critical:
    - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present)
- note: the increase in bug count is probably because now the dashboard tracks virtualbmc and networking-baremetal
- the dashboard was abruptly deleted and needs a new home :(
    - use it locally with `tox -erun` if you need to
- HIGH bugs with patches to review:
    - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15
        - Needs to be reproposed to the ironic tempest plugin repository.
    - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916:
        - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/
        - (TheJulia) Currently WF-1, as revision is required for deprecation.

Priorities
==========
Deploy Steps (rloo, mgoddard)
-----------------------------
- status as of 2 April 2018:
    - spec for deployment steps framework: https://review.openstack.org/#/c/549493/
        - has 2x+2 and was approved, but dependent patch (https://review.openstack.org/#/c/557509/) needs to be approved first

BIOS config framework (zshi, yolanda, mgoddard, hshiina)
--------------------------------------------------------
- status as of 2 April 2018:
    - Spec has merged: https://review.openstack.org/#/c/496481/
    - List of ordered patches:
        - BIOS Settings: Add DB model: https://review.openstack.org/511162 1x-1 (a comment about DB field size)
        - Add bios_interface db field https://review.openstack.org/528609 2x+2, WF+1
        - BIOS Settings: Add DB API: https://review.openstack.org/511402
        - BIOS Settings: Add RPC object https://review.openstack.org/511714
        - Add BIOSInterface to base driver class https://review.openstack.org/507793
        - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200
        - Add Node BIOS support - REST API: https://review.openstack.org/512579

Conductor Location Awareness (jroll, dtantsur)
----------------------------------------------
- (April 2) hope to write spec this week

Reference architecture guide (dtantsur, jroll)
----------------------------------------------
- story: https://storyboard.openstack.org/#!/story/2001745
- status as of 2 April 2018:
    - Dublin PTG consensus was to start with small architectural building blocks.
- list of cases from the Denver PTG - see in the story
- First story up: https://review.openstack.org/#/c/556986/

Graphical console interface (mkrai, anup-d-navare, TheJulia)
------------------------------------------------------------
- status as of 2 Apr 2018:
    - No update
    - VNC Graphical console spec: https://review.openstack.org/#/c/306074/
        - needs update, address comments
    - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console

Neutron event processing (vdrok)
--------------------------------
- status as of 02 April 2018:
    - spec at https://review.openstack.org/343684 - Needs update
    - WIP code at https://review.openstack.org/440778
        - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards

Goals
=====
Updating nova virt to use REST API (TheJulia)
---------------------------------------------
Status as of 2 APR 2018:
(TheJulia) Some back and forth on this topic. It looks like we're going to keep using python-ironicclient for now but wire in the ability to set the microversion on a per call level.

Storyboard migration (TheJulia, dtantsur)
-----------------------------------------
Status as of Apr 2nd.
- Done!
- TheJulia to propose patches to docs where appropriate.
    - Patches in review.
- dtantsur to rewrite the bug dashboard

Management interface refactoring (etingof, dtantsur)
----------------------------------------------------
- Status as of March 26th:
    - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ needs review

Getting clean steps (rloo, TheJulia)
------------------------------------
- Status as of April 2nd 2018
    - No update
- Status as of March 26th:
    - Cleanhold specification updated - https://review.openstack.org/#/c/507910/

Project vision (jroll, TheJulia)
--------------------------------
- Status as of April 2:
    - jroll still trying to find time to collect enough thoughts for an email

SIGHUP support (rloo)
---------------------
- Proposed for ironic by rloo -- this is done: https://review.openstack.org/474331 MERGED \o/
- TODO:
    - ironic-inspector
    - networking-baremetal

Stretch Goals
=============
NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in-place

Classic driver removal formerly Classic drivers deprecation (dtantsur)
----------------------------------------------------------------------
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
- status as of 26 Mar 2018:
    - switch documentation to hardware types:
        - api-ref examples: TODO
        - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO
            - or should we kill it with fire in favour of the docs?
    - ironic-inspector:
        - documentation: https://review.openstack.org/#/c/545285/ MERGED
            - backport: https://review.openstack.org/#/c/554586/
        - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED
        - change the default discovery driver: https://review.openstack.org/#/c/550464/
    - migration of CI to hardware types
        - IPA: https://review.openstack.org/553431 MERGED
        - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED
        - python-ironicclient: https://review.openstack.org/552543 MERGED
        - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED
        - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED
    - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html

Redfish OOB inspection (etingof, deray, stendulker)
---------------------------------------------------

Zuul v3 playbook refactoring (sambetts, pas-ha)
-----------------------------------------------

Before Rocky
============

CI refactoring and missing test coverage
----------------------------------------
- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
    - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/
    - localboot with partitioned image patches:
        - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required
    - when previous are merged TODO (vsaienko)
        - Upload tinycore partitioned image to tarballs.openstack.org
        - Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
    - portgroups and attach/detach tempest tests: https://review.openstack.org/382476
    - adoption: https://review.openstack.org/#/c/344975/
        - should probably be changed to use standalone tests
    - root device hints: TODO
    - node take over
    - resource classes integration tests: https://review.openstack.org/#/c/443628/
        - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Queens High Priorities
======================

Routed network support (sambetts, vsaienk0, bfournie, hjensas)
--------------------------------------------------------------
- status as of 12 Feb 2018:
    - All code patches are merged.
    - One CI patch left, rework devstack baremetal simulation. To be done in Rocky?
        - This is to have actual 'flat' networks in CI.
    - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates:
        - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg
        - https://review.openstack.org/#/c/529135/ (Spec)
- Patches:
    - CI Patches:
        - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation
- RFEs (Rocky)
    - https://bugs.launchpad.net/networking-baremetal/+bug/1749166
        - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe.
    - https://bugs.launchpad.net/networking-baremetal/+bug/1749162
        - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is a minor enhancement to improve operation.

Rescue mode (rloo, stendulker)
------------------------------
- Status as of 12 Feb 2018
    - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html
    - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged
    - ironic side:
        - all code patches have merged except for
            - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews)
        - Tempest tests with nova: https://review.openstack.org/#/c/528699/
        - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/
            - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007
    - nova side:
        - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode:
            - approved for Queens but didn't get the ironic code (client) done in time
            - (TheJulia) Nova has indicated that this is deferred until Rocky.
        - To get the nova patch merged, we need:
            - release new python-ironicclient - Done
            - update ironicclient version in upper-constraints (this patch will be posted automatically)
            - update ironicclient version in global-requirements (this patch needs to be posted manually) Posted https://review.openstack.org/554673
        - code patch: https://review.openstack.org/#/c/416487/ Needs revision
        - CI is needed for the nova part to land
            - tiendc is working on the CI

Clean up deploy interfaces (vdrok)
----------------------------------
- status as of 5 Feb 2017:
    - patch https://review.openstack.org/524433 needs update and rebase

Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo)
-------------------------------------------------------
- etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking MERGED
- cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE
- Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though.
    - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/

OpenStack Priorities
====================

Mox
---
- TheJulia needs to just declare this done.

Python 3.5 compatibility (Nisha, Ankit)
---------------------------------------
- Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
    - this includes all projects, not only ironic
    - please tag all reviews with topic "goal-python35"
- TODO submit the python3 job for IPA
    - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support.
        - anupn to update the python3 job to build tinyipa with python3
    - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority
        - Right now the patch passes all gate jobs except the agent_* drivers.
        - (TheJulia) It seems we might not have py3 compatibility with swift until the T cycle.
- updating setup.cfg (part of requirements for the goal):
    - ironic: https://review.openstack.org/#/c/539500/ - MERGED
    - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED

Deploying with Apache and WSGI in CI (pas-ha, vsaienk0)
-------------------------------------------------------
- ironic is mostly finished
    - (pas-ha) needs to be rewritten for uWSGI, patches on review:
        - https://review.openstack.org/#/c/507067
- inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218
    - delayed as the HA work seems to take a different direction
        - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done?

Subprojects
===========
Inspector (dtantsur)
--------------------
- trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202
- follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up)

Bifrost (TheJulia)
------------------
- It also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work.
    - TheJulia will try to look at this this week.

Drivers:
--------
OneView (???)
~~~~~~~~~~~~~
- Oneview presently does not have a subteam.

Cisco UCS (sambetts) Last updated 2018/02/05
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Cisco CIMC driver CI back up and working on every patch
- Cisco UCSM driver CI in development
- Patches for updating the UCS python SDKs are in the works and should be posted soon

.........

Until next week,
--rama

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com  Wed Apr  4 14:57:37 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 4 Apr 2018 09:57:37 -0500
Subject: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To: <20180402115959.3y3j6ytab6ruorrg@localhost>
References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost>
Message-ID: <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com>

On 4/2/2018 6:59 AM, Gorka Eguileor wrote:
> I can only see one benefit from implementing this feature in Cinder
> versus doing it in Nova, and that is that we can preserve the volume's
> UUID, but I don't think this is even relevant for this use case, so why
> is it better to implement this in Cinder than in Nova?

With a new image, the volume_image_metadata in the volume would also be
wrong, and I don't think nova should (or even can) update that
information. So nova re-imaging the volume doesn't seem like a good fit
to me given Cinder "owns" the volume along with any metadata about it.

If Cinder isn't agreeable to this new re-image API, then I think we're
stuck with the original proposal of creating a new volume and swapping
out the root disk, along with all of the problems that can arise from
that (original volume type is gone, tenant goes over-quota, what do we
do with the original volume (delete it?), etc).
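For reference, the client-side flow for that fallback would look roughly
like this (a sketch only; the names, size and IDs are placeholders, and
the swap step is the existing swap-volume mechanism rather than anything
new):

  # create a replacement root volume from the new image
  openstack volume create --image new-image --size 10 new-root
  # swap the attached root volume on the server (the admin-only
  # swap-volume operation)
  nova volume-update my-server <old-root-volume-id> <new-root-volume-id>
  # then decide what to do with the original volume, e.g. delete it
  openstack volume delete <old-root-volume-id>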
-- 

Thanks,

Matt

From pabelanger at redhat.com  Wed Apr  4 15:27:34 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Wed, 4 Apr 2018 11:27:34 -0400
Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27
In-Reply-To: <20180313145426.GA14285@localhost.localdomain>
References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain>
Message-ID: <20180404152734.GA30139@localhost.localdomain>

On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote:
> On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote:
> > Greetings,
> >
> > A quick search of git shows your projects are using fedora-26 nodes for testing.
> > Please take a moment to look at gerrit[1] and help land patches. We'd like to
> > remove fedora-26 nodes in the next week and to avoid broken jobs you'll need to
> > approve these patches.
> >
> > If your jobs are failing under fedora-27, please take the time to fix any issue
> > or update said patches to make them non-voting.
> >
> > We (openstack-infra) aim to only keep the latest fedora image online, which
> > changes approx. every 6 months.
> >
> > Thanks for your help and understanding,
> > Paul
> >
> > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open
> >
> Greetings,
>
> This is a friendly reminder, about moving jobs to fedora-27. I'd like to remove
> our fedora-26 images next week and if jobs haven't been migrated you may start
> to see NODE_FAILURE messages while running jobs. Please take a moment to merge
> the open changes or update them to be non-voting while you work on fixes.
>
> Thanks again,
> Paul
>
Hi,

It's been a month since we started asking projects to migrate to
fedora-27. I've proposed the patch to remove fedora-26 nodes from
nodepool[2]; if your project hasn't merged the patches above you will
start to see NODE_FAILURE results for your jobs. Please take the time to
approve the changes above.

Because new fedora images come online every 6 months, we like to only
keep one of them online at any given time. Fedora is meant to be a fast
moving distro to pick up new versions of software outside of the Ubuntu
LTS releases.

If you have any questions please reach out to us in #openstack-infra.

Thanks,
Paul

[2] https://review.openstack.org/558847/

From Tim.Bell at cern.ch  Wed Apr  4 15:51:02 2018
From: Tim.Bell at cern.ch (Tim Bell)
Date: Wed, 4 Apr 2018 15:51:02 +0000
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
In-Reply-To: 
References: 
Message-ID: 

How about:

* As an operator, I'd like to spin up the latest release to check if a problem is fixed before reporting a problem upstream

We use this approach frequently with packstack. Ideally (as today with packstack), we'd do this inside a VM on a running OpenStack cloud… inception… ☺

Tim

From: Emilien Macchi
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 23:35
To: OpenStack Development Mailing List
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap

Greetings folks,

During the last PTG we spent time discussing some ideas around an
All-In-One installer, using 100% of the TripleO bits to deploy a single
node OpenStack very similar to what we have today with the containerized
undercloud and what we also have with other tools like Packstack or
Devstack.
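To give a rough idea of the developer experience we're aiming for, think
of something as simple as this (the command line below is entirely
hypothetical at this point; designing the real interface is part of the
work):

  # one command, one node, no dedicated undercloud, run inside a VM
  openstack tripleo deploy --standalone --local-ip 192.168.24.1 \
    -e ~/my-dev-overrides.yaml

The etherpad from the PTG session: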
https://etherpad.openstack.org/p/tripleo-rocky-all-in-one

One of the problems that we're trying to solve here is to give
developers a simple tool so they can both easily and quickly deploy an
OpenStack for their needs.

"As a developer, I need to deploy OpenStack in a VM on my laptop, quickly and without complexity, reproducing the exact same tooling as TripleO is using."
"As a Neutron developer, I need to develop a feature in Neutron and test it with TripleO in my local env."
"As a TripleO dev, I need to implement a new service and test its deployment in my local env."
"As a developer, I need to reproduce a bug in TripleO CI that blocks the production chain, quickly and simply."

Probably more use cases, but those are the ones that come to mind now.

Dan kicked off a doc patch a month ago:
https://review.openstack.org/#/c/547038/

And I just went ahead and proposed a blueprint:
https://blueprints.launchpad.net/tripleo/+spec/all-in-one

So hopefully we can start prototyping something during Rocky.

Before talking about the actual implementation, I would like to gather
feedback from people interested in the use cases. If you recognize
yourself in these use cases and you're not using TripleO today to test
your things because it's too complex to deploy, we want to hear from
you. I want to see feedback (positive or negative) about this idea. We
need to gather ideas, use cases, needs, before we go design a prototype
in Rocky.

Thanks to everyone who'll be involved,
-- 
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pabelanger at redhat.com  Wed Apr  4 16:37:22 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Wed, 4 Apr 2018 12:37:22 -0400
Subject: [openstack-dev] [openstack-helm][infra] Please consider using experimental pipeline for non-voting jobs
Message-ID: <20180404163722.GA5857@localhost.localdomain>

Greetings,

I've recently proposed https://review.openstack.org/558870 to the
openstack-helm project. This moves both centos / fedora jobs into the
experimental pipeline.

The reason for this: the multinode jobs in helm each use 5 nodes per
distro; in this case, 10 shared between centos / fedora. Given that this
happens on every patchset proposed to helm, and these jobs have been
non-voting for some time (3+ months), I think it is fair to now move
them into experimental to help conserve CI resources.

Once they have been properly fixed, I see no issue with moving them back
to the check / gate pipelines.

Thanks,
Paul

From pkovar at redhat.com  Wed Apr  4 16:58:34 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 4 Apr 2018 18:58:34 +0200
Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-04-04
In-Reply-To: <20180404141058.ff8028a3bccd39026376b502@redhat.com>
References: <20180404141058.ff8028a3bccd39026376b502@redhat.com>
Message-ID: <20180404185834.2c218d3dec548e5c2b7cd7e4@redhat.com>

=======================
#openstack-doc: docteam
=======================

Meeting started by pkovar at 16:02:31 UTC. The full logs are available at
http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-04-04-16.02.log.html .
Meeting summary --------------- * dropdown menus on docs.o.o (pkovar, 16:13:17) * design idea: re-implement dropdown menus as simple lists, as pointed out by frank in https://review.openstack.org/#/c/556969/ (pkovar, 16:13:47) * to make the site more mobile friendly (pkovar, 16:14:00) * LINK: https://ibb.co/coCMjx (pkovar, 16:18:25) * ACTION: consider starting a thread / blueprint to discuss design changes (pkovar, 16:25:55) * design ideas for docs.o.o (pkovar, 16:30:22) * idea: make the landing pages with project lists more beautiful (pkovar, 16:30:59) * like in https://www.openstack.org/software/project-navigator/ they use all caps and bold for project names (pkovar, 16:31:17) * idea: re-use mascot logos that would be stored in openstack-manuals (pkovar, 16:34:10) * (pkovar, 16:38:37) * vancouver summit (pkovar, 16:38:46) * LINK: https://www.openstack.org/summit/vancouver-2018/summit-schedule/global-search?t=docs (pkovar, 16:39:38) * summit schedule is up with two docs talks / sessions planned (pkovar, 16:39:58) * co-located with i18n (pkovar, 16:40:11) * thanks to our presenters (pkovar, 16:40:29) * documentation builds and PTI (pkovar, 16:41:12) * updating PTI wrt https://review.openstack.org/#/c/545377/ and https://review.openstack.org/#/c/509297/ might be needed (pkovar, 16:47:33) * seeking help from infra team, will probably need to create a spec (pkovar, 16:47:56) * thanks ianychoi for driving project docs translations and pdf builds (pkovar, 16:52:25) * thanks stephenfin for keeping projects updated wrt sphinxcontrib-apidoc (pkovar, 16:55:19) * LINK: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128817.html (pkovar, 16:55:27) * LINK: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html (pkovar, 16:55:34) Meeting ended at 16:56:58 UTC. People present (lines said) --------------------------- * pkovar (75) * ianychoi (30) * openstack (3) * openstackgerrit (2) * mordred (2) Generated by `MeetBot`_ 0.1.4 From jim at jimrollenhagen.com Wed Apr 4 17:18:02 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 4 Apr 2018 13:18:02 -0400 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: Message-ID: On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote: > Kind of a support question but figured I'd ask here in case there are > suggestions for workarounds for specific machines. > > Setting up a new rack of mixed machines this week and hit this issue > with HP machines using the ipmi power driver for Ironic. Curious if > anyone else has seen this before? The same commands work great with my > Dell boxes! > > ----- > > [root at localhost ~]# cat x.sh > set -x > # this is how Ironic sends its IPMI commands it fails > echo -n password > /tmp/tmprmdOOv > ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv > power status > > # this works great > ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power > status > > [root at localhost ~]# bash x.sh > + echo -n password > + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv > power status > Error: Unable to establish IPMI v2 / RMCP+ session > + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power > status > Chassis Power is on > Very strange. A tcpdump of both would probably be enlightening. :) Also curious what version of ipmitool this is, maybe you're hitting an old bug. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From jim at jimrollenhagen.com  Wed Apr  4 17:27:46 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Wed, 4 Apr 2018 13:27:46 -0400
Subject: Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines
In-Reply-To: 
References: 
Message-ID: 

On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen wrote:

> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote:
>
>> Kind of a support question but figured I'd ask here in case there are
>> suggestions for workarounds for specific machines.
>>
>> Setting up a new rack of mixed machines this week and hit this issue
>> with HP machines using the ipmi power driver for Ironic. Curious if
>> anyone else has seen this before? The same commands work great with my
>> Dell boxes!
>>
>> -----
>>
>> [root at localhost ~]# cat x.sh
>> set -x
>> # this is how Ironic sends its IPMI commands it fails
>> echo -n password > /tmp/tmprmdOOv
>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>>
>> # this works great
>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>>
>> [root at localhost ~]# bash x.sh
>> + echo -n password
>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv
>> power status
>> Error: Unable to establish IPMI v2 / RMCP+ session
>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power
>> status
>> Chassis Power is on
>>
>
> Very strange. A tcpdump of both would probably be enlightening. :)
>
> Also curious what version of ipmitool this is, maybe you're hitting an old
> bug.
>

https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime
suspect here.

// jim
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From melwittt at gmail.com  Wed Apr  4 19:04:01 2018
From: melwittt at gmail.com (melanie witt)
Date: Wed, 4 Apr 2018 12:04:01 -0700
Subject: [openstack-dev] [nova] The createBackup API
In-Reply-To: 
References: 
Message-ID: 

+openstack-operators

Operator feedback wanted: do you have users that pass rotation parameter
'0' to the createBackup API in order to delete backups? Do you have
users using the createBackup API in general?

On Fri, 30 Mar 2018 10:44:40 +0800, Alex Xu wrote:
> There is a spec proposal to fix a bug in the createBackup API with a
> microversion. (https://review.openstack.org/#/c/511825/)
>
> When the rotation parameter is '0', the createBackup API just takes a
> snapshot and then deletes all the snapshots. That is meaningless
> behavior.

Agreed that '0' is meaningless in the context of 'createBackup' but as a
side-effect, it allows users to purge old backups on-demand.

> But there is one thing we hope to get wider input on. Since we said
> before that all the nova APIs should be primitive, an API shouldn't be
> another wrapper around another API.
>
> So createBackup sounds like just using the createImage API to create a
> snapshot, uploading the snapshot to glance with an index number in the
> image name, and rotating the images after each snapshot.
>
> So it should be something that can be done by client scripts doing the
> same thing with the createImage API.
>
> We have two options here:
> #1. Fix the bug with a microversion. We aren't sure anyone really uses
> '0' in real life, so fixing the bug with a microversion may not be
> worth it.

I think this is the key point -- are there users who have been using '0'
to the createBackup API in order to delete backups?
If so, then I would be inclined to go ahead and fix the issue in our API
with a microversion (disallow '0' for createBackup and then add a
deleteBackups server action). My rationale is that if people are
actively using it, let's just fix it since it's nearly already there.
The only problem with how it currently works is that '0' needlessly
creates a backup that it will turn around and delete. The fix would be
small and straightforward as it would just add schema validation for '0'
on createBackup and then the new deleteBackups action would be an alias
for deleting things (we already have the delete logic happening for
'0').

> #2. Deprecate the backup API with a microversion and leave the bug
> alone. Document how the user can do the same thing in a client script.
>
> Looking for your comments.

If there isn't broader use of the API, then I'd be in favor of
deprecating it.

-melanie

From jimmy at openstack.org  Wed Apr  4 21:26:12 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 04 Apr 2018 16:26:12 -0500
Subject: [openstack-dev] Asking for ask.openstack.org
Message-ID: <5AC542F4.2090205@openstack.org>

Hi everyone!

We have a very robust and vibrant community at ask.openstack.org. There
are literally dozens of posts a day. However, many of them don't receive
knowledgeable answers. I'm really worried about this becoming a vacuum
where potential community members get frustrated and don't realize how
to get more involved with the community.

I'm looking for thoughts/ideas/feelings about this tool as well as
potential admin volunteers to help us manage the constant influx of
technical and not-so-technical questions around OpenStack.

For those of you already contributing there, Thank You! For those that
are interested in becoming a moderator (instant AUC status!) or have
some additional ideas around fostering this community, please respond.

Looking forward to your thoughts :)

Thanks!
Jimmy
irc: jamesmcarthur

From bjozsa at jinkit.com  Wed Apr  4 21:45:04 2018
From: bjozsa at jinkit.com (Brandon Jozsa)
Date: Wed, 4 Apr 2018 21:45:04 +0000
Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project
In-Reply-To: 
References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org>
Message-ID: 

I've been a part of the OpenStack-Helm project from the very beginning,
and there was a lot of early brainstorming on how we could collaborate
and contribute directly to Kolla-Kubernetes. In fact, this was the
original intent when we met with Kolla back in Barcelona. We didn't like
the idea of fragmenting interested Kubernetes developers/operators in
the OpenStack-via-Kubernetes space. Whatever the project, we wanted all
the domain expertise concentrated on a single deployment effort. Even
though OSH/K-k8s couldn't reach an agreement on how to handle configmaps
(our biggest difference from the start), there was a lot of early
collaboration between the project cores. Early K-k8s contributors may
remember Halcyon, which cores from both sides promoted for early
development of OpenStack-via-Kubernetes, regardless of the project.

One of the requests from the initial OSH team (in Barcelona) was to
formally separate Kolla from Kolla-foo deployment projects, both at a
project level and from a core perspective.
Why have the same cores giving +2's to Kolla, Kolla-Ansible, Kolla-Mesos
(now dead) and Kolla-Kubernetes, who may not have any interest in
another given discipline? We wanted reviews to be timely, and
laser-focused, and we felt that this more atomic approach would benefit
Kolla in the end. But unfortunately there was heavy resistance with
limited yet very influential cores. I honestly think pushback was also
because it would mean that any Kolla sub-projects would be subject to
re-acceptance as big tent projects.

There were also countless discussions about the preservation of the
Kolla API, or Ansible + Jinja portions of Kolla-Ansible. It became clear
to us that Kubernetes wasn't going to be the first class citizen for the
deployment model in Kolla-Kubernetes, forcing operators to troubleshoot
between OpenStack, Kolla (container builds), Ansible, Kubernetes, and
Helm. This is apparent still today. And while I understand the
hesitation to change Kolla/Kolla-Ansible, I think this code-debt has
somewhat contributed to the sustainability problems of Kolla-Kubernetes.
Somewhat to the point of tension, I very much agree with Thierry's
comments earlier.

I want all of these projects to succeed but consolidation with
purposeful and deliberate planning, which Rich has so kindly agreed to
do, could be the right answer. So I +1 the idea, because I think it puts
all like-minded individuals on the same focus (for the overall benefit
of OpenStack and the overall OpenStack community). But we have to make
sure there isn't a lot of fallout from the decision either. To Steve
Dake's previous point, there could be orphaned users/operators who feel
"forced" into another project. I would hate to see that. It would be
nice to at least plan this with the user-base and give them fair
warning. And to this point, who are the actual active Kolla-Kubernetes
cores? Who is "PTL" of Kolla-Kubernetes today?

On the other hand, I think that OSH has some improvements to make as
well. Gating could use some help and the OpenStack-Infra team has been
kindly helping out recently (a huge "thank you" to them). Docs…I think
docs could always use some love. Please offer your experiences to the
OSH team! We would love to hear your user input. Ultimately, if
users/operators want to run something that even closely resembles
production, then we need some decent production quality docs as opposed
to leveraging the nascent gate scripts (Zuulv3 ansible+heat). Releases
and release planning should be addressed, as users/vendors are going to
want to be closer to OpenStack release dates (recent versions of
OpenStack, Helm and Kubernetes). Clear and open roadmaps, with potential
use of community-led planning tools. Open elections for PTL. Finally,
the OSH team may still be interested in diversifying its core base. Matt
M. would have to address this. I know that I was actively seeking cores
when I was initially PTL, and truthfully…there's nobody nicer or easier
to work with than Matt. He's an awesome PTL, and any project would be
fortunate to have him. All these things could all be improved on, but it
requires a diverse base with a lot of great ideas.

That said, I am in favor of consolidation…if it makes sense and if
there's a strong argument for it. We just need to think what's best for
the OpenStack community as a whole, and put away the positions of the
individual projects for a moment. To me, that makes things pretty clear,
regardless of where the commits are going. And with the +1's, I think
we're hearing you.
Now we just have to plan it out and take action (on both sides). Brandon On April 2, 2018 at 11:14:01 AM, Martin André (m.andre at redhat.com) wrote: On Mon, Apr 2, 2018 at 4:38 PM, Steven Dake (stdake) wrote: > > > > On April 2, 2018 at 6:00:15 AM, Martin André (m.andre at redhat.com) wrote: > > On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) > wrote: >> My viewpoint is as all deployments projects are already on an equal >> footing >> when using Kolla containers. > > While I acknowledge Kolla reviewers are doing a very good job at > treating all incoming reviews equally, we can't realistically state > these projects stand on an equal footing today. > > > At the very least we need to have kolla changes _gating_ on TripleO > and OSH jobs before we can say so. Of course, I'm not saying other > kolla devs are opposed to adding more CI jobs to kolla, I'm pretty > sure they would welcome the changes if someone volunteers for it, but > right now when I'm approving a kolla patches I can only say with > confidence that it does not break kolla-ansible. In that sense, > kolla_ansible is special. > > Martin, > > Personally I think all of OpenStack projects that have a dependency or > inverse dependency should cross-gate. For example, Nova should gate on > kolla-ansible, and at one point I think they agreed to this, if we submitted > gate work to do so. We never did that. > > Nobody from TripleO or OSH has submitted gates for Kolla. Submit them and > they will follow the standard mechanism used in OpenStack > experimental->non-voting->voting (if people are on-call to resolve > problems). I don't think gating is relevant to equal footing. TripleO for > the moment has chosen to gate on their own image builds, which is fine. If > the gating should be enhanced, write the gates :) > > Here is a simple definition from the internet: > > "with the same rights and conditions as someone you are competing with" > > Does that mean if you want to split the kolla repo into 40+ repos for each > separate project, the core team will do that? No. Does that mean if there > is a reasonable addition to the API the patch would merge? Yes. > > Thats right, deployment tools compete, but they also cooperate and > collaborate. The containers (atleast from my perspective) are an area where > Kolla has chosen to collaborate. FWIW I also think we have chosen to > collobrate a bit in areas we compete (the deployment tooling itself). Its a > very complex topic. Splitting the governance and PTLs doesn't change the > makeup of the core review team who ultimately makes the decision about what > is reasonable. Collaboration is good, there is no question about it. I suppose the question we need to answer is "would splitting kolla and kolla-ansible further benefit kolla and the projects that consume it?". I believe if you look at it from this angle maybe you'll find areas that are neglected because they are lower priority for kolla-ansible developers. >> I would invite the TripleO team who did integration with the Kolla API to >> provide their thoughts. > > The Kolla API is stable and incredibly useful... it's also > undocumented. I have a stub for a documentation change that's been > collecting dust on my hard drive for month, maybe it's time I brush it > > Most of Kolla unfortunately is undocumented. The API is simple and > straightforward enough that TripleO, OSH, and several proprietary vendors > (the ones Jeffrey mentioned) have managed to implement deployment tooling > that consume the API. 
Documentation for any part of Kolla would be highly > valued - IMO it is the Kolla project's biggest weakness. > > > up and finally submit it. Today unless you're a kolla developer > yourself, it's difficult to understand how to use the API, not the > most user friendly. > > Another thing that comes for free with Kolla, the extend_start.sh > scripts are for the most part only useful in the context of > kolla_ansible. For instance, hardcoding path for log dirs to > /var/log/kolla and changing groups to 'kolla'. > In TripleO, we've chosen to not depend on the extend_start.sh scripts > whenever possible for this exact reason. > > I don't disagree. I was never fond of extend_start, and thought any special > operations it provided belong in the API itself. This is why there are > mkdir operations and chmod/chown -R operations in the API. The JSON blob > handed to the API during runtime is where the API begins and ends. The > implementation (what set_cfg.py does with start.sh and extend_start.sh) are > not part of the API but part of the API implementation. One could argue that the environment variables we pass to the containers to control what extend_start.sh does are also part of the API. That's not my point. There is a lot of cruft in these scripts that remain from the days where kolla-ansible was the only consumer of kolla images. > I don't think I said anywhere the API is perfectly implemented. I'm not > sure I've ever seen this mythical perfection thing in an API anyway :) > > Patches are welcome to improve the API to make it more general, as long as > they maintain backward compatibility. > > > > The other critical kolla feature we're making extensive use of in > TripleO is the ability to customize the image in any imaginable way > thanks to the template override mechanism. There would be no > containerized deployments via TripleO without it. > > > We knew people would find creative ways to use the plugin templating > technology, and help drive adoption of Kolla as a standard... > > Kolla is a great framework for building container images for OpenStack > services any project can consume. We could do a better job at > advertising it. I guess bringing kolla and kolla-kubernetes under > separate governance (even it the team remains mostly the same) is one > way to enforce the independence of kolla-the-images project and > recognize people may be interested in the images but not the > deployment tools. > > One last though. Would you imagine a kolla PTL who is not heavily > invested in kolla_ansible? > > > Do you mean to imply a conflict of interest? I guess I don't understand the > statement. Would you clarify please? All I'm saying is that we can't truly claim we've fully decoupled Kolla and Kolla-ansible until we're ready to accept someone who is not a dedicated contributor to kolla-ansible as kolla PTL. Until then, some might rightfully say kolla-ansible is driving the kolla project. It's OK, maybe as the kolla community that's what we want, but we can't legitimately say all consumers are on an equal footing. Martin __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From kennelson11 at gmail.com  Wed Apr  4 21:56:30 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 04 Apr 2018 21:56:30 +0000
Subject: [openstack-dev] [All] TC Election Season
Message-ID: 

Hello!

Election details: https://governance.openstack.org/election/

Please read the stipulations and timelines for candidates and electorate
contained in this governance documentation.

There will be further announcements posted to the mailing list as action
is required from the electorate or candidates. This email is for
information purposes only.

If you have any questions which you feel affect others, please reply to
this email thread. If you have any questions that you wish to discuss in
private, please email any of the election officials[1] so that we may
address your concerns.

Thank you,

- Kendall Nelson (diablo_rojo)

[1] https://governance.openstack.org/election/#election-officials
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kennelson11 at gmail.com  Wed Apr  4 22:09:59 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 04 Apr 2018 22:09:59 +0000
Subject: [openstack-dev] [Openstack-track-chairs] Asking for ask.openstack.org
In-Reply-To: <5AC542F4.2090205@openstack.org>
References: <5AC542F4.2090205@openstack.org>
Message-ID: 

Hey Jimmy,

So the First Contact SIG has started looking at questions posted on
ask.o.o with the tags of 'contributor' or 'contribution' on a weekly
basis and discussing it in our weekly meeting as a standing item. If
there are other relevant tags that you think we should be looking at I
would be happy to add them to the list.

At the Summit I have proposed a talk for the Forum to get more operator
involvement in our SIG and perhaps after we have a more diverse SIG we
can add some more tags to our watchlist.

-Kendall (diablo_rojo)

On Wed, Apr 4, 2018 at 2:26 PM Jimmy McArthur wrote:

> Hi everyone!
>
> We have a very robust and vibrant community at ask.openstack.org. There
> are literally dozens of posts a day. However, many of them don't receive
> knowledgeable answers. I'm really worried about this becoming a vacuum
> where potential community members get frustrated and don't realize how to
> get more involved with the community.
>
> I'm looking for thoughts/ideas/feelings about this tool as well as
> potential admin volunteers to help us manage the constant influx of
> technical and not-so-technical questions around OpenStack.
>
> For those of you already contributing there, Thank You! For those that
> are interested in becoming a moderator (instant AUC status!) or have some
> additional ideas around fostering this community, please respond.
>
> Looking forward to your thoughts :)
>
> Thanks!
> Jimmy
> irc: jamesmcarthur
> _______________________________________________
> Openstack-track-chairs mailing list
> Openstack-track-chairs at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-track-chairs
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org  Wed Apr  4 22:13:49 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 04 Apr 2018 17:13:49 -0500
Subject: [openstack-dev] [Openstack-track-chairs] Asking for ask.openstack.org
In-Reply-To: 
References: <5AC542F4.2090205@openstack.org>
Message-ID: <5AC54E1D.6010007@openstack.org>

Oh that's great! I'll try to attend that talk and, in the meantime,
gather other relevant tags.
Thanks for the info :) > Kendall Nelson > April 4, 2018 at 5:09 PM > Hey Jimmy, > > So the First Contact SIG has started looking at questions posted on > ask.o.o with the tags of 'contributor' or 'contribution' on a weekly > basis and discussing it in our weekly meeting as a standing item. If > there are other relevant tags that you think we should be looking at I > would be happy to add them to the list. > > At the Summit I have proposed a talk for the Forum to get more > operator involvement in our SIG and perhaps after we have a more > diverse SIG we can add some more tags to our watchlist. > > -Kendall (diablo_rojo) > > Jimmy McArthur > April 4, 2018 at 4:26 PM > Hi everyone! > > We have a very robust and vibrant community at ask.openstack.org > . There are literally dozens of posts a > day. However, many of them don't receive knowledgeable answers. I'm > really worried about this becoming a vacuum where potential community > members get frustrated and don't realize how to get more involved with > the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as > potential admin volunteers to help us manage the constant influx of > technical and not-so-technical questions around OpenStack. > > For those of you already contributing there, Thank You! For those > that are interested in becoming a moderator (instant AUC status!) or > have some additional ideas around fostering this community, please > respond. > > Looking forward to your thoughts :) > > Thanks! > Jimmy > irc: jamesmcarthur > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Wed Apr 4 22:16:15 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Wed, 4 Apr 2018 18:16:15 -0400 Subject: [openstack-dev] [all][infra] Upcoming changes in ARA Zuul job reports In-Reply-To: References: Message-ID: Hi, You might have noticed that the performance (and reliability) of the new reports aren't up to par. If you see failures in loading content, a refresh will usually fix the issue. We have different fixes to improve the performance and the reliability of the reports and we hope to be able to land them soon. In the meantime, please let us know if there is any report that appears to be particularly problematic. Thanks ! David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Thu, Mar 29, 2018 at 6:14 PM, David Moreau Simard wrote: > Hi, > > By default, all jobs currently benefit from the generation of a static > ARA report located in the "ara" directory at the root of the log > directory. > Due to scalability concerns, these reports were only generated when a > job failed and were not available on successful runs. > > I'm happy to announce that you can expect ARA reports to be available > for every job from now on -- including the successful ones ! > > You'll notice a subtle but important change: the report directory will > henceforth be named "ara-report" instead of "ara". > > Instead of generating and saving a HTML report, we'll now only save > the ARA database in the "ara-report" directory. 
> This is a special directory from the perspective of the
> logs.openstack.org server and ARA databases located in such
> directories will be loaded dynamically by a WSGI middleware.
>
> You don't need to do anything to benefit from this change -- it will
> be pushed to all jobs that inherit from the base job by default.
>
> However, if you happen to be using a "nested" installation of ARA and
> Ansible (i.e, OpenStack-Ansible, Kolla-Ansible, TripleO, etc.), this
> means that you can also leverage this feature.
> In order to do that, you'll want to create an "ara-report" directory
> and copy your ARA database inside before your logs are collected and
> uploaded.
>
> To help you visualize:
> /ara-report <-- This is the default Zuul report
> /logs/ara <-- This wouldn't be loaded dynamically
> /logs/ara-report <-- This would be loaded dynamically
> /logs/some/directory/ara-report <-- This would be loaded dynamically
>
> For more details on this feature of ARA, you can refer to the documentation [1].
>
> Let me know if you have any questions !
>
> [1]: https://ara.readthedocs.io/en/latest/advanced.html
>
> David Moreau Simard
> Senior Software Engineer | OpenStack RDO
>
> dmsimard = [irc, github, twitter]

From inc007 at gmail.com  Wed Apr  4 22:20:57 2018
From: inc007 at gmail.com (Michał Jastrzębski)
Date: Wed, 4 Apr 2018 15:20:57 -0700
Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project
In-Reply-To: 
References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org>
Message-ID: 

On 4 April 2018 at 14:45, Brandon Jozsa wrote:
> I've been a part of the OpenStack-Helm project from the very beginning, and
> there was a lot of early brainstorming on how we could collaborate and
> contribute directly to Kolla-Kubernetes. In fact, this was the original
> intent when we met with Kolla back in Barcelona. We didn't like the idea of
> fragmenting interested Kubernetes developers/operators in the
> OpenStack-via-Kubernetes space. Whatever the project, we wanted all the
> domain expertise concentrated on a single deployment effort. Even though
> OSH/K-k8s couldn't reach an agreement on how to handle configmaps (our
> biggest difference from the start), there was a lot of early collaboration
> between the project cores. Early K-k8s contributors may remember Halcyon,
> which cores from both sides promoted for early development of
> OpenStack-via-Kubernetes, regardless of the project.
>
> One of the requests from the initial OSH team (in Barcelona) was to formally
> separate Kolla from Kolla-foo deployment projects, both at a project level
> and from a core perspective. Why have the same cores giving +2's to Kolla,
> Kolla-Ansible, Kolla-Mesos (now dead) and Kolla-Kubernetes, who may not have
> any interest in another given discipline? We wanted reviews to be timely,
> and laser-focused, and we felt that this more atomic approach would benefit
> Kolla in the end. But unfortunately there was heavy resistance with limited
> yet very influential cores. I honestly think pushback was also because it
> would mean that any Kolla sub-projects would be subject to re-acceptance as
> big tent projects.

"Limited yet very influential cores" sounds like a bad community, and as
it happens I was leading this community at that time, so I feel I should
comment. We would love to increase the number of cores (raise the limit)
for images, but that comes with a cost.
The cost being that a person who would like to become a core would need
to contribute to the project in question and review other people's
contributions. The proper way to address this problem would be just
that - contributing to Kolla and reviewing code. If I failed to notice
contributions from someone who did that a lot (I hope I didn't), I'm
sorry. This is the best and only way to solve the problem in question.

>
> There were also countless discussions about the preservation of the Kolla
> API, or Ansible + Jinja portions of Kolla-Ansible. It became clear to us
> that Kubernetes wasn't going to be the first class citizen for the
> deployment model in Kolla-Kubernetes, forcing operators to troubleshoot
> between OpenStack, Kolla (container builds), Ansible, Kubernetes, and Helm.
> This is apparent still today. And while I understand the hesitation to
> change Kolla/Kolla-Ansible, I think this code-debt has somewhat contributed
> to the sustainability problems of Kolla-Kubernetes. Somewhat to the point
> of tension, I very much agree with Thierry's comments earlier.

How was k8s not a first-class citizen? I don't understand. All processes
were the same, and time at the PTG was generous compared to Ansible.
More people use Ansible due to its maturity, so it's obvious it's going
to have better testing, but again, that is solved by contributions.

> I want all of these projects to succeed but consolidation with purposeful
> and deliberate planning, which Rich has so kindly agreed to do, could be the
> right answer. So I +1 the idea, because I think it puts all like-minded
> individuals on the same focus (for the overall benefit of OpenStack and the
> overall OpenStack community). But we have to make sure there isn't a lot of
> fallout from the decision either. To Steve Dake's previous point, there
> could be orphaned users/operators who feel "forced" into another project. I
> would hate to see that. It would be nice to at least plan this with the
> user-base and give them fair warning. And to this point, who are the actual
> active Kolla-Kubernetes cores? Who is "PTL" of Kolla-Kubernetes today?

As per the election results, it's Jeffrey.

> On the other hand, I think that OSH has some improvements to make as well.
> Gating could use some help and the OpenStack-Infra team has been kindly
> helping out recently (a huge "thank you" to them). Docs…I think docs could
> always use some love. Please offer your experiences to the OSH team! We
> would love to hear your user input. Ultimately, if users/operators want to
> run something that even closely resembles production, then we need some
> decent production quality docs as opposed to leveraging the nascent gate
> scripts (Zuulv3 ansible+heat). Releases and release planning should be
> addressed, as users/vendors are going to want to be closer to OpenStack
> release dates (recent versions of OpenStack, Helm and Kubernetes). Clear and
> open roadmaps, with potential use of community-led planning tools. Open
> elections for PTL. Finally, the OSH team may still be interested in
> diversifying its core base. Matt M. would have to address this. I know that
> I was actively seeking cores when I was initially PTL, and
> truthfully…there's nobody nicer or easier to work with than Matt. He's an
> awesome PTL, and any project would be fortunate to have him. All these
> things could all be improved on, but it requires a diverse base with a lot
> of great ideas.

Kolla has one of the most diverse core teams in OpenStack. As I said,
all it takes is valuable reviews to become core.
> That said, I am in favor of consolidation…if it makes sense and if there's a > strong argument for it. We just need to think what’s best for the OpenStack > community as a whole, and put away the positions of the individual projects > for a moment. To me, that makes things pretty clear, regardless of where the > commits are going. And with the +1’s, I think we’re hearing you. Now we just > have to plan it out and take action (on both sides). > > Brandon > > > On April 2, 2018 at 11:14:01 AM, Martin André (m.andre at redhat.com) wrote: > > On Mon, Apr 2, 2018 at 4:38 PM, Steven Dake (stdake) > wrote: >> >> >> >> On April 2, 2018 at 6:00:15 AM, Martin André (m.andre at redhat.com) wrote: >> >> On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) >> wrote: >>> My viewpoint is as all deployments projects are already on an equal >>> footing >>> when using Kolla containers. >> >> While I acknowledge Kolla reviewers are doing a very good job at >> treating all incoming reviews equally, we can't realistically state >> these projects stand on an equal footing today. >> >> >> At the very least we need to have kolla changes _gating_ on TripleO >> and OSH jobs before we can say so. Of course, I'm not saying other >> kolla devs are opposed to adding more CI jobs to kolla, I'm pretty >> sure they would welcome the changes if someone volunteers for it, but >> right now when I'm approving a kolla patches I can only say with >> confidence that it does not break kolla-ansible. In that sense, >> kolla_ansible is special. >> >> Martin, >> >> Personally I think all of OpenStack projects that have a dependency or >> inverse dependency should cross-gate. For example, Nova should gate on >> kolla-ansible, and at one point I think they agreed to this, if we >> submitted >> gate work to do so. We never did that. >> >> Nobody from TripleO or OSH has submitted gates for Kolla. Submit them and >> they will follow the standard mechanism used in OpenStack >> experimental->non-voting->voting (if people are on-call to resolve >> problems). I don't think gating is relevant to equal footing. TripleO for >> the moment has chosen to gate on their own image builds, which is fine. If >> the gating should be enhanced, write the gates :) >> >> Here is a simple definition from the internet: >> >> "with the same rights and conditions as someone you are competing with" >> >> Does that mean if you want to split the kolla repo into 40+ repos for each >> separate project, the core team will do that? No. Does that mean if there >> is a reasonable addition to the API the patch would merge? Yes. >> >> Thats right, deployment tools compete, but they also cooperate and >> collaborate. The containers (atleast from my perspective) are an area >> where >> Kolla has chosen to collaborate. FWIW I also think we have chosen to >> collobrate a bit in areas we compete (the deployment tooling itself). Its >> a >> very complex topic. Splitting the governance and PTLs doesn't change the >> makeup of the core review team who ultimately makes the decision about >> what >> is reasonable. > > Collaboration is good, there is no question about it. > I suppose the question we need to answer is "would splitting kolla and > kolla-ansible further benefit kolla and the projects that consume > it?". I believe if you look at it from this angle maybe you'll find > areas that are neglected because they are lower priority for > kolla-ansible developers. > >>> I would invite the TripleO team who did integration with the Kolla API to >>> provide their thoughts. 
>>
>> The Kolla API is stable and incredibly useful... it's also
>> undocumented. I have a stub for a documentation change that's been
>> collecting dust on my hard drive for months, maybe it's time I brush it
>>
>> Most of Kolla unfortunately is undocumented. The API is simple and
>> straightforward enough that TripleO, OSH, and several proprietary vendors
>> (the ones Jeffrey mentioned) have managed to implement deployment tooling
>> that consumes the API. Documentation for any part of Kolla would be highly
>> valued - IMO it is the Kolla project's biggest weakness.
>>
>>
>> up and finally submit it. Today, unless you're a kolla developer
>> yourself, it's difficult to understand how to use the API; it's not the
>> most user friendly.
>>
>> Another thing that comes for free with Kolla: the extend_start.sh
>> scripts are for the most part only useful in the context of
>> kolla_ansible. For instance, hardcoding the path for log dirs to
>> /var/log/kolla and changing groups to 'kolla'.
>> In TripleO, we've chosen to not depend on the extend_start.sh scripts
>> whenever possible for this exact reason.
>>
>> I don't disagree. I was never fond of extend_start, and thought any
>> special operations it provided belong in the API itself. This is why
>> there are mkdir operations and chmod/chown -R operations in the API.
>> The JSON blob handed to the API during runtime is where the API begins
>> and ends. The implementation (what set_cfg.py does with start.sh and
>> extend_start.sh) is not part of the API but part of the API
>> implementation.
>
> One could argue that the environment variables we pass to the
> containers to control what extend_start.sh does are also part of the
> API. That's not my point. There is a lot of cruft in these scripts
> that remains from the days when kolla-ansible was the only consumer of
> kolla images.
>
>> I don't think I said anywhere the API is perfectly implemented. I'm not
>> sure I've ever seen this mythical perfection thing in an API anyway :)
>>
>> Patches are welcome to improve the API to make it more general, as long as
>> they maintain backward compatibility.
>>
>>
>> The other critical kolla feature we're making extensive use of in
>> TripleO is the ability to customize the image in any imaginable way
>> thanks to the template override mechanism. There would be no
>> containerized deployments via TripleO without it.
>>
>>
>> We knew people would find creative ways to use the plugin templating
>> technology, and help drive adoption of Kolla as a standard...
>>
>> Kolla is a great framework for building container images for OpenStack
>> services that any project can consume. We could do a better job at
>> advertising it. I guess bringing kolla and kolla-kubernetes under
>> separate governance (even if the team remains mostly the same) is one
>> way to enforce the independence of the kolla-the-images project and
>> recognize people may be interested in the images but not the
>> deployment tools.
>>
>> One last thought. Would you imagine a kolla PTL who is not heavily
>> invested in kolla_ansible?
>>
>>
>> Do you mean to imply a conflict of interest? I guess I don't understand
>> the statement. Would you clarify please?
>
> All I'm saying is that we can't truly claim we've fully decoupled
> Kolla and Kolla-ansible until we're ready to accept someone who is not
> a dedicated contributor to kolla-ansible as kolla PTL. Until then,
> some might rightfully say kolla-ansible is driving the kolla project.
> It's OK, maybe as the kolla community that's what we want, but we > can't legitimately say all consumers are on an equal footing. > > Martin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pabelanger at redhat.com Wed Apr 4 22:30:30 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 4 Apr 2018 18:30:30 -0400 Subject: [openstack-dev] Asking for ask.openstack.org In-Reply-To: <5AC542F4.2090205@openstack.org> References: <5AC542F4.2090205@openstack.org> Message-ID: <20180404223030.GA12345@localhost.localdomain> On Wed, Apr 04, 2018 at 04:26:12PM -0500, Jimmy McArthur wrote: > Hi everyone! > > We have a very robust and vibrant community at ask.openstack.org > . There are literally dozens of posts a day. > However, many of them don't receive knowledgeable answers. I'm really > worried about this becoming a vacuum where potential community members get > frustrated and don't realize how to get more involved with the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as potential > admin volunteers to help us manage the constant influx of technical and > not-so-technical questions around OpenStack. > > For those of you already contributing there, Thank You! For those that are > interested in becoming a moderator (instant AUC status!) or have some > additional ideas around fostering this community, please respond. > > Looking forward to your thoughts :) > > Thanks! > Jimmy > irc: jamesmcarthur We also have a 2nd issue where the ask.o.o server doesn't appear to be large enough any more to handle the traffic. A few times over the last few weeks we've had outages due to the HDD being full. We likely need to reduce the number of days we retain database backups / http logs or look to attach a volume to increase storage. Paul From zbitter at redhat.com Thu Apr 5 00:23:09 2018 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 4 Apr 2018 20:23:09 -0400 Subject: [openstack-dev] Asking for ask.openstack.org In-Reply-To: <5AC542F4.2090205@openstack.org> References: <5AC542F4.2090205@openstack.org> Message-ID: <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com> On 04/04/18 17:26, Jimmy McArthur wrote: > Hi everyone! > > We have a very robust and vibrant community at ask.openstack.org > .  There are literally dozens of posts a > day. However, many of them don't receive knowledgeable answers.  I'm > really worried about this becoming a vacuum where potential community > members get frustrated and don't realize how to get more involved with > the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as > potential admin volunteers to help us manage the constant influx of > technical and not-so-technical questions around OpenStack. Here's the thing: email alerts. They're broken. I have had my email alert preferences set to 'only subscribed tags' with a daily 'Entire forum (tag filtered)' email for several (like 4) years now. 
I am subscribed to exactly 3 tags.[1] For the first 3 years, I didn't receive any email alerts at all despite repeated fiddling with the settings. At the beginning of 2017 there was a software update and I started getting daily emails that are *not* tag filtered. (I know it was due to a software update, because the first emails started coming from the staging server.) Within a couple of days those emails started to go directly to spam, because GMail. I trained it not to do that any more for me, but it's unlikely most people did, and in any event all I get is a daily email that generally doesn't contain any of the questions I am interested in - even on days where there _are_ in fact new questions with tags that I am subscribed to. I've been able to make a reasonably significant contribution to answering questions because I've made it a habit to check the site itself regularly (and the tag filtering on the home page works really quite well). But anyone wanting to use technical means to give them notice about only the stuff they're interested in only when needed would be unable to do so. (And even if you fixed it at this point it'd all be sucked into spam filters.) There's other problems too - for example if somebody posts a question with not enough information and I post a comment asking for more, I won't get a notification when they reply unless they specifically @ me, which most people won't. (Sometimes I've discovered the replies only years later.) In fact in general the learning curve is way too high for people who just want to ask a casual question - for example, I'd say users are considerably more likely to respond to a correct answer by posting their own 'answer' that says 'It worked!' (or, worse, contains a totally unrelated question) than they are to click the 'Accepted answer' button - and there's no point trying to educate them because you almost never see the same user twice. Those are all broader problems with the design of StackExchange though; the alert thing is a feature that's supposedly present but doesn't work as advertised. It's also worth noting that the voting in general is fairly pointless because ~nobody has an account registered. So if people find a useful question and/or answer on ask.openstack from a search engine, they still won't bother to upvote because they'd have to create an account. Communities with critical mass like StackOverflow can use voting as a quality signal to surface the best content; we don't get enough data for that. (For reference, I've answered 237 questions and less than a dozen have ever gotten a second upvote - which is likely a good proxy for 'has ever been voted on by someone other than the original questioner'.) So, suggestions: * Fix the email subscription thing. * Ensure all devs have an account - perhaps by creating one for them using their IRC nickname & Foundation email? - and encourage people to @ each other when they see a question where they don't know the answer but they know who would (like you might add people to a Gerrit review). (Although realistically most of this will end up in the Spam folder... some might say deservedly ;) * Encourage teams to figure out a set of tags they want to watch, and encourage at least all core reviewers to log in once and set up their tag subscriptions so they'll see something useful when they visit the homepage. * Ask each team to come up with 1 or 2 volunteers to subscribe to (filtered!) email alerts and try to answer or triage incoming questions. 
> For those of you already contributing there, Thank You!  For those that
> are interested in becoming a moderator (instant AUC status!) or have
> some additional ideas around fostering this community, please respond.

I'm not sure what else there is that I can't already do at my current karma level, but you're welcome to add me to the list and I'll try to do some of it in my travels.

cheers,
Zane.

[1] Feel free to use your admin powers to poke around in my settings to try to figure out what is going on: https://ask.openstack.org/en/users/2133/zaneb/?sort=email_subscriptions

> Looking forward to your thoughts :)
>
> Thanks!
> Jimmy
> irc: jamesmcarthur

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhipengh512 at gmail.com Thu Apr 5 00:25:14 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 5 Apr 2018 08:25:14 +0800
Subject: [openstack-dev] Asking for ask.openstack.org
In-Reply-To: <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com>
References: <5AC542F4.2090205@openstack.org> <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com>
Message-ID:

The email alert definitely should be the first one to get fixed :)

On Thu, Apr 5, 2018 at 8:23 AM, Zane Bitter wrote:
> [full quote of Zane's message above trimmed]

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From delightwook at ssu.ac.kr Thu Apr 5 01:52:44 2018
From: delightwook at ssu.ac.kr (MinWookKim)
Date: Thu, 5 Apr 2018 10:52:44 +0900
Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To:
References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr>
Message-ID: <020b01d3cc80$ca56c240$5f0446c0$@ssu.ac.kr>

Hello Ifat,

Thanks for the good comments. It was very helpful.

As you said, I tested std.ssh, and I was able to get much better results. I am confident that this is what I want. We can use std.ssh to provide convenience to users with a much more efficient way to configure shell scripts / monitoring-agent automation (for Zabbix history, etc.) / other commands.

In addition, std_actions.py contains a number of features that could be used for this proposal (such as HTTP). So if we actively use the actions in std_actions.py, we should be able to construct neat code without the duplicate functionality that you were worried about. It has been a great help.

In addition, I also agree that a Vitrage action is needed in Mistral. If possible, I might be able to work on that in the future (ASAP).

Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, April 4, 2018 4:21 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I discussed this issue with a Mistral contributor. Mistral has a long list of actions that can be used. Specifically, you can use the std.ssh action to execute shell scripts.

Some useful commands:
mistral action-list
mistral action-get

I'm not sure about the output of std.ssh, and whether you can get it from the action. I suggest you try it and see how it works. The action is implemented here: https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py

If std.ssh does not suit your needs, you also have the option to implement and run your own action in Mistral (either as an ssh action or as Python code).
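For illustration, a minimal Mistral v2 workflow wrapping std.ssh could look like the sketch below. The workflow name and inputs are invented for this example; only the parameter names (cmd, host, username, private_key_filename) come from the std.ssh action signature in std_actions.py:

    ---
    version: '2.0'

    vm_health_check:
      description: Run a shell command on a remote VM/host over SSH.
      input:
        - host
        - username
        - private_key_filename
        - cmd: "uptime"   # default check command, just an example
      tasks:
        run_check:
          action: std.ssh
          input:
            host: <% $.host %>
            username: <% $.username %>
            private_key_filename: <% $.private_key_filename %>
            cmd: <% $.cmd %>
          publish:
            # raw stdout of the command, readable from the execution
            check_output: <% task(run_check).result %>

A workflow like this could then be triggered with something along the lines of `mistral execution-create vm_health_check '{"host": "...", "username": "..."}'` and the published output read back from the execution result.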
And BTW, it is not related to your current use case, but we could also add Vitrage actions to Mistral, so the user can access Vitrage information (get topology, get alarms) from Mistral workflows.

Best regards,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 3 April 2018 at 15:19
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. Your comments have been a great help to the proposal. (Sorry, I had not realized we could use Mistral.)

If we use a Mistral workflow for the proposal, we can get better results (good results in both performance and code conciseness). Also, if we use a Mistral workflow, we do not need to write any unnecessary code. Since I don't know Mistral well yet, I think it would be better to settle on the most efficient design, including Mistral, once I have a better grasp of it.

If we run a check through a Mistral workflow, how about providing users with a choice of tools that are capable of performing checks? We can get the results of the checks through Mistral and the tools, but I think we still need a minimal amount of functionality to manage them. What do you think?

I attached a picture of the UI that I quickly implemented. I hope it helps you understand. (The parameters and content have no meaning; they are just a simple example.) : )

Thanks. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Tuesday, April 3, 2018 8:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thanks for the explanation. I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense.

However, I don't want to re-invent the wheel and add to Vitrage functionality that already exists in other projects. How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I'm not so sure about the UI part; we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc.

What do you think?
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 3 April 2018 at 5:36
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about them. We also need to think about targets, scope, and so on.

The reason I do not consider tools like Zabbix, Nagios, and Prometheus suitable for running these checks is that we would need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host. But the scope of the idea, I think, includes the VM's internals. Therefore, configuring the agent automatically inside the VM may not be easy (although we can use parameters like user-data).

If we exclude VM-internal checks from the scope, we can simply perform a check via Zabbix.
(Like Zabbix's remote command, or its history.)

On the other hand, if we include the inside of a VM in the scope and configure an agent in each of them, we have a rather constant overhead. The check service may incur temporary overhead, but the agent configuration causes constant overhead. And the Zabbix history could become another task for Vitrage to manage.

If we configure the agents ourselves and exclude the VM-internal checks, we can provide the functionality with simple code. What do you think?

Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Monday, April 2, 2018 10:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks?

If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service.

Some technical details:

* The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But it can be done easily.
* A relevant Zabbix API is history.get [1]
* If you are not using Zabbix, I assume that other monitoring tools have similar capabilities.

What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user in viewing the data in the format that you suggested?

[1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get

Thanks,
Ifat
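As a rough illustration of the history.get call mentioned above, a client could fetch the most recent values of an item through Zabbix's JSON-RPC endpoint like this. The URL, auth token and item id below are placeholders, and 'history': 0 assumes a numeric float item:

    import requests

    ZABBIX_URL = 'http://zabbix.example.com/api_jsonrpc.php'  # placeholder

    # Fetch the 10 most recent values recorded for a single item.
    # The auth token would come from a prior user.login call.
    payload = {
        'jsonrpc': '2.0',
        'method': 'history.get',
        'params': {
            'itemids': '23296',   # e.g. the item id kept on the alarm vertex
            'history': 0,         # 0 = numeric float history
            'sortfield': 'clock',
            'sortorder': 'DESC',
            'limit': 10,
        },
        'auth': 'PLACEHOLDER_TOKEN',
        'id': 1,
    }

    history = requests.post(ZABBIX_URL, json=payload).json()['result']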
From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Monday, 2 April 2018 at 4:51
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thank you for the reply. :) This is only my opinion, so if I'm wrong, we can change the implementation at any time (even if it differs from my initial intention).

The same security issues arise, as you say. But today Vitrage does not call external APIs. The Vitrage-dashboard uses the Vitrageclient library for topology, alarm, and RCA requests to Vitrage. So if we add an API, it will have the following flow:

The Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API call. -> api/controllers/v1/checks.py is called. -> The checks service is called.

In this flow, the Vitrage API is passed through only for data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we would need to create a function for the check action in the Vitrage-dashboard, and write code to call that function.

If I'm wrong, please tell me anytime. :)

Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Sunday, April 1, 2018 3:40 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I understand your concern about the security issue. But how would that be different if the API call is passed through the Vitrage API? The authentication from vitrage-dashboard to the Vitrage API will work, but then Vitrage will call an external API and you'll have the same security issue, right? I don't understand what the difference is between calling the external component from vitrage-dashboard and calling it from vitrage.

Best regards,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 14:51
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :) I wrote my opinion on your comments.

Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly?

Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has its own API address, separate from Vitrage, to receive requests from the Vitrage-dashboard, then the Vitrage-dashboard needs to know the API address of the check component. This can result in a request/response channel that is open to anyone, bypassing the authentication supported by OpenStack between the Vitrage-dashboard and the check component's request/response procedure. Requests would be possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This is a problem because anyone who knows the API address of the check component could make the hosts and VMs execute system commands.

What should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results?

If the window is closed before the check is finished, the user cannot see the result. To solve this, I think temporarily saving a list of recent results is a solution. By storing a temporary list (for example, up to 10 entries), the user can see previous results, and it should also be possible for the user to empty the list. What do you think?

Thank you. Best Regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Thursday, March 29, 2018 8:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly?

And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results?

Thanks,
Ifat.

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Thursday, 29 March 2018 at 10:25
To: "'OpenStack Development Mailing List (not for usage questions)'"
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat and Vitrage team,

I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows:

Vitrage-dashboard (action-list panel) -> Vitrage API -> check component

Last time I mentioned an API handler, but it would be better to call the check component directly from the Vitrage API without one. I hope this helps you understand.

Thank you. Best Regards, Minwook.
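To make that flow more concrete, the checks endpoint could be a small Pecan REST controller along these lines. This is purely a hypothetical sketch, since checks.py does not exist in Vitrage, and CheckDispatcher is an invented name for a client of the proposed check component:

    # api/controllers/v1/checks.py (proposed, does not exist yet)
    import pecan
    from pecan import rest


    class CheckDispatcher(object):
        """Hypothetical client for the proposed check component."""

        def run(self, target, check):
            # Would forward the request to the check component and
            # block until it returns the command output.
            raise NotImplementedError


    class ChecksController(rest.RestController):
        """Receives check requests from vitrage-dashboard and forwards
        them to the check component."""

        @pecan.expose('json')
        def post(self, **kwargs):
            # e.g. kwargs = {'target': '<vm or host id>', 'check': 'p2p'}
            dispatcher = CheckDispatcher()
            return dispatcher.run(target=kwargs.get('target'),
                                  check=kwargs.get('check'))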
From: MinWookKim [mailto:delightwook at ssu.ac.kr]
Sent: Wednesday, March 28, 2018 11:21 AM
To: 'OpenStack Development Mailing List (not for usage questions)'
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. :) This is a proposal that we expect to be useful from a user's perspective. From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are:

• I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks.

Yes, that's right. We do not need to save the results in Vitrage, because we just need to check them. It would be possible to implement the function directly in the Vitrage-dashboard, separately from Vitrage, like the add-action-list panel, but that does not seem to be enough to implement all the functions. If you do not mind, we would have the following flow:

1. The user requests the check action from the vitrage-dashboard (add-action-list panel).
2. The check component is called through Vitrage's API handler.
3. The check component executes the command and returns the result.

This is only my opinion, so please tell us if any part is unnecessary. :)

• Do you expect the user to click an entity, select an action to run (e.g. 'P2P check'), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result?

My idea was to select the task, wait for the results in an open panel, and then see them instantly in the panel. If we switch to another menu before the check is complete, we will not be able to see the results. Parallel checking is indeed an issue (it can cause excessive overhead). For earlier results, it may be okay to save them temporarily until we exit the panel; we can then see the previous results through the temporarily saved list.

• Any thoughts of what component will implement those checks? Or maybe these will be just scripts?

I think I will implement a separate component to handle the requests.

• It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2.

That is expected to be really good. It would be very useful if the entity graph generated an alarm based on the check result. I think we can discuss that part in detail later.

My answers are my own opinions and assumptions. If you think my implementation is wrong or inefficient, please do not hesitate to tell me.

Thanks. Best Regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Wednesday, March 28, 2018 2:23 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

I think that from a user's perspective, these are very good ideas.
I have some questions regarding the UX and the implementation, since I'm trying to think what could be the best way to execute such actions from Vitrage.

* I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks.
* Do you expect the user to click an entity, select an action to run (e.g. 'P2P check'), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result?
* Any thoughts of what component will implement those checks? Or maybe these will be just scripts?
* It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2.

Best Regards,
Ifat

From: MinWookKim
Reply-To: "OpenStack Development Mailing List (not for usage questions)"
Date: Tuesday, 27 March 2018 at 14:45
To: "openstack-dev at lists.openstack.org"
Subject: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Vitrage team,

I am currently working on the Vitrage-dashboard proposal to 'Add action list panel for entity click action' (https://review.openstack.org/#/c/531141/), and I would like to make a new proposal based on the action list panel mentioned above.

The new proposal is to provide multidimensional analysis capabilities for the entities that make up the infrastructure in the entity graph. Vitrage's entity graph allows us to efficiently monitor alarms from various monitoring tools. Currently, when there is a problem with a VM or host, or when we want to check its status, we need to access the console of each VM and host individually. This causes unnecessary work as the number of VMs and hosts increases.

My new suggestion is that, even with a large number of VMs and hosts, we should not need to connect directly to each VM or host console to enter system commands. Instead, through this proposal, we can send a system command to the VMs and hosts in the cloud and simply check the results. I have written some use cases to explain the function efficiently.

From an implementation perspective, the goals of the proposal are:

1. To execute commands without installing any agent/client that could put load on the VMs and hosts.
2. To provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts.
3. To make the results easy to grasp at a glance.
4. To implement a component that can support many additional scenarios in plug-in format.

I would be happy if you could comment on the proposal or ask questions.

Thanks. Best Regards, Minwook.
From iwienand at redhat.com Thu Apr 5 04:04:40 2018
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 5 Apr 2018 14:04:40 +1000
Subject: [openstack-dev] Asking for ask.openstack.org
In-Reply-To: <20180404223030.GA12345@localhost.localdomain>
References: <5AC542F4.2090205@openstack.org> <20180404223030.GA12345@localhost.localdomain>
Message-ID: <4b684314-bdba-8ead-6354-3984b7610705@redhat.com>

On 04/05/2018 08:30 AM, Paul Belanger wrote:
> We likely need to reduce the number of days we retain database
> backups / http logs or look to attach a volume to increase storage.

We've long had problems with this host and I've looked at it before [1]. It often drops out. It seems there's enough interest that we should dive a bit deeper. Here's what I've found out:

askbot
------

The askbot site itself seems under control, except for an unbounded session log file. Proposed [2]

 root@ask:/srv# du -hs *
 2.0G   askbot-site
 579M   dist

overall
-------

The major consumer is /var, where we've got

 3.9G   log
 5.9G   backups
 9.4G   lib

backups
-------

The backups seem under control at least; we're rotating them out, we keep 10, and the size is pretty consistently 500mb:

 root@ask:/var/backups/pgsql_backups# ls -lh
 total 5.9G
 -rw-r--r-- 1 root root 599M Apr  5 00:03 askbotdb.sql.gz
 -rw-r--r-- 1 root root 598M Apr  4 00:03 askbotdb.sql.gz.1
 ...

We could reduce the backup rotations to just one if we like -- the server is backed up nightly via bup, so at any point we can get previous dumps from there. bup should de-duplicate everything, but still, it's probably not necessary.

The db directory was sitting at ~9GB:

 root@ask:/var/lib/postgresql# du -hs
 8.9G   .

AFAICT, it seems like the autovacuum is running OK on the busy tables:

 askbotdb=# select relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze
            from pg_stat_user_tables where last_autovacuum is not NULL;
 relname          | last_vacuum | last_autovacuum               | last_analyze                  | last_autoanalyze
 -----------------+-------------+-------------------------------+-------------------------------+-------------------------------
 django_session   |             | 2018-04-02 17:29:48.329915+00 | 2018-04-05 02:18:39.300126+00 | 2018-04-05 00:11:23.456602+00
 askbot_badgedata |             | 2018-04-04 07:19:21.357461+00 |                               | 2018-04-04 07:18:16.201376+00
 askbot_thread    |             | 2018-04-04 16:24:45.124492+00 |                               | 2018-04-04 20:32:25.845164+00
 auth_message     |             | 2018-04-04 12:29:24.273651+00 | 2018-04-05 02:18:07.633781+00 | 2018-04-04 21:26:38.178586+00
 djkombu_message  |             | 2018-04-05 02:11:50.186631+00 |                               | 2018-04-05 02:14:45.22926+00

Out of interest I did run a manual

 su - postgres -c "vacuumdb --all --full --analyze"

We dropped something:

 root@ask:/var/lib/postgresql# du -hs
 8.9G   .
 (after)
 5.8G   .

I installed pg_activity and watched for a while; nothing seemed to be really stressing it. Ergo, I'm not sure if there's much to do in the db layers.

logs
----

This leaves the logs:

 1.1G   jetty
 2.9G   apache2

The jetty logs are cleaned regularly. I think they could be made more quiet, but they seem to be bounded. Apache logs are rotated but never cleaned up. Surely logs from 2015 aren't useful. Proposed [3]

Random offline
--------------

[4] is an example of a user reporting the site was offline.
Looking at the logs, it seems that puppet found httpd not running at 07:14 and restarted it:

 Apr  4 07:14:40 ask puppet-user[20737]: (Scope(Class[Postgresql::Server])) Passing "version" to postgresql::server is deprecated; please use postgresql::globals instead.
 Apr  4 07:14:42 ask puppet-user[20737]: Compiled catalog for ask.openstack.org in environment production in 4.59 seconds
 Apr  4 07:14:44 ask crontab[20987]: (root) LIST (root)
 Apr  4 07:14:49 ask puppet-user[20737]: (/Stage[main]/Httpd/Service[httpd]/ensure) ensure changed 'stopped' to 'running'
 Apr  4 07:14:54 ask puppet-user[20737]: Finished catalog run in 10.43 seconds

which explains why, when I first looked, it seemed OK. Checking the apache logs we have:

 [Wed Apr 04 07:01:08.144746 2018] [:error] [pid 12491:tid 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=12491): Exception occurred processing WSGI script '/srv/askbot-site/config/django.wsgi'.
 [Wed Apr 04 07:01:08.144870 2018] [:error] [pid 12491:tid 140439253419776] [remote 176.233.126.142:43414] IOError: failed to write data

 ... more until ...

 [Wed Apr 04 07:15:58.270180 2018] [:error] [pid 17060:tid 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=17060): Exception occurred processing WSGI script '/srv/askbot-site/config/django.wsgi'.
 [Wed Apr 04 07:15:58.270303 2018] [:error] [pid 17060:tid 140439253419776] [remote 176.233.126.142:43414] IOError: failed to write data

and the restart logged:

 [Wed Apr 04 07:14:48.912626 2018] [core:warn] [pid 21247:tid 140439370192768] AH00098: pid file /var/run/apache2/apache2.pid overwritten -- Unclean shutdown of previous Apache run?
 [Wed Apr 04 07:14:48.913548 2018] [mpm_event:notice] [pid 21247:tid 140439370192768] AH00489: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f mod_wsgi/3.4 Python/2.7.6 configured -- resuming normal operations
 [Wed Apr 04 07:14:48.913583 2018] [core:notice] [pid 21247:tid 140439370192768] AH00094: Command line: '/usr/sbin/apache2'
 [Wed Apr 04 14:59:55.408060 2018] [mpm_event:error] [pid 21247:tid 140439370192768] AH00485: scoreboard is full, not at MaxRequestWorkers

This does not appear to be disk-space related; see the cacti graphs for that period, which show the disk is full-ish, but not full [5]. What caused the I/O errors? dmesg has nothing in it since 30/Mar. kern.log is empty.

Server
------

Most importantly, this server wants a Xenial upgrade. At the very least, the Apache there is known to handle the "scoreboard is full" issue better. We should ensure that we use a bigger instance; it's using up some swap:

 postgres@ask:~$ free -h
              total  used  free  shared  buffers  cached
 Mem:          3.9G  3.6G  269M   136M      11M    819M
 -/+ buffers/cache:  2.8G  1.1G
 Swap:         3.8G  259M  3.6G

tl;dr
-----

I don't think there's anything run-away bad going on, but the server is undersized and needs a system update. Since I've got this far with it, over the next few days I'll see where we are with the puppet for a Xenial upgrade and see if we can't get a migration underway.
Thanks,

-i

[1] https://review.openstack.org/406670
[2] https://review.openstack.org/558977
[3] https://review.openstack.org/558985
[4] http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-04.log.html#t2018-04-04T07:11:22
[5] http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=2547&rra_id=0&view_type=tree&graph_start=1522859103&graph_end=1522879839

From iwienand at redhat.com Thu Apr 5 04:12:23 2018
From: iwienand at redhat.com (Ian Wienand)
Date: Thu, 5 Apr 2018 14:12:23 +1000
Subject: [openstack-dev] Asking for ask.openstack.org
In-Reply-To: <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com>
References: <5AC542F4.2090205@openstack.org> <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com>
Message-ID: <71b8b916-a227-1aaa-7954-987772a645ff@redhat.com>

On 04/05/2018 10:23 AM, Zane Bitter wrote:
> On 04/04/18 17:26, Jimmy McArthur wrote:
> Here's the thing: email alerts. They're broken.

This is the type of thing we can fix if we know about it ... I will contact you off-list, because the last email to what I presume is you went to an address that isn't what you've sent from here, but it was accepted by the remote end.

-i

From gmann at ghanshyammann.com Thu Apr 5 07:08:59 2018
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Thu, 5 Apr 2018 16:08:59 +0900
Subject: [openstack-dev] [qa] QA Office Hours 9:00 UTC is cancelled
Message-ID:

Hi All,

Today's QA Office Hours @ 9:00 UTC is cancelled due to unavailability of members.

-gmann

From bdobreli at redhat.com Thu Apr 5 08:12:53 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 5 Apr 2018 10:12:53 +0200
Subject: [openstack-dev] [tripleo] PTG session about All-In-One installer: recap & roadmap
In-Reply-To:
References: <1941500803.14106606.1522764003036.JavaMail.zimbra@redhat.com>
Message-ID: <6ebee3da-7a11-682c-29e0-bb2206161613@redhat.com>

On 4/3/18 9:57 PM, Wesley Hayutin wrote:
>
> On Tue, 3 Apr 2018 at 13:53 Dan Prince wrote:
>
> On Tue, Apr 3, 2018 at 10:00 AM, Javier Pena wrote:
> >
> >> Greeting folks,
> >>
> >> During the last PTG we spent time discussing some ideas around an All-In-One
> >> installer, using 100% of the TripleO bits to deploy a single node OpenStack
> >> very similar to what we have today with the containerized undercloud and
> >> what we also have with other tools like Packstack or Devstack.
> >>
> >> https://etherpad.openstack.org/p/tripleo-rocky-all-in-one
> >>
> >
> > I'm really +1 to this. And as a Packstack developer, I'd love to see this as a
> > mid-term Packstack replacement. So let's dive into the details.
>
> Curious on this one actually, do you see a need for continued
> baremetal support? Today we support both baremetal and containers.
> Perhaps "support" is a strong word. We support both in terms of
> installation but only containers now have fully supported upgrades.
>
> The interfaces we have today still support baremetal and containers
> but there were some suggestions about getting rid of baremetal support
> and only having containers. If we were to remove baremetal support
> though, could we keep the Packstack case intact by just using
> containers instead?
>
> Dan
>
> Hey, a couple thoughts..
> 1. I've added this topic to the RDO meeting tomorrow.
> 2. Just a thought, the "elf owl" is the world's smallest owl, at least
> according to the internets. Maybe the all-in-one could be nicknamed
> tripleo elf? Talon is cool too.

+1 for elf as the smallest owl :)
> 3. From a CI perspective, I see this being very helpful with:
>    a: faster run times generally, but especially for upgrade tests.
>       It may be possible to have upgrades gating tripleo projects again.
>    b: enabling more packaging tests to be done with TripleO
>    c: if developers dig it, we have a better chance at getting TripleO
>       into other projects' check jobs / third party jobs where current
>       requirements and run times are prohibitive.
>    d: generally speaking, replacing packstack / devstack in devel and CI
>       workflows where they still exist.
>    e: improved utilization of our resources in RDO-Cloud
>
> It would be interesting to me to see more design and a little more
> thought put into the potential use cases before we get far along. Looks
> like there is a good start to that here [2].
> I'll add some comments with the potential use cases for CI.
>
> /me is very happy to see this moving! Thanks all
>
> [1] https://en.wikipedia.org/wiki/Elf_owl
> [2] https://review.openstack.org/#/c/547038/1/doc/source/install/advanced_deployment/all_in_one.rst
>
>
>     >> One of the problems that we're trying to solve here is to give a
>     simple tool
>     >> for developers so they can both easily and quickly deploy an
>     OpenStack for
>     >> their needs.
>     >>
>     >> "As a developer, I need to deploy OpenStack in a VM on my
>     laptop, quickly and
>     >> without complexity, reproducing the exact same tooling as
>     TripleO is using."
>     >> "As a Neutron developer, I need to develop a feature in Neutron
>     and test it
>     >> with TripleO in my local env."
>     >> "As a TripleO dev, I need to implement a new service and test
>     its deployment
>     >> in my local env."
>     >> "As a developer, I need to reproduce a bug in TripleO CI that
>     blocks the
>     >> production chain, quickly and simply."
>     >>
>     >
>     > "As a packager, I want an easy/low overhead way to test updated
>     packages with TripleO bits, so I can make sure they will not break
>     any automation".
>     >
>
> > Regards,
> > Javier
> >
> >> Thanks everyone who'll be involved,
> >> --
> >> Emilien Macchi
> >>
> >> __________________________________________________________________________
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From geguileo at redhat.com Thu Apr 5 08:15:58 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 5 Apr 2018 10:15:58 +0200
Subject: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To: <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com>
References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com>
Message-ID: <20180405081558.vf7bibu4fcv5kov3@localhost>

On 04/04, Matt Riedemann wrote:
> On 4/2/2018 6:59 AM, Gorka Eguileor wrote:
> > I can only see one benefit from implementing this feature in Cinder
> > versus doing it in Nova, and that is that we can preserve the volume's
> > UUID, but I don't think this is even relevant for this use case, so why
> > is it better to implement this in Cinder than in Nova?
>
> With a new image, the volume_image_metadata in the volume would also be
> wrong, and I don't think nova should (or even can) update that information.
> So nova re-imaging the volume doesn't seem like a good fit to me given
> Cinder "owns" the volume along with any metadata about it.
>
> If Cinder isn't agreeable to this new re-image API, then I think we're stuck

Hi Matt,

I didn't mean to imply that the Cinder team is against this proposal; I just want to make sure that Cinder is the right place to do it and that we will actually get some benefits from doing it in Cinder, because right now I don't see that many...

> with the original proposal of creating a new volume and swapping out the
> root disk, along with all of the problems that can arise from that (original
> volume type is gone, tenant goes over-quota, what do we do with the original
> volume (delete it?), etc).
>
> --
>
> Thanks,
>
> Matt
>

This is what I thought the Nova alternative was, so that's why I didn't understand the image metadata issue.

For clarification, the original volume type cannot be gone, as the type delete operation prevents used volume types from being deleted, and if for some reason it were gone (though I don't see how) Cinder would find itself with the exact same problem, so there's no difference here.
The flow you are describing is basically what the generic implementation for that functionality would do in Cinder:

- Create a new volume from the image, using the same volume type
- Swap the volume information like we do in the live migration case
- Delete the original volume
- Nova will have to swap the root volume (request new connection
  information for that volume and attach it to the node).

Because the alternative is for Cinder to download the image and dd it into the original volume, which breaks all the optimizations that Cinder has for speed and storage saving in the backend (there would be no cloning).

So reading your response, I expand the benefits to 2 if done by Cinder:

- Preserve the volume UUID
- Remove the unlikely race condition of someone deleting the volume
  type between Nova deleting the original volume and creating the new
  one (in this order to avoid the quota issue) when there is no other
  volume using that volume type.

I guess the user-facing volume UUID preservation is a good enough reason to have this API in Cinder, as one would assume re-imaging a volume would never result in a new volume ID.

But just to be clear, Nova will have to initialize the connection with the re-imaged volume and attach it again to the node, since in all cases (except when defaulting to downloading the image and dd-ing it to the volume) the result will be a new volume in the backend.

Cheers,
Gorka.
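For readers following along, that generic Cinder-side flow could be sketched roughly as below. Every helper name here is invented for illustration; this is not actual Cinder code, and it glosses over state management and error handling:

    # A rough sketch of the generic re-image flow described above.

    def reimage_volume(context, volume, image_meta):
        # 1. Build a new volume from the image, reusing the original's
        #    size and volume type so backend cloning optimizations
        #    (and the volume_image_metadata) stay correct.
        new_volume = create_volume_from_image(
            context,
            size=volume.size,
            volume_type_id=volume.volume_type_id,
            image_id=image_meta['id'])

        # 2. Swap the DB records, as the live-migration completion
        #    path does, so the user keeps the same volume UUID.
        swap_volume_records(context, volume, new_volume)

        # 3. Delete the backend volume now hanging off the old record.
        delete_volume(context, new_volume)

        # Nova must then re-initialize the connection and re-attach
        # the root volume, since the backend volume is a new one.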
> >> > >> "As a developer, I need to deploy OpenStack in a VM on my laptop, quickly > >> and > >> without complexity, reproducing the same exact same tooling as TripleO is > >> using." > >> "As a Neutron developer, I need to develop a feature in Neutron and test > >> it > >> with TripleO in my local env." > >> "As a TripleO dev, I need to implement a new service and test its > >> deployment > >> in my local env." > >> "As a developer, I need to reproduce a bug in TripleO CI that blocks the > >> production chain, quickly and simply." > >> > > > > "As a packager, I want an easy/low overhead way to test updated packages > > with TripleO bits, so I can make sure they will not break any automation". > > > >> Probably more use cases, but to me that's what came into my mind now. > >> > >> Dan kicked-off a doc patch a month ago: > >> https://review.openstack.org/#/c/547038/ > >> And I just went ahead and proposed a blueprint: > >> https://blueprints.launchpad.net/tripleo/+spec/all-in-one > >> So hopefully we can start prototyping something during Rocky. > >> > >> Before talking about the actual implementation, I would like to gather > >> feedback from people interested by the use-cases. If you recognize > >> yourself > >> in these use-cases and you're not using TripleO today to test your things > >> because it's too complex to deploy, we want to hear from you. > >> I want to see feedback (positive or negative) about this idea. We need to > >> gather ideas, use cases, needs, before we go design a prototype in Rocky. > >> > > > > I would like to offer help with initial testing once there is something in > > the repos, so count me in! > > > > Regards, > > Javier > > > >> Thanks everyone who'll be involved, > >> -- > >> Emilien Macchi > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lucasagomes at gmail.com Thu Apr 5 09:35:21 2018 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Thu, 5 Apr 2018 10:35:21 +0100 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata Message-ID: Hi, The tests below are failing in the tempest API / Scenario job that runs in the networking-ovn gate (non-voting): neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota 
neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr Digging a bit into it I noticed that with the exception of the two "test_router_interface_status" (ipv6 and ipv4) all other tests are failing because of the way metadata works in networking-ovn. Taking the "test_create_port_when_quotas_is_full" as an example. The reason why it fails is because when the OVN metadata is enabled, networking-ovn will create a metadata port at the moment a network is created [0] and that will already fulfill the quota limit set by that test [1]. That port will also allocate an IP from the subnet which will cause the rest of the tests to fail with a "No more IP addresses available on network ..." error. This is not very trivial to fix because: 1. Tempest should be backend-agnostic. So, adding a conditional in the tempest test to check whether OVN is being used or not doesn't sound correct. 2. Creating a port to be used by the metadata agent is a core part of the design implementation for the metadata functionality [2] So, I'm sending this email to try to figure out what would be the best approach to deal with this problem and start working towards having that job voting in our gate. Here are some ideas: 1. Simply disable the tests that are affected by the metadata approach. 2. Disable metadata for the tempest API / Scenario tests (here's a test patch doing it [3]) 3. Same as 1. but also create similar tempest tests specific for OVN somewhere else (in the networking-ovn tree?!) What do you think would be the best way to work around this problem? Any other ideas? As for the "test_router_interface_status" tests that are failing independently of the metadata, there's a bug reporting the problem here [4]. So we should just fix it. [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154 [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66 [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach [3] https://review.openstack.org/#/c/558792/ [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835 Cheers, Lucas From e0ne at e0ne.info Thu Apr 5 10:42:55 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Thu, 5 Apr 2018 13:42:55 +0300 Subject: [openstack-dev] [horizon] Do we want new meeting time? In-Reply-To: References: Message-ID: Hi team, It's a friendly reminder that we've got voting open [1] until the next meeting. If you would like to attend Horizon meetings, please select the options that are comfortable for you. [1] https://doodle.com/poll/ei5gstt73d8v3a35 Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Mar 21, 2018 at 6:40 PM, Ivan Kolodyazhny wrote: > Hi team, > > As was discussed at the PTG, we usually have very few participants in our > weekly meetings. I suspect it's mostly because the meeting time is not > comfortable for many of us. > > Let's try to re-schedule the Horizon weekly meetings and get more attendees > there. I've created a doodle for it [1]. Please vote for the best time for > you. > > > [1] https://doodle.com/poll/ei5gstt73d8v3a35 > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dprince at redhat.com Thu Apr 5 12:06:41 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 5 Apr 2018 08:06:41 -0400 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: Message-ID: On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen wrote: > On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote: >> >> Kind of a support question but figured I'd ask here in case there are >> suggestions for workarounds for specific machines. >> >> Setting up a new rack of mixed machines this week and hit this issue >> with HP machines using the ipmi power driver for Ironic. Curious if >> anyone else has seen this before? The same commands work great with my >> Dell boxes! >> >> ----- >> >> [root at localhost ~]# cat x.sh >> set -x >> # this is how Ironic sends its IPMI commands it fails >> echo -n password > /tmp/tmprmdOOv >> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >> power status >> >> # this works great >> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >> status >> >> [root at localhost ~]# bash x.sh >> + echo -n password >> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >> power status >> Error: Unable to establish IPMI v2 / RMCP+ session >> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >> status >> Chassis Power is on > > > Very strange. A tcpdump of both would probably be enlightening. :) Ack, I will see about getting these. > > Also curious what version of ipmitool this is, maybe you're hitting an old > bug. RHEL 7.5 so this: ipmitool-1.8.18-7.el7.rpm Dan > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dprince at redhat.com Thu Apr 5 12:13:01 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 5 Apr 2018 08:13:01 -0400 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: Message-ID: On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen wrote: > On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen > wrote: >> >> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote: >>> >>> Kind of a support question but figured I'd ask here in case there are >>> suggestions for workarounds for specific machines. >>> >>> Setting up a new rack of mixed machines this week and hit this issue >>> with HP machines using the ipmi power driver for Ironic. Curious if >>> anyone else has seen this before? The same commands work great with my >>> Dell boxes! >>> >>> ----- >>> >>> [root at localhost ~]# cat x.sh >>> set -x >>> # this is how Ironic sends its IPMI commands it fails >>> echo -n password > /tmp/tmprmdOOv >>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >>> power status >>> >>> # this works great >>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >>> status >>> >>> [root at localhost ~]# bash x.sh >>> + echo -n password >>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >>> power status >>> Error: Unable to establish IPMI v2 / RMCP+ session >>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >>> status >>> Chassis Power is on >> >> >> Very strange. A tcpdump of both would probably be enlightening. :) >> >> Also curious what version of ipmitool this is, maybe you're hitting an old >> bug. 
> > > https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect > here. Thanks for the suggestion Jim! So I tried a few very short passwords and no dice so far. Looking into the tcpdump info a bit now. I'm in a bit of a rush so I may hack in a quick patch in Ironic to make ipmitool use the -P option to proceed and loop back to fix this a bit later. Dan > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From paul.bourke at oracle.com Thu Apr 5 12:16:00 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Thu, 5 Apr 2018 13:16:00 +0100 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images Message-ID: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> Hi all, This mail is to serve as a follow on to the discussion during yesterday's team meeting[4], which was regarding the desire to move start scripts out of the kolla images [0]. There are a few factors at play, and it may well be best left to discuss in person at the summit in May, but hopefully we can get at least some of this hashed out before then. I'll start by summarising why I think this is a good idea, and then attempt to address some of the concerns that have come up since. First off, to be frank, this effort is driven by wanting to add support for loci images[1] in kolla-ansible. I think it would be unreasonable for anyone to argue this is a bad objective to have, loci images have very obvious benefits over what we have in Kolla today. I'm not looking to drop support for Kolla images at all, I simply want to continue decoupling things to the point where operators can pick and choose what works best for them. Stemming from this, I think moving these scripts out of the images provides a clear benefit to our consumers, both users of kolla and third parties such as triple-o. Let me explain why. Normally, to run a docker image, a user will do 'docker run helloworld:latest'. In any non-trivial application, config needs to be provided. In the vast majority of cases this is either provided via a bind mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via environment variables (docker run --env HELLO=paul helloworld:latest). This is all bog standard stuff, something anyone who's spent an hour learning docker can understand. Now, let's say someone wants to try out OpenStack with Docker, and they look at Kolla. First off they have to look at something called set_configs.py[2] - over 400 lines of Python. Next they need to understand what that script consumes, config.json [3]. The only reference for config.json is the files that live in kolla-ansible, a mass of jinja and assumptions about how the service will be run. Next, they need to figure out how to bind mount the config files and config.json into the container in a way that can be consumed by set_configs.py (which by the way, requires the base kolla image in all cases). This is only for the config. For the service start-up command, this needs to also be provided in config.json. This command is then parsed out and written to a location in the image, which is consumed by a series of start/extend start shell scripts. Kolla is *unique* in this regard, no other project in the container world is interfacing with images in this way.
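For anyone who hasn't dug into it, a config.json looks roughly like the following (an abbreviated sketch from memory; see the keystone template in [3] for the real thing):

    {
        "command": "/usr/sbin/httpd -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/keystone.conf",
                "dest": "/etc/keystone/keystone.conf",
                "owner": "keystone",
                "perm": "0600"
            }
        ],
        "permissions": [
            {
                "path": "/var/log/kolla/keystone",
                "owner": "keystone:keystone",
                "recurse": true
            }
        ]
    }

set_configs.py copies the listed files into place with the requested ownership and permissions, and the "command" value is what ultimately gets run by the start scripts.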
Being a snowflake in this regard is not a good thing. I'm still waiting to hear from a real-world operator who would prefer to spend time learning the above to doing: docker run -v /etc/keystone:/etc/keystone --entrypoint /usr/bin/keystone keystone:latest [args] This is the Docker API, it's easy to understand and pretty much the standard at this point. The other argument is that this removes the possibility for immutable infrastructure. The concern is, with the new approach, a rookie operator will modify one of the start scripts - resulting in uncertainty that what was first deployed matches what is currently running. But with the way Kolla is now, an operator can still do this! They can restart containers with a custom entrypoint or additional bind mounts, they can exec in and change config files, etc. etc. Kolla containers have never been immutable and we're bending over backwards to artificially try and make this the case. We can't protect a bad or inexperienced operator from shooting themselves in the foot, there are better ways of doing so. If/when Docker or the upstream container world solves this problem, it would then make sense for Kolla to follow suit. On the face of it, what the spec proposes is a simple change, it should not radically pull the carpet out from under people, or even change the way kolla-ansible works in the near term. If consumers such as tripleo or other parties feel it would in fact do so please do let me know and we can discuss and mitigate these problems. Cheers, -Paul [0] https://review.openstack.org/#/c/550958/ [1] https://github.com/openstack/loci [2] https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py [3] https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 [4] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt From dprince at redhat.com Thu Apr 5 12:46:26 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 5 Apr 2018 08:46:26 -0400 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: Message-ID: Sigh. And the answer is: user error. Adminstrator != Administrator. Well this was fun. Sorry for the bother. All is well. :) Dan On Thu, Apr 5, 2018 at 8:13 AM, Dan Prince wrote: > On Wed, Apr 4, 2018 at 1:27 PM, Jim Rollenhagen wrote: >> On Wed, Apr 4, 2018 at 1:18 PM, Jim Rollenhagen >> wrote: >>> >>> On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote: >>>> >>>> Kind of a support question but figured I'd ask here in case there are >>>> suggestions for workarounds for specific machines. >>>> >>>> Setting up a new rack of mixed machines this week and hit this issue >>>> with HP machines using the ipmi power driver for Ironic. Curious if >>>> anyone else has seen this before? The same commands work great with my >>>> Dell boxes!
>>>> >>>> ----- >>>> >>>> [root at localhost ~]# cat x.sh >>>> set -x >>>> # this is how Ironic sends its IPMI commands it fails >>>> echo -n password > /tmp/tmprmdOOv >>>> ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >>>> power status >>>> >>>> # this works great >>>> ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >>>> status >>>> >>>> [root at localhost ~]# bash x.sh >>>> + echo -n password >>>> + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv >>>> power status >>>> Error: Unable to establish IPMI v2 / RMCP+ session >>>> + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power >>>> status >>>> Chassis Power is on >>> >>> >>> Very strange. A tcpdump of both would probably be enlightening. :) >>> >>> Also curious what version of ipmitool this is, maybe you're hitting an old >>> bug. >> >> >> https://sourceforge.net/p/ipmitool/bugs/90/ would seem like a prime suspect >> here. > > Thanks for the suggestion Jim! So I tried a few very short passwords > and no dice so far. Looking into the tcpdump info a bit now. > > I'm in a bit of a rush so I may hack in a quick patch Ironic to make > ipmitool to use the -P option to proceed and loop back to fix this a > bit later. > > Dan > >> >> // jim >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> From mriedemos at gmail.com Thu Apr 5 13:21:28 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 5 Apr 2018 08:21:28 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <20180405081558.vf7bibu4fcv5kov3@localhost> References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> Message-ID: On 4/5/2018 3:15 AM, Gorka Eguileor wrote: > But just to be clear, Nova will have to initialize the connection with > the re-imagined volume and attach it again to the node, as in all cases > (except when defaulting to downloading the image and dd-ing it to the > volume) the result will be a new volume in the backend. Yeah I think I pointed this out earlier in this thread on what I thought the steps would be on the nova side with respect to creating a new empty attachment to keep the volume 'reserved' while we delete the old attachment, re-image the volume, and then update the volume attachment for the new connection. I think that would be similar to how shelve and unshelve works in nova. Would this really require a swap volume call from Cinder? I'd hope not since swap volume in itself is a pretty gross operation on the nova side. -- Thanks, Matt From jimmy at openstack.org Thu Apr 5 13:39:08 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 05 Apr 2018 08:39:08 -0500 Subject: [openstack-dev] Asking for ask.openstack.org In-Reply-To: <4b684314-bdba-8ead-6354-3984b7610705@redhat.com> References: <5AC542F4.2090205@openstack.org> <20180404223030.GA12345@localhost.localdomain> <4b684314-bdba-8ead-6354-3984b7610705@redhat.com> Message-ID: <5AC626FC.9030706@openstack.org> Ian, thanks for digging in and helping sort out some of these issues! > Ian Wienand > April 4, 2018 at 11:04 PM > > We've long had problems with this host and I've looked at it before > [1]. It often drops out. 
> > It seems there's enough interest we should dive a bit deeper. Here's > what I've found out: > > askbot > ------ > > Of the askbot site, it seems under control, except for an unbounded > session log file. Proposed [2] > > root at ask:/srv# du -hs * > 2.0G askbot-site > 579M dist > > overall > ------- > > The major consumer is /var; where we've got > > 3.9G log > 5.9G backups > 9.4G lib > > backups > ------- > > The backup seem under control at least; we're rotating them out and we > keep 10, and the size is pretty consistently 500mb: > > root at ask:/var/backups/pgsql_backups# ls -lh > total 5.9G > -rw-r--r-- 1 root root 599M Apr 5 00:03 askbotdb.sql.gz > -rw-r--r-- 1 root root 598M Apr 4 00:03 askbotdb.sql.gz.1 > ... > > We could reduce the backup rotations to just one if we like -- the > server is backed up nightly via bup, so at any point we can get > previous dumps from there. bup should de-duplicate everything, but > still, it's probably not necessary. > > The db directory was sitting at ~9gb > > root at ask:/var/lib/postgresql# du -hs > 8.9G . > > AFAICT, it seems like the autovacuum is running OK on the busy tables > > askbotdb=# select relname,last_vacuum, last_autovacuum, last_analyze, > last_autoanalyze from pg_stat_user_tables where last_autovacuum is not > NULL; > relname | last_vacuum | last_autovacuum | last_analyze | last_autoanalyze > ------------------+-------------+-------------------------------+-------------------------------+------------------------------- > django_session | | 2018-04-02 17:29:48.329915+00 | 2018-04-05 > 02:18:39.300126+00 | 2018-04-05 00:11:23.456602+00 > askbot_badgedata | | 2018-04-04 07:19:21.357461+00 | | 2018-04-04 > 07:18:16.201376+00 > askbot_thread | | 2018-04-04 16:24:45.124492+00 | | 2018-04-04 > 20:32:25.845164+00 > auth_message | | 2018-04-04 12:29:24.273651+00 | 2018-04-05 > 02:18:07.633781+00 | 2018-04-04 21:26:38.178586+00 > djkombu_message | | 2018-04-05 02:11:50.186631+00 | | 2018-04-05 > 02:14:45.22926+00 > > Out of interest I did run a manual > > su - postgres -c "vacuumdb --all --full --analyze" > > We dropped something > > root at ask:/var/lib/postgresql# du -hs > 8.9G . > (after) > 5.8G > > I installed pg_activity and watched for a while; nothing seemed to be > really stressing it. > > Ergo, I'm not sure if there's much to do in the db layers. > > logs > ---- > > This leaves the logs > > 1.1G jetty > 2.9G apache2 > > The jetty logs are cleaned regularly. I think they could be made more > quiet, but they seem to be bounded. > > Apache logs are rotated but never cleaned up. Surely logs from 2015 > aren't useful. Proposed [3] > > Random offline > -------------- > > [3] is an example of a user reporting the site was offline. Looking > at the logs, it seems that puppet found httpd not running at 07:14 and > restarted it: > > Apr 4 07:14:40 ask puppet-user[20737]: > (Scope(Class[Postgresql::Server])) Passing "version" to > postgresql::server is deprecated; please use postgresql::globals instead. > Apr 4 07:14:42 ask puppet-user[20737]: Compiled catalog for > ask.openstack.org in environment production in 4.59 seconds > Apr 4 07:14:44 ask crontab[20987]: (root) LIST (root) > Apr 4 07:14:49 ask puppet-user[20737]: > (/Stage[main]/Httpd/Service[httpd]/ensure) ensure changed 'stopped' to > 'running' > Apr 4 07:14:54 ask puppet-user[20737]: Finished catalog run in 10.43 > seconds > > Which first explains why when I looked, it seemed OK. 
Checking the > apache logs we have: > > [Wed Apr 04 07:01:08.144746 2018] [:error] [pid 12491:tid > 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=12491): > Exception occurred processing WSGI script > '/srv/askbot-site/config/django.wsgi'. > [Wed Apr 04 07:01:08.144870 2018] [:error] [pid 12491:tid > 140439253419776] [remote 176.233.126.142:43414] IOError: failed to > write data > ... more until ... > [Wed Apr 04 07:15:58.270180 2018] [:error] [pid 17060:tid > 140439253419776] [remote 176.233.126.142:43414] mod_wsgi (pid=17060): > Exception occurred processing WSGI script > '/srv/askbot-site/config/django.wsgi'. > [Wed Apr 04 07:15:58.270303 2018] [:error] [pid 17060:tid > 140439253419776] [remote 176.233.126.142:43414] IOError: failed to > write data > > and the restart logged > > [Wed Apr 04 07:14:48.912626 2018] [core:warn] [pid 21247:tid > 140439370192768] AH00098: pid file /var/run/apache2/apache2.pid > overwritten -- Unclean shutdown of previous Apache run? > [Wed Apr 04 07:14:48.913548 2018] [mpm_event:notice] [pid 21247:tid > 140439370192768] AH00489: Apache/2.4.7 (Ubuntu) OpenSSL/1.0.1f > mod_wsgi/3.4 Python/2.7.6 configured -- resuming normal operations > [Wed Apr 04 07:14:48.913583 2018] [core:notice] [pid 21247:tid > 140439370192768] AH00094: Command line: '/usr/sbin/apache2' > [Wed Apr 04 14:59:55.408060 2018] [mpm_event:error] [pid 21247:tid > 140439370192768] AH00485: scoreboard is full, not at MaxRequestWorkers > > This does not appear to be disk-space related; see the cacti graphs > for that period that show the disk is full-ish, but not full [5]. > > What caused the I/O errors? dmesg has nothing in it since 30/Mar. > kern.log is empty. > > Server > ------ > > Most importantly, this sever wants a Xenial upgrade. At the very > least that apache is known to handle the "scoreboard is full" issue > better. > > We should ensure that we use a bigger instance; it's using up some > swap > > postgres at ask:~$ free -h > total used free shared buffers cached > Mem: 3.9G 3.6G 269M 136M 11M 819M > -/+ buffers/cache: 2.8G 1.1G > Swap: 3.8G 259M 3.6G > > tl;dr > ----- > > I don't think there's anything run-away bad going on, but the server > is undersized and needs a system update. > > Since I've got this far with it, over the next few days I'll see where > we are with the puppet for a Xenial upgrade and see if we can't get a > migration underway. > > Thanks, > > -i > > [1] https://review.openstack.org/406670 > [2] https://review.openstack.org/558977 > [3] https://review.openstack.org/558985 > [4] > http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-04.log.html#t2018-04-04T07:11:22 > [5] > http://cacti.openstack.org/cacti/graph.php?action=zoom&local_graph_id=2547&rra_id=0&view_type=tree&graph_start=1522859103&graph_end=1522879839 > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Paul Belanger > April 4, 2018 at 5:30 PM > > We also have a 2nd issue where the ask.o.o server doesn't appear to be > large > enough any more to handle the traffic. A few times over the last few > weeks we've > had outages due to the HDD being full. > > We likely need to reduce the number of days we retain database backups > / http > logs or look to attach a volume to increase storage. 
> > Paul > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 4, 2018 at 4:26 PM > Hi everyone! > > We have a very robust and vibrant community at ask.openstack.org > . There are literally dozens of posts a > day. However, many of them don't receive knowledgeable answers. I'm > really worried about this becoming a vacuum where potential community > members get frustrated and don't realize how to get more involved with > the community. > > I'm looking for thoughts/ideas/feelings about this tool as well as > potential admin volunteers to help us manage the constant influx of > technical and not-so-technical questions around OpenStack. > > For those of you already contributing there, Thank You! For those > that are interested in becoming a moderator (instant AUC status!) or > have some additional ideas around fostering this community, please > respond. > > Looking forward to your thoughts :) > > Thanks! > Jimmy > irc: jamesmcarthur > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Apr 5 14:06:10 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 5 Apr 2018 10:06:10 -0400 Subject: [openstack-dev] Asking for ask.openstack.org In-Reply-To: <71b8b916-a227-1aaa-7954-987772a645ff@redhat.com> References: <5AC542F4.2090205@openstack.org> <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com> <71b8b916-a227-1aaa-7954-987772a645ff@redhat.com> Message-ID: <1c130383-6711-e579-9492-61c7656ca985@redhat.com> On 05/04/18 00:12, Ian Wienand wrote: > On 04/05/2018 10:23 AM, Zane Bitter wrote: >> On 04/04/18 17:26, Jimmy McArthur wrote: >> Here's the thing: email alerts. They're broken. > > This is the type of thing we can fix if we know about it ... I will > contact you off-list because the last email to what I presume is you > went to an address that isn't what you've sent from here, but it was > accepted by the remote end. Yeah, my mails get proxied through a fedora project address. I am getting them now though (since the SW update in January 2017 - and even before that I did get notifications if somebody @'d me). The issue is the content is not filtered by subscribed tags according to the preferences I have set, so they're useless for keeping up with my areas of interest. It's not just a mail delivery problem, and I guarantee it's not just me. It's a bug somewhere in StackExchange itself. cheers, Zane. From james.slagle at gmail.com Thu Apr 5 14:38:33 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 5 Apr 2018 10:38:33 -0400 Subject: [openstack-dev] [TripleO][ci][ceph] switching to config-download by default Message-ID: I've pushed up for review a set of patches to switch us over to using config-download by default: https://review.openstack.org/#/q/topic:bp/config-download-default I believe I've come up with the proper series of steps to switch things over. 
Let me know if you have any feedback or foresee any issues: First, we update remaining multinode jobs (https://review.openstack.org/558965) and ovb jobs (https://review.openstack.org/559067) that run against master to opt in to config-download. This will expose any issues with these jobs and config-download and let us fix those issues. We can then switch tripleoclient (https://review.openstack.org/558925) over to use config-download by default. Since this also requires a Heat environment, we must forcibly inject that environment via tripleoclient. Once the tripleoclient patch lands, we can update tripleo-heat-templates to use the mappings from config-download in the default resource registry (https://review.openstack.org/558927). We can then remove the forcibly injected environment from tripleoclient (https://review.openstack.org/558931) Finally, we can go back and update the multinode/ovb jobs on master to not be opt-in for config-download since it would now be the default (no patch yet). Now...for Ceph it will be slightly different: We have a patch that migrates from workflow_tasks to external_deploy_tasks (https://review.openstack.org/#/c/546966/) and that depends on a quickstart patch to update the Ceph scenarios to use config-download (https://review.openstack.org/#/c/548306/). These patches are co-dependencies and present a problem in that they both must land at the same time. To work around that, I think we need to update the tripleo-heat-templates patch to include both the existing workflow_tasks *and* the new external_deploy_tasks. Once we've proven the external_deploy_tasks work, we remove the depends-on and land the tripleo-heat-templates patch. It will pass the existing Ceph scenario jobs b/c they will be using workflow_tasks. We then land the quickstart patch to switch those scenario jobs to use external_deploy_tasks. Then we can circle back and remove workflow_tasks from the ceph templates in tripleo-heat-templates. I think this will allow everything to land and keep CI green along the way. Please let me know any feedback as we plan to try and push on this work over the next couple of weeks. -- -- James Slagle -- From james.slagle at gmail.com Thu Apr 5 14:42:59 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 5 Apr 2018 10:42:59 -0400 Subject: [openstack-dev] [TripleO][ci][ceph] switching to config-download by default In-Reply-To: References: Message-ID: On Thu, Apr 5, 2018 at 10:38 AM, James Slagle wrote: > I've pushed up for review a set of patches to switch us over to using > config-download by default: > > https://review.openstack.org/#/q/topic:bp/config-download-default > > I believe I've come up with the proper series of steps to switch > things over. Let me know if you have any feedback or foresee any > issues: > > First, we update remaining multinode jobs > (https://review.openstack.org/558965) and ovb jobs > (https://review.openstack.org/559067) that run against master to > opt in to config-download. This will expose any issues with these jobs > and config-download and let us fix those issues. > > We can then switch tripleoclient (https://review.openstack.org/558925) > over to use config-download by default. Since this also requires a > Heat environment, we must forcibly inject that environment via > tripleoclient. > > Once the tripleoclient patch lands, we can update > tripleo-heat-templates to use the mappings from config-download in the > default resource registry (https://review.openstack.org/558927).
I forgot to mention that at this point the UI would have to be working with config-download before we land that tripleo-heat-templates patch. Or, the UI could opt-in to the disable-config-download-environment.yaml that I'm providing with that patch. -- -- James Slagle -- From bjozsa at jinkit.com Thu Apr 5 15:30:19 2018 From: bjozsa at jinkit.com (Brandon Jozsa) Date: Thu, 5 Apr 2018 15:30:19 +0000 Subject: [openstack-dev] [kolla][tc][openstack-helm][tripleo]propose retire kolla-kubernetes project In-Reply-To: References: <78478c68-7ff0-7b05-59f0-2e27c2635a4f@openstack.org> <20180331134428.znkeo7rn5n5adqxo@yuggoth.org> <20180331193453.3dj72kqkbyc6gvzz@yuggoth.org> Message-ID: On April 4, 2018 at 4:21:58 PM, Michał Jastrzębski (inc007 at gmail.com) wrote: On 4 April 2018 at 14:45, Brandon Jozsa wrote: > I’ve been a part of the OpenStack-Helm project from the very beginning, and > there was a lot of early brainstorming on how we could collaborate and > contribute directly to Kolla-Kubernetes. In fact, this was the original > intent when we met with Kolla back in Barcelona. We didn’t like the idea of > fragmenting interested Kubernetes developers/operators in the > OpenStack-via-Kubernetes space. Whatever the project, we wanted all the > domain expertise concentrated on a single deployment effort. Even though > OSH/K-k8s couldn’t reach an agreement on how to handle configmaps (our > biggest difference from the start), there was a lot of early collaboration > between the project cores. Early K-k8s contributors may remember Halcyon, > which cores from both sides promoted for early development of > OpenStack-via-Kubernetes, regardless of the project. > > One of the requests from the initial OSH team (in Barcelona) was to formally > separate Kolla from Kolla-foo deployment projects, both at a project level > and from a core perspective. Why have the same cores giving +2’s to Kolla, > Kolla-Ansible, Kolla-Mesos (now dead) and Kolla-Kubernetes, who may not have > any interest in another given discipline? We wanted reviews to be timely, > and laser-focused, and we felt that this more atomic approach would benefit > Kolla in the end. But unfortunately there was heavy resistance with limited > yet very influential cores. I honestly think pushback was also because it > would mean that any Kolla sub-projects would be subject to re-acceptance as > big tent projects. Limited, but very influential cores sounds like bad community, and as it happens I was leading this community at that time, so I feel I should comment. We would love to increase number of cores (raise a limit) of images, but that comes with a cost. Cost being that person who would like to become a core would need to contribute to project in question and review other people contributions. Proper way to address this problem would be just that - contributing to Kolla and reviewing code. If I failed to notice contributions from someone who did that a lot (I hope I didn't), I'm sorry. This is best and only way to solve problem in question. I think you did the best you could, Michal. As I understand it there are essentially three active projects under Kolla today; Kolla, Kolla-Ansible, Kolla-Kubernetes (and others that have been abandoned or dead), and only Kolla shows up under the project navigator. I assume this means they are all still under one project umbrella? I think this is a bit of a stretch for a single-project core team, especially since there are fundamental differences between Ansible and Kubernetes. 
So my comment was far less about you or anyone personally as a PTL or core, but really more about the “laser-focused” discipline of the group as a whole. Kolla is the only project I am aware of that has this catch-all mission allowing it to be any type of deployment that consumes Kolla as a base, and it leverages the same resources. In fact, any other container-based OpenStack projects have been met with a bit of a strange resistance. See: https://review.openstack.org/#/c/449224/ > > There were also countless discussions about the preservation of the Kolla > API, or Ansible + Jinja portions of Kolla-Ansible. It became clear to us > that Kubernetes wasn’t going to be the first class citizen for the > deployment model in Kolla-Kubernetes, forcing operators to troubleshoot > between OpenStack, Kolla (container builds), Ansible, Kubernetes, and Helm. > This is apparent still today. And while I understand the hesitation to > change Kolla/Kolla-Ansible, I think this code-debt has somewhat contributed > to sustainability of Kolla-Kubernetes. Somewhat to the point of tension, I > very much agree with Thierry’s comments earlier. How wasn't k8s a first-class citizen? I don't understand. All processes were the same, and time at the PTG was generous compared to ansible etc. More people use Ansible due to its maturity, so it's obvious it's going to have better testing etc, but again, solved by contributions. I’m not talking about this at a project level. I think Kolla has done a wonderful job in that respect. All of my comments above are about the mixed use of technologies leveraged to drive the project. Compare how Kubernetes configmaps are generated for each of the projects. Then ask yourself what drove that design? Was it simplicity or adherence to previous debt/models? Technical details like these need to be called out (good/bad/indifferent), and planned for accordingly in the event that the projects do merge at some point. I think this is the most confusing part for new users, because both communities have been asked countless times “what are the differences between x and y”. I think the suggestion of combining the projects would really benefit OpenStack operators/users.
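To make the configmap point concrete: in a Helm-native approach, service configuration is rendered from the chart's values into a ConfigMap by a template roughly like the one below (a heavily simplified sketch, not the actual OpenStack-Helm helpers, which go through helm-toolkit):

    # templates/configmap-etc.yaml (simplified sketch)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: keystone-etc
    data:
      keystone.conf: |
    {{ .Values.conf.keystone | indent 4 }}

A Jinja2-driven approach, by contrast, renders the same file outside the chart and loads the result in. Neither is wrong, but it is exactly this kind of design difference that should be written down before any merge is planned.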
Releases and release planning should be > addressed, as users/vendors are going to want to be closer to OpenStack > release dates (recent versions of OpenStack, Helm and Kubernetes). Clear and > open roadmaps, with potential use of community-led planning tools. Open > elections for PTL. Finally, the OSH team may still be be interested in > diversifying it’s core-base. Matt M. would have to address this. I know that > I was actively seeking cores when I was initially PTL, and > truthfully…there’s nobody nicer or easier to work with than Matt. He’s an > awesome PTL, and any project would be fortunate to have him. All these > things could all be improved on, but it requires a diverse base with a lot > of great ideas. Kolla has one of most diverse core team in OpenStack. As I said, all it takes is valuable reviews to become core. > That said, I am in favor of consolidation…if it makes sense and if there's a > strong argument for it. We just need to think what’s best for the OpenStack > community as a whole, and put away the positions of the individual projects > for a moment. To me, that makes things pretty clear, regardless of where the > commits are going. And with the +1’s, I think we’re hearing you. Now we just > have to plan it out and take action (on both sides). > > Brandon > > > On April 2, 2018 at 11:14:01 AM, Martin André (m.andre at redhat.com) wrote: > > On Mon, Apr 2, 2018 at 4:38 PM, Steven Dake (stdake) > wrote: >> >> >> >> On April 2, 2018 at 6:00:15 AM, Martin André (m.andre at redhat.com) wrote: >> >> On Sun, Apr 1, 2018 at 12:07 AM, Steven Dake (stdake) >> wrote: >>> My viewpoint is as all deployments projects are already on an equal >>> footing >>> when using Kolla containers. >> >> While I acknowledge Kolla reviewers are doing a very good job at >> treating all incoming reviews equally, we can't realistically state >> these projects stand on an equal footing today. >> >> >> At the very least we need to have kolla changes _gating_ on TripleO >> and OSH jobs before we can say so. Of course, I'm not saying other >> kolla devs are opposed to adding more CI jobs to kolla, I'm pretty >> sure they would welcome the changes if someone volunteers for it, but >> right now when I'm approving a kolla patches I can only say with >> confidence that it does not break kolla-ansible. In that sense, >> kolla_ansible is special. >> >> Martin, >> >> Personally I think all of OpenStack projects that have a dependency or >> inverse dependency should cross-gate. For example, Nova should gate on >> kolla-ansible, and at one point I think they agreed to this, if we >> submitted >> gate work to do so. We never did that. >> >> Nobody from TripleO or OSH has submitted gates for Kolla. Submit them and >> they will follow the standard mechanism used in OpenStack >> experimental->non-voting->voting (if people are on-call to resolve >> problems). I don't think gating is relevant to equal footing. TripleO for >> the moment has chosen to gate on their own image builds, which is fine. If >> the gating should be enhanced, write the gates :) >> >> Here is a simple definition from the internet: >> >> "with the same rights and conditions as someone you are competing with" >> >> Does that mean if you want to split the kolla repo into 40+ repos for each >> separate project, the core team will do that? No. Does that mean if there >> is a reasonable addition to the API the patch would merge? Yes. >> >> Thats right, deployment tools compete, but they also cooperate and >> collaborate. 
The containers (atleast from my perspective) are an area >> where >> Kolla has chosen to collaborate. FWIW I also think we have chosen to >> collobrate a bit in areas we compete (the deployment tooling itself). Its >> a >> very complex topic. Splitting the governance and PTLs doesn't change the >> makeup of the core review team who ultimately makes the decision about >> what >> is reasonable. > > Collaboration is good, there is no question about it. > I suppose the question we need to answer is "would splitting kolla and > kolla-ansible further benefit kolla and the projects that consume > it?". I believe if you look at it from this angle maybe you'll find > areas that are neglected because they are lower priority for > kolla-ansible developers. > >>> I would invite the TripleO team who did integration with the Kolla API to >>> provide their thoughts. >> >> The Kolla API is stable and incredibly useful... it's also >> undocumented. I have a stub for a documentation change that's been >> collecting dust on my hard drive for month, maybe it's time I brush it >> >> Most of Kolla unfortunately is undocumented. The API is simple and >> straightforward enough that TripleO, OSH, and several proprietary vendors >> (the ones Jeffrey mentioned) have managed to implement deployment tooling >> that consume the API. Documentation for any part of Kolla would be highly >> valued - IMO it is the Kolla project's biggest weakness. >> >> >> up and finally submit it. Today unless you're a kolla developer >> yourself, it's difficult to understand how to use the API, not the >> most user friendly. >> >> Another thing that comes for free with Kolla, the extend_start.sh >> scripts are for the most part only useful in the context of >> kolla_ansible. For instance, hardcoding path for log dirs to >> /var/log/kolla and changing groups to 'kolla'. >> In TripleO, we've chosen to not depend on the extend_start.sh scripts >> whenever possible for this exact reason. >> >> I don't disagree. I was never fond of extend_start, and thought any >> special >> operations it provided belong in the API itself. This is why there are >> mkdir operations and chmod/chown -R operations in the API. The JSON blob >> handed to the API during runtime is where the API begins and ends. The >> implementation (what set_cfg.py does with start.sh and extend_start.sh) >> are >> not part of the API but part of the API implementation. > > One could argue that the environment variables we pass to the > containers to control what extend_start.sh does are also part of the > API. That's not my point. There is a lot of cruft in these scripts > that remain from the days where kolla-ansible was the only consumer of > kolla images. > >> I don't think I said anywhere the API is perfectly implemented. I'm not >> sure I've ever seen this mythical perfection thing in an API anyway :) >> >> Patches are welcome to improve the API to make it more general, as long as >> they maintain backward compatibility. >> >> >> >> The other critical kolla feature we're making extensive use of in >> TripleO is the ability to customize the image in any imaginable way >> thanks to the template override mechanism. There would be no >> containerized deployments via TripleO without it. >> >> >> We knew people would find creative ways to use the plugin templating >> technology, and help drive adoption of Kolla as a standard... >> >> Kolla is a great framework for building container images for OpenStack >> services any project can consume. 
We could do a better job at >> advertising it. I guess bringing kolla and kolla-kubernetes under >> separate governance (even it the team remains mostly the same) is one >> way to enforce the independence of kolla-the-images project and >> recognize people may be interested in the images but not the >> deployment tools. >> >> One last though. Would you imagine a kolla PTL who is not heavily >> invested in kolla_ansible? >> >> >> Do you mean to imply a conflict of interest? I guess I don't understand >> the >> statement. Would you clarify please? > > All I'm saying is that we can't truly claim we've fully decoupled > Kolla and Kolla-ansible until we're ready to accept someone who is not > a dedicated contributor to kolla-ansible as kolla PTL. Until then, > some might rightfully say kolla-ansible is driving the kolla project. > It's OK, maybe as the kolla community that's what we want, but we > can't legitimately say all consumers are on an equal footing. > > Martin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From prometheanfire at gentoo.org Thu Apr 5 15:47:37 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 5 Apr 2018 10:47:37 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet Message-ID: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> eventlet-0.22.1 has been out for a while now, we should try and use it. Going to be fun times. I have a review projects can depend upon if they wish to test. https://review.openstack.org/533021 -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Thu Apr 5 16:24:34 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 5 Apr 2018 09:24:34 -0700 Subject: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince wrote: Much of the work on this is already there. We've been using this stuff > for over a year to dev/test the containerization efforts for a long > time now (and thanks for your help with this effort). The problem I > think is how it is all packaged. While you can use it today it > involves some tricks (docker in docker), or requires you to use an > extra VM to minimize the installation footprint on your laptop. > > Much of the remaining work here is really just about packaging and > technical debt. 
If we put tripleoclient and heat-monolith into a > container that solves much of the requirements problems and > essentially gives you a container which can transform Heat templates > to Ansible. From the ansible side we need to do a bit more work to > mimimize our dependencies (i.e. heat hooks). Using a virtual-env would > be one option for developers if we could make that work. I lighter set > of RPM packages would be another way to do it. Perhaps both... > Then a smaller wrapper around these things (which I personally would > like to name) to make it all really tight. So if I summarize the discussion: - A lot of positive feedback about the idea and many use cases, which is great. - Support for non-containerized services is not required, as long as we provide a way to update containers with under-review patches for developers. - We'll probably want to breakdown the "openstack undercloud deploy" process into pieces * start an ephemeral Heat container * create the Heat stack passing all requested -e's * run config-download and save the output And then remove undercloud specific logic, so we can provide a generic way to create the config-download playbooks. This generic way would be consumed by the undercloud deploy commands but also by the new all-in-one wrapper. - Speaking of the wrapper, we will probably have a new one. Several names were proposed: * openstack tripleo deploy * openstack talon deploy * openstack elf deploy - The wrapper would work with deployed-server, so we would noop Neutron networks and use fixed IPs. - Investigate the packaging work: containerize tripleoclient and dependencies, see how we can containerized Ansible + dependencies (and eventually reduce them at strict minimum). Let me know if I missed something important, hopefully we can move things forward during this cycle. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Apr 5 16:35:23 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 5 Apr 2018 11:35:23 -0500 Subject: [openstack-dev] [release] Release countdown for week R-20, April 9-13 Message-ID: <20180405163523.GA28172@sm-xps> Welcome to our regular release countdown email. Development Focus ----------------- Team focus should be on spec approval and implementation for priority features. Please be aware of the project specific deadlines that vary slightly from the overall release schedule [1]. Teams should now be making progress towards the cycle goals [2]. Please prioritize reviews for these appropriately. [1] https://releases.openstack.org/rocky/schedule.html [2] https://governance.openstack.org/tc/goals/rocky/index.html General Information ------------------- We are already coming up on the first Rocky milestone on Thursday, April 19. This is the last week for projects to switch release models if they are considering it. Stop by the #openstack-release channel if you have any questions about how this works. Another quick reminder - if your project has a library that is still a 0.x release, start thinking about when it will be appropriate to do a 1.0 version. The version number does signal the state, real or perceived, of the library, so we strongly encourage going to a full major version once things are in a good and usable state. Finally, we would love to have all the liaisons attend the release team meeting every Friday [3]. Anyone is welcome. 
[3] http://eavesdrop.openstack.org/#Release_Team_Meeting Upcoming Deadlines & Dates -------------------------- Rocky-1 milestone: April 19 (R-19 week) Forum at OpenStack Summit in Vancouver: May 21-24 -- Sean McGinnis (smcginnis) From whayutin at redhat.com Thu Apr 5 16:55:36 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 05 Apr 2018 16:55:36 +0000 Subject: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Thu, 5 Apr 2018 at 12:25 Emilien Macchi wrote: > On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince wrote: > > Much of the work on this is already there. We've been using this stuff >> for over a year to dev/test the containerization efforts for a long >> time now (and thanks for your help with this effort). The problem I >> think is how it is all packaged. While you can use it today it >> involves some tricks (docker in docker), or requires you to use an >> extra VM to minimize the installation footprint on your laptop. >> >> Much of the remaining work here is really just about packaging and >> technical debt. If we put tripleoclient and heat-monolith into a >> container that solves much of the requirements problems and >> essentially gives you a container which can transform Heat templates >> to Ansible. From the ansible side we need to do a bit more work to >> mimimize our dependencies (i.e. heat hooks). Using a virtual-env would >> be one option for developers if we could make that work. I lighter set >> of RPM packages would be another way to do it. Perhaps both... >> Then a smaller wrapper around these things (which I personally would >> like to name) to make it all really tight. > > > So if I summarize the discussion: > > - A lot of positive feedback about the idea and many use cases, which is > great. > > - Support for non-containerized services is not required, as long as we > provide a way to update containers with under-review patches for developers. > Hrm.. I was just speaking to Alfredo about this. We may need to have a better understanding of the various ecosystems where TripleO is in play here to have a fully informed decision. By ecosystem I'm referring to RDO, centos, and upstream and the containers used in deployments. I suspect a non-containerized deployment may still be required, but looking for the packaging team to weigh in. > > - We'll probably want to breakdown the "openstack undercloud deploy" > process into pieces > * start an ephemeral Heat container > * create the Heat stack passing all requested -e's > * run config-download and save the output > > And then remove undercloud specific logic, so we can provide a generic > way to create the config-download playbooks. > This generic way would be consumed by the undercloud deploy commands but > also by the new all-in-one wrapper. > > - Speaking of the wrapper, we will probably have a new one. Several names > were proposed: > * openstack tripleo deploy > * openstack talon deploy > * openstack elf deploy > > - The wrapper would work with deployed-server, so we would noop Neutron > networks and use fixed IPs. > > - Investigate the packaging work: containerize tripleoclient and > dependencies, see how we can containerized Ansible + dependencies (and > eventually reduce them at strict minimum). > > Let me know if I missed something important, hopefully we can move things > forward during this cycle. 
> -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dprince at redhat.com Thu Apr 5 17:02:33 2018 From: dprince at redhat.com (Dan Prince) Date: Thu, 5 Apr 2018 13:02:33 -0400 Subject: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: On Thu, Apr 5, 2018 at 12:24 PM, Emilien Macchi wrote: > On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince wrote: > >> Much of the work on this is already there. We've been using this stuff >> for over a year to dev/test the containerization efforts for a long >> time now (and thanks for your help with this effort). The problem I >> think is how it is all packaged. While you can use it today it >> involves some tricks (docker in docker), or requires you to use an >> extra VM to minimize the installation footprint on your laptop. >> >> Much of the remaining work here is really just about packaging and >> technical debt. If we put tripleoclient and heat-monolith into a >> container that solves much of the requirements problems and >> essentially gives you a container which can transform Heat templates >> to Ansible. From the ansible side we need to do a bit more work to >> mimimize our dependencies (i.e. heat hooks). Using a virtual-env would >> be one option for developers if we could make that work. I lighter set >> of RPM packages would be another way to do it. Perhaps both... >> Then a smaller wrapper around these things (which I personally would >> like to name) to make it all really tight. > > > So if I summarize the discussion: > > - A lot of positive feedback about the idea and many use cases, which is > great. > > - Support for non-containerized services is not required, as long as we > provide a way to update containers with under-review patches for developers. I think we still desire some (basic no upgrades) support for non-containerized baremetal at this time. > > - We'll probably want to breakdown the "openstack undercloud deploy" process > into pieces > * start an ephemeral Heat container It already supports this if use don't use the --heat-native option, also you can customize the container used for heat via --heat-container-image. So we already have this! But rather than do this I personally prefer the container to have python-tripleoclient and heat-monolith in it. That way everything everything is in there to generate Ansible templates. If you just use Heat you are missing some of the pieces that you'd still have to install elsewhere on your host. Having them all be in one scoped container which generates Ansible playbooks from Heat templates is better IMO. > * create the Heat stack passing all requested -e's > * run config-download and save the output > > And then remove undercloud specific logic, so we can provide a generic way > to create the config-download playbooks. Yes. Lets remove some of the undercloud logic. But do note that most of the undercloud specific login is now in undercloud_config.py anyway so this is mostly already on its way. > This generic way would be consumed by the undercloud deploy commands but > also by the new all-in-one wrapper. > > - Speaking of the wrapper, we will probably have a new one. 
Several names > were proposed: > * openstack tripleo deploy > * openstack talon deploy > * openstack elf deploy The wrapper could be just another set of playbooks. That we give a name too... and perhaps put a CLI in front of as well. > > - The wrapper would work with deployed-server, so we would noop Neutron > networks and use fixed IPs. This would be configurable I think depending on which templates were used. Noop as a default for developer deployments but do note that some services like Neutron aren't going to work unless you have some basic network setup. Noop is useful if you prefer to do this manually, but our os-net-config templates are quite useful to automate things. > > - Investigate the packaging work: containerize tripleoclient and > dependencies, see how we can containerized Ansible + dependencies (and > eventually reduce them at strict minimum). > > Let me know if I missed something important, hopefully we can move things > forward during this cycle. > -- > Emilien Macchi From mfedosin at redhat.com Thu Apr 5 17:15:16 2018 From: mfedosin at redhat.com (Mikhail Fedosin) Date: Thu, 5 Apr 2018 19:15:16 +0200 Subject: [openstack-dev] [k8s] OpenStack and Containers White Paper In-Reply-To: References: <47CB0DA7-9332-4DE1-B9AF-B14C663ACE41@openstack.org> Message-ID: Hello! I'm working on Keystone authentication and authorization and other related parts of openstack cloud provider. I will be happy to help you! Best, Mike On Tue, Apr 3, 2018 at 8:38 AM, Jaesuk Ahn wrote: > Hi Chris, > > I can probably help on proof-reading and making some contents on the > openstack-helm part. > As Pete pointed out, LOCI and OpenStack-Helm (OSH) are agnostic to each > other. OSH is working well with both kolla image and loci image. > > IMHO, following categorization might be better to capture the nature of > these project. Just suggestion. > > * OpenStack Containerization tools > * Kolla > * Loci > * Container-based deployment tools for installing and managing OpenStack > * Kolla-Ansible > * OpenStack Helm > > > On Tue, Apr 3, 2018 at 10:08 AM Pete Birley wrote: > >> Chris, >> >> I'd be happy to help out where I can, mostly related to OSH and LOCI. One >> thing we should make clear is that both of these projects are agnostic to >> each other: we gate OSH with both LOCI and kolla images, and conversely >> LOCI has uses far beyond just OSH. >> >> Pete >> >> On Monday, April 2, 2018, Chris Hoge wrote: >> >>> Hi everyone, >>> >>> In advance of the Vancouver Summit, I'm leading an effort to publish a >>> community produced white-paper on OpenStack and container integrations. >>> This has come out of a need to develop materials, both short and long >>> form, to help explain how OpenStack interacts with container >>> technologies across the entire stack, from infrastructure to >>> application. The rough outline of the white-paper proposes an entire >>> technology stack and discuss deployment and usage strategies at every >>> level. The white-paper will focus on existing technologies, and how they >>> are being used in production today across our community. Beginning at >>> the hardware layer, we have the following outline (which may be inverted >>> for clarity): >>> >>> * OpenStack Ironic for managing bare metal deployments. >>> * Container-based deployment tools for installing and managing OpenStack >>> * Kolla containers and Kolla-Ansible >>> * Loci containers and OpenStack Helm >>> * OpenStack-hosted APIs for managing container application >>> infrastructure. 
>>> * Magnum >>> * Zun >>> * Community-driven integration of Kubernetes and OpenStack with K8s >>> Cloud Provider OpenStack >>> * Projects that can stand alone in integrations with Kubernetes and >>> other cloud technology >>> * Cinder >>> * Neutron with Kuryr and Calico integrations >>> * Keystone authentication and authorization >>> >>> I'm looking for volunteers to help produce the content for these sections >>> (and any others we may uncover to be useful) for presenting a complete >>> picture of OpenStack and container integrations. If you're involved with >>> one of these projects, or are using any of these tools in >>> production, it would be fantastic to get your input in producing the >>> appropriate section. We especially want real-world deployments to use as >>> small case studies to inform the work. >>> >>> During the process of creating the white-paper, we will be working with a >>> technical writer and the Foundation design team to produce a document >>> that >>> is consistent in voice, has accurate and informative graphics that >>> can be used to illustrate the major points and themes of the white-paper, >>> and that can be used as stand-alone media for conferences and >>> presentations. >>> >>> Over the next week, I'll be reaching out to individuals and inviting them >>> to collaborate. This is also a general invitation to collaborate, and if >>> you'd like to help out with a section please reach out to me here, on the >>> K8s #sig-openstack Slack channel, or at my work e-mail, >>> chris at openstack.org. >>> Starting next week, we'll work out a schedule for producing and >>> delivering >>> the white-paper by the Vancouver Summit. We are very short on time, so >>> we will have to be focused to quickly produce high-quality content. >>> >>> Thanks in advance to everyone who participates in writing this >>> document. I'm looking forward to working with you in the coming weeks to >>> publish this important resource for clearly describing the multitude of >>> interactions between these complementary technologies. >>> >>> -Chris Hoge >>> K8s-SIG-OpenStack/OpenStack-SIG-K8s Co-Lead >>> >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >>> unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> -- >> >> [image: Port.direct] >> >> Pete Birley / Director >> pete at port.direct / +447446862551 <+44%207446%20862551> >> >> *PORT.*DIRECT >> United Kingdom >> https://port.direct >> >> This e-mail message may contain confidential or legally privileged >> information and is intended only for the use of the intended recipient(s). >> Any unauthorized disclosure, dissemination, distribution, copying or the >> taking of any action in reliance on the information herein is prohibited. >> E-mails are not secure and cannot be guaranteed to be error free as they >> can be intercepted, amended, or contain viruses. Anyone who communicates >> with us by e-mail is deemed to have accepted these risks. Port.direct is >> not responsible for errors or omissions in this message and denies any >> responsibility for any damage arising from the use of e-mail. Any opinion >> and other statement contained in this message and any attachment are solely >> those of the author and do not necessarily represent those of the company. 
>> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: >> unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > -- > > Jaesuk Ahn, Team Lead > Virtualization SW Lab, SW R&D Center > > SK Telecom > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mjturek at linux.vnet.ibm.com Thu Apr 5 17:36:14 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Thu, 5 Apr 2018 13:36:14 -0400 Subject: [openstack-dev] [ironic] Bug Day April 12th poll (was originally April 6th) Message-ID: <13e9712b-9928-ac18-e189-dec410e09331@linux.vnet.ibm.com> Hey everyone, At this week's ironic IRC meeting we decided to push the bug day to April 12th. I updated the poll name to indicate this and it unfortunately wiped the results of the poll. If you can recast your vote here it would be appreciated https://doodle.com/poll/xa999rx653pb58t6 It's looking like a 2 hour window would be the right length, but if you have any opinions on that please respond here. Thanks! Mike Turek From msm at redhat.com Thu Apr 5 18:08:05 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 5 Apr 2018 14:08:05 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was quite lively with a good discussion about versions and microversions across OpenStack and their usage within the API schema world. We began with a review of outstanding work: elmiko is continuing to work on an update to the microversion history doc[7], and edleafe has reached out[8] to the SDK community to gauge interest in a session for the upcoming Vancouver forum. dtanstur has also also made an update[9] to the HTTP guideline layout that is currently under review. The change was already largely approved; this just improves the appearance of the refactored guidelines. The API-SIG has not confirmed any sessions for the Vancouver forum yet, but we continue to reach out[8] and would ideally like to host a session including the API, SDK and user community groups. The topics and schedule for this session will be highly influenced by input from the larger OpenStack community. If you are interested in seeing this event happen, please add your thoughts to the mailing sent out by edleafe[8]. The next chunk of the meeting was spent discussing the OpenAPI proposal[10] that elmiko has created. The discussion went well and several new ideas were exposed. Additionally, a deep dive into version/microversion usage across the OpenStack ecosystem was exposed with several points being raised about how microversions are used and how they are perceived by end users. There is no firm output from this discussion yet, but elmiko is going to contact interested parties and continue to update the proposal. mordred informed the SIG that he has started working on discover/version things in keystoneauth and should be returning to the related specs within the next few days. and there was much rejoicing. \o/ On the topic of reviews, the SIG has identified one[11] that is ready for freeze this week. 
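For readers following the microversion discussion above: in practice a client opts in to newer behaviour per request with a header, and discovers the supported range from the service's version document. The endpoint, token and version number below are placeholders, but the header name and the discovery mechanism are the ones OpenStack services such as nova really use:

    # Ask the compute API for microversion 2.53 behaviour on this request
    curl -s http://controller:8774/v2.1/servers \
         -H "X-Auth-Token: $TOKEN" \
         -H "OpenStack-API-Version: compute 2.53"

    # Version discovery: the root document advertises "min_version" and
    # "max_version" for the endpoint
    curl -s http://controller:8774/ | python -m json.tool

Requests without the header get the service's default (oldest) behaviour, which is what makes the scheme backwards compatible, and also what makes it invisible to users who never set it, one of the perception problems raised in the discussion.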
Lastly, the SIG reviewed a newly opened bug[12] asking to add a "severity" field to the error structure. After a short discussion, the group agreed that this was not something that should be accepted and have marked it as "won't fix". For more details please see the comments on the bug review. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Add guideline on exposing microversions in SDKs https://review.openstack.org/#/c/532814 # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Update the errors guidance to use service-type for code https://review.openstack.org/#/c/554921/ # Guidelines Currently Under Review [3] * Break up the HTTP guideline into smaller documents https://review.openstack.org/#/c/554234/ * Add guidance on needing cache-control headers https://review.openstack.org/550468 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://review.openstack.org/444892 [8] http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000353.html [9] https://review.openstack.org/#/c/554234/ [10] https://gist.github.com/elmiko/7d97fef591887aa0c594c3dafad83442 [11] https://review.openstack.org/#/c/554921/ [12] https://bugs.launchpad.net/openstack-api-wg/+bug/1761475 Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From whayutin at redhat.com Thu Apr 5 18:55:06 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 05 Apr 2018 18:55:06 +0000 Subject: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs Message-ID: FYI... This is news to me so thanks to Emilien for pointing it out [1]. There are official tags for tripleo launchpad bugs. 
Personally, I like what I've seen recently with some extra tags as they could be helpful in finding the history of particular issues. So hypothetically would it be "wrong" to create an official tag for each featureset config number upstream. I ask because that is adding a lot of tags but also serves as a good test case for what is good/bad use of tags. Thanks [1] https://bugs.launchpad.net/tripleo/+manage-official-tags -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Thu Apr 5 19:04:27 2018 From: aschultz at redhat.com (Alex Schultz) Date: Thu, 5 Apr 2018 13:04:27 -0600 Subject: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs In-Reply-To: References: Message-ID: On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin wrote: > FYI... > > This is news to me so thanks to Emilien for pointing it out [1]. > There are official tags for tripleo launchpad bugs. Personally, I like what > I've seen recently with some extra tags as they could be helpful in finding > the history of particular issues. > So hypothetically would it be "wrong" to create an official tag for each > featureset config number upstream. I ask because that is adding a lot of > tags but also serves as a good test case for what is good/bad use of tags. > We list official tags over in the specs repo[0]. That being said as we investigate switching over to storyboard, we'll probably want to revisit tags as they will have to be used more to replace some of the functionality we had with launchpad (e.g. milestones). You could always add the tags without being an official tag. I'm not sure I would really want all the featuresets as tags. I'd rather see us actually figure out what component is actually failing than relying on a featureset (and the Rosetta stone for decoding featuresets to functionality[1]). Thanks, -Alex [0] http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30 [1] https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21 > Thanks > > [1] https://bugs.launchpad.net/tripleo/+manage-official-tags > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From gr at ham.ie Thu Apr 5 19:11:04 2018 From: gr at ham.ie (Graham Hayes) Date: Thu, 5 Apr 2018 20:11:04 +0100 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> Message-ID: On 05/04/18 16:47, Matthew Thode wrote: > eventlet-0.22.1 has been out for a while now, we should try and use it. > Going to be fun times. > > I have a review projects can depend upon if they wish to test. > https://review.openstack.org/533021 It looks like we may have an issue with oslo.service - https://review.openstack.org/#/c/559144/ is failing gates. Also - what is the dance for this to get merged? It doesn't look like we can merge this while oslo.service has the old requirement restrictions. 
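For anyone not steeped in the requirements machinery, the mechanics being discussed look roughly like this; the version specifiers below are illustrative rather than copied from the actual files. A cap in a project's requirements.txt prevents pip from ever selecting a newer release, while the version actually tested in the gate is pinned centrally in upper-constraints.txt, which is where the bump to 0.22.1 eventually lands:

    # requirements.txt, capped (old style): 0.22.x can never be selected
    eventlet!=0.18.3,>=0.18.2,<0.21.0  # MIT

    # requirements.txt, uncapped: only known-bad releases are excluded
    eventlet!=0.18.3,>=0.18.2  # MIT

    # upper-constraints.txt: the single place the tested version is raised
    eventlet===0.22.1

This is why the steps described in the reply below have to happen in order: every consuming project must drop its cap before the central constraint can be raised.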
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From prometheanfire at gentoo.org Thu Apr 5 19:26:19 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Thu, 5 Apr 2018 14:26:19 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> Message-ID: <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> On 18-04-05 20:11:04, Graham Hayes wrote: > On 05/04/18 16:47, Matthew Thode wrote: > > eventlet-0.22.1 has been out for a while now, we should try and use it. > > Going to be fun times. > > > > I have a review projects can depend upon if they wish to test. > > https://review.openstack.org/533021 > > It looks like we may have an issue with oslo.service - > https://review.openstack.org/#/c/559144/ is failing gates. > > Also - what is the dance for this to get merged? It doesn't look like we > can merge this while oslo.service has the old requirement restrictions. > The dance is as follows. 0. provide review for projects to test new eventlet version projects using eventlet should make backwards compat code changes at this time. 1. uncap requirements for eventlet (do not raise upper constraint) step 0 does not have to be done for this to occur, but it'd be nice. 2. make sure all projects in projects.txt uncap eventlet (this is harder now that we have per-project requirements) 3. raise the constraint for eventlet, optionally also raise the global requirement for it as well -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From m.andre at redhat.com Thu Apr 5 19:28:31 2018 From: m.andre at redhat.com (=?UTF-8?Q?Martin_Andr=C3=A9?=) Date: Thu, 5 Apr 2018 21:28:31 +0200 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images In-Reply-To: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> Message-ID: On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke wrote: > Hi all, > > This mail is to serve as a follow on to the discussion during yesterday's > team meeting[4], which was regarding the desire to move start scripts out of > the kolla images [0]. There's a few factors at play, and it may well be best > left to discuss in person at the summit in May, but hopefully we can get at > least some of this hashed out before then. > > I'll start by summarising why I think this is a good idea, and then attempt > to address some of the concerns that have come up since. > > First off, to be frank, this is effort is driven by wanting to add support > for loci images[1] in kolla-ansible. I think it would be unreasonable for > anyone to argue this is a bad objective to have, loci images have very > obvious benefits over what we have in Kolla today. I'm not looking to drop > support for Kolla images at all, I simply want to continue decoupling things > to the point where operators can pick and choose what works best for them. > Stemming from this, I think moving these scripts out of the images provides > a clear benefit to our consumers, both users of kolla and third parties such > as triple-o. Let me explain why. 
It's still very obscure to me how removing the scripts from kolla images will benefit consumers. If the reason is that you want to re-use them in other, non-kolla images, I believe we should package the scripts. I've left some comments in your spec review. > Normally, to run a docker image, a user will do 'docker run > helloworld:latest'. In any non trivial application, config needs to be > provided. In the vast majority of cases this is either provided via a bind > mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via > environment variables (docker run --env HELLO=paul helloworld:latest). This > is all bog standard stuff, something anyone who's spent an hour learning > docker can understand. > > Now, lets say someone wants to try out OpenStack with Docker, and they look > at Kolla. First off they have to look at something called set_configs.py[2] > - over 400 lines of Python. Next they need to understand what that script > consumes, config.json [3]. The only reference for config.json is the files > that live in kolla-ansible, a mass of jinja and assumptions about how the > service will be run. Next, they need to figure out how to bind mount the > config files and config.json into the container in a way that can be > consumed by set_configs.py (which by the way, requires the base kolla image > in all cases). This is only for the config. For the service start up > command, this need to also be provided in config.json. This command is then > parsed out and written to a location in the image, which is consumed by a > series of start/extend start shell scripts. Kolla is *unique* in this > regard, no other project in the container world is interfacing with images > in this way. Being a snowflake in this regard is not a good thing. I'm still > waiting to hear from a real world operator who would prefer to spend time > learning the above to doing: You're pointing a very real documentation issue. I've mentioned in the other kolla thread that I have a stub for the kolla API documentation. I'll push a patch for what I have and we can iterate on that. > docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint > /usr/bin/keystone [args] > > This is the Docker API, it's easy to understand and pretty much the standard > at this point. Sure, using the docker API works for simpler cases, not too surprisingly once you start doing more funky things with your containers you're quickly reach the docker API limitations. That's when the kolla API comes in handy. See for example this recent patch https://review.openstack.org/#/c/556673/ where we needed to change some file permission to the uid/gid of the user inside the container. The first iteration basically used the docker API and started an additional container to fix the permissions: docker run -v /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw \ neutron_image \ /bin/bash -c 'chown neutron:neutron /etc/pki/tls/certs/neutron.crt; chown neutron:neutron /etc/pki/tls/private/neutron.key' You'll agree this is not the most obvious. And it had a nasty side effect that is changes the permissions of the files _on the host_. While using kolla API we could simply add to our config.json: - path: /etc/pki/tls/certs/neutron.crt owner: neutron:neutron - path: /etc/pki/tls/private/neutron.key owner: neutron:neutron > The other argument is that this removes the possibility for immutable > infrastructure. 
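To make the comparison concrete, here is a representative config.json of the kind kolla-ansible templates out for a service. The command, file names and permissions are illustrative, but the three top-level keys ("command", "config_files" and "permissions") are the ones set_configs.py actually understands:

    {
        "command": "/usr/bin/neutron-server --config-file /etc/neutron/neutron.conf",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/neutron.conf",
                "dest": "/etc/neutron/neutron.conf",
                "owner": "neutron",
                "perm": "0600"
            }
        ],
        "permissions": [
            {
                "path": "/var/log/kolla/neutron",
                "owner": "neutron:neutron",
                "recurse": true
            }
        ]
    }

At startup the kolla_start entrypoint runs set_configs.py to copy each source to its dest with the requested owner and perm, applies the entries under "permissions", and then execs the command.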
The concern is, with the new approach, a rookie operator > will modify one of the start scripts - resulting in uncertainty that what > was first deployed matches what is currently running. But with the way Kolla > is now, an operator can still do this! They can restart containers with a > custom entrypoint or additional bind mounts, they can exec in and change > config files, etc. etc. Kolla containers have never been immutable and we're > bending over backwards to artificially try and make this the case. We cant > protect a bad or inexperienced operator from shooting themselves in the > foot, there are better ways of doing so. If/when Docker or the upstream > container world solves this problem, it would then make sense for Kolla to > follow suit. > > On the face of it, what the spec proposes is a simple change, it should not > radically pull the carpet out under people, or even change the way > kolla-ansible works in the near term. If consumers such as tripleo or other > parties feel it would in fact do so please do let me know and we can discuss > and mitigate these problems. TripleO uses these scripts extensively, we certainly do not want to see them go away from kolla images. Martin > Cheers, > -Paul > > [0] https://review.openstack.org/#/c/550958/ > [1] https://github.com/openstack/loci > [2] > https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py > [3] > https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 > [4] > http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From cboylan at sapwetik.org Thu Apr 5 20:27:13 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Apr 2018 13:27:13 -0700 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> Message-ID: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote: > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote: > > On 18-03-31 15:00:27, Jeremy Stanley wrote: > > > According to a notice[1] posted to the pypa-announce and > > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 > > > is expected to be released in two weeks (over the April 14/15 > > > weekend). We know it's at least going to start breaking[2] DevStack > > > and we need to come up with a plan for addressing that, but we don't > > > know how much more widespread the problem might end up being so > > > encourage everyone to try it out now where they can. > > > > > > > I'd like to suggest locking down pip/setuptools/wheel like openstack > > ansible is doing in > > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt > > > > We could maintain it as a separate constraints file (or infra could > > maintian it, doesn't mater). The file would only be used for the > > initial get-pip install. 
> > In the past we've done our best to avoid pinning these tools because 1) > we've told people they should use latest for openstack to work and 2) it > is really difficult to actually control what versions of these tools end > up on your systems if not latest. > > I would strongly push towards addressing the distutils package deletion > problem that we've run into with pip10 instead. One of the approaches > thrown out that pabelanger is working on is to use a common virtualenv > for devstack and avoid the system package conflict entirely. I was mistaken and pabelanger was working to get devstack's USE_VENV option working which installs each service (if the service supports it) into its own virtualenv. There are two big drawbacks to this. The first is that we would lose coinstallation of all the openstack services which is one way we ensure they all work together at the end of the day. The second is that not all services in "base" devstack support USE_VENV and I doubt many plugins do either (neutron apparently doesn't?). I've since worked out a change that passes tempest using a global virtualenv installed devstack at https://review.openstack.org/#/c/558930/. This needs to be cleaned up so that we only check for and install the virtualenv(s) once and we need to handle mixed python2 and python3 environments better (so that you can run a python2 swift and python3 everything else). The other major issue we've run into is that nova file injection (which is tested by tempest) seems to require either libguestfs or nbd. libguestfs bindings for python aren't available on pypi and instead we get them from system packaging. This means if we want libguestfs support we have to enable system site packages when using virtualenvs. The alternative is to use nbd which apparently isn't preferred by nova and doesn't work under current devstack anyways. Why is this a problem? Well the new pip10 behavior that breaks devstack is pip10's refusal to remove distutils-installed packages. Distro packages by and large are distutils packaged which means if you mix system packages and pip installed packages there is a good chance something will break (and it does break for current devstack). I'm not sure that using a virtualenv with system site packages enabled will sufficiently protect us from this case (but we should test it further). Also it feels wrong to enable system packages in a virtualenv if the entire point is avoiding system python packages. I'm not sure what the best option is here but if we can show that system site packages with virtualenvs is viable with pip10 and people want to move forward with devstack using a global virtualenv we can work to clean up this change and make it mergeable. Clark
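The virtualenv behaviour Clark describes is easy to reproduce by hand; the path below is an arbitrary example, and the interesting part is the flag, which lets the env see distro-installed modules such as the libguestfs bindings while pip only ever writes inside the env:

    # Either flavour works; the path is an arbitrary example
    virtualenv --system-site-packages /opt/stack/venv        # Python 2
    python3 -m venv --system-site-packages /opt/stack/venv   # Python 3

    # Upgrading a package whose system copy is distutils-installed (six is
    # just an example here): pip10 notes it cannot uninstall the copy that
    # lives outside the env and installs the newer version into the env
    /opt/stack/venv/bin/pip install -U six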
From zigo at debian.org Thu Apr 5 20:32:13 2018 From: zigo at debian.org (Thomas Goirand) Date: Thu, 5 Apr 2018 22:32:13 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <20180404084507.GA18076@paraplu> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> Message-ID: On 04/04/2018 10:45 AM, Kashyap Chamarthy wrote: > Answering my own questions about Debian -- > > From looking at the Debian Archive[1][2], these are the versions for > 'Stretch' (the current stable release) and in the upcoming 'Buster' > release: > > libvirt | 3.0.0-4+deb9u2 | stretch > libvirt | 4.1.0-2 | buster > > qemu | 1:2.8+dfsg-6+deb9u3 | stretch > qemu | 1:2.11+dfsg-1 | buster > > I also talked on #debian-backports IRC channel on OFTC network, where I > asked: > > "What I'm essentially looking for is: "How can 'stretch' users get > libvirt 3.2.0 and QEMU 2.9.0, even if via a different repository. > As they are proposed to be least common denominator versions across > distributions." > > And two people said: Then the versions from 'Buster' could be backported > to 'stretch-backports'. The process for that is to: "ask the maintainer > of those package and Cc to the backports mailing list." > > Any takers? > > [0] https://packages.debian.org/stretch-backports/ > [1] https://qa.debian.org/madison.php?package=libvirt > [2] https://qa.debian.org/madison.php?package=qemu Hi Kashyap, Thanks for considering Debian, for asking me, and for giving me enough time to answer! Here are my thoughts. I updated the wiki page as you suggested [1]. As I wrote on IRC, we don't need to care about Jessie, so I removed Jessie, and added Buster/SID. tl;dr: just skip this section & go to conclusion backport of libvirt/QEMU/libguestfs more in details --------------------------------------------------- I already attempted the backports from Debian Buster to Stretch. All of the 3 components (libvirt, qemu & libguestfs) could be built without extra dependencies, which is a very good thing. - libvirt 4.1.0 compiled without issue, though the dh_install phase failed with this error: dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried in "." and "debian/tmp") dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/ dh_install: missing files, aborting Going by nothing more than this build log, it's likely that a minor fix in debian/*.install files would make it possible to backport the package. - qemu 2.11 built perfectly with zero change. - libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as build-depends (fdisk is now a separate package in Buster). So it looks easy to backport these 3 *AT THIS TIME*. [2] However, without a crystal ball, nobody can tell how hard it will be to backport these *IN A YEAR FROM NOW*. Conclusion: ----------- If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0 is fine, please choose 3.0.0 as minimum. If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is fine, please choose 2.8.0 as minimum. If you don't absolutely need new features from libguestfs 1.36 and 1.34 is fine, please choose 1.34 as minimum. If you do need these new features, I'll do my best to adapt. :)
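For what it's worth, the lower-bound choice is cheap on the consuming side because feature support is gated per host. Below is a minimal sketch of how such gating tends to look; the constant names echo nova's MIN_LIBVIRT_VERSION convention, but the values and helpers here are illustrative rather than lifted from nova's tree:

    # Illustrative version gating, not copied from nova/virt/libvirt/driver.py
    MIN_LIBVIRT_VERSION = (3, 0, 0)        # hard floor: refuse to start below
    MIN_QEMU_VERSION = (2, 8, 0)
    MIN_LIBVIRT_SOME_FEATURE = (3, 2, 0)   # newer feature stays conditional

    def version_to_int(version):
        # (major, minor, micro) -> a single comparable integer
        major, minor, micro = version
        return major * 1000000 + minor * 1000 + micro

    def supports_some_feature(host_libvirt_version):
        # host_libvirt_version is the tuple reported by the connected host
        return version_to_int(host_libvirt_version) >= version_to_int(
            MIN_LIBVIRT_SOME_FEATURE)

With that pattern, picking the lower bound as the floor costs nothing for users on newer distros: the conditional features simply light up where the host supports them.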
About Buster freeze & OpenStack Stein backports to Debian Stretch ----------------------------------------------------------------- Now, about Buster. As you know, Debian doesn't have planned release dates. Though here are the stats, showing that roughly there's a new Debian every 2 years and the freeze takes about 6 months. https://wiki.debian.org/DebianReleases#Release_statistics With this logic, and considering Stretch was released last year in June, after Stein is released, Buster will probably start its freeze. If the Debian freeze happens later, good for me, I'll have more time to make Stein better. But then Debian users will probably expect an OpenStack Stein backport to Debian Stretch, and that's where it can become tricky to backport these 3 packages. The end ------- I hope the above isn't too long, and that it helps in making the best decision. Cheers, Thomas Goirand (zigo) [1] https://wiki.openstack.org/wiki/LibvirtDistroSupportMatrix#Distro_minimum_versions [2] I'm not shouting, just highlighting the important part! :) From zbitter at redhat.com Thu Apr 5 20:36:32 2018 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 5 Apr 2018 16:36:32 -0400 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1521629342.8587.20.camel@redhat.com> References: <1521629342.8587.20.camel@redhat.com> Message-ID: <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> On 21/03/18 06:49, Stephen Finucane wrote: > As noted by Monty in a prior openstack-dev post [2], some projects rely > on a pbr extension to the 'build_sphinx' setuptools command which can > automatically run the 'sphinx-apidoc' tool before building docs. This > is enabled by configuring some settings in the '[pbr]' section of the > 'setup.cfg' file [3]. To ensure this continued working, the zuul jobs > definitions [4] check for the presence of these settings and build docs > using the legacy 'build_sphinx' command if found. **At no point do the > jobs call the tox job**. As a result, if you convert a project to use > 'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you > lose the ability to build docs locally. > > I've gone through and proposed a couple of reverts to fix projects > we've already broken. However, going forward, there are two things > people should do to prevent issues like this popping up. > > * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections > from 'setup.cfg' in any patches that aim to convert a project to use > the new PTI. This will ensure the gate catches any potential > issues. How can we enable warning_is_error in the gate with the new PTI? It's easy enough to add the -W flag in tox.ini for local builds, but as you say the tox job is never called in the gate. In the gate zuul checks for it in the [build_sphinx] section of setup.cfg: https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23 So I think it makes more sense to remove the [pbr] section, but leave the [build_sphinx] section? thanks, Zane.
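To make Zane's question concrete, these are the two pieces of configuration in play (directory names are the conventional ones; adjust per project). Under the legacy tooling the error flag lives in setup.cfg, which is what the gate job inspects; under the new PTI the equivalent is the -W flag passed to sphinx-build in tox.ini:

    # setup.cfg (legacy build_sphinx configuration)
    [build_sphinx]
    source-dir = doc/source
    build-dir = doc/build
    all-files = 1
    warning-is-error = 1

    # tox.ini (new PTI style; -W turns Sphinx warnings into errors locally)
    [testenv:docs]
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html

Until the gate jobs grow an equivalent knob for the sphinx-build path, keeping the [build_sphinx] section while dropping only [pbr], as Zane suggests, preserves warning-is-error in both places.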
From inc007 at gmail.com Thu Apr 5 20:41:41 2018 From: inc007 at gmail.com (Michał Jastrzębski) Date: Thu, 5 Apr 2018 13:41:41 -0700 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images In-Reply-To: References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> Message-ID: So I'll re-iterate the comment I made in BCN. In a previous thread we praised how Kolla provided a stable API for images, and I agree that it was a great design choice (to provide a stable API, not necessarily how the API looks), and this change would break it. So *if* we decide to do it, we need to follow deprecation; that means we could deprecate these files in this release and start removing them in the next. Support for LOCI in kolla-ansible is a good thing, but I don't think changing the Kolla image API is required for that. LOCI provides a base image argument, so we could simply create a base image with all the extended-start and set-config mechanisms and some shim to source the extended-start script that belongs to a particular container. We will need a kolla layer image anyway because set_config is there to stay (as Martin pointed out, it's a valuable tool fixing a real issue and it's used by more projects than just kolla-ansible). We could add another script that would look like extended_start.sh -> source $CONTAINER_NAME-extended-start.sh and copy all kolla's extended start scripts to a dir with proper naming (I believe this is the solution that Sam came up with shortly after BCN). This is purely technical and not that hard to do, much quicker and easier than deprecating the API... On 5 April 2018 at 12:28, Martin André wrote: > On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke wrote: >> Hi all, >> >> This mail is to serve as a follow on to the discussion during yesterday's >> team meeting[4], which was regarding the desire to move start scripts out of >> the kolla images [0]. There's a few factors at play, and it may well be best >> left to discuss in person at the summit in May, but hopefully we can get at >> least some of this hashed out before then. >> >> I'll start by summarising why I think this is a good idea, and then attempt >> to address some of the concerns that have come up since. >> >> First off, to be frank, this is effort is driven by wanting to add support >> for loci images[1] in kolla-ansible. I think it would be unreasonable for >> anyone to argue this is a bad objective to have, loci images have very >> obvious benefits over what we have in Kolla today. I'm not looking to drop >> support for Kolla images at all, I simply want to continue decoupling things >> to the point where operators can pick and choose what works best for them. >> Stemming from this, I think moving these scripts out of the images provides >> a clear benefit to our consumers, both users of kolla and third parties such >> as triple-o. Let me explain why. > > It's still very obscure to me how removing the scripts from kolla > images will benefit consumers. If the reason is that you want to > re-use them in other, non-kolla images, I believe we should package > the scripts. I've left some comments in your spec review. > >> Normally, to run a docker image, a user will do 'docker run >> helloworld:latest'. In any non trivial application, config needs to be >> provided. In the vast majority of cases this is either provided via a bind >> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via >> environment variables (docker run --env HELLO=paul helloworld:latest). This >> is all bog standard stuff, something anyone who's spent an hour learning >> docker can understand. >> >> Now, lets say someone wants to try out OpenStack with Docker, and they look >> at Kolla. First off they have to look at something called set_configs.py[2] >> - over 400 lines of Python. Next they need to understand what that script >> consumes, config.json [3]. The only reference for config.json is the files >> that live in kolla-ansible, a mass of jinja and assumptions about how the >> service will be run.
Next, they need to figure out how to bind mount the >> config files and config.json into the container in a way that can be >> consumed by set_configs.py (which by the way, requires the base kolla image >> in all cases). This is only for the config. For the service start up >> command, this need to also be provided in config.json. This command is then >> parsed out and written to a location in the image, which is consumed by a >> series of start/extend start shell scripts. Kolla is *unique* in this >> regard, no other project in the container world is interfacing with images >> in this way. Being a snowflake in this regard is not a good thing. I'm still >> waiting to hear from a real world operator who would prefer to spend time >> learning the above to doing: > > You're pointing a very real documentation issue. I've mentioned in the > other kolla thread that I have a stub for the kolla API documentation. > I'll push a patch for what I have and we can iterate on that. > >> docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint >> /usr/bin/keystone [args] >> >> This is the Docker API, it's easy to understand and pretty much the standard >> at this point. > > Sure, using the docker API works for simpler cases, not too > surprisingly once you start doing more funky things with your > containers you're quickly reach the docker API limitations. That's > when the kolla API comes in handy. > See for example this recent patch > https://review.openstack.org/#/c/556673/ where we needed to change > some file permission to the uid/gid of the user inside the container. > > The first iteration basically used the docker API and started an > additional container to fix the permissions: > > docker run -v > /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ > -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw > \ > neutron_image \ > /bin/bash -c 'chown neutron:neutron > /etc/pki/tls/certs/neutron.crt; chown neutron:neutron > /etc/pki/tls/private/neutron.key' > > You'll agree this is not the most obvious. And it had a nasty side > effect that is changes the permissions of the files _on the host_. > While using kolla API we could simply add to our config.json: > > - path: /etc/pki/tls/certs/neutron.crt > owner: neutron:neutron > - path: /etc/pki/tls/private/neutron.key > owner: neutron:neutron > >> The other argument is that this removes the possibility for immutable >> infrastructure. The concern is, with the new approach, a rookie operator >> will modify one of the start scripts - resulting in uncertainty that what >> was first deployed matches what is currently running. But with the way Kolla >> is now, an operator can still do this! They can restart containers with a >> custom entrypoint or additional bind mounts, they can exec in and change >> config files, etc. etc. Kolla containers have never been immutable and we're >> bending over backwards to artificially try and make this the case. We cant >> protect a bad or inexperienced operator from shooting themselves in the >> foot, there are better ways of doing so. If/when Docker or the upstream >> container world solves this problem, it would then make sense for Kolla to >> follow suit. >> >> On the face of it, what the spec proposes is a simple change, it should not >> radically pull the carpet out under people, or even change the way >> kolla-ansible works in the near term. 
If consumers such as tripleo or other >> parties feel it would in fact do so please do let me know and we can discuss >> and mitigate these problems. > > TripleO uses these scripts extensively, we certainly do not want to > see them go away from kolla images. > > Martin > >> Cheers, >> -Paul >> >> [0] https://review.openstack.org/#/c/550958/ >> [1] https://github.com/openstack/loci >> [2] >> https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py >> [3] >> https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 >> [4] >> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From fungi at yuggoth.org Thu Apr 5 20:44:32 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 5 Apr 2018 20:44:32 +0000 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> Message-ID: <20180405204432.u3hpq56murf4wder@yuggoth.org> On 2018-04-05 13:27:13 -0700 (-0700), Clark Boylan wrote: [...] > I'm not sure what the best option is here but if we can show that > system site packages with virtualenvs is viable with pip10 and > people want to move forward with devstack using a global > virtualenv we can work to clean up this change and make it > mergeable. Ideally, someone convinces the libguestfs authors of the benefits of putting sdist/wheel builds of their python module on PyPI like (eventually) happened with libvirt. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ltoscano at redhat.com Thu Apr 5 20:56:55 2018 From: ltoscano at redhat.com (Luigi Toscano) Date: Thu, 05 Apr 2018 22:56:55 +0200 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <20180405204432.u3hpq56murf4wder@yuggoth.org> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> <20180405204432.u3hpq56murf4wder@yuggoth.org> Message-ID: <2169495.RU2An0EURy@whitebase.usersys.redhat.com> On Thursday, 5 April 2018 22:44:32 CEST Jeremy Stanley wrote: > On 2018-04-05 13:27:13 -0700 (-0700), Clark Boylan wrote: > [...] > > > I'm not sure what the best option is here but if we can show that > > system site packages with virtualenvs is viable with pip10 and > > people want to move forward with devstack using a global > > virtualenv we can work to clean up this change and make it > > mergeable. 
> > Ideally, someone convinces the libguestfs authors of the benefits of > putting sdist/wheel builds of their python module on PyPI like > (eventually) happened with libvirt. It may be not trivial: https://bugzilla.redhat.com/show_bug.cgi?id=1075594 On the other side, not being able to use system site packages with a virtualenv does not sound good either. Ciao -- Luigi From cboylan at sapwetik.org Thu Apr 5 21:38:15 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 05 Apr 2018 14:38:15 -0700 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> Message-ID: <1522964295.3869865.1328145080.40A3869A@webmail.messagingengine.com> On Thu, Apr 5, 2018, at 1:27 PM, Clark Boylan wrote: > The other major issue we've run into is that nova file injection (which > is tested by tempest) seems to require either libguestfs or nbd. > libguestfs bindings for python aren't available on pypi and instead we > get them from system packaging. This means if we want libguestfs support > we have to enable system site packages when using virtualenvs. The > alternative is to use nbd which apparently isn't preferred by nova and > doesn't work under current devstack anyways. > > Why is this a problem? Well the new pip10 behavior that breaks devstack > is pip10's refusable to remove distutils installed packages. Distro > packages by and large are distutils packaged which means if you mix > system packages and pip installed packages there is a good chance > something will break (and it does break for current devstack). I'm not > sure that using a virtualenv with system site packages enabled will > sufficiently protect us from this case (but we should test it further). > Also it feels wrong to enable system packages in a virtualenv if the > entire point is avoiding system python packages. Good news everyone, http://logs.openstack.org/74/559174/1/check/tempest-full-py3/4c5548f/job-output.txt.gz#_2018-04-05_21_26_36_669943 shows the pip10 appears to do the right thing with a virtualenv using system-site-package option when attempting to install a newer version of a package that would require being deleted if done on the system python proper. It determines there is an existing package, that it is outside the env and it cannot uninstall it, then installs a newer version of the package anyways. If you look later in the job run you'll see it fails in the system python context on this same package, http://logs.openstack.org/74/559174/1/check/tempest-full-py3/4c5548f/job-output.txt.gz#_2018-04-05_21_29_31_399895. I think that means this is a viable workaround for us even if it isn't ideal. Clark From melwittt at gmail.com Thu Apr 5 21:43:38 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 5 Apr 2018 14:43:38 -0700 Subject: [openstack-dev] [nova] heads up, tox -e[pep|fast]8 defaulting to python3 Message-ID: <2f149aeb-f774-35b5-352c-da12161c4790@gmail.com> Howdy everyone, We recently updated the tox pep8 and fast8 environments to default to using python3 [0] because it has stricter checks and we wanted to make sure we don't let pep8 errors get through the CI gate [1]. 
Because of this, you'll need the python3 and python3-dev packages in your environment in order to run tox -e[pep|fast]8. Thanks, -melanie [0] https://review.openstack.org/#/c/558648 [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129025.html From mriedemos at gmail.com Thu Apr 5 23:11:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 5 Apr 2018 18:11:26 -0500 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> Message-ID: <902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com> On 4/5/2018 3:32 PM, Thomas Goirand wrote: > If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0 > is fine, please choose 3.0.0 as minimum. > > If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is > fine, please choose 2.8.0 as minimum. > > If you don't absolutely need new features from libguestfs 1.36 and 1.34 > is fine, please choose 1.34 as minimum. New features in the libvirt driver which depend on minimum versions of libvirt/qemu/libguestfs (or arch for that matter) are always conditional, so I think it's reasonable to go with the lower bound for Debian. We can still support the features for the newer versions if you're running a system with those versions, but not penalize people with slightly older versions if not. -- Thanks, Matt From mbirru at gmail.com Thu Apr 5 23:27:08 2018 From: mbirru at gmail.com (Murali B) Date: Thu, 5 Apr 2018 16:27:08 -0700 Subject: [openstack-dev] zun-api error Message-ID: Hi Hongbin, Thank you for your help As per the our discussion here is the output for my current api on pike. I am not sure which version of zun client client I should use for pike root at cluster3-2:~/python-zunclient# zun service-list ERROR: Not Acceptable (HTTP 406) (Request-ID: req-be69266e-b641-44b9-9739-0c2d050f18b3) root at cluster3-2:~/python-zunclient# zun --debug service-list DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak = vitrageclient.auth:VitrageKeycloakLoader') DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth = vitrageclient.auth:VitrageNoAuthLoader') DEBUG (extension:180) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader') DEBUG (extension:180) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token') DEBUG (extension:180) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth') DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1') DEBUG (extension:180) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken') DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode') DEBUG (extension:180) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password') DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password') DEBUG (extension:180) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password') DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword') DEBUG (extension:180) found extension 
EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos') DEBUG (extension:180) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token') DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials') DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') DEBUG (extension:180) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token') DEBUG (extension:180) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP') DEBUG (extension:180) found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') DEBUG (extension:180) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password') DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos') DEBUG (extension:180) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader') DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint') DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic = gnocchiclient.auth:GnocchiBasicLoader') DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth = gnocchiclient.auth:GnocchiNoAuthLoader') DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth = aodhclient.noauth:AodhNoAuthLoader') DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0 python-requests/2.18.1 CPython/2.7.12" DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16 DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" 200 248 DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: application/json RESP BODY: {"version": {"status": "stable", "updated": "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", "links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}} DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c DEBUG (base:175) Making authentication request to http://ubuntu16:35357/v3/auth/tokens DEBUG (connectionpool:395) http://ubuntu16:35357 "POST /v3/auth/tokens HTTP/1.1" 201 10333 DEBUG (base:180) {"token": {"is_domain": false, "methods": ["password"], "roles": [{"id": "4000a662be2d47fd8fdf5a0fef66767d", "name": "admin"}], "expires_at": "2018-04-06T00:11:08.000000Z", "project": {"domain": {"id": "default", "name": "Default"}, "id": "a391261cffba4f4c827ab7420a352fe1", "name": "admin"}, "catalog": [{"endpoints": [{"url": " http://cluster3-2:9517/v1", "interface": 
"internal", "region": "RegionOne", "region_id": "RegionOne", "id": "5a634bafa38c45dbb571f0edb3702101"}, {"url": "http://cluster3-2:9517/v1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "8926d37d276a4fe49df66bb513f7906a"}, {"url": "http://cluster3-2:9517/v1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "a74e1b4faf39436aa5d6f9b446ceee92"}], "type": "container-zun", "id": "025154eef222461da9edcfe32ae79e5e", "name": "zun"}, {"endpoints": [{"url": " http://ubuntu16:9001", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "3a94c0df20da47d1b922541a87576ab0"}, {"url": "http://ubuntu16:9001", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "5fcab2a59c72433581510d7aafe29961"}, {"url": "http://ubuntu16:9001", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "71e314291a4b4c648aa5ba662b216fa6"}], "type": "dns", "id": "07677b58ad4d469d80dbda8e9fa908bc", "name": "designate"}, {"endpoints": [{"url": "http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "4d56ee7967994c869239007146e52ab8"}, {"url": " http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "9845138d25ec41b1a7102d8365f1b9c7"}, {"url": " http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "f99f9bf4b0eb4e19aa8dbe72fc13e648"}], "type": "volumev2", "id": "077bd5ecfc59499ab84f49e410efef4f", "name": "cinderv2"}, {"endpoints": [{"url": "http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "355c6c323653469c8315d5dea2998b0d"}, {"url": " http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "841768ec3edb42d7b18fe6a2a17f4dbc"}, {"url": " http://10.11.142.2:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "afdbc1d2a5114cd9b0714331eb227ba9"}], "type": "orchestration", "id": "116243d61e3a4c90b7144d6a8b5a170a", "name": "heat"}, {"endpoints": [{"url": "http://ubuntu16:8778", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "2dacce3eed484464b3f521b7b2720cd9"}, {"url": "http://ubuntu16:8778", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "5300f9ae336c41b8a8bb93400db35a30"}, {"url": "http://ubuntu16:8778", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5c7e2cc977f74051b0ed104abb1d46a9"}], "type": "placement", "id": "1d270e2d3d4f488e82597097af933e7a", "name": "placement"}, {"endpoints": [{"url": "http://ubuntu16:8042", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "337f147396f143679e6cf7fbdd3601ab"}, {"url": "http://ubuntu16:8042", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "a97d660772e64894b4b13092d7719298"}, {"url": "http://ubuntu16:8042", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "bb5caf186c9947aca31e6ee2a37f6bbd"}], "type": "alarming", "id": "2a19c1a28a42433caa8eb919910ec06f", "name": "aodh"}, {"endpoints": [], "type": "volume", "id": "39c740b891764e4a9081773709269848", "name": "cinder"}, {"endpoints": [{"url": 
"http://ubuntu16:8041", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "9d455913a5fb4f15bbe15740f4dee260"}, {"url": "http://ubuntu16:8041", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "c5c2471db1cb4ae7a1f3e847404d4b37"}, {"url": "http://ubuntu16:8041", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "cc12daed5ea342a1a47602720589cb9e"}], "type": "metric", "id": "39fdf2d5300343aa8ebe5509d29ba7ce", "name": "gnocchi"}, {"endpoints": [{"url": "http://cluster3-2:9890", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "1c7ddc56ba984afd8187cd1894a75bf1"}, {"url": "http://cluster3-2:9890", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "888925c4fc8b48859f086860333c3ab4"}, {"url": "http://cluster3-2:9890", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "9bfd7198dab14f6a8b7eba444f920020"}], "type": "nfv-orchestration", "id": "3da88eae843a4949806186db8a9a3bd0", "name": "tacker"}, {"endpoints": [{"url": "http://10.11.142.2:8999", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "32880f809a2f45598a9838e4b168ce5b"}, {"url": "http://10.11.142.2:8999", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "530711f56f234ad19775fae65774c0ab"}, {"url": "http://10.11.142.2:8999", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "8d7493ad752b453b87d789d0ec5cae93"}], "type": "rca", "id": "55f78369ea5e40e3b9aa9ded854cb163", "name": "vitrage"}, {"endpoints": [{"url": "http://10.11.142.2:5000/v3/", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "afba4b58fd734baeaed94f8f2380a986"}, {"url": "http://ubuntu16:5000/v3/", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "b4b864acfc1746b3ad2d22c6a28e1361"}, {"url": " http://ubuntu16:35357/v3/", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "bf256df5f8d34e9c80c00b78da122118"}], "type": "identity", "id": "58b4ff04dc764fc2aae4bfd9d0f1eb8e", "name": "keystone"}, {"endpoints": [{"url": " http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "260f8b9e9e214cc1a39407517b3ca826"}, {"url": " http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "81adeaccba1c4203bddb7734f23116a8"}, {"url": " http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "e63332e8b15e43c6b9c331d9ee8551ab"}], "type": "volumev3", "id": "8cd6101718e94ee198cf9ba9894bf1c9", "name": "cinderv3"}, {"endpoints": [{"url": "http://ubuntu16:9696", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "65a0b4233436428ab42aa3b40b1ce53f"}, {"url": "http://ubuntu16:9696", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "b8354dd727154056b3c9b81b89054bab"}, {"url": "http://ubuntu16:9696", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "ca44db85238b46cf9fbb6dc6f1d9dff5"}], "type": "network", "id": "ade912885a73431f95a3a01d8a8e6498", "name": "neutron"}, {"endpoints": [{"url": "http://ubuntu16:8000/v1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "5d7559010ea94cca9edd7ab6213f6b2c"}, {"url": 
"http://ubuntu16:8000/v1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "af77025677284808b0715488e22729d4"}, {"url": " http://10.11.142.2:8000/v1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "c17b650eccf14045af49d5e9d050e875"}], "type": "cloudformation", "id": "b04f735f46e743969e2bb0fff3aee1b5", "name": "heat-cfn"}, {"endpoints": [{"url": " http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "18580f7a6dea4c53bc66d161e7e0a71e"}, {"url": " http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "b4a8575704a4426494edc57551f40e58"}, {"url": " http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "c41ec544b61c41098c07030bc84ba2a0"}], "type": "compute", "id": "b06f4aa21a4a488c8f0c5a835e639bd3", "name": "nova"}, {"endpoints": [{"url": "http://ubuntu16:9292", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "4ed27e537ca34b6fb93a8c72d8921d24"}, {"url": "http://ubuntu16:9292", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "ab0c37600ecf45d797e7972dc6a4fde2"}, {"url": "http://ubuntu16:9292", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "f4a0f97be4f343d698ea12633e3823d6"}], "type": "image", "id": "bbe4fbb4a1d7495f948faa9baf1e3828", "name": "glance"}, {"endpoints": [{"url": "http://ubuntu16:8777", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "3d160f2286634811b24b8abd6ad72c1f"}, {"url": "http://ubuntu16:8777", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "a988e821ff1f4760ae3873c17ab87294"}, {"url": "http://ubuntu16:8777", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "def8c07174184a0ca26e2f0f26d60a73"}], "type": "metering", "id": "f4450730522d4342ac6626b81567b36c", "name": "ceilometer"}, {"endpoints": [{"url": "http://ubuntu16:9511/v1", "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", "id": "19e14e5c5c5a4d3db6a6a632db728668"}, {"url": "http://10.11.142.2:9511/v1", "interface": "public", "region": "RegionOne", "region_id": "RegionOne", "id": "28fb2092bcc748ce88dfb1284ace1264"}, {"url": " http://10.11.142.2:9511/v1", "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", "id": "c33f5b4a355d4067aa2e7093606cd153"}], "type": "container", "id": "fdbcff09ecd545c8ba28bfd96782794a", "name": "magnum"}], "user": {"domain": {"id": "default", "name": "Default"}, "password_expires_at": null, "name": "admin", "id": "3b136545b47b40709b78b1e36cdcdc63"}, "audit_ids": ["Ad1z5kAmRBehcgxG6-8IYA"], "issued_at": "2018-04-05T23:11:08.000000Z"}} DEBUG (session:372) REQ: curl -g -i -X GET http://10.11.142.2:9511/v1/services -H "OpenStack-API-Version: container 1.2" -H "X-Auth-Token: {SHA1}7523b440595290414cefa54434fc7c8adbec5c3d" -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: None" DEBUG (connectionpool:207) Starting new HTTP connection (1): 10.11.142.2 DEBUG (connectionpool:395) http://10.11.142.2:9511 "GET /v1/services HTTP/1.1" 406 166 DEBUG (session:419) RESP: [406] Content-Type: application/json Content-Length: 166 x-openstack-request-id: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 Server: Werkzeug/0.10.4 Python/2.7.12 Date: Thu, 05 Apr 2018 
23:11:09 GMT RESP BODY: {"errors": [{"status": 406, "code": "", "links": [], "title": "Not Acceptable", "detail": "Invalid service type for OpenStack-API-Version header", "request_id": ""}]} DEBUG (session:722) GET call to container for http://10.11.142.2:9511/v1/services used request id req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 DEBUG (shell:705) Not Acceptable (HTTP 406) (Request-ID: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) Traceback (most recent call last): File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line 703, in main map(encodeutils.safe_decode, sys.argv[1:])) File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line 639, in main args.func(self.cs, args) File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services_shell.py", line 22, in do_service_list services = cs.services.list() File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services.py", line 70, in list return self._list(self._path(path), "services") File "/usr/local/lib/python2.7/dist-packages/zunclient/common/base.py", line 128, in _list resp, body = self.api.json_request('GET', url) File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", line 368, in json_request resp = self._http_request(url, method, **kwargs) File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", line 351, in _http_request error_json.get('debuginfo'), method, url) NotAcceptable: Not Acceptable (HTTP 406) (Request-ID: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) ERROR: Not Acceptable (HTTP 406) (Request-ID: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) Thanks -Murali -------------- next part -------------- An HTML attachment was scrubbed... URL: From pabelanger at redhat.com Fri Apr 6 00:39:09 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 5 Apr 2018 20:39:09 -0400 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> Message-ID: <20180406003909.GA28653@localhost.localdomain> On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote: > On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote: > > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote: > > > On 18-03-31 15:00:27, Jeremy Stanley wrote: > > > > According to a notice[1] posted to the pypa-announce and > > > > distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 > > > > is expected to be released in two weeks (over the April 14/15 > > > > weekend). We know it's at least going to start breaking[2] DevStack > > > > and we need to come up with a plan for addressing that, but we don't > > > > know how much more widespread the problem might end up being so > > > > encourage everyone to try it out now where they can. > > > > > > > > > > I'd like to suggest locking down pip/setuptools/wheel like openstack > > > ansible is doing in > > > https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt > > > > > > We could maintain it as a separate constraints file (or infra could > > > maintian it, doesn't mater). The file would only be used for the > > > initial get-pip install. 
> > > > In the past we've done our best to avoid pinning these tools because 1) > > we've told people they should use latest for openstack to work and 2) it > > is really difficult to actually control what versions of these tools end > > up on your systems if not latest. > > > > I would strongly push towards addressing the distutils package deletion > > problem that we've run into with pip10 instead. One of the approaches > > thrown out that pabelanger is working on is to use a common virtualenv > > for devstack and avoid the system package conflict entirely. > > I was mistaken and pabelanger was working to get devstack's USE_VENV option working which installs each service (if the service supports it) into its own virtualenv. There are two big drawbacks to this. This first is that we would lose coinstallation of all the openstack services which is one way we ensure they all work together at the end of the day. The second is that not all services in "base" devstack support USE_VENV and I doubt many plugins do either (neutron apparently doesn't?). > Yah, I agree your approach is the better one; I just wanted to toggle what was supported by default. However, it is pretty broken today. I can't imagine anybody actually using it, if so they must be carrying downstream patches. If we think USE_VENV is a valid use case for per-project venvs, I suggest we continue to fix it and update neutron to support it. Otherwise, maybe we should rip it out and replace it. Paul > I've since worked out a change that passes tempest using a global virtualenv installed devstack at https://review.openstack.org/#/c/558930/. This needs to be cleaned up so that we only check for and install the virtualenv(s) once and we need to handle mixed python2 and python3 environments better (so that you can run a python2 swift and python3 everything else). > > The other major issue we've run into is that nova file injection (which is tested by tempest) seems to require either libguestfs or nbd. libguestfs bindings for python aren't available on pypi and instead we get them from system packaging. This means if we want libguestfs support we have to enable system site packages when using virtualenvs. The alternative is to use nbd which apparently isn't preferred by nova and doesn't work under current devstack anyways. > > Why is this a problem? Well the new pip10 behavior that breaks devstack is pip10's refusable to remove distutils installed packages. Distro packages by and large are distutils packaged which means if you mix system packages and pip installed packages there is a good chance something will break (and it does break for current devstack). I'm not sure that using a virtualenv with system site packages enabled will sufficiently protect us from this case (but we should test it further). Also it feels wrong to enable system packages in a virtualenv if the entire point is avoiding system python packages. > > I'm not sure what the best option is here but if we can show that system site packages with virtualenvs is viable with pip10 and people want to move forward with devstack using a global virtualenv we can work to clean up this change and make it mergeable.
> > Clark > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Fri Apr 6 01:52:22 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 5 Apr 2018 21:52:22 -0400 Subject: [openstack-dev] [bifrost][helm][OSA][barbican] Switching from fedora-26 to fedora-27 In-Reply-To: <20180404152734.GA30139@localhost.localdomain> References: <20180305234513.GA26473@localhost.localdomain> <20180313145426.GA14285@localhost.localdomain> <20180404152734.GA30139@localhost.localdomain> Message-ID: <20180406015222.GA31818@localhost.localdomain> On Wed, Apr 04, 2018 at 11:27:34AM -0400, Paul Belanger wrote: > On Tue, Mar 13, 2018 at 10:54:26AM -0400, Paul Belanger wrote: > > On Mon, Mar 05, 2018 at 06:45:13PM -0500, Paul Belanger wrote: > > > Greetings, > > > > > > A quick search of git shows your projects are using fedora-26 nodes for testing. > > > Please take a moment to look at gerrit[1] and help land patches. We'd like to > > > remove fedora-26 nodes in the next week and to avoid broken jobs you'll need to > > > approve these patches. > > > > > > If you jobs are failing under fedora-27, please take the time to fix any issue > > > or update said patches to make them non-voting. > > > > > > We (openstack-infra) aim to only keep the latest fedora image online, which > > > changes aprox every 6 months. > > > > > > Thanks for your help and understanding, > > > Paul > > > > > > [1] https://review.openstack.org/#/q/topic:fedora-27+status:open > > > > > Greetings, > > > > This is a friendly reminder, about moving jobs to fedora-27. I'd like to remove > > our fedora-26 images next week and if jobs haven't been migrated you may start > > to see NODE_FAILURE messages while running jobs. Please take a moment to merge > > the open changes or update them to be non-voting while you work on fixes. > > > > Thanks again, > > Paul > > > Hi, > > It's been a month since we started asking projects to migrate to fedora-26. > > I've proposed the patch to review fedora-26 nodes from nodepool[2], if your > project hasn't merge the patches above you will start to see NODE_FAILURE > results for your jobs. Please take the time to approve the changes above. > > Because new fedora images come online every 6 months, we like to only keep one > of them online at any given time. Fedora is meant to be a fast moving distro to > pick up new versions of software out side of the Ubuntu LTS releases. > > If you have any questions please reach out to us in #openstack-infra. > > Thanks, > Paul > > [2] https://review.openstack.org/558847/ > We've just landed the patch, fedora-26 images are now removed. If you haven't upgraded your jobs to fedora-27, you'll now start seeing NODE_FAILURE returned by Zuul. If you have any questions please reach out to us in #openstack-infra. Thanks, Paul From gdubreui at redhat.com Fri Apr 6 02:00:24 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Fri, 6 Apr 2018 12:00:24 +1000 Subject: [openstack-dev] [api] Adding a SDK to developer.openstack.org pages Message-ID: <5cf52faf-9755-2ddd-4ba3-d19f1a4d4490@redhat.com> Hi, I'd like to update the developer.openstack.org pages to add details about a new SDK. What would be the corresponding repo?
My searches landed me into https://docs.openstack.org/doc-contrib-guide/ which is about updating docs.openstack.org but not developer.openstack.org. Is the developer section inside the docs section? Thanks, Gilles From zhipengh512 at gmail.com Fri Apr 6 02:16:28 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Fri, 6 Apr 2018 10:16:28 +0800 Subject: [openstack-dev] [cyborg] Promote Li Liu as new core reviewer Message-ID: Hi Team, This is an email for my nomination of adding Li Liu to the core reviewer team. Li Liu has been instrumental in the resource provider data model implementation for Cyborg during the Queens release, as well as metadata standardization and programming design for Rocky. His overall stats [0] and current stats [1] for Rocky speak for themselves. His patches could be found here [2]. Given the amount of work underway for Rocky, it would be great to add such an amazing force :) [0] http://stackalytics.com/?module=cyborg-group&metric=person-day&release=all [1] http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky [2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Fri Apr 6 03:55:40 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 6 Apr 2018 15:55:40 +1200 Subject: [openstack-dev] Proposal: The OpenStack Client Library Guide Message-ID: <2e195faf-6c1f-6a81-794d-59c97a371fd8@catalyst.net.nz> Hello fellow OpenStackers, As some of you have probably heard me rant, I've been thinking about how to better solve the problem with various tools that support OpenStack or are meant to be OpenStack clients/tools which don't always work as expected by those of us directly in the community. Mostly around things like auth and variable name conventions, things where there should really be consistency and overlap. The example that most recently triggered this discussion was how OpenStackClient (and os-client-config) supports certain elements of clouds.yaml and ENVVAR config, while Terraform supports it differently. You'd often run both on the cli, often in the same terminal, so it is always weird when certain auth and scoping values don't work the same. This is being worked on, but little problems like this are an ongoing problem. The proposal: write an authoritative guide/spec on the basics of implementing a client library or tool for any given language that talks to OpenStack. Elements we ought to cover: - How all the various auth methods in Keystone work, how the whole authn and authz process works with Keystone, and how to actually use it to do what you want. - What common client configuration options exist and how they work (common variable names, ENVVARs, clouds.yaml), with something like common ENVVARs documented and a list maintained so there is one definitive source for what to expect people to be using. - Per project guides on how the API might act that help facilitate starting to write code against it beyond just the API reference, and examples of what to expect.
Not exactly a duplicate of the API ref, but more a 'common pitfalls and confusing elements to be aware of' section that builds on the API ref of each project. There are likely other things we want to include, and we need to work out what those are, but ideally this should be a new documentation-focused project which will result in a useful guide on what someone needs in order to take any programming language and write a library that works as we expect it should against OpenStack. Such a guide would also help any existing libraries ensure they themselves do fully understand and use the OpenStack auth and service APIs as expected. It should also help to ensure programmers working across multiple languages and systems have a much easier time interacting with all the various libraries they might touch. A lot of this knowledge exists, but it's hard to parse and not well documented. We have reference implementations of it all in the likes of OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which os-client-config is now a part of), but what we need is a language agnostic guide rather than the assumption that people will read the code of our official projects. Even the API ref itself isn't entirely helpful since in a lot of cases it only covers the most basic of examples for each API. There appears to be interest in something like this, so let's start with a mailing list discussion, and potentially turn it into something more official if this leads anywhere useful. :) Cheers, Adrian From thierry at openstack.org Fri Apr 6 07:25:56 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Apr 2018 09:25:56 +0200 Subject: [openstack-dev] Asking for ask.openstack.org In-Reply-To: <1c130383-6711-e579-9492-61c7656ca985@redhat.com> References: <5AC542F4.2090205@openstack.org> <998761ae-b016-ec97-ceb5-a4d4fc725b14@redhat.com> <71b8b916-a227-1aaa-7954-987772a645ff@redhat.com> <1c130383-6711-e579-9492-61c7656ca985@redhat.com> Message-ID: <5d5fc37f-c6fd-e7c1-5e49-87ab457bfe43@openstack.org> Zane Bitter wrote: > On 05/04/18 00:12, Ian Wienand wrote: >> On 04/05/2018 10:23 AM, Zane Bitter wrote: >>> On 04/04/18 17:26, Jimmy McArthur wrote: >>> Here's the thing: email alerts. They're broken. >> >> This is the type of thing we can fix if we know about it ... I will >> contact you off-list because the last email to what I presume is you >> went to an address that isn't what you've sent from here, but it was >> accepted by the remote end. > > Yeah, my mails get proxied through a fedora project address. I am > getting them now though (since the SW update in January 2017 - and even > before that I did get notifications if somebody @'d me). The issue is > the content is not filtered by subscribed tags according to the > preferences I have set, so they're useless for keeping up with my areas > of interest. > > It's not just a mail delivery problem, and I guarantee it's not just me. > It's a bug somewhere in StackExchange itself. Yes I can confirm email alerts are broken. I currently receive a weekly digest about "ceilometer", "vip", "api", "nova", "openstack" tags while I'm subscribed to "release" and "rootwrap". It's as if I received someone else's email alerts... (Software is not StackExchange, it's AskBot).
-- Thierry Carrez (ttx) From superuser151093 at gmail.com Fri Apr 6 08:10:32 2018 From: superuser151093 at gmail.com (super user) Date: Fri, 6 Apr 2018 17:10:32 +0900 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: <1522850139-sup-8937@lrrr.local> References: <1521110096-sup-3634@lrrr.local> <1522276901-sup-6868@lrrr.local> <1522850139-sup-8937@lrrr.local> Message-ID: Hope you fix this soon; there are many patches that depend on the 'match the minimum version' problem, which causes requirements-check to fail. On Wed, Apr 4, 2018 at 10:58 PM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-03-28 18:53:03 -0400: > > > > Because we had some communication issues and did a few steps out > > of order, when this patch lands projects that have approved > > bot-proposed requirements updates may find that their requirements > > and lower-constraints files no longer match, which may lead to job > > failures. It should be easy enough to fix the problems by making > > the values in the constraints files match the values in the > > requirements files (by editing either set of files, depending on > > what is appropriate). I apologize for any inconvenience this causes. > > In part because of this, and in part because of some issues calculating > the initial set of lower-constraints, we have several projects where > their lower-constraints don't match the lower bounds in the requirements > file(s). Now that the check job has been updated with the new rules, > this is preventing us from landing the patches to add the > lower-constraints test job (so those rules are working!). > > I've prepared a script to help fix up the lower-constraints.txt > based on values in requirements.txt and test-requirements.txt. > That's not everything, but it should make it easier to fix the rest. > > See https://review.openstack.org/#/c/558610/ for the script. I'll work > on those pep8 errors later today so we can hopefully land it soon, but > in the mean time you'll need to check out that commit and follow the > instructions for setting up a virtualenv to run the script. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Fri Apr 6 08:31:10 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Fri, 6 Apr 2018 10:31:10 +0200 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> Message-ID: <20180406083110.tydltwfe23kiq7bw@localhost> On 05/04, Matt Riedemann wrote: > On 4/5/2018 3:15 AM, Gorka Eguileor wrote: > > But just to be clear, Nova will have to initialize the connection with > > the re-imagined volume and attach it again to the node, as in all cases > > (except when defaulting to downloading the image and dd-ing it to the > > volume) the result will be a new volume in the backend.
> > Yeah I think I pointed this out earlier in this thread on what I thought the > steps would be on the nova side with respect to creating a new empty > attachment to keep the volume 'reserved' while we delete the old attachment, > re-image the volume, and then update the volume attachment for the new > connection. I think that would be similar to how shelve and unshelve works > in nova. > > Would this really require a swap volume call from Cinder? I'd hope not since > swap volume in itself is a pretty gross operation on the nova side. > > -- > > Thanks, > > Matt > Hi Matt, Yes, it will require a volume swap, with the worst case scenario exception where we dd the image into the volume. In the same way that anyone would expect a re-imaging preserving the volume id, one would also expect it to behave like creating a new volume from the same image: be as fast and take up as much space on the backend. And to do so we have to use existing optimized mechanisms that will only work when creating a new volume. The alternative would be to have the worst case scenario as the default (attach and dd the image) and make *ALL* Cinder drivers implement the optimized mechanism where they can efficiently re-imagine a volume. I can't talk for the Cinder team, but I for one would oppose this alternative. Cheers, Gorka. From thierry at openstack.org Fri Apr 6 08:35:45 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Apr 2018 10:35:45 +0200 Subject: [openstack-dev] [tc] Technical Committee Status update, April 6th Message-ID: <36f2295a-6a28-1bcb-b1cc-ff34f3b212af@openstack.org> Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of currently-considered changes at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recently-approved changes == * Removed repository: puppet-ganesha == Voting in progress == We are still missing a couple of votes on the proposal to set the expectation early on that official projects will have to drop direct tagging (or branching) rights in their Gerrit ACLs once they are made official, as those actions will be handled by the Release Management team through the openstack/releases repository. This will likely be approved early next week, so please post your concerns on the review if you have any: https://review.openstack.org/557737 == Under discussion == We got lots of replies and comments on the thread and the review proposing the split of the kolla-kubernetes deliverable out of the Kolla team. Discussion has now moved to reviewing the deliverables currently regrouped under the Kolla team, and considering whether the current grouping is a feature or a bug. If you have an opinion on that, please chime in on the review or the ML thread: https://review.openstack.org/#/c/552531/ http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html The discussion is also still on-going the Adjutant project team addition. The main concerns raised are: (1) presenting Adjutant as an "API factory" around any business logic raises interoperability fears. 
We don't really want an official "OpenStack service" with an open-ended API or set of APIs; and (2) there is concern that some of the "core plugins" could actually be implemented in existing projects, and that Adjutant is working around the pain of landing those features in the (set of) projects where they belong by creating a whole-new project to land them faster. You can jump in the discussion here: https://review.openstack.org/#/c/553643/ The last open discussion is around a proposed tag to track which deliverables implemented a lower dependency bounds check voting test job. After discussion at the last TC office hour, it might be abandoned in favor of making it a community goal for the Stein cycle and then a general expectation for projects using global requirements. Please see: https://review.openstack.org/557501 == TC member actions/focus/discussions for the coming week(s) == For the coming week we'll confirm which topics we want to propose for the Forum in Vancouver, and file them on forumtopics.openstack.org before the April 15 deadline. There is still time to propose some at: https://etherpad.openstack.org/p/YVR-forum-TC-sessions The election season will start on April 10 with nominations for candidates for the 7 open seats. I also expect debate to continue around the three proposals under discussion. == Office hours == To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays Feel free to add your own office hour conversation starter at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Cheers, -- Thierry Carrez (ttx) From slawek at kaplonski.pl Fri Apr 6 08:38:03 2018 From: slawek at kaplonski.pl (Sławek Kapłoński) Date: Fri, 6 Apr 2018 10:38:03 +0200 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: Message-ID: Hi, One more question about the implementation of this goal. Should we take care of (and add to the story board [1]) projects like: openstack/neutron-lbaas openstack/networking-cisco openstack/networking-dpm openstack/networking-infoblox openstack/networking-l2gw openstack/networking-lagopus openstack/neutron-dynamic-routing These look like they should probably also be changed in some way. Or maybe the list of affected projects in [1] is "closed", and if some project is not there it shouldn't be changed to accomplish this community goal? [1] https://storyboard.openstack.org/#!/story/2001545 — Best regards Slawek Kaplonski slawek at kaplonski.pl > Message written by ChangBo Guo on 26.03.2018, at 14:15: > > > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > Hi, > > I took care of implementation of [1] in Neutron and I have couple questions to about this goal. > > 1. Should we only change "restart_method" to mutate as is described in [2] ? I did already something like that in [3] - is it what is expected? > > Yes , let's the only thing. we need test if that if it works . > > 2. How I can check if this change is fine and config option are mutable exactly? For now when I change any config option for any of neutron agents and send SIGHUP to it it is in fact "restarted" and config is reloaded even with this old restart method.
> > good question, we indeed thought this question when we proposal the goal. But It seems difficult to test that consuming projects like Neutron automatically. > > 3. Should we add any automatic tests for such change also? Any examples of such tests in other projects maybe? > There is no example for tests now, we only have some unit tests in oslo.service . > > [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html > [3] https://review.openstack.org/#/c/554259/ > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > -- > ChangBo Guo(gcb) > Community Director @EasyStack > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From xinni.ge1990 at gmail.com Fri Apr 6 08:53:03 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Fri, 6 Apr 2018 17:53:03 +0900 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module Message-ID: Hello there, I have some questions about the value of parameter `dest` of copy module in this file. openstack-zuul-jobs/playbooks/xstatic/check-version.yaml Line:6 dest: xstatic_check_version.py Ansible documents describe `dest` as "Remote absolute path where the file should be copied to". (http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2) I am not quite familiar with ansible but maybe it could be `{{ zuul.executor.log_root }}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or something similar? Actually I ran into the problem trying to release a new xstatic package. The release patch was merged, but the release job failed to execute. Just wondering whether or not it could be the reason for the failure. I am not sure about how to debug this, or how to re-launch the release job. I would really appreciate it if anybody could kindly help me. Best Regards, Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Fri Apr 6 08:59:32 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 6 Apr 2018 10:59:32 +0200 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module In-Reply-To: References: Message-ID: <80f29416-245c-fc06-1018-1f7a873b79a1@suse.com> On 2018-04-06 10:53, Xinni Ge wrote: > Hello there, > > I have some questions about the value of parameter `dest` of copy module > in this file. > > openstack-zuul-jobs/playbooks/xstatic/check-version.yaml > Line:6        dest: xstatic_check_version.py > > Ansible documents describe `dest` as  "Remote absolute path where the > file should be copied to".
> (http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2) >   > I am not quite familiar with ansible but maybe it could be `{{ > zuul.executor.log_root > }}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or > something similar ? > > Actually I ran into the problem trying to release a new xstatic package. > The release patch was merged but fail to execute the release job. Just > wondering whether or not it could be the reason of the failure. Could you share a link to the logs for the job that failed, please? > I am not sure about how to debug this, or how to re-launch the release job. > I am very appreciate of it if anybody could kindly help me. Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From j.harbott at x-ion.de Fri Apr 6 09:02:29 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Fri, 6 Apr 2018 09:02:29 +0000 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> Message-ID: 2018-04-05 19:26 GMT+00:00 Matthew Thode : > On 18-04-05 20:11:04, Graham Hayes wrote: >> On 05/04/18 16:47, Matthew Thode wrote: >> > eventlet-0.22.1 has been out for a while now, we should try and use it. >> > Going to be fun times. >> > >> > I have a review projects can depend upon if they wish to test. >> > https://review.openstack.org/533021 >> >> It looks like we may have an issue with oslo.service - >> https://review.openstack.org/#/c/559144/ is failing gates. >> >> Also - what is the dance for this to get merged? It doesn't look like we >> can merge this while oslo.service has the old requirement restrictions. >> > > The dance is as follows. > > 0. provide review for projects to test new eventlet version > projects using eventlet should make backwards compat code changes at > this time. But this step is currently failing. Keystone doesn't even start when eventlet-0.22.1 is installed, because loading oslo.service fails with its pkg definition still requiring the capped eventlet: http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 So it looks like we need to have an uncapped release of oslo.service before we can proceed here. From dtantsur at redhat.com Fri Apr 6 09:02:56 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 6 Apr 2018 11:02:56 +0200 Subject: [openstack-dev] Proposal: The OpenStack Client Library Guide In-Reply-To: <2e195faf-6c1f-6a81-794d-59c97a371fd8@catalyst.net.nz> References: <2e195faf-6c1f-6a81-794d-59c97a371fd8@catalyst.net.nz> Message-ID: <4aaab1f2-1000-5151-115a-f2c19016c8cc@redhat.com> Hi Adrian, Thanks for starting this discussion. I'm adding openstack-sigs ML, please keep it in the loop. We in API SIG are interested in providing guidance on not only writing OpenStack APIs, but also consuming them. For example, we have merged a guideline on consuming API versions: http://specs.openstack.org/openstack/api-wg/guidelines/sdk-exposing-microversions.html More inline. 
On 04/06/2018 05:55 AM, Adrian Turjak wrote: > Hello fellow OpenStackers, > > As some of you have probably heard me rant, I've been thinking about how > to better solve the problem with various tools that support OpenStack or > are meant to be OpenStack clients/tools which don't always work as > expected by those of us directly in the community. > > Mostly around things like auth and variable name conventions, and things > which often there should really be consistency and overlap. > > The example that most recently triggered this discussion was how > OpenStackClient (and os-client-config) supports certain elements of > clouds.yaml and ENVVAR config, while Terraform supports it differently. > Both you'd often run on the cli and often both in the same terminal, so > it is always weird when certain auth and scoping values don't work the > same. This is being worked on, but little problems like this an an > ongoing problem. > > The proposal, write an authoritative guide/spec on the basics of > implementing a client library or tool for any given language that talks > to OpenStack. > > Elements we ought to cover: > - How all the various auth methods in Keystone work, how the whole authn > and authz process works with Keystone, and how to actually use it to do > what you want. Yes please! > - What common client configuration options exist and how they work > (common variable names, ENVVARs, clouds.yaml), with something like > common ENVVARs documented and a list maintained so there is one > definitive source for what to expect people to be using. Even bigger YES > - Per project guides on how the API might act that helps facilitate > starting to write code against it beyond just the API reference, and > examples of what to expect. Not exactly a duplicate of the API ref, but > more a 'common pitfalls and confusing elements to be ware of' section > that builds on the API ref of each project. Oh yeah, esp. what to be mindful of when writing an SDK in a statically typed language (I had quite some fun with rust-openstack, I guess Terraform had similar issues). > > There are likely other things we want to include, and we need to work > out what those are, but ideally this should be a new documentation > focused project which will result in useful guide on what someone needs > to take any programming language, and write a library that works as we > expect it should against OpenStack. Such a guide would also help any > existing libraries ensure they themselves do fully understand and use > the OpenStack auth and service APIs as expected. It should also help to > ensure programmers working across multiple languages and systems have a > much easier time interacting with all the various libraries they might > touch. > > A lot of this knowledge exists, but it's hard to parse and not well > documented. We have reference implementations of it all in the likes of > OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which > os-client-config is now a part of), but what we need is a language > agnostic guide rather than the assumption that people will read the code > of our official projects. Even the API ref itself isn't entirely helpful > since in a lot of cases it only covers the most basic of examples for > each API. > > There appears to be interest in something like this, so lets start with > a mailing list discussion, and potentially turn it into something more > official if this leads anywhere useful. 
:) Count me in :) > > Cheers, > Adrian > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From delightwook at ssu.ac.kr Fri Apr 6 09:07:25 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Fri, 6 Apr 2018 18:07:25 +0900 Subject: [openstack-dev] [Vitrage] New proposal for analysis. References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr> Message-ID: <004a01d3cd86$aea447f0$0becd7d0$@ssu.ac.kr> Hello Ifat, If possible, could I write a blueprint based on what we discussed? (architecture, specs) After checking the blueprint, it would be better to proceed with specific updates on the various issues. What do you think? Thanks. Best regards, Minwook. From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Thursday, April 5, 2018 10:53 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for the good comments. It was very helpful. As you said, I tested std.ssh, and I was able to get much better results. I am confident that this is what I want. We can use std.ssh to provide convenience to users with a much more efficient way to configure shell scripts / monitoring agent automation (for Zabbix history, etc.) / other commands. In addition, std_actions.py contained a number of features that could be used for this proposal (such as HTTP). So if we actively use and utilize the actions in std_actions.py, we might be able to construct neat code without the duplicate functionality that you worried about. It has been a great help. In addition, I also agree that Vitrage action is required for Mistral. If possible, I might be able to do that in the future (ASAP). Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, April 4, 2018 4:21 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I discussed this issue with a Mistral contributor. Mistral has a long list of actions that can be used. Specifically, you can use the std.ssh action to execute shell scripts. Some useful commands: mistral action-list mistral action-get I’m not sure about the output of the std.ssh, and whether you can get it from the action. I suggest you try it and see how it works. The action is implemented here: https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py If std.ssh does not suit your needs, you also have an option to implement and run your own action in Mistral (either as an ssh action or as python code). And BTW, it is not related to your current use case, but we can also add Vitrage actions to Mistral, so the user can access Vitrage information (get topology, get alarms) from Mistral workflows.
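If you want a quick manual test before writing a full workflow, something along these lines should run the action ad hoc (an untested sketch; the host, user and command are placeholders):

mistral run-action std.ssh '{"cmd": "df -h", "host": "my-vm", "username": "stack"}'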
Best regards, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 3 April 2018 at 15:19 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. Your comments have been a great help to the proposal. (sorry, I did not think we could use Mistral). If we use the Mistral workflow for the proposal, we can get better results (we can get good results on performance and code conciseness). Also, if we use the Mistral workflow, we do not need to write any unnecessary code. Since I don't know much about Mistral yet, I think it would be better to work out the most efficient design, including Mistral, once I have a better grasp of it. If we run a check through a Mistral workflow, how about providing users with a choice of tools that have the capability to perform checks? We can get the results of the check through Mistral and the tools, but I think we need at least minimal functionality to manage them. What do you think? I attached a picture of the actual UI that I simply implemented. I hope it helps you understand. (The parameter and content have no meaning and are a simple example.) : ) Thanks. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 3, 2018 8:31 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thanks for the explanation, I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense. However, I don’t want to re-invent the wheel and add to Vitrage functionality that already exists in other projects. How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I’m not so sure about the UI part, we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc. What do you think? Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 3 April 2018 at 5:36 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about it. We also need to think about targets, scope, and so on. The reason I do not think of tools like Zabbix, Nagios, and Prometheus as a tool to run checks is because we need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host. But the scope of the idea, I think, includes the inside of the VM. Therefore, configuring the agent automatically inside the VM may not be easy. (although we can use parameters like user-data)
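For example, a cloud-init user-data snippet along these lines could install and start an agent at boot (a rough, hypothetical sketch; the package and service names are illustrative only and vary by distro and agent):

#cloud-config
packages:
  - zabbix-agent
runcmd:
  - [ systemctl, enable, --now, zabbix-agent ]

But this only helps for newly created VMs, and the agent still runs inside every VM all the time.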
The check service may incur temporary overhead, but the agent configuration can cause constant overhead. And Zabbix history can be another task for Vitrage. If we configure the agents themselves and exclude the VM's internal checks, we can provide functionality with simple code. how is it? Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Monday, April 2, 2018 10:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service. Some technical details: * The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But can be done easily. * A relevant Zabbix API is history.get [1] * If you are not using Zabbix, I assume that other monitoring tools have similar capabilities What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user is viewing the data in the format that you suggested? [1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get Thanks, Ifat From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Monday, 2 April 2018 at 4:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thank you for the reply. :) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention) The same security issues arise as you say. But now Vitrage does not call external APIs. The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API. -> api / controllers / v1 / checks.py is called. -> checks service is called. In accordance with the above flow, passing through the Vitrage API is the purpose of data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I think wrong, please tell me anytime. :) Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? 
The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate api address from the vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the api address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address for the check component, which can cause the host and VM to execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user can not check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing temporary lists (for example, up to 10), the user can see the previous results and think that it is also possible to empty the list by the user. how is it? Thank you. Best Regrads, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. 
Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component Last time I referred to this as an API handler, but it would be better to call the check component directly from Vitrage-api without one. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This is a proposal that we expect to be useful from a user's perspective. From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save anything to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage, like the add-action-list panel, but that does not seem to be enough to implement all the functions. If you do not mind, we would have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. This is only my opinion, so please tell us if any part is unnecessary. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see them in the panel. If we switch to another menu before the check is complete, we will not be able to see the results. Parallel checking is a real concern. (It can cause excessive overhead.) For earlier results, it may be okay to save them temporarily until we exit the panel; we could then see previous results through the temporarily saved list. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I will implement a separate component to handle it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. That is expected to be really good. It would be very useful if the Entity-Graph generated an alarm based on the check result. I think we can talk about that part in detail later. My answers are my own opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas.
I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. * I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage's role is to be a place where it is easy and intuitive for the user to execute external actions/checks. * Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? * Any thoughts of what component will implement those checks? Or maybe these will be just scripts? * It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim <delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org> Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" <openstack-dev at lists.openstack.org> Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. ( https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities across the entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with a VM or host, or when we want to check its status, we need to access the console of each VM and host individually. This causes unnecessary work as the number of VMs and hosts increases. My new suggestion is that, with a large number of VMs and hosts, we would not need to connect directly to each VM or host console to enter system commands. Instead, we could send a system command to the VMs and hosts in the cloud through this proposal and simply check the results. I have written some use-cases to explain the function efficiently. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any agent/client that could put load on the VM or host. 2. I want to provide a simple UI so that users or administrators can get the desired information from multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed...
Name: winmail.dat Type: application/ms-tnef Size: 53342 bytes Desc: not available URL: From jistr at redhat.com Fri Apr 6 09:09:42 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 6 Apr 2018 11:09:42 +0200 Subject: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs In-Reply-To: References: Message-ID: <51180736-f593-9e95-f499-528798890bde@redhat.com> On 5.4.2018 21:04, Alex Schultz wrote: > On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin wrote: >> FYI... >> >> This is news to me so thanks to Emilien for pointing it out [1]. >> There are official tags for tripleo launchpad bugs. Personally, I like what >> I've seen recently with some extra tags as they could be helpful in finding >> the history of particular issues. >> So hypothetically would it be "wrong" to create an official tag for each >> featureset config number upstream. I ask because that is adding a lot of >> tags but also serves as a good test case for what is good/bad use of tags. >> > > We list official tags over in the specs repo[0]. That being said as > we investigate switching over to storyboard, we'll probably want to > revisit tags as they will have to be used more to replace some of the > functionality we had with launchpad (e.g. milestones). You could > always add the tags without being an official tag. I'm not sure I > would really want all the featuresets as tags. I'd rather see us > actually figure out what component is actually failing than relying on > a featureset (and the Rosetta stone for decoding featuresets to > functionality[1]). We could also use both alongside. Component-based tags better relate to the actual root cause of the bug, while featureset-based tags are useful in relation to CI. E.g. "I see fs037 failing, i wonder if anyone already reported a bug for it" -- if the reporter tagged the bug, it would be really easy to figure out the answer. This might also again bring up the question of better job names to allow easier mapping to featuresets. IMO: tripleo-ci-centos-7-containers-multinode -- not great tripleo-ci-centos-7-featureset010 -- not great tripleo-ci-centos-7-containers-mn-fs010 -- *happy face* Jirka > > > Thanks, > -Alex > > > [0] http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30 > [1] https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21 >> Thanks >> >> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From xinni.ge1990 at gmail.com Fri Apr 6 09:20:05 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Fri, 6 Apr 2018 18:20:05 +0900 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module In-Reply-To: References: <80f29416-245c-fc06-1018-1f7a873b79a1@suse.com> Message-ID: Sorry, forgot to reply to the mail list. On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge wrote: > Hi, Andreas. > > Thanks for reply. 
This is the link to the log I am seeing. > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/ > > > On Fri, Apr 6, 2018 at 5:59 PM, Andreas Jaeger wrote: >> On 2018-04-06 10:53, Xinni Ge wrote: >> > Hello there, >> > >> > I have some questions about the value of the parameter `dest` of the copy module >> > in this file: >> > >> > openstack-zuul-jobs/playbooks/xstatic/check-version.yaml >> > Line 6: dest: xstatic_check_version.py >> > >> > The Ansible documents describe `dest` as "Remote absolute path where the >> > file should be copied to". >> > (http://docs.ansible.com/ansible/devel/modules/copy_module.html#id2) >> > >> > I am not quite familiar with Ansible, but maybe it could be `{{ >> > zuul.executor.log_root >> > }}/openstack-zuul-jobs/playbooks/xstatic/check-version.yaml` or >> > something similar? >> > >> > Actually, I ran into the problem trying to release a new xstatic package. >> > The release patch was merged but failed to execute the release job. Just >> > wondering whether or not this could be the reason for the failure. >> >> Could you share a link to the logs for the job that failed, please? >> >> > I am not sure how to debug this, or how to re-launch the release >> job. >> > I would really appreciate it if anybody could kindly help me. >> >> Andreas >> -- >> Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi >> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany >> GF: Felix Imendörffer, Jane Smithard, Graham Norton, >> HRB 21284 (AG Nürnberg) >> GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 >> >> > > > -- > 葛馨霓 Xinni Ge > -- 葛馨霓 Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From aj at suse.com Fri Apr 6 09:32:54 2018 From: aj at suse.com (Andreas Jaeger) Date: Fri, 6 Apr 2018 11:32:54 +0200 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module In-Reply-To: References: <80f29416-245c-fc06-1018-1f7a873b79a1@suse.com> Message-ID: <7cc5b0e9-b366-a80d-9bfb-38032291d91a@suse.com> On 2018-04-06 11:20, Xinni Ge wrote: > Sorry, forgot to reply to the mail list. > > On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge > wrote: > > Hi, Andreas. > > Thanks for the reply. This is the link to the log I am seeing. > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/ > > Thanks, your analysis is correct; it seems we seldom release xstatic packages ;( The fix is at https://review.openstack.org/559300 Once that is merged, an infra-root can rerun the release job - please ask on the #openstack-infra IRC channel, Andreas -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From amotoki at gmail.com Fri Apr 6 09:34:35 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Fri, 6 Apr 2018 18:34:35 +0900 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: Message-ID: Hi Slawek, 2018-04-06 17:38 GMT+09:00 Sławek Kapłoński : > Hi, > > One more question about implementation of this goal. Should we take care of > (and add to the storyboard [1]) projects like: > In my understanding, tasks in the storyboard story are prepared per project team listed in the governance.
IMHO, repositories which belong to a project team should be handled as a single task. The situations vary across repositories. > openstack/neutron-lbaas > This should be covered by the octavia team. > openstack/networking-cisco > openstack/networking-dpm > openstack/networking-infoblox > openstack/networking-l2gw > openstack/networking-lagopus > The above repos are not official repos. Maintainers of each repo can follow the community goal, but there is no need for them to be tracked as the neutron team. > openstack/neutron-dynamic-routing > This repo is part of the neutron team. We, the neutron team, need to cover this. FYI: the list of official repositories covered by the neutron team is available here. https://governance.openstack.org/tc/reference/projects/neutron.html Thanks, Akihiro > > These look like they should probably also be changed in some way. Or maybe the > list of affected projects in [1] is „closed”, and if some project is not > there it shouldn’t be changed to accomplish this community goal? > > [1] https://storyboard.openstack.org/#!/story/2001545 > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > > > > Message written by ChangBo Guo on > 26.03.2018 at 14:15: > > > > > > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > > Hi, > > > > I took care of the implementation of [1] in Neutron and I have a couple of > questions about this goal. > > > > 1. Should we only change "restart_method" to mutate as is described in > [2]? I already did something like that in [3] - is that what is expected? > > > > Yes, let's change only that one thing. We need to test whether it works. > > > > 2. How can I check that this change is fine and that config options are in fact > mutable? For now, when I change any config option for any of the neutron agents > and send SIGHUP to it, the agent is in fact "restarted" and the config is reloaded even > with the old restart method. > > > > Good question; we did indeed consider this when we proposed the > goal, but it seems difficult to test this automatically in consuming projects like Neutron. > > > > 3. Should we add any automatic tests for such a change also? Any examples > of such tests in other projects maybe? > > There are no example tests for now; we only have some unit tests in > oslo.service.
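To make the thread easier to follow, here is a minimal sketch of what the mutable-configuration pattern described in the links just below boils down to — this is illustrative only, not Neutron's actual patch, and 'MyService' is a stand-in for a real oslo.service service:

    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF
    # Options must be declared mutable for SIGHUP to update them in place.
    CONF.register_opts([cfg.BoolOpt('my_debug', default=False, mutable=True)])

    class MyService(service.Service):
        pass

    # restart_method='mutate' makes SIGHUP re-read the mutable options
    # instead of fully restarting the service (the default is 'reload').
    launcher = service.launch(CONF, MyService(), restart_method='mutate')
    launcher.wait()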
> > > > [1] https://governance.openstack.org/tc/goals/rocky/enable- > mutable-configuration.html > > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html > > [3] https://review.openstack.org/#/c/554259/ > > > > — > > Best regards > > Slawek Kaplonski > > slawek at kaplonski.pl > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > ChangBo Guo(gcb) > > Community Director @EasyStack > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xinni.ge1990 at gmail.com Fri Apr 6 09:37:41 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Fri, 06 Apr 2018 09:37:41 +0000 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module In-Reply-To: <7cc5b0e9-b366-a80d-9bfb-38032291d91a@suse.com> References: <80f29416-245c-fc06-1018-1f7a873b79a1@suse.com> <7cc5b0e9-b366-a80d-9bfb-38032291d91a@suse.com> Message-ID: Thank you very much! I will follow up via irc. 2018年4月6日(金) 18:34 Andreas Jaeger : > On 2018-04-06 11:20, Xinni Ge wrote: > > Sorry, forgot to reply to the mail list. > > > > On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge > > wrote: > > > > Hi, Andreas. > > > > Thanks for reply. This is the link of log I am seeing. > > > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/ > > < > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/ > > > > > > thanks, your analysis is correct, seem we seldom release xstatic packages > ;( > > fix is at https://review.openstack.org/559300 > > Once that is merged, an infra-root can rerun the release job - please > ask on #openstack-infra IRC channel, > > Andreas > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > -- Best Regards, Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From kchamart at redhat.com Fri Apr 6 10:07:14 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 6 Apr 2018 12:07:14 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> Message-ID: <20180406100714.GB18076@paraplu> On Thu, Apr 05, 2018 at 10:32:13PM +0200, Thomas Goirand wrote: Hey Zigo, thanks for the detailed response; a couple of comments below. [...] 
> backport of libvirt/QEMU/libguestfs in more detail > --------------------------------------------------- > > I already attempted the backports from Debian Buster to Stretch. > > All 3 components (libvirt, qemu & libguestfs) could be built > without extra dependencies, which is a very good thing. > > - libvirt 4.1.0 compiled without issue, though the dh_install phase > failed with this error: > > dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried > in "." and "debian/tmp") > dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/ > dh_install: missing files, aborting That seems like a problem in the Debian packaging system, not in libvirt. I double-checked with the upstream folks, and the install rules for the Wireshark plugin don't have /*/ in there. > - qemu 2.11 built perfectly with zero changes. > > - libguestfs 1.36.13 only needed to have fdisk replaced by util-linux as > a build-depends (fdisk is now a separate package in Buster). Great. Note: you don't even have to build the versions from 'Buster', which are quite new. Just the slightly more conservative libvirt 3.2.0 and QEMU 2.9.0 -- only if it's possible. [...] > Conclusion: > ----------- > > If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0 > is fine, please choose 3.0.0 as minimum. > > If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is > fine, please choose 2.8.0 as minimum. > > If you don't absolutely need new features from libguestfs 1.36 and 1.34 > is fine, please choose 1.34 as minimum. > > If you do need these new features, I'll do my best to adapt. :) Sure, we can use 3.0.0 (& QEMU 2.8.0) instead of 3.2.0, as we don't want to "penalize" (that was never the intention) distros with slightly older versions. That said ... I just spent time comparing the release notes of libvirt 3.0.0 and libvirt 3.2.0 [1][2]. By using libvirt 3.2.0 and QEMU 2.9.0, Debian users would be spared a lot of critical bugs (see the full list in [3]) in the CPU comparison area. [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg00000.html -- Release of libvirt-3.2.0 [2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg00003.html -- Release of libvirt-3.0.0 [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html [...] -- /kashyap From mbooth at redhat.com Fri Apr 6 10:09:26 2018 From: mbooth at redhat.com (Matthew Booth) Date: Fri, 6 Apr 2018 11:09:26 +0100 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <20180406083110.tydltwfe23kiq7bw@localhost> References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> Message-ID: On 6 April 2018 at 09:31, Gorka Eguileor wrote: > On 05/04, Matt Riedemann wrote: >> On 4/5/2018 3:15 AM, Gorka Eguileor wrote: >> > But just to be clear, Nova will have to initialize the connection with >> > the re-imaged volume and attach it again to the node, as in all cases >> > (except when defaulting to downloading the image and dd-ing it to the >> > volume) the result will be a new volume in the backend.
>> >> Yeah I think I pointed this out earlier in this thread on what I thought the >> steps would be on the nova side with respect to creating a new empty >> attachment to keep the volume 'reserved' while we delete the old attachment, >> re-image the volume, and then update the volume attachment for the new >> connection. I think that would be similar to how shelve and unshelve works >> in nova. >> >> Would this really require a swap volume call from Cinder? I'd hope not since >> swap volume in itself is a pretty gross operation on the nova side. >> >> -- >> >> Thanks, >> >> Matt >> > > Hi Matt, > > Yes, it will require a volume swap, with the worst case scenario > exception where we dd the image into the volume. I think you're talking at cross purposes here: this won't require a swap volume. Apart from anything else, swap volume only works on an attached volume, and as previously discussed Nova will detach and re-attach. Gorka, the Nova api Matt is referring to is called volume update externally. It's the operation required for live migrating an attached volume between backends. It's called swap volume internally in Nova. Matt > > In the same way that anyone would expect a re-imaging preserving the > volume id, one would also expect it to behave like creating a new volume > from the same image: be as fast and take up as much space on the > backend. > > And to do so we have to use existing optimized mechanisms that will only > work when creating a new volume. > > The alternative would be to have the worst case scenario as the default > (attach and dd the image) and make *ALL* Cinder drivers implement the > optimized mechanism where they can efficiently re-imagine a volume. I > can't talk for the Cinder team, but I for one would oppose this > alternative. > > Cheers, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From slawek at kaplonski.pl Fri Apr 6 10:37:10 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdlayBLYXDFgm/FhHNraQ==?=) Date: Fri, 6 Apr 2018 12:37:10 +0200 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: References: Message-ID: <5A518503-7049-4574-B3AC-0F093E94EF75@kaplonski.pl> Hi, Thanks Akihiro for help. I added „neutron-dynamic-routing” task to this story and I will push patch for it soon. There is still so many things that I need to learn about OpenStack and Neutron :) — Best regards Slawek Kaplonski slawek at kaplonski.pl > Wiadomość napisana przez Akihiro Motoki w dniu 06.04.2018, o godz. 11:34: > > > Hi Slawek, > > 2018-04-06 17:38 GMT+09:00 Sławek Kapłoński : > Hi, > > One more question about implementation of this goal. Should we take care (and add to story board [1]) projects like: > > In my understanding, tasks in the storyboard story are prepared per project team listed in the governance. > IMHO, repositories which belong to a project team should be handled as a single task. > > The situations vary across repositories. > > > openstack/neutron-lbaas > > This should be covered by octavia team. > > openstack/networking-cisco > openstack/networking-dpm > openstack/networking-infoblox > openstack/networking-l2gw > openstack/networking-lagopus > > The above repos are not official repos. 
> Maintainers of each repo can follow the community goal, but there is no need to be tracked as the neutron team. > > openstack/neutron-dynamic-routing > > This repo is part of the neutron team. We, the neutron team need to cover this. > > FYI: The official repositories covered by the neutron team is available here. > https://governance.openstack.org/tc/reference/projects/neutron.html > > Thanks, > Akihiro > > > Which looks that should be probably also changed in some way. Or maybe list of affected projects in [1] is „closed” and if some project is not there it shouldn’t be changed to accomplish this community goal? > > [1] https://storyboard.openstack.org/#!/story/2001545 > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > > > > Wiadomość napisana przez ChangBo Guo w dniu 26.03.2018, o godz. 14:15: > > > > > > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : > > Hi, > > > > I took care of implementation of [1] in Neutron and I have couple questions to about this goal. > > > > 1. Should we only change "restart_method" to mutate as is described in [2] ? I did already something like that in [3] - is it what is expected? > > > > Yes , let's the only thing. we need test if that if it works . > > > > 2. How I can check if this change is fine and config option are mutable exactly? For now when I change any config option for any of neutron agents and send SIGHUP to it it is in fact "restarted" and config is reloaded even with this old restart method. > > > > good question, we indeed thought this question when we proposal the goal. But It seems difficult to test that consuming projects like Neutron automatically. > > > > 3. Should we add any automatic tests for such change also? Any examples of such tests in other projects maybe? > > There is no example for tests now, we only have some unit tests in oslo.service . > > > > [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html > > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html > > [3] https://review.openstack.org/#/c/554259/ > > > > — > > Best regards > > Slawek Kaplonski > > slawek at kaplonski.pl > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > -- > > ChangBo Guo(gcb) > > Community Director @EasyStack > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From doug at doughellmann.com Fri Apr 6 11:42:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 06 Apr 2018 07:42:14 -0400 Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects In-Reply-To: References: <1521110096-sup-3634@lrrr.local> <1522276901-sup-6868@lrrr.local> <1522850139-sup-8937@lrrr.local> Message-ID: <1523014869-sup-4635@lrrr.local> Excerpts from super user's message of 2018-04-06 17:10:32 +0900: > Hope you fix this soon, there are many patches depend on the 'match the > minimum version' problem which causes requirements-check fail. The problem is with *those patches* and not the check. I've been trying to update some, but my time has been limited this week for personal reasons. I encourage project teams to run the script I provided or edit their lower-constraints.txt file by hand to fix the issues. Doug From mordred at inaugust.com Fri Apr 6 11:57:31 2018 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 6 Apr 2018 06:57:31 -0500 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <20180406003909.GA28653@localhost.localdomain> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> <20180406003909.GA28653@localhost.localdomain> Message-ID: <51e52655-4aa5-5809-1360-db4fe5fe4443@inaugust.com> On 04/05/2018 07:39 PM, Paul Belanger wrote: > On Thu, Apr 05, 2018 at 01:27:13PM -0700, Clark Boylan wrote: >> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote: >>> On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote: >>>> On 18-03-31 15:00:27, Jeremy Stanley wrote: >>>>> According to a notice[1] posted to the pypa-announce and >>>>> distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 >>>>> is expected to be released in two weeks (over the April 14/15 >>>>> weekend). We know it's at least going to start breaking[2] DevStack >>>>> and we need to come up with a plan for addressing that, but we don't >>>>> know how much more widespread the problem might end up being so >>>>> encourage everyone to try it out now where they can. >>>>> >>>> >>>> I'd like to suggest locking down pip/setuptools/wheel like openstack >>>> ansible is doing in >>>> https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt >>>> >>>> We could maintain it as a separate constraints file (or infra could >>>> maintian it, doesn't mater). The file would only be used for the >>>> initial get-pip install. >>> >>> In the past we've done our best to avoid pinning these tools because 1) >>> we've told people they should use latest for openstack to work and 2) it >>> is really difficult to actually control what versions of these tools end >>> up on your systems if not latest. >>> >>> I would strongly push towards addressing the distutils package deletion >>> problem that we've run into with pip10 instead. One of the approaches >>> thrown out that pabelanger is working on is to use a common virtualenv >>> for devstack and avoid the system package conflict entirely. >> >> I was mistaken and pabelanger was working to get devstack's USE_VENV option working which installs each service (if the service supports it) into its own virtualenv. There are two big drawbacks to this. 
The first is that we would lose coinstallation of all the openstack services, which is one way we ensure they all work together at the end of the day. The second is that not all services in "base" devstack support USE_VENV and I doubt many plugins do either (neutron apparently doesn't?). >> > Yah, I agree your approach is better; I just wanted to toggle what was > supported by default. However, it is pretty broken today. I can't imagine > anybody actually using it; if so, they must be carrying downstream patches. > > If we think USE_VENV is a valid use case for per-project venvs, I suggest we > continue to fix it and update neutron to support it. Otherwise, maybe we should > rip it out and replace it. I'd vote for ripping it out. From kchamart at redhat.com Fri Apr 6 12:08:49 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 6 Apr 2018 14:08:49 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <902f872d-5b4a-af99-1bc5-3fa2bfdf3fe3@gmail.com> Message-ID: <20180406120849.GC18076@paraplu> On Thu, Apr 05, 2018 at 06:11:26PM -0500, Matt Riedemann wrote: > On 4/5/2018 3:32 PM, Thomas Goirand wrote: > > If you don't absolutely need new features from libvirt 3.2.0 and 3.0.0 > > is fine, please choose 3.0.0 as minimum. > > > > If you don't absolutely need new features from qemu 2.9.0 and 2.8.0 is > > fine, please choose 2.8.0 as minimum. > > > > If you don't absolutely need new features from libguestfs 1.36 and 1.34 > > is fine, please choose 1.34 as minimum. > > New features in the libvirt driver which depend on minimum versions of > libvirt/qemu/libguestfs (or arch for that matter) are always conditional, so > I think it's reasonable to go with the lower bound for Debian. We can still > support the features for the newer versions if you're running a system with > those versions, but not penalize people with slightly older versions if not.
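To make "conditional" concrete: such features are typically guarded by a minimum-version check along the following lines — a simplified sketch, not nova's actual code, and the version constant is only illustrative:

    MIN_LIBVIRT_FEATURE_X = (3, 2, 0)

    def _version_to_tuple(encoded):
        # libvirt encodes versions as major * 1000000 + minor * 1000 + release
        return (encoded // 1000000, (encoded // 1000) % 1000, encoded % 1000)

    def supports_feature_x(conn):
        # conn is an established libvirt connection; getLibVersion()
        # returns the encoded integer, e.g. 3002000 for 3.2.0
        return _version_to_tuple(conn.getLibVersion()) >= MIN_LIBVIRT_FEATURE_X

The driver then simply enables or skips a code path based on such a check, so a lower minimum does not prevent newer features from working on hosts that do have the newer versions.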
Looks like we could do a better job of linking to the relevant git repositories from some documents. I think the file you're looking for is probably: https://git.openstack.org/cgit/openstack/api-site/tree/www/index.html Happy hacking! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sfinucan at redhat.com Fri Apr 6 12:44:52 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 06 Apr 2018 13:44:52 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> Message-ID: <1523018692.22377.1.camel@redhat.com> On Thu, 2018-04-05 at 16:36 -0400, Zane Bitter wrote: > On 21/03/18 06:49, Stephen Finucane wrote: > > As noted by Monty in a prior openstack-dev post [2], some projects rely > > on a pbr extension to the 'build_sphinx' setuptools command which can > > automatically run the 'sphinx-apidoc' tool before building docs. This > > is enabled by configuring some settings in the '[pbr]' section of the > > 'setup.cfg' file [3]. To ensure this continued working, the zuul jobs > > definitions [4] check for the presence of these settings and build docs > > using the legacy 'build_sphinx' command if found. **At no point do the > > jobs call the tox job**. As a result, if you convert a project to use > > 'sphinx-build' in 'tox.ini' without resolving the autodoc issues, you > > lose the ability to build docs locally. > > > > I've gone through and proposed a couple of reverts to fix projects > > we've already broken. However, going forward, there are two things > > people should do to prevent issues like this popping up. > > > > * Firstly, you should remove the '[build_sphinx]' and '[pbr]' sections > > from 'setup.cfg' in any patches that aim to convert a project to use > > the new PTI. This will ensure the gate catches any potential > > issues. > > How can we enable warning_is_error in the gate with the new PTI? It's > easy enough to add the -W flag in tox.ini for local builds, but as you > say the tox job is never called in the gate. In the gate zuul checks for > it in the [build_sphinx] section of setup.cfg: > > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23 > > So I think it makes more sense to remove the [pbr] section, but leave > the [build_sphinx] section? > > thanks, > Zane. I'd be more in favour of changing the zuul job to build with the '-W' flag. To be honest, there is no good reason to not have this flag enabled. I'm not sure that will be a popular opinion though as it may break some projects' builds (correctly, but still). I'll propose a patch against zuul-jobs and see what happens :) Stephen From sfinucan at redhat.com Fri Apr 6 12:50:08 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Fri, 06 Apr 2018 13:50:08 +0100 Subject: [openstack-dev] Replacing pbr's autodoc feature with sphinxcontrib-apidoc In-Reply-To: <1522247496.4003.31.camel@redhat.com> References: <1522247496.4003.31.camel@redhat.com> Message-ID: <1523019008.22377.5.camel@redhat.com> On Wed, 2018-03-28 at 15:31 +0100, Stephen Finucane wrote: > As noted last week [1], we're trying to move away from pbr's autodoc > feature as part of the new docs PTI. 
To that end, I've created > sphinxcontrib-apidoc, which should do what pbr was previously doing for > us by via a Sphinx extension. > > https://pypi.org/project/sphinxcontrib-apidoc/ > > This works by reading some configuration from your documentation's > 'conf.py' file and using this to call 'sphinx-apidoc'. It means we no > longer need pbr to do this for. > > I have pushed version 0.1.0 to PyPi already but before I add this to > global requirements, I'd like to ensure things are working as expected. > smcginnis was kind enough to test this out on glance and it seemed to > work for him but I'd appreciate additional data points. The > configuration steps for this extension are provided in the above link. > To test this yourself, you simply need to do the following: > > 1. Add 'sphinxcontrib-apidoc' to your test-requirements.txt or > doc/requirements.txt file > 2. Configure as noted above and remove the '[pbr]' and '[build_sphinx]' > configuration from 'setup.cfg' > 3. Replace 'python setup.py build_sphinx' with a call to 'sphinx-build' > 4. Run 'tox -e docs' > 5. Profit? > > Be sure to let me know if anyone encounters issues. If not, I'll be > pushing for this to be included in global requirements so we can start > the migration. > > Cheers, > Stephen > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128594.html 'sphinxcontrib.apidoc' has now been added to requirements [1]. The README [2] provides a far more detailed overview of how one can migrate from the pbr features than I gave above and I'd advise anyone making changes to their documentation to follow that guide. Feel free to ping me here or on IRC (stephenfin) if you've any questions. Next up: deprecating this feature in pbr. Stephen [1] https://review.openstack.org/#/c/557532/ [2] https://github.com/sphinx-contrib/apidoc#migration-from-pbr From cdent+os at anticdent.org Fri Apr 6 12:54:36 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 6 Apr 2018 13:54:36 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-14 Message-ID: This is "contract" style update. New stuff will not be added to the lists. # Most Important There doesn't appear to be anything new with regard to most important. That which was important remains important. At the scheduler team meeting at the start of the week there was talk of working out ways to trim the amount of work in progress by using the nova priorities tracking etherpad to help sort things out: https://etherpad.openstack.org/p/rocky-nova-priorities-tracking Update provider tree and nested allocation candidates remain critical basic functionality on which much else is based. With most of provider tree done, it's really on nested allocation candidates. # What's Changed Quite a bit of provider tree related code has merged. Some negotiation happened with regard to when/if the fixes for shared providers is going to happen. I'm not sure how that resolved, if someone can follow up with that, that would be most excellent. Most of the placement-req-filter series merged. The spec for error codes in the placement API merged (code is in progress and ready for review, see below). # Questions * Eric and I discussed earlier in the week that it might be a good time to start an #openstack-placement IRC channel, for two main reasons: break things up so as to limit the crosstalk in the often very busy #openstack-nova channel and to lend a bit of momentum for going in that direction. Is this okay with everyone? If not, please say so, otherwise I'll make it happen soon. 
* Shared providers status? (I really think we need to make this go. It was one of the original value propositions of placement: being able to accurate manage shared disk.) # Bugs * Placement related bugs not yet in progress: https://goo.gl/TgiPXb 15, -1 on last week * In progress placement bugs: https://goo.gl/vzGGDQ 13, +1 on last week # Specs These seem to be divided into three classes: * Normal stuff * Old stuff not getting attention or newer stuff that ought to be abandoned because of lack of support * Anything related to the client side of using nested providers effectively. This apparently needs a lot of thinking. If there are some general sticking points we can extract and resolve, that might help move the whole thing forward? * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/545057/ mirror nova host aggregates to placement API * https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs * https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement * https://review.openstack.org/#/c/552927/ Spec for isolating configuration of placement database (This has a strong +2 on it but needs one more.) * https://review.openstack.org/#/c/552105/ Support default allocation ratios * https://review.openstack.org/#/c/438640/ Spec on preemptible servers * https://review.openstack.org/#/c/556873/ Handle nested providers for allocation candidates * https://review.openstack.org/#/c/556971/ Add Generation to Consumers * https://review.openstack.org/#/c/557065/ Proposes Multiple GPU types * https://review.openstack.org/#/c/555081/ Standardize CPU resource tracking * https://review.openstack.org/#/c/502306/ Network bandwidth resource provider * https://review.openstack.org/#/c/509042/ Propose counting quota usage from placement # Main Themes ## Update Provider Tree Most of the main guts of this have merged (huzzah!). What's left are some loose end details, and clean handling of aggregates: https://review.openstack.org/#/q/topic:bp/update-provider-tree ## Nested providers in allocation candidates Representing nested provides in the response to GET /allocation_candidates is required to actually make use of all the topology that update provider tree will report. That work is in progress at: https://review.openstack.org/#/q/topic:bp/nested-resource-providers https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates Note that some of this includes the up-for-debate shared handling. ## Request Filters As far as I can tell this is mostly done (yay!) but there is a loose end: We merged an updated spec to support multiple member_of parameters, but it's not clear anybody is currently owning that: https://review.openstack.org/#/c/555413/ ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates It's part of what will make the req filters above useful. ## Forbidden Traits A way of expressing "I'd like resources that do _not_ have trait X". This is ready for review: https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits ## Consumer Generations This allows multiple agents to "safely" update allocations for a single consumer. 
There is both a spec and code in progress for this: https://review.openstack.org/#/q/topic:bp/add-consumer-generation # Extraction Small bits of work on extraction continue on the bp/placement-extract topic: https://review.openstack.org/#/q/topic:bp/placement-extract The spec for optional database handling got some nice support but needs more attention: https://review.openstack.org/#/c/552927/ Jay has declared that he's going to start work on the os-resources-classes library. I've posted a 6th in my placement container playground series: https://anticdent.org/placement-container-playground-6.html Though not directly related to extraction, that experimentation has exposed a lot of the areas where work remains to be done to make placement independent of nova. A recent experiment with shrinking the repo to just the placement dir reinforced a few things we already know: * The placement tests need their own base test to avoid 'from nova import test' * That will need to provide database and other fixtures (such a config and the self.flags feature). * And, of course, eventually, config handling. The container experiments above demonstrate just how little config placement actually needs (by design, let's keep it that way). # Other This is a contract week, so nothing new has been added here, despite there being new work. Part of the intent here it make sure we are queue-like where we can be. This list maintains its ordering from week to week: newly discovered things are added to the end. There are 14 entries here, -7 on last week. That's good. However some of the removals are the result of some code changing topic (and having been listed here by topic). Some of the oldest stuff at the top of the list has not moved. * https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement * https://review.openstack.org/#/c/546713/ Add compute capabilities traits (to os-traits) * https://review.openstack.org/#/c/524425/ General policy sample file for placement * https://review.openstack.org/#/c/546177/ Provide framework for setting placement error codes * https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement) * https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent * https://review.openstack.org/#/c/556669/ Handle agg generation conflict in report client * https://review.openstack.org/#/c/556628/ Slugification utilities for placement names * https://review.openstack.org/#/c/557086/ Remove usage of [placement]os_region_name * https://review.openstack.org/#/c/556633/ Get rid of 406 paths in report client * https://review.openstack.org/#/c/537614/ Add unit test for non-placement resize * https://review.openstack.org/#/c/554357/ Address issues raised in adding member_of to GET /a-c * https://review.openstack.org/#/c/493865/ cover migration cases with functional tests # End 2 runway slots open up this coming Wednesday, the 11th. 
--
Chris Dent   ٩◔̯◔۶   https://anticdent.org/
freenode: cdent   tw: @anticdent

From marcin.juszkiewicz at linaro.org  Fri Apr 6 12:59:40 2018
From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz)
Date: Fri, 6 Apr 2018 14:59:40 +0200
Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
In-Reply-To: <20180406100714.GB18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu>
Message-ID: <9d357ea4-9b3b-3a32-dce7-820358a3dcbd@linaro.org>

On 06.04.2018 at 12:07, Kashyap Chamarthy wrote:
>> - libvirt 4.1.0 compiled without issue, though the dh_install phase failed with this error:
>>
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
>
> That seems like a problem in the Debian packaging system, not in libvirt. I double-checked with the upstream folks, and the install rules for the Wireshark plugin don't have /*/ in there.

Known bug in the wireshark package:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=880428

Status: maybe one day...

From sean.mcginnis at gmx.com  Fri Apr 6 13:02:06 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 6 Apr 2018 08:02:06 -0500
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <1523018692.22377.1.camel@redhat.com>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com>
Message-ID: <20180406130205.GA15660@smcginnis-mbp.local>

> > How can we enable warning_is_error in the gate with the new PTI? It's easy enough to add the -W flag in tox.ini for local builds, but as you say the tox job is never called in the gate. In the gate zuul checks for it in the [build_sphinx] section of setup.cfg:
> >
> > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> >
> > [...]
>
> I'd be more in favour of changing the zuul job to build with the '-W' flag. To be honest, there is no good reason to not have this flag enabled. I'm not sure that will be a popular opinion though, as it may break some projects' builds (correctly, but still).
>
> I'll propose a patch against zuul-jobs and see what happens :)
>
> Stephen

I am in favor of this too. We will probably need to give some teams some time to get warnings fixed, though. I haven't done any kind of extensive audit of projects, but from a few I looked through, there are definitely a few that are not erroring on warnings and are likely to be blocked if we suddenly flipped the switch and errored on those.

This is a legitimate issue, though. In Cinder we had -W in the tox docs job, but since that is no longer being enforced in the gate, running "tox -e docs" from a fresh clone of master was failing. We really do need some way to enforce this so things like that do not happen.

From dalvarez at redhat.com  Fri Apr 6 13:30:40 2018
From: dalvarez at redhat.com (Daniel Alvarez Sanchez)
Date: Fri, 6 Apr 2018 15:30:40 +0200
Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata
In-Reply-To:
References:
Message-ID:

Hi,

Thanks Lucas for writing this down.
On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes wrote:

> Hi,
>
> The tests below are failing in the tempest API / Scenario job that runs in the networking-ovn gate (non-voting):
>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>
> Digging a bit into it I noticed that, with the exception of the two "test_router_interface_status" tests (ipv6 and ipv4), all the other tests are failing because of the way metadata works in networking-ovn.
>
> Taking "test_create_port_when_quotas_is_full" as an example: the reason why it fails is that when OVN metadata is enabled, networking-ovn will create a metadata port at the moment a network is created [0], and that will already use up the quota limit set by that test [1].
>
> That port will also allocate an IP from the subnet, which will cause the rest of the tests to fail with a "No more IP addresses available on network ..." error.

With ML2/OVS we would run into the same quota problem if DHCP were enabled for the created subnets. This means that if we modify the current tests to enable DHCP and account for this extra port, they would be valid for networking-ovn as well. Does that sound good, or do we still want to isolate quotas?

> This is not very trivial to fix because:
>
> 1. Tempest should be backend agnostic. So, adding a conditional in the tempest test to check whether OVN is being used or not doesn't sound correct.
>
> 2. Creating a port to be used by the metadata agent is a core part of the design implementation for the metadata functionality [2].
>
> So, I'm sending this email to try to figure out what would be the best approach to deal with this problem and start working towards having that job be voting in our gate. Here are some ideas:
>
> 1. Simply disable the tests that are affected by the metadata approach.
>
> 2. Disable metadata for the tempest API / Scenario tests (here's a test patch doing it [3]).

IMHO, we don't want to do this, as metadata is likely to be enabled in all clouds, whether using ML2/OVS or OVN, so it's good to keep exercising this part.

> 3. Same as 1., but also create similar tempest tests specific to OVN somewhere else (in the networking-ovn tree?!).

As we discussed on IRC, I'm keen on doing this instead of getting bits in tempest to do different things depending on the backend used. Unless we want to enable DHCP on the subnets that these tests create :)

> What do you think would be the best way to work around this problem? Any other ideas?
>
> As for the "test_router_interface_status" tests that are failing independently of the metadata, there's a bug reporting the problem here [4]. So we should just fix it.
> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
> [3] https://review.openstack.org/#/c/558792/
> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
>
> Cheers,
> Lucas

Thanks,
Daniel

From rfolco at redhat.com  Fri Apr 6 13:41:04 2018
From: rfolco at redhat.com (Rafael Folco)
Date: Fri, 6 Apr 2018 10:41:04 -0300
Subject: [openstack-dev] [tripleo][ci] use of tags in launchpad bugs
In-Reply-To: <51180736-f593-9e95-f499-528798890bde@redhat.com>
References: <51180736-f593-9e95-f499-528798890bde@redhat.com>
Message-ID:

Thanks for the clarifications about official tags. I was the one creating random/non-official tags for tripleo bugs. Although this may be annoying for some people, it helped me while ruckering/rovering CI to open unique bugs and avoid dups for the first time(s).

There isn't a standard way of filing a bug; people open bugs using different/non-standard wording in the summary and description. I just thought it was a good idea to tag featuresetXXX, ovb, branch, etc., so that when somebody asks me if there is a bug for job XYZ, the bug can be found more easily.

Since sprint 10, ruck/rover have started recording notes [1], and this helps to keep track of the issues. Perhaps the CI team could implement something in CI monitoring that links a bug to the failing job(s), e.g. [LP XXXXXX].

I'm doing a cleanup of the open bugs, removing the non-official tags.

Thanks,
--Folco

[1] https://review.rdoproject.org/etherpad/p/ruckrover-sprint11

On Fri, Apr 6, 2018 at 6:09 AM, Jiří Stránský wrote:

> On 5.4.2018 21:04, Alex Schultz wrote:
>> On Thu, Apr 5, 2018 at 12:55 PM, Wesley Hayutin wrote:
>>> FYI...
>>>
>>> This is news to me, so thanks to Emilien for pointing it out [1]. There are official tags for tripleo launchpad bugs. Personally, I like what I've seen recently with some extra tags, as they could be helpful in finding the history of particular issues. So hypothetically, would it be "wrong" to create an official tag for each featureset config number upstream? I ask because that is adding a lot of tags, but it also serves as a good test case for what is good/bad use of tags.
>>
>> We list official tags over in the specs repo [0]. That being said, as we investigate switching over to storyboard, we'll probably want to revisit tags, as they will have to be used more to replace some of the functionality we had with launchpad (e.g. milestones). You could always add the tags without being an official tag. I'm not sure I would really want all the featuresets as tags. I'd rather see us actually figure out what component is actually failing than rely on a featureset (and the Rosetta stone for decoding featuresets to functionality [1]).
>
> We could also use both alongside.
> Component-based tags better relate to the actual root cause of the bug, while featureset-based tags are useful in relation to CI.
>
> E.g. "I see fs037 failing, I wonder if anyone already reported a bug for it" -- if the reporter tagged the bug, it would be really easy to figure out the answer.
>
> This might also again bring up the question of better job names to allow easier mapping to featuresets. IMO:
>
> tripleo-ci-centos-7-containers-multinode -- not great
> tripleo-ci-centos-7-featureset010 -- not great
> tripleo-ci-centos-7-containers-mn-fs010 -- *happy face*
>
> Jirka
>
>> Thanks,
>> -Alex
>>
>> [0] http://git.openstack.org/cgit/openstack/tripleo-specs/tree/specs/policy/bug-tagging.rst#n30
>> [1] https://git.openstack.org/cgit/openstack/tripleo-quickstart/tree/doc/source/feature-configuration.rst#n21
>>
>>> Thanks
>>>
>>> [1] https://bugs.launchpad.net/tripleo/+manage-official-tags

--
Rafael Folco
Senior Software Engineer

From mriedemos at gmail.com  Fri Apr 6 14:21:07 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 6 Apr 2018 09:21:07 -0500
Subject: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To:
References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost>
Message-ID:

On 4/6/2018 5:09 AM, Matthew Booth wrote:
> I think you're talking at cross purposes here: this won't require a swap volume. Apart from anything else, swap volume only works on an attached volume, and as previously discussed Nova will detach and re-attach.
>
> Gorka, the Nova API Matt is referring to is called volume update externally. It's the operation required for live migrating an attached volume between backends. It's called swap volume internally in Nova.

Yeah, I was hoping we were just having a misunderstanding of what 'swap volume' in nova is, which is the blockRebase for an already attached volume to the guest, called from cinder during a volume retype or migration.

As for the re-image thing, nova would be detaching the volume from the guest prior to calling the new cinder re-image API, and then re-attaching it to the guest afterward - similar to how shelve and unshelve work, and for that matter how rebuild works today with non-root volumes.
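To spell that sequencing out, here's a purely illustrative sketch of the flow being discussed. The cinder re-image call doesn't exist yet, so reimage() and the surrounding helper names are hypothetical placeholders, not real APIs:

    # Purely illustrative sketch of the proposed re-image flow.
    # volumes.reimage() is a HYPOTHETICAL stand-in for the new cinder
    # API under discussion; it is not a real cinderclient method.
    def reimage_boot_volume(guest, cinder, volume_id, image_id):
        # 1. Detach the root volume from the guest, as rebuild does
        #    today for non-root volumes.
        guest.detach_volume(volume_id)

        # 2. Ask cinder to rewrite the volume contents from the image
        #    (the proposed new API).
        cinder.volumes.reimage(volume_id, image_id)

        # 3. Re-attach the volume afterward, similar to how unshelve
        #    re-attaches volumes after shelve.
        guest.attach_volume(volume_id)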
--
Thanks,
Matt

From sfinucan at redhat.com  Fri Apr 6 14:52:46 2018
From: sfinucan at redhat.com (Stephen Finucane)
Date: Fri, 06 Apr 2018 15:52:46 +0100
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <20180406130205.GA15660@smcginnis-mbp.local>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local>
Message-ID: <1523026366.22377.13.camel@redhat.com>

On Fri, 2018-04-06 at 08:02 -0500, Sean McGinnis wrote:
> > > How can we enable warning_is_error in the gate with the new PTI? It's easy enough to add the -W flag in tox.ini for local builds, but as you say the tox job is never called in the gate. In the gate zuul checks for it in the [build_sphinx] section of setup.cfg:
> > >
> > > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> > >
> > > [...]
> >
> > I'd be more in favour of changing the zuul job to build with the '-W' flag. To be honest, there is no good reason to not have this flag enabled. I'm not sure that will be a popular opinion though, as it may break some projects' builds (correctly, but still).
> >
> > I'll propose a patch against zuul-jobs and see what happens :)
> >
> > Stephen
>
> I am in favor of this too. We will probably need to give some teams some time to get warnings fixed, though. I haven't done any kind of extensive audit of projects, but from a few I looked through, there are definitely a few that are not erroring on warnings and are likely to be blocked if we suddenly flipped the switch and errored on those.
>
> This is a legitimate issue, though. In Cinder we had -W in the tox docs job, but since that is no longer being enforced in the gate, running "tox -e docs" from a fresh clone of master was failing. We really do need some way to enforce this so things like that do not happen.

This. While forcing teams to do busywork is undeniably A Very Bad Thing (TM), I do think the longer we leave this, the worse it'll get. The zuul-jobs [1] patch will probably introduce some pain for projects, but it seems like inevitable pain, and we're in the right part of the cycle in which to do something like this. I'd be willing to help projects fix issues they encounter, which I expect will be minimal for most projects.

Stephen

[1] https://review.openstack.org/559348

From slawek at kaplonski.pl  Fri Apr 6 15:08:29 2018
From: slawek at kaplonski.pl (Sławek Kapłoński)
Date: Fri, 6 Apr 2018 17:08:29 +0200
Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata
In-Reply-To:
References:
Message-ID: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl>

Hi,

I don't know how networking-ovn works, but I have one question.

> On 06.04.2018, at 15:30, Daniel Alvarez Sanchez wrote:
>
> Hi,
>
> Thanks Lucas for writing this down.
> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes wrote:
> Hi,
>
> The tests below are failing in the tempest API / Scenario job that runs in the networking-ovn gate (non-voting):
>
> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>
> Digging a bit into it I noticed that, with the exception of the two "test_router_interface_status" tests (ipv6 and ipv4), all the other tests are failing because of the way metadata works in networking-ovn.
>
> Taking "test_create_port_when_quotas_is_full" as an example: the reason why it fails is that when OVN metadata is enabled, networking-ovn will create a metadata port at the moment a network is created [0], and that will already use up the quota limit set by that test [1].
>
> That port will also allocate an IP from the subnet, which will cause the rest of the tests to fail with a "No more IP addresses available on network ..." error.
>
> With ML2/OVS we would run into the same quota problem if DHCP were enabled for the created subnets. This means that if we modify the current tests to enable DHCP and account for this extra port, they would be valid for networking-ovn as well. Does that sound good, or do we still want to isolate quotas?

If DHCP is enabled for networking-ovn, will it use one more port as well, or not? If so, then you will still have the same problem: with DHCP, ML2/OVS will have one extra port created, while networking-ovn will have two. If that's not the case, then I think this solution, with a comment in the test code explaining why DHCP is enabled, should be good IMO.

> This is not very trivial to fix because:
>
> 1. Tempest should be backend agnostic. So, adding a conditional in the tempest test to check whether OVN is being used or not doesn't sound correct.
>
> 2. Creating a port to be used by the metadata agent is a core part of the design implementation for the metadata functionality [2].
>
> So, I'm sending this email to try to figure out what would be the best approach to deal with this problem and start working towards having that job be voting in our gate. Here are some ideas:
>
> 1. Simply disable the tests that are affected by the metadata approach.
>
> 2. Disable metadata for the tempest API / Scenario tests (here's a test patch doing it [3]).
>
> IMHO, we don't want to do this, as metadata is likely to be enabled in all clouds, whether using ML2/OVS or OVN, so it's good to keep exercising this part.
>
> 3. Same as 1., but also create similar tempest tests specific to OVN somewhere else (in the networking-ovn tree?!).
>
> As we discussed on IRC, I'm keen on doing this instead of getting bits in tempest to do different things depending on the backend used. Unless we want to enable DHCP on the subnets that these tests create :)
>
> What do you think would be the best way to work around this problem? Any other ideas?
> As for the "test_router_interface_status" tests that are failing independently of the metadata, there's a bug reporting the problem here [4]. So we should just fix it.
>
> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
> [3] https://review.openstack.org/#/c/558792/
> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
>
> Cheers,
> Lucas
>
> Thanks,
> Daniel

—
Best regards
Slawek Kaplonski
slawek at kaplonski.pl

From openstack at nemebean.com  Fri Apr 6 15:25:25 2018
From: openstack at nemebean.com (Ben Nemec)
Date: Fri, 6 Apr 2018 10:25:25 -0500
Subject: [openstack-dev] Proposal: The OpenStack Client Library Guide
In-Reply-To: <2e195faf-6c1f-6a81-794d-59c97a371fd8@catalyst.net.nz>
References: <2e195faf-6c1f-6a81-794d-59c97a371fd8@catalyst.net.nz>
Message-ID:

As someone who has dealt with this in the past [1], I can appreciate the complexities of trying to write software that can handle all the various auth options in OpenStack. In my case I gave up and passed all of that off to os-client-config (which, to be fair, is probably what I should have done in the first place, but isn't an option for non-Python projects).

I'm not sure I can actually help with this since it didn't make any sense to me, but a big +1 to bringing some sanity to this whole thing.

1: http://blog.nemebean.com/content/creating-openstack-client-instances-python

On 04/05/2018 10:55 PM, Adrian Turjak wrote:

> Hello fellow OpenStackers,
>
> As some of you have probably heard me rant, I've been thinking about how to better solve the problem with various tools that support OpenStack or are meant to be OpenStack clients/tools but don't always work as expected by those of us directly in the community. Mostly this is around things like auth and variable name conventions, things where there should really be consistency and overlap.
>
> The example that most recently triggered this discussion was how OpenStackClient (and os-client-config) supports certain elements of clouds.yaml and ENVVAR config, while Terraform supports them differently. Both are tools you'd often run on the CLI, often in the same terminal, so it is always weird when certain auth and scoping values don't work the same. This is being worked on, but little problems like this are an ongoing problem.
> The proposal: write an authoritative guide/spec on the basics of implementing a client library or tool, in any given language, that talks to OpenStack.
>
> Elements we ought to cover:
> - How all the various auth methods in Keystone work, how the whole authn and authz process works with Keystone, and how to actually use it to do what you want.
> - What common client configuration options exist and how they work (common variable names, ENVVARs, clouds.yaml), with something like common ENVVARs documented and a list maintained so there is one definitive source for what to expect people to be using.
> - Per-project guides on how the API might act that help facilitate starting to write code against it beyond just the API reference, and examples of what to expect. Not exactly a duplicate of the API ref, but more a 'common pitfalls and confusing elements to be wary of' section that builds on the API ref of each project.
>
> There are likely other things we want to include, and we need to work out what those are, but ideally this should be a new documentation-focused project which will result in a useful guide on what someone needs, in any programming language, to write a library that works as we expect it should against OpenStack. Such a guide would also help any existing libraries ensure they themselves fully understand and use the OpenStack auth and service APIs as expected. It should also help to ensure programmers working across multiple languages and systems have a much easier time interacting with all the various libraries they might touch.
>
> A lot of this knowledge exists, but it's hard to parse and not well documented. We have reference implementations of it all in the likes of OpenStackClient, Keystoneauth1, and the OpenStackSDK itself (which os-client-config is now a part of), but what we need is a language-agnostic guide rather than the assumption that people will read the code of our official projects. Even the API ref itself isn't entirely helpful, since in a lot of cases it only covers the most basic of examples for each API.
>
> There appears to be interest in something like this, so let's start with a mailing list discussion, and potentially turn it into something more official if this leads anywhere useful. :)
>
> Cheers,
> Adrian

From pkovar at redhat.com  Fri Apr 6 15:27:14 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Fri, 6 Apr 2018 17:27:14 +0200
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <1523026366.22377.13.camel@redhat.com>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com>
Message-ID: <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com>

On Fri, 06 Apr 2018 15:52:46 +0100, Stephen Finucane wrote:
> On Fri, 2018-04-06 at 08:02 -0500, Sean McGinnis wrote:
> > > > How can we enable warning_is_error in the gate with the new PTI? It's easy enough to add the -W flag in tox.ini for local builds, but as you say the tox job is never called in the gate.
> > > > In the gate zuul checks for it in the [build_sphinx] section of setup.cfg:
> > > >
> > > > https://git.openstack.org/cgit/openstack-infra/zuul-jobs/tree/roles/sphinx/library/sphinx_check_warning_is_error.py#n23
> > > >
> > > > [...]
> > >
> > > I'd be more in favour of changing the zuul job to build with the '-W' flag. To be honest, there is no good reason to not have this flag enabled. I'm not sure that will be a popular opinion though, as it may break some projects' builds (correctly, but still).
> > >
> > > I'll propose a patch against zuul-jobs and see what happens :)
> > >
> > > Stephen
> >
> > I am in favor of this too. We will probably need to give some teams some time to get warnings fixed, though. I haven't done any kind of extensive audit of projects, but from a few I looked through, there are definitely a few that are not erroring on warnings and are likely to be blocked if we suddenly flipped the switch and errored on those.
> >
> > This is a legitimate issue, though. In Cinder we had -W in the tox docs job, but since that is no longer being enforced in the gate, running "tox -e docs" from a fresh clone of master was failing. We really do need some way to enforce this so things like that do not happen.
>
> This. While forcing teams to do busywork is undeniably A Very Bad Thing (TM), I do think the longer we leave this, the worse it'll get. The zuul-jobs [1] patch will probably introduce some pain for projects, but it seems like inevitable pain, and we're in the right part of the cycle in which to do something like this. I'd be willing to help projects fix issues they encounter, which I expect will be minimal for most projects.

I too think enforcing -W is the way to go, so count me in for the broken docs build help.

Thanks for pushing this forward!

Cheers,
pk

From hongbin034 at gmail.com  Fri Apr 6 15:45:45 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Fri, 6 Apr 2018 11:45:45 -0400
Subject: [openstack-dev] [zun] zun-api error
Message-ID:

Hi Murali,

It looks like your zunclient was sending API requests to http://10.11.142.2:9511/v1/services, which is not the right API endpoint. According to the Keystone catalog you configured, the API endpoint of Zun should be http://10.11.142.2:9517/v1/services (it is on port 9517 instead of 9511).

What confused the zunclient is the endpoint type you configured in Keystone. Zun expects an endpoint of type "container", but in your setup the Zun service is registered under type "container-zun", while the "container" type (on port 9511) belongs to Magnum, which is why the client ended up talking to the wrong service. I believe the error will be resolved if you update the Zun endpoint to use type "container". Please give it a try and let us know.

Best regards,
Hongbin

On Thu, Apr 5, 2018 at 7:27 PM, Murali B wrote:

> Hi Hongbin,
>
> Thank you for your help.
>
> As per our discussion, here is the output for my current API on pike.
> I am not sure which version of zunclient I should use for pike.
>
> root at cluster3-2:~/python-zunclient# zun service-list
> ERROR: Not Acceptable (HTTP 406) (Request-ID: req-be69266e-b641-44b9-9739-0c2d050f18b3)
> root at cluster3-2:~/python-zunclient# zun --debug service-list
> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak = vitrageclient.auth:VitrageKeycloakLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth = vitrageclient.auth:VitrageNoAuthLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('noauth = cinderclient.contrib.noauth:CinderNoAuthLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('v2token = keystoneauth1.loading._plugins.identity.v2:Token')
> DEBUG (extension:180) found extension EntryPoint.parse('none = keystoneauth1.loading._plugins.noauth:NoAuth')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = keystoneauth1.extras.oauth1._loading:V3OAuth1')
> DEBUG (extension:180) found extension EntryPoint.parse('admin_token = keystoneauth1.loading._plugins.admin_token:AdminToken')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuthorizationCode')
> DEBUG (extension:180) found extension EntryPoint.parse('v2password = keystoneauth1.loading._plugins.identity.v2:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword = keystoneauth1.extras._saml2._loading:Saml2Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3password = keystoneauth1.loading._plugins.identity.v3:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword = keystoneauth1.extras._saml2._loading:ADFSPassword')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword')
> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = keystoneauth1.extras.kerberos._loading:Kerberos')
> DEBUG (extension:180) found extension EntryPoint.parse('token = keystoneauth1.loading._plugins.identity.generic:Token')
> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClientCredentials')
> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth')
> DEBUG (extension:180) found extension EntryPoint.parse('v3token = keystoneauth1.loading._plugins.identity.v3:Token')
> DEBUG (extension:180) found extension EntryPoint.parse('v3totp = keystoneauth1.loading._plugins.identity.v3:TOTP')
> DEBUG (extension:180) found extension EntryPoint.parse('v3applicationcredential = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential')
> DEBUG (extension:180) found extension EntryPoint.parse('password = keystoneauth1.loading._plugins.identity.generic:Password')
> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = keystoneauth1.extras.kerberos._loading:MappedKerberos')
> DEBUG (extension:180) found extension EntryPoint.parse('v1password = swiftclient.authv1:PasswordLoader')
> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint = openstackclient.api.auth_plugin:TokenEndpoint')
> DEBUG (extension:180)
found extension EntryPoint.parse('gnocchi-basic = > gnocchiclient.auth:GnocchiBasicLoader') > DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth = > gnocchiclient.auth:GnocchiNoAuthLoader') > DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth = > aodhclient.noauth:AodhNoAuthLoader') > DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H > "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0 > python-requests/2.18.1 CPython/2.7.12" > DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16 > DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" 200 > 248 > DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT > Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu > x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c > Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive > Content-Type: application/json > RESP BODY: {"version": {"status": "stable", "updated": > "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", > "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", > "links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}} > > DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used > request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c > DEBUG (base:175) Making authentication request to > http://ubuntu16:35357/v3/auth/tokens > DEBUG (connectionpool:395) http://ubuntu16:35357 "POST /v3/auth/tokens > HTTP/1.1" 201 10333 > DEBUG (base:180) {"token": {"is_domain": false, "methods": ["password"], > "roles": [{"id": "4000a662be2d47fd8fdf5a0fef66767d", "name": "admin"}], > "expires_at": "2018-04-06T00:11:08.000000Z", "project": {"domain": {"id": > "default", "name": "Default"}, "id": "a391261cffba4f4c827ab7420a352fe1", > "name": "admin"}, "catalog": [{"endpoints": [{"url": " > http://cluster3-2:9517/v1", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 5a634bafa38c45dbb571f0edb3702101"}, {"url": "http://cluster3-2:9517/v1", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "8926d37d276a4fe49df66bb513f7906a"}, {"url": " > http://cluster3-2:9517/v1", "interface": "admin", "region": "RegionOne", > "region_id": "RegionOne", "id": "a74e1b4faf39436aa5d6f9b446ceee92"}], > "type": "container-zun", "id": "025154eef222461da9edcfe32ae79e5e", > "name": "zun"}, {"endpoints": [{"url": "http://ubuntu16:9001", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "3a94c0df20da47d1b922541a87576ab0"}, {"url": "http://ubuntu16:9001", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": "5fcab2a59c72433581510d7aafe29961"}, {"url": "http://ubuntu16:9001", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "71e314291a4b4c648aa5ba662b216fa6"}], "type": "dns", "id": " > 07677b58ad4d469d80dbda8e9fa908bc", "name": "designate"}, {"endpoints": > [{"url": "http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "4d56ee7967994c869239007146e52ab8"}, {"url": " > http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": > "internal", "region": "RegionOne", "region_id": "RegionOne", "id": " > 9845138d25ec41b1a7102d8365f1b9c7"}, {"url": "http://ubuntu16:8776/v2/ > a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": > "RegionOne", "region_id": 
"RegionOne", "id": " > f99f9bf4b0eb4e19aa8dbe72fc13e648"}], "type": "volumev2", "id": " > 077bd5ecfc59499ab84f49e410efef4f", "name": "cinderv2"}, {"endpoints": > [{"url": "http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "355c6c323653469c8315d5dea2998b0d"}, {"url": " > http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": > "internal", "region": "RegionOne", "region_id": "RegionOne", "id": " > 841768ec3edb42d7b18fe6a2a17f4dbc"}, {"url": "http://10.11.142.2:8004/v1/ > a391261cffba4f4c827ab7420a352fe1", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > afdbc1d2a5114cd9b0714331eb227ba9"}], "type": "orchestration", "id": " > 116243d61e3a4c90b7144d6a8b5a170a", "name": "heat"}, {"endpoints": > [{"url": "http://ubuntu16:8778", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 2dacce3eed484464b3f521b7b2720cd9"}, {"url": "http://ubuntu16:8778", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "5300f9ae336c41b8a8bb93400db35a30"}, {"url": "http://ubuntu16:8778", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "5c7e2cc977f74051b0ed104abb1d46a9"}], "type": "placement", "id": " > 1d270e2d3d4f488e82597097af933e7a", "name": "placement"}, {"endpoints": > [{"url": "http://ubuntu16:8042", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 337f147396f143679e6cf7fbdd3601ab"}, {"url": "http://ubuntu16:8042", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": "a97d660772e64894b4b13092d7719298"}, {"url": "http://ubuntu16:8042", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "bb5caf186c9947aca31e6ee2a37f6bbd"}], "type": "alarming", "id": " > 2a19c1a28a42433caa8eb919910ec06f", "name": "aodh"}, {"endpoints": [], > "type": "volume", "id": "39c740b891764e4a9081773709269848", "name": > "cinder"}, {"endpoints": [{"url": "http://ubuntu16:8041", "interface": > "internal", "region": "RegionOne", "region_id": "RegionOne", "id": " > 9d455913a5fb4f15bbe15740f4dee260"}, {"url": "http://ubuntu16:8041", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "c5c2471db1cb4ae7a1f3e847404d4b37"}, {"url": "http://ubuntu16:8041", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "cc12daed5ea342a1a47602720589cb9e"}], "type": "metric", "id": " > 39fdf2d5300343aa8ebe5509d29ba7ce", "name": "gnocchi"}, {"endpoints": > [{"url": "http://cluster3-2:9890", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 1c7ddc56ba984afd8187cd1894a75bf1"}, {"url": "http://cluster3-2:9890", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "888925c4fc8b48859f086860333c3ab4"}, {"url": "http://cluster3-2:9890", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": "9bfd7198dab14f6a8b7eba444f920020"}], "type": "nfv-orchestration", > "id": "3da88eae843a4949806186db8a9a3bd0", "name": "tacker"}, > {"endpoints": [{"url": "http://10.11.142.2:8999", "interface": > "internal", "region": "RegionOne", "region_id": "RegionOne", "id": " > 32880f809a2f45598a9838e4b168ce5b"}, {"url": "http://10.11.142.2:8999", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "530711f56f234ad19775fae65774c0ab"}, {"url": " > http://10.11.142.2:8999", "interface": "admin", 
"region": "RegionOne", > "region_id": "RegionOne", "id": "8d7493ad752b453b87d789d0ec5cae93"}], > "type": "rca", "id": "55f78369ea5e40e3b9aa9ded854cb163", "name": > "vitrage"}, {"endpoints": [{"url": "http://10.11.142.2:5000/v3/", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "afba4b58fd734baeaed94f8f2380a986"}, {"url": " > http://ubuntu16:5000/v3/", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > b4b864acfc1746b3ad2d22c6a28e1361"}, {"url": "http://ubuntu16:35357/v3/", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "bf256df5f8d34e9c80c00b78da122118"}], "type": "identity", "id": " > 58b4ff04dc764fc2aae4bfd9d0f1eb8e", "name": "keystone"}, {"endpoints": > [{"url": "http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "260f8b9e9e214cc1a39407517b3ca826"}, {"url": " > http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": > "public", "region": "RegionOne", "region_id": "RegionOne", "id": " > 81adeaccba1c4203bddb7734f23116a8"}, {"url": "http://ubuntu16:8776/v3/ > a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > e63332e8b15e43c6b9c331d9ee8551ab"}], "type": "volumev3", "id": " > 8cd6101718e94ee198cf9ba9894bf1c9", "name": "cinderv3"}, {"endpoints": > [{"url": "http://ubuntu16:9696", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 65a0b4233436428ab42aa3b40b1ce53f"}, {"url": "http://ubuntu16:9696", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "b8354dd727154056b3c9b81b89054bab"}, {"url": "http://ubuntu16:9696", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "ca44db85238b46cf9fbb6dc6f1d9dff5"}], "type": "network", "id": " > ade912885a73431f95a3a01d8a8e6498", "name": "neutron"}, {"endpoints": > [{"url": "http://ubuntu16:8000/v1", "interface": "admin", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 5d7559010ea94cca9edd7ab6213f6b2c"}, {"url": "http://ubuntu16:8000/v1", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": "af77025677284808b0715488e22729d4"}, {"url": " > http://10.11.142.2:8000/v1", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > c17b650eccf14045af49d5e9d050e875"}], "type": "cloudformation", "id": " > b04f735f46e743969e2bb0fff3aee1b5", "name": "heat-cfn"}, {"endpoints": > [{"url": "http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "18580f7a6dea4c53bc66d161e7e0a71e"}, {"url": " > http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", "interface": > "admin", "region": "RegionOne", "region_id": "RegionOne", "id": " > b4a8575704a4426494edc57551f40e58"}, {"url": "http://ubuntu16:8774/v2.1/ > a391261cffba4f4c827ab7420a352fe1", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > c41ec544b61c41098c07030bc84ba2a0"}], "type": "compute", "id": " > b06f4aa21a4a488c8f0c5a835e639bd3", "name": "nova"}, {"endpoints": > [{"url": "http://ubuntu16:9292", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 4ed27e537ca34b6fb93a8c72d8921d24"}, {"url": "http://ubuntu16:9292", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": 
"ab0c37600ecf45d797e7972dc6a4fde2"}, {"url": "http://ubuntu16:9292", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "f4a0f97be4f343d698ea12633e3823d6"}], "type": "image", "id": " > bbe4fbb4a1d7495f948faa9baf1e3828", "name": "glance"}, {"endpoints": > [{"url": "http://ubuntu16:8777", "interface": "public", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 3d160f2286634811b24b8abd6ad72c1f"}, {"url": "http://ubuntu16:8777", > "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", > "id": "a988e821ff1f4760ae3873c17ab87294"}, {"url": "http://ubuntu16:8777", > "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", > "id": "def8c07174184a0ca26e2f0f26d60a73"}], "type": "metering", "id": " > f4450730522d4342ac6626b81567b36c", "name": "ceilometer"}, {"endpoints": > [{"url": "http://ubuntu16:9511/v1", "interface": "internal", "region": > "RegionOne", "region_id": "RegionOne", "id": " > 19e14e5c5c5a4d3db6a6a632db728668"}, {"url": "http://10.11.142.2:9511/v1", > "interface": "public", "region": "RegionOne", "region_id": "RegionOne", > "id": "28fb2092bcc748ce88dfb1284ace1264"}, {"url": " > http://10.11.142.2:9511/v1", "interface": "admin", "region": "RegionOne", > "region_id": "RegionOne", "id": "c33f5b4a355d4067aa2e7093606cd153"}], > "type": "container", "id": "fdbcff09ecd545c8ba28bfd96782794a", "name": > "magnum"}], "user": {"domain": {"id": "default", "name": "Default"}, > "password_expires_at": null, "name": "admin", "id": " > 3b136545b47b40709b78b1e36cdcdc63"}, "audit_ids": > ["Ad1z5kAmRBehcgxG6-8IYA"], "issued_at": "2018-04-05T23:11:08.000000Z"}} > DEBUG (session:372) REQ: curl -g -i -X GET http://10.11.142.2:9511/v1/ > services -H "OpenStack-API-Version: container 1.2" -H "X-Auth-Token: > {SHA1}7523b440595290414cefa54434fc7c8adbec5c3d" -H "Content-Type: > application/json" -H "Accept: application/json" -H "User-Agent: None" > DEBUG (connectionpool:207) Starting new HTTP connection (1): 10.11.142.2 > DEBUG (connectionpool:395) http://10.11.142.2:9511 "GET /v1/services > HTTP/1.1" 406 166 > DEBUG (session:419) RESP: [406] Content-Type: application/json > Content-Length: 166 x-openstack-request-id: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 > Server: Werkzeug/0.10.4 Python/2.7.12 Date: Thu, 05 Apr 2018 23:11:09 GMT > RESP BODY: {"errors": [{"status": 406, "code": "", "links": [], "title": > "Not Acceptable", "detail": "Invalid service type for OpenStack-API-Version > header", "request_id": ""}]} > > DEBUG (session:722) GET call to container for http://10.11.142.2:9511/v1/ > services used request id req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 > DEBUG (shell:705) Not Acceptable (HTTP 406) (Request-ID: > req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) > Traceback (most recent call last): > File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line > 703, in main > map(encodeutils.safe_decode, sys.argv[1:])) > File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line > 639, in main > args.func(self.cs, args) > File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services_shell.py", > line 22, in do_service_list > services = cs.services.list() > File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services.py", > line 70, in list > return self._list(self._path(path), "services") > File "/usr/local/lib/python2.7/dist-packages/zunclient/common/base.py", > line 128, in _list > resp, body = self.api.json_request('GET', url) > File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", > 
line 368, in json_request
>     resp = self._http_request(url, method, **kwargs)
>   File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", line 351, in _http_request
>     error_json.get('debuginfo'), method, url)
> NotAcceptable: Not Acceptable (HTTP 406) (Request-ID: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0)
> ERROR: Not Acceptable (HTTP 406) (Request-ID: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0)
>
> Thanks
> -Murali

From superuser151093 at gmail.com  Fri Apr 6 15:47:11 2018
From: superuser151093 at gmail.com (super user)
Date: Sat, 7 Apr 2018 00:47:11 +0900
Subject: [openstack-dev] [all][requirements] a plan to stop syncing requirements into projects
In-Reply-To: <1523014869-sup-4635@lrrr.local>
References: <1521110096-sup-3634@lrrr.local> <1522276901-sup-6868@lrrr.local> <1522850139-sup-8937@lrrr.local> <1523014869-sup-4635@lrrr.local>
Message-ID:

I will help to update some.

On Fri, Apr 6, 2018 at 8:42 PM, Doug Hellmann wrote:

> Excerpts from super user's message of 2018-04-06 17:10:32 +0900:
> > Hope you fix this soon; there are many patches that depend on the 'match the minimum version' problem, which causes requirements-check to fail.
>
> The problem is with *those patches* and not the check.
>
> I've been trying to update some, but my time has been limited this week for personal reasons. I encourage project teams to run the script I provided or edit their lower-constraints.txt file by hand to fix the issues.
>
> Doug

From zigo at debian.org  Fri Apr 6 16:07:18 2018
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 6 Apr 2018 18:07:18 +0200
Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release
In-Reply-To: <20180406100714.GB18076@paraplu>
References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu>
Message-ID: <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org>

On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote:
>> dh_install: Cannot find (any matches for) "/usr/lib/*/wireshark/" (tried in "." and "debian/tmp")
>> dh_install: libvirt-wireshark missing files: /usr/lib/*/wireshark/
>> dh_install: missing files, aborting
>
> That seems like a problem in the Debian packaging system, not in libvirt.

It sure is. As I wrote, it should be a minor packaging issue.

> I double-checked with the upstream folks, and the install rules for the Wireshark plugin don't have /*/ in there.

That part (i.e., the path with *) isn't a mistake; it's there because Debian has multiarch support, so we get paths like these (just a random example from my laptop; see the install-file sketch at the end of this message):

/usr/lib/i386-linux-gnu/pulseaudio
/usr/lib/x86_64-linux-gnu/pulseaudio

> Note: You don't even have to build the versions from 'Buster', which are quite new. Just the slightly more conservative libvirt 3.2.0 and QEMU 2.9.0 -- only if it's possible.

Actually, for *official* backports, the policy is to always update to whatever is in testing until testing is frozen. I could maintain an unofficial backport in stretch-stein.debian.net though.
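For reference, the wildcard in question lives in a debhelper install file; dh_install expands the '*' to the multiarch triplet (i386-linux-gnu, x86_64-linux-gnu, ...). A sketch, not the actual libvirt-wireshark.install contents:

    usr/lib/*/wireshark/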
> That said ... I just spent time comparing the release notes of libvirt 3.0.0 and libvirt 3.2.0 [1][2]. By using libvirt 3.2.0 and QEMU 2.9.0, Debian users will be spared a lot of critical bugs (see the full list in [3]) in the CPU comparison area.
>
> [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg00000.html -- Release of libvirt-3.2.0
> [2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg00003.html -- Release of libvirt-3.0.0
> [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html

So, because of these bugs, would you already advise Nova users to use libvirt 3.2.0 for Queens?

Cheers,

Thomas Goirand (zigo)

From prometheanfire at gentoo.org  Fri Apr 6 16:34:33 2018
From: prometheanfire at gentoo.org (Matthew Thode)
Date: Fri, 6 Apr 2018 11:34:33 -0500
Subject: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To:
References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org>
Message-ID: <20180406163433.fyj6qnq5oegivb4t@gentoo.org>

On 18-04-06 09:02:29, Jens Harbott wrote:
> 2018-04-05 19:26 GMT+00:00 Matthew Thode:
> > On 18-04-05 20:11:04, Graham Hayes wrote:
> >> On 05/04/18 16:47, Matthew Thode wrote:
> >> > eventlet-0.22.1 has been out for a while now, we should try and use it. Going to be fun times.
> >> >
> >> > I have a review projects can depend upon if they wish to test.
> >> > https://review.openstack.org/533021
> >>
> >> It looks like we may have an issue with oslo.service - https://review.openstack.org/#/c/559144/ is failing gates.
> >>
> >> Also - what is the dance for this to get merged? It doesn't look like we can merge this while oslo.service has the old requirement restrictions.
> >
> > The dance is as follows.
> >
> > 0. provide review for projects to test new eventlet version; projects using eventlet should make backwards-compat code changes at this time.
>
> But this step is currently failing. Keystone doesn't even start when eventlet-0.22.1 is installed, because loading oslo.service fails with its pkg definition still requiring the capped eventlet:
>
> http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482
>
> So it looks like we need to have an uncapped release of oslo.service before we can proceed here.

Ya, we may have to uncap and rely on upper-constraints to keep the openstack gate from falling over. The new steps would be the following:

1. Uncap eventlet: https://review.openstack.org/559367
2. Push the uncapped eventlet out via requirements updates to all consumers.
3. Make a review in requirements changing upper-constraints.txt for eventlet.
4. Projects depend on the requirements change to do work on the new eventlet. The patch generated should merge into the project without the requirements change merged; this means the change should pass in the dependent review (to test 0.22.1) AND in a separate non-dependent review (to test the current constraint). You would merge the non-dependent one once both reviews are passing.
5. Once some not-yet-determined set of projects work with the new eventlet, we'd merge the updated upper-constraint into requirements.

Steps 2 and 3 can happen in parallel; projects can move to step 4 after step 3 is done (step 2 is only needed for their own project and their project's dependencies).

There are bound to be projects that will break because they didn't take the opportunity to fix themselves, but this should help reduce breakage.
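To make steps 1 and 3 concrete, here's roughly what the two changes look like; the specifier lines are illustrative (only the 0.22.1 target comes from this thread), not copies of the actual patches:

    # global-requirements.txt -- step 1, drop the cap:
    -eventlet>=0.18.2,<0.21.0
    +eventlet>=0.18.2

    # upper-constraints.txt -- step 3, bump the pin:
    -eventlet===0.20.0
    +eventlet===0.22.1

Because CI installs with 'pip install -c upper-constraints.txt', consumers keep getting the pinned version until step 5; the uncapped requirement alone doesn't change what gets installed.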
I suggest a one-month deadline after steps 2/3 are considered complete before step 5 is performed.

--
Matthew Thode (prometheanfire)

From cboylan at sapwetik.org  Fri Apr 6 16:41:07 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Fri, 06 Apr 2018 09:41:07 -0700
Subject: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To: <20180406163433.fyj6qnq5oegivb4t@gentoo.org>
References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org>
Message-ID: <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com>

On Fri, Apr 6, 2018, at 9:34 AM, Matthew Thode wrote:
> On 18-04-06 09:02:29, Jens Harbott wrote:
> > [...]
> > But this step is currently failing. Keystone doesn't even start when eventlet-0.22.1 is installed, because loading oslo.service fails with its pkg definition still requiring the capped eventlet:
> >
> > http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482
> >
> > So it looks like we need to have an uncapped release of oslo.service before we can proceed here.
>
> Ya, we may have to uncap and rely on upper-constraints to keep the openstack gate from falling over. The new steps would be the following:
Clark
From prometheanfire at gentoo.org Fri Apr 6 16:45:57 2018 From: prometheanfire at gentoo.org (Matthew Thode) Date: Fri, 6 Apr 2018 11:45:57 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> Message-ID: <20180406164557.usfsilplditq4iab@gentoo.org> On 18-04-06 09:41:07, Clark Boylan wrote: > On Fri, Apr 6, 2018, at 9:34 AM, Matthew Thode wrote: > > On 18-04-06 09:02:29, Jens Harbott wrote: > > > 2018-04-05 19:26 GMT+00:00 Matthew Thode : > > > > On 18-04-05 20:11:04, Graham Hayes wrote: > > > >> On 05/04/18 16:47, Matthew Thode wrote: > > > >> > eventlet-0.22.1 has been out for a while now, we should try and use it. > > > >> > Going to be fun times. > > > >> > > > > >> > I have a review projects can depend upon if they wish to test. > > > >> > https://review.openstack.org/533021 > > > >> > > > >> It looks like we may have an issue with oslo.service - > > > >> https://review.openstack.org/#/c/559144/ is failing gates. > > > >> > > > >> Also - what is the dance for this to get merged? It doesn't look like we > > > >> can merge this while oslo.service has the old requirement restrictions. > > > >> > > > > > > > > The dance is as follows. > > > > > > > > 0. provide a review for projects to test the new eventlet version; > > > > projects using eventlet should make backwards-compat code changes at > > > > this time. > > > > > > But this step is currently failing. Keystone doesn't even start when > > > eventlet-0.22.1 is installed, because loading oslo.service fails with > > > its pkg definition still requiring the capped eventlet: > > > > > > http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 > > > > > > So it looks like we need to have an uncapped release of oslo.service > > > before we can proceed here. > > > > > > > Ya, we may have to uncap and rely on upper-constraints to keep the openstack > > gate from falling over. The new steps would be the following: > > My understanding of our use of upper constraints was that this should (almost) always be the case for (almost) all dependencies. We should rely on constraints instead of requirements caps. Capping libs like pbr or eventlet and any others that are in use globally is incredibly difficult to work with when you want to uncap them, because you have to coordinate globally. Instead, if using constraints, you just bump the constraint and are done. > > It is probably worthwhile examining if we have any other deps in this situation and proactively addressing them rather than waiting for when we really need to fix them. > That's constantly on our list of things to do. In the past the only time we've capped is when we know upstream is releasing breaking versions and we want to hold off for a cycle or until it's fixed. It also has the benefit of telling consumers/packagers about something 'hard breaking'. networkx is next on the list... -- Matthew Thode (prometheanfire) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL:
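To make the uncap-and-constrain dance discussed in this thread concrete, it boils down to two small edits in the requirements repo. A rough sketch follows; the version specifiers are invented for illustration only (the linked reviews contain the real entries):

    # global-requirements.txt -- step 1: drop the upper cap
    # (hypothetical specifiers, for illustration only)
    -eventlet!=0.18.3,<0.21.0,>=0.18.2  # MIT
    +eventlet!=0.18.3,>=0.18.2  # MIT

    # upper-constraints.txt -- steps 3/5: bump the pin once projects are ready
    -eventlet===0.20.0
    +eventlet===0.22.1

Projects can then test against the proposed constraint bump (e.g. via a Depends-On), as described in step 4, before the bump itself merges.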
From cboylan at sapwetik.org Fri Apr 6 16:56:12 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 06 Apr 2018 09:56:12 -0700 Subject: [openstack-dev] [openstack-infra][openstack-zuul-jobs]Questions about playbook copy module In-Reply-To: <7cc5b0e9-b366-a80d-9bfb-38032291d91a@suse.com> References: <80f29416-245c-fc06-1018-1f7a873b79a1@suse.com> <7cc5b0e9-b366-a80d-9bfb-38032291d91a@suse.com> Message-ID: <1523033772.942998.1329069832.418B0A00@webmail.messagingengine.com> On Fri, Apr 6, 2018, at 2:32 AM, Andreas Jaeger wrote: > On 2018-04-06 11:20, Xinni Ge wrote: > > Sorry, forgot to reply to the mail list. > > > > On Fri, Apr 6, 2018 at 6:18 PM, Xinni Ge > > wrote: > > > > Hi, Andreas. > > > > Thanks for the reply. This is the link of the log I am seeing. > > http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/9172297/ara-report/ > > > > > thanks, your analysis is correct, it seems we seldom release xstatic packages ;( > > fix is at https://review.openstack.org/559300 > > Once that is merged, an infra-root can rerun the release job - please > ask on the #openstack-infra IRC channel, I've re-enqueued the tag ref and we now have a new failure: http://logs.openstack.org/39/39067dbc1dee99d227f8001595633b5cc98cfc53/release/xstatic-check-version/c5baf7e/ara-report/result/09433617-44dd-4ffd-9c57-d62e04dfd75e/. Reading into that, we appear to be running the script from the wrong local directory, so relative paths don't work as expected. I have proposed https://review.openstack.org/559373 to fix this. Clark
From slawek at kaplonski.pl Fri Apr 6 17:04:29 2018 From: slawek at kaplonski.pl (=?utf-8?B?U8WCYXdlayBLYXDFgm/FhHNraQ==?=) Date: Fri, 6 Apr 2018 19:04:29 +0200 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> Message-ID: <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> Hi, Another idea is to modify the test so that it will:

1. Check how many ports are in the tenant,
2. Set the quota to the actual number of ports + 1 instead of the hardcoded 1 as it is now,
3. Try to add 2 ports - exactly as it is now.

I think that this should still be backend agnostic and should fix this problem.
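A minimal sketch of what that could look like inside the test follows. The client method names here only approximate the neutron-tempest-plugin style and should be treated as pseudocode rather than the exact API:

    # Sketch only: method names approximate neutron-tempest-plugin helpers.
    from tempest.lib import exceptions as lib_exc

    def test_create_port_when_quotas_is_full(self):
        # Count whatever ports the backend pre-created in this tenant
        # (e.g. the OVN metadata port), instead of assuming zero.
        ports = self.admin_client.list_ports(
            tenant_id=self.tenant_id)['ports']
        # Allow exactly one more port than currently exists.
        self.admin_client.update_quotas(self.tenant_id, port=len(ports) + 1)
        # The first creation should succeed...
        self.create_port(self.network)
        # ...and the second should now hit the quota, on any backend.
        self.assertRaises(lib_exc.Conflict,
                          self.create_port, self.network)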
> Message written by Sławek Kapłoński on 06.04.2018 at 17:08: > > Hi, > > I don't know how networking-ovn works, but I have one question. > > >> Message written by Daniel Alvarez Sanchez on 06.04.2018 at 15:30: >> >> Hi, >> >> Thanks, Lucas, for writing this down. >> >> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes wrote: >> Hi, >> >> The tests below are failing in the tempest API / Scenario job that >> runs in the networking-ovn gate (non-voting): >> >> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full >> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status >> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status >> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen >> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota >> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr >> >> Digging a bit into it I noticed that, with the exception of the two >> "test_router_interface_status" tests (ipv6 and ipv4), all other tests are >> failing because of the way metadata works in networking-ovn. >> >> Taking "test_create_port_when_quotas_is_full" as an example: the >> reason why it fails is that when OVN metadata is enabled, >> networking-ovn will create a metadata port at the moment a network is created >> [0], and that will already fulfill the quota limit set by that test >> [1]. >> >> That port will also allocate an IP from the subnet, which will cause >> the rest of the tests to fail with a "No more IP addresses available >> on network ..." error. >> >> With ML2/OVS we would run into the same quota problem if DHCP were >> enabled for the created subnets. This means that if we modify the current tests >> to enable DHCP on them and we account for this extra port, it would be valid for >> networking-ovn as well. Does that sound good, or do we still want to isolate quotas? > If DHCP is enabled for networking-ovn, will it use one more port as well or not? If so, then you will still have the same problem with DHCP as in ML2/OVS: you will have one port created, and for networking-ovn it will be 2 ports. > If not, then I think this solution, with some comment in the test code explaining why DHCP is enabled, should be good IMO. >> >> This is not very trivial to fix because: >> >> 1. Tempest should be backend agnostic. So, adding a conditional in the >> tempest test to check whether OVN is being used or not doesn't sound >> correct. >> >> 2. Creating a port to be used by the metadata agent is a core part of >> the design implementation for the metadata functionality [2] >> >> So, I'm sending this email to try to figure out what would be the best >> approach to deal with this problem and start working towards having >> that job be voting in our gate. Here are some ideas: >> >> 1. Simply disable the tests that are affected by the metadata approach. >> >> 2. Disable metadata for the tempest API / Scenario tests (here's a >> test patch doing it [3]) >> >> IMHO, we don't want to do this, as metadata is likely to be enabled in all >> clouds using either ML2/OVS or OVN, so it's good to keep exercising >> this part. >> >> 3. Same as 1. but also create similar tempest tests specific to OVN >> somewhere else (in the networking-ovn tree?!) >> >> As we discussed on IRC, I'm keen on doing this instead of getting bits in >> tempest to do different things depending on the backend used. Unless >> we want to enable DHCP on the subnets that these tests create :) >> >> What do you think would be the best way to work around this problem, any >> other ideas?
>> >> As for the "test_router_interface_status" tests that are failing >> independent of the metadata, there's a bug reporting the problem here >> [4]. So we should just fix it. >> >> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154 >> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66 >> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach >> [3] https://review.openstack.org/#/c/558792/ >> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835 >> >> Cheers, >> Lucas >> >> Thanks, >> Daniel >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski slawek at kaplonski.pl -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL:
From kchamart at redhat.com Fri Apr 6 17:07:03 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Fri, 6 Apr 2018 19:07:03 +0200 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu> <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> Message-ID: <20180406170703.GD18076@paraplu> On Fri, Apr 06, 2018 at 06:07:18PM +0200, Thomas Goirand wrote: > On 04/06/2018 12:07 PM, Kashyap Chamarthy wrote: [...] > > Note: You don't even have to build the versions from 'Buster', which are > > quite new. Just the slightly more conservative libvirt 3.2.0 and QEMU > > 2.9.0 -- only if it's possible. > > Actually, for *official* backports, it's the policy to always update to > whatever is in testing until testing is frozen. I see. Sure, that's fine, too (as "Queens" UCA also has it). Whatever is efficient and least painful from a maintenance POV. > I could maintain an unofficial backport in stretch-stein.debian.net > though. > > > That said ... I just spent some time comparing the release notes of libvirt 3.0.0 > > and libvirt 3.2.0[1][2]. By using libvirt 3.2.0 and QEMU 2.9.0, Debian users > > will be spared from a lot of critical bugs (see the full list in [3]) in the > > CPU comparison area.
> > > > [1] https://www.redhat.com/archives/libvirt-announce/2017-April/msg00000.html > > -- Release of libvirt-3.2.0 > > [2] https://www.redhat.com/archives/libvirt-announce/2017-January/msg00003.html > > -- Release of libvirt-3.0.0 > > [3] https://www.redhat.com/archives/libvir-list/2017-February/msg01295.html > > So, because of these bugs, would you already advise Nova users to use > libvirt 3.2.0 for Queens? FWIW, I'd suggest so, if it's not too much maintenance. It'll just spare you additional bug reports in that area, and the overall default experience when dealing with CPU models would be relatively much better. (Another way to look at it is, multiple other "conservative" long-term stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that should give you confidence.) Again, I don't want to push too hard on this. If that'll be messy from a package maintenance POV for you / Debian maintainers, then we could settle with whatever is in 'Stretch'. Thanks for looking into it. -- /kashyap
From mriedemos at gmail.com Fri Apr 6 17:12:31 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 6 Apr 2018 12:12:31 -0500 Subject: [openstack-dev] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <20180406170703.GD18076@paraplu> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu> <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> <20180406170703.GD18076@paraplu> Message-ID: <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com> On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote: > FWIW, I'd suggest so, if it's not too much maintenance. It'll just > spare you additional bug reports in that area, and the overall default > experience when dealing with CPU models would be relatively much better. > (Another way to look at it is, multiple other "conservative" long-term > stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that > should give you confidence.) > > Again, I don't want to push too hard on this. If that'll be messy from > a package maintenance POV for you / Debian maintainers, then we could > settle with whatever is in 'Stretch'. Keep in mind that Kashyap has a tendency to want the latest and greatest of libvirt and qemu at all times for all of those delicious bug fixes. But we also know that new code brings new not-yet-fixed bugs. Keep in mind the big picture here: we're talking about bumping from minimum required (in Rocky) libvirt 1.3.1 to at least 3.0.0 (in Stein) and qemu 2.5.0 to at least 2.8.0, so I think that's already covering some good ground. Let's not get greedy. :) -- Thanks, Matt
From johnsomor at gmail.com Fri Apr 6 17:21:29 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 6 Apr 2018 10:21:29 -0700 Subject: [openstack-dev] [ALL][PTLs] [Community goal] Toggle the debug option at runtime In-Reply-To: <5A518503-7049-4574-B3AC-0F093E94EF75@kaplonski.pl> References: <5A518503-7049-4574-B3AC-0F093E94EF75@kaplonski.pl> Message-ID: Yeah, neutron-lbaas runs in the context of the neutron service (it is a neutron extension), so it would be covered by neutron completing the goal. Michael On Fri, Apr 6, 2018 at 3:37 AM, Sławek Kapłoński wrote: > Hi, > > Thanks, Akihiro, for the help. I added a „neutron-dynamic-routing” task to this story and I will push a patch for it soon.
> There are still so many things that I need to learn about OpenStack and Neutron :) > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > > >> Message written by Akihiro Motoki on 06.04.2018 at 11:34: >> >> >> Hi Slawek, >> >> 2018-04-06 17:38 GMT+09:00 Sławek Kapłoński : >> Hi, >> >> One more question about the implementation of this goal. Should we take care of (and add to the storyboard [1]) projects like: >> >> In my understanding, tasks in the storyboard story are prepared per project team listed in the governance. >> IMHO, repositories which belong to a project team should be handled as a single task. >> >> The situations vary across repositories. >> >> >> openstack/neutron-lbaas >> >> This should be covered by the octavia team. >> >> openstack/networking-cisco >> openstack/networking-dpm >> openstack/networking-infoblox >> openstack/networking-l2gw >> openstack/networking-lagopus >> >> The above repos are not official repos. >> Maintainers of each repo can follow the community goal, but there is no need for them to be tracked as the neutron team. >> >> openstack/neutron-dynamic-routing >> >> This repo is part of the neutron team. We, the neutron team, need to cover this. >> >> FYI: The list of official repositories covered by the neutron team is available here. >> https://governance.openstack.org/tc/reference/projects/neutron.html >> >> Thanks, >> Akihiro >> >> >> which look like they should probably also be changed in some way. Or maybe the list of affected projects in [1] is „closed”, and if some project is not there, it shouldn't be changed to accomplish this community goal? >> >> [1] https://storyboard.openstack.org/#!/story/2001545 >> >> — >> Best regards >> Slawek Kaplonski >> slawek at kaplonski.pl >> >> >> >> >> > Message written by ChangBo Guo on 26.03.2018 at 14:15: >> > >> > >> > 2018-03-22 16:12 GMT+08:00 Sławomir Kapłoński : >> > Hi, >> > >> > I took care of the implementation of [1] in Neutron and I have a couple of questions about this goal. >> > >> > 1. Should we only change "restart_method" to mutate as is described in [2] ? I did already something like that in [3] - is it what is expected? >> > >> > Yes, that's the only thing. We need to test if it works. >> > >> > 2. How can I check that this change is fine and config options are mutable, exactly? For now, when I change any config option for any of the neutron agents and send SIGHUP to it, it is in fact "restarted" and config is reloaded even with this old restart method. >> > >> > Good question, we indeed thought about this when we proposed the goal. But it seems difficult to test that automatically in consuming projects like Neutron. >> > >> > 3. Should we add any automatic tests for such a change also? Any examples of such tests in other projects maybe? >> > There is no example of such tests now; we only have some unit tests in oslo.service. >> > >> > [1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html >> > [2] https://docs.openstack.org/oslo.config/latest/reference/mutable.html >> > [3] https://review.openstack.org/#/c/554259/ >> > >> > — >> > Best regards >> > Slawek Kaplonski >> > slawek at kaplonski.pl >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > >> > >> > -- >> > ChangBo Guo(gcb) >> > Community Director @EasyStack >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >
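To make the restart_method discussion in the thread above concrete, here is a minimal, self-contained sketch of the mechanism the goal relies on. MyService is a placeholder rather than code from any project; the oslo bits are the documented mutable=True option flag and the 'mutate' restart method:

    from oslo_config import cfg
    from oslo_service import service

    CONF = cfg.CONF
    # A mutable option can be changed by editing the config file and
    # sending SIGHUP; an immutable one needs a real restart.
    CONF.register_opts([cfg.BoolOpt('debug', mutable=True, default=False)])

    class MyService(service.Service):
        pass  # hypothetical service, for illustration only

    # restart_method='mutate' makes SIGHUP reload only the mutable
    # options in place instead of restarting the whole service.
    launcher = service.launch(CONF, MyService(), restart_method='mutate')
    launcher.wait()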
From mbirru at gmail.com Fri Apr 6 18:00:42 2018 From: mbirru at gmail.com (Murali B) Date: Fri, 6 Apr 2018 11:00:42 -0700 Subject: [openstack-dev] [zun] zun-api error In-Reply-To: References: Message-ID: Hi Hongbin Lu, Thank you. After changing the endpoint it worked. Actually I was using the magnum service also. I had used the service type "container" for magnum; that is why it was going to 9511 instead of 9517. After I corrected it, it worked. Thanks -Murali On Fri, Apr 6, 2018 at 8:45 AM, Hongbin Lu wrote: > Hi Murali, > > It looks like your zunclient was sending API requests to > http://10.11.142.2:9511/v1/services , which doesn't seem to be the right > API endpoint. According to the Keystone endpoint you configured, the API > endpoint of Zun should be http://10.11.142.2:9517/v1/services > (it is on port 9517 instead of > 9511). > > What confused the zunclient is the endpoint's type you configured in > Keystone. Zun expects an endpoint of type "container", but it was configured > to be "zun-container" in your setup. I believe the error will be resolved > if you can update the Zun endpoint from type "zun-container" to type > "container". Please give it a try and let us know. > > Best regards, > Hongbin > > On Thu, Apr 5, 2018 at 7:27 PM, Murali B wrote: > >> Hi Hongbin, >> >> Thank you for your help >> >> As per our discussion, here is the output for my current api on pike.
>> I am not sure which version of zun client client I should use for pike >> >> root at cluster3-2:~/python-zunclient# zun service-list >> ERROR: Not Acceptable (HTTP 406) (Request-ID: >> req-be69266e-b641-44b9-9739-0c2d050f18b3) >> root at cluster3-2:~/python-zunclient# zun --debug service-list >> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak >> = vitrageclient.auth:VitrageKeycloakLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth = >> vitrageclient.auth:VitrageNoAuthLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('noauth = >> cinderclient.contrib.noauth:CinderNoAuthLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('v2token = >> keystoneauth1.loading._plugins.identity.v2:Token') >> DEBUG (extension:180) found extension EntryPoint.parse('none = >> keystoneauth1.loading._plugins.noauth:NoAuth') >> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = >> keystoneauth1.extras.oauth1._loading:V3OAuth1') >> DEBUG (extension:180) found extension EntryPoint.parse('admin_token = >> keystoneauth1.loading._plugins.admin_token:AdminToken') >> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode = >> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuth >> orizationCode') >> DEBUG (extension:180) found extension EntryPoint.parse('v2password = >> keystoneauth1.loading._plugins.identity.v2:Password') >> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword = >> keystoneauth1.extras._saml2._loading:Saml2Password') >> DEBUG (extension:180) found extension EntryPoint.parse('v3password = >> keystoneauth1.loading._plugins.identity.v3:Password') >> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword = >> keystoneauth1.extras._saml2._loading:ADFSPassword') >> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken >> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') >> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword = >> keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') >> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = >> keystoneauth1.extras.kerberos._loading:Kerberos') >> DEBUG (extension:180) found extension EntryPoint.parse('token = >> keystoneauth1.loading._plugins.identity.generic:Token') >> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials >> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClie >> ntCredentials') >> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth >> = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') >> DEBUG (extension:180) found extension EntryPoint.parse('v3token = >> keystoneauth1.loading._plugins.identity.v3:Token') >> DEBUG (extension:180) found extension EntryPoint.parse('v3totp = >> keystoneauth1.loading._plugins.identity.v3:TOTP') >> DEBUG (extension:180) found extension EntryPoint.parse('v3applicationcredential >> = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') >> DEBUG (extension:180) found extension EntryPoint.parse('password = >> keystoneauth1.loading._plugins.identity.generic:Password') >> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = >> keystoneauth1.extras.kerberos._loading:MappedKerberos') >> DEBUG (extension:180) found extension EntryPoint.parse('v1password = >> swiftclient.authv1:PasswordLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint = >> 
openstackclient.api.auth_plugin:TokenEndpoint') >> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic = >> gnocchiclient.auth:GnocchiBasicLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth = >> gnocchiclient.auth:GnocchiNoAuthLoader') >> DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth = >> aodhclient.noauth:AodhNoAuthLoader') >> DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H >> "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0 >> python-requests/2.18.1 CPython/2.7.12" >> DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16 >> DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" 200 >> 248 >> DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT >> Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu >> x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >> Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive >> Content-Type: application/json >> RESP BODY: {"version": {"status": "stable", "updated": >> "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", >> "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", >> "links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}} >> >> DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used >> request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >> DEBUG (base:175) Making authentication request to >> http://ubuntu16:35357/v3/auth/tokens >> DEBUG (connectionpool:395) http://ubuntu16:35357 "POST /v3/auth/tokens >> HTTP/1.1" 201 10333 >> DEBUG (base:180) {"token": {"is_domain": false, "methods": ["password"], >> "roles": [{"id": "4000a662be2d47fd8fdf5a0fef66767d", "name": "admin"}], >> "expires_at": "2018-04-06T00:11:08.000000Z", "project": {"domain": {"id": >> "default", "name": "Default"}, "id": "a391261cffba4f4c827ab7420a352fe1", >> "name": "admin"}, "catalog": [{"endpoints": [{"url": " >> http://cluster3-2:9517/v1", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "5a634bafa38c45dbb571f0edb3702101"}, >> {"url": "http://cluster3-2:9517/v1", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "8926d37d276a4fe49df66bb513f7906a"}, >> {"url": "http://cluster3-2:9517/v1", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "a74e1b4faf39436aa5d6f9b446ceee92"}], >> "type": "container-zun", "id": "025154eef222461da9edcfe32ae79e5e", >> "name": "zun"}, {"endpoints": [{"url": "http://ubuntu16:9001", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "3a94c0df20da47d1b922541a87576ab0"}, {"url": "http://ubuntu16:9001", >> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >> "id": "5fcab2a59c72433581510d7aafe29961"}, {"url": "http://ubuntu16:9001", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "71e314291a4b4c648aa5ba662b216fa6"}], "type": "dns", "id": >> "07677b58ad4d469d80dbda8e9fa908bc", "name": "designate"}, {"endpoints": >> [{"url": "http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "4d56ee7967994c869239007146e52ab8"}, {"url": " >> http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": >> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >> "9845138d25ec41b1a7102d8365f1b9c7"}, {"url": " >> 
http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": >> "public", "region": "RegionOne", "region_id": "RegionOne", "id": >> "f99f9bf4b0eb4e19aa8dbe72fc13e648"}], "type": "volumev2", "id": >> "077bd5ecfc59499ab84f49e410efef4f", "name": "cinderv2"}, {"endpoints": >> [{"url": "http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "355c6c323653469c8315d5dea2998b0d"}, {"url": " >> http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": >> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >> "841768ec3edb42d7b18fe6a2a17f4dbc"}, {"url": " >> http://10.11.142.2:8004/v1/a391261cffba4f4c827ab7420a352fe1", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "afdbc1d2a5114cd9b0714331eb227ba9"}], "type": "orchestration", >> "id": "116243d61e3a4c90b7144d6a8b5a170a", "name": "heat"}, {"endpoints": >> [{"url": "http://ubuntu16:8778", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "2dacce3eed484464b3f521b7b2720cd9"}, >> {"url": "http://ubuntu16:8778", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "5300f9ae336c41b8a8bb93400db35a30"}, >> {"url": "http://ubuntu16:8778", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "5c7e2cc977f74051b0ed104abb1d46a9"}], >> "type": "placement", "id": "1d270e2d3d4f488e82597097af933e7a", "name": >> "placement"}, {"endpoints": [{"url": "http://ubuntu16:8042", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "337f147396f143679e6cf7fbdd3601ab"}, {"url": "http://ubuntu16:8042", >> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >> "id": "a97d660772e64894b4b13092d7719298"}, {"url": "http://ubuntu16:8042", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "bb5caf186c9947aca31e6ee2a37f6bbd"}], "type": "alarming", "id": >> "2a19c1a28a42433caa8eb919910ec06f", "name": "aodh"}, {"endpoints": [], >> "type": "volume", "id": "39c740b891764e4a9081773709269848", "name": >> "cinder"}, {"endpoints": [{"url": "http://ubuntu16:8041", "interface": >> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >> "9d455913a5fb4f15bbe15740f4dee260"}, {"url": "http://ubuntu16:8041", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "c5c2471db1cb4ae7a1f3e847404d4b37"}, {"url": "http://ubuntu16:8041", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "cc12daed5ea342a1a47602720589cb9e"}], "type": "metric", "id": >> "39fdf2d5300343aa8ebe5509d29ba7ce", "name": "gnocchi"}, {"endpoints": >> [{"url": "http://cluster3-2:9890", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "1c7ddc56ba984afd8187cd1894a75bf1"}, >> {"url": "http://cluster3-2:9890", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "888925c4fc8b48859f086860333c3ab4"}, >> {"url": "http://cluster3-2:9890", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "9bfd7198dab14f6a8b7eba444f920020"}], >> "type": "nfv-orchestration", "id": "3da88eae843a4949806186db8a9a3bd0", >> "name": "tacker"}, {"endpoints": [{"url": "http://10.11.142.2:8999", >> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >> "id": "32880f809a2f45598a9838e4b168ce5b"}, {"url": " >> http://10.11.142.2:8999", "interface": "public", 
"region": "RegionOne", >> "region_id": "RegionOne", "id": "530711f56f234ad19775fae65774c0ab"}, >> {"url": "http://10.11.142.2:8999", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "8d7493ad752b453b87d789d0ec5cae93"}], >> "type": "rca", "id": "55f78369ea5e40e3b9aa9ded854cb163", "name": >> "vitrage"}, {"endpoints": [{"url": "http://10.11.142.2:5000/v3/", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "afba4b58fd734baeaed94f8f2380a986"}, {"url": " >> http://ubuntu16:5000/v3/", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "b4b864acfc1746b3ad2d22c6a28e1361"}, >> {"url": "http://ubuntu16:35357/v3/", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "bf256df5f8d34e9c80c00b78da122118"}], >> "type": "identity", "id": "58b4ff04dc764fc2aae4bfd9d0f1eb8e", "name": >> "keystone"}, {"endpoints": [{"url": "http://ubuntu16:8776/v3/a3912 >> 61cffba4f4c827ab7420a352fe1", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "260f8b9e9e214cc1a39407517b3ca826"}, >> {"url": "http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "81adeaccba1c4203bddb7734f23116a8"}, {"url": " >> http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": >> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >> "e63332e8b15e43c6b9c331d9ee8551ab"}], "type": "volumev3", "id": >> "8cd6101718e94ee198cf9ba9894bf1c9", "name": "cinderv3"}, {"endpoints": >> [{"url": "http://ubuntu16:9696", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "65a0b4233436428ab42aa3b40b1ce53f"}, >> {"url": "http://ubuntu16:9696", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "b8354dd727154056b3c9b81b89054bab"}, >> {"url": "http://ubuntu16:9696", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "ca44db85238b46cf9fbb6dc6f1d9dff5"}], >> "type": "network", "id": "ade912885a73431f95a3a01d8a8e6498", "name": >> "neutron"}, {"endpoints": [{"url": "http://ubuntu16:8000/v1", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "5d7559010ea94cca9edd7ab6213f6b2c"}, {"url": " >> http://ubuntu16:8000/v1", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "af77025677284808b0715488e22729d4"}, >> {"url": "http://10.11.142.2:8000/v1", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "c17b650eccf14045af49d5e9d050e875"}], >> "type": "cloudformation", "id": "b04f735f46e743969e2bb0fff3aee1b5", >> "name": "heat-cfn"}, {"endpoints": [{"url": " >> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >> "id": "18580f7a6dea4c53bc66d161e7e0a71e"}, {"url": " >> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "b4a8575704a4426494edc57551f40e58"}, {"url": " >> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >> "id": "c41ec544b61c41098c07030bc84ba2a0"}], "type": "compute", "id": >> "b06f4aa21a4a488c8f0c5a835e639bd3", "name": "nova"}, {"endpoints": >> [{"url": "http://ubuntu16:9292", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": 
"4ed27e537ca34b6fb93a8c72d8921d24"}, >> {"url": "http://ubuntu16:9292", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "ab0c37600ecf45d797e7972dc6a4fde2"}, >> {"url": "http://ubuntu16:9292", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "f4a0f97be4f343d698ea12633e3823d6"}], >> "type": "image", "id": "bbe4fbb4a1d7495f948faa9baf1e3828", "name": >> "glance"}, {"endpoints": [{"url": "http://ubuntu16:8777", "interface": >> "public", "region": "RegionOne", "region_id": "RegionOne", "id": >> "3d160f2286634811b24b8abd6ad72c1f"}, {"url": "http://ubuntu16:8777", >> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >> "id": "a988e821ff1f4760ae3873c17ab87294"}, {"url": "http://ubuntu16:8777", >> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >> "id": "def8c07174184a0ca26e2f0f26d60a73"}], "type": "metering", "id": >> "f4450730522d4342ac6626b81567b36c", "name": "ceilometer"}, {"endpoints": >> [{"url": "http://ubuntu16:9511/v1", "interface": "internal", "region": >> "RegionOne", "region_id": "RegionOne", "id": "19e14e5c5c5a4d3db6a6a632db728668"}, >> {"url": "http://10.11.142.2:9511/v1", "interface": "public", "region": >> "RegionOne", "region_id": "RegionOne", "id": "28fb2092bcc748ce88dfb1284ace1264"}, >> {"url": "http://10.11.142.2:9511/v1", "interface": "admin", "region": >> "RegionOne", "region_id": "RegionOne", "id": "c33f5b4a355d4067aa2e7093606cd153"}], >> "type": "container", "id": "fdbcff09ecd545c8ba28bfd96782794a", "name": >> "magnum"}], "user": {"domain": {"id": "default", "name": "Default"}, >> "password_expires_at": null, "name": "admin", "id": >> "3b136545b47b40709b78b1e36cdcdc63"}, "audit_ids": >> ["Ad1z5kAmRBehcgxG6-8IYA"], "issued_at": "2018-04-05T23:11:08.000000Z"}} >> DEBUG (session:372) REQ: curl -g -i -X GET http://10.11.142.2:9511/v1/ser >> vices -H "OpenStack-API-Version: container 1.2" -H "X-Auth-Token: >> {SHA1}7523b440595290414cefa54434fc7c8adbec5c3d" -H "Content-Type: >> application/json" -H "Accept: application/json" -H "User-Agent: None" >> DEBUG (connectionpool:207) Starting new HTTP connection (1): 10.11.142.2 >> DEBUG (connectionpool:395) http://10.11.142.2:9511 "GET /v1/services >> HTTP/1.1" 406 166 >> DEBUG (session:419) RESP: [406] Content-Type: application/json >> Content-Length: 166 x-openstack-request-id: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >> Server: Werkzeug/0.10.4 Python/2.7.12 Date: Thu, 05 Apr 2018 23:11:09 GMT >> RESP BODY: {"errors": [{"status": 406, "code": "", "links": [], "title": >> "Not Acceptable", "detail": "Invalid service type for OpenStack-API-Version >> header", "request_id": ""}]} >> >> DEBUG (session:722) GET call to container for >> http://10.11.142.2:9511/v1/services used request id >> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >> DEBUG (shell:705) Not Acceptable (HTTP 406) (Request-ID: >> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >> Traceback (most recent call last): >> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line >> 703, in main >> map(encodeutils.safe_decode, sys.argv[1:])) >> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", line >> 639, in main >> args.func(self.cs, args) >> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services_shell.py", >> line 22, in do_service_list >> services = cs.services.list() >> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services.py", >> line 70, in list >> return self._list(self._path(path), "services") >> File 
"/usr/local/lib/python2.7/dist-packages/zunclient/common/base.py", >> line 128, in _list >> resp, body = self.api.json_request('GET', url) >> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >> line 368, in json_request >> resp = self._http_request(url, method, **kwargs) >> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >> line 351, in _http_request >> error_json.get('debuginfo'), method, url) >> NotAcceptable: Not Acceptable (HTTP 406) (Request-ID: >> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >> ERROR: Not Acceptable (HTTP 406) (Request-ID: >> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >> >> >> >> Thanks >> -Murali >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Fri Apr 6 18:41:01 2018 From: mark at stackhpc.com (Mark Goddard) Date: Fri, 6 Apr 2018 19:41:01 +0100 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images In-Reply-To: References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> Message-ID: One benefit of the kolla API that I've not seen mentioned yet (sorry if I missed it) is that you can change files on the host without affecting the running container. Bind mounts don't have this property. This is handy for reconfiguration/upgrade operations, where we write out a new set of config before recreating/restarting the container. COPY_ONCE is the king of immutable here, but even for COPY_ALWAYS, this works as long as the container doesn't restart while the config files are being written. Mark On 5 April 2018 at 21:41, Michał Jastrzębski wrote: > So I'll re-iterate comment which I made in BCN. In previous thread we > praised how Kolla provided stable API for images, and I agree that it > was great design choice (to provide stable API, not necessarily how > API looks), and this change would break it. So *if* we decide to do > it, we need to follow deprecation, that means we could deprecate these > files in this release and start removing them in next. > > Support for LOCI in kolla-ansible is good thing, but I don't think > changing Kolla image API is required for that. LOCI provides base > image arument, so we could simply create base-image with all the > extended-start and set-config mechanisms and some shim to source > extended-start script that belongs to particular container. We will > need kolla layer image anyway because set_config is there to stay (as > Martin pointed out it's valuable tool fixing real issue and it's used > by more projects than just kolla-ansible). We could add another script > that would look like extended_start.sh -> source > $CONTAINER_NAME-extended-start.sh and copy all kolla's extended start > scripts to dir with proper naming (I believe this is solution that Sam > came up with shortly after BCN). This is purely techincal and not that > hard to do, much quicker and easier than deprecating API... > > On 5 April 2018 at 12:28, Martin André wrote: > > On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke > wrote: > >> Hi all, > >> > >> This mail is to serve as a follow on to the discussion during > yesterday's > >> team meeting[4], which was regarding the desire to move start scripts > out of > >> the kolla images [0]. There's a few factors at play, and it may well be > best > >> left to discuss in person at the summit in May, but hopefully we can > get at > >> least some of this hashed out before then. 
> > On 5 April 2018 at 12:28, Martin André wrote: > > On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke > wrote: > >> Hi all, > >> > >> This mail is to serve as a follow-on to the discussion during > yesterday's > >> team meeting[4], which was regarding the desire to move start scripts > out of > >> the kolla images [0]. There are a few factors at play, and it may well be > best > >> left to discuss in person at the summit in May, but hopefully we can > get at > >> least some of this hashed out before then. > >> > >> I'll start by summarising why I think this is a good idea, and then attempt > >> to address some of the concerns that have come up since. > >> > >> First off, to be frank, this effort is driven by wanting to add support > >> for loci images[1] in kolla-ansible. I think it would be unreasonable for > >> anyone to argue this is a bad objective to have; loci images have very > >> obvious benefits over what we have in Kolla today. I'm not looking to drop > >> support for Kolla images at all, I simply want to continue decoupling things > >> to the point where operators can pick and choose what works best for them. > >> Stemming from this, I think moving these scripts out of the images provides > >> a clear benefit to our consumers, both users of kolla and third parties such > >> as triple-o. Let me explain why. > > > > It's still very obscure to me how removing the scripts from kolla > > images will benefit consumers. If the reason is that you want to > > re-use them in other, non-kolla images, I believe we should package > > the scripts. I've left some comments in your spec review. > > > >> Normally, to run a docker image, a user will do 'docker run > >> helloworld:latest'. In any non-trivial application, config needs to be > >> provided. In the vast majority of cases this is either provided via a bind > >> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via > >> environment variables (docker run --env HELLO=paul helloworld:latest). This > >> is all bog standard stuff, something anyone who's spent an hour learning > >> docker can understand. > >> > >> Now, let's say someone wants to try out OpenStack with Docker, and they look > >> at Kolla. First off they have to look at something called > set_configs.py[2] > >> - over 400 lines of Python. Next they need to understand what that script > >> consumes, config.json [3]. The only reference for config.json is the files > >> that live in kolla-ansible, a mass of jinja and assumptions about how the > >> service will be run. Next, they need to figure out how to bind mount the > >> config files and config.json into the container in a way that can be > >> consumed by set_configs.py (which by the way, requires the base kolla image > >> in all cases). This is only for the config. For the service start-up > >> command, this needs to also be provided in config.json. This command is then > >> parsed out and written to a location in the image, which is consumed by a > >> series of start/extend start shell scripts. Kolla is *unique* in this > >> regard, no other project in the container world is interfacing with images > >> in this way. Being a snowflake in this regard is not a good thing. I'm still > >> waiting to hear from a real world operator who would prefer to spend > time > >> learning the above to doing: > > > > You're pointing at a very real documentation issue. I've mentioned in the > > other kolla thread that I have a stub for the kolla API documentation. > > I'll push a patch for what I have and we can iterate on that.
> > See for example this recent patch > > https://review.openstack.org/#/c/556673/ where we needed to change > > some file permission to the uid/gid of the user inside the container. > > > > The first iteration basically used the docker API and started an > > additional container to fix the permissions: > > > > docker run -v > > /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ > > -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/ > neutron.key:rw > > \ > > neutron_image \ > > /bin/bash -c 'chown neutron:neutron > > /etc/pki/tls/certs/neutron.crt; chown neutron:neutron > > /etc/pki/tls/private/neutron.key' > > > > You'll agree this is not the most obvious. And it had a nasty side > > effect that is changes the permissions of the files _on the host_. > > While using kolla API we could simply add to our config.json: > > > > - path: /etc/pki/tls/certs/neutron.crt > > owner: neutron:neutron > > - path: /etc/pki/tls/private/neutron.key > > owner: neutron:neutron > > > >> The other argument is that this removes the possibility for immutable > >> infrastructure. The concern is, with the new approach, a rookie operator > >> will modify one of the start scripts - resulting in uncertainty that > what > >> was first deployed matches what is currently running. But with the way > Kolla > >> is now, an operator can still do this! They can restart containers with > a > >> custom entrypoint or additional bind mounts, they can exec in and change > >> config files, etc. etc. Kolla containers have never been immutable and > we're > >> bending over backwards to artificially try and make this the case. We > cant > >> protect a bad or inexperienced operator from shooting themselves in the > >> foot, there are better ways of doing so. If/when Docker or the upstream > >> container world solves this problem, it would then make sense for Kolla > to > >> follow suit. > >> > >> On the face of it, what the spec proposes is a simple change, it should > not > >> radically pull the carpet out under people, or even change the way > >> kolla-ansible works in the near term. If consumers such as tripleo or > other > >> parties feel it would in fact do so please do let me know and we can > discuss > >> and mitigate these problems. > > > > TripleO uses these scripts extensively, we certainly do not want to > > see them go away from kolla images. > > > > Martin > > > >> Cheers, > >> -Paul > >> > >> [0] https://review.openstack.org/#/c/550958/ > >> [1] https://github.com/openstack/loci > >> [2] > >> https://github.com/openstack/kolla/blob/master/docker/base/ > set_configs.py > >> [3] > >> https://github.com/openstack/kolla-ansible/blob/master/ > ansible/roles/keystone/templates/keystone.json.j2 > >> [4] > >> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla. 
> 2018-04-04-16.00.log.txt > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stdake at cisco.com Fri Apr 6 18:54:29 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Fri, 6 Apr 2018 18:54:29 +0000 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images In-Reply-To: References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> Message-ID: <32E42C9F-5A89-4A6F-ABED-5DE5A7C37793@cisco.com> +1. From: Mark Goddard Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, April 6, 2018 at 11:41 AM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images One benefit of the kolla API that I've not seen mentioned yet (sorry if I missed it) is that you can change files on the host without affecting the running container. Bind mounts don't have this property. This is handy for reconfiguration/upgrade operations, where we write out a new set of config before recreating/restarting the container. COPY_ONCE is the king of immutable here, but even for COPY_ALWAYS, this works as long as the container doesn't restart while the config files are being written. Mark On 5 April 2018 at 21:41, Michał Jastrzębski > wrote: So I'll re-iterate comment which I made in BCN. In previous thread we praised how Kolla provided stable API for images, and I agree that it was great design choice (to provide stable API, not necessarily how API looks), and this change would break it. So *if* we decide to do it, we need to follow deprecation, that means we could deprecate these files in this release and start removing them in next. Support for LOCI in kolla-ansible is good thing, but I don't think changing Kolla image API is required for that. LOCI provides base image arument, so we could simply create base-image with all the extended-start and set-config mechanisms and some shim to source extended-start script that belongs to particular container. We will need kolla layer image anyway because set_config is there to stay (as Martin pointed out it's valuable tool fixing real issue and it's used by more projects than just kolla-ansible). We could add another script that would look like extended_start.sh -> source $CONTAINER_NAME-extended-start.sh and copy all kolla's extended start scripts to dir with proper naming (I believe this is solution that Sam came up with shortly after BCN). This is purely techincal and not that hard to do, much quicker and easier than deprecating API... 
On 5 April 2018 at 12:28, Martin André > wrote: > On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke > wrote: >> Hi all, >> >> This mail is to serve as a follow-on to the discussion during yesterday's >> team meeting[4], which was regarding the desire to move start scripts out of >> the kolla images [0]. There are a few factors at play, and it may well be best >> left to discuss in person at the summit in May, but hopefully we can get at >> least some of this hashed out before then. >> >> I'll start by summarising why I think this is a good idea, and then attempt >> to address some of the concerns that have come up since. >> >> First off, to be frank, this effort is driven by wanting to add support >> for loci images[1] in kolla-ansible. I think it would be unreasonable for >> anyone to argue this is a bad objective to have; loci images have very >> obvious benefits over what we have in Kolla today. I'm not looking to drop >> support for Kolla images at all, I simply want to continue decoupling things >> to the point where operators can pick and choose what works best for them. >> Stemming from this, I think moving these scripts out of the images provides >> a clear benefit to our consumers, both users of kolla and third parties such >> as triple-o. Let me explain why. > > It's still very obscure to me how removing the scripts from kolla > images will benefit consumers. If the reason is that you want to > re-use them in other, non-kolla images, I believe we should package > the scripts. I've left some comments in your spec review. > >> Normally, to run a docker image, a user will do 'docker run >> helloworld:latest'. In any non-trivial application, config needs to be >> provided. In the vast majority of cases this is either provided via a bind >> mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via >> environment variables (docker run --env HELLO=paul helloworld:latest). This >> is all bog standard stuff, something anyone who's spent an hour learning >> docker can understand. >> >> Now, let's say someone wants to try out OpenStack with Docker, and they look >> at Kolla. First off they have to look at something called set_configs.py[2] >> - over 400 lines of Python. Next they need to understand what that script >> consumes, config.json [3]. The only reference for config.json is the files >> that live in kolla-ansible, a mass of jinja and assumptions about how the >> service will be run. Next, they need to figure out how to bind mount the >> config files and config.json into the container in a way that can be >> consumed by set_configs.py (which by the way, requires the base kolla image >> in all cases). This is only for the config. For the service start-up >> command, this needs to also be provided in config.json. This command is then >> parsed out and written to a location in the image, which is consumed by a >> series of start/extend start shell scripts. Kolla is *unique* in this >> regard, no other project in the container world is interfacing with images >> in this way. Being a snowflake in this regard is not a good thing. I'm still >> waiting to hear from a real world operator who would prefer to spend time >> learning the above to doing: > > You're pointing at a very real documentation issue. I've mentioned in the > other kolla thread that I have a stub for the kolla API documentation. > I'll push a patch for what I have and we can iterate on that.
> >> docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint >> /usr/bin/keystone [args] >> >> This is the Docker API, it's easy to understand and pretty much the standard >> at this point. > > Sure, using the docker API works for simpler cases, not too > surprisingly once you start doing more funky things with your > containers you're quickly reach the docker API limitations. That's > when the kolla API comes in handy. > See for example this recent patch > https://review.openstack.org/#/c/556673/ where we needed to change > some file permission to the uid/gid of the user inside the container. > > The first iteration basically used the docker API and started an > additional container to fix the permissions: > > docker run -v > /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ > -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw > \ > neutron_image \ > /bin/bash -c 'chown neutron:neutron > /etc/pki/tls/certs/neutron.crt; chown neutron:neutron > /etc/pki/tls/private/neutron.key' > > You'll agree this is not the most obvious. And it had a nasty side > effect that is changes the permissions of the files _on the host_. > While using kolla API we could simply add to our config.json: > > - path: /etc/pki/tls/certs/neutron.crt > owner: neutron:neutron > - path: /etc/pki/tls/private/neutron.key > owner: neutron:neutron > >> The other argument is that this removes the possibility for immutable >> infrastructure. The concern is, with the new approach, a rookie operator >> will modify one of the start scripts - resulting in uncertainty that what >> was first deployed matches what is currently running. But with the way Kolla >> is now, an operator can still do this! They can restart containers with a >> custom entrypoint or additional bind mounts, they can exec in and change >> config files, etc. etc. Kolla containers have never been immutable and we're >> bending over backwards to artificially try and make this the case. We cant >> protect a bad or inexperienced operator from shooting themselves in the >> foot, there are better ways of doing so. If/when Docker or the upstream >> container world solves this problem, it would then make sense for Kolla to >> follow suit. >> >> On the face of it, what the spec proposes is a simple change, it should not >> radically pull the carpet out under people, or even change the way >> kolla-ansible works in the near term. If consumers such as tripleo or other >> parties feel it would in fact do so please do let me know and we can discuss >> and mitigate these problems. > > TripleO uses these scripts extensively, we certainly do not want to > see them go away from kolla images. 
> > Martin > >> Cheers, >> -Paul >> >> [0] https://review.openstack.org/#/c/550958/ >> [1] https://github.com/openstack/loci >> [2] >> https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py >> [3] >> https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 >> [4] >> http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Apr 6 19:18:07 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 6 Apr 2018 20:18:07 +0100 Subject: [openstack-dev] [nova] [placement] placement update 18-14 In-Reply-To: References: Message-ID: <027de90a-9b72-a8a1-0231-4f5ca1963473@gmail.com> Thanks, as always, for the excellent summary emails, Chris. Comments inline. On 04/06/2018 01:54 PM, Chris Dent wrote: > > This is "contract" style update. New stuff will not be added to the > lists. > > # Most Important > > There doesn't appear to be anything new with regard to most > important. That which was important remains important. At the > scheduler team meeting at the start of the week there was talk of > working out ways to trim the amount of work in progress by using the > nova priorities tracking etherpad to help sort things out: > >     https://etherpad.openstack.org/p/rocky-nova-priorities-tracking > > Update provider tree and nested allocation candidates remain > critical basic functionality on which much else is based. With most > of provider tree done, it's really on nested allocation candidates. Yup. And that series is deadlocked on a disagreement about whether granular request groups should be "separate by default" (meaning: if you request multiple groups of resources, the expectation is that they will be served by distinct resource providers) or "unrestricted by default" (meaning: if you request multiple groups of resources, those resources may or may not be serviced by distinct resource providers). For folk's information, the latter (unrestricted by default) is the *existing* behaviour as outlined in the granular request groups spec: http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html Specifically, it is Requirement 3 on the above spec that is the primary driver for this debate. I currently have an action item to resolve this debate and move forward with a decision, whatever that may be. > # What's Changed > > Quite a bit of provider tree related code has merged. > > Some negotiation happened with regard to when/if the fixes for > shared providers is going to happen. 
I'm not sure how that resolved, > if someone can follow up with that, that would be most excellent. Sharing providers are in a weird place right now, agreed. We have landed lots of code on the placement side of the house for handling sharing providers. However, the nova-compute service still does not know about the providers that share resources with it. This makes it impossible right now to have a compute node with local disk storage as well as shared disk resources. > Most of the placement-req-filter series merged. > > The spec for error codes in the placement API merged (code is in > progress and ready for review, see below). > > # Questions > > * Eric and I discussed earlier in the week that it might be a good >   time to start an #openstack-placement IRC channel, for two main >   reasons: break things up so as to limit the crosstalk in the often >   very busy #openstack-nova channel and to lend a bit of momentum >   for going in that direction. Is this okay with everyone? If not, >   please say so, otherwise I'll make it happen soon. Cool with me. I know Matt has wanted a separate placement channel for a while now. > * Shared providers status? >   (I really think we need to make this go. It was one of the >   original value propositions of placement: being able to accurate >   manage shared disk.) Agreed, but you know.... NUMA. And CPU pinning. And vGPUs. And FPGAs. And physnet network bandwidth scheduling. And... well, you get the idea. Best, -jay > # Bugs > > * Placement related bugs not yet in progress:  https://goo.gl/TgiPXb >    15, -1 on last week > * In progress placement bugs: https://goo.gl/vzGGDQ >    13, +1 on last week > > # Specs > > These seem to be divided into three classes: > > * Normal stuff > * Old stuff not getting attention or newer stuff that ought to be >   abandoned because of lack of support > * Anything related to the client side of using nested providers >   effectively. This apparently needs a lot of thinking. If there are >   some general sticking points we can extract and resolve, that >   might help move the whole thing forward? > > * https://review.openstack.org/#/c/549067/ >       VMware: place instances on resource pool >       (using update_provider_tree) > > * https://review.openstack.org/#/c/545057/ >       mirror nova host aggregates to placement API > > * https://review.openstack.org/#/c/552924/ >      Proposes NUMA topology with RPs > > * https://review.openstack.org/#/c/544683/ >      Account for host agg allocation ratio in placement > > * https://review.openstack.org/#/c/552927/ >      Spec for isolating configuration of placement database >      (This has a strong +2 on it but needs one more.) > > * https://review.openstack.org/#/c/552105/ >      Support default allocation ratios > > * https://review.openstack.org/#/c/438640/ >      Spec on preemptible servers > > * https://review.openstack.org/#/c/556873/ >    Handle nested providers for allocation candidates > > * https://review.openstack.org/#/c/556971/ >    Add Generation to Consumers > > * https://review.openstack.org/#/c/557065/ >    Proposes Multiple GPU types > > * https://review.openstack.org/#/c/555081/ >    Standardize CPU resource tracking > > * https://review.openstack.org/#/c/502306/ >    Network bandwidth resource provider > > * https://review.openstack.org/#/c/509042/ >    Propose counting quota usage from placement > > # Main Themes > > ## Update Provider Tree > > Most of the main guts of this have merged (huzzah!). 
What's left are > some loose end details, and clean handling of aggregates: > >     https://review.openstack.org/#/q/topic:bp/update-provider-tree > > ## Nested providers in allocation candidates > > Representing nested provides in the response to GET > /allocation_candidates is required to actually make use of all the > topology that update provider tree will report. That work is in > progress at: > >     https://review.openstack.org/#/q/topic:bp/nested-resource-providers > > https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates > > > Note that some of this includes the up-for-debate shared handling. > > ## Request Filters > > As far as I can tell this is mostly done (yay!) but there is a loose > end: We merged an updated spec to support multiple member_of > parameters, but it's not clear anybody is currently owning that: > >      https://review.openstack.org/#/c/555413/ > > ## Mirror nova host aggregates to placement > > This makes it so some kinds of aggregate filtering can be done > "placement side" by mirroring nova host aggregates into placement > aggregates. > > > https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates > > It's part of what will make the req filters above useful. > > ## Forbidden Traits > > A way of expressing "I'd like resources that do _not_ have trait X". > This is ready for review: > >       https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits > > ## Consumer Generations > > This allows multiple agents to "safely" update allocations for a > single consumer. There is both a spec and code in progress for this: > >      https://review.openstack.org/#/q/topic:bp/add-consumer-generation > > # Extraction > > Small bits of work on extraction continue on the > bp/placement-extract topic: > >     https://review.openstack.org/#/q/topic:bp/placement-extract > > The spec for optional database handling got some nice support > but needs more attention: > >      https://review.openstack.org/#/c/552927/ > > Jay has declared that he's going to start work on the > os-resources-classes library. > > I've posted a 6th in my placement container playground series: > >     https://anticdent.org/placement-container-playground-6.html > > Though not directly related to extraction, that experimentation has > exposed a lot of the areas where work remains to be done to make > placement independent of nova. > > A recent experiment with shrinking the repo to just the placement > dir reinforced a few things we already know: > > * The placement tests need their own base test to avoid 'from nova >   import test' > * That will need to provide database and other fixtures (such a >   config and the self.flags feature). > * And, of course, eventually, config handling. The container >   experiments above demonstrate just how little config placement >   actually needs (by design, let's keep it that way). > > # Other > > This is a contract week, so nothing new has been added here, despite > there being new work. Part of the intent here it make sure we are > queue-like where we can be. This list maintains its ordering from > week to week: newly discovered things are added to the end. > > There are 14 entries here, -7 on last week. > > That's good. However some of the removals are the result of some > code changing topic (and having been listed here by topic). Some of > the oldest stuff at the top of the list has not moved. 
> > * https://review.openstack.org/#/c/546660/ >       Purge comp_node and res_prvdr records during deletion of >       cells/hosts > > * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky >       A huge pile of improvements to osc-placement > > * https://review.openstack.org/#/c/546713/ >       Add compute capabilities traits (to os-traits) > > * https://review.openstack.org/#/c/524425/ >       General policy sample file for placement > > * https://review.openstack.org/#/c/546177/ >       Provide framework for setting placement error codes > > * https://review.openstack.org/#/c/527791/ >      Get resource provider by uuid or name (osc-placement) > > * https://review.openstack.org/#/c/477478/ >      placement: Make API history doc more consistent > > * https://review.openstack.org/#/c/556669/ >    Handle agg generation conflict in report client > > * https://review.openstack.org/#/c/556628/ >    Slugification utilities for placement names > > * https://review.openstack.org/#/c/557086/ >    Remove usage of [placement]os_region_name > > * https://review.openstack.org/#/c/556633/ >    Get rid of 406 paths in report client > > * https://review.openstack.org/#/c/537614/ >    Add unit test for non-placement resize > > * https://review.openstack.org/#/c/554357/ >    Address issues raised in adding member_of to GET /a-c > > * https://review.openstack.org/#/c/493865/ >    cover migration cases with functional tests > > # End > > 2 runway slots open up this coming Wednesday, the 11th. > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mbayer at redhat.com Fri Apr 6 19:21:19 2018 From: mbayer at redhat.com (Michael Bayer) Date: Fri, 6 Apr 2018 15:21:19 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? In-Reply-To: <20180404090026.xl22i4kyplurq36z@localhost> References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> <20180404090026.xl22i4kyplurq36z@localhost> Message-ID: On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor wrote: > On 03/04, Jay Pipes wrote: >> On 04/03/2018 11:07 AM, Michael Bayer wrote: >> > The MySQL / MariaDB variants we use nowadays default to >> > innodb_file_per_table=ON and we also set this flag to ON in installer >> > tools like TripleO. The reason we like file per table is so that >> > we don't grow an enormous ibdata file that can't be shrunk without >> > rebuilding the database. Instead, we have lots of little .ibd >> > datafiles for each table throughout each openstack database. >> > >> > But now we have the issue that these files also can benefit from >> > periodic optimization which can shrink them and also have a beneficial >> > effect on performance. The OPTIMIZE TABLE statement achieves this, >> > but as would be expected it itself can lock tables for potentially a >> > long time. Googling around reveals a lot of controversy, as various >> > users and publications suggest that OPTIMIZE is never needed and would >> > have only a negligible effect on performance. However here we seek >> > to use OPTIMIZE so that we can reclaim disk space on tables that have >> > lots of DELETE activity, such as keystone "token" and ceilometer >> > "sample". >> > >> > Questions for the group: >> > >> > 1. 
is OPTIMIZE table worthwhile to be run for tables where the
>> > datafile has grown much larger than the number of rows we have in the
>> > table?
>>
>> Possibly, though it's questionable to use MySQL/InnoDB for storing transient
>> data that is deleted often, like ceilometer samples and keystone tokens. A
>> much better solution is to use RDBMS partitioning so you can simply ALTER
>> TABLE .. DROP PARTITION those partitions that are no longer relevant (and
>> don't even bother DELETEing individual rows) or, in the case of Ceilometer
>> samples, don't use a traditional RDBMS for timeseries data at all...
>>
>> But since that is unfortunately already the case, yes, it is probably a good
>> idea to OPTIMIZE TABLE on those tables.
>>
>> > 2. from people's production experience how safe is it to run OPTIMIZE,
>> > e.g. how long is it locking tables, etc.
>>
>> Is it safe? Yes.
>>
>> Does it lock the entire table for the duration of the operation? No. It uses
>> online DDL operations:
>>
>> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html
>>
>> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB
>> tables.
>>
>> > 3. is there a heuristic we can use to measure when we might run this
>> > - e.g. my plan is we measure the size in bytes of each row in a table
>> > and then compare that in some ratio to the size of the corresponding
>> > .ibd file; if the .ibd file is N times larger than the logical data
>> > size, we run OPTIMIZE?
>>
>> I don't believe so, no. Most of what I see recommended is to simply run
>> OPTIMIZE TABLE in a cron job on each table periodically.
>>
>> > 4. I'd like to propose that this job of scanning table datafile sizes in
>> > ratio to logical data sizes, then running OPTIMIZE, be a utility
>> > script that is delivered via oslo.db, and would run for all InnoDB
>> > tables within a target MySQL/MariaDB server generically. That is, I
>> > really *don't* want this to be a script that Keystone, Nova, Ceilometer,
>> > etc. are all maintaining and delivering themselves. This should be done
>> > as a generic pass on a whole database (noting, again, we are only
>> > running it for very specific InnoDB tables that we observe have a poor
>> > logical/physical size ratio).
>>
>> I don't believe this should be in oslo.db. This is strictly the purview of
>> deployment tools and should stay there, IMHO.
>>
> Hi,
>
> As far as I know most projects do "soft deletes" where we just flag the
> rows as deleted and don't remove them from the DB, so it's only when we
> use a management tool and run the "purge" command that we actually
> remove these rows.
>
> Since running the optimize without purging would be meaningless, I'm
> wondering if we should trigger the OPTIMIZE also within the purging
> code. This way we could avoid ineffective runs of the optimize command
> when no purge has happened, and even when we do the optimization we could
> skip the ratio calculation altogether for tables where no rows have been
> deleted (the ratio hasn't changed).
>

The issue is that this OPTIMIZE will block on Galera unless it is run on a
per-node basis along with a change of the wsrep_OSU_method parameter. That
is way out of scope to be redundantly hardcoded in multiple OpenStack
projects, and there's no portable way for Keystone and others to get at the
individual Galera node addresses.
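For reference, the per-node dance that galera_innoptimizer automates boils
down to roughly this (a sketch only -- node addresses and the table are
placeholders, and the SET and the OPTIMIZE must run in the same session for
the RSU setting to apply):

# run against each node in turn so only one node is desynced at a time
for node in galera-1 galera-2 galera-3; do
    mysql -h "$node" -e "SET SESSION wsrep_OSU_method='RSU';
                         OPTIMIZE TABLE keystone.token;"
done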
Putting it in oslo.db would at least be a place where most of this logic
can live, but even then it needs to run against multiple Galera nodes and
needs deployment-specific configuration. *Unless* we say that the OPTIMIZE
of a freshly purged table will be short, and just let it block.

> Ideally the ratio calculation and optimization code would be provided by
> oslo.db to reduce code duplication between projects.

I was hoping to have this be part of oslo.db but there's disagreement on
that :)  If this can't be in oslo.db then the biggest issue facing me on
this is building out a new application and getting it packaged, since this
feature has no home, unless I can ship it as some kind of script packaged
in tripleo.

>
> Cheers,
> Gorka.
>
>
>> > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE
>> > on individual nodes directly. The script at [1] illustrates how to
>> > run this on individual nodes one at a time.
>> >
>> > More succinctly, the Q is:
>> >
>> > a. OPTIMIZE, yes or no?
>>
>> Yes.
>>
>> > b. oslo.db script to run generically, yes or no?
>>
>> No. Just have Triple-O install galera_innoptimizer and run it in a cron job.
>>
>> Best,
>> -jay
>>
>> > thanks for your thoughts!
>> >
>> >
>> > [1] https://github.com/deimosfr/galera_innoptimizer
>> >
>> > __________________________________________________________________________
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mark at stackhpc.com  Fri Apr  6 19:29:58 2018
From: mark at stackhpc.com (Mark Goddard)
Date: Fri, 06 Apr 2018 19:29:58 +0000
Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out
 of Kolla images
In-Reply-To:
References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com>
Message-ID:

On Thu, 5 Apr 2018, 20:28 Martin André, wrote:
> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke > wrote:
> > Hi all,
> >
> > This mail is to serve as a follow-on to the discussion during yesterday's
> > team meeting[4], which was regarding the desire to move start scripts out of
> > the kolla images [0]. There's a few factors at play, and it may well be best
> > left to discuss in person at the summit in May, but hopefully we can get at
> > least some of this hashed out before then.
> >
> > I'll start by summarising why I think this is a good idea, and then attempt
> > to address some of the concerns that have come up since.
> >
> > First off, to be frank, this effort is driven by wanting to add support
> > for loci images[1] in kolla-ansible. I think it would be unreasonable for
> > anyone to argue this is a bad objective to have; loci images have very
> > obvious benefits over what we have in Kolla today.
I'm not looking to
> > drop support for Kolla images at all; I simply want to continue decoupling
> > things to the point where operators can pick and choose what works best for
> > them. Stemming from this, I think moving these scripts out of the images
> > provides a clear benefit to our consumers, both users of kolla and third
> > parties such as triple-o. Let me explain why.
>
> It's still very obscure to me how removing the scripts from kolla
> images will benefit consumers. If the reason is that you want to
> re-use them in other, non-kolla images, I believe we should package
> the scripts. I've left some comments in your spec review.

+1 to extracting and packaging the kolla API. This will make it easier to
test and document, allow for versioning, and make it a first-class citizen
rather than a file in the build context of the base image. Plus, if it
really is as good as some people are arguing, then it should be shared.

For many of the other helper scripts that get bundled into the kolla
images, I can see an argument for pulling these up to the deployment layer.
These could easily be moved to kolla-ansible, and added via config.json. I
guess it would be useful to know whether other deployment tools (tripleo)
are using any of these - if they are shared then perhaps the images are the
best place for them.

> > Normally, to run a docker image, a user will do 'docker run
> > helloworld:latest'. In any non-trivial application, config needs to be
> > provided. In the vast majority of cases this is either provided via a bind
> > mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via
> > environment variables (docker run --env HELLO=paul helloworld:latest). This
> > is all bog-standard stuff, something anyone who's spent an hour learning
> > docker can understand.
> >
> > Now, let's say someone wants to try out OpenStack with Docker, and they look
> > at Kolla. First off they have to look at something called set_configs.py[2]
> > - over 400 lines of Python. Next they need to understand what that script
> > consumes, config.json [3]. The only reference for config.json is the files
> > that live in kolla-ansible, a mass of jinja and assumptions about how the
> > service will be run. Next, they need to figure out how to bind mount the
> > config files and config.json into the container in a way that can be
> > consumed by set_configs.py (which, by the way, requires the base kolla image
> > in all cases). This is only for the config. The service start-up command
> > also needs to be provided in config.json. This command is then parsed out
> > and written to a location in the image, which is consumed by a series of
> > start/extend-start shell scripts. Kolla is *unique* in this regard; no other
> > project in the container world is interfacing with images in this way.
> > Being a snowflake in this regard is not a good thing. I'm still waiting to
> > hear from a real-world operator who would prefer to spend time learning the
> > above to doing:
>
> You're pointing at a very real documentation issue. I've mentioned in the
> other kolla thread that I have a stub for the kolla API documentation.
> I'll push a patch for what I have and we can iterate on that.
>
> > docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint
> > /usr/bin/keystone [args]
> >
> > This is the Docker API; it's easy to understand and pretty much the standard
> > at this point.
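For comparison, running the same container the kolla way looks roughly like
this (a sketch from memory -- the config.json fields and the
/var/lib/kolla/config_files mount are what kolla-ansible uses today, but
the command and image tag here are illustrative):

# write the config.json contract consumed by set_configs.py
cat > /etc/kolla/keystone/config.json <<EOF
{
    "command": "/usr/sbin/httpd -DFOREGROUND",
    "config_files": [{
        "source": "/var/lib/kolla/config_files/keystone.conf",
        "dest": "/etc/keystone/keystone.conf",
        "owner": "keystone",
        "perm": "0600"
    }]
}
EOF

# the config dir is bind mounted read-only; set_configs.py copies files
# into place according to the COPY_ONCE/COPY_ALWAYS strategy
docker run -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    -v /etc/kolla/keystone/:/var/lib/kolla/config_files/:ro \
    kolla/centos-binary-keystone:latest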
> Sure, using the docker API works for simpler cases. Not too
> surprisingly, once you start doing more funky things with your
> containers, you quickly reach the docker API's limitations. That's
> when the kolla API comes in handy.
> See for example this recent patch
> https://review.openstack.org/#/c/556673/ where we needed to change
> some file permissions to the uid/gid of the user inside the container.
>
> The first iteration basically used the docker API and started an
> additional container to fix the permissions:
>
>     docker run -v
> /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \
>     -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw \
>     neutron_image \
>     /bin/bash -c 'chown neutron:neutron
> /etc/pki/tls/certs/neutron.crt; chown neutron:neutron
> /etc/pki/tls/private/neutron.key'
>
> You'll agree this is not the most obvious. And it had a nasty side
> effect in that it changes the permissions of the files _on the host_.
> Using the kolla API, we could simply add to our config.json:
>
>     - path: /etc/pki/tls/certs/neutron.crt
>       owner: neutron:neutron
>     - path: /etc/pki/tls/private/neutron.key
>       owner: neutron:neutron
>
> > The other argument is that this removes the possibility for immutable
> > infrastructure. The concern is, with the new approach, a rookie operator
> > will modify one of the start scripts - resulting in uncertainty that what
> > was first deployed matches what is currently running. But with the way Kolla
> > is now, an operator can still do this! They can restart containers with a
> > custom entrypoint or additional bind mounts, they can exec in and change
> > config files, etc. etc. Kolla containers have never been immutable and we're
> > bending over backwards to artificially try and make this the case. We can't
> > protect a bad or inexperienced operator from shooting themselves in the
> > foot; there are better ways of doing so. If/when Docker or the upstream
> > container world solves this problem, it would then make sense for Kolla to
> > follow suit.
> >
> > On the face of it, what the spec proposes is a simple change; it should not
> > radically pull the carpet out from under people, or even change the way
> > kolla-ansible works in the near term. If consumers such as tripleo or other
> > parties feel it would in fact do so, please do let me know and we can
> > discuss and mitigate these problems.
>
> TripleO uses these scripts extensively, we certainly do not want to
> see them go away from kolla images.
> > Martin > > > Cheers, > > -Paul > > > > [0] https://review.openstack.org/#/c/550958/ > > [1] https://github.com/openstack/loci > > [2] > > > https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py > > [3] > > > https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 > > [4] > > > http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Fri Apr 6 21:10:01 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 06 Apr 2018 23:10:01 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 2 April 2018 Message-ID: <1523049001.3879488.1329324352.75A0B818@webmail.messagingengine.com> # Keystone Team Update - Week of 2 April 2018 ## News Relatively quiet week. Most of our activity was focused on polishing up specs. ## Open Specs Search query: https://goo.gl/eyTktx No new specs have been proposed since last week. We're getting some good feedback on the cross-project spec to implement default roles[1], which will need more discussion and clarification. One hot debate was (is?) over what the role names should be (as a team we're really good at naming things). The JWT spec[2] also needs some attentive eyes on it, and the unified limits spec[3] may need to have its scope narrowed down. The application credentials spec[4] is probably one or two revisions away from being ready to merge. [1] https://review.openstack.org/523973 [2] https://review.openstack.org/541903 [3] https://review.openstack.org/540803 [4] https://review.openstack.org/396331 ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 8 changes this week. ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 38 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html The keystone spec proposal freeze is in two weeks. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From openstack at fried.cc Fri Apr 6 21:41:38 2018 From: openstack at fried.cc (Eric Fried) Date: Fri, 6 Apr 2018 16:41:38 -0500 Subject: [openstack-dev] [nova] [placement] placement update 18-14 In-Reply-To: <027de90a-9b72-a8a1-0231-4f5ca1963473@gmail.com> References: <027de90a-9b72-a8a1-0231-4f5ca1963473@gmail.com> Message-ID: <3f7dfc82-e53a-193f-e199-993cf9c8fa9a@fried.cc> >> it's really on nested allocation candidates. > > Yup. 
And that series is deadlocked on a disagreement about whether > granular request groups should be "separate by default" (meaning: if you > request multiple groups of resources, the expectation is that they will > be served by distinct resource providers) or "unrestricted by default" > (meaning: if you request multiple groups of resources, those resources > may or may not be serviced by distinct resource providers). This is really a granular thing, not a nested thing. I was holding up the nrp-in-alloc-cands spec [1] for other reasons, but I've stopped doing that now. We should be able to proceed with the nrp work. I'm working on the granular code, wherein I can hopefully isolate the separate-vs-unrestricted decision such that we can go either way once that issue is resolved. [1] https://review.openstack.org/#/c/556873/ >> Some negotiation happened with regard to when/if the fixes for >> shared providers is going to happen. I'm not sure how that resolved, >> if someone can follow up with that, that would be most excellent. This is the subject of another thread [2] that's still "dangling". We discussed it in the sched meeting this week [3] and concluded [4] that we shouldn't do it in Rocky. BUT tetsuro later pointed out that part of the series in question [5] is still needed to satisfy NRP-in-alloc-cands (return the whole tree's providers in provider_summaries - even the ones that aren't providing resource to the request). That patch changes behavior, so needs a microversion (mostly done already in that patch), so needs a spec. We haven't yet resolved whether this is truly needed, so haven't assigned a body to the spec work. I believe Jay is still planning [6] to parse and respond to the ML thread. After he clones himself. [2] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128944.html [3] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-91 [4] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-137 [5] https://review.openstack.org/#/c/558045/ [6] http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-104 >> * Shared providers status? >>    (I really think we need to make this go. It was one of the >>    original value propositions of placement: being able to accurate >>    manage shared disk.) > > Agreed, but you know.... NUMA. And CPU pinning. And vGPUs. And FPGAs. > And physnet network bandwidth scheduling. And... well, you get the idea. Right. I will say that Tetsuro has been doing an excellent job slinging code for this, though. So the bottleneck is really reviewer bandwidth (already an issue for the work we *are* trying to fit in Rocky). If it's still on the table by Stein, we ought to consider making it a high priority. (Our Rocky punchlist seems to be favoring "urgent" over "important" to some extent.) -efried From dalvarez at redhat.com Fri Apr 6 22:34:55 2018 From: dalvarez at redhat.com (Daniel Alvarez) Date: Sat, 7 Apr 2018 00:34:55 +0200 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> Message-ID: > On 6 Apr 2018, at 19:04, Sławek Kapłoński wrote: > > Hi, > > Another idea is to modify test that it will: > 1. Check how many ports are in tenant, > 2. 
Set quota to the actual number of ports + 1 instead of the hardcoded 1 as it is now,
> 3. Try to add 2 ports - exactly as it is now,
>
I think that this should still be backend agnostic and should fix this problem.

>> On 6 Apr 2018, at 17:08, Sławek Kapłoński wrote:
>>
>> Hi,
>>
>> I don't know how networking-ovn works, but I have one question.
>>
>>
>>> On 6 Apr 2018, at 15:30, Daniel Alvarez Sanchez wrote:
>>>
>>> Hi,
>>>
>>> Thanks Lucas for writing this down.
>>>
>>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes wrote:
>>> Hi,
>>>
>>> The tests below are failing in the tempest API / Scenario job that
>>> runs in the networking-ovn gate (non-voting):
>>>
>>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
>>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
>>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
>>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
>>>
>>> Digging a bit into it I noticed that, with the exception of the two
>>> "test_router_interface_status" tests (ipv6 and ipv4), all other tests are
>>> failing because of the way metadata works in networking-ovn.
>>>
>>> Taking "test_create_port_when_quotas_is_full" as an example: the
>>> reason why it fails is that when OVN metadata is enabled,
>>> networking-ovn will create a metadata port at the moment a network is
>>> created [0], and that will already fulfill the quota limit set by that
>>> test [1].
>>>
>>> That port will also allocate an IP from the subnet, which will cause
>>> the rest of the tests to fail with a "No more IP addresses available
>>> on network ..." error.
>>>
>>> With ML2/OVS we would run into the same quota problem if DHCP were
>>> enabled for the created subnets. This means that if we modify the current
>>> tests to enable DHCP on them and account for this extra port, they would
>>> be valid for networking-ovn as well. Does that sound good, or do we still
>>> want to isolate quotas?
>>
>> If DHCP is enabled for networking-ovn, will it also use one more port or
>> not? If so, then you will still have the same problem: with DHCP, as in
>> ML2/OVS, you will have one port created, and for networking-ovn it will be
>> 2 ports.
>> If it's not like that, then I think this solution, with some comment in
>> the test code on why DHCP is enabled, should be good IMO.

No, networking-ovn won't create an extra port when DHCP is enabled, so it
should work fine. Thanks Slaweq!

>>
>>>
>>> This is not very trivial to fix because:
>>>
>>> 1. Tempest should be backend agnostic. So, adding a conditional in the
>>> tempest test to check whether OVN is being used or not doesn't sound
>>> correct.
>>>
>>> 2. Creating a port to be used by the metadata agent is a core part of
>>> the design implementation for the metadata functionality [2]
>>>
>>> So, I'm sending this email to try to figure out what would be the best
>>> approach to deal with this problem and start working towards having
>>> that job be voting in our gate. Here are some ideas:
>>>
>>> 1. Simply disable the tests that are affected by the metadata approach.
>>>
>>> 2. Disable metadata for the tempest API / Scenario tests (here's a
>>> test patch doing it [3])
>>>
>>> IMHO, we don't want to do this, as metadata is likely to be enabled in
>>> all clouds, either using ML2/OVS or OVN, so it's good to keep exercising
>>> this part.
>>>
>>> 3. Same as 1. but also create similar tempest tests specific to OVN
>>> somewhere else (in the networking-ovn tree?!)
>>>
>>> As we discussed on IRC, I'm keen on doing this instead of getting bits in
>>> tempest to do different things depending on the backend used. Unless
>>> we want to enable DHCP on the subnets that these tests create :)
>>>
>>> What do you think would be the best way to work around this problem - any
>>> other ideas?
>>>
>>> As for the "test_router_interface_status" tests that are failing
>>> independently of the metadata, there's a bug reporting the problem here
>>> [4]. So we should just fix it.
>>>
>>> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
>>> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
>>> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
>>> [3] https://review.openstack.org/#/c/558792/
>>> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
>>>
>>> Cheers,
>>> Lucas
>>>
>>> Thanks,
>>> Daniel
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> —
>> Best regards
>> Slawek Kaplonski
>> slawek at kaplonski.pl
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> —
> Best regards
> Slawek Kaplonski
> slawek at kaplonski.pl
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From dalvarez at redhat.com  Fri Apr  6 22:35:35 2018
From: dalvarez at redhat.com (Daniel Alvarez)
Date: Sat, 7 Apr 2018 00:35:35 +0200
Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests
 and OVN metadata
In-Reply-To: <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl>
References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl>
 <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl>
Message-ID: <0E9F528E-CAFA-4229-981F-FCD75EDEF5A9@redhat.com>

> On 6 Apr 2018, at 19:04, Sławek Kapłoński wrote:
>
> Hi,
>
> Another idea is to modify the test so that it will:
> 1. Check how many ports are in the tenant,
> 2. Set quota to the actual number of ports + 1 instead of the hardcoded 1 as it is now,
> 3. Try to add 2 ports - exactly as it is now,
>
Cool, I like this one :-) Good idea.
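In CLI terms, the reworked test would do roughly the following
(illustrative only -- the project and network names are placeholders):

# count the ports the tenant already has (e.g. the OVN metadata port)
ports=$(openstack port list --project demo -f value -c ID | wc -l)
# leave room for exactly one more port, regardless of backend
openstack quota set --ports $((ports + 1)) demo
openstack port create --network private port-a   # succeeds, quota now full
openstack port create --network private port-b   # must fail: quota exceeded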
> I think that this should still be backend agnostic and should fix this problem.
>
>> On 6 Apr 2018, at 17:08, Sławek Kapłoński wrote:
>> [snip]
>
> —
> Best regards
> Slawek Kaplonski
> slawek at kaplonski.pl

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From stdake at cisco.com  Fri Apr  6 22:47:41 2018
From: stdake at cisco.com (Steven Dake (stdake))
Date: Fri, 6 Apr 2018 22:47:41 +0000
Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out
 of Kolla images
In-Reply-To:
References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com>
Message-ID: <63481CFF-1BDA-4F88-BF5D-E0C3766935A8@cisco.com>

Mark,

TLDR: good proposal.

I don't think Paul was proposing what you proposed. However:

You make a strong case for separately packaging the API (mostly
set_configs.py and the JSON API + docs + samples). I am super surprised
nobody has ever proposed this in the past, but now is as good a time as any
to propose a good model for managing the JSON->set_configs.py API.
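To make that concrete, consuming the packaged API might end up as simple as
the following (purely illustrative -- none of this exists yet; "kolla-api"
is just the name floated in this thread, and kolla_set_configs is the entry
point the images already ship):

pip install kolla-api    # hypothetical package carrying the API
kolla_set_configs        # today's entry point, now versioned and documented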
We could unit test this with extreme clarity, document with extreme clarity, and provide an easier path for people to submit changes to the API that they require to run the OpenStack containers. Finally, it would provide complete semver semantics for managing change and provide perfect backwards compatibility. A separate repo for this proposed api split makes sense to me. I think initially we would want to seed with the kolla core team but be open to anyone that reviews + contributes to join the kolla-api core team (just as happens with other kolla deliverables). This should reduce cross-project developer friction which was an implied but unstated problem in the various threads over the last week and produce the many other beneficial effects APIs produce along with the benefits you stated above. I’m not sure if this approach is technically sound –but I’d be in favor of this approach if it were not too disruptive, provided full backwards compatibility and was felt to be an improvement by the consumers of kolla images. I don’t think deprecation is something that is all that viable with an API model like the one we have nor this new repo and think we need to set clear boundaries around what would/would not be done. I do know that a change of this magnitude is a lot of work for the community to take on – and just like adding or removing any deliverable in kolla, would require a majority vote from the CR team. Also, repeating myself, I don’t think the current API is good nor perfect, I don’t think perfection is necessarily possible, but this may help drive towards that mythical perfection that interested parties seek to achieve. Cheers -steve From: Mark Goddard Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, April 6, 2018 at 12:30 PM To: "OpenStack Development Mailing List (not for usage questions)" Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images On Thu, 5 Apr 2018, 20:28 Martin André, > wrote: On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke > wrote: > Hi all, > > This mail is to serve as a follow on to the discussion during yesterday's > team meeting[4], which was regarding the desire to move start scripts out of > the kolla images [0]. There's a few factors at play, and it may well be best > left to discuss in person at the summit in May, but hopefully we can get at > least some of this hashed out before then. > > I'll start by summarising why I think this is a good idea, and then attempt > to address some of the concerns that have come up since. > > First off, to be frank, this is effort is driven by wanting to add support > for loci images[1] in kolla-ansible. I think it would be unreasonable for > anyone to argue this is a bad objective to have, loci images have very > obvious benefits over what we have in Kolla today. I'm not looking to drop > support for Kolla images at all, I simply want to continue decoupling things > to the point where operators can pick and choose what works best for them. > Stemming from this, I think moving these scripts out of the images provides > a clear benefit to our consumers, both users of kolla and third parties such > as triple-o. Let me explain why. It's still very obscure to me how removing the scripts from kolla images will benefit consumers. If the reason is that you want to re-use them in other, non-kolla images, I believe we should package the scripts. I've left some comments in your spec review. +1 to extracting and packaging the kolla API. 
This will make it easier to test and document, allow for versioning, and make it a first class citizen rather than a file in the build context of the base image. Plus, if it really is as good as some people are arguing, then it should be shared. For many of the other helper scripts that get bundled into the kolla images, I can see an argument for pulling these up to the deployment layer. These could easily be moved to kolla-ansible, and added via config.json. I guess it would be useful to know whether other deployment tools (tripleo) are using any of these - if they are shared then perhaps the images are the best place for them. > Normally, to run a docker image, a user will do 'docker run > helloworld:latest'. In any non trivial application, config needs to be > provided. In the vast majority of cases this is either provided via a bind > mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or via > environment variables (docker run --env HELLO=paul helloworld:latest). This > is all bog standard stuff, something anyone who's spent an hour learning > docker can understand. > > Now, lets say someone wants to try out OpenStack with Docker, and they look > at Kolla. First off they have to look at something called set_configs.py[2] > - over 400 lines of Python. Next they need to understand what that script > consumes, config.json [3]. The only reference for config.json is the files > that live in kolla-ansible, a mass of jinja and assumptions about how the > service will be run. Next, they need to figure out how to bind mount the > config files and config.json into the container in a way that can be > consumed by set_configs.py (which by the way, requires the base kolla image > in all cases). This is only for the config. For the service start up > command, this need to also be provided in config.json. This command is then > parsed out and written to a location in the image, which is consumed by a > series of start/extend start shell scripts. Kolla is *unique* in this > regard, no other project in the container world is interfacing with images > in this way. Being a snowflake in this regard is not a good thing. I'm still > waiting to hear from a real world operator who would prefer to spend time > learning the above to doing: You're pointing a very real documentation issue. I've mentioned in the other kolla thread that I have a stub for the kolla API documentation. I'll push a patch for what I have and we can iterate on that. > docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint > /usr/bin/keystone [args] > > This is the Docker API, it's easy to understand and pretty much the standard > at this point. Sure, using the docker API works for simpler cases, not too surprisingly once you start doing more funky things with your containers you're quickly reach the docker API limitations. That's when the kolla API comes in handy. See for example this recent patch https://review.openstack.org/#/c/556673/ where we needed to change some file permission to the uid/gid of the user inside the container. The first iteration basically used the docker API and started an additional container to fix the permissions: docker run -v /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ -v /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw \ neutron_image \ /bin/bash -c 'chown neutron:neutron /etc/pki/tls/certs/neutron.crt; chown neutron:neutron /etc/pki/tls/private/neutron.key' You'll agree this is not the most obvious. 
And it had a nasty side effect in that it changes the permissions of the files _on the host_. Using the kolla API, we could simply add to our config.json:

    - path: /etc/pki/tls/certs/neutron.crt
      owner: neutron:neutron
    - path: /etc/pki/tls/private/neutron.key
      owner: neutron:neutron

> The other argument is that this removes the possibility for immutable infrastructure. The concern is, with the new approach, a rookie operator will modify one of the start scripts - resulting in uncertainty that what was first deployed matches what is currently running. But with the way Kolla is now, an operator can still do this! They can restart containers with a custom entrypoint or additional bind mounts, they can exec in and change config files, etc. etc. Kolla containers have never been immutable and we're bending over backwards to artificially try and make this the case. We can't protect a bad or inexperienced operator from shooting themselves in the foot; there are better ways of doing so. If/when Docker or the upstream container world solves this problem, it would then make sense for Kolla to follow suit.
>
> On the face of it, what the spec proposes is a simple change, it should not radically pull the carpet out from under people, or even change the way kolla-ansible works in the near term. If consumers such as tripleo or other parties feel it would in fact do so please do let me know and we can discuss and mitigate these problems.

TripleO uses these scripts extensively, we certainly do not want to see them go away from kolla images.

Martin

> Cheers,
> -Paul
>
> [0] https://review.openstack.org/#/c/550958/
> [1] https://github.com/openstack/loci
> [2] https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py
> [3] https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2
> [4] http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL:

From zhang.lei.fly at gmail.com Sat Apr 7 02:11:50 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Sat, 7 Apr 2018 10:11:50 +0800 Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images In-Reply-To: <63481CFF-1BDA-4F88-BF5D-E0C3766935A8@cisco.com> References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com> <63481CFF-1BDA-4F88-BF5D-E0C3766935A8@cisco.com> Message-ID:

+1 for kolla-api

Migrating all scripts from kolla (image) to kolla-ansible will make the images hard to use downstream; Martin explains this clearly. We need some API to make the images easier to use. For the operator, I don't think he needs to read the whole set_configs.py file. Knowing what the config.json file looks like and what effects it has is enough. So a doc is enough (see the sketch just below). For images, we need to add some common functions before using them, instead of using the upstream image directly.
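To make that concrete, here is roughly what a config.json for a service like keystone looks like. This is a sketch written from memory of the kolla docs rather than copied from a real deployment, so treat the exact field names as indicative:

    {
        "command": "/usr/sbin/httpd -DFOREGROUND",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/keystone.conf",
                "dest": "/etc/keystone/keystone.conf",
                "owner": "keystone",
                "perm": "0600"
            }
        ],
        "permissions": [
            {
                "path": "/var/log/kolla/keystone",
                "owner": "keystone:keystone",
                "recurse": true
            }
        ]
    }

At container start, set_configs.py copies each source to its dest with the given owner and perm, and then the command is executed. That copy-then-exec contract is exactly the kind of thing a standalone kolla-api deliverable could document, version and test.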
For example, if we support loci, mostly we will use upstream infra images, like mariadb, redis etc. But are they really enough for production use directly? There are some concerns here:

- drop root: does the image work when it runs without root?
- init process: does it contain an init process binary?
- configuration: different images may use different configuration methods. Should we unify them?
- lack of packages: what if the image lacks some packages we need?

One possible solution for this, I think, is to use an upstream image + kolla-api to generate an image with these features.

On Sat, Apr 7, 2018 at 6:47 AM, Steven Dake (stdake) wrote:
> Mark,
>
> TLDR good proposal
> [...]
-- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL:

From majopela at redhat.com Sat Apr 7 05:55:53 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Sat, 07 Apr 2018 05:55:53 +0000 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: <0E9F528E-CAFA-4229-981F-FCD75EDEF5A9@redhat.com> References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> <0E9F528E-CAFA-4229-981F-FCD75EDEF5A9@redhat.com> Message-ID:

This issue isn't only for networking-ovn; please note that it happens with a few other vendor plugins (like nsx), at least this is something we have found in downstream certifications. Cheers,

On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez wrote:
> > On 6 Apr 2018, at 19:04, Sławek Kapłoński wrote:
> >
> > Hi,
> >
> > Another idea is to modify the test so that it will:
> > 1. Check how many ports are in the tenant,
> > 2. Set the quota to the actual number of ports + 1 instead of the hardcoded 1 as it is now,
> > 3. Try to add 2 ports - exactly as it is now,
>
> Cool, I like this one :-)
> Good idea.
>
> > I think that this should still be backend agnostic and should fix this problem.
>
> >> Message written by Sławek Kapłoński on 06.04.2018 at 17:08:
> >>
> >> Hi,
> >>
> >> I don't know how networking-ovn works but I have one question.
> >>
> >>> Message written by Daniel Alvarez Sanchez on 06.04.2018 at 15:30:
> >>>
> >>> Hi,
> >>>
> >>> Thanks Lucas for writing this down.
> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes <lucasagomes at gmail.com> wrote:
> >>> Hi,
> >>>
> >>> The tests below are failing in the tempest API / Scenario job that runs in the networking-ovn gate (non-voting):
> >>>
> >>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full
> >>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status
> >>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status
> >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen
> >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota
> >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr
> >>>
> >>> Digging a bit into it I noticed that, with the exception of the two "test_router_interface_status" tests (ipv6 and ipv4), all other tests are failing because of the way metadata works in networking-ovn.
> >>>
> >>> Taking "test_create_port_when_quotas_is_full" as an example: the reason why it fails is that when OVN metadata is enabled, networking-ovn will create a metadata port at the moment a network is created [0], and that will already fulfill the quota limit set by that test [1].
> >>>
> >>> That port will also allocate an IP from the subnet, which will cause the rest of the tests to fail with a "No more IP addresses available on network ..." error.
> >>>
> >>> With ML2/OVS we would run into the same quota problem if DHCP were enabled for the created subnets. This means that if we modify the current tests to enable DHCP on them and account for this extra port, it would be valid for networking-ovn as well. Does that sound good, or do we still want to isolate quotas?
> >>
> >> If DHCP is enabled for networking-ovn, will it use one more port also or not? If so, then you will still have the same problem: with DHCP as in ML2/OVS you will have one port created, and for networking-ovn it will be 2 ports.
> >> If it's not like that, then I think that this solution, with some comment in the test code about why DHCP is enabled, should be good IMO.
> >>
> >>> This is not very trivial to fix because:
> >>>
> >>> 1. Tempest should be backend agnostic. So, adding a conditional in the tempest test to check whether OVN is being used or not doesn't sound correct.
> >>>
> >>> 2. Creating a port to be used by the metadata agent is a core part of the design implementation for the metadata functionality [2]
> >>>
> >>> So, I'm sending this email to try to figure out what would be the best approach to deal with this problem and start working towards having that job be voting in our gate. Here are some ideas:
> >>>
> >>> 1. Simply disable the tests that are affected by the metadata approach.
> >>>
> >>> 2. Disable metadata for the tempest API / Scenario tests (here's a test patch doing it [3])
> >>>
> >>> IMHO, we don't want to do this as metadata is likely to be enabled in all the clouds either using ML2/OVS or OVN, so it's good to keep exercising this part.
> >>>
> >>> 3. Same as 1. but also create similar tempest tests specific for OVN somewhere else (in the networking-ovn tree?!)
> >>> As we discussed on IRC I'm keen on doing this instead of getting bits in tempest to do different things depending on the backend used. Unless we want to enable DHCP on the subnets that these tests create :)
> >>>
> >>> What do you think would be the best way to work around this problem? Any other ideas?
> >>>
> >>> As for the "test_router_interface_status" tests that are failing independently of the metadata, there's a bug reporting the problem here [4]. So we should just fix it.
> >>>
> >>> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154
> >>> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66
> >>> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach
> >>> [3] https://review.openstack.org/#/c/558792/
> >>> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835
> >>>
> >>> Cheers,
> >>> Lucas
> >>>
> >>> Thanks,
> >>> Daniel
> >>
> >> —
> >> Best regards
> >> Slawek Kaplonski
> >> slawek at kaplonski.pl
> >
> > —
> > Best regards
> > Slawek Kaplonski
> > slawek at kaplonski.pl
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:
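As an illustration of the test change Sławek suggests earlier in this thread (counting existing ports before setting the quota), the core of the quota-negative test could be reshaped roughly as follows. This is a sketch only; the client helper names are assumed for illustration rather than taken from the actual neutron-tempest-plugin code:

    # Count whatever ports already exist in the tenant (for example a
    # backend-created metadata port), then size the quota so that
    # exactly one more port fits.
    ports = self.client.list_ports(tenant_id=self.tenant_id)['ports']
    self.admin_client.update_quotas(self.tenant_id, port=len(ports) + 1)
    # The next port creation should succeed...
    self.create_port(network=self.network)
    # ...and the one after that should exceed the quota on any backend.
    self.assertRaises(lib_exc.Conflict,
                      self.create_port, network=self.network)

This keeps the test backend agnostic, since it never asks which plugin created the pre-existing ports.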
From noama at mellanox.com Sat Apr 7 07:04:28 2018 From: noama at mellanox.com (Noam Angel) Date: Sat, 7 Apr 2018 07:04:28 +0000 Subject: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines In-Reply-To: References: , Message-ID:

Try to update the server BIOS and web management. Get Outlook for Android ________________________________ From: Jim Rollenhagen Sent: Wednesday, April 4, 2018 8:18:02 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Ironic][Tripleo] ipmitool issues HP machines

On Wed, Apr 4, 2018 at 8:39 AM, Dan Prince wrote: Kind of a support question but figured I'd ask here in case there are suggestions for workarounds for specific machines. Setting up a new rack of mixed machines this week and hit this issue with HP machines using the ipmi power driver for Ironic. Curious if anyone else has seen this before? The same commands work great with my Dell boxes! -----

    [root at localhost ~]# cat x.sh
    set -x
    # this is how Ironic sends its IPMI commands it fails
    echo -n password > /tmp/tmprmdOOv
    ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv power status
    # this works great
    ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status

    [root at localhost ~]# bash x.sh
    + echo -n password
    + ipmitool -I lanplus -H 172.31.0.19 -U Adminstrator -f /tmp/tmprmdOOv power status
    Error: Unable to establish IPMI v2 / RMCP+ session
    + ipmitool -I lanplus -H 172.31.0.19 -U Administrator -P password power status
    Chassis Power is on

Very strange. A tcpdump of both would probably be enlightening. :) Also curious what version of ipmitool this is, maybe you're hitting an old bug. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sagarun at gmail.com Sat Apr 7 07:56:31 2018 From: sagarun at gmail.com (Arun SAG) Date: Sat, 7 Apr 2018 00:56:31 -0700 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <20180406003909.GA28653@localhost.localdomain> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> <20180406003909.GA28653@localhost.localdomain> Message-ID:

Hello,

On Thu, Apr 5, 2018 at 5:39 PM, Paul Belanger wrote:
> Yah, I agree your approach is the better one, I just wanted to toggle what was supported by default. However, it is pretty broken today. I can't imagine anybody actually using it; if so, they must be carrying downstream patches.
>
> If we think USE_VENV is a valid use case, for per-project VENVs, I suggest we continue to fix it and update neutron to support it. Otherwise, we maybe should rip and replace it.

I work for Yahoo (Oath). We use USE_VENV a lot in our CI. We use VENVs to deploy software to production as well. We have some downstream patches to devstack to fix some issues with the USE_VENV feature; I would be happy to upstream them. Please do not rip this out. Thanks.
-- Arun S A G http://zer0c00l.in/

From andrea.frittoli at gmail.com Sat Apr 7 09:56:47 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Sat, 07 Apr 2018 09:56:47 +0000 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> Message-ID:

On Thu, Apr 5, 2018 at 9:27 PM Clark Boylan wrote:
> On Mon, Apr 2, 2018, at 9:13 AM, Clark Boylan wrote:
> > On Mon, Apr 2, 2018, at 8:06 AM, Matthew Thode wrote:
> > > On 18-03-31 15:00:27, Jeremy Stanley wrote:
> > > > According to a notice[1] posted to the pypa-announce and distutils-sig mailing lists, pip 10.0.0.b1 is on PyPI now and 10.0.0 is expected to be released in two weeks (over the April 14/15 weekend). We know it's at least going to start breaking[2] DevStack and we need to come up with a plan for addressing that, but we don't know how much more widespread the problem might end up being so encourage everyone to try it out now where they can.
> > >
> > > I'd like to suggest locking down pip/setuptools/wheel like openstack ansible is doing in https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
> > >
> > > We could maintain it as a separate constraints file (or infra could maintain it, doesn't matter). The file would only be used for the initial get-pip install.
> >
> > In the past we've done our best to avoid pinning these tools because 1) we've told people they should use latest for openstack to work and 2) it is really difficult to actually control what versions of these tools end up on your systems if not latest.
> >
> > I would strongly push towards addressing the distutils package deletion problem that we've run into with pip10 instead. One of the approaches thrown out that pabelanger is working on is to use a common virtualenv for devstack and avoid the system package conflict entirely.
>
> I was mistaken and pabelanger was working to get devstack's USE_VENV option working, which installs each service (if the service supports it) into its own virtualenv. There are two big drawbacks to this. The first is that we would lose coinstallation of all the openstack services, which is one way we ensure they all work together at the end of the day. The second is that not all services in "base" devstack support USE_VENV and I doubt many plugins do either (neutron apparently doesn't?).
>
> I've since worked out a change that passes tempest using a global virtualenv installed devstack at https://review.openstack.org/#/c/558930/. This needs to be cleaned up so that we only check for and install the virtualenv(s) once, and we need to handle mixed python2 and python3 environments better (so that you can run a python2 swift and python3 everything else).
>
> The other major issue we've run into is that nova file injection (which is tested by tempest) seems to require either libguestfs or nbd. libguestfs bindings for python aren't available on pypi and instead we get them from system packaging. This means if we want libguestfs support we have to enable system site packages when using virtualenvs.
> The alternative is to use nbd, which apparently isn't preferred by nova and doesn't work under current devstack anyways.
>
> Why is this a problem? Well, the new pip10 behavior that breaks devstack is pip10's refusal to remove distutils-installed packages. Distro packages by and large are distutils packaged, which means if you mix system packages and pip installed packages there is a good chance something will break (and it does break for current devstack). I'm not sure that using a virtualenv with system site packages enabled will sufficiently protect us from this case (but we should test it further). Also it feels wrong to enable system packages in a virtualenv if the entire point is avoiding system python packages.
>
> I'm not sure what the best option is here, but if we can show that system site packages with virtualenvs is viable with pip10 and people want to move forward with devstack using a global virtualenv, we can work to clean up this change and make it mergeable.

Thanks Clark for looking into this. One of the things that will break using a global virtual env is the "all-plugin" Tempest tox environment, which is still used in a few places [0]. The "all-plugin" tox environment is deprecated anyways, so this may actually push things in the right direction.

Some background on "all-plugin": Tempest plugins used to live in tree for many projects - for Tempest to discover those plugins, "all-plugin" installs Tempest in a virtual environment with system site-packages enabled. After the Tempest plugin community goal in Queens, most plugins are now hosted in a dedicated repository, and "all-plugin" should not be needed anymore.

Andrea Frittoli (andreaf)

[0] http://codesearch.openstack.org/?q=all-plugin&i=nope&files=&repos=

> Clark
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gaetan at xeberon.net Sun Apr 8 07:39:26 2018 From: gaetan at xeberon.net (Gaetan) Date: Sun, 8 Apr 2018 09:39:26 +0200 Subject: [openstack-dev] Patches on PBR Message-ID:

Hello,

I have started a few patches on PBR which fail, but I am not sure of the reason; they seem related to something external to my changes:

- https://review.openstack.org/#/c/559484/6: 'pbr boostrap' command. Error seems: "testtools.matchers._impl.MismatchError: b'STARTING test server pbr_testpackage.wsgi' not in b''"
- https://review.openstack.org/#/c/558181/: proposal for update of sem-ver 3 doc
- https://review.openstack.org/#/c/524436/: Pipfile support (still WIP)

Can you review them? Thanks, ----- Gaetan -------------- next part -------------- An HTML attachment was scrubbed... URL:

From ifat.afek at nokia.com Sun Apr 8 09:00:41 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Sun, 8 Apr 2018 09:00:41 +0000 Subject: [openstack-dev] [Vitrage] New proposal for analysis.
In-Reply-To: <004a01d3cd86$aea447f0$0becd7d0$@ssu.ac.kr> References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr> <004a01d3cd86$aea447f0$0becd7d0$@ssu.ac.kr> Message-ID: <10262245-AAFB-49EA-BBF6-10EC12013DA5@nokia.com>

Hi Minwook, Sounds like a good idea ☺ Thanks, Ifat

From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Friday, 6 April 2018 at 12:07 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat, If possible, could I write a blueprint based on what we discussed? (architecture, specs) After checking the blueprint, it would be better to proceed with specific updates on the various issues. What do you think? Thanks. Best regards, Minwook.

From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Thursday, April 5, 2018 10:53 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat, Thanks for the good comments. It was very helpful. As you said, I tested std.ssh, and I was able to get much better results. I am confident that this is what I want. We can use std.ssh to provide convenience to users with a much more efficient way to configure shell scripts / monitoring agent automation (for Zabbix history, etc.) / other commands. In addition, std_actions.py contained a number of features that could be used for this proposal (such as HTTP). So if we actively use the actions in std_actions.py, we might be able to construct neat code without the duplicate functionality that you worried about. It has been a great help. In addition, I also agree that a Vitrage action is required for Mistral. If possible, I might be able to do that in the future (ASAP). Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, April 4, 2018 4:21 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook, I discussed this issue with a Mistral contributor. Mistral has a long list of actions that can be used. Specifically, you can use the std.ssh action to execute shell scripts. Some useful commands: mistral action-list mistral action-get I'm not sure about the output of std.ssh, and whether you can get it from the action. I suggest you try it and see how it works. The action is implemented here: https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py If std.ssh does not suit your needs, you also have an option to implement and run your own action in Mistral (either as an ssh action or as python code). And BTW, it is not related to your current use case, but we can also add Vitrage actions to Mistral, so the user can access Vitrage information (get topology, get alarms) from Mistral workflows. Best regards, Ifat

From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 3 April 2018 at 15:19 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.
Hello Ifat, Thanks for your reply. Your comments have been a great help to the proposal. (Sorry, I did not think we could use Mistral.) If we use the Mistral workflow for the proposal, we can get better results (we can get good results on performance and code conciseness). Also, if we use the Mistral workflow, we do not need to write any unnecessary code. Since I don't know about Mistral yet, I think it would be better to do the most efficient design including Mistral after grasping it. If we run a check through a Mistral workflow, how about providing users with a choice of tools that have the capability to perform checks? We can get the results of the check through Mistral and the tools, but I think we need minimal functionality to manage them. What do you think? I attached a picture of the actual UI that I simply implemented. I hope it helps you understand. (The parameter and content have no meaning and are a simple example.) : ) Thanks. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 3, 2018 8:31 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook, Thanks for the explanation, I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense. However, I don't want to re-invent the wheel and add to Vitrage functionality that already exists in other projects. How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I'm not so sure about the UI part, we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc. What do you think? Ifat

From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 3 April 2018 at 5:36 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat, I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about it. We also need to think about targets, scope, and so on. The reason I do not think of tools like Zabbix, Nagios, and Prometheus as a tool to run checks is because we need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host. But the scope of the idea, I think, involves the VM's interior. Therefore, configuring the agent automatically inside the VM may not be easy (although we can use parameters like user-data). If we exclude VM internal checks from the scope, we can simply perform a check via Zabbix (like Zabbix's remote command, history). On the other hand, if we include the inside of a VM in the scope, and configure each of them, we have a rather constant overhead. The check service may incur temporary overhead, but the agent configuration can cause constant overhead. And Zabbix history can be another task for Vitrage. If we configure the agents themselves and exclude the VM's internal checks, we can provide functionality with simple code. How is it? Thank you. Best regards, Minwook.
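To make the std.ssh idea discussed above concrete, a minimal Mistral workflow wrapping a check script could look like the sketch below. The host, username, key and script path are placeholders, and the std.ssh input names are assumed from std_actions.py, so verify them against the actual action signature:

    ---
    version: '2.0'

    vm_check:
      description: Run a check script on a remote VM or host over SSH
      input:
        - host
        - username
        - private_key_filename
      tasks:
        run_check:
          action: std.ssh
          input:
            host: <% $.host %>
            username: <% $.username %>
            private_key_filename: <% $.private_key_filename %>
            cmd: /usr/local/bin/p2p_check.sh
          publish:
            check_output: <% task(run_check).result %>

The published check_output is what a Vitrage panel could then fetch and display, without Vitrage running the command itself.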
From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Monday, April 2, 2018 10:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook, Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service. Some technical details: · The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But it can be done easily. · A relevant Zabbix API is history.get [1] · If you are not using Zabbix, I assume that other monitoring tools have similar capabilities. What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user in viewing the data in the format that you suggested? [1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get Thanks, Ifat

From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 2 April 2018 at 4:51 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat, Thank you for the reply. :) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention.) The same security issues arise as you say. But right now Vitrage does not call external APIs. The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API call. -> api / controllers / v1 / checks.py is called. -> checks service is called. In the above flow, passing through the Vitrage API serves only data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I am wrong, please tell me anytime. :) Thank you. Best regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you'll have the same security issue, right? I don't understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat.
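For reference, the history.get call mentioned above is a plain JSON-RPC POST against the Zabbix API endpoint (api_jsonrpc.php). A request for the last 10 numeric values of one item would look roughly like this; the item id and auth token are placeholders:

    {
        "jsonrpc": "2.0",
        "method": "history.get",
        "params": {
            "output": "extend",
            "history": 0,
            "itemids": "23296",
            "sortfield": "clock",
            "sortorder": "DESC",
            "limit": 10
        },
        "auth": "<session token from user.login>",
        "id": 1
    }

This follows the Zabbix 3.0 reference page linked above; other versions may differ slightly.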
From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment.

Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly?

Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate API address from Vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the API address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of the check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the API address for the check component, which can cause the host and VM to execute system commands.

What should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results?

If the window is closed before the check is finished, the user cannot check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing a temporary list (for example, up to 10 results), the user can see the previous results, and I think it should also be possible for the user to empty the list. How is it? Thank you. Best Regards, Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can't vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won't be able to see the results? Thanks, Ifat.
From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save it to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage like add-action-list panel, but it seems that it is not enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see it in the panel. If we switch to another menu before the scan is complete, we will not be able to see the results. Parallel checking is a matter of fact. (This can cause excessive overhead.) For earlier results, it may be okay to temporarily save the open panel until we exit the panel. We can see the previous results through the temporary saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I implement a separate component to request it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. It is expected to be really good. It would be very useful if an Entity-Graph generates an alarm based on the check result. I think that part will be able to talk in detail later. My answer is my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. · I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. · Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? 
What if the user wants to see again a previous result? · Any thoughts of what component will implement those checks? Or maybe these will be just scripts? · It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 27 March 2018 at 14:45 To: "openstack-dev at lists.openstack.org" > Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. (https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan at xeberon.net Sun Apr 8 09:10:57 2018 From: gaetan at xeberon.net (Gaetan) Date: Sun, 8 Apr 2018 11:10:57 +0200 Subject: [openstack-dev] PBR and Pipfile Message-ID: Hello OpenStack dev community, I am currently working on the support of Pipfile for PBR ([1]), and I also follow actively the work on pipenv, which is now in officially supported by PyPA. There have been recently an intense discussion on the difficulties about Python libraries development, and how to spread good practices [2] on the pipenv community and enhance its documentation. As a user of PBR, and big fan of it, I try to bridge the link between pbr and pipenv (with [1]) but I am interested in getting the feedback of Python developers of OpenStack that may have much more experience using PBR and more generally packaging python libraries than me. The main point is that packaging an application is quite easy or at least understandable by newcomers, using `requirements.txt` or `Pipfile`+ `Pipfile.lock` with pipenv. At least it is easily "teachable". Packaging a library is harder, and require to explain why by default `requirements.txt`(or `Pipfile`) does not work. 
Some "advanced" documentation exists but it still hard to understand why Python ended up with something complex for libraries ([3]). One needs to ensure `install_requires`declares the dependencies to that pip can find them during transitive dependencies installation (that is, installing the dependencies of a given dependency). PBR helps on this point but some does not want its other features. There is also works on PEP around pyproject.toml ([4]), which looks quite similar to PBR's setup.cfg. What do you think about it? My opinion is this difference in behaviour between lib and app has technical reasons, but as a community we would gain a lot of unifying both workflows. I am using PBR + a few hacks [5], and I am pretty satisfied with the overall result. So, in short, I simply start a general thread here to retrieve your general feedback around these points. Thanks for your feedbacks Gaetan [1]: https://review.openstack.org/#/c/524436/ [2]: https://github.com/pypa/pipenv/issues/1911 [3]: https://docs.pipenv.org/advanced/#pipfile-vs-setup-py [4]: https://www.python.org/dev/peps/pep-0518/ [5]: library: - pipenv to maintain Pipfile and Pipfile.lock - Pipfile.lock not tracked (local reproductivity), - pipenv-to-requirements [6] to generate a `requirements.txt` without version freeze, also tracked applications: - pipenv to maintain Pipfile and Pipfile.lock - Pipfile.lock not tracked (global reproductivity), - pipenv-to-requirements [6] to generate a `requirements.txt` and `requirements-dev.txt` with version freeze, both tracked The development done with [1] should allow to get rid of [6]. [6] https://github.com/gsemet/pipenv-to-requirements ----- Gaetan -------------- next part -------------- An HTML attachment was scrubbed... URL: From ylavi at redhat.com Sun Apr 8 12:35:12 2018 From: ylavi at redhat.com (Yaniv Lavi) Date: Sun, 8 Apr 2018 15:35:12 +0300 Subject: [openstack-dev] [ovirt-devel] [kubevirt-dev] Re: [virt-tools-list] Project for profiles and defaults for libvirt domains In-Reply-To: References: <20180320142031.GB23007@wheatley> <20180320151012.GU4530@redhat.com> <20180322145401.GD19999@wheatley> <20180322171753.GU3583@redhat.com> Message-ID: [resending to include OSP devs ] YANIV LAVI SENIOR TECHNICAL PRODUCT MANAGER Red Hat Israel Ltd. 34 Jerusalem Road, Building A, 1st floor Ra'anana, Israel 4350109 ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi TRIED. TESTED. TRUSTED. @redhatnews Red Hat Red Hat On Wed, Apr 4, 2018 at 7:23 PM, Yaniv Lavi wrote: > [resending to include KubeVirt devs ] > > YANIV LAVI > > SENIOR TECHNICAL PRODUCT MANAGER > > Red Hat Israel Ltd. > > 34 Jerusalem Road, Building A, 1st floor > > Ra'anana, Israel 4350109 > > ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi > TRIED. TESTED. TRUSTED. > @redhatnews Red Hat Red Hat > > > On Wed, Apr 4, 2018 at 7:07 PM, Yaniv Lavi wrote: > >> Hi, >> I'd like to go one step back and discuss why we should try to do this on >> the high level. >> >> For the last 5-10 years of KVM development, we are pragmatically >> providing the Linux host level APIs via project specific host >> agents/integration code (Nova agent, oVirt host agent, virt-manager). >> In recent time we see new projects that have similar requirements >> (Cockpit, different automation tool, KubeVirt), this means that all of the >> Linux virt stack consumers are reinventing the wheel and using very >> different paths to consume the partial solutions that are provided today. 
>> >> The use of the Linux virt stack is well defined by the existing projects >> scope and it makes a lot of sense to try to provide the common patterns via >> the virt stack directly as a host level API that different client or >> management consume. >> The main goal is to improve the developer experience for virtualization >> management applications with an API set that is useful to the entire set of >> tools (OSP, oVirt, KubeVirt, Cockpit and so on). >> >> The Linux virt developer community currently is not able to provide best >> practices and optimizations from single node knowledge. This means that all >> of that smarts is locked to the specific project integration in the good >> case or not provided at all and the projects as a whole lose from that. >> When testing the Linux virt stack itself and since each project has >> different usage pattern, we lose the ability to test abilities on the lower >> level making the entire stack less stable and complete for new features. >> >> This also limits the different projects ability to contribute back to the >> Linux stack based on their user and market experience for others in open >> source to gain. >> >> I understand this shift is technically challenging for existing projects, >> but I do see value in doing this even for new implementation like Cockpit >> and KubeVirt. >> I also believe that the end result could be appealing enough to cause >> project like OSP, virt-manager and oVirt to consider to reduce the existing >> capabilities of their host side integrations/agents to shims on the host >> level and reuse the common/better-tested pattern as clients that was >> developed against the experience of the different projects. >> >> I call us all to collaborate and try to converge on a solution that will >> help all in the long term in the value you get from the common base. >> >> >> Thanks, >> >> YANIV LAVI >> >> SENIOR TECHNICAL PRODUCT MANAGER >> >> Red Hat Israel Ltd. >> >> 34 Jerusalem Road, Building A, 1st floor >> >> Ra'anana, Israel 4350109 >> >> ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi >> TRIED. TESTED. TRUSTED. >> @redhatnews Red Hat Red Hat >> >> >> On Thu, Mar 22, 2018 at 7:18 PM, Daniel P. Berrangé >> wrote: >> >>> On Thu, Mar 22, 2018 at 03:54:01PM +0100, Martin Kletzander wrote: >>> > > >>> > > > One more thing could be automatically figuring out best values >>> based on >>> > > > libosinfo-provided data. >>> > > > >>> > > > 2) Policies >>> > > > >>> > > > Lot of the time there are parts of the domain definition that need >>> to be >>> > > > added, but nobody really cares about them. Sometimes it's enough >>> to >>> > > > have few templates, another time you might want to have a policy >>> > > > per-scenario and want to combine them in various ways. For >>> example with >>> > > > the data provided by point 1). >>> > > > >>> > > > For example if you want PCI-Express, you need the q35 machine >>> type, but >>> > > > you don't really want to care about the machine type. Or you want >>> to >>> > > > use SPICE, but you don't want to care about adding QXL. >>> > > > >>> > > > What if some of these policies could be specified once (using some >>> DSL >>> > > > for example), and used by virtuned to merge them in a unified and >>> > > > predictable way? >>> > > > >>> > > > 3) Abstracting the XML >>> > > > >>> > > > This is probably just usable for stateless apps, but it might >>> happen >>> > > > that some apps don't really want to care about the XML at all. 
>>> They >>> > > > just want an abstract view of the domain, possibly add/remove a >>> device >>> > > > and that's it. We could do that as well. I can't really tell how >>> much >>> > > > of a demand there is for it, though. >>> > > >>> > > It is safe to say that applications do not want to touch XML at all. >>> > > Any non-trivial application has created an abstraction around XML, >>> > > so that they have an API to express what they want, rather than >>> > > manipulating of strings to format/parse XML. >>> > > >>> > >>> > Sure, this was just meant to be a question as to whether it's worth >>> > pursuing or not. You make a good point on why it is not (at least for >>> > existing apps). >>> > >>> > However, since this was optional, the way this would look without the >>> > XML abstraction is that both input and output would be valid domain >>> > definitions, ultimately resulting in something similar to virt-xml with >>> > the added benefit of applying a policy from a file/string either >>> > supplied by the application itself. Whether that policy was taken from >>> > a common repository of such knowledge is orthogonal to this idea. >>> Since >>> > you would work with the same data, the upgrade could be incremental as >>> > you'd only let virtuned fill in values for new options and could slowly >>> > move on to using it for some pre-existing ones. None of the previous >>> > approaches did this, if I'm not mistaken. Of course it gets more >>> > difficult when you need to expose all the bits libvirt does and keep >>> > them in sync (as you write below). >>> >>> That has implications for how mgmt app deals with XML. Nova has object >>> models for representing XML in memory, but it doesn't aim to have >>> loss-less roundtrip from parse -> object -> format. So if Nova gets >>> basic XML from virttuned, parses it into its object to let it set >>> more fields and then formats it again, chances are it will have lost >>> a bunch of stuff from virttuned. Of course if you know about this >>> need upfront you can design the application such that it can safely >>> round-trip, but this is just example of problem with integrating to >>> existing apps. >>> >>> The other thing that concerns is that there are dependancies between >>> different bits of XML for a given device. ie if feature X is set to >>> a certain value, that prevents use of feature Y. So if virttuned >>> sets feature X, but the downstream application uses feature Y, the >>> final result can be incompatible. The application won't know this >>> because it doesn't control what stuff virttuned would be setting. >>> This can in turn cause ordering constraints. >>> >>> eg the application needs to say that virtio-net is being used, then >>> virttuned can set some defaults like enabling vhost-net, and then >>> the application can fill in more bits that it cares about. Or if >>> we let virttuned go first, setting virtio-net model + vhost-net, >>> then application wants to change model to e1000e, it has to be >>> aware that it must now delete the vhost-net bit that virtuned >>> added. This ends up being more complicated that just ignoring >>> virttuned and coding up use of vhost-net in application code. >>> >>> >>> > > This is the same kind of problem we faced wrt libvirt-gconfig and >>> > > libvirt-gobject usage from virt-manager - it has an extensive code >>> > > base that already works, and rewriting it to use something new >>> > > is alot of work for no short-term benefit. 
libvirt-gconfig/gobject >>> > > were supposed to be the "easy" bits for virt-manager to adopt, as >>> > > they don't really include much logic that would step on >>> virt-manager's >>> > > toes. libvirt-designer was going to be a very opinionated library >>> > > and in retrospective that makes it even harder to consider adopting >>> > > it for usage in virt-manager, as it'll have signficant liklihood >>> > > of making functionally significant changes in behaviour. >>> > > >>> > >>> > The initial idea (which I forgot to mention) was that all the decisions >>> > libvirt currently does (so that it keeps the guest ABI stable) would be >>> > moved into data (let's say some DSL) and it could then be switched or >>> > adjusted if that's not what the mgmt app wants (on a per-definition >>> > basis, of course). I didn't feel very optimistic about the upstream >>> > acceptance for that idea, so I figured that there could be something >>> > that lives beside libvirt, helps with some policies if requested and >>> > then the resulting XML could be fed into libvirt for determining the >>> > rest. >>> >>> I can't even imagine how we would go about encoding the stable guest >>> ABI logic libvirt does today in data ! >>> >>> > >>> > > There's also the problem with use of native libraries that would >>> > > impact many apps. We only got OpenStack to grudgingly allow the >>> > >>> > By native you mean actual binary libraries or native to the OpenStack >>> > code as in python module? Because what I had in mind for this project >>> > was a python module with optional wrapper for REST API. >>> >>> I meant native binary libraries. ie openstack is not happy in general >>> with adding dependancies on new OS services, because there's a big >>> time lag for getting them into all distros. By comparison a pure >>> python library, they can just handle automatically in their deployment >>> tools, just pip installing on any OS distro straight from pypi. This >>> is what made use of libosinfo a hard sell in Nova. >>> >>> The same thing is seen with Go / Rust where some applications have >>> decided they're better of actually re-implementing the libvirt RPC >>> protocol in Go / Rust rather than use the libvirt.so client. I think >>> this is a bad tradeoff in general, but I can see why they like it >>> >>> Regards, >>> Daniel >>> -- >>> |: https://berrange.com -o- https://www.flickr.com/photos/ >>> dberrange :| >>> |: https://libvirt.org -o- >>> https://fstop138.berrange.com :| >>> |: https://entangle-photo.org -o- https://www.instagram.com/dber >>> range :| >>> _______________________________________________ >>> Devel mailing list >>> Devel at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/devel >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Sun Apr 8 13:51:20 2018 From: gkotton at vmware.com (Gary Kotton) Date: Sun, 8 Apr 2018 13:51:20 +0000 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> <0E9F528E-CAFA-4229-981F-FCD75EDEF5A9@redhat.com> Message-ID: Hi, There are some tempest tests that check realization of resources on the networking platform and connectivity. Here things are challenging as each networking platform may be more restrictive than the upstream ML2 plugin. 
My thinking here is that we should leverage the tempest plugins for each networking platform and they can overwrite the problematic tests and address them as suitable for the specific plugin. Thanks Gary From: Miguel Angel Ajo Pelayo Reply-To: OpenStack List Date: Saturday, April 7, 2018 at 8:56 AM To: OpenStack List Subject: Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata this issue isn't only for networking ovn, please note that it happens with a flew other vendor plugins (like nsx), at least this is something we have found in downstream certifications. Cheers, On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez > wrote: > On 6 Apr 2018, at 19:04, Sławek Kapłoński > wrote: > > Hi, > > Another idea is to modify test that it will: > 1. Check how many ports are in tenant, > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it is now, > 3. Try to add 2 ports - exactly as it is now, > Cool, I like this one :-) Good idea. > I think that this should be still backend agnostic and should fix this problem. > >> Wiadomość napisana przez Sławek Kapłoński > w dniu 06.04.2018, o godz. 17:08: >> >> Hi, >> >> I don’t know how networking-ovn is working but I have one question. >> >> >>> Wiadomość napisana przez Daniel Alvarez Sanchez > w dniu 06.04.2018, o godz. 15:30: >>> >>> Hi, >>> >>> Thanks Lucas for writing this down. >>> >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes > wrote: >>> Hi, >>> >>> The tests below are failing in the tempest API / Scenario job that >>> runs in the networking-ovn gate (non-voting): >>> >>> neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full >>> neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status >>> neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota >>> neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr >>> >>> Digging a bit into it I noticed that with the exception of the two >>> "test_router_interface_status" (ipv6 and ipv4) all other tests are >>> failing because the way metadata works in networking-ovn. >>> >>> Taking the "test_create_port_when_quotas_is_full" as an example. The >>> reason why it fails is because when the OVN metadata is enabled, >>> networking-ovn will metadata port at the moment a network is created >>> [0] and that will already fulfill the quota limit set by that test >>> [1]. >>> >>> That port will also allocate an IP from the subnet which will cause >>> the rest of the tests to fail with a "No more IP addresses available >>> on network ..." error. >>> >>> With ML2/OVS we would run into the same Quota problem if DHCP would be >>> enabled for the created subnets. This means that if we modify the current tests >>> to enable DHCP on them and we account this extra port it would be valid for >>> all networking-ovn as well. Does it sound good or we still want to isolate quotas? >> >> If DHCP will be enabled for networking-ovn, will it use one more port also or not? If so then You will still have the same problem with DHCP as in ML2/OVS You will have one port created and for networking-ovn it will be 2 ports. 
>> If it’s not like that then I think that this solution, with some comment in test code why DHCP is enabled should be good IMO. >> >>> >>> This is not very trivial to fix because: >>> >>> 1. Tempest should be backend agnostic. So, adding a conditional in the >>> tempest test to check whether OVN is being used or not doesn't sound >>> correct. >>> >>> 2. Creating a port to be used by the metadata agent is a core part of >>> the design implementation for the metadata functionality [2] >>> >>> So, I'm sending this email to try to figure out what would be the best >>> approach to deal with this problem and start working towards having >>> that job to be voting in our gate. Here are some ideas: >>> >>> 1. Simple disable the tests that are affected by the metadata approach. >>> >>> 2. Disable metadata for the tempest API / Scenario tests (here's a >>> test patch doing it [3]) >>> >>> IMHO, we don't want to do this as metadata is likely to be enabled in all the >>> clouds either using ML2/OVS or OVN so it's good to keep exercising >>> this part. >>> >>> >>> 3. Same as 1. but also create similar tempest tests specific for OVN >>> somewhere else (in the networking-ovn tree?!) >>> >>> As we discussed on IRC I'm keen on doing this instead of getting bits in >>> tempest to do different things depending on the backend used. Unless >>> we want to enable DHCP on the subnets that these tests create :) >>> >>> >>> What you think would be the best way to workaround this problem, any >>> other ideas ? >>> >>> As for the "test_router_interface_status" tests that are failing >>> independent of the metadata, there's a bug reporting the problem here >>> [4]. So we should just fix it. >>> >>> [0] https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154 >>> [1] https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66 >>> [2] https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach >>> [3] https://review.openstack.org/#/c/558792/ >>> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835 >>> >>> Cheers, >>> Lucas >>> >>> Thanks, >>> Daniel >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> — >> Best regards >> Slawek Kaplonski >> slawek at kaplonski.pl >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Best regards > Slawek Kaplonski > slawek at kaplonski.pl > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > 
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From pabelanger at redhat.com Sun Apr 8 17:18:19 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Sun, 8 Apr 2018 13:18:19 -0400
Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way
In-Reply-To: 
References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> <20180406003909.GA28653@localhost.localdomain>
Message-ID: <20180408171819.GA25533@localhost.localdomain>

On Sat, Apr 07, 2018 at 12:56:31AM -0700, Arun SAG wrote:
> Hello,
>
> On Thu, Apr 5, 2018 at 5:39 PM, Paul Belanger wrote:
>
> > Yah, I agree your approach is the better, i just wanted to toggle what was
> > supported by default. However, it is pretty broken today. I can't imagine
> > anybody actually using it, if so they must be carrying downstream patches.
> >
> > If we think USE_VENV is valid use case, for per project VENV, I suggest we
> > continue to fix it and update neutron to support it. Otherwise, we maybe should
> > rip and replace it.
>
> I work for Yahoo (Oath). We use USE_VENV a lot in our CI. We use VENVs
> to deploy software to production as well. we have some downstream patches
> to devstack to fix some issues with USE_VENV feature, i would be happy to
> upstream them. Please do not rip this out. Thanks.
>
Yes, please upstream them if at all possible. I've been tracking all the fixes so far at https://review.openstack.org/552939/ but still having an issue with rootwrap. I think clarkb managed to fix this in his patchset.

Paul

From xinni.ge1990 at gmail.com Mon Apr 9 00:54:15 2018
From: xinni.ge1990 at gmail.com (Xinni Ge)
Date: Mon, 9 Apr 2018 09:54:15 +0900
Subject: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified
Message-ID: 

Hello, team.

Sorry for bringing up the xstatic repos so many times. I didn't realize xstatic repositories should provide exactly the same files as upstream, and I should have raised this at the very beginning.

I modified several upstream files because some of them couldn't be used directly as I expected. For example, {{ }} is used as template-tag syntax in some original files, but Horizon's Angular modules adopt {$ $}, so I modified those files to be recognized properly. Another major modification is that CSS files were converted into SCSS files to solve an earlier CSS import issue. Besides, after collecting static files, some PNG paths in the CSS could not be referenced properly and showed up as 404 errors, so I also modified the CSS itself to handle these issues.

I will recheck all the un-matched xstatic repositories and try to replace them with upstream files as much as I can. But if I really have to modify some original files, is it acceptable to still embed them with the license info at the top?

Best Regards,
Xinni Ge

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From delightwook at ssu.ac.kr Mon Apr 9 01:22:40 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Mon, 9 Apr 2018 10:22:40 +0900 Subject: [openstack-dev] [Vitrage] New proposal for analysis. In-Reply-To: <10262245-AAFB-49EA-BBF6-10EC12013DA5@nokia.com> References: <0a7201d3c5c1$2ab596a0$8020c3e0$@ssu.ac.kr> <0b4201d3c63b$79038400$6b0a8c00$@ssu.ac.kr> <0cf201d3c72f$2b3f5ec0$81be1c40$@ssu.ac.kr> <0d8101d3c754$41e73c90$c5b5b5b0$@ssu.ac.kr> <38E590A3-69BF-4BE1-A701-FA8171429D46@nokia.com> <00e801d3ca25$29befee0$7d3cfca0$@ssu.ac.kr> <000a01d3caf4$90584010$b108c030$@ssu.ac.kr> <003c01d3cb45$fda29930$f8e7cb90$@ssu.ac.kr> <004a01d3cd86$aea447f0$0becd7d0$@ssu.ac.kr> <10262245-AAFB-49EA-BBF6-10EC12013DA5@nokia.com> Message-ID: <01b201d3cfa1$40d7b8c0$c2872a40$@ssu.ac.kr> Hello Ifat, I'll update the BP soon. : ) Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Sunday, April 8, 2018 6:01 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Sounds like a good idea :) Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Friday, 6 April 2018 at 12:07 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, If possible, could i write a blueprint based on what we discussed? (architecture, specs) After checking the blueprint, it would be better to proceed with specific updates on the various issues. what do you think? Thanks. Best regards, Minwook. From: MinWookKim [mailto:delightwook at ssu.ac.kr] Sent: Thursday, April 5, 2018 10:53 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: RE: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for the good comments. It was very helpful. As you said, I tested for std.ssh, and I was able to get much better results. I am confident that this is what I want. We can use std.ssh to provide convenience to users with a much more efficient way to configure shell scripts / monitoring agent automation(for Zabbix history,etc) / other commands. In addition, std_actions.py contained a number of features that could be used for this proposal (such as HTTP). So if we actively use and utilize the actions in std_actions.py, we might be able to construct neat code without the duplicate functionality that you worried about. It has been a great help. In addition, I also agree that Vitrage action is required for Mistral. If possible, I might be able to do that in the future.(ASAP) Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Wednesday, April 4, 2018 4:21 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I discussed this issue with a Mistral contributor. Mistral has a long list of actions that can be used. Specifically, you can use the std.ssh action to execute shell scripts. Some useful commands: mistral action-list mistral action-get I’m not sure about the output of the std.ssh, and whether you can get it from the action. I suggest you try it and see how it works. 
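For instance, a workflow wrapping std.ssh could look roughly like this (a sketch only - the workflow name and inputs are made up, and the exact std.ssh parameters should be double-checked against the action definition):

    ---
    version: '2.0'

    vm_check:
      description: Run one check command on a remote VM or host
      input:
        - host
        - username
        - cmd
      tasks:
        run_check:
          action: std.ssh
          input:
            cmd: <% $.cmd %>
            host: <% $.host %>
            username: <% $.username %>
          publish:
            check_output: <% task(run_check).result %>

Something like `mistral execution-create vm_check '{"host": "...", "username": "...", "cmd": "uptime"}'` should then run the check and record the output in the execution.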
The action is implemented here:
https://github.com/openstack/mistral/blob/master/mistral/actions/std_actions.py

If std.ssh does not suit your needs, you also have an option to implement and run your own action in Mistral (either as an ssh action or as Python code).

And BTW, it is not related to your current use case, but we can also add Vitrage actions to Mistral, so the user can access Vitrage information (get topology, get alarms) from Mistral workflows.

Best regards,
Ifat

From: MinWookKim >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" >
Date: Tuesday, 3 April 2018 at 15:19
To: "'OpenStack Development Mailing List (not for usage questions)'" >
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

Thanks for your reply. Your comments have been a great help to the proposal. (Sorry, I did not think we could use Mistral.) If we use a Mistral workflow for the proposal, we can get better results (in both performance and code conciseness). Also, if we use the Mistral workflow, we do not need to write any unnecessary code. Since I don't know Mistral yet, I think it would be better to settle on the most efficient design, including Mistral, after I have grasped it.

If we run a check through a Mistral workflow, how about providing users with a choice of tools that have the capability to perform checks? We can get the results of the check through Mistral and the tools, but I think we need at least minimal functionality to manage them. What do you think?

I attached a picture of the actual UI that I simply implemented. I hope it helps you understand. (The parameters and content have no meaning and are a simple example.) : )

Thanks.
Best regards,
Minwook.

From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com]
Sent: Tuesday, April 3, 2018 8:31 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hi Minwook,

Thanks for the explanation, I understand the reasons for not running these checks on a regular basis in Zabbix or other monitoring tools. It makes sense. However, I don't want to re-invent the wheel and add to Vitrage functionality that already exists in other projects.

How about using Mistral for the purpose of manually running these extra checks? If you prepare the script/agent in advance, as well as the Mistral workflow, I believe that Mistral can successfully execute the check and return the results. I'm not so sure about the UI part, we will have to figure out how and where the user can see the output. But it will save a lot of effort around managing the checks, running a new service, supporting a new API, etc.

What do you think?
Ifat

From: MinWookKim >
Reply-To: "OpenStack Development Mailing List (not for usage questions)" >
Date: Tuesday, 3 April 2018 at 5:36
To: "'OpenStack Development Mailing List (not for usage questions)'" >
Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis.

Hello Ifat,

I also thought about several scenarios that use monitoring tools like Zabbix, Nagios, and Prometheus. But there are some limitations, so we have to think about it. We also need to think about targets, scope, and so on.

The reason I do not consider tools like Zabbix, Nagios, and Prometheus as tools to run the checks is that we would need to configure an agent or an exporter. I think it is not hard to configure an agent for monitoring objects such as a physical host. But the scope of the idea I think involves the VM's interior.
Therefore, configuring the agent automatically inside the VM may not be easy. (although we can use parameters like user-data) If we exclude VM internal checks from scope, we can simply perform a check via Zabbix. (Like Zabbix's remote command, history) On the other hand, if we include the inside of a VM in a scope, and configure each of them, we have a rather constant overhead. The check service may incur temporary overhead, but the agent configuration can cause constant overhead. And Zabbix history can be another task for Vitrage. If we configure the agents themselves and exclude the VM's internal checks, we can provide functionality with simple code. how is it? Thank you. Best regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Monday, April 2, 2018 10:22 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Thinking about it again, writing a new service for these checks might be an unnecessary overhead. Have you considered using an existing tool, like Zabbix, for running such checks? If you use Zabbix, you can define new triggers that run the new checks, and whenever needed the user can ask to open Zabbix and show the relevant metrics. The format will not be exactly the same as in your example, but it will save a lot of work and spare you the need to write and manage a new service. Some technical details: * The current information that Vitrage stores is not enough for opening the right Zabbix page. We will need to keep a little more data, like the item id, on the alarm vertex. But can be done easily. * A relevant Zabbix API is history.get [1] * If you are not using Zabbix, I assume that other monitoring tools have similar capabilities What do you think? Do you think it can work with your scenario? Or do you see a benefit to the user is viewing the data in the format that you suggested? [1] https://www.zabbix.com/documentation/3.0/manual/api/reference/history/get Thanks, Ifat From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Monday, 2 April 2018 at 4:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thank you for the reply. :) It is my opinion only, so if I'm wrong, we can change the implementation part at any time. (Even if it differs from my initial intention) The same security issues arise as you say. But now Vitrage does not call external APIs. The Vitrage-dashboard uses Vitrageclient libraries for Topology, Alarms, and RCA requests to Vitrage. So if we add an API, it will have the following flow. Vitrage-dashboard requests checks using the Vitrageclient library. -> Vitrage receives the API. -> api / controllers / v1 / checks.py is called. -> checks service is called. In accordance with the above flow, passing through the Vitrage API is the purpose of data passing and function calls. I think Vitrage does not need to call external APIs. If you do not want to go through the Vitrage API, we need to create a function for the check action in the Vitrage-dashboard, and write code to call the function. If I think wrong, please tell me anytime. :) Thank you. Best regards, Minwook. 
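To make the above flow concrete, the new endpoint could be a small Pecan controller - a purely hypothetical sketch, since no checks.py exists in Vitrage today, every name below is illustrative, and the RPC dispatch pattern is assumed to follow the existing v1 controllers:

    # api/controllers/v1/checks.py -- hypothetical sketch only
    import pecan
    from pecan import rest


    class ChecksController(rest.RestController):

        @pecan.expose('json')
        def post(self, target_id, check_type, **params):
            # hand the request over to the checks service and return
            # its result unchanged to vitrage-dashboard
            return pecan.request.client.call(
                pecan.request.context,
                'run_check',
                target_id=target_id,
                check_type=check_type,
                params=params)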
From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Sunday, April 1, 2018 3:40 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I understand your concern about the security issue. But how would that be different if the API call is passed through Vitrage API? The authentication from vitrage-dashboard to vitrage API will work, but then Vitrage will call an external API and you’ll have the same security issue, right? I don’t understand what is the difference between calling the external component from vitrage-dashboard and calling it from vitrage. Best regards, Ifat. From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 14:51 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) I wrote my opinion on your comment. Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? Authentication issues: I think the check component is a separate component based on the API. In my opinion, if the check component has a separate api address from the vitrage to receive requests from the Vitrage-dashboard, the Vitrage-dashboard needs to know the api address for the check component. This can result in a request / response situation open to anyone, regardless of the authentication supported by openstack between the Vitrage-dashboard and the request / response procedure of check component. This is possible not only through the Vitrage-dashboard, but also with simple commands such as curl. (I think it is unnecessary to implement a separate authentication system for the check component.) This problem may occur if someone knows the api address for the check component, which can cause the host and VM to execute system commands. what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? If the window is closed before the check is finished, the user can not check the result. To solve this problem, I think that temporarily saving a list of recent results is also a solution. By storing temporary lists (for example, up to 10), the user can see the previous results and think that it is also possible to empty the list by the user. how is it? Thank you. Best Regrads, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Thursday, March 29, 2018 8:07 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, Why do you think the request should pass through the Vitrage API? Why can’t vitrage-dashboard call the check component directly? And another question: what should happen if the user closes the check window before the checks are over? I assume that the checks will finish, but the user won’t be able to see the results? Thanks, Ifat. 
From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Thursday, 29 March 2018 at 10:25 To: "'OpenStack Development Mailing List (not for usage questions)'" < openstack-dev at lists.openstack.org> Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat and Vitrage team. I would like to explain more about the implementation part of the mail I sent last time. The flow is as follows. Vitrage-dashboard (action-list-panel) -> Vitrage-api -> check component The last time I mentioned it as api-handler, it would be better to call the check component directly from Vitarge-api without having to use it. I hope this helps you understand. Thank you Best Regards, Minwook. From: MinWookKim [ mailto:delightwook at ssu.ac.kr] Sent: Wednesday, March 28, 2018 11:21 AM To: 'OpenStack Development Mailing List (not for usage questions)' Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hello Ifat, Thanks for your reply. : ) This proposal is a proposal that we expect to be useful from a user perspective. >From a manager's point of view, we need an implementation that minimizes the overhead incurred by the proposal. The answers to some of your questions are: • I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. Yes, that's right. We do not need to save it to Vitrage because we just need to check the results. However, it is possible to implement the function directly in Vitrage-dashboard separately from Vitrage like add-action-list panel, but it seems that it is not enough to implement all the functions. If you do not mind, we will have the following flow. 1. The user requests the check action from the vitrage-dashboard (add-action-list-panel). 2. Call the check component through the vitrage's API handler. 3. The check component executes the command and returns the result. Because it is my opinion only, please tell us if there is an unnecessary part. :) • Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? My idea was to select the task, wait for the results in an open panel, and then instantly see it in the panel. If we switch to another menu before the scan is complete, we will not be able to see the results. Parallel checking is a matter of fact. (This can cause excessive overhead.) For earlier results, it may be okay to temporarily save the open panel until we exit the panel. We can see the previous results through the temporary saved results. • Any thoughts of what component will implement those checks? Or maybe these will be just scripts? I think I implement a separate component to request it. • It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. It is expected to be really good. It would be very useful if an Entity-Graph generates an alarm based on the check result. I think that part will be able to talk in detail later. 
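If we get to that phase, one possible shape for it: the check component could push its result into Vitrage as an external event. A rough sketch - the event type and all field values are made up, and I am assuming the event API exposed by python-vitrageclient:

    # push a check result into Vitrage as an event (sketch only)
    from vitrageclient import client as vitrage_client

    # 'session' is an authenticated keystoneauth1 session obtained elsewhere
    client = vitrage_client.Client('1', session=session)
    client.event.post(
        event_time='2018-04-09T10:00:00Z',
        event_type='check.result.failure',   # made-up event type
        details={
            'hostname': 'compute-0',         # made-up target
            'check': 'p2p_check',
            'status': 'failed',
        })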
My answer is my opinions and assumptions. If you think my implementation is wrong, or an inefficient implementation, please do not hesitate to tell me. Thanks. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [ mailto:ifat.afek at nokia.com] Sent: Wednesday, March 28, 2018 2:23 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] New proposal for analysis. Hi Minwook, I think that from a user’s perspective, these are very good ideas. I have some questions regarding the UX and the implementation, since I’m trying to think what could be the best way to execute such actions from Vitrage. * I assume that these checks will not be implemented in Vitrage, and the results will not be stored in Vitrage, right? Vitrage role is to be a place where it is easy and intuitive for the user to execute external actions/checks. * Do you expect the user to click an entity, select an action to run (e.g. ‘P2P check’), and wait by the open panel for the results? What if the user switches to another menu before the check is done? What if the user asks to run an additional check in parallel? What if the user wants to see again a previous result? * Any thoughts of what component will implement those checks? Or maybe these will be just scripts? * It could be nice if, as a result of an action check, a new alarm will be raised in Vitrage. A specific alarm with the additional details that were found. However, it might not be trivial to implement it. We could think about it as phase #2. Best Regards, Ifat From: MinWookKim < delightwook at ssu.ac.kr> Reply-To: "OpenStack Development Mailing List (not for usage questions)" < openstack-dev at lists.openstack.org> Date: Tuesday, 27 March 2018 at 14:45 To: " openstack-dev at lists.openstack.org" < openstack-dev at lists.openstack.org> Subject: [openstack-dev] [Vitrage] New proposal for analysis. Hello Vitrage team. I am currently working on the Vitrage-Dashboard proposal for the ‘Add action list panel for entity click action’. ( https://review.openstack.org/#/c/531141/) I would like to make a new proposal based on the action list panel mentioned above. The new proposal is to provide multidimensional analysis capabilities in several entities that make up the infrastructure in the entity graph. Vitrage's entity-graph allows us to efficiently monitor alarms from various monitoring tools. In the current state, when there is a problem with the VM and Host, or when we want to check the status, we need to access the console individually for each VM and Host. This situation causes unnecessary behavior when the number of VMs and hosts increases. My new suggestion is that if we have a large number of vm and host, we do not need to directly connect to each VM, host console to enter the system command. Instead, we can send a system command to VM and hosts in the cloud through this proposal. It is only checking results. I have written some use-cases for an efficient explanation of the function. >From an implementation perspective, the goals of the proposal are: 1. To execute commands without installing any Agent / Client that can cause load on VM, Host. 2. I want to provide a simple UI so that users or administrators can get the desired information to multiple VMs and hosts. 3. I want to be able to grasp the results at a glance. 4. I want to implement a component that can support many additional scenarios in plug-in format. I would be happy if you could comment on the proposal or ask questions. Thanks. 
Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 56974 bytes Desc: not available URL: From nakamura.tetsuro at lab.ntt.co.jp Mon Apr 9 02:23:29 2018 From: nakamura.tetsuro at lab.ntt.co.jp (TETSURO NAKAMURA) Date: Mon, 9 Apr 2018 11:23:29 +0900 Subject: [openstack-dev] [nova] [placement] placement update 18-14 In-Reply-To: <3f7dfc82-e53a-193f-e199-993cf9c8fa9a@fried.cc> References: <027de90a-9b72-a8a1-0231-4f5ca1963473@gmail.com> <3f7dfc82-e53a-193f-e199-993cf9c8fa9a@fried.cc> Message-ID: <61c8a914-8bf4-9565-11cd-f0a5058ba661@lab.ntt.co.jp> Hi Novaers, On 2018/04/07 6:41, Eric Fried wrote: >>> Some negotiation happened with regard to when/if the fixes for >>> shared providers is going to happen. I'm not sure how that resolved, >>> if someone can follow up with that, that would be most excellent. > > This is the subject of another thread [2] that's still "dangling". We > discussed it in the sched meeting this week [3] and concluded [4] that > we shouldn't do it in Rocky. BUT tetsuro later pointed out that part of > the series in question [5] is still needed to satisfy NRP-in-alloc-cands > (return the whole tree's providers in provider_summaries - even the ones > that aren't providing resource to the request). That patch changes > behavior, so needs a microversion (mostly done already in that patch), > so needs a spec. We haven't yet resolved whether this is truly needed, > so haven't assigned a body to the spec work. Specs are where we discuss whether proposed functions are truly needed, so I've uploaded the spec[7] and put my thoughts there :) [7] https://review.openstack.org/#/c/559466/ The implementation is in [8]. I've also submitted on it several patches for nested scenario. [8] https://review.openstack.org/#/c/558045/ > > [2] > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128944.html > [3] > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-91 > [4] > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-137 > [5] https://review.openstack.org/#/c/558045/ > [6] > http://eavesdrop.openstack.org/meetings/nova_scheduler/2018/nova_scheduler.2018-04-02-14.00.log.html#l-104 > P.S. The hottest news in Japan this week is Shohei Otani's home runs @Los Angeles Angels. He started playing in MLB this year. You should +2 on this without discussions. Thanks! -- Tetsuro Nakamura NTT Network Service Systems Laboratories TEL:0422 59 6914(National)/+81 422 59 6914(International) 3-9-11, Midori-Cho Musashino-Shi, Tokyo 180-8585 Japan From tony at bakeyournoodle.com Mon Apr 9 03:39:30 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 9 Apr 2018 13:39:30 +1000 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> Message-ID: <20180409033928.GB28028@thor.bakeyournoodle.com> On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > My understanding of our use of upper constraints was that this should > (almost) always be the case for (almost) all dependencies. We should > rely on constraints instead of requirements caps. 
Capping libs like > pbr or eventlet and any other that is in use globally is incredibly > difficult to work with when you want to uncap it because you have to > coordinate globally. Instead if using constraints you just bump the > constraint and are done. Part of the reason that we have the caps it to prevent the tools that auto-generate the constraints syncs from considering these versions and then depending on the requirements team to strip that from the bot change before committing (assuming it passes CI). Once the work Doug's doing is complete we could consider tweaking the tools to use a different mechanism, but that's only part of the reason for the caps in g-r. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tony at bakeyournoodle.com Mon Apr 9 04:16:19 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Mon, 9 Apr 2018 14:16:19 +1000 Subject: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed In-Reply-To: <144369c3-204e-fcf7-9265-855f952bdb02@ericsson.com> References: <20180401035507.GD4343@thor.bakeyournoodle.com> <144369c3-204e-fcf7-9265-855f952bdb02@ericsson.com> Message-ID: <20180409041618.GC28028@thor.bakeyournoodle.com> On Tue, Apr 03, 2018 at 02:05:35PM +0200, Elõd Illés wrote: > Hi, > > These patches probably solve the issue, if someone could review them: > > https://review.openstack.org/#/c/557005/ > > and > > https://review.openstack.org/#/c/557006/ > > Thanks, Thanks for digging into that. I've approved these even though they don't have a +2 from the neutron stable team. They look safe as the only impact tests, unblock the gate and also have +1's from subject matter experts. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From majopela at redhat.com Mon Apr 9 08:00:50 2018 From: majopela at redhat.com (Miguel Angel Ajo Pelayo) Date: Mon, 09 Apr 2018 08:00:50 +0000 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> <0E9F528E-CAFA-4229-981F-FCD75EDEF5A9@redhat.com> Message-ID: I don't necessarily agree that rewriting test is the solution here. May be for some extreme cases that could be fine, but from the maintenance point of view doesn't sound very practical IMHO. In some cases it can be just a parametrization of tests as they are, or simply accounting for a bit of extra headroom in quotas (when of course the purpose of such specific tests is not to verify the quota behaviour, for example). On Sun, Apr 8, 2018 at 3:52 PM Gary Kotton wrote: > Hi, > > There are some tempest tests that check realization of resources on the > networking platform and connectivity. Here things are challenging as each > networking platform may be more restrictive than the upstream ML2 plugin. > My thinking here is that we should leverage the tempest plugins for each > networking platform and they can overwrite the problematic tests and > address them as suitable for the specific plugin. 
> > Thanks > > Gary > > > > *From: *Miguel Angel Ajo Pelayo > *Reply-To: *OpenStack List > *Date: *Saturday, April 7, 2018 at 8:56 AM > *To: *OpenStack List > *Subject: *Re: [openstack-dev] [neutron] [OVN] Tempest API / Scenario > tests and OVN metadata > > > > this issue isn't only for networking ovn, please note that it happens with > a flew other vendor plugins (like nsx), at least this is something we have > found in downstream certifications. > > > > Cheers, > > On Sat, Apr 7, 2018, 12:36 AM Daniel Alvarez wrote: > > > > > On 6 Apr 2018, at 19:04, Sławek Kapłoński wrote: > > > > Hi, > > > > Another idea is to modify test that it will: > > 1. Check how many ports are in tenant, > > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it > is now, > > 3. Try to add 2 ports - exactly as it is now, > > > Cool, I like this one :-) > Good idea. > > > I think that this should be still backend agnostic and should fix this > problem. > > > >> Wiadomość napisana przez Sławek Kapłoński w dniu > 06.04.2018, o godz. 17:08: > >> > >> Hi, > >> > >> I don’t know how networking-ovn is working but I have one question. > >> > >> > >>> Wiadomość napisana przez Daniel Alvarez Sanchez > w dniu 06.04.2018, o godz. 15:30: > >>> > >>> Hi, > >>> > >>> Thanks Lucas for writing this down. > >>> > >>> On Thu, Apr 5, 2018 at 11:35 AM, Lucas Alvares Gomes < > lucasagomes at gmail.com> wrote: > >>> Hi, > >>> > >>> The tests below are failing in the tempest API / Scenario job that > >>> runs in the networking-ovn gate (non-voting): > >>> > >>> > neutron_tempest_plugin.api.admin.test_quotas_negative.QuotasAdminNegativeTestJSON.test_create_port_when_quotas_is_full > >>> > neutron_tempest_plugin.api.test_routers.RoutersIpV6Test.test_router_interface_status > >>> > neutron_tempest_plugin.api.test_routers.RoutersTest.test_router_interface_status > >>> > neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_prefixlen > >>> > neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_quota > >>> > neutron_tempest_plugin.api.test_subnetpools.SubnetPoolsTest.test_create_subnet_from_pool_with_subnet_cidr > >>> > >>> Digging a bit into it I noticed that with the exception of the two > >>> "test_router_interface_status" (ipv6 and ipv4) all other tests are > >>> failing because the way metadata works in networking-ovn. > >>> > >>> Taking the "test_create_port_when_quotas_is_full" as an example. The > >>> reason why it fails is because when the OVN metadata is enabled, > >>> networking-ovn will metadata port at the moment a network is created > >>> [0] and that will already fulfill the quota limit set by that test > >>> [1]. > >>> > >>> That port will also allocate an IP from the subnet which will cause > >>> the rest of the tests to fail with a "No more IP addresses available > >>> on network ..." error. > >>> > >>> With ML2/OVS we would run into the same Quota problem if DHCP would be > >>> enabled for the created subnets. This means that if we modify the > current tests > >>> to enable DHCP on them and we account this extra port it would be > valid for > >>> all networking-ovn as well. Does it sound good or we still want to > isolate quotas? > >> > >> If DHCP will be enabled for networking-ovn, will it use one more port > also or not? If so then You will still have the same problem with DHCP as > in ML2/OVS You will have one port created and for networking-ovn it will be > 2 ports. 
> >> If it’s not like that then I think that this solution, with some > comment in test code why DHCP is enabled should be good IMO. > >> > >>> > >>> This is not very trivial to fix because: > >>> > >>> 1. Tempest should be backend agnostic. So, adding a conditional in the > >>> tempest test to check whether OVN is being used or not doesn't sound > >>> correct. > >>> > >>> 2. Creating a port to be used by the metadata agent is a core part of > >>> the design implementation for the metadata functionality [2] > >>> > >>> So, I'm sending this email to try to figure out what would be the best > >>> approach to deal with this problem and start working towards having > >>> that job to be voting in our gate. Here are some ideas: > >>> > >>> 1. Simple disable the tests that are affected by the metadata approach. > >>> > >>> 2. Disable metadata for the tempest API / Scenario tests (here's a > >>> test patch doing it [3]) > >>> > >>> IMHO, we don't want to do this as metadata is likely to be enabled in > all the > >>> clouds either using ML2/OVS or OVN so it's good to keep exercising > >>> this part. > >>> > >>> > >>> 3. Same as 1. but also create similar tempest tests specific for OVN > >>> somewhere else (in the networking-ovn tree?!) > >>> > >>> As we discussed on IRC I'm keen on doing this instead of getting bits > in > >>> tempest to do different things depending on the backend used. Unless > >>> we want to enable DHCP on the subnets that these tests create :) > >>> > >>> > >>> What you think would be the best way to workaround this problem, any > >>> other ideas ? > >>> > >>> As for the "test_router_interface_status" tests that are failing > >>> independent of the metadata, there's a bug reporting the problem here > >>> [4]. So we should just fix it. > >>> > >>> [0] > https://github.com/openstack/networking-ovn/blob/f3f5257fc465bbf44d589cc16e9ef7781f6b5b1d/networking_ovn/common/ovn_client.py#L1154 > > >>> [1] > https://github.com/openstack/neutron-tempest-plugin/blob/35bf37d1830328d72606f9c790b270d4fda2b854/neutron_tempest_plugin/api/admin/test_quotas_negative.py#L66 > > >>> [2] > https://docs.openstack.org/networking-ovn/latest/contributor/design/metadata_api.html#overview-of-proposed-approach > >>> [3] https://review.openstack.org/#/c/558792/ > >>> [4] https://bugs.launchpad.net/networking-ovn/+bug/1713835 > >>> > >>> Cheers, > >>> Lucas > >>> > >>> Thanks, > >>> Daniel > >>> > >>> > __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> > __________________________________________________________________________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> — > >> Best regards > >> Slawek Kaplonski > >> slawek at kaplonski.pl > >> > >> > __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > — > > Best regards > > Slawek Kaplonski > > slawek at kaplonski.pl > > > > > > > > > > > 
__________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luo.lujin at jp.fujitsu.com Mon Apr 9 08:42:24 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Mon, 9 Apr 2018 08:42:24 +0000 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: Hello everyone, I was on bug deputy between 2018/04/02 and 2018/04/09. I am sending a short summary of the bugs reported during this period. We do not have many bug reported this week. https://bugs.launchpad.net/neutron/+bug/1760047 - Confirmed but the importance is not yet decided. It is about when spawning large number of VMs at the same time, some ports not becoming ACTIVE. It seems we need more details from the bug reporter or we need to figure out a way to reproduce it in small scale. I will bring this to Miguel too. https://bugs.launchpad.net/neutron/+bug/1760584 - Medium. This is about how tempest tests warnings about subnet CIDR . The possible fix is propsed by haleyb but no one has assigned this bug yet. If anyone if interested, please take it over. https://bugs.launchpad.net/neutron/+bug/1760902 - Low. Hongbin proposes we align segment resource to contain standard attributes. https://bugs.launchpad.net/neutron/+bug/1761070 - Medium. It is about bridge mappings, where neutron/agent/linux/iptables_firewall.py doesn't take into account mappings and just uses the default bridge name which is derived from the network ID. It is not assigned yet. Anyone interested, please take it over. https://bugs.launchpad.net/neutron/+bug/1761555 and https://bugs.launchpad.net/neutron/+bug/1761591 - Triaging. Swami has been following up with the bug reporter to find out what the problems are. https://bugs.launchpad.net/neutron/+bug/1761748 - Medium. CI failures in networking-hyperv about not able to get port details for devices. It is not assigned yet. Anyone interested, please take it over. https://bugs.launchpad.net/neutron/+bug/1761823 - RFE. This derives from another RFE that we should add /ip-address resource to API. It needs discussion on the drivers meeting. Best regards, Lujin ∽------------------------------------------- Lujin Luo Email: luo.lujin at jp.fujitsu.com Tel: (81) 044-754-2027 Linux Development Division Platform Software Business Unit Fujitsu Ltd. 
------------------------------------------∽

From singh.surya64mnnit at gmail.com  Mon Apr 9 08:45:06 2018
From: singh.surya64mnnit at gmail.com (Surya Singh)
Date: Mon, 9 Apr 2018 17:45:06 +0900
Subject: [openstack-dev] [kolla] [tripleo] On moving start scripts out of Kolla images
In-Reply-To: 
References: <0892491c-f57e-2952-eac3-a86797db5a8e@oracle.com>
 <63481CFF-1BDA-4F88-BF5D-E0C3766935A8@cisco.com>
Message-ID: 

On Sat, Apr 7, 2018 at 11:11 AM, Jeffrey Zhang wrote:
> +1 for kolla-api
>
> Migrating all scripts from kolla (the images) to kolla-ansible will
> make the images hard to use by downstream. Martin explains this
> clearly: we need some API to make the images easier to use.
> For the operator, I don't think he needs to read the whole
> set_configs.py file. Just knowing what the config.json file looks
> like, and what effects it has, is enough. So a doc is enough.

Yes, I agree: moving the scripts out of kolla would not be that easy for
downstream to use. And it seems very reasonable to me that a kolla API
would be a good thing to make the images easy to use.

> For images, we need to add some common functions before using them,
> instead of using the upstream image directly. For example, if we
> support loci, mostly we will use upstream infra images, like mariadb,
> redis etc. But are they really enough for production use directly?
> There are some concerns here:
>
> - drop root: does it work when it runs without root?
> - init process: does it contain an init process binary?
> - configuration: different images may use different configuration
>   methods. Should we unify them?
> - lack of packages: what if the image lacks some packages we need?
>
> One possible solution for this, I think, is to use an upstream image +
> kolla-api to generate an image with these features.
>
> On Sat, Apr 7, 2018 at 6:47 AM, Steven Dake (stdake)
> wrote:
>>
>> Mark,
>>
>> TLDR good proposal
>>
>> I don't think Paul was proposing what you proposed. However:
>>
>> You make a strong case for separately packaging the api (mostly which
>> is setcfg.py and the json API + docs + samples). I am super surprised
>> nobody has ever proposed this in the past, but now is as good of a
>> time as any to propose a good model for managing the JSON->setcfg.py
>> API. We could unit test this with extreme clarity, document with
>> extreme clarity, and provide an easier path for people to submit
>> changes to the API that they require to run the OpenStack containers.
>> Finally, it would provide complete semver semantics for managing
>> change and provide perfect backwards compatibility.
>>
>> A separate repo for this proposed api split makes sense to me. I think
>> initially we would want to seed with the kolla core team but be open
>> to anyone that reviews + contributes to join the kolla-api core team
>> (just as happens with other kolla deliverables).
>>
>> This should reduce cross-project developer friction which was an
>> implied but unstated problem in the various threads over the last week
>> and produce the many other beneficial effects APIs produce along with
>> the benefits you stated above.
>>
>> I'm not sure if this approach is technically sound –but I'd be in
>> favor of this approach if it were not too disruptive, provided full
>> backwards compatibility and was felt to be an improvement by the
>> consumers of kolla images.
I don’t think deprecation is something that is all that viable with >> an API model like the one we have nor this new repo and think we need to set >> clear boundaries around what would/would not be done. >> >> >> >> I do know that a change of this magnitude is a lot of work for the >> community to take on – and just like adding or removing any deliverable in >> kolla, would require a majority vote from the CR team. >> >> >> >> Also, repeating myself, I don’t think the current API is good nor perfect, >> I don’t think perfection is necessarily possible, but this may help drive >> towards that mythical perfection that interested parties seek to achieve. >> >> >> Cheers >> >> -steve >> >> >> >> From: Mark Goddard >> Reply-To: "OpenStack Development Mailing List (not for usage questions)" >> >> Date: Friday, April 6, 2018 at 12:30 PM >> To: "OpenStack Development Mailing List (not for usage questions)" >> >> Subject: Re: [openstack-dev] [kolla] [tripleo] On moving start scripts out >> of Kolla images >> >> >> >> >> >> On Thu, 5 Apr 2018, 20:28 Martin André, wrote: >> >> On Thu, Apr 5, 2018 at 2:16 PM, Paul Bourke >> wrote: >> > Hi all, >> > >> > This mail is to serve as a follow on to the discussion during >> > yesterday's >> > team meeting[4], which was regarding the desire to move start scripts >> > out of >> > the kolla images [0]. There's a few factors at play, and it may well be >> > best >> > left to discuss in person at the summit in May, but hopefully we can get >> > at >> > least some of this hashed out before then. >> > >> > I'll start by summarising why I think this is a good idea, and then >> > attempt >> > to address some of the concerns that have come up since. >> > >> > First off, to be frank, this is effort is driven by wanting to add >> > support >> > for loci images[1] in kolla-ansible. I think it would be unreasonable >> > for >> > anyone to argue this is a bad objective to have, loci images have very >> > obvious benefits over what we have in Kolla today. I'm not looking to >> > drop >> > support for Kolla images at all, I simply want to continue decoupling >> > things >> > to the point where operators can pick and choose what works best for >> > them. >> > Stemming from this, I think moving these scripts out of the images >> > provides >> > a clear benefit to our consumers, both users of kolla and third parties >> > such >> > as triple-o. Let me explain why. >> >> It's still very obscure to me how removing the scripts from kolla >> images will benefit consumers. If the reason is that you want to >> re-use them in other, non-kolla images, I believe we should package >> the scripts. I've left some comments in your spec review. >> >> >> >> +1 to extracting and packaging the kolla API. This will make it easier to >> test and document, allow for versioning, and make it a first class citizen >> rather than a file in the build context of the base image. Plus, if it >> really is as good as some people are arguing, then it should be shared. >> >> >> >> For many of the other helper scripts that get bundled into the kolla >> images, I can see an argument for pulling these up to the deployment layer. >> These could easily be moved to kolla-ansible, and added via config.json. I >> guess it would be useful to know whether other deployment tools (tripleo) >> are using any of these - if they are shared then perhaps the images are the >> best place for them. >> >> >> >> >> > Normally, to run a docker image, a user will do 'docker run >> > helloworld:latest'. 
In any non trivial application, config needs to be >> > provided. In the vast majority of cases this is either provided via a >> > bind >> > mount (docker run -v hello.conf:/etc/hello.conf helloworld:latest), or >> > via >> > environment variables (docker run --env HELLO=paul helloworld:latest). >> > This >> > is all bog standard stuff, something anyone who's spent an hour learning >> > docker can understand. >> > >> > Now, lets say someone wants to try out OpenStack with Docker, and they >> > look >> > at Kolla. First off they have to look at something called >> > set_configs.py[2] >> > - over 400 lines of Python. Next they need to understand what that >> > script >> > consumes, config.json [3]. The only reference for config.json is the >> > files >> > that live in kolla-ansible, a mass of jinja and assumptions about how >> > the >> > service will be run. Next, they need to figure out how to bind mount the >> > config files and config.json into the container in a way that can be >> > consumed by set_configs.py (which by the way, requires the base kolla >> > image >> > in all cases). This is only for the config. For the service start up >> > command, this need to also be provided in config.json. This command is >> > then >> > parsed out and written to a location in the image, which is consumed by >> > a >> > series of start/extend start shell scripts. Kolla is *unique* in this >> > regard, no other project in the container world is interfacing with >> > images >> > in this way. Being a snowflake in this regard is not a good thing. I'm >> > still >> > waiting to hear from a real world operator who would prefer to spend >> > time >> > learning the above to doing: >> >> You're pointing a very real documentation issue. I've mentioned in the >> other kolla thread that I have a stub for the kolla API documentation. >> I'll push a patch for what I have and we can iterate on that. >> >> > docker run -v /etc/keystone:/etc/keystone keystone:latest --entrypoint >> > /usr/bin/keystone [args] >> > >> > This is the Docker API, it's easy to understand and pretty much the >> > standard >> > at this point. >> >> Sure, using the docker API works for simpler cases, not too >> surprisingly once you start doing more funky things with your >> containers you're quickly reach the docker API limitations. That's >> when the kolla API comes in handy. >> See for example this recent patch >> https://review.openstack.org/#/c/556673/ where we needed to change >> some file permission to the uid/gid of the user inside the container. >> >> The first iteration basically used the docker API and started an >> additional container to fix the permissions: >> >> docker run -v >> /etc/pki/tls/certs/neutron.crt:/etc/pki/tls/certs/neutron.crt:rw \ >> -v >> /etc/pki/tls/private/neutron.key:/etc/pki/tls/private/neutron.key:rw >> \ >> neutron_image \ >> /bin/bash -c 'chown neutron:neutron >> /etc/pki/tls/certs/neutron.crt; chown neutron:neutron >> /etc/pki/tls/private/neutron.key' >> >> You'll agree this is not the most obvious. And it had a nasty side >> effect that is changes the permissions of the files _on the host_. >> While using kolla API we could simply add to our config.json: >> >> - path: /etc/pki/tls/certs/neutron.crt >> owner: neutron:neutron >> - path: /etc/pki/tls/private/neutron.key >> owner: neutron:neutron >> >> > The other argument is that this removes the possibility for immutable >> > infrastructure. 
The concern is, with the new approach, a rookie operator >> > will modify one of the start scripts - resulting in uncertainty that >> > what >> > was first deployed matches what is currently running. But with the way >> > Kolla >> > is now, an operator can still do this! They can restart containers with >> > a >> > custom entrypoint or additional bind mounts, they can exec in and change >> > config files, etc. etc. Kolla containers have never been immutable and >> > we're >> > bending over backwards to artificially try and make this the case. We >> > cant >> > protect a bad or inexperienced operator from shooting themselves in the >> > foot, there are better ways of doing so. If/when Docker or the upstream >> > container world solves this problem, it would then make sense for Kolla >> > to >> > follow suit. >> > >> > On the face of it, what the spec proposes is a simple change, it should >> > not >> > radically pull the carpet out under people, or even change the way >> > kolla-ansible works in the near term. If consumers such as tripleo or >> > other >> > parties feel it would in fact do so please do let me know and we can >> > discuss >> > and mitigate these problems. >> >> TripleO uses these scripts extensively, we certainly do not want to >> see them go away from kolla images. >> >> Martin >> >> > Cheers, >> > -Paul >> > >> > [0] https://review.openstack.org/#/c/550958/ >> > [1] https://github.com/openstack/loci >> > [2] >> > >> > https://github.com/openstack/kolla/blob/master/docker/base/set_configs.py >> > [3] >> > >> > https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/keystone/templates/keystone.json.j2 >> > [4] >> > >> > http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-04-04-16.00.log.txt >> > >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From geguileo at redhat.com Mon Apr 9 08:51:23 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 9 Apr 2018 10:51:23 +0200 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> Message-ID: <20180409085123.ydm5n3i3lngqsgjc@localhost> On 06/04, Matt Riedemann wrote: > 
On 4/6/2018 5:09 AM, Matthew Booth wrote:
> > I think you're talking at cross purposes here: this won't require a
> > swap volume. Apart from anything else, swap volume only works on an
> > attached volume, and as previously discussed Nova will detach and
> > re-attach.
> >
> > Gorka, the Nova api Matt is referring to is called volume update
> > externally. It's the operation required for live migrating an
> > attached volume between backends. It's called swap volume internally
> > in Nova.
>
> Yeah I was hoping we were just having a misunderstanding of what 'swap
> volume' in nova is, which is the blockRebase for an already attached
> volume to the guest, called from cinder during a volume retype or
> migration.
>
> As for the re-image thing, nova would be detaching the volume from the
> guest prior to calling the new cinder re-image API, and then re-attach
> to the guest afterward - similar to how shelve and unshelve work, and
> for that matter how rebuild works today with non-root volumes.
>
> --
>
> Thanks,
>
> Matt
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Hi,

Thanks for the clarification. When I was talking about "swapping" I was
referring to the fact that Nova will have to not only detach the volume
locally using OS-Brick, but it will also need to use new connection
information to do the attach after the volume has been re-imaged.

As I see it, the process would look something like this:

- Nova detaches volume using OS-Brick
- Nova calls Cinder re-image passing the node's info (like we do when
  attaching a new volume)
- Cinder would:
  - Ensure only that node is connected to the volume
  - Terminate connection to the original volume
  - If we can do optimized volume creation:
    - If encrypted volume we create a copy of the encryption key on
      Barbican or copy the ID field from the DB and ensure we don't
      delete the Barbican key on the delete.
    - Create new volume from image
    - Swap DB fields to preserve the UUID
    - Delete original volume
  - If it cannot do optimized volume creation:
    - Initialize+Attach volume to Cinder node
    - DD the new image into the volume
    - Detach+Terminate volume
  - Initialize connection for the new volume to the Nova node
  - Return connection information to the volume
- Nova attaches volume with OS-Brick using returned connection
  information.

So I agree, it's not a blockRebase operation, just a change in the
volume that is used.

Regards,
Gorka.

From openstack at sheep.art.pl  Mon Apr 9 09:32:23 2018
From: openstack at sheep.art.pl (Radomir Dopieralski)
Date: Mon, 9 Apr 2018 11:32:23 +0200
Subject: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified
In-Reply-To: 
References: 
Message-ID: 

The whole idea about xstatic files is that they are generic, not
specific to Horizon or OpenStack, usable by other projects that need
those static files. In fact, at the time we started using xstatic, it
was being used by the MoinMoin wiki project (which is now dead, sadly).
The modifications you made are very specific to your usecase and would
make it impossible to reuse the packages by other applications (or even
by other Horizon plugins). The whole idea of a library is that you are
using it as it is provided, and not modifying it.
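The xstatic packaging convention itself assumes this: a package is
nothing but the unmodified upstream files plus a thin Python metadata
module recording which upstream release they came from. As a rough
sketch (the field names here are recalled from the xstatic convention
and "foo" is a made-up example; check an existing package such as
XStatic-jQuery for the authoritative layout):

    # xstatic/pkg/foo/__init__.py -- illustrative only
    import os

    NAME = 'foo'               # lower-cased upstream project name
    VERSION = '1.2.3'          # the exact upstream release being wrapped
    BUILD = '1'                # bumped only for packaging-level changes
    PACKAGE_VERSION = VERSION + '.' + BUILD
    # the unmodified upstream files live under data/
    BASE_DIR = os.path.join(os.path.dirname(__file__), 'data')

Since VERSION promises "exactly this upstream release", locally patched
files would make that promise a lie.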
We generally try to use all the libraries as they are, and if there are any modifications necessary, we push them upstream, to the original library. Otherwise there would be quite a bit of maintenance overhead necessary to keep all our downstream patches. When considerable modification is necessary that can't be pushed upstream, we fork the library either into its own repository, or include it in the repository of the application that is using it. On Mon, Apr 9, 2018 at 2:54 AM, Xinni Ge wrote: > Hello, team. > > Sorry for talking about xstatic repo for so many times. > > I didn't realize xstatic repositories should be provided with exactly the > same file as upstream, and should have talked about it at very first. > > I modified several upstream files because some of them files couldn't be > used directly under my expectation. > > For example, {{ }} are used in some original files as template tags, but > Horizon adopts {$ $} in angular module, so I modified them to be recognized > properly. > > Another major modification is that css files are converted into scss files > to solve some css import issue previously. > Besides, after collecting statics, some png file paths in css cannot be > referenced properly and shown as 404 errors, I also modified css itself to > handle this issues. > > I will recheck all the un-matched xstatic repositories and try to replace > with upstream files as much as I can. > But I if I really have to modify some original files, is it acceptable to > still use it as embedded files with license info appeared at the top? > > > Best Regards, > Xinni Ge > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Mon Apr 9 09:53:06 2018 From: geguileo at redhat.com (Gorka Eguileor) Date: Mon, 9 Apr 2018 11:53:06 +0200 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? In-Reply-To: References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> <20180404090026.xl22i4kyplurq36z@localhost> Message-ID: <20180409095306.s4qxqi7q3m7p46d2@localhost> On 06/04, Michael Bayer wrote: > On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor wrote: > > On 03/04, Jay Pipes wrote: > >> On 04/03/2018 11:07 AM, Michael Bayer wrote: > >> > The MySQL / MariaDB variants we use nowadays default to > >> > innodb_file_per_table=ON and we also set this flag to ON in installer > >> > tools like TripleO. The reason we like file per table is so that > >> > we don't grow an enormous ibdata file that can't be shrunk without > >> > rebuilding the database. Instead, we have lots of little .ibd > >> > datafiles for each table throughout each openstack database. > >> > > >> > But now we have the issue that these files also can benefit from > >> > periodic optimization which can shrink them and also have a beneficial > >> > effect on performance. The OPTIMIZE TABLE statement achieves this, > >> > but as would be expected it itself can lock tables for potentially a > >> > long time. Googling around reveals a lot of controversy, as various > >> > users and publications suggest that OPTIMIZE is never needed and would > >> > have only a negligible effect on performance. 
However here we seek > >> > to use OPTIMIZE so that we can reclaim disk space on tables that have > >> > lots of DELETE activity, such as keystone "token" and ceilometer > >> > "sample". > >> > > >> > Questions for the group: > >> > > >> > 1. is OPTIMIZE table worthwhile to be run for tables where the > >> > datafile has grown much larger than the number of rows we have in the > >> > table? > >> > >> Possibly, though it's questionable to use MySQL/InnoDB for storing transient > >> data that is deleted often like ceilometer samples and keystone tokens. A > >> much better solution is to use RDBMS partitioning so you can simply ALTER > >> TABLE .. DROP PARTITION those partitions that are no longer relevant (and > >> don't even bother DELETEing individual rows) or, in the case of Ceilometer > >> samples, don't use a traditional RDBMS for timeseries data at all... > >> > >> But since that is unfortunately already the case, yes it is probably a good > >> idea to OPTIMIZE TABLE on those tables. > >> > >> > 2. from people's production experience how safe is it to run OPTIMIZE, > >> > e.g. how long is it locking tables, etc. > >> > >> Is it safe? Yes. > >> > >> Does it lock the entire table for the duration of the operation? No. It uses > >> online DDL operations: > >> > >> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html > >> > >> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB > >> tables. > >> > >> > 3. is there a heuristic we can use to measure when we might run this > >> > -.e.g my plan is we measure the size in bytes of each row in a table > >> > and then compare that in some ratio to the size of the corresponding > >> > .ibd file, if the .ibd file is N times larger than the logical data > >> > size we run OPTIMIZE ? > >> > >> I don't believe so, no. Most things I see recommended is to simply run > >> OPTIMIZE TABLE in a cron job on each table periodically. > >> > >> > 4. I'd like to propose this job of scanning table datafile sizes in > >> > ratio to logical data sizes, then running OPTIMIZE, be a utility > >> > script that is delivered via oslo.db, and would run for all innodb > >> > tables within a target MySQL/ MariaDB server generically. That is, I > >> > really *dont* want this to be a script that Keystone, Nova, Ceilometer > >> > etc. are all maintaining delivering themselves. this should be done > >> > as a generic pass on a whole database (noting, again, we are only > >> > running it for very specific InnoDB tables that we observe have a poor > >> > logical/physical size ratio). > >> > >> I don't believe this should be in oslo.db. This is strictly the purview of > >> deployment tools and should stay there, IMHO. > >> > > > > Hi, > > > > As far as I know most projects do "soft deletes" where we just flag the > > rows as deleted and don't remove them from the DB, so it's only when we > > use a management tool and run the "purge" command that we actually > > remove these rows. > > > > Since running the optimize without purging would be meaningless, I'm > > wondering if we should trigger the OPTIMIZE also within the purging > > code. This way we could avoid innefective runs of the optimize command > > when no purge has happened and even when we do the optimization we could > > skip the ratio calculation altogether for tables where no rows have been > > deleted (the ratio hasn't changed). 
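(As an aside on the ratio calculation above: the check itself looks
cheap to prototype. Purely as a sketch -- assuming
innodb_file_per_table=ON, filesystem access to the datadir, and an
arbitrary illustrative threshold -- finding the candidate tables could
look like:

    import os

    RATIO = 2.0  # flag .ibd files more than 2x the logical size

    def tables_to_optimize(cursor, datadir):
        # information_schema reports the logical size InnoDB is
        # actually using for row data and indexes.
        cursor.execute(
            "SELECT table_schema, table_name, "
            "       data_length + index_length "
            "FROM information_schema.tables "
            "WHERE engine = 'InnoDB'")
        for schema, table, logical_size in cursor.fetchall():
            ibd = os.path.join(datadir, schema, table + '.ibd')
            if not logical_size or not os.path.exists(ibd):
                # empty table, or not stored file-per-table
                continue
            if os.path.getsize(ibd) > RATIO * logical_size:
                yield schema, table

The expensive part is not picking the candidates, it is running the
OPTIMIZE itself.)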
> > > > the issue is that this OPTIMIZE will block on Galera unless it is run > on a per-individual node basis along with the changing of the > wsrep_OSU_method parameter, this is way out of scope both to be > redundantly hardcoded in multiple openstack projects, as well as > there's no portable way for Keystone and others to get at the > individual Galera node addresses. Putting it in oslo.db would at > least be a place that most of this logic can live but even then it > needs to run for multiple Galera nodes and needs to have > deployment-specific configuration. *unless* we say, the OPTIMIZE > here will short for a purged table, let's just let it block. > I see... What about a hybrid solution? Use the alter table as mentioned in the comment [1] to not block the table for systems that support it, and going with the RSU mode when it's not supported? [1] https://mariadb.com/kb/en/library/optimize-table/#comment_3191 > > > Ideally the ratio calculation and optimization code would be provided by > > oslo.db to reduce code duplication between projects. > > I was hoping to have this be part of oslo.db but there's disagreement on that :) > > If this can't be in oslo.db then the biggest issue facing me on this > is building out a new application and getting it packaged since this > feature has no home, unless I can ship it as some kind of script > packaged in tripleo. > > I think the oslo.db home you proposed has the great benefit of making it available in all deployments regardless of the installer, if that's not possible I would go with the TripleO script before creating yet another project that needs to be packaged and maintained. Cheers, Gorka. > > > > > >> > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE > >> > on individual nodes directly. The script at [1] illustrates how to > >> > run this on individual nodes one at a time. > >> > > >> > More succinctly, the Q is: > >> > > >> > a. OPTIMIZE, yes or no? > >> > >> Yes. > >> > >> > b. oslo.db script to run generically, yes or no? > >> > >> No. Just have Triple-O install galera_innoptimizer and run it in a cron job. > >> > >> Best, > >> -jay > >> > >> > thanks for your thoughts! 
> >> > > >> > > >> > > >> > [1] https://github.com/deimosfr/galera_innoptimizer > >> > > >> > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kchamart at redhat.com Mon Apr 9 09:58:58 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Mon, 9 Apr 2018 11:58:58 +0200 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu> <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> <20180406170703.GD18076@paraplu> <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com> Message-ID: <20180409095858.GE18076@paraplu> On Fri, Apr 06, 2018 at 12:12:31PM -0500, Matt Riedemann wrote: > On 4/6/2018 12:07 PM, Kashyap Chamarthy wrote: > > FWIW, I'd suggest so, if it's not too much maintenance. It'll just > > spare you additional bug reports in that area, and the overall default > > experience when dealing with CPU models would be relatively much better. > > (Another way to look at it is, multiple other "conservative" long-term > > stable distributions also provide libvirt 3.2.0 and QEMU 2.9.0, so that > > should give you confidence.) > > > > Again, I don't want to push too hard on this. If that'll be messy from > > a package maintainance POV for you / Debian maintainers, then we could > > settle with whatever is in 'Stretch'. > > Keep in mind that Kashyap has a tendency to want the latest and greatest of > libvirt and qemu at all times for all of those delicious bug fixes. Keep in mind that Matt has a tendency to sometimes unfairly over-simplify others views ;-). More seriously, c'mon Matt; I went out of my way to spend time learning about Debian's packaging structure and trying to get the details right by talking to folks on #debian-backports. And as you may have seen, I marked the patch[*] as "RFC", and repeatedly said that I'm working on an agreeable lowest common denominator. > But we also know that new code also brings new not-yet-fixed bugs. Yep, of course. > Keep in mind the big picture here, we're talking about bumping from > minimum required (in Rocky) libvirt 1.3.1 to at least 3.0.0 (in Stein) > and qemu 2.5.0 to at least 2.8.0, so I think that's already covering > some good ground. 
Let's not get greedy. :)

Sure :-) Also if there's a way we can avoid bugs in the default
experience with minimal effort, we should.

Anyway, there we go: changed the patch[*] to what's in Stretch.

[*] https://review.openstack.org/#/c/558171/

--
/kashyap

From dougal at redhat.com  Mon Apr 9 11:01:44 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Mon, 9 Apr 2018 12:01:44 +0100
Subject: [openstack-dev] Today's Office Hour Time Change
Message-ID: 

Hey all,

I have moved the office hour today from 16:00 UTC to 15:00 UTC. If there
is demand we could make it a 2-hour slot or move it back. I wasn't able
to make it for 16:00 UTC today and it will often be tricky for me.

I think one of the biggest advantages of doing office hours is that we
can be more flexible. So if there isn't a slot that suits you, please
propose one!

On Friday we had a good triage session, reducing the untriaged bugs by
about 25. I hope to do something similar today unless somebody comes
along with specific topics they want to discuss.

The hours now are:
- Mon 15.00 UTC
- Wed 3.00 UTC
- Fri 8.00 UTC

The Office hour etherpad is:
https://etherpad.openstack.org/p/mistral-office-hours

(Side note: As far as I know there hasn't been any activity on the
Wednesday slot, so we may want to move that. It is at 2am for me, so I
won't ever make it personally.)

Cheers,
Dougal

From thierry at openstack.org  Mon Apr 9 12:02:02 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 9 Apr 2018 14:02:02 +0200
Subject: [openstack-dev] Vancouver Forum - Post your selected topics now
Message-ID: <7af5f78e-2a3f-dacb-77ef-ebe171d74361@openstack.org>

Hi everyone,

You've been actively brainstorming ideas of topics for discussion at the
"Forum" at the Vancouver OpenStack Summit. Now it's time to select which
ones you want to propose, and file them at forumtopics.openstack.org !
The topic submission website will be open until EOD on Sunday, April 15,
at which point the Forum selection committee will take the entries and
make the final selection. So you have the whole week to enter your
selection of ideas on the website.

Thanks !

--
Thierry Carrez (ttx)

From gergely.csatari at nokia.com  Mon Apr 9 12:22:52 2018
From: gergely.csatari at nokia.com (Csatari, Gergely (Nokia - HU/Budapest))
Date: Mon, 9 Apr 2018 12:22:52 +0000
Subject: [openstack-dev] Vancouver Forum - Post your selected topics now
In-Reply-To: <7af5f78e-2a3f-dacb-77ef-ebe171d74361@openstack.org>
References: <7af5f78e-2a3f-dacb-77ef-ebe171d74361@openstack.org>
Message-ID: 

Hi,

There are two lists of etherpads for forum brainstorming in
https://wiki.openstack.org/wiki/Forum/Vancouver2018 and there is
http://forumtopics.openstack.org/ . Is my understanding correct, that
ultimately all ideas should go to http://forumtopics.openstack.org/ ?

Thanks,
Gerg0

-----Original Message-----
From: Thierry Carrez [mailto:thierry at openstack.org]
Sent: Monday, April 9, 2018 2:02 PM
To: OpenStack Development Mailing List ;
openstack-operators at lists.openstack.org
Subject: [openstack-dev] Vancouver Forum - Post your selected topics now

Hi everyone,

You've been actively brainstorming ideas of topics for discussion at the
"Forum" at the Vancouver OpenStack Summit. Now it's time to select which
ones you want to propose, and file them at forumtopics.openstack.org !

The topic submission website will be open until EOD on Sunday, April 15,
at which point the Forum selection committee will take the entries and
make the final selection. So you have the whole week to enter your
selection of ideas on the website.

Thanks !

--
Thierry Carrez (ttx)

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org  Mon Apr 9 12:22:54 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Mon, 9 Apr 2018 13:22:54 +0100 (BST)
Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME
Message-ID: 

A little over two years ago I sent a reminder that WSME is not being
actively maintained:

http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html

Today I was reminded of this because a random (typo-related) patchset
demonstrated that the tests were no longer passing and fixing them is
enough of a chore that I (at least temporarily) marked one test as an
expected failure:

https://review.openstack.org/#/c/559717/

The following projects appear to still use WSME:

aodh
blazar
cloudkitty
cloudpulse
cyborg
glance
gluon
iotronic
ironic
magnum
mistral
mogan
octavia
panko
qinling
radar
ranger
searchlight
solum
storyboard
surveil
terracotta
watcher

Most of these are using the 'types' handling in WSME and sometimes the
pecan extension, and not the (potentially broken) Flask extension, so
things should be stable. However: nobody is working on keeping WSME up
to date. It is not a good long term investment.

--
Chris Dent                       ٩◔̯◔۶          https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From gkotton at vmware.com  Mon Apr 9 12:32:40 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 9 Apr 2018 12:32:40 +0000
Subject: [openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP
Message-ID: 

Hi,
From Queens onwards we have an issue with horizon and L2GW. We are
unable to create a floating IP. This does not occur when using the CLI,
only via horizon.
The error received is ‘Error: User does not have admin privileges:
Cannot GET resource for non admin tenant. Neutron server returns
request_ids: ['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']’
This is due to:
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316
This worked in Ocata and I am not sure what has changed since then ☹.
Maybe in the past the Ocata quotas were not checking L2GW.
Any ideas?
Thanks
Gary

From julien at danjou.info  Mon Apr 9 13:25:42 2018
From: julien at danjou.info (Julien Danjou)
Date: Mon, 09 Apr 2018 15:25:42 +0200
Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?
In-Reply-To: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> (Jay Pipes's
 message of "Tue, 3 Apr 2018 11:41:15 -0400")
References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com>
Message-ID: 

On Tue, Apr 03 2018, Jay Pipes wrote:

> Possibly, though it's questionable to use MySQL/InnoDB for storing
> transient data that is deleted often like ceilometer samples and
> keystone tokens. A much better solution is to use RDBMS partitioning
> so you can simply ALTER TABLE ..
> DROP PARTITION those partitions that are no longer relevant (and don't even > bother DELETEing individual rows) or, in the case of Ceilometer samples, don't > use a traditional RDBMS for timeseries data at all... For the record, and because I imagine not everyone follows Ceilometer, this codes does not exist anymore in Queens. Ceilometer storage (and API) has been deprecated for 2 cycles already and removed last release. Feel free to continue discussing the problem, but you can ignore Ceilometer. :) -- Julien Danjou /* Free Software hacker https://julien.danjou.info */ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 832 bytes Desc: not available URL: From lbragstad at gmail.com Mon Apr 9 13:43:07 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Mon, 9 Apr 2018 08:43:07 -0500 Subject: [openstack-dev] [keystone] Rocky forum topics Message-ID: Hey all, I've created an etherpad [0] to collect ideas/proposals for forum sessions in Vancouver. Please take a look and add anything that you think we should propose as a forum session. The deadline for submissions is this Sunday. Thanks, Lance [0] https://etherpad.openstack.org/p/YVR-keystone-forum-sessions -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From sbauza at redhat.com Mon Apr 9 13:45:00 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 9 Apr 2018 15:45:00 +0200 Subject: [openstack-dev] [nova] [placement] placement update 18-14 In-Reply-To: References: Message-ID: On Fri, Apr 6, 2018 at 2:54 PM, Chris Dent wrote: > > This is "contract" style update. New stuff will not be added to the > lists. > > # Most Important > > There doesn't appear to be anything new with regard to most > important. That which was important remains important. At the > scheduler team meeting at the start of the week there was talk of > working out ways to trim the amount of work in progress by using the > nova priorities tracking etherpad to help sort things out: > > https://etherpad.openstack.org/p/rocky-nova-priorities-tracking > > Update provider tree and nested allocation candidates remain > critical basic functionality on which much else is based. With most > of provider tree done, it's really on nested allocation candidates. > > # What's Changed > > Quite a bit of provider tree related code has merged. > > Some negotiation happened with regard to when/if the fixes for > shared providers is going to happen. I'm not sure how that resolved, > if someone can follow up with that, that would be most excellent. > > Most of the placement-req-filter series merged. > > The spec for error codes in the placement API merged (code is in > progress and ready for review, see below). > > # Questions > > * Eric and I discussed earlier in the week that it might be a good > time to start an #openstack-placement IRC channel, for two main > reasons: break things up so as to limit the crosstalk in the often > very busy #openstack-nova channel and to lend a bit of momentum > for going in that direction. Is this okay with everyone? If not, > please say so, otherwise I'll make it happen soon. > > Fine by me. It's sometimes difficult to follow all the conversations so having a separate channel looks good to me, at least for discussing only about specific Placement questions. 
For Nova related points (like how to use nested RPs for example with NUMA), maybe #openstack-nova is still the main IRC channel for that. * Shared providers status? > (I really think we need to make this go. It was one of the > original value propositions of placement: being able to accurate > manage shared disk.) > > # Bugs > > * Placement related bugs not yet in progress: https://goo.gl/TgiPXb > 15, -1 on last week > * In progress placement bugs: https://goo.gl/vzGGDQ > 13, +1 on last week > > # Specs > > These seem to be divided into three classes: > > * Normal stuff > * Old stuff not getting attention or newer stuff that ought to be > abandoned because of lack of support > * Anything related to the client side of using nested providers > effectively. This apparently needs a lot of thinking. If there are > some general sticking points we can extract and resolve, that > might help move the whole thing forward? > > * https://review.openstack.org/#/c/549067/ > VMware: place instances on resource pool > (using update_provider_tree) > > * https://review.openstack.org/#/c/545057/ > mirror nova host aggregates to placement API > > * https://review.openstack.org/#/c/552924/ > Proposes NUMA topology with RPs > > * https://review.openstack.org/#/c/544683/ > Account for host agg allocation ratio in placement > > * https://review.openstack.org/#/c/552927/ > Spec for isolating configuration of placement database > (This has a strong +2 on it but needs one more.) > > * https://review.openstack.org/#/c/552105/ > Support default allocation ratios > > * https://review.openstack.org/#/c/438640/ > Spec on preemptible servers > > * https://review.openstack.org/#/c/556873/ > Handle nested providers for allocation candidates > > * https://review.openstack.org/#/c/556971/ > Add Generation to Consumers > > * https://review.openstack.org/#/c/557065/ > Proposes Multiple GPU types > > * https://review.openstack.org/#/c/555081/ > Standardize CPU resource tracking > > * https://review.openstack.org/#/c/502306/ > Network bandwidth resource provider > > * https://review.openstack.org/#/c/509042/ > Propose counting quota usage from placement > > # Main Themes > > ## Update Provider Tree > > Most of the main guts of this have merged (huzzah!). What's left are > some loose end details, and clean handling of aggregates: > > https://review.openstack.org/#/q/topic:bp/update-provider-tree > > ## Nested providers in allocation candidates > > Representing nested provides in the response to GET > /allocation_candidates is required to actually make use of all the > topology that update provider tree will report. That work is in > progress at: > > https://review.openstack.org/#/q/topic:bp/nested-resource-providers > https://review.openstack.org/#/q/topic:bp/nested-resource-pr > oviders-allocation-candidates > > Note that some of this includes the up-for-debate shared handling. > > ## Request Filters > > As far as I can tell this is mostly done (yay!) but there is a loose > end: We merged an updated spec to support multiple member_of > parameters, but it's not clear anybody is currently owning that: > > https://review.openstack.org/#/c/555413/ > > ## Mirror nova host aggregates to placement > > This makes it so some kinds of aggregate filtering can be done > "placement side" by mirroring nova host aggregates into placement > aggregates. > > https://review.openstack.org/#/q/topic:bp/placement-mirror-h > ost-aggregates > > It's part of what will make the req filters above useful. 
> > ## Forbidden Traits > > A way of expressing "I'd like resources that do _not_ have trait X". > This is ready for review: > > https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits > > ## Consumer Generations > > This allows multiple agents to "safely" update allocations for a > single consumer. There is both a spec and code in progress for this: > > https://review.openstack.org/#/q/topic:bp/add-consumer-generation > > # Extraction > > Small bits of work on extraction continue on the > bp/placement-extract topic: > > https://review.openstack.org/#/q/topic:bp/placement-extract > > The spec for optional database handling got some nice support > but needs more attention: > > https://review.openstack.org/#/c/552927/ > > Jay has declared that he's going to start work on the > os-resources-classes library. > > I've posted a 6th in my placement container playground series: > > https://anticdent.org/placement-container-playground-6.html > > Though not directly related to extraction, that experimentation has > exposed a lot of the areas where work remains to be done to make > placement independent of nova. > > A recent experiment with shrinking the repo to just the placement > dir reinforced a few things we already know: > > * The placement tests need their own base test to avoid 'from nova > import test' > * That will need to provide database and other fixtures (such a > config and the self.flags feature). > * And, of course, eventually, config handling. The container > experiments above demonstrate just how little config placement > actually needs (by design, let's keep it that way). > > # Other > > This is a contract week, so nothing new has been added here, despite > there being new work. Part of the intent here it make sure we are > queue-like where we can be. This list maintains its ordering from > week to week: newly discovered things are added to the end. > > There are 14 entries here, -7 on last week. > > That's good. However some of the removals are the result of some > code changing topic (and having been listed here by topic). Some of > the oldest stuff at the top of the list has not moved. 
> > * https://review.openstack.org/#/c/546660/ > Purge comp_node and res_prvdr records during deletion of > cells/hosts > > * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky > A huge pile of improvements to osc-placement > > * https://review.openstack.org/#/c/546713/ > Add compute capabilities traits (to os-traits) > > * https://review.openstack.org/#/c/524425/ > General policy sample file for placement > > * https://review.openstack.org/#/c/546177/ > Provide framework for setting placement error codes > > * https://review.openstack.org/#/c/527791/ > Get resource provider by uuid or name (osc-placement) > > * https://review.openstack.org/#/c/477478/ > placement: Make API history doc more consistent > > * https://review.openstack.org/#/c/556669/ > Handle agg generation conflict in report client > > * https://review.openstack.org/#/c/556628/ > Slugification utilities for placement names > > * https://review.openstack.org/#/c/557086/ > Remove usage of [placement]os_region_name > > * https://review.openstack.org/#/c/556633/ > Get rid of 406 paths in report client > > * https://review.openstack.org/#/c/537614/ > Add unit test for non-placement resize > > * https://review.openstack.org/#/c/554357/ > Address issues raised in adding member_of to GET /a-c > > * https://review.openstack.org/#/c/493865/ > cover migration cases with functional tests > > # End > > 2 runway slots open up this coming Wednesday, the 11th. > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Apr 9 13:58:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 09 Apr 2018 09:58:28 -0400 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <20180409033928.GB28028@thor.bakeyournoodle.com> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> Message-ID: <1523282186-sup-2@lrrr.local> Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000: > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > > > My understanding of our use of upper constraints was that this should > > (almost) always be the case for (almost) all dependencies. We should > > rely on constraints instead of requirements caps. Capping libs like > > pbr or eventlet and any other that is in use globally is incredibly > > difficult to work with when you want to uncap it because you have to > > coordinate globally. Instead if using constraints you just bump the > > constraint and are done. > > Part of the reason that we have the caps it to prevent the tools that > auto-generate the constraints syncs from considering these versions and > then depending on the requirements team to strip that from the bot > change before committing (assuming it passes CI). > > Once the work Doug's doing is complete we could consider tweaking the > tools to use a different mechanism, but that's only part of the reason > for the caps in g-r. > > Yours Tony. 
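To make the mechanics concrete first (the version numbers below are
illustrative only, not the real entries):

    # global-requirements.txt: the shared floor, known-bad exclusions
    # and, for troublesome libraries, a cap
    eventlet!=0.18.3,!=0.20.1,>=0.18.2

    # upper-constraints.txt: the single co-installable version that CI
    # actually tests with
    eventlet===0.20.0

Jobs install with "pip install -c upper-constraints.txt -r
requirements.txt", so moving to a new eventlet normally means changing
one constraint line on one branch, instead of editing a cap that has
been copied into every project's requirements.txt.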
Now that projects don't have to match the global requirements list entries exactly we should be able to remove caps from within the projects and keep caps in the global list for cases like this where we know we frequently encounter breaking changes in new releases. The changes to support that were part of https://review.openstack.org/#/c/555402/ Doug From mbayer at redhat.com Mon Apr 9 14:36:59 2018 From: mbayer at redhat.com (Michael Bayer) Date: Mon, 9 Apr 2018 10:36:59 -0400 Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ? In-Reply-To: <20180409095306.s4qxqi7q3m7p46d2@localhost> References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> <20180404090026.xl22i4kyplurq36z@localhost> <20180409095306.s4qxqi7q3m7p46d2@localhost> Message-ID: On Mon, Apr 9, 2018 at 5:53 AM, Gorka Eguileor wrote: > On 06/04, Michael Bayer wrote: >> On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor wrote: >> > On 03/04, Jay Pipes wrote: >> >> On 04/03/2018 11:07 AM, Michael Bayer wrote: >> >> > The MySQL / MariaDB variants we use nowadays default to >> >> > innodb_file_per_table=ON and we also set this flag to ON in installer >> >> > tools like TripleO. The reason we like file per table is so that >> >> > we don't grow an enormous ibdata file that can't be shrunk without >> >> > rebuilding the database. Instead, we have lots of little .ibd >> >> > datafiles for each table throughout each openstack database. >> >> > >> >> > But now we have the issue that these files also can benefit from >> >> > periodic optimization which can shrink them and also have a beneficial >> >> > effect on performance. The OPTIMIZE TABLE statement achieves this, >> >> > but as would be expected it itself can lock tables for potentially a >> >> > long time. Googling around reveals a lot of controversy, as various >> >> > users and publications suggest that OPTIMIZE is never needed and would >> >> > have only a negligible effect on performance. However here we seek >> >> > to use OPTIMIZE so that we can reclaim disk space on tables that have >> >> > lots of DELETE activity, such as keystone "token" and ceilometer >> >> > "sample". >> >> > >> >> > Questions for the group: >> >> > >> >> > 1. is OPTIMIZE table worthwhile to be run for tables where the >> >> > datafile has grown much larger than the number of rows we have in the >> >> > table? >> >> >> >> Possibly, though it's questionable to use MySQL/InnoDB for storing transient >> >> data that is deleted often like ceilometer samples and keystone tokens. A >> >> much better solution is to use RDBMS partitioning so you can simply ALTER >> >> TABLE .. DROP PARTITION those partitions that are no longer relevant (and >> >> don't even bother DELETEing individual rows) or, in the case of Ceilometer >> >> samples, don't use a traditional RDBMS for timeseries data at all... >> >> >> >> But since that is unfortunately already the case, yes it is probably a good >> >> idea to OPTIMIZE TABLE on those tables. >> >> >> >> > 2. from people's production experience how safe is it to run OPTIMIZE, >> >> > e.g. how long is it locking tables, etc. >> >> >> >> Is it safe? Yes. >> >> >> >> Does it lock the entire table for the duration of the operation? No. It uses >> >> online DDL operations: >> >> >> >> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html >> >> >> >> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB >> >> tables. >> >> >> >> > 3. 
is there a heuristic we can use to measure when we might run this >> >> > -.e.g my plan is we measure the size in bytes of each row in a table >> >> > and then compare that in some ratio to the size of the corresponding >> >> > .ibd file, if the .ibd file is N times larger than the logical data >> >> > size we run OPTIMIZE ? >> >> >> >> I don't believe so, no. Most things I see recommended is to simply run >> >> OPTIMIZE TABLE in a cron job on each table periodically. >> >> >> >> > 4. I'd like to propose this job of scanning table datafile sizes in >> >> > ratio to logical data sizes, then running OPTIMIZE, be a utility >> >> > script that is delivered via oslo.db, and would run for all innodb >> >> > tables within a target MySQL/ MariaDB server generically. That is, I >> >> > really *dont* want this to be a script that Keystone, Nova, Ceilometer >> >> > etc. are all maintaining delivering themselves. this should be done >> >> > as a generic pass on a whole database (noting, again, we are only >> >> > running it for very specific InnoDB tables that we observe have a poor >> >> > logical/physical size ratio). >> >> >> >> I don't believe this should be in oslo.db. This is strictly the purview of >> >> deployment tools and should stay there, IMHO. >> >> >> > >> > Hi, >> > >> > As far as I know most projects do "soft deletes" where we just flag the >> > rows as deleted and don't remove them from the DB, so it's only when we >> > use a management tool and run the "purge" command that we actually >> > remove these rows. >> > >> > Since running the optimize without purging would be meaningless, I'm >> > wondering if we should trigger the OPTIMIZE also within the purging >> > code. This way we could avoid innefective runs of the optimize command >> > when no purge has happened and even when we do the optimization we could >> > skip the ratio calculation altogether for tables where no rows have been >> > deleted (the ratio hasn't changed). >> > >> >> the issue is that this OPTIMIZE will block on Galera unless it is run >> on a per-individual node basis along with the changing of the >> wsrep_OSU_method parameter, this is way out of scope both to be >> redundantly hardcoded in multiple openstack projects, as well as >> there's no portable way for Keystone and others to get at the >> individual Galera node addresses. Putting it in oslo.db would at >> least be a place that most of this logic can live but even then it >> needs to run for multiple Galera nodes and needs to have >> deployment-specific configuration. *unless* we say, the OPTIMIZE >> here will short for a purged table, let's just let it block. >> > > I see... What about a hybrid solution? Use the alter table as mentioned > in the comment [1] to not block the table for systems that support it, > and going with the RSU mode when it's not supported? > sure, it just depends on if we have Galera running or not, so I intend to detect if the current MySQL database is a Galera cluster or not by looking for wsrep_* variables and status. Tripleo will know to deploy the script directly to each MySQL database, galera or not, on the local host that MySQL is running and the script will just do the right thing without any of the downstream apps having to know about it. > > [1] https://mariadb.com/kb/en/library/optimize-table/#comment_3191 > > >> >> > Ideally the ratio calculation and optimization code would be provided by >> > oslo.db to reduce code duplication between projects. 
>> >> I was hoping to have this be part of oslo.db but there's disagreement on that :) >> >> If this can't be in oslo.db then the biggest issue facing me on this >> is building out a new application and getting it packaged since this >> feature has no home, unless I can ship it as some kind of script >> packaged in tripleo. >> >> > > I think the oslo.db home you proposed has the great benefit of making it > available in all deployments regardless of the installer, if that's not > possible I would go with the TripleO script before creating yet another > project that needs to be packaged and maintained. > > Cheers, > Gorka. > > >> > >> > >> >> > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE >> >> > on individual nodes directly. The script at [1] illustrates how to >> >> > run this on individual nodes one at a time. >> >> > >> >> > More succinctly, the Q is: >> >> > >> >> > a. OPTIMIZE, yes or no? >> >> >> >> Yes. >> >> >> >> > b. oslo.db script to run generically, yes or no? >> >> >> >> No. Just have Triple-O install galera_innoptimizer and run it in a cron job. >> >> >> >> Best, >> >> -jay >> >> >> >> > thanks for your thoughts! >> >> > >> >> > >> >> > >> >> > [1] https://github.com/deimosfr/galera_innoptimizer >> >> > >> >> > __________________________________________________________________________ >> >> > OpenStack Development Mailing List (not for usage questions) >> >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > >> > __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From lucasagomes at gmail.com Mon Apr 9 14:56:38 2018 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 9 Apr 2018 15:56:38 +0100 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> Message-ID: Hi, > Another idea is to modify test that it will: > 1. Check how many ports are in tenant, > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it is now, > 3. Try to add 2 ports - exactly as it is now, > > I think that this should be still backend agnostic and should fix this problem. > Great idea! 
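Roughly, I read the suggestion as something like this (a sketch only --
the helper names are the ones I remember from the existing
neutron-tempest-plugin quota tests, with lib_exc being
tempest.lib.exceptions, and may not be exact):

    def test_create_port_when_quotas_is_full(self):
        # Count whatever ports already exist in the project (e.g. the
        # OVN metadata port created per subnet) instead of assuming
        # the project starts out empty.
        ports = self.client.list_ports(
            tenant_id=self.client.tenant_id)['ports']
        # Allow exactly one more port than currently exists.
        self.admin_client.update_quotas(
            self.client.tenant_id, port=len(ports) + 1)
        # The first port still fits into the quota...
        port = self.create_port(self.network)
        self.addCleanup(self.client.delete_port, port['id'])
        # ...and the second one must be rejected, whatever the backend.
        self.assertRaises(lib_exc.Conflict,
                          self.create_port, self.network)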
I've given it a go and proposed it at https://review.openstack.org/559758 Cheers, Lucas From mriedemos at gmail.com Mon Apr 9 15:22:49 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Apr 2018 10:22:49 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <20180409085123.ydm5n3i3lngqsgjc@localhost> References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> <20180409085123.ydm5n3i3lngqsgjc@localhost> Message-ID: On 4/9/2018 3:51 AM, Gorka Eguileor wrote: > As I see it, the process would look something like this: > > - Nova detaches volume using OS-Brick > - Nova calls Cinder re-image passing the node's info (like we do when > attaching a new volume) > - Cinder would: > - Ensure only that node is connected to the volume > - Terminate connection to the original volume > - If we can do optimized volume creation: > - If encrypted volume we create a copy of the encryption key on > Barbican or copy the ID field from the DB and ensure we don't > delete the Barbican key on the delete. > - Create new volume from image > - Swap DB fields to preserve the UUID > - Delete original volume > - If it cannot do optimized volume creation: > - Initialize+Attach volume to Cinder node > - DD the new image into the volume > - Detach+Terminate volume > - Initialize connection for the new volume to the Nova node > - Return connection information to the volume > - Nova attaches volume with OS-Brick using returned connection > information. > > So I agree, it's not a blockRebase operation, just a change in the > volume that is used. Yeah we're on the same page with respect to the high level changes on the nova side. -- Thanks, Matt From cdent+os at anticdent.org Mon Apr 9 15:35:06 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 9 Apr 2018 16:35:06 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-14 In-Reply-To: References: Message-ID: On Fri, 6 Apr 2018, Chris Dent wrote: > * Eric and I discussed earlier in the week that it might be a good > time to start an #openstack-placement IRC channel, for two main > reasons: break things up so as to limit the crosstalk in the often > very busy #openstack-nova channel and to lend a bit of momentum > for going in that direction. Is this okay with everyone? If not, > please say so, otherwise I'll make it happen soon. After confirmation in today's scheduler meeting this has been done. #openstack-placement now exists, is registered, and various *bot additions are in progress: https://review.openstack.org/559768 https://review.openstack.org/559769 http://p.anticdent.org/logs/openstack-placement -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From jim at jimrollenhagen.com Mon Apr 9 15:49:20 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 9 Apr 2018 11:49:20 -0400 Subject: [openstack-dev] [barbican][nova-powervm][pyghmi][solum][trove] Switching to cryptography from pycrypto In-Reply-To: References: <20180331232401.hp5j4iommgw7tj3j@gentoo.org> Message-ID: On Mon, Apr 2, 2018 at 8:26 AM, Jim Rollenhagen wrote: > On Sat, Mar 31, 2018 at 7:24 PM, Matthew Thode > wrote: >> Here's the current status. I'd like to ask the projects what's keeping >> them from removing pycrypto in favor of a maintained library.
>> >> pyghmi: >> - (merge conflict) https://review.openstack.org/#/c/331828 >> - (merge conflict) https://review.openstack.org/#/c/545465 >> - (doesn't change the import) https://review.openstack.org/#/c/545182 > > > Looks like py26 support might be a blocker here. While we've brought > pyghmi into the ironic project, it's still a project mostly built and > maintained > by Jarrod, and he has customers outside of OpenStack that depend on it. > The ironic team will have to discuss this with Jarrod and find a good path > forward. > > My initial thought is that we need to move forward on this, so > perhaps we can release this change as a major version, and keep a py26 > branch that can be released on the previous minor version for the people > that need this on 2.6. Thoughts? > I reached out to Jarrod off-list and sounds like this is roughly the plan: > FWIW, I did at least merge a change to work with cryptodomex and moved pyghmi to that when available (I could not discern a way to have requirements allow one of multiple choices). > > I thought about cryptodome, but that breaks paramiko in that environment. > > I’ll probably do a 1.1.0 that uses cryptography, and continue 1.0 with pycrypto/pycryptodomex. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Apr 9 16:55:03 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 09 Apr 2018 09:55:03 -0700 Subject: [openstack-dev] [all] Changes to Zuul role checkouts Message-ID: <87r2nonwuw.fsf@meyer.lemoncheese.net> Hi, We recently fixed a subtle but important bug related to how Zuul checks out repositories it uses to find Ansible roles for jobs. This may result in a behavior change, or even an error, for jobs which use roles defined in projects with multiple branches. Previously, Zuul would (with some exceptions) generally check out the 'master' branch of any repository which appeared in the 'roles:' stanza in the job definition. Now Zuul will follow its usual procedure of trying to find the most appropriate branch to check out. That means it tries the project override-checkout branch first, then the job override-checkout branch, then the branch of the change, and finally the default branch of the project. This should produce more predictable behavior which matches the checkouts of all other projects involved in a job. If you find that the wrong branch of a role is being checked out, depending on circumstances, you may need to set a job or project override-checkout value to force the correct one, or you may need to backport a role to an older branch. If you encounter any problems related to this, please chat with us in #openstack-infra. Thanks, Jim From dmsimard at redhat.com Mon Apr 9 16:59:14 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 9 Apr 2018 12:59:14 -0400 Subject: [openstack-dev] [all] Changes to Zuul role checkouts In-Reply-To: <87r2nonwuw.fsf@meyer.lemoncheese.net> References: <87r2nonwuw.fsf@meyer.lemoncheese.net> Message-ID: If you're not familiar with the "override-checkout" configuration, you can find the documentation about it here [1] and some example usage here [2]. [1]: https://zuul-ci.org/docs/zuul/user/config.html#attr-job.override-checkout [2]: http://codesearch.openstack.org/?q=override-checkout David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Apr 9, 2018 at 12:55 PM, James E. 
Blair wrote: > Hi, > > We recently fixed a subtle but important bug related to how Zuul checks > out repositories it uses to find Ansible roles for jobs. > > This may result in a behavior change, or even an error, for jobs which > use roles defined in projects with multiple branches. > > Previously, Zuul would (with some exceptions) generally check out the > 'master' branch of any repository which appeared in the 'roles:' stanza > in the job definition. Now Zuul will follow its usual procedure of > trying to find the most appropriate branch to check out. That means it > tries the project override-checkout branch first, then the job > override-checkout branch, then the branch of the change, and finally the > default branch of the project. > > This should produce more predictable behavior which matches the > checkouts of all other projects involved in a job. > > If you find that the wrong branch of a role is being checked out, > depending on circumstances, you may need to set a job or project > override-checkout value to force the correct one, or you may need to > backport a role to an older branch. > > If you encounter any problems related to this, please chat with us in > #openstack-infra. > > Thanks, > > Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From melwittt at gmail.com Mon Apr 9 17:09:12 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 9 Apr 2018 10:09:12 -0700 Subject: [openstack-dev] [nova] Rocky forum topics brainstorming Message-ID: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com> Hey everyone, Let's collect forum topic brainstorming ideas for the Forum sessions in Vancouver in this etherpad [0]. Once we've brainstormed, we'll select and submit our topic proposals for consideration at the end of this week. The deadline for submissions is Sunday April 15. Thanks, -melanie [0] https://etherpad.openstack.org/p/YVR-nova-brainstorming From openstack at nemebean.com Mon Apr 9 17:16:47 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Apr 2018 12:16:47 -0500 Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: References: Message-ID: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> On 04/09/2018 07:22 AM, Chris Dent wrote: > > A little over two years ago I sent a reminder that WSME is not being > actively maintained: > > > http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html > > Today I was reminded of this becasue a random (typo-related) > patchset demonstrated that the tests were no longer passing and > fixing them is enough of a chore that I (at least temporarily) > marked one test as an expected failure.o > >     https://review.openstack.org/#/c/559717/ > > The following projects appear to still use WSME: > >     aodh >     blazar >     cloudkitty >     cloudpulse >     cyborg >     glance >     gluon >     iotronic >     ironic >     magnum >     mistral >     mogan >     octavia >     panko >     qinling >     radar >     ranger >     searchlight >     solum >     storyboard >     surveil >     terracotta >     watcher > > Most of these are using the 'types' handling in WSME and sometimes > the pecan extension, and not the (potentially broken) Flask > extension, so things should be stable. > > However: nobody is working on keeping WSME up to date. 
It is not a > good long term investment. What would be the recommended alternative, either for new work or as a migration path for existing projects? From thierry at openstack.org Mon Apr 9 17:18:09 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Apr 2018 19:18:09 +0200 Subject: [openstack-dev] Vancouver Forum - Post your selected topics now In-Reply-To: References: <7af5f78e-2a3f-dacb-77ef-ebe171d74361@openstack.org> Message-ID: <7360ce59-4c46-ab35-8027-e3c26d1ad5fe@openstack.org> Csatari, Gergely (Nokia - HU/Budapest) wrote: > There are two lists of etherpads for forum brainstorming in https://wiki.openstack.org/wiki/Forum/Vancouver2018 and there is http://forumtopics.openstack.org/ . > > Is my understanding correct, that ultimately all ideas should go to http://forumtopics.openstack.org/ ? Yes! The process recommends that each workgroup uses etherpads to brainstorm ideas and converge on a set of sessions they want to propose, and then someone in that group can file the proposed set. (The idea being to foster a discussion early and reduce duplicate / overlapping proposals) -- Thierry From mriedemos at gmail.com Mon Apr 9 17:55:16 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Apr 2018 12:55:16 -0500 Subject: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event Message-ID: As part of a bug fix [1], the internal ComputeVirtAPI.wait_for_instance_event interface is changing to no longer accept event names that are strings, and will now require the (name, tag) tuple form which all of the in-tree virt drivers are already using. If you have an out of tree driver that uses this interface, heads up that you'll need to be using the tuple form if you are not already doing so. [1] https://review.openstack.org/#/c/558059/ -- Thanks, Matt From duncan.thomas at gmail.com Mon Apr 9 18:00:56 2018 From: duncan.thomas at gmail.com (Duncan Thomas) Date: Mon, 9 Apr 2018 19:00:56 +0100 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: <20180409085123.ydm5n3i3lngqsgjc@localhost> References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> <20180409085123.ydm5n3i3lngqsgjc@localhost> Message-ID: Hopefully this flow means we can do rebuild of the root filesystem from snapshot/backup too? It seems rather artificially limiting to only do restore-from-image. I'd expect restore-from-snap to be a more common use case, personally. On 9 April 2018 at 09:51, Gorka Eguileor wrote: > On 06/04, Matt Riedemann wrote: >> On 4/6/2018 5:09 AM, Matthew Booth wrote: >> > I think you're talking at cross purposes here: this won't require a >> > swap volume. Apart from anything else, swap volume only works on an >> > attached volume, and as previously discussed Nova will detach and >> > re-attach. >> > >> > Gorka, the Nova api Matt is referring to is called volume update >> > externally. It's the operation required for live migrating an attached >> > volume between backends. It's called swap volume internally in Nova. >> >> Yeah I was hoping we were just having a misunderstanding of what 'swap >> volume' in nova is, which is the blockRebase for an already attached volume >> to the guest, called from cinder during a volume retype or migration.
>> >> As for the re-image thing, nova would be detaching the volume from the guest >> prior to calling the new cinder re-image API, and then re-attach to the >> guest afterward - similar to how shelve and unshelve work, and for that >> matter how rebuild works today with non-root volumes. >> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > Hi, > > Thanks for the clarification. When I was talking about "swapping" I was > referring to the fact that Nova will have to not only detach the volume > locally using OS-Brick, but it will also need to use new connection > information to do the attach after the volume has been re-imaged. > > As I see it, the process would look something like this: > > - Nova detaches volume using OS-Brick > - Nova calls Cinder re-image passing the node's info (like we do when > attaching a new volume) > - Cinder would: > - Ensure only that node is connected to the volume > - Terminate connection to the original volume > - If we can do optimized volume creation: > - If encrypted volume we create a copy of the encryption key on > Barbican or copy the ID field from the DB and ensure we don't > delete the Barbican key on the delete. > - Create new volume from image > - Swap DB fields to preserve the UUID > - Delete original volume > - If it cannot do optimized volume creation: > - Initialize+Attach volume to Cinder node > - DD the new image into the volume > - Detach+Terminate volume > - Initialize connection for the new volume to the Nova node > - Return connection information to the volume > - Nova attaches volume with OS-Brick using returned connection > information. > > So I agree, it's not a blockRebase operation, just a change in the > volume that is used. > > Regards, > Gorka. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Duncan Thomas From openstack at nemebean.com Mon Apr 9 18:12:30 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Apr 2018 13:12:30 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> Message-ID: On 04/06/2018 04:02 AM, Jens Harbott wrote: > 2018-04-05 19:26 GMT+00:00 Matthew Thode : >> On 18-04-05 20:11:04, Graham Hayes wrote: >>> On 05/04/18 16:47, Matthew Thode wrote: >>>> eventlet-0.22.1 has been out for a while now, we should try and use it. >>>> Going to be fun times. >>>> >>>> I have a review projects can depend upon if they wish to test. >>>> https://review.openstack.org/533021 >>> >>> It looks like we may have an issue with oslo.service - >>> https://review.openstack.org/#/c/559144/ is failing gates. >>> >>> Also - what is the dance for this to get merged? It doesn't look like we >>> can merge this while oslo.service has the old requirement restrictions. >>> >> >> The dance is as follows. >> >> 0. provide review for projects to test new eventlet version >> projects using eventlet should make backwards compat code changes at >> this time. > > But this step is currently failing. 
Keystone doesn't even start when > eventlet-0.22.1 is installed, because loading oslo.service fails with > its pkg definition still requiring the capped eventlet: > > http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 > > So it looks like we need to have an uncapped release of oslo.service > before we can proceed here. I've proposed a patch[1] to uncap eventlet in oslo.service, but it's failing the unit tests[2]. I'll look into it, but I thought I'd provide an update in the meantime. 1: https://review.openstack.org/559800 2: http://logs.openstack.org/00/559800/1/check/openstack-tox-py27/cef8fcb/job-output.txt.gz From sundar.nadathur at intel.com Mon Apr 9 19:13:49 2018 From: sundar.nadathur at intel.com (Nadathur, Sundar) Date: Mon, 9 Apr 2018 12:13:49 -0700 Subject: [openstack-dev] [cyborg] Promote Li Liu as new core reviewer In-Reply-To: References: Message-ID: <55e1f32d-eb8b-10f6-e982-280604ff2d8b@intel.com> Agreed! +1 Regards, Sundar > Hi Team, > > This is an email for my nomination of adding Li Liu to the core > reviewer team. Li Liu has been instrumental in the resource provider > data model implementation for Cyborg during Queens release, as well as > metadata standardization and programming design for Rocky. > > His overall stats [0] and current stats [1] for Rocky speaks for > itself. His patches could be found here [2]. > > Given the amount of work undergoing for Rocky, it would be great to > add such an amazing force :) > > [0] > http://stackalytics.com/?module=cyborg-group&metric=person-day&release=all > [1] > http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky > [2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Mon Apr 9 19:15:52 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 9 Apr 2018 14:15:52 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> <20180409085123.ydm5n3i3lngqsgjc@localhost> Message-ID: <20180409191551.GA13852@sm-xps> On Mon, Apr 09, 2018 at 07:00:56PM +0100, Duncan Thomas wrote: > Hopefully this flow means we can do rebuild root filesystem from > snapshot/backup too? It seems rather artificially limiting to only do > restore-from-image. I'd expect restore-from-snap to be a more common > use case, personally. > That could get tricky. We only support reverting to the last snapshot if we reuse the same volume. 
Otherwise, we can create volume from snapshot, but I don't think it's often that the first thing a user does is create a snapshot on initial creation of a boot image. If it was created from the image cache, and the backend creates those cached volumes by using a snapshot, then that might be an option. But these are a lot of ifs, so that seems like it would make the logic for this much more complicated. Maybe a phase II optimization we can look into? From e0ne at e0ne.info Mon Apr 9 19:58:43 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 9 Apr 2018 22:58:43 +0300 Subject: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified In-Reply-To: References: Message-ID: Hi, Xinni, I absolutely agree with Radomir. We should keep xstatic files without modifications. We don't know if they are used outside of OpenStack or not, so they should be treated the same as NPM packages. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Mon, Apr 9, 2018 at 12:32 PM, Radomir Dopieralski wrote: > The whole idea about xstatic files is that they are generic, not specific > to Horizon or OpenStack, usable by other projects that need those static > files. In fact, at the time we started using xstatic, it was being used by > the MoinMoin wiki project (which is now dead, sadly). The modifications you > made are very specific to your use case and would make it impossible to > reuse the packages by other applications (or even by other Horizon > plugins). The whole idea of a library is that you are using it as it is > provided, and not modifying it. > > We generally try to use all the libraries as they are, and if there are > any modifications necessary, we push them upstream, to the original > library. Otherwise there would be quite a bit of maintenance overhead > necessary to keep all our downstream patches. When considerable > modification is necessary that can't be pushed upstream, we fork the > library either into its own repository, or include it in the repository of > the application that is using it. > > On Mon, Apr 9, 2018 at 2:54 AM, Xinni Ge wrote: > >> Hello, team. >> >> Sorry for bringing up the xstatic repos so many times. >> >> I didn't realize xstatic repositories should be provided with exactly the >> same files as upstream, and I should have talked about it at the very start. >> >> I modified several upstream files because some of them couldn't be >> used directly as I expected. >> >> For example, {{ }} are used in some original files as template tags, but >> Horizon adopts {$ $} in its Angular modules, so I modified them to be recognized >> properly. >> >> Another major modification is that css files were converted into scss >> files to solve an earlier css import issue. >> Besides, after collecting statics, some png file paths in the css could not be >> referenced properly and showed up as 404 errors, so I also modified the css itself to >> handle these issues. >> >> I will recheck all the un-matched xstatic repositories and try to replace >> them with upstream files as much as I can. >> But if I really have to modify some original files, is it acceptable to >> still use them as embedded files with the license info kept at the top?
>> >> >> Best Regards, >> Xinni Ge >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Mon Apr 9 20:05:53 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Apr 2018 15:05:53 -0500 Subject: [openstack-dev] [cinder][nova] about re-image the volume In-Reply-To: References: <20180329142813.GA25762@sm-xps> <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> <20180409085123.ydm5n3i3lngqsgjc@localhost> Message-ID: On 4/9/2018 1:00 PM, Duncan Thomas wrote: > Hopefully this flow means we can do rebuild root filesystem from > snapshot/backup too? It seems rather artificially limiting to only do > restore-from-image. I'd expect restore-from-snap to be a more common > use case, personally. Hmm, now you've got me thinking about image-defined block device mappings, which is something you'd have if you snapshot a volume-backed instance and then later use that image snapshot, which has metadata about the volume snapshot in it, to later create (or rebuild?) a server. Tempest has a scenario test for the boot from volume case here: https://review.openstack.org/#/c/555495/ I should note that even if you did snapshot a volume-backed server and then used that image to rebuild another non-volume-backed server, nova won't even look at the block_device_mapping_v2 metadata in the snapshot image during rebuild, it doesn't treat it like boot from volume does where nova uses the image-defined BDM to create a new volume-backed instance. And now that I've said that, I wonder if people would expect the same semantics for rebuild as boot from volume with those types of images...it makes my head hurt. Maybe mdbooth would like to weigh in on this given he's present in this thread. -- Thanks, Matt From mordred at inaugust.com Mon Apr 9 21:05:41 2018 From: mordred at inaugust.com (Monty Taylor) Date: Mon, 9 Apr 2018 16:05:41 -0500 Subject: [openstack-dev] PBR and Pipfile In-Reply-To: References: Message-ID: On 04/08/2018 04:10 AM, Gaetan wrote: > Hello OpenStack dev community, > > I am currently working on the support of Pipfile for PBR ([1]), and I > also follow actively the work on pipenv, which is now in officially > supported by PyPA. Awesome - welcome! This is a fun topic ... > There have been recently an intense discussion on the difficulties about > Python libraries development, and how to spread good practices [2] on > the pipenv community and enhance its documentation. > > As a user of PBR, and big fan of it, I try to bridge the link between > pbr and pipenv (with [1]) but I am interested in getting the feedback of > Python developers of OpenStack that may have much more experience using > PBR and more generally packaging python libraries than me. Great - I'll comment more on this a little later. 
> The main point is that packaging an application is quite easy, or at > least understandable by newcomers, using `requirements.txt` or > `Pipfile` + `Pipfile.lock` with pipenv. At least it is easily "teachable". > Packaging a library is harder, and requires explaining why by default > `requirements.txt` (or `Pipfile`) does not work. Some "advanced" > documentation exists but it is still hard to understand why Python ended up > with something complex for libraries ([3]). > One needs to ensure `install_requires` declares the dependencies so that > pip can find them during transitive dependency installation (that is, > installing the dependencies of a given dependency). PBR helps on this > point, but some people do not want its other features. In general, as you might imagine, pbr has a difference of opinion with the pypa community about requirements.txt and install_requires. I'm going to respond from my POV about how things should work - and how I believe they MUST work for a project such as OpenStack to be able to operate. There are actually three different relevant use cases here, with some patterns available to draw from. I'm going to spell them out just to make sure we're on the same page. * Library * Application * Suite of Coordinated Applications A Library needs to declare the requirements it has along with any relevant ranges. Such as "this library requires 'foo' at least version 2 but less than version 4". Since it's a library it needs to be able to handle being included in more than one application that may have different sets of requirements, so as a library it should attempt to have as wide a set of acceptable requirements as possible - but it should declare if there are versions of requirements it does not work with. In Pipfile world, this means "commit Pipfile but not Pipfile.lock". In pbr+requirements.txt it means "commit the requirements.txt with ranges and not == declared." An Application isn't included in other things, it's the end point. So declaring a specific set of versions of things that the application is known to work with, in addition to the logical requirement range, is considered a best practice. In Pipfile world, this is "commit both Pipfile and Pipfile.lock". There isn't a direct analog for pbr+requirements.txt, although you could simulate this by executing pip with a -c constraints.txt file. A Suite of Coordinated Applications (like OpenStack) needs to communicate the specific versions the applications have been tested to work with, but they need to be the same so that all of the applications can be deployed side-by-side on the same machine without conflict. In OpenStack we do this by keeping a centrally managed constraints file [1] that our CI system adds to the pip install line when installing any of the OpenStack projects. A person who wants to install OpenStack from pip can also choose to do so using the upper-constraints.txt file and they can know they'll be getting the versions of dependencies we tested with. There is also no direct support for making this easier in pbr. For Pipfile, I believe what we'd want to see is adding support for --constraints to pipenv install - so that we can update our Pipfile.lock file for each application in the context of the global constraints file.
This can be simulated today without any support from pipenv directly like this:

pipenv install
$(pipenv --venv)/bin/pip install -U -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r requirements.txt
pipenv lock

> There is also work on a PEP around pyproject.toml ([4]), which looks > quite similar to PBR's setup.cfg. What do you think about it? It's a bit different. There is also a philosophical disagreement about the use of TOML that's not worth going in to here - but from a pbr perspective I'd like to minimize use of pyproject.toml to the bare minimum needed to bootstrap things into pbr's control. In the first phase I expect to replace our current setup.py boilerplate:

setuptools.setup(
    setup_requires=['pbr'],
    pbr=True)

with:

setuptools.setup(pbr=True)

and add pyproject.toml files with:

[build-system]
requires = ["setuptools", "wheel", "pbr"]

This will allow us to reasonably have projects declare minimum ranges on the pbr depend - and can allow us to give pbr dependencies (which is impossible today due to how setup_requires works). If we made setuptools and wheel depends of pbr, we could reduce the pyproject.toml file to:

[build-system]
requires = ["pbr"]

but we need to test to make sure that works first. Once pep517 is implemented, we can implement a hook in pbr and add a line to pyproject.toml in all of the projects, something like:

build-backend = "pbr.core:build"

Come to think of it, we could go ahead and implement pep517 support in pbr today and go ahead and start having the pbr pyproject.toml file be:

[build-system]
requires = ["pbr"]
build-backend = "pbr.core:build"

We'll have to keep the setup.py files until such a time as pip has full pep517 support added. > My opinion is this difference in behaviour between lib and app has > technical reasons, but as a community we would gain a lot by unifying > both workflows. I am using PBR + a few hacks [5], and I am pretty > satisfied with the overall result. There are two topics your pbr patch opens up that need to be covered: * pbr behavior * dependencies ** pbr behavior ** I appreciate what you're saying about unifying the lib and app workflow, but I think the general pattern across the language communities (JavaScript and Rust both have similar patterns to Pipfile) is that the two different options are important. We may just need to have a better document - Rust has an excellent description: https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html In any case, I think what pbr should do with pipfiles is: * If pbr discovers a Pipfile and no Pipfile.lock, it should treat the content in the packages section of Pipfile as it currently does with requirements.txt (and how you have done in the current patch) * If pbr discovers a Pipfile.lock, it should treat the content in Pipfile.lock as it currently does with requirements.txt. This way if someone commits a Pipfile.lock because they have chosen the application workflow, pbr will behave as they would expect. Then, we either need to: * Add support to pipenv install for specifying a pip-style constraints file * Add support to pipenv install for specifying a constraints file that is in the format of a Pipfile.lock - but which does the same thing. * Write a pbr utility subcommand for generating a Pipfile.lock from a Pipfile, taking a provided constraints file into account. We may also want to write a utility for creating a Pipfile and/or lock from a pbr-oriented requirements.txt/test-requirements.txt that can do the appropriate initial dance (it should use pipfile on the backend, of course); a rough sketch of the lookup order is below.
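A rough illustration of that Pipfile/Pipfile.lock lookup order, with the requirements.txt fallback discussed in the dependencies section that follows. Nothing here is real pbr code - the function name, the lock-file parsing and the degradation behaviour are all assumptions:

    # Hedged sketch only: Pipfile.lock wins when committed (application
    # workflow), else Pipfile's [packages] section (library workflow),
    # else fall back to requirements.txt for environments on old pip.
    import json
    import os

    try:
        import toml
    except ImportError:
        toml = None  # pip < 10 never installed pbr's own depends

    def _read_requirements(root='.'):
        lock = os.path.join(root, 'Pipfile.lock')
        if os.path.exists(lock):
            with open(lock) as f:
                default = json.load(f).get('default', {})
            # pinned entries look like {"six": {"version": "==1.11.0"}}
            return ['%s%s' % (name, spec.get('version', ''))
                    for name, spec in default.items()]
        pipfile = os.path.join(root, 'Pipfile')
        if toml is not None and os.path.exists(pipfile):
            packages = toml.load(pipfile).get('packages', {})
            # range entries look like six = ">=1.10.0" ("*" means any)
            return ['%s%s' % (name, '' if spec == '*' else spec)
                    for name, spec in packages.items()
                    if isinstance(spec, str)]
        with open(os.path.join(root, 'requirements.txt')) as f:
            return [line.strip() for line in f
                    if line.strip() and not line.startswith('#')]

A real implementation would also have to cover [dev-packages] vs test-requirements.txt, environment markers and VCS references, which is where most of the fiddly cases live.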
** dependencies ** The pep518 support in pip10 is really important here. Because ... We should not vendor code into pbr. While vendoring code has been found to be acceptable by other portions of the Python community, it is not acceptable here. Once pip10 is released next week with pyproject.toml support, as mentioned earlier, we'll be able to start using (a reasonable set of) dependencies as is appropriate. In order to ensure backwards compat, I would recommend we do the following: * Add toml and pipfile as depends to pbr * Protect their imports with a try/except (for people using old pip which won't install any depends pbr has) * Declare that pbr support for Pipfile will only work for people using pip>=10 and for projects that add a pyproject.toml to their project containing [build-system] requires = ["pbr"] * If pbr tries to import toml/pipfile and fails, it should fall back to reading requirements.txt (this allows us to keep backwards compat until it's reasonable to expect everyone to be on pip10) To support that last point, we should write a utility function, let's call it 'pbr lock', with the following behavior: * If a Pipfile and a Pipfile.lock are present, it runs: pipenv lock -r * If there is no Pipfile.lock, simply read the Pipfile and write the specifiers into requirements.txt in non-pinned format. This will allow pbr users to maintain their projects in such a way as to be backwards compatible while they start to use Pipfile/Pipfile.lock. We MAY want to consider adding an option flag to setup.cfg, like: [pbr] type = application or [pbr] type = library for declaring which of Pipfile / Pipfile.lock pbr should pay attention to, regardless of which files might be present. I'm not sure whether that would be better or worse than inferring behavior from the presence of files. Of course, the default behavior when the config setting isn't there could be to infer behavior from the presence of files, while having the config setting for people who want to be explicit - and in the docs we'd just not mention omitting the setting, telling people to choose one or the other. What do you think? > So, in short, I am simply starting a general thread here to gather your > general feedback on these points. > > Thanks for your feedback [1] http://git.openstack.org/cgit/openstack/requirements/tree/upper-constraints.txt > Gaetan > > [1]: https://review.openstack.org/#/c/524436/ > [2]: https://github.com/pypa/pipenv/issues/1911 > [3]: https://docs.pipenv.org/advanced/#pipfile-vs-setup-py > [4]: https://www.python.org/dev/peps/pep-0518/ > [5]: library: >   - pipenv to maintain Pipfile and Pipfile.lock >   - Pipfile.lock not tracked (local reproducibility), >   - pipenv-to-requirements [6] to generate a `requirements.txt` without > version freeze, also tracked > applications: >   - pipenv to maintain Pipfile and Pipfile.lock >   - Pipfile.lock not tracked (global reproducibility), >   - pipenv-to-requirements [6] to generate a `requirements.txt` and > `requirements-dev.txt` with version freeze, both tracked > The development done with [1] should allow to get rid of [6].
> > [6] https://github.com/gsemet/pipenv-to-requirements > ----- > Gaetan > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mriedemos at gmail.com Mon Apr 9 21:24:06 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 9 Apr 2018 16:24:06 -0500 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <20180409095858.GE18076@paraplu> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu> <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> <20180406170703.GD18076@paraplu> <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com> <20180409095858.GE18076@paraplu> Message-ID: <4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com> On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote: > Keep in mind that Matt has a tendency to sometimes unfairly > over-simplify others views;-). More seriously, c'mon Matt; I went out > of my way to spend time learning about Debian's packaging structure and > trying to get the details right by talking to folks on > #debian-backports. And as you may have seen, I marked the patch[*] as > "RFC", and repeatedly said that I'm working on an agreeable lowest > common denominator. Sorry Kashyap, I didn't mean to offend. I was hoping "delicious bugs" would have made that obvious but I can see how it's not. You've done a great, thorough job on sorting this all out. Since I didn't know what "RFC" meant until googling it today, how about dropping that from the patch so I can +2 it? -- Thanks, Matt From openstack at nemebean.com Mon Apr 9 21:32:28 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 9 Apr 2018 16:32:28 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> Message-ID: <14e34b97-32c2-b4a2-9d82-8a19f37737d9@nemebean.com> On 04/09/2018 01:12 PM, Ben Nemec wrote: > > > On 04/06/2018 04:02 AM, Jens Harbott wrote: >> 2018-04-05 19:26 GMT+00:00 Matthew Thode : >>> On 18-04-05 20:11:04, Graham Hayes wrote: >>>> On 05/04/18 16:47, Matthew Thode wrote: >>>>> eventlet-0.22.1 has been out for a while now, we should try and use >>>>> it. >>>>> Going to be fun times. >>>>> >>>>> I have a review projects can depend upon if they wish to test. >>>>> https://review.openstack.org/533021 >>>> >>>> It looks like we may have an issue with oslo.service - >>>> https://review.openstack.org/#/c/559144/ is failing gates. >>>> >>>> Also - what is the dance for this to get merged? It doesn't look >>>> like we >>>> can merge this while oslo.service has the old requirement restrictions. >>>> >>> >>> The dance is as follows. >>> >>> 0. provide review for projects to test new eventlet version >>>     projects using eventlet should make backwards compat code changes at >>>     this time. >> >> But this step is currently failing. 
Keystone doesn't even start when >> eventlet-0.22.1 is installed, because loading oslo.service fails with >> its pkg definition still requiring the capped eventlet: >> >> http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 >> >> >> So it looks like we need to have an uncapped release of oslo.service >> before we can proceed here. > > I've proposed a patch[1] to uncap eventlet in oslo.service, but it's > failing the unit tests[2].  I'll look into it, but I thought I'd provide > an update in the meantime. Oh, the unit test failures are unrelated. Apparently the unit tests have been failing in oslo.service for a while. dims has a patch up at https://review.openstack.org/#/c/559831/ that looks to be addressing the problem, although it's also failing the unit tests. :-/ > > 1: https://review.openstack.org/559800 > 2: > http://logs.openstack.org/00/559800/1/check/openstack-tox-py27/cef8fcb/job-output.txt.gz > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ramamani.yeleswarapu at intel.com Mon Apr 9 22:44:22 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 9 Apr 2018 22:44:22 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Remaining Rescue patches - https://review.openstack.org/#/c/499050/ - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` - https://review.openstack.org/#/c/546919/ - Prior fix for unrescuiing with whole disk image - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) Needs Rebase. - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ - Bios interface support - https://review.openstack.org/#/c/511162/ - https://review.openstack.org/#/c/528609/ - db api - https://review.openstack.org/#/c/511402/ - Bug fixes: - https://review.openstack.org/#/c/556748 - Storyboard related changes - https://review.openstack.org/556671 - https://review.openstack.org/556649 - https://review.openstack.org/556645 - https://review.openstack.org/556644 - https://review.openstack.org/#/c/556618/ Needs Revision For next week (TheJulia): https://review.openstack.org/#/c/558027/ https://review.openstack.org/#/c/557850/ Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: None irmc: None - a few works are work in progress oneview: None at this time - No subteam at present. xclarity: None at this time - No subteam at present. 
Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats. - Stats (diff between 12 Mar 2018 and 19 Mar 2018) - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2) - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete - Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high - critical: - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present) - Queens backport release: https://review.openstack.org/#/c/558799/ Pending. - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation. Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - status as of 9 April 2018: - spec for deployment steps framework has merged: https://review.openstack.org/#/c/549493/ - waiting for code from rloo, no timeframe yet BIOS config framework(zshi, yolanda, mgoddard, hshiina) ------------------------------------------------------- - status as of 9 April 2018: - Spec has merged: https://review.openstack.org/#/c/496481/ - List of ordered patches: - BIOS Settings: Add DB model: https://review.openstack.org/511162 need to fix unit tests and merge conflict - Add bios_interface db field https://review.openstack.org/528609 2x+3 - BIOS Settings: Add DB API: https://review.openstack.org/511402 - BIOS Settings: Add RPC object https://review.openstack.org/511714 - Add BIOSInterface to base driver class https://review.openstack.org/507793 - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - (April 9) started spec, about halfway done https://review.openstack.org/#/c/559420/ Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001745 - status as of 9 April 2018: - Dublin PTG consensus was to start with small architectural building blocks. 
- list of cases from the Denver PTG - see in the story - First story up: https://review.openstack.org/#/c/556986/ - MERGED Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - status as of 2 Apr 2018: - No update - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ - needs update, address comments - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console Neutron event processing (vdrok) -------------------------------- - status as of 02 April 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards Goals ===== Updating nova virt to use REST API (TheJulia) --------------------------------------------- Status as of 2 APR 2018: (TheJulia) Some back and forth on this topic. It looks like we're going to keep using python-ironicclient for now but wire in the ability to set the microversion on a per call level. Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of Apr 2nd. - Done! - TheJulia to propose patches to docs where appropriate. - Patches in review. - dtantsur to rewrite the bug dashboard Management interface refactoring (etingof, dtantsur) ---------------------------------------------------- - Status as of 9 Apr: - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ 2x-1 Getting clean steps (rloo, TheJulia) ------------------------------------ - Stat as of April 2nd 2018 - No update - Status as of March 26th: - Cleanhold specification updated - https://review.openstack.org/#/c/507910/ Project vision (jroll, TheJulia) -------------------------------- - Status as of April 9: - jroll still trying to find time to collect enough thoughts for an email SIGHUP support (rloo) --------------------- - Proposed for ironic by rloo -- this is done: https://review.openstack.org/474331 MERGED\o/ - TODO: - ironic-inspector -kaifeng volunteered to do this - networking-baremetal - hjensas volunteered to do this Stretch Goals ============= NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in-place Classic driver removal formerly Classic drivers deprecation (dtantsur) ---------------------------------------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 26 Mar 2018: - switch documentation to hardware types: - api-ref examples: TODO - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO - or should we kill it with fire in favour of the docs? 
- ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ MERGED - backport: https://review.openstack.org/#/c/554586/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: https://review.openstack.org/553431 MERGED - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED - python-ironicclient: https://review.openstack.org/552543 MERGED - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html Redfish OOB inspection (etingof, deray, stendulker) --------------------------------------------------- Zuul v3 playbook refactoring (sambetts, pas-ha) ----------------------------------------------- Before Rocky ============ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Queens High Priorities ====================== Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe. 
- https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is minor enhancement to improve operation. Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. - To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) Posted https://review.openstack.org/554673 - code patch: https://review.openstack.org/#/c/416487/ Needs revision - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ OpenStack Priorities ==================== Mox --- - TheJulia needs to just declare this done. Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. - (TheJulia) It seems we might not have py3 compatibility with swift until the T- cycle. 
- updating setup.cfg (part of requirements for the goal):
  - ironic: https://review.openstack.org/#/c/539500/ - MERGED
  - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED

Deploying with Apache and WSGI in CI (pas-ha, vsaienk0)
-------------------------------------------------------
- ironic is mostly finished
  - (pas-ha) needs to be rewritten for uWSGI, patches on review:
    - https://review.openstack.org/#/c/507067
- inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218
  - delayed as the HA work seems to take a different direction
  - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done?

Subprojects
===========

Inspector (dtantsur)
--------------------
- trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202
- follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up)

Bifrost (TheJulia)
------------------
- It also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work.
  - TheJulia will try to look at this this week.

Drivers:
--------

OneView (???)
~~~~~~~~~~~~~
- OneView presently does not have a subteam.

Cisco UCS (sambetts) Last updated 2018/02/05
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Cisco CIMC driver CI back up and working on every patch
- Cisco UCSM driver CI in development
- Patches for updating the UCS python SDKs are in the works and should be posted soon

.........

Until next week,
--rama

[0] https://etherpad.openstack.org/p/IronicWhiteBoard
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mbirru at gmail.com Tue Apr 10 01:41:10 2018
From: mbirru at gmail.com (Murali B)
Date: Mon, 9 Apr 2018 18:41:10 -0700
Subject: [openstack-dev] [zun] zun-api error
In-Reply-To:
References:
Message-ID:

Hi Hongbin Lu,

After I brought the etcd service up and tried to create a container, I see the error below and my container is in an error state. Could you please tell me if I need to change any configuration in neutron for the docker kuryr driver?

ckercfg'] find_config_file /usr/local/lib/python2.7/dist-packages/docker/utils/config.py:21
2018-04-09 16:47:44.058 41736 DEBUG docker.utils.config [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] No config file found find_config_file /usr/local/lib/python2.7/dist-packages/docker/utils/config.py:28
2018-04-09 16:47:44.345 41736 ERROR zun.compute.manager [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Error occurred while calling Docker start API: Docker internal error: 500 Server Error: Internal Server Error ("IpamDriver.RequestAddress: Requested ip address {'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e', 'ip_address': u'3.3.3.12'} already belongs to a bound Neutron port: 401a5599-2309-482e-b100-e2317c4118cf").: DockerError: Docker internal error: 500 Server Error: Internal Server Error ("IpamDriver.RequestAddress: Requested ip address {'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e', 'ip_address': u'3.3.3.12'} already belongs to a bound Neutron port: 401a5599-2309-482e-b100-e2317c4118cf").
2018-04-09 16:47:44.372 41736 DEBUG oslo_concurrency.lockutils [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Lock "b861d7cc-3e18-4037-8eaf-c6d0076b02a5" released by "zun.compute.manager.do_container_create" :: held 5.163s inner /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285 2018-04-09 16:47:48.493 41610 DEBUG eventlet.wsgi.server [-] (41610) accepted ('10.11.142.2', 60664) server /usr/lib/python2.7/dis Thanks -Murali On Fri, Apr 6, 2018 at 11:00 AM, Murali B wrote: > Hi Hongbin Lu, > > Thank you. After changing the endpoint it worked. Actually I was using > magnum service also. I used the service as "container" for magnum that is > why its is going to 9511 instead of 9517 > After I corrected it worked. > > Thanks > -Murali > > On Fri, Apr 6, 2018 at 8:45 AM, Hongbin Lu wrote: > >> Hi Murali, >> >> It looks your zunclient was sending API requests to >> http://10.11.142.2:9511/v1/services , which doesn't seem to be the right >> API endpoint. According to the Keystone endpoint you configured, the API >> endpoint of Zun should be http://10.11.142.2:9517/v1/services >> (it is on port 9517 instead of >> 9511). >> >> What confused the zunclient is the endpoint's type you configured in >> Keystone. Zun expects an endpoint of type "container" but it was configured >> to be "zun-container" in your setup. I believe the error will be resolved >> if you can update the Zun endpoint from type "zun-container" to type >> "container". Please give it a try and let us know. >> >> Best regards, >> Hongbin >> >> On Thu, Apr 5, 2018 at 7:27 PM, Murali B wrote: >> >>> Hi Hongbin, >>> >>> Thank you for your help >>> >>> As per the our discussion here is the output for my current api on pike. >>> I am not sure which version of zun client client I should use for pike >>> >>> root at cluster3-2:~/python-zunclient# zun service-list >>> ERROR: Not Acceptable (HTTP 406) (Request-ID: >>> req-be69266e-b641-44b9-9739-0c2d050f18b3) >>> root at cluster3-2:~/python-zunclient# zun --debug service-list >>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak >>> = vitrageclient.auth:VitrageKeycloakLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth >>> = vitrageclient.auth:VitrageNoAuthLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('noauth = >>> cinderclient.contrib.noauth:CinderNoAuthLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('v2token = >>> keystoneauth1.loading._plugins.identity.v2:Token') >>> DEBUG (extension:180) found extension EntryPoint.parse('none = >>> keystoneauth1.loading._plugins.noauth:NoAuth') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = >>> keystoneauth1.extras.oauth1._loading:V3OAuth1') >>> DEBUG (extension:180) found extension EntryPoint.parse('admin_token = >>> keystoneauth1.loading._plugins.admin_token:AdminToken') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode >>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuth >>> orizationCode') >>> DEBUG (extension:180) found extension EntryPoint.parse('v2password = >>> keystoneauth1.loading._plugins.identity.v2:Password') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword >>> = keystoneauth1.extras._saml2._loading:Saml2Password') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3password = >>> keystoneauth1.loading._plugins.identity.v3:Password') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword >>> = 
keystoneauth1.extras._saml2._loading:ADFSPassword') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken >>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword >>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = >>> keystoneauth1.extras.kerberos._loading:Kerberos') >>> DEBUG (extension:180) found extension EntryPoint.parse('token = >>> keystoneauth1.loading._plugins.identity.generic:Token') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials >>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClie >>> ntCredentials') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth >>> = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3token = >>> keystoneauth1.loading._plugins.identity.v3:Token') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3totp = >>> keystoneauth1.loading._plugins.identity.v3:TOTP') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3applicationcredential >>> = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') >>> DEBUG (extension:180) found extension EntryPoint.parse('password = >>> keystoneauth1.loading._plugins.identity.generic:Password') >>> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = >>> keystoneauth1.extras.kerberos._loading:MappedKerberos') >>> DEBUG (extension:180) found extension EntryPoint.parse('v1password = >>> swiftclient.authv1:PasswordLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint >>> = openstackclient.api.auth_plugin:TokenEndpoint') >>> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic = >>> gnocchiclient.auth:GnocchiBasicLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth >>> = gnocchiclient.auth:GnocchiNoAuthLoader') >>> DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth = >>> aodhclient.noauth:AodhNoAuthLoader') >>> DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H >>> "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0 >>> python-requests/2.18.1 CPython/2.7.12" >>> DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16 >>> DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" 200 >>> 248 >>> DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT >>> Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu >>> x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >>> Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive >>> Content-Type: application/json >>> RESP BODY: {"version": {"status": "stable", "updated": >>> "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", >>> "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", >>> "links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}} >>> >>> DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used >>> request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >>> DEBUG (base:175) Making authentication request to >>> http://ubuntu16:35357/v3/auth/tokens >>> DEBUG (connectionpool:395) http://ubuntu16:35357 "POST /v3/auth/tokens >>> HTTP/1.1" 201 10333 >>> DEBUG (base:180) {"token": {"is_domain": false, "methods": ["password"], >>> 
"roles": [{"id": "4000a662be2d47fd8fdf5a0fef66767d", "name": "admin"}], >>> "expires_at": "2018-04-06T00:11:08.000000Z", "project": {"domain": {"id": >>> "default", "name": "Default"}, "id": "a391261cffba4f4c827ab7420a352fe1", >>> "name": "admin"}, "catalog": [{"endpoints": [{"url": " >>> http://cluster3-2:9517/v1", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "5a634bafa38c45dbb571f0edb3702101"}, >>> {"url": "http://cluster3-2:9517/v1", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "8926d37d276a4fe49df66bb513f7906a"}, >>> {"url": "http://cluster3-2:9517/v1", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "a74e1b4faf39436aa5d6f9b446ceee92"}], >>> "type": "container-zun", "id": "025154eef222461da9edcfe32ae79e5e", >>> "name": "zun"}, {"endpoints": [{"url": "http://ubuntu16:9001", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "3a94c0df20da47d1b922541a87576ab0"}, {"url": "http://ubuntu16:9001", >>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "5fcab2a59c72433581510d7aafe29961"}, {"url": "http://ubuntu16:9001", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "71e314291a4b4c648aa5ba662b216fa6"}], "type": "dns", "id": >>> "07677b58ad4d469d80dbda8e9fa908bc", "name": "designate"}, {"endpoints": >>> [{"url": "http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "4d56ee7967994c869239007146e52ab8"}, {"url": " >>> http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": >>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "9845138d25ec41b1a7102d8365f1b9c7"}, {"url": " >>> http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", "interface": >>> "public", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "f99f9bf4b0eb4e19aa8dbe72fc13e648"}], "type": "volumev2", "id": >>> "077bd5ecfc59499ab84f49e410efef4f", "name": "cinderv2"}, {"endpoints": >>> [{"url": "http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "355c6c323653469c8315d5dea2998b0d"}, {"url": " >>> http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", "interface": >>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "841768ec3edb42d7b18fe6a2a17f4dbc"}, {"url": " >>> http://10.11.142.2:8004/v1/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "afdbc1d2a5114cd9b0714331eb227ba9"}], "type": "orchestration", >>> "id": "116243d61e3a4c90b7144d6a8b5a170a", "name": "heat"}, >>> {"endpoints": [{"url": "http://ubuntu16:8778", "interface": "internal", >>> "region": "RegionOne", "region_id": "RegionOne", "id": >>> "2dacce3eed484464b3f521b7b2720cd9"}, {"url": "http://ubuntu16:8778", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "5300f9ae336c41b8a8bb93400db35a30"}, {"url": "http://ubuntu16:8778", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "5c7e2cc977f74051b0ed104abb1d46a9"}], "type": "placement", "id": >>> "1d270e2d3d4f488e82597097af933e7a", "name": "placement"}, {"endpoints": >>> [{"url": "http://ubuntu16:8042", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "337f147396f143679e6cf7fbdd3601ab"}, >>> 
{"url": "http://ubuntu16:8042", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "a97d660772e64894b4b13092d7719298"}, >>> {"url": "http://ubuntu16:8042", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "bb5caf186c9947aca31e6ee2a37f6bbd"}], >>> "type": "alarming", "id": "2a19c1a28a42433caa8eb919910ec06f", "name": >>> "aodh"}, {"endpoints": [], "type": "volume", "id": >>> "39c740b891764e4a9081773709269848", "name": "cinder"}, {"endpoints": >>> [{"url": "http://ubuntu16:8041", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "9d455913a5fb4f15bbe15740f4dee260"}, >>> {"url": "http://ubuntu16:8041", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "c5c2471db1cb4ae7a1f3e847404d4b37"}, >>> {"url": "http://ubuntu16:8041", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "cc12daed5ea342a1a47602720589cb9e"}], >>> "type": "metric", "id": "39fdf2d5300343aa8ebe5509d29ba7ce", "name": >>> "gnocchi"}, {"endpoints": [{"url": "http://cluster3-2:9890", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "1c7ddc56ba984afd8187cd1894a75bf1"}, {"url": " >>> http://cluster3-2:9890", "interface": "admin", "region": "RegionOne", >>> "region_id": "RegionOne", "id": "888925c4fc8b48859f086860333c3ab4"}, >>> {"url": "http://cluster3-2:9890", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "9bfd7198dab14f6a8b7eba444f920020"}], >>> "type": "nfv-orchestration", "id": "3da88eae843a4949806186db8a9a3bd0", >>> "name": "tacker"}, {"endpoints": [{"url": "http://10.11.142.2:8999", >>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "32880f809a2f45598a9838e4b168ce5b"}, {"url": " >>> http://10.11.142.2:8999", "interface": "public", "region": "RegionOne", >>> "region_id": "RegionOne", "id": "530711f56f234ad19775fae65774c0ab"}, >>> {"url": "http://10.11.142.2:8999", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "8d7493ad752b453b87d789d0ec5cae93"}], >>> "type": "rca", "id": "55f78369ea5e40e3b9aa9ded854cb163", "name": >>> "vitrage"}, {"endpoints": [{"url": "http://10.11.142.2:5000/v3/", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "afba4b58fd734baeaed94f8f2380a986"}, {"url": " >>> http://ubuntu16:5000/v3/", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "b4b864acfc1746b3ad2d22c6a28e1361"}, >>> {"url": "http://ubuntu16:35357/v3/", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "bf256df5f8d34e9c80c00b78da122118"}], >>> "type": "identity", "id": "58b4ff04dc764fc2aae4bfd9d0f1eb8e", "name": >>> "keystone"}, {"endpoints": [{"url": "http://ubuntu16:8776/v3/a3912 >>> 61cffba4f4c827ab7420a352fe1", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "260f8b9e9e214cc1a39407517b3ca826"}, >>> {"url": "http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "81adeaccba1c4203bddb7734f23116a8"}, {"url": " >>> http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", "interface": >>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "e63332e8b15e43c6b9c331d9ee8551ab"}], "type": "volumev3", "id": >>> "8cd6101718e94ee198cf9ba9894bf1c9", "name": "cinderv3"}, {"endpoints": >>> [{"url": 
"http://ubuntu16:9696", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "65a0b4233436428ab42aa3b40b1ce53f"}, >>> {"url": "http://ubuntu16:9696", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "b8354dd727154056b3c9b81b89054bab"}, >>> {"url": "http://ubuntu16:9696", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "ca44db85238b46cf9fbb6dc6f1d9dff5"}], >>> "type": "network", "id": "ade912885a73431f95a3a01d8a8e6498", "name": >>> "neutron"}, {"endpoints": [{"url": "http://ubuntu16:8000/v1", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "5d7559010ea94cca9edd7ab6213f6b2c"}, {"url": " >>> http://ubuntu16:8000/v1", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "af77025677284808b0715488e22729d4"}, >>> {"url": "http://10.11.142.2:8000/v1", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "c17b650eccf14045af49d5e9d050e875"}], >>> "type": "cloudformation", "id": "b04f735f46e743969e2bb0fff3aee1b5", >>> "name": "heat-cfn"}, {"endpoints": [{"url": " >>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "18580f7a6dea4c53bc66d161e7e0a71e"}, {"url": " >>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "b4a8575704a4426494edc57551f40e58"}, {"url": " >>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "c41ec544b61c41098c07030bc84ba2a0"}], "type": "compute", "id": >>> "b06f4aa21a4a488c8f0c5a835e639bd3", "name": "nova"}, {"endpoints": >>> [{"url": "http://ubuntu16:9292", "interface": "public", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "4ed27e537ca34b6fb93a8c72d8921d24"}, >>> {"url": "http://ubuntu16:9292", "interface": "internal", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "ab0c37600ecf45d797e7972dc6a4fde2"}, >>> {"url": "http://ubuntu16:9292", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "f4a0f97be4f343d698ea12633e3823d6"}], >>> "type": "image", "id": "bbe4fbb4a1d7495f948faa9baf1e3828", "name": >>> "glance"}, {"endpoints": [{"url": "http://ubuntu16:8777", "interface": >>> "public", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "3d160f2286634811b24b8abd6ad72c1f"}, {"url": "http://ubuntu16:8777", >>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "a988e821ff1f4760ae3873c17ab87294"}, {"url": "http://ubuntu16:8777", >>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "def8c07174184a0ca26e2f0f26d60a73"}], "type": "metering", "id": >>> "f4450730522d4342ac6626b81567b36c", "name": "ceilometer"}, >>> {"endpoints": [{"url": "http://ubuntu16:9511/v1", "interface": >>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>> "19e14e5c5c5a4d3db6a6a632db728668"}, {"url": "http://10.11.142.2:9511/v1", >>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>> "id": "28fb2092bcc748ce88dfb1284ace1264"}, {"url": " >>> http://10.11.142.2:9511/v1", "interface": "admin", "region": >>> "RegionOne", "region_id": "RegionOne", "id": "c33f5b4a355d4067aa2e7093606cd153"}], >>> "type": "container", "id": "fdbcff09ecd545c8ba28bfd96782794a", "name": >>> 
"magnum"}], "user": {"domain": {"id": "default", "name": "Default"}, >>> "password_expires_at": null, "name": "admin", "id": >>> "3b136545b47b40709b78b1e36cdcdc63"}, "audit_ids": >>> ["Ad1z5kAmRBehcgxG6-8IYA"], "issued_at": "2018-04-05T23:11:08.000000Z"}} >>> DEBUG (session:372) REQ: curl -g -i -X GET >>> http://10.11.142.2:9511/v1/services -H "OpenStack-API-Version: >>> container 1.2" -H "X-Auth-Token: {SHA1}7523b440595290414cefa54434fc7c8adbec5c3d" >>> -H "Content-Type: application/json" -H "Accept: application/json" -H >>> "User-Agent: None" >>> DEBUG (connectionpool:207) Starting new HTTP connection (1): 10.11.142.2 >>> DEBUG (connectionpool:395) http://10.11.142.2:9511 "GET /v1/services >>> HTTP/1.1" 406 166 >>> DEBUG (session:419) RESP: [406] Content-Type: application/json >>> Content-Length: 166 x-openstack-request-id: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >>> Server: Werkzeug/0.10.4 Python/2.7.12 Date: Thu, 05 Apr 2018 23:11:09 GMT >>> RESP BODY: {"errors": [{"status": 406, "code": "", "links": [], "title": >>> "Not Acceptable", "detail": "Invalid service type for OpenStack-API-Version >>> header", "request_id": ""}]} >>> >>> DEBUG (session:722) GET call to container for >>> http://10.11.142.2:9511/v1/services used request id >>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >>> DEBUG (shell:705) Not Acceptable (HTTP 406) (Request-ID: >>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>> Traceback (most recent call last): >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", >>> line 703, in main >>> map(encodeutils.safe_decode, sys.argv[1:])) >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", >>> line 639, in main >>> args.func(self.cs, args) >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services_shell.py", >>> line 22, in do_service_list >>> services = cs.services.list() >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services.py", >>> line 70, in list >>> return self._list(self._path(path), "services") >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/base.py", >>> line 128, in _list >>> resp, body = self.api.json_request('GET', url) >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >>> line 368, in json_request >>> resp = self._http_request(url, method, **kwargs) >>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >>> line 351, in _http_request >>> error_json.get('debuginfo'), method, url) >>> NotAcceptable: Not Acceptable (HTTP 406) (Request-ID: >>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>> ERROR: Not Acceptable (HTTP 406) (Request-ID: >>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>> >>> >>> >>> Thanks >>> -Murali >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Tue Apr 10 02:57:15 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 10 Apr 2018 10:57:15 +0800 Subject: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event In-Reply-To: References: Message-ID: Could you please help to share whether this kind of event is sent by neutron-server or neutron agent ? I searched neutron code from [1][2] this means the agent itself need tell neutron server the device(VIF) is up then neutron server will send notification to nova through REST API and in turn consumed by compute node? 
[1] https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct
[2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Matt Riedemann
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/10/2018 01:56 AM
Subject: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event

As part of a bug fix [1], the internal ComputeVirtAPI.wait_for_instance_event interface is changing to no longer accept event names that are strings, and will now require the (name, tag) tuple form which all of the in-tree virt drivers are already using.

If you have an out of tree driver that uses this interface, heads up that you'll need to be using the tuple form if you are not already doing so.

[1] https://review.openstack.org/#/c/558059/

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From rchugh at ncsu.edu Tue Apr 10 03:54:33 2018
From: rchugh at ncsu.edu (Rushil Chugh)
Date: Mon, 9 Apr 2018 23:54:33 -0400
Subject: [openstack-dev] [cyborg] Promote Li Liu as new core reviewer
In-Reply-To: <55e1f32d-eb8b-10f6-e982-280604ff2d8b@intel.com>
References: <55e1f32d-eb8b-10f6-e982-280604ff2d8b@intel.com>
Message-ID:

+1

On Mon, Apr 9, 2018 at 3:13 PM, Nadathur, Sundar wrote:

> Agreed! +1
>
> Regards,
> Sundar
>
> Hi Team,
>
> This is an email for my nomination of adding Li Liu to the core reviewer
> team. Li Liu has been instrumental in the resource provider data model
> implementation for Cyborg during the Queens release, as well as metadata
> standardization and programming design for Rocky.
>
> His overall stats [0] and current stats [1] for Rocky speak for themselves.
> His patches can be found here [2].
>
> Given the amount of work underway for Rocky, it would be great to add
> such an amazing force :)
>
> [0] http://stackalytics.com/?module=cyborg-group&metric=person-day&release=all
> [1] http://stackalytics.com/?module=cyborg-group&metric=person-day&release=rocky
> [2] https://review.openstack.org/#/q/owner:liliueecg%2540gmail.com
>
> --
> Zhipeng (Howard) Huang
>
> Standard Engineer
> IT Standard & Patent/IT Product Line
> Huawei Technologies Co.,
Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribehttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony at bakeyournoodle.com Tue Apr 10 05:50:26 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 10 Apr 2018 15:50:26 +1000 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523282186-sup-2@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> <1523282186-sup-2@lrrr.local> Message-ID: <20180410055026.GI28028@thor.bakeyournoodle.com> On Mon, Apr 09, 2018 at 09:58:28AM -0400, Doug Hellmann wrote: > Now that projects don't have to match the global requirements list > entries exactly we should be able to remove caps from within the > projects and keep caps in the global list for cases like this where we > know we frequently encounter breaking changes in new releases. The > changes to support that were part of > https://review.openstack.org/#/c/555402/ True. I was trying to add context to why we don't always rely on upper-constraints.txt to save us. So yeah we can start working towards removing the caps per project. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From chdzsp at 163.com Tue Apr 10 06:03:54 2018 From: chdzsp at 163.com (=?GBK?B?1tPJ+sa9?=) Date: Tue, 10 Apr 2018 14:03:54 +0800 (CST) Subject: [openstack-dev] [puppet] Add new puppet-senlin repository to Puppet OpenStack Message-ID: <799a5ac3.115da68.162ae274d2b.Coremail.chdzsp@163.com> Hi, core members I have added new puppet-senlin repository to Puppet OpenStack[1][2][3]. I'm going to work on this module. Please review. [1]https://review.openstack.org/#/c/559537/ [2]https://review.openstack.org/#/c/559539/ [3]https://review.openstack.org/#/c/559563/ Thanks, Zhong Shengping -------------- next part -------------- An HTML attachment was scrubbed... URL: From xinni.ge1990 at gmail.com Tue Apr 10 06:08:30 2018 From: xinni.ge1990 at gmail.com (Xinni Ge) Date: Tue, 10 Apr 2018 15:08:30 +0900 Subject: [openstack-dev] [horizon][xstatic]How to handle xstatic if upstream files are modified In-Reply-To: References: Message-ID: Hi Radomir, Ivan, Thanks a lot for your advice. I will update the xstatic files just as the upstream. As for the customized lines, I will try to find a better way to solve it, maybe override the original functions inside the project. 
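For reference, here is a minimal sketch of what I understand the XStatic packaging convention to be (the field names follow existing XStatic packages, but the package name and values here are only illustrative): the Python module carries metadata only, and everything under data/ is the upstream distribution byte-for-byte, so any {$ $} adaptations would have to live on the Horizon side rather than in the package.

    # xstatic/pkg/example_lib/__init__.py -- illustrative sketch only
    import os

    DESCRIPTION = 'example-lib (XStatic packaging standard)'
    PLATFORMS = 'any'
    NAME = __name__.split('.')[-1]    # 'example_lib'
    VERSION = '1.0.0'                 # must track the upstream release
    BUILD = '0'                       # bumped only for re-packaging
    PACKAGE_VERSION = VERSION + '.' + BUILD
    MAINTAINER = 'Xinni Ge'
    LICENSE = 'same as upstream'

    # data/ holds the unmodified upstream files
    BASE_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'data')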
Best regards, Xinni On Tue, Apr 10, 2018 at 4:58 AM, Ivan Kolodyazhny wrote: > Hi, Xinni, > > I absolutely agree with Radomir. We should keep xstatic files without > modifications. We don't know if they are used outside of OpenStack or not, > so they should be the same as NPM packages > > > Regards, > Ivan Kolodyazhny, > http://blog.e0ne.info/ > > On Mon, Apr 9, 2018 at 12:32 PM, Radomir Dopieralski < > openstack at sheep.art.pl> wrote: > >> The whole idea about xstatic files is that they are generic, not specific >> to Horizon or OpenStack, usable by other projects that need those static >> files. In fact, at the time we started using xstatic, it was being used by >> the MoinMoin wiki project (which is now dead, sadly). The modifications you >> made are very specific to your usecase and would make it impossible to >> reuse the packages by other applications (or even by other Horizon >> plugins). The whole idea of a library is that you are using it as it is >> provided, and not modifying it. >> >> We generally try to use all the libraries as they are, and if there are >> any modifications necessary, we push them upstream, to the original >> library. Otherwise there would be quite a bit of maintenance overhead >> necessary to keep all our downstream patches. When considerable >> modification is necessary that can't be pushed upstream, we fork the >> library either into its own repository, or include it in the repository of >> the application that is using it. >> >> On Mon, Apr 9, 2018 at 2:54 AM, Xinni Ge wrote: >> >>> Hello, team. >>> >>> Sorry for talking about xstatic repo for so many times. >>> >>> I didn't realize xstatic repositories should be provided with exactly >>> the same file as upstream, and should have talked about it at very first. >>> >>> I modified several upstream files because some of them files couldn't be >>> used directly under my expectation. >>> >>> For example, {{ }} are used in some original files as template tags, >>> but Horizon adopts {$ $} in angular module, so I modified them to be >>> recognized properly. >>> >>> Another major modification is that css files are converted into scss >>> files to solve some css import issue previously. >>> Besides, after collecting statics, some png file paths in css cannot be >>> referenced properly and shown as 404 errors, I also modified css itself to >>> handle this issues. >>> >>> I will recheck all the un-matched xstatic repositories and try to >>> replace with upstream files as much as I can. >>> But I if I really have to modify some original files, is it acceptable >>> to still use it as embedded files with license info appeared at the top? 
>>> >>> >>> Best Regards, >>> Xinni Ge >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Best Regards, Xinni Ge -------------- next part -------------- An HTML attachment was scrubbed... URL: From anilvenkata at redhat.com Tue Apr 10 06:36:53 2018 From: anilvenkata at redhat.com (Anil Venkata) Date: Tue, 10 Apr 2018 12:06:53 +0530 Subject: [openstack-dev] [neutron] [OVN] Tempest API / Scenario tests and OVN metadata In-Reply-To: References: <4C4EB692-8B09-4689-BDC2-E6447D719073@kaplonski.pl> <4A490EDA-BD7F-444C-AA4F-65562FE21408@kaplonski.pl> Message-ID: How to override tempest tests in neutron or networking-ovn repo? Thanks Anil On Mon, Apr 9, 2018 at 8:26 PM, Lucas Alvares Gomes wrote: > Hi, > > > Another idea is to modify test that it will: > > 1. Check how many ports are in tenant, > > 2. Set quota to actual number of ports + 1 instead of hardcoded 1 as it > is now, > > 3. Try to add 2 ports - exactly as it is now, > > > > I think that this should be still backend agnostic and should fix this > problem. > > > > Great idea! I've gave it a go and proposed it at > https://review.openstack.org/559758 > > Cheers, > Lucas > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Tue Apr 10 07:50:37 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Tue, 10 Apr 2018 07:50:37 +0000 Subject: [openstack-dev] [all] Changes to Zuul role checkouts In-Reply-To: <87r2nonwuw.fsf@meyer.lemoncheese.net> References: <87r2nonwuw.fsf@meyer.lemoncheese.net> Message-ID: On Mon, Apr 9, 2018 at 5:55 PM James E. Blair wrote: > Hi, > > We recently fixed a subtle but important bug related to how Zuul checks > out repositories it uses to find Ansible roles for jobs. > \o/ > > This may result in a behavior change, or even an error, for jobs which > use roles defined in projects with multiple branches. > > Previously, Zuul would (with some exceptions) generally check out the > 'master' branch of any repository which appeared in the 'roles:' stanza > in the job definition. Now Zuul will follow its usual procedure of > trying to find the most appropriate branch to check out. That means it > tries the project override-checkout branch first, then the job > override-checkout branch, then the branch of the change, and finally the > default branch of the project. 
> > This should produce more predictable behavior which matches the > checkouts of all other projects involved in a job. > > If you find that the wrong branch of a role is being checked out, > depending on circumstances, you may need to set a job or project > override-checkout value to force the correct one, or you may need to > backport a role to an older branch. > > If you encounter any problems related to this, please chat with us in > #openstack-infra. > > Thanks a lot Jim for fixing this! With this in place I can now continue the work on devstack, tempest and grenade base roles and jobs for zuul v3. Next steps (in order of dependency): - backport ansible devstack changes to queens and pike - start using the "orchestrate-devstack" role in the "devstack-tempest" base job - so that it can be used for multinode jobs as well - continue the work on setting up a grenade zuulv3 job Andrea Frittoli (andreaf) > Thanks, > > Jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at ericsson.com Tue Apr 10 08:05:33 2018 From: elod.illes at ericsson.com (=?UTF-8?B?RWzDtWQgSWxsw6lz?=) Date: Tue, 10 Apr 2018 10:05:33 +0200 Subject: [openstack-dev] [Openstack-stable-maint] Stable check of openstack/networking-midonet failed In-Reply-To: <20180409041618.GC28028@thor.bakeyournoodle.com> References: <20180401035507.GD4343@thor.bakeyournoodle.com> <144369c3-204e-fcf7-9265-855f952bdb02@ericsson.com> <20180409041618.GC28028@thor.bakeyournoodle.com> Message-ID: Hi, Thanks, too. I've prepared the remaining backport [1] for stable/ocata to solve the issue there as well [2] [1] https://review.openstack.org/#/c/559940/ [2] http://logs.openstack.org/periodic-stable/git.openstack.org/openstack/networking-midonet/stable/ocata/openstack-tox-py27/928af21/job-output.txt.gz#_2018-04-10_06_29_43_146966 Thanks, Előd On 2018-04-09 06:16, Tony Breeds wrote: > On Tue, Apr 03, 2018 at 02:05:35PM +0200, Elõd Illés wrote: >> Hi, >> >> These patches probably solve the issue, if someone could review them: >> >> https://review.openstack.org/#/c/557005/ >> >> and >> >> https://review.openstack.org/#/c/557006/ >> >> Thanks, > Thanks for digging into that. I've approved these even though they > don't have a +2 from the neutron stable team. They look safe as the > only impact tests, unblock the gate and also have +1's from subject > matter experts. > > Yours Tony. From chdzsp at 163.com Tue Apr 10 08:09:46 2018 From: chdzsp at 163.com (=?GBK?B?1tPJ+sa9?=) Date: Tue, 10 Apr 2018 16:09:46 +0800 (CST) Subject: [openstack-dev] [puppet] Add new puppet-senlin repository to Puppet OpenStack In-Reply-To: <799a5ac3.115da68.162ae274d2b.Coremail.chdzsp@163.com> References: <799a5ac3.115da68.162ae274d2b.Coremail.chdzsp@163.com> Message-ID: <4702c3a6.1161c69.162ae9a8b76.Coremail.chdzsp@163.com> Hi, Mohammed Needs PTL+1, Can you review? Thanks, Zhong Shengping At 2018-04-10 14:03:54, "钟生平" wrote: Hi, core members I have added new puppet-senlin repository to Puppet OpenStack[1][2][3]. I'm going to work on this module. Please review. 
[1] https://review.openstack.org/#/c/559537/
[2] https://review.openstack.org/#/c/559539/
[3] https://review.openstack.org/#/c/559563/

Thanks,
Zhong Shengping
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geguileo at redhat.com Tue Apr 10 08:35:57 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 10 Apr 2018 10:35:57 +0200
Subject: [openstack-dev] [cinder][nova] about re-image the volume
In-Reply-To: <20180409191551.GA13852@sm-xps>
References: <20180402115959.3y3j6ytab6ruorrg@localhost> <96adcaac-632a-95c3-71c8-51211c1c57bd@gmail.com> <20180405081558.vf7bibu4fcv5kov3@localhost> <20180406083110.tydltwfe23kiq7bw@localhost> <20180409085123.ydm5n3i3lngqsgjc@localhost> <20180409191551.GA13852@sm-xps>
Message-ID: <20180410083557.vsfmdmioebg2cm4r@localhost>

On 09/04, Sean McGinnis wrote:
> On Mon, Apr 09, 2018 at 07:00:56PM +0100, Duncan Thomas wrote:
> > Hopefully this flow means we can do rebuild root filesystem from
> > snapshot/backup too? It seems rather artificially limiting to only do
> > restore-from-image. I'd expect restore-from-snap to be a more common
> > use case, personally.
> >
>
> That could get tricky. We only support reverting to the last snapshot if we
> reuse the same volume. Otherwise, we can create a volume from a snapshot, but I
> don't think it's often that the first thing a user does is create a snapshot on
> initial creation of a boot image. If it was created from the image cache, and the
> backend creates those cached volumes by using a snapshot, then that might be an
> option.
>
> But these are a lot of ifs, so that seems like it would make the logic for this
> much more complicated.
>
> Maybe a phase II optimization we can look into?

From the Cinder side of things I think these two would be easier than the
re-image, because we would have even fewer steps, and the functionality to do
the copying is exactly what we have now, as it will copy the data to the same
volume, so we wouldn't need to fiddle with the UUID fields etc.

Moreover I know customers who have asked about this functionality in the
past, mostly interested in restoring the root volume of an existing VM from a
backup to preserve the system ID and not break licenses.

From jichenjc at cn.ibm.com Tue Apr 10 08:36:25 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Tue, 10 Apr 2018 16:36:25 +0800
Subject: [openstack-dev] [nova] EC2 cleanup ?
In-Reply-To:
References:
Message-ID:

A patch set has been proposed [1]; additional patches will be posted once I get further feedback.

[1] https://review.openstack.org/#/c/556778/

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Artom Lifshitz
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 03/27/2018 02:30 AM
Subject: Re: [openstack-dev] [nova] EC2 cleanup ?

> That is easier said than done.
> There have been a couple of related attempts in the past:
>
> https://review.openstack.org/#/c/266425/
>
> https://review.openstack.org/#/c/282872/
>
> I don't remember exactly where those fell down, but it's worth looking at
> this first before trying to do this again.

Interesting. [1] exists, and I'm pretty sure that we ship it as part of Red
Hat OpenStack (but I'm not a PM and this is not an official Red Hat stance,
just me and my memory), so it works well enough. If we have things that
depend on our in-tree ec2 api, maybe we need to get them moved over to [1]?

[1] https://github.com/openstack/ec2-api

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geguileo at redhat.com Tue Apr 10 08:53:00 2018
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 10 Apr 2018 10:53:00 +0200
Subject: [openstack-dev] [oslo.db] innodb OPTIMIZE TABLE ?
In-Reply-To:
References: <04e33bc7-90cf-e9c6-c276-a852212c25c7@gmail.com> <20180404090026.xl22i4kyplurq36z@localhost> <20180409095306.s4qxqi7q3m7p46d2@localhost>
Message-ID: <20180410085300.lzwo7zwjunmmrkxr@localhost>

On 09/04, Michael Bayer wrote:
> On Mon, Apr 9, 2018 at 5:53 AM, Gorka Eguileor wrote:
> > On 06/04, Michael Bayer wrote:
> >> On Wed, Apr 4, 2018 at 5:00 AM, Gorka Eguileor wrote:
> >> > On 03/04, Jay Pipes wrote:
> >> >> On 04/03/2018 11:07 AM, Michael Bayer wrote:
> >> >> > The MySQL / MariaDB variants we use nowadays default to
> >> >> > innodb_file_per_table=ON and we also set this flag to ON in installer
> >> >> > tools like TripleO. The reason we like file per table is so that
> >> >> > we don't grow an enormous ibdata file that can't be shrunk without
> >> >> > rebuilding the database. Instead, we have lots of little .ibd
> >> >> > datafiles for each table throughout each openstack database.
> >> >> >
> >> >> > But now we have the issue that these files also can benefit from
> >> >> > periodic optimization which can shrink them and also have a beneficial
> >> >> > effect on performance. The OPTIMIZE TABLE statement achieves this,
> >> >> > but as would be expected it itself can lock tables for potentially a long time.
Googling around reveals a lot of controversy, as various > >> >> > users and publications suggest that OPTIMIZE is never needed and would > >> >> > have only a negligible effect on performance. However here we seek > >> >> > to use OPTIMIZE so that we can reclaim disk space on tables that have > >> >> > lots of DELETE activity, such as keystone "token" and ceilometer > >> >> > "sample". > >> >> > > >> >> > Questions for the group: > >> >> > > >> >> > 1. is OPTIMIZE table worthwhile to be run for tables where the > >> >> > datafile has grown much larger than the number of rows we have in the > >> >> > table? > >> >> > >> >> Possibly, though it's questionable to use MySQL/InnoDB for storing transient > >> >> data that is deleted often like ceilometer samples and keystone tokens. A > >> >> much better solution is to use RDBMS partitioning so you can simply ALTER > >> >> TABLE .. DROP PARTITION those partitions that are no longer relevant (and > >> >> don't even bother DELETEing individual rows) or, in the case of Ceilometer > >> >> samples, don't use a traditional RDBMS for timeseries data at all... > >> >> > >> >> But since that is unfortunately already the case, yes it is probably a good > >> >> idea to OPTIMIZE TABLE on those tables. > >> >> > >> >> > 2. from people's production experience how safe is it to run OPTIMIZE, > >> >> > e.g. how long is it locking tables, etc. > >> >> > >> >> Is it safe? Yes. > >> >> > >> >> Does it lock the entire table for the duration of the operation? No. It uses > >> >> online DDL operations: > >> >> > >> >> https://dev.mysql.com/doc/refman/5.7/en/innodb-file-defragmenting.html > >> >> > >> >> Note that OPTIMIZE TABLE is mapped to ALTER TABLE tbl_name FORCE for InnoDB > >> >> tables. > >> >> > >> >> > 3. is there a heuristic we can use to measure when we might run this > >> >> > -.e.g my plan is we measure the size in bytes of each row in a table > >> >> > and then compare that in some ratio to the size of the corresponding > >> >> > .ibd file, if the .ibd file is N times larger than the logical data > >> >> > size we run OPTIMIZE ? > >> >> > >> >> I don't believe so, no. Most things I see recommended is to simply run > >> >> OPTIMIZE TABLE in a cron job on each table periodically. > >> >> > >> >> > 4. I'd like to propose this job of scanning table datafile sizes in > >> >> > ratio to logical data sizes, then running OPTIMIZE, be a utility > >> >> > script that is delivered via oslo.db, and would run for all innodb > >> >> > tables within a target MySQL/ MariaDB server generically. That is, I > >> >> > really *dont* want this to be a script that Keystone, Nova, Ceilometer > >> >> > etc. are all maintaining delivering themselves. this should be done > >> >> > as a generic pass on a whole database (noting, again, we are only > >> >> > running it for very specific InnoDB tables that we observe have a poor > >> >> > logical/physical size ratio). > >> >> > >> >> I don't believe this should be in oslo.db. This is strictly the purview of > >> >> deployment tools and should stay there, IMHO. > >> >> > >> > > >> > Hi, > >> > > >> > As far as I know most projects do "soft deletes" where we just flag the > >> > rows as deleted and don't remove them from the DB, so it's only when we > >> > use a management tool and run the "purge" command that we actually > >> > remove these rows. > >> > > >> > Since running the optimize without purging would be meaningless, I'm > >> > wondering if we should trigger the OPTIMIZE also within the purging > >> > code. 
This way we could avoid innefective runs of the optimize command > >> > when no purge has happened and even when we do the optimization we could > >> > skip the ratio calculation altogether for tables where no rows have been > >> > deleted (the ratio hasn't changed). > >> > > >> > >> the issue is that this OPTIMIZE will block on Galera unless it is run > >> on a per-individual node basis along with the changing of the > >> wsrep_OSU_method parameter, this is way out of scope both to be > >> redundantly hardcoded in multiple openstack projects, as well as > >> there's no portable way for Keystone and others to get at the > >> individual Galera node addresses. Putting it in oslo.db would at > >> least be a place that most of this logic can live but even then it > >> needs to run for multiple Galera nodes and needs to have > >> deployment-specific configuration. *unless* we say, the OPTIMIZE > >> here will short for a purged table, let's just let it block. > >> > > > > I see... What about a hybrid solution? Use the alter table as mentioned > > in the comment [1] to not block the table for systems that support it, > > and going with the RSU mode when it's not supported? > > > > sure, it just depends on if we have Galera running or not, so I intend > to detect if the current MySQL database is a Galera cluster or not by > looking for wsrep_* variables and status. Tripleo will know to > deploy the script directly to each MySQL database, galera or not, on > the local host that MySQL is running and the script will just do the > right thing without any of the downstream apps having to know about > it. > Maybe I misunderstood the comment, but it sounded that even clustered MariaDB with Galera would be able to avoid locking the whole table with a new enough version. In any case your plan sounds good to me. > > > > > > > [1] https://mariadb.com/kb/en/library/optimize-table/#comment_3191 > > > > > >> > >> > Ideally the ratio calculation and optimization code would be provided by > >> > oslo.db to reduce code duplication between projects. > >> > >> I was hoping to have this be part of oslo.db but there's disagreement on that :) > >> > >> If this can't be in oslo.db then the biggest issue facing me on this > >> is building out a new application and getting it packaged since this > >> feature has no home, unless I can ship it as some kind of script > >> packaged in tripleo. > >> > >> > > > > I think the oslo.db home you proposed has the great benefit of making it > > available in all deployments regardless of the installer, if that's not > > possible I would go with the TripleO script before creating yet another > > project that needs to be packaged and maintained. > > > > Cheers, > > Gorka. > > > > > >> > > >> > > >> >> > 5. for Galera this gets more tricky, as we might want to run OPTIMIZE > >> >> > on individual nodes directly. The script at [1] illustrates how to > >> >> > run this on individual nodes one at a time. > >> >> > > >> >> > More succinctly, the Q is: > >> >> > > >> >> > a. OPTIMIZE, yes or no? > >> >> > >> >> Yes. > >> >> > >> >> > b. oslo.db script to run generically, yes or no? > >> >> > >> >> No. Just have Triple-O install galera_innoptimizer and run it in a cron job. > >> >> > >> >> Best, > >> >> -jay > >> >> > >> >> > thanks for your thoughts! 
> >> >> > > >> >> > > >> >> > > >> >> > [1] https://github.com/deimosfr/galera_innoptimizer > >> >> > > >> >> > __________________________________________________________________________ > >> >> > OpenStack Development Mailing List (not for usage questions) > >> >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> >> > > >> >> > >> >> __________________________________________________________________________ > >> >> OpenStack Development Mailing List (not for usage questions) > >> >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > > >> > __________________________________________________________________________ > >> > OpenStack Development Mailing List (not for usage questions) > >> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> __________________________________________________________________________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kchamart at redhat.com Tue Apr 10 09:17:39 2018 From: kchamart at redhat.com (Kashyap Chamarthy) Date: Tue, 10 Apr 2018 11:17:39 +0200 Subject: [openstack-dev] [Openstack-operators] RFC: Next minimum libvirt / QEMU versions for "Stein" release In-Reply-To: <4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com> References: <20180330142643.ff3czxy35khmjakx@eukaryote> <20180331140929.r5kj3qyrefvsovwf@eukaryote> <20180404084507.GA18076@paraplu> <20180406100714.GB18076@paraplu> <98bf44f7-331d-7239-6eec-971f8afd604d@debian.org> <20180406170703.GD18076@paraplu> <355fafcc-8d7c-67a2-88c0-2823a51296f8@gmail.com> <20180409095858.GE18076@paraplu> <4a1a2732-cc78-eb4a-8517-f43a8f99c779@gmail.com> Message-ID: <20180410091739.GF18076@paraplu> On Mon, Apr 09, 2018 at 04:24:06PM -0500, Matt Riedemann wrote: > On 4/9/2018 4:58 AM, Kashyap Chamarthy wrote: > > Keep in mind that Matt has a tendency to sometimes unfairly > > over-simplify others views;-). More seriously, c'mon Matt; I went out > > of my way to spend time learning about Debian's packaging structure and > > trying to get the details right by talking to folks on > > #debian-backports. And as you may have seen, I marked the patch[*] as > > "RFC", and repeatedly said that I'm working on an agreeable lowest > > common denominator. > > Sorry Kashyap, I didn't mean to offend. I was hoping "delicious bugs" would > have made that obvious but I can see how it's not. You've done a great, > thorough job on sorting this all out. No problem at all. I know your communication style enough to not take offence :-). Thanks for the words! 
> Since I didn't know what "RFC" meant until googling it today, how about
> dropping that from the patch so I can +2 it?

Sure, I meant to remove it on my last iteration; now dropped it.  (As you
noted on the review, should've used '-Workflow', but I typed "RFC" out of
muscle memory.)  Thanks for the review.

* * *

Aside: On the other patch[+] that actually bumps for "Rocky" and fixes
the resulting unit test fallout, I intend to fix the rest of the failing
tests sometime this week.

Remaining tests to be fixed:

    test_live_migration_update_serial_console_xml
    test_live_migration_with_valid_target_connect_addr
    test_live_migration_raises_exception
    test_virtuozzo_min_version_ok
    test_min_version_ppc_ok
    test_live_migration_update_graphics_xml
    test_min_version_s390_ok

[+] https://review.openstack.org/#/c/558783/ -- libvirt: Bump
    MIN_{LIBVIRT,QEMU}_VERSION for "Rocky"

--
/kashyap

From lyarwood at redhat.com  Tue Apr 10 10:12:55 2018
From: lyarwood at redhat.com (Lee Yarwood)
Date: Tue, 10 Apr 2018 11:12:55 +0100
Subject: [openstack-dev] [nova][cinder] Concurrent requests to attach the
	same non-multiattach volume to multiple instances can succeed
Message-ID: <20180410101255.i5s2sxfq2hfbgupi@lyarwood.usersys.redhat.com>

Hello all,

I just wanted to draw some attention to the following bug I stumbled
across yesterday when sending concurrent requests to attach a
non-multiattach volume to multiple instances:

https://bugs.launchpad.net/cinder/+bug/1762687

Scanning over the v3 API code in Cinder suggests that this could be due
to a complete lack of locking when creating the initial attachment, but
I might be missing something here.

I've marked this as impacting both Nova and Cinder for now but, if I'm
honest, this strikes me as something we need to resolve in c-api alone.

Cheers,

--
Lee Yarwood
A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: not available
URL:
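For illustration only, the kind of per-volume serialization the bug
report suggests is missing, sketched with oslo.concurrency; the volume
lookup and attachment helpers below are hypothetical stand-ins, not
actual Cinder code:

    # Hypothetical sketch: serialize attachment creation per volume so
    # two concurrent requests can't both observe "no attachments" and
    # both succeed. get_volume()/do_create_attachment() are stand-ins.
    from oslo_concurrency import lockutils

    class AlreadyAttached(Exception):
        pass

    def create_attachment(volume_id, instance_uuid):
        # external=True takes a file-based interprocess lock, so it also
        # covers multiple API workers on one host (lock_path must be
        # configured); a multi-host deployment would need more than this.
        with lockutils.lock('volume-attachment-%s' % volume_id,
                            external=True):
            volume = get_volume(volume_id)  # hypothetical lookup
            if volume.attachments and not volume.multiattach:
                raise AlreadyAttached(volume_id)
            return do_create_attachment(volume, instance_uuid)  # hypothetical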
From cdent+os at anticdent.org  Tue Apr 10 11:15:49 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 10 Apr 2018 12:15:49 +0100 (BST)
Subject: [openstack-dev] [tc] [all] TC Report 18-15
Message-ID:

Also at: https://anticdent.org/tc-report-18-15.html

It feels like people are busy. Traffic in the `#openstack-tc` channel
has been light for the past week.

# Forum Topics

By this coming Thursday we hope to have determined which [of several
topics](https://etherpad.openstack.org/p/YVR-forum-TC-sessions) should
be proposed for [the forum](http://forumtopics.openstack.org/).

# Kolla Situation

[In](http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html)
[email](http://lists.openstack.org/pipermail/openstack-dev/2018-April/128950.html)
and in IRC discussion
[Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-04.log.html#t2018-04-04T17:52:45)
and
[Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-05.log.html#t2018-04-05T15:01:20)
the many faces of Kolla have been debated. This started as a discussion
of whether kolla-kubernetes should be retired but has branched widely
since then, including plenty of discussion on whether Kolla is doing
containers "the right way" (whatever that might be), and whether the
start scripts in Kolla images should [be
moved](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129088.html).

One of the side topics within this discussion is the nature of the
multiple hats worn by members of the community when engaging in
discussion. Does membership on the TC mean that everything you say is
with the voice of the TC? I certainly hope not, however there is
probably more that can be done to be clear which hat is being worn in
any given situation. (So it's clear, these reports are written wearing
my Chris hat and are not the voice of the TC.)

# Consumption Models

There was some discussion [on
Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-05.log.html#t2018-04-05T15:40:23)
about more effectively tracking the consumption (of OpenStack) models
that are present in the community. ttx suggested some additional survey
questions. Better data ought to help us decide if energy is being spent
in the right ways.

# Elections

The nomination period for TC
[elections](https://governance.openstack.org/election/) starts tonight
at 1 minute to midnight, UTC. There are seven positions open for this
election. The [potential forum
topics](https://etherpad.openstack.org/p/YVR-forum-TC-sessions) give a
pretty good overview of _some_ of the things that are on the TC's radar
for the coming months.

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From cdent+os at anticdent.org  Tue Apr 10 12:16:46 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Tue, 10 Apr 2018 13:16:46 +0100 (BST)
Subject: [openstack-dev] [tc] Proposal to start TC elections earlier
Message-ID:

I've posted a review

    https://review.openstack.org/#/c/560002/

that suggests we do elections for the TC with a bigger gap between
the election and summit. From the commit message:

    The existing 3 weeks prior to summit target for TC elections can
    be problematic for travel planning for candidates who might only
    go to summit if they win their election, or might plan a
    different length of trip depending on their role(s) in the
    community. This change makes the target for the election to be
    six weeks prior to summit to ease that planning.

    In addition to helping with travel concerns, it also means that
    any newly elected TC members will be more involved in planning
    for the forum at the summit.

    If approved this change would go into effect for the second
    election in 2018. The first election of 2018 is already
    scheduled.

If you have opinions on this please comment here or on the review.

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From mriedemos at gmail.com  Tue Apr 10 14:05:01 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Tue, 10 Apr 2018 09:05:01 -0500
Subject: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event
In-Reply-To:
References:
Message-ID:

On 4/9/2018 9:57 PM, Chen CH Ji wrote:
> Could you please help to share whether this kind of event is sent by
> neutron-server or the neutron agent? I searched the neutron code in
> [1][2]; does this mean the agent itself needs to tell the neutron
> server that the device (VIF) is up, and then the neutron server sends
> a notification to nova through the REST API, which is in turn consumed
> by the compute node?
>
> [1] https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct
> [2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264

I believe the neutron agent is the one that is getting (or polling) the
information from the underlying network backend when VIFs are plugged or
unplugged from a host, then routes that information via RPC to the
neutron server, which then sends an os-server-external-events request to
the compute REST API, which then routes the event information down to
the nova-compute host where the instance is currently running.

--

Thanks,

Matt
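For reference, the os-server-external-events request described above
looks roughly like the following; the endpoint, token and UUIDs are
placeholders, and neutron actually sends this through novaclient rather
than hand-rolled HTTP:

    # Rough shape of the external-events call neutron-server makes to
    # the compute API. Endpoint, token and UUIDs are placeholders.
    import json
    import requests

    COMPUTE_API = 'http://controller:8774/v2.1'  # placeholder endpoint
    TOKEN = 'gAAAAA...'                          # placeholder keystone token

    body = {'events': [{
        'name': 'network-vif-plugged',
        'server_uuid': '11111111-2222-3333-4444-555555555555',  # instance
        'tag': '66666666-7777-8888-9999-000000000000',          # port id
        'status': 'completed',
    }]}

    resp = requests.post(COMPUTE_API + '/os-server-external-events',
                         headers={'X-Auth-Token': TOKEN,
                                  'Content-Type': 'application/json'},
                         data=json.dumps(body))
    print(resp.status_code, resp.json())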
From scheuran at linux.vnet.ibm.com  Tue Apr 10 14:18:41 2018
From: scheuran at linux.vnet.ibm.com (Andreas Scheuring)
Date: Tue, 10 Apr 2018 16:18:41 +0200
Subject: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event
In-Reply-To:
References:
Message-ID:

Yes, that's how it works!

---
Andreas Scheuring (andreas_s)

On 10. Apr 2018, at 16:05, Matt Riedemann wrote:

On 4/9/2018 9:57 PM, Chen CH Ji wrote:
> Could you please help to share whether this kind of event is sent by
> neutron-server or the neutron agent? I searched the neutron code in
> [1][2]; does this mean the agent itself needs to tell the neutron
> server that the device (VIF) is up, and then the neutron server sends
> a notification to nova through the REST API, which is in turn consumed
> by the compute node?
> [1] https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct
> [2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264

I believe the neutron agent is the one that is getting (or polling) the
information from the underlying network backend when VIFs are plugged or
unplugged from a host, then routes that information via RPC to the
neutron server, which then sends an os-server-external-events request to
the compute REST API, which then routes the event information down to
the nova-compute host where the instance is currently running.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dimitri.pertin at inria.fr  Tue Apr 10 14:57:14 2018
From: dimitri.pertin at inria.fr (Dimitri Pertin)
Date: Tue, 10 Apr 2018 16:57:14 +0200
Subject: [openstack-dev] [FEMDC] Wed. 11 Apr - FEMDC IRC Meeting 15:00 UTC
Message-ID: <7e1cfde7-f51f-b687-bdce-2ccfbf4bb993@inria.fr>

Dear all,

This is a gentle reminder for tomorrow's meeting at 15:00 UTC. A draft
of the agenda is available at line 391; you are very welcome to add any
item:

https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018

Best regards,

Dimitri

From mnaser at vexxhost.com  Tue Apr 10 15:45:30 2018
From: mnaser at vexxhost.com (Mohammed Naser)
Date: Tue, 10 Apr 2018 11:45:30 -0400
Subject: [openstack-dev] [puppet] Add new puppet-senlin repository to
	Puppet OpenStack
In-Reply-To: <4702c3a6.1161c69.162ae9a8b76.Coremail.chdzsp@163.com>
References: <799a5ac3.115da68.162ae274d2b.Coremail.chdzsp@163.com>
	<4702c3a6.1161c69.162ae9a8b76.Coremail.chdzsp@163.com>
Message-ID:

Done! Thanks :)

On Tue, Apr 10, 2018 at 4:09 AM, 钟生平 wrote:
> Hi, Mohammed
>
> Needs PTL +1, can you review?
>
> Thanks,
> Zhong Shengping
>
>
> At 2018-04-10 14:03:54, "钟生平" wrote:
>
> Hi, core members
>
> I have added a new puppet-senlin repository to Puppet OpenStack[1][2][3]. I'm
> going to work on this module. Please review.
>
> [1] https://review.openstack.org/#/c/559537/
> [2] https://review.openstack.org/#/c/559539/
> [3] https://review.openstack.org/#/c/559563/
>
> Thanks,
> Zhong Shengping
>

--
Mohammed Naser — vexxhost
-----------------------------------------------------
D. 514-316-8872
D. 800-910-1726 ext. 200
E. mnaser at vexxhost.com
W. http://vexxhost.com

From davanum at gmail.com  Tue Apr 10 19:14:59 2018
From: davanum at gmail.com (Davanum Srinivas)
Date: Tue, 10 Apr 2018 15:14:59 -0400
Subject: [openstack-dev] [Heat][Trove][Vitrage] (Re:
	[oslo][requirements][vitrage] oslo.service 1.28.1 breaks Vitrage gate)
Message-ID:

Dear Trove, Vitrage, Heat teams,

Can you please see if this review will affect your CI jobs?
https://review.openstack.org/#/c/558206/

I've tested at least the Heat unit tests, but I need you to test as
well, as a previous version of this change broke things.

Thanks,
Dims

On Fri, Dec 15, 2017 at 5:51 AM, ChangBo Guo wrote:
> Thanks for raising this, Oslo team will revert the change in
> https://review.openstack.org/#/c/528202/
>
> 2017-12-14 23:58 GMT+08:00 Afek, Ifat (Nokia - IL/Kfar Sava):
>>
>> Hi,
>>
>> The latest release of oslo.service 1.28.1 breaks the Vitrage gate. We are
>> creating several threads and timers [1], but only the first thread is
>> executed. We noticed that the Trove project already reported this problem [2].
>>
>> Please help us fix it.
>>
>> Thanks,
>> Ifat.
>>
>> [1] https://github.com/openstack/vitrage/blob/master/vitrage/datasources/services.py
>> [2] https://review.openstack.org/#/c/527755/
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>
> --
> ChangBo Guo(gcb)
> Community Director @EasyStack
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Davanum Srinivas :: https://twitter.com/dims

From emilien at redhat.com  Tue Apr 10 19:45:07 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 10 Apr 2018 12:45:07 -0700
Subject: [openstack-dev] [tripleo] The Weekly Owl - 16th Edition
Message-ID:

Note: this is the sixteenth edition of a weekly update of what happens in TripleO.
The goal is to provide a short reading (less than 5 minutes) to learn
where we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129035.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Rocky milestone 1 is next week! Please update your blueprints status
accordingly.

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Rover is Arx and Ruck is Rafael. Please let them know any new CI issue.
+--> Master promotion is 5 days, Queens is 12 days, Pike is 17 days and
Ocata is 17 days.
+--> Efforts around a simple "keystone-only" CI job across all branches.
+--> Good progress on running Tempest for undercloud jobs, also tempest
containerization.
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and
https://goo.gl/D4WuBP

+-------------+
| Upgrades |
+-------------+

+--> Progress on FFU CLI in tripleoclient and FFU/Ceph as well.
+--> Work on CI jobs for undercloud upgrades and containerized undercloud
upgrades.
+--> Need reviews, see etherpad
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+

+--> Good progress on upgrades, now working on THT tasks to upgrade
undercloud services
+--> Focusing on UX problems: logs, permissions, directories, complete
deployment message
+--> Container workflow is still work in progress, and is needed to make
progress on CI / container updates
+--> We had to revert containerized undercloud testing on fs010:
https://bugs.launchpad.net/tripleo/+bug/1762422
+--> More:
https://etherpad.openstack.org/p/tripleo-containers-squad-status

+----------------------+
| config-download |
+----------------------+

+--> Moving to config-download by default is imminent.
+--> ceph/octavia/skydive migration is wip.
+--> Inventory improvements in progress.
+--> Polishing tripleo-common deploy_plan and messaging patches to get
correct deployment state tracking.
+--> UI work is work in progress.
+--> More:
https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--------------+
| Integration |
+--------------+

+--> Migrate to new ceph-ansible container images naming style.
+--> Config-download transition is still ongoing.
+--> More:
https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+

+--> Efforts on config-download integration
+--> Investigating undeploy_plan workflow in tripleo-common
+--> Maintaining pending UI patches to be up to date with tripleo-common
changes
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+

+--> OpenShift on OpenStack validations in progress
+--> Starting work on Custom validations/swift storage
+--> Need reviews, see etherpad
+--> More:
https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+

+--> No updates this week.
+--> More:
https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+

+--> Need reviews, see etherpad.
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-----------+
| Security |
+-----------+

+--> Last meeting was about Public TLS by default, Limit TripleO users
and Security Hardening.
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+------------+
| Owl fact |
+------------+

Did you know owls are good hunters? Check this video:
https://youtu.be/a68fIQzaDBY?t=39
Don't mess with owls ;-)

Thanks all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From kennelson11 at gmail.com  Tue Apr 10 20:57:21 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 10 Apr 2018 20:57:21 +0000
Subject: [openstack-dev] [tc] Proposal to start TC elections earlier
In-Reply-To:
References:
Message-ID:

Thanks Chris! Sounds like a good plan to me.
Added bonus for election officials, it pushes things a little further back from the Summit to avoid conflict with prep/planning. -Kendall (diablo_rojo) On Tue, Apr 10, 2018 at 5:17 AM Chris Dent wrote: > > I've posted a review > > https://review.openstack.org/#/c/560002/ > > that suggests we do elections for the TC with a bigger gap between > the election and summit. From the commit message: > > The existing 3 weeks prior to summit target for TC elections can > be problematic for travel planning for candidates who might only > go to summit if they win their election, or might plan a > different length of trip depending on their role(s) in the > community. This change makes the target for the election to be > six weeks prior to summit to ease that planning. > > In addition to helping with travel concerns, it also means that > any newly elected TC members will be more involved in planning > for the forum at the summit. > > If approved this change would go in effect for the second > election in 2018. The first election of 2018 is already > scheduled. > > If you have opinions on this please comment here or on the review. > > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: > @anticdent__________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Apr 10 21:47:48 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 10 Apr 2018 16:47:48 -0500 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <14e34b97-32c2-b4a2-9d82-8a19f37737d9@nemebean.com> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <14e34b97-32c2-b4a2-9d82-8a19f37737d9@nemebean.com> Message-ID: On 04/09/2018 04:32 PM, Ben Nemec wrote: > > > On 04/09/2018 01:12 PM, Ben Nemec wrote: >> >> >> On 04/06/2018 04:02 AM, Jens Harbott wrote: >>> 2018-04-05 19:26 GMT+00:00 Matthew Thode : >>>> On 18-04-05 20:11:04, Graham Hayes wrote: >>>>> On 05/04/18 16:47, Matthew Thode wrote: >>>>>> eventlet-0.22.1 has been out for a while now, we should try and >>>>>> use it. >>>>>> Going to be fun times. >>>>>> >>>>>> I have a review projects can depend upon if they wish to test. >>>>>> https://review.openstack.org/533021 >>>>> >>>>> It looks like we may have an issue with oslo.service - >>>>> https://review.openstack.org/#/c/559144/ is failing gates. >>>>> >>>>> Also - what is the dance for this to get merged? It doesn't look >>>>> like we >>>>> can merge this while oslo.service has the old requirement >>>>> restrictions. >>>>> >>>> >>>> The dance is as follows. >>>> >>>> 0. provide review for projects to test new eventlet version >>>>     projects using eventlet should make backwards compat code >>>> changes at >>>>     this time. >>> >>> But this step is currently failing. Keystone doesn't even start when >>> eventlet-0.22.1 is installed, because loading oslo.service fails with >>> its pkg definition still requiring the capped eventlet: >>> >>> http://logs.openstack.org/21/533021/4/check/legacy-requirements-integration-dsvm/7f7c3a8/logs/screen-keystone.txt.gz#_Apr_05_16_11_27_748482 >>> >>> >>> So it looks like we need to have an uncapped release of oslo.service >>> before we can proceed here. 
>> >> I've proposed a patch[1] to uncap eventlet in oslo.service, but it's >> failing the unit tests[2].  I'll look into it, but I thought I'd >> provide an update in the meantime. > > Oh, the unit test failures are unrelated.  Apparently the unit tests > have been failing in oslo.service for a while.  dims has a patch up at > https://review.openstack.org/#/c/559831/ that looks to be addressing the > problem, although it's also failing the unit tests. :-/ We finally got the uncap patch merged and a release request is up at https://review.openstack.org/560163. Hopefully once that is in u-c we'll be past this issue. > >> >> 1: https://review.openstack.org/559800 >> 2: >> http://logs.openstack.org/00/559800/1/check/openstack-tox-py27/cef8fcb/job-output.txt.gz >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Tue Apr 10 18:48:29 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Tue, 10 Apr 2018 14:48:29 -0400 Subject: [openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018 Message-ID: <20180410184829.GA16085@localhost.localdomain> Hello from Infra. This is our weekly reminder of the upcoming gerrit replacement. We'll continue to send these announcements out up until the day of the migration. If you have any questions, please contact us in #openstack-infra. --- It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. The new IP addresses will be as follows: IPv4 -> 104.130.246.32 IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229 They will replace these current production IP addresses: IPv4 -> 104.130.246.91 IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525 We understand that some users may be running from egress-filtered networks with port 29418/tcp explicitly allowed to the current review.openstack.org IP addresses, and so are providing this information as far in advance as we can to allow them time to update their firewalls accordingly. Note that some users dealing with egress filtering may find it easier to switch their local configuration to use Gerrit's REST API via HTTPS instead, and the current release of git-review has support for that workflow as well. http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html We will follow up with final confirmation in subsequent announcements. Thanks, Paul From johnsomor at gmail.com Tue Apr 10 22:56:44 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 10 Apr 2018 15:56:44 -0700 Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID: I echo Ben's question about what is the recommended replacement. 
Not long ago we were advised to use WSME over the alternatives, which is
why Octavia is using the WSME types and pecan extension.

Thanks,
Michael

On Mon, Apr 9, 2018 at 10:16 AM, Ben Nemec wrote:
>
> On 04/09/2018 07:22 AM, Chris Dent wrote:
>>
>> A little over two years ago I sent a reminder that WSME is not being
>> actively maintained:
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html
>>
>> Today I was reminded of this because a random (typo-related)
>> patchset demonstrated that the tests were no longer passing and
>> fixing them is enough of a chore that I (at least temporarily)
>> marked one test as an expected failure.
>>
>> https://review.openstack.org/#/c/559717/
>>
>> The following projects appear to still use WSME:
>>
>> aodh
>> blazar
>> cloudkitty
>> cloudpulse
>> cyborg
>> glance
>> gluon
>> iotronic
>> ironic
>> magnum
>> mistral
>> mogan
>> octavia
>> panko
>> qinling
>> radar
>> ranger
>> searchlight
>> solum
>> storyboard
>> surveil
>> terracotta
>> watcher
>>
>> Most of these are using the 'types' handling in WSME and sometimes
>> the pecan extension, and not the (potentially broken) Flask
>> extension, so things should be stable.
>>
>> However: nobody is working on keeping WSME up to date. It is not a
>> good long term investment.
>
> What would be the recommended alternative, either for new work or as a
> migration path for existing projects?
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From kennelson11 at gmail.com  Wed Apr 11 00:00:11 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 11 Apr 2018 00:00:11 +0000
Subject: [openstack-dev] [All][Elections] Candidate Proposals for TC
	Positions Are Now Open
Message-ID:

Hello All!

Nominations for the Technical Committee positions (7 positions) are now
open and will remain open until 2018-04-17T23:45.

All nominations must be submitted as a text file to the
openstack/election repository as explained on the election website[1].
Please note that the name of the file should match an email address in
the foundation member profile of the candidate. Also for TC candidates,
election officials refer to the community member profiles at [2], so
please take this opportunity to ensure that your profile contains
current information.

Candidates for the Technical Committee Positions: Any Foundation
individual member can propose their candidacy for an available,
directly-elected TC seat.

The election will be held from 2018-04-23T23:59 through to
2018-04-30T23:45. The electorate are the Foundation individual members
that are also committers for one of the official teams[3] over the
Pike-Queens timeframe (22 Feb 2017 to 28 Feb 2018), as well as the
extra-ATCs who are acknowledged by the TC[4].

Please see the website[5] for additional details about this election.

Please find below the timeline:

TC nomination starts @ 2018-04-10T23:59
TC nomination ends @ 2018-04-17T23:45
TC campaigning starts @ 2018-04-17T23:45
TC campaigning ends @ 2018-04-22T23:45
TC elections starts @ 2018-04-23T23:59
TC elections ends @ 2018-04-30T23:45

If you have any questions please be sure to either ask them on the
mailing list or to the election officials[6].
Thank you,

Kendall Nelson (diablo_rojo)

[1] http://governance.openstack.org/election/#how-to-submit-your-candidacy
[2] http://www.openstack.org/community/members/
[3] https://governance.openstack.org/tc/reference/projects/
[4] https://releases.openstack.org/rocky/schedule.html#p-extra-atcs
[5] https://governance.openstack.org/election/
[6] http://governance.openstack.org/election/#election-officials
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emilien at redhat.com  Wed Apr 11 00:50:42 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 10 Apr 2018 17:50:42 -0700
Subject: [openstack-dev] [tripleo] roadmap on containers workflow
Message-ID:

Greetings,

Steve Baker and I had a quick chat today about the work that is being
done around the containers workflow in the Rocky cycle. If you're not
familiar with the topic, I suggest to first read the blueprint to
understand the context here:
https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow

One of the great outcomes of this blueprint is that in Rocky, the
operator won't have to run all the "openstack overcloud container"
commands to prepare the container registry and upload the containers.
Indeed, it'll be driven by Heat and Mistral mostly.

But today our discussion extended to 2 use-cases that we're going to
explore to find how we can address them:

1) I'm a developer and want to deploy a containerized undercloud with
customized containers (more or less related to the all-in-one
discussions on another thread [1]).
2) I'm submitting a patch in tripleo-common (let's say a workflow) and
need my patch to be tested when the undercloud is containerized (see [2]
for an excellent example).

Both cases would require additional things:

- The container registry needs to be deployed *before* actually
installing the undercloud.
- We need a tool to update containers from this registry *before*
deploying them. We already have this tool in place in our CI for the
overcloud (see [3] and [4]). Now we need a similar thing for the
undercloud.

Next steps:

- Agree that we need to deploy the container-registry before the
undercloud.
- If agreed, we'll create a new Ansible role called
ansible-role-container-registry that for now will deploy exactly what we
have in TripleO, without extra features.
- Drive the playbook runtime from tripleoclient to bootstrap the
container registry (which of course could be disabled in
undercloud.conf).
- Create another Ansible role that would re-use the container-check tool
but the idea is to provide a role to modify containers when needed, and
we could also control it from tripleoclient. The role would be using the
ContainerImagePrepare parameter, which Steve is working on right now.

Feedback is welcome, thanks.

[1] All-In-One thread:
http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
[2] Bug report when undercloud is containerized:
https://bugs.launchpad.net/tripleo/+bug/1762422
[3] Tool to update containers if needed:
https://github.com/imain/container-check
[4] Container-check running in TripleO CI:
https://review.openstack.org/#/c/558885/ and
https://review.openstack.org/#/c/529399/
--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
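As a rough idea of what the registry-bootstrap step described above
boils down to, here is a sketch using the Docker SDK for Python; the
port and container name are illustrative assumptions, and the real role
would template these out rather than hardcode them:

    # Sketch: run a local Docker registry before installing the
    # undercloud. Port and name are illustrative; a real role would
    # make them configurable.
    import docker

    client = docker.from_env()
    registry = client.containers.run(
        'registry:2',
        name='docker-registry',
        detach=True,
        restart_policy={'Name': 'always'},
        # TripleO conventionally serves its registry on port 8787.
        ports={'5000/tcp': 8787})
    print('registry running:', registry.name)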
From jichenjc at cn.ibm.com  Wed Apr 11 02:06:09 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 11 Apr 2018 10:06:09 +0800
Subject: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event
In-Reply-To:
References:
Message-ID:

Thanks for your info, really helpful

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian
District, Beijing 100193, PRC

From: Andreas Scheuring
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/10/2018 10:19 PM
Subject: Re: [openstack-dev] [nova] Changes to ComputeVirtAPI.wait_for_instance_event

Yes, that's how it works!

---
Andreas Scheuring (andreas_s)

On 10. Apr 2018, at 16:05, Matt Riedemann wrote:

On 4/9/2018 9:57 PM, Chen CH Ji wrote:
> Could you please help to share whether this kind of event is sent by
neutron-server or the neutron agent? I searched the neutron code in
[1][2]; does this mean the agent itself needs to tell the neutron server
that the device (VIF) is up, and then the neutron server sends a
notification to nova through the REST API, which is in turn consumed by
the compute node?
> [1] https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct
> [2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264

I believe the neutron agent is the one that is getting (or polling) the
information from the underlying network backend when VIFs are plugged or
unplugged from a host, then routes that information via RPC to the
neutron server, which then sends an os-server-external-events request to
the compute REST API, which then routes the event information down to
the nova-compute host where the instance is currently running.

--

Thanks,

Matt

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=tIntFpZ0ffp-_h5CsqN1I9tv64hW2xugxBXaxDn7Z_I&s=z2jOgMD7B3XFoNsUHTtIO6hWKYXH-Dm4L4P0-u-oSSw&e=

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: graycol.gif
Type: image/gif
Size: 105 bytes
Desc: not available
URL:

From hongbin034 at gmail.com  Wed Apr 11 02:12:51 2018
From: hongbin034 at gmail.com (Hongbin Lu)
Date: Tue, 10 Apr 2018 22:12:51 -0400
Subject: [openstack-dev] [zun] zun-api error
In-Reply-To:
References:
Message-ID:

Hi Murali,

This is the guide for installing and configuring kuryr-libnetwork:
https://docs.openstack.org/kuryr-libnetwork/queens/install/ (it is for
the Queens version).

Several questions from me:

* Which version of kuryr-libnetwork did you install? I ask this question
because you mentioned that you were using the Pike version, but
kuryr-libnetwork doesn't have a stable/pike branch. The closest matching
version is 0.2.0
(https://github.com/openstack/kuryr-libnetwork/tree/0.2.0), which was
cut to match the Pike integration release.
* Did you use dual stack (ipv4 & v6)? If yes, see if you were hitting
this bug: https://bugs.launchpad.net/kuryr-libnetwork/+bug/1668803

To debug further, could you provide the following information?

* The log of kuryr-libnetwork (ideally with debug mode enabled).
* The output of the command "pip freeze"

Best regards,
Hongbin
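As an aside, the IpamDriver error quoted below can be narrowed down by
asking Neutron which port currently holds the conflicting fixed IP; a
sketch with python-requests, where the endpoint and token are
placeholders:

    # Sketch: look up the Neutron port that holds the conflicting fixed
    # IP from the IpamDriver error below. Endpoint/token are placeholders.
    import requests

    NEUTRON_API = 'http://controller:9696/v2.0'  # placeholder endpoint
    TOKEN = 'gAAAAA...'                          # placeholder keystone token

    resp = requests.get(NEUTRON_API + '/ports',
                        headers={'X-Auth-Token': TOKEN},
                        params={'fixed_ips': 'ip_address=3.3.3.12'})
    for port in resp.json().get('ports', []):
        print(port['id'], port['status'], port.get('device_owner'))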
On Mon, Apr 9, 2018 at 9:41 PM, Murali B wrote:

> Hi Hongbin Lu,
>
> After I brought the etcd service up and tried to create a container I
> see the below error and my container is in the error state
>
> Could you please tell me if I need to change any configuration in neutron
> for docker kuryr
>
> ckercfg'] find_config_file /usr/local/lib/python2.7/dist-
> packages/docker/utils/config.py:21
> 2018-04-09 16:47:44.058 41736 DEBUG docker.utils.config
> [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] No config file found
> find_config_file /usr/local/lib/python2.7/dist-
> packages/docker/utils/config.py:28
> 2018-04-09 16:47:44.345 41736 ERROR zun.compute.manager
> [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Error occurred while
> calling Docker start API: Docker internal error: 500 Server Error: Internal
> Server Error ("IpamDriver.RequestAddress: Requested ip address
> {'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e', 'ip_address':
> u'3.3.3.12'} already belongs to a bound Neutron port:
> 401a5599-2309-482e-b100-e2317c4118cf").: DockerError: Docker internal
> error: 500 Server Error: Internal Server Error ("IpamDriver.RequestAddress:
> Requested ip address {'subnet_id': u'fb768eca-8ad9-4afc-99f7-e13b9c36096e',
> 'ip_address': u'3.3.3.12'} already belongs to a bound Neutron port:
> 401a5599-2309-482e-b100-e2317c4118cf").
> 2018-04-09 16:47:44.372 41736 DEBUG oslo_concurrency.lockutils
> [req-0afc6b91-e50e-4a5a-a673-c2cecd6f2986 - - - - -] Lock
> "b861d7cc-3e18-4037-8eaf-c6d0076b02a5" released by
> "zun.compute.manager.do_container_create" :: held 5.163s inner
> /usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:285
> 2018-04-09 16:47:48.493 41610 DEBUG eventlet.wsgi.server [-] (41610)
> accepted ('10.11.142.2', 60664) server /usr/lib/python2.7/dis
>>> >>> Best regards, >>> Hongbin >>> >>> On Thu, Apr 5, 2018 at 7:27 PM, Murali B wrote: >>> >>>> Hi Hongbin, >>>> >>>> Thank you for your help >>>> >>>> As per the our discussion here is the output for my current api on >>>> pike. I am not sure which version of zun client client I should use for >>>> pike >>>> >>>> root at cluster3-2:~/python-zunclient# zun service-list >>>> ERROR: Not Acceptable (HTTP 406) (Request-ID: >>>> req-be69266e-b641-44b9-9739-0c2d050f18b3) >>>> root at cluster3-2:~/python-zunclient# zun --debug service-list >>>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-keycloak >>>> = vitrageclient.auth:VitrageKeycloakLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('vitrage-noauth >>>> = vitrageclient.auth:VitrageNoAuthLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('noauth = >>>> cinderclient.contrib.noauth:CinderNoAuthLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v2token = >>>> keystoneauth1.loading._plugins.identity.v2:Token') >>>> DEBUG (extension:180) found extension EntryPoint.parse('none = >>>> keystoneauth1.loading._plugins.noauth:NoAuth') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oauth1 = >>>> keystoneauth1.extras.oauth1._loading:V3OAuth1') >>>> DEBUG (extension:180) found extension EntryPoint.parse('admin_token = >>>> keystoneauth1.loading._plugins.admin_token:AdminToken') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcauthcode >>>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAuth >>>> orizationCode') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v2password = >>>> keystoneauth1.loading._plugins.identity.v2:Password') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3samlpassword >>>> = keystoneauth1.extras._saml2._loading:Saml2Password') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3password = >>>> keystoneauth1.loading._plugins.identity.v3:Password') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3adfspassword >>>> = keystoneauth1.extras._saml2._loading:ADFSPassword') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcaccesstoken >>>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectAccessToken') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcpassword >>>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectPassword') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3kerberos = >>>> keystoneauth1.extras.kerberos._loading:Kerberos') >>>> DEBUG (extension:180) found extension EntryPoint.parse('token = >>>> keystoneauth1.loading._plugins.identity.generic:Token') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3oidcclientcredentials >>>> = keystoneauth1.loading._plugins.identity.v3:OpenIDConnectClie >>>> ntCredentials') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3tokenlessauth >>>> = keystoneauth1.loading._plugins.identity.v3:TokenlessAuth') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3token = >>>> keystoneauth1.loading._plugins.identity.v3:Token') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3totp = >>>> keystoneauth1.loading._plugins.identity.v3:TOTP') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3applicationcredential >>>> = keystoneauth1.loading._plugins.identity.v3:ApplicationCredential') >>>> DEBUG (extension:180) found extension EntryPoint.parse('password = >>>> 
keystoneauth1.loading._plugins.identity.generic:Password') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v3fedkerb = >>>> keystoneauth1.extras.kerberos._loading:MappedKerberos') >>>> DEBUG (extension:180) found extension EntryPoint.parse('v1password = >>>> swiftclient.authv1:PasswordLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('token_endpoint >>>> = openstackclient.api.auth_plugin:TokenEndpoint') >>>> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-basic >>>> = gnocchiclient.auth:GnocchiBasicLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('gnocchi-noauth >>>> = gnocchiclient.auth:GnocchiNoAuthLoader') >>>> DEBUG (extension:180) found extension EntryPoint.parse('aodh-noauth = >>>> aodhclient.noauth:AodhNoAuthLoader') >>>> DEBUG (session:372) REQ: curl -g -i -X GET http://ubuntu16:35357/v3 -H >>>> "Accept: application/json" -H "User-Agent: zun keystoneauth1/3.4.0 >>>> python-requests/2.18.1 CPython/2.7.12" >>>> DEBUG (connectionpool:207) Starting new HTTP connection (1): ubuntu16 >>>> DEBUG (connectionpool:395) http://ubuntu16:35357 "GET /v3 HTTP/1.1" >>>> 200 248 >>>> DEBUG (session:419) RESP: [200] Date: Thu, 05 Apr 2018 23:11:07 GMT >>>> Server: Apache/2.4.18 (Ubuntu) Vary: X-Auth-Token X-Distribution: Ubuntu >>>> x-openstack-request-id: req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >>>> Content-Length: 248 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive >>>> Content-Type: application/json >>>> RESP BODY: {"version": {"status": "stable", "updated": >>>> "2017-02-22T00:00:00Z", "media-types": [{"base": "application/json", >>>> "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.8", >>>> "links": [{"href": "http://ubuntu16:35357/v3/", "rel": "self"}]}} >>>> >>>> DEBUG (session:722) GET call to None for http://ubuntu16:35357/v3 used >>>> request id req-3b1a12cc-fb3f-4d05-87fc-d2a1ff43395c >>>> DEBUG (base:175) Making authentication request to >>>> http://ubuntu16:35357/v3/auth/tokens >>>> DEBUG (connectionpool:395) http://ubuntu16:35357 "POST /v3/auth/tokens >>>> HTTP/1.1" 201 10333 >>>> DEBUG (base:180) {"token": {"is_domain": false, "methods": >>>> ["password"], "roles": [{"id": "4000a662be2d47fd8fdf5a0fef66767d", >>>> "name": "admin"}], "expires_at": "2018-04-06T00:11:08.000000Z", "project": >>>> {"domain": {"id": "default", "name": "Default"}, "id": >>>> "a391261cffba4f4c827ab7420a352fe1", "name": "admin"}, "catalog": >>>> [{"endpoints": [{"url": "http://cluster3-2:9517/v1", "interface": >>>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>>> "5a634bafa38c45dbb571f0edb3702101"}, {"url": "http://cluster3-2:9517/v1", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "8926d37d276a4fe49df66bb513f7906a"}, {"url": " >>>> http://cluster3-2:9517/v1", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "a74e1b4faf39436aa5d6f9b446ceee92"}], >>>> "type": "container-zun", "id": "025154eef222461da9edcfe32ae79e5e", >>>> "name": "zun"}, {"endpoints": [{"url": "http://ubuntu16:9001", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "3a94c0df20da47d1b922541a87576ab0"}, {"url": " >>>> http://ubuntu16:9001", "interface": "internal", "region": "RegionOne", >>>> "region_id": "RegionOne", "id": "5fcab2a59c72433581510d7aafe29961"}, >>>> {"url": "http://ubuntu16:9001", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "71e314291a4b4c648aa5ba662b216fa6"}], >>>> 
"type": "dns", "id": "07677b58ad4d469d80dbda8e9fa908bc", "name": >>>> "designate"}, {"endpoints": [{"url": "http://ubuntu16:8776/v2/a3912 >>>> 61cffba4f4c827ab7420a352fe1", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "4d56ee7967994c869239007146e52ab8"}, >>>> {"url": "http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "9845138d25ec41b1a7102d8365f1b9c7"}, {"url": " >>>> http://ubuntu16:8776/v2/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "f99f9bf4b0eb4e19aa8dbe72fc13e648"}], "type": "volumev2", "id": >>>> "077bd5ecfc59499ab84f49e410efef4f", "name": "cinderv2"}, {"endpoints": >>>> [{"url": "http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "355c6c323653469c8315d5dea2998b0d"}, {"url": " >>>> http://ubuntu16:8004/v1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "841768ec3edb42d7b18fe6a2a17f4dbc"}, {"url": " >>>> http://10.11.142.2:8004/v1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "afdbc1d2a5114cd9b0714331eb227ba9"}], "type": "orchestration", >>>> "id": "116243d61e3a4c90b7144d6a8b5a170a", "name": "heat"}, >>>> {"endpoints": [{"url": "http://ubuntu16:8778", "interface": >>>> "internal", "region": "RegionOne", "region_id": "RegionOne", "id": >>>> "2dacce3eed484464b3f521b7b2720cd9"}, {"url": "http://ubuntu16:8778", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "5300f9ae336c41b8a8bb93400db35a30"}, {"url": " >>>> http://ubuntu16:8778", "interface": "admin", "region": "RegionOne", >>>> "region_id": "RegionOne", "id": "5c7e2cc977f74051b0ed104abb1d46a9"}], >>>> "type": "placement", "id": "1d270e2d3d4f488e82597097af933e7a", "name": >>>> "placement"}, {"endpoints": [{"url": "http://ubuntu16:8042", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "337f147396f143679e6cf7fbdd3601ab"}, {"url": " >>>> http://ubuntu16:8042", "interface": "internal", "region": "RegionOne", >>>> "region_id": "RegionOne", "id": "a97d660772e64894b4b13092d7719298"}, >>>> {"url": "http://ubuntu16:8042", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "bb5caf186c9947aca31e6ee2a37f6bbd"}], >>>> "type": "alarming", "id": "2a19c1a28a42433caa8eb919910ec06f", "name": >>>> "aodh"}, {"endpoints": [], "type": "volume", "id": >>>> "39c740b891764e4a9081773709269848", "name": "cinder"}, {"endpoints": >>>> [{"url": "http://ubuntu16:8041", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "9d455913a5fb4f15bbe15740f4dee260"}, >>>> {"url": "http://ubuntu16:8041", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "c5c2471db1cb4ae7a1f3e847404d4b37"}, >>>> {"url": "http://ubuntu16:8041", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "cc12daed5ea342a1a47602720589cb9e"}], >>>> "type": "metric", "id": "39fdf2d5300343aa8ebe5509d29ba7ce", "name": >>>> "gnocchi"}, {"endpoints": [{"url": "http://cluster3-2:9890", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "1c7ddc56ba984afd8187cd1894a75bf1"}, {"url": " >>>> http://cluster3-2:9890", 
"interface": "admin", "region": "RegionOne", >>>> "region_id": "RegionOne", "id": "888925c4fc8b48859f086860333c3ab4"}, >>>> {"url": "http://cluster3-2:9890", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "9bfd7198dab14f6a8b7eba444f920020"}], >>>> "type": "nfv-orchestration", "id": "3da88eae843a4949806186db8a9a3bd0", >>>> "name": "tacker"}, {"endpoints": [{"url": "http://10.11.142.2:8999", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "32880f809a2f45598a9838e4b168ce5b"}, {"url": " >>>> http://10.11.142.2:8999", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "530711f56f234ad19775fae65774c0ab"}, >>>> {"url": "http://10.11.142.2:8999", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "8d7493ad752b453b87d789d0ec5cae93"}], >>>> "type": "rca", "id": "55f78369ea5e40e3b9aa9ded854cb163", "name": >>>> "vitrage"}, {"endpoints": [{"url": "http://10.11.142.2:5000/v3/", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "afba4b58fd734baeaed94f8f2380a986"}, {"url": " >>>> http://ubuntu16:5000/v3/", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "b4b864acfc1746b3ad2d22c6a28e1361"}, >>>> {"url": "http://ubuntu16:35357/v3/", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "bf256df5f8d34e9c80c00b78da122118"}], >>>> "type": "identity", "id": "58b4ff04dc764fc2aae4bfd9d0f1eb8e", "name": >>>> "keystone"}, {"endpoints": [{"url": "http://ubuntu16:8776/v3/a3912 >>>> 61cffba4f4c827ab7420a352fe1", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "260f8b9e9e214cc1a39407517b3ca826"}, >>>> {"url": "http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "81adeaccba1c4203bddb7734f23116a8"}, {"url": " >>>> http://ubuntu16:8776/v3/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "e63332e8b15e43c6b9c331d9ee8551ab"}], "type": "volumev3", "id": >>>> "8cd6101718e94ee198cf9ba9894bf1c9", "name": "cinderv3"}, {"endpoints": >>>> [{"url": "http://ubuntu16:9696", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "65a0b4233436428ab42aa3b40b1ce53f"}, >>>> {"url": "http://ubuntu16:9696", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "b8354dd727154056b3c9b81b89054bab"}, >>>> {"url": "http://ubuntu16:9696", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "ca44db85238b46cf9fbb6dc6f1d9dff5"}], >>>> "type": "network", "id": "ade912885a73431f95a3a01d8a8e6498", "name": >>>> "neutron"}, {"endpoints": [{"url": "http://ubuntu16:8000/v1", >>>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "5d7559010ea94cca9edd7ab6213f6b2c"}, {"url": " >>>> http://ubuntu16:8000/v1", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "af77025677284808b0715488e22729d4"}, >>>> {"url": "http://10.11.142.2:8000/v1", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "c17b650eccf14045af49d5e9d050e875"}], >>>> "type": "cloudformation", "id": "b04f735f46e743969e2bb0fff3aee1b5", >>>> "name": "heat-cfn"}, {"endpoints": [{"url": " >>>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": 
"public", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "18580f7a6dea4c53bc66d161e7e0a71e"}, {"url": " >>>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "b4a8575704a4426494edc57551f40e58"}, {"url": " >>>> http://ubuntu16:8774/v2.1/a391261cffba4f4c827ab7420a352fe1", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "c41ec544b61c41098c07030bc84ba2a0"}], "type": "compute", "id": >>>> "b06f4aa21a4a488c8f0c5a835e639bd3", "name": "nova"}, {"endpoints": >>>> [{"url": "http://ubuntu16:9292", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "4ed27e537ca34b6fb93a8c72d8921d24"}, >>>> {"url": "http://ubuntu16:9292", "interface": "internal", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "ab0c37600ecf45d797e7972dc6a4fde2"}, >>>> {"url": "http://ubuntu16:9292", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "f4a0f97be4f343d698ea12633e3823d6"}], >>>> "type": "image", "id": "bbe4fbb4a1d7495f948faa9baf1e3828", "name": >>>> "glance"}, {"endpoints": [{"url": "http://ubuntu16:8777", "interface": >>>> "public", "region": "RegionOne", "region_id": "RegionOne", "id": >>>> "3d160f2286634811b24b8abd6ad72c1f"}, {"url": "http://ubuntu16:8777", >>>> "interface": "admin", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "a988e821ff1f4760ae3873c17ab87294"}, {"url": " >>>> http://ubuntu16:8777", "interface": "internal", "region": "RegionOne", >>>> "region_id": "RegionOne", "id": "def8c07174184a0ca26e2f0f26d60a73"}], >>>> "type": "metering", "id": "f4450730522d4342ac6626b81567b36c", "name": >>>> "ceilometer"}, {"endpoints": [{"url": "http://ubuntu16:9511/v1", >>>> "interface": "internal", "region": "RegionOne", "region_id": "RegionOne", >>>> "id": "19e14e5c5c5a4d3db6a6a632db728668"}, {"url": " >>>> http://10.11.142.2:9511/v1", "interface": "public", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "28fb2092bcc748ce88dfb1284ace1264"}, >>>> {"url": "http://10.11.142.2:9511/v1", "interface": "admin", "region": >>>> "RegionOne", "region_id": "RegionOne", "id": "c33f5b4a355d4067aa2e7093606cd153"}], >>>> "type": "container", "id": "fdbcff09ecd545c8ba28bfd96782794a", "name": >>>> "magnum"}], "user": {"domain": {"id": "default", "name": "Default"}, >>>> "password_expires_at": null, "name": "admin", "id": >>>> "3b136545b47b40709b78b1e36cdcdc63"}, "audit_ids": >>>> ["Ad1z5kAmRBehcgxG6-8IYA"], "issued_at": "2018-04-05T23:11:08.000000Z"} >>>> } >>>> DEBUG (session:372) REQ: curl -g -i -X GET >>>> http://10.11.142.2:9511/v1/services -H "OpenStack-API-Version: >>>> container 1.2" -H "X-Auth-Token: {SHA1}7523b440595290414cefa54434fc7c8adbec5c3d" >>>> -H "Content-Type: application/json" -H "Accept: application/json" -H >>>> "User-Agent: None" >>>> DEBUG (connectionpool:207) Starting new HTTP connection (1): 10.11.142.2 >>>> DEBUG (connectionpool:395) http://10.11.142.2:9511 "GET /v1/services >>>> HTTP/1.1" 406 166 >>>> DEBUG (session:419) RESP: [406] Content-Type: application/json >>>> Content-Length: 166 x-openstack-request-id: req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >>>> Server: Werkzeug/0.10.4 Python/2.7.12 Date: Thu, 05 Apr 2018 23:11:09 GMT >>>> RESP BODY: {"errors": [{"status": 406, "code": "", "links": [], >>>> "title": "Not Acceptable", "detail": "Invalid service type for >>>> OpenStack-API-Version header", "request_id": ""}]} >>>> >>>> DEBUG (session:722) GET call to 
container for >>>> http://10.11.142.2:9511/v1/services used request id >>>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0 >>>> DEBUG (shell:705) Not Acceptable (HTTP 406) (Request-ID: >>>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>>> Traceback (most recent call last): >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", >>>> line 703, in main >>>> map(encodeutils.safe_decode, sys.argv[1:])) >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/shell.py", >>>> line 639, in main >>>> args.func(self.cs, args) >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services_shell.py", >>>> line 22, in do_service_list >>>> services = cs.services.list() >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/v1/services.py", >>>> line 70, in list >>>> return self._list(self._path(path), "services") >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/base.py", >>>> line 128, in _list >>>> resp, body = self.api.json_request('GET', url) >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >>>> line 368, in json_request >>>> resp = self._http_request(url, method, **kwargs) >>>> File "/usr/local/lib/python2.7/dist-packages/zunclient/common/httpclient.py", >>>> line 351, in _http_request >>>> error_json.get('debuginfo'), method, url) >>>> NotAcceptable: Not Acceptable (HTTP 406) (Request-ID: >>>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>>> ERROR: Not Acceptable (HTTP 406) (Request-ID: >>>> req-63b7de1b-ef63-4be8-93c1-a27972c9b4c0) >>>> >>>> >>>> >>>> Thanks >>>> -Murali >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Wed Apr 11 03:52:41 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 11 Apr 2018 03:52:41 +0000 Subject: [openstack-dev] [tripleo] The Weekly Owl - 16th Edition In-Reply-To: References: Message-ID: On Tue, 10 Apr 2018 at 19:24 Emilien Macchi wrote: > Note: this is the sixteenth edition of a weekly update of what happens in > TripleO. > The goal is to provide a short reading (less than 5 minutes) to learn > where we are and what we're doing. > Any contributions and feedback are welcome. > Link to the previous version: > http://lists.openstack.org/pipermail/openstack-dev/2018-April/129035.html > > +---------------------------------+ > | General announcements | > +---------------------------------+ > > +--> Rocky milestone 1 is next week! Please update your blueprints status > accordingly) > > +------------------------------+ > | Continuous Integration | > +------------------------------+ > > +--> Rover is Arx and Ruck is Rafael. Please let them know any new CI > issue. > +--> Master promotion is 5 days, Queens is 12 days, Pike is 17 days and > Ocata is 17 days. > FYI.. Queens promoted today Master promotion is 5 days, Queens is 0 days, Pike is 17 days and Ocata is 17 Thanks Emilien! > +--> Efforts around a simple "keystone-only" CI job across all branches. > +--> Good progress on running Tempest for undercloud jobs, also tempest > containerization. > +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and > https://goo.gl/D4WuBP > > +-------------+ > | Upgrades | > +-------------+ > > +--> Progress on FFU CLI in tripleoclient and FFU/Ceph as well. > +--> Work on CI jobs for undercloud upgrades and containerized undercloud > upgrades. 
> +--> Need reviews, see etherpad > +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status > > +---------------+ > | Containers | > +---------------+ > > +--> Good progress on upgrades, now working on THT tasks to upgrade > undercloud services > +--> Focusing on UX problems: logs, permissions, directories, complete > deployment message > +--> Container workflow is still work in progress, and needed to make > progress on CI / container updates > +--> We had to revert containerized undercloud testing on fs010 : > https://bugs.launchpad.net/tripleo/+bug/1762422 > +--> More: > https://etherpad.openstack.org/p/tripleo-containers-squad-status > > +----------------------+ > | config-download | > +----------------------+ > > +--> Moving to config-download by default is imminent. > +--> ceph/octavia/skydive migration is wip. > +--> Inventory improvements in progress. > +--> Polishing tripleo-common deploy_plan and messaging patches to get > correct deployment state tracking. > +--> UI work is work in progress. > +--> More: > https://etherpad.openstack.org/p/tripleo-config-download-squad-status > > +--------------+ > | Integration | > +--------------+ > > +--> Migrate to new ceph-ansible container images naming style. > +--> Config-download transition is still ongoing. > +--> More: > https://etherpad.openstack.org/p/tripleo-integration-squad-status > > +---------+ > | UI/CLI | > +---------+ > > +--> Efforts on config-download integration > +--> Investigating undeploy_plan workflow in tripleo-common > +--> Maintaining pending UI patches to be up to date with tripleo-common > changes > +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status > > +---------------+ > | Validations | > +---------------+ > > +--> OpenShift on OpenStack validations in progress > +--> Starting work on Custom validations/swift storage > +--> Need reviews, see etherpad > +--> More: > https://etherpad.openstack.org/p/tripleo-validations-squad-status > > +---------------+ > | Networking | > +---------------+ > > +--> No updates this week. > +--> More: > https://etherpad.openstack.org/p/tripleo-networking-squad-status > > +--------------+ > | Workflows | > +--------------+ > > +--> Need reviews, see etherpad. > +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status > > +-----------+ > | Security | > +-----------+ > > +--> Last meeting was about Public TLS by default, Limit TripleO users > and Security Hardening. > +--> More: https://etherpad.openstack.org/p/tripleo-security-squad > > +------------+ > | Owl fact | > +------------+ > > Did you know owls are good hunters? Check this video: > https://youtu.be/a68fIQzaDBY?t=39 > Don't mess with owls ;-) > > Thanks all for reading and stay tuned! > -- > Your fellow reporter, Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Wed Apr 11 04:25:07 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Wed, 11 Apr 2018 12:25:07 +0800 Subject: [openstack-dev] [kolla] stable-policy tag has been removed in kolla Message-ID: This has already happened[1] based on talks during the Sydney summit[0], but it seems it has not been noticed by everyone. 
Even though the tag removal is merged, we should still follow rules when backporting patches to stable branches. The rules, I think, should be: - do not break upgrades from the y-1 or z-1 stream - the API should stay backward compatible, and before removing any feature we still need one more cycle to mark it as deprecated, then remove it during the next cycle. On the other hand, some blueprint-like patches such as this[2] could be backported to stable branches, since they won't break anything. [0] https://etherpad.openstack.org/p/SYD-stable-policy [1] https://review.openstack.org/#/c/519685/ [2] https://review.openstack.org/557729 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwamoto at valinux.co.jp Wed Apr 11 09:19:02 2018 From: iwamoto at valinux.co.jp (IWAMOTO Toshihiro) Date: Wed, 11 Apr 2018 18:19:02 +0900 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523282186-sup-2@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> <1523282186-sup-2@lrrr.local> Message-ID: <20180411093423.7A48AB3350@mail.valinux.co.jp> On Mon, 09 Apr 2018 22:58:28 +0900, Doug Hellmann wrote: > > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000: > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > > > > > My understanding of our use of upper constraints was that this should > > > (almost) always be the case for (almost) all dependencies. We should > > > rely on constraints instead of requirements caps. Capping libs like > > > pbr or eventlet and any other that is in use globally is incredibly > > > difficult to work with when you want to uncap it because you have to > > > coordinate globally. Instead if using constraints you just bump the > > > constraint and are done. > > > > Part of the reason that we have the caps it to prevent the tools that > > auto-generate the constraints syncs from considering these versions and > > then depending on the requirements team to strip that from the bot > > change before committing (assuming it passes CI). > > > > Once the work Doug's doing is complete we could consider tweaking the > > tools to use a different mechanism, but that's only part of the reason > > for the caps in g-r. > > > > Yours Tony. > > Now that projects don't have to match the global requirements list > entries exactly we should be able to remove caps from within the > projects and keep caps in the global list for cases like this where we > know we frequently encounter breaking changes in new releases. The > changes to support that were part of > https://review.openstack.org/#/c/555402/ As eventlet has been uncapped in g-r, requirements-check is complaining on unrelated project-local requirement changes. I'm not quite sure, but this doesn't seem to be intended behavior. http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz -- IWAMOTO Toshihiro From zhipengh512 at gmail.com Wed Apr 11 10:07:43 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Wed, 11 Apr 2018 18:07:43 +0800 Subject: [openstack-dev] [cyborg]Weekly Team Meeting April 11, 2018 Message-ID: Hi Team, Our weekly meeting starts at UTC 1400 in #openstack-cyborg as usual. The initial agenda is as follows: 1. Confirmation of new core reviewer promotion, 2. 
Critical Rocky Spec update and discussion 3. open patch discussion Please feel free to suggest new topics any time :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Wed Apr 11 10:39:13 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 11 Apr 2018 11:39:13 +0100 (BST) Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID: On Tue, 10 Apr 2018, Michael Johnson wrote: > I echo Ben's question about what is the recommended replacement. It's a good question. Unfortunately I don't have a good answer. My involvement in WSME is simply the result of submitting some bug fixes in early 2015 and there being no one to review them. Lucas Gomes and I were pressganged into becoming the sole core reviewers for a project that was already languishing. A short answer could be this: There doesn't have to be a replacement. There are people in the community who are active users of WSME, if those people would like to become maintainers of WSME, Lucas and I can make those people core and help them to shepherd the project to an active state. It may be that nothing really needs to change. The reason this is coming up now is because a code change was proposed that failed the gate for unrelated reasons (the pep8 python3 thing mentioned elsewhere). If the existing feature set is sufficient the only real work to do is to keep those features working as we move to python3. Any volunteers? For new projects, I think the standby is Flask + jsonschema. They are both boring and common. I know some people really like django REST framework, but it appears to have lots of magic and magic is bad. The longer answer is just opinion so if the above is enough of an answer you can stop here before I go off on a ramble. I've never really been all that sure on what WSME is for. It describes itself with "simplifies the writing of REST web services by providing simple yet powerful typing, removing the need to directly manipulate the request and the response objects." This is pretty much exactly the opposite of what I want when writing a web service. I want to be closely aware of the request and response and not abstract away the details of HTTP because those details are what makes a web service useful and maintainable. So I tend to avoid typing systems like WSME and object dispatch systems like pecan in favor of tools that are more explicit about the data (both headers and body) coming in and going out, and that make the association between URLs and code explicit rather than implicit. That is: you want to write code for the API layer so that future maintainers of that code find it easy to trace the path through the code that a request takes without having to make a lot of guesses or de-serialize (in their heads) an object inheritance hierarchy. Flask can do that, if you choose to use it that way, but like many tools it also allows you to do things in confusing ways too. 
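To make the Flask + jsonschema suggestion concrete, here is a minimal sketch; the /servers resource and its schema are made-up examples, not from any real project:

    # Explicit request/response handling with Flask + jsonschema.
    import flask
    import jsonschema

    app = flask.Flask(__name__)

    CREATE_SCHEMA = {
        'type': 'object',
        'properties': {'name': {'type': 'string'}},
        'required': ['name'],
        'additionalProperties': False,
    }

    @app.route('/servers', methods=['POST'])
    def create_server():
        # The request body and the response are handled in plain sight,
        # with no typing layer in between.
        body = flask.request.get_json()
        try:
            jsonschema.validate(body, CREATE_SCHEMA)
        except jsonschema.ValidationError as exc:
            return flask.jsonify({'error': exc.message}), 400
        return flask.jsonify({'name': body['name']}), 201

Both the URL-to-code mapping and the validation step are explicit, which is the property being argued for here.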
I personally don't think that consistency of web framework across OpenStack projects is important. What's important is: * The exposed HTTP APIs have some degree of consistency (that is, they don't have glaring differences in grammar and semantics). * The code is low on abstraction and high on scrutability so that future maintainers aren't scratching their heads. * Any frameworks chosen (if any) are maintained by the broader Python community and are not OpenStack snowflakes. Committing to any particular framework is the same as committing to being wrong and calcified in some fairly short amount of time. Who wants to volunteer to help maintain WSME? -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Wed Apr 11 11:24:16 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 11 Apr 2018 06:24:16 -0500 Subject: [openstack-dev] [Elections][TC] Announcing Sean McGinnis candidacy for TC Message-ID: <20180411112416.GA2307@sm-xps> Hey everyone, I am announcing my candidacy to continue on the OpenStack Technical Committee. I am employed by Huawei and lucky enough to have a full-time focus on OpenStack. I have been contributing since the Icehouse release. I served as Cinder PTL from Mitaka through Pike, and was elected to the TC last spring. I am currently serving my second cycle as Release Management PTL. During the last year on the TC, I have tried to be pragmatic and open to reason on governance changes. I do think some proposals need healthy debate with a really long term mindset to understand how changes now can impact our community long term. I also think we need to pay a lot of attention to the sometimes seemingly minor effects proposals will have on all those currently involved, and how they impact developer happiness and the attraction of working on an OpenStack project. I've learned a lot from the other TC members and others participating in these discussions. This last year has been very rewarding, and I've been glad to do my part to move these conversations forward. My voting on past changes can be perused here: https://review.openstack.org/#/q/project:openstack/governance+reviewedby:%22Sean+McGinnis+%253Csean.mcginnis%2540gmail.com%253E%22 Outside of specific governance proposals, I have been working on getting involved in the operators community by attending the last few Ops Meetups to be able to get face to face with more of the folks actually using OpenStack. I've found it very valuable to hear directly about what kinds of issues are being run into and what kinds of things we might be able to change on the development side to make things better. Part of the outcome of that has led me to be more interested in our stable policy, and helping out more with stable branch reviews. Many operators are not able to get to a version, for one reason or another, until we have deleted the branch upstream. I was happy to support our recent efforts in changing our stable policies to allow a bigger window that might allow a resurgence in interest for some of these older branches once more users are actually able to run them and find issues. I do think it is good to have some new faces on the TC, but would love to serve another term. I feel like the first year was partly just getting settled in, and I would be very happy to continue to serve another term to keep things going. 
OpenStack has been one of the best communities I've been involved in, and I would love the opportunity to continue to do what I can to help support it and help it grow. Thank you for your consideration. Sean From tobias at citynetwork.se Wed Apr 11 11:26:08 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 11 Apr 2018 13:26:08 +0200 Subject: [openstack-dev] [publiccloud-wg] Reminder and agenda tomorrows meeting Message-ID: Hi everyone, Time for a new meeting for the Public Cloud WG. Forum sessions for Vancouver are the priority of this meeting; it would be nice to see as many of you there as possible. Agenda can be found at https://etherpad.openstack.org/p/publiccloud-wg Feel free to add items to the agenda! See you all tomorrow 1400 UTC in #openstack-publiccloud Cheers, Tobias -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From dougal at redhat.com Wed Apr 11 11:55:25 2018 From: dougal at redhat.com (Dougal Matthews) Date: Wed, 11 Apr 2018 12:55:25 +0100 Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID: On 11 April 2018 at 11:39, Chris Dent wrote: > On Tue, 10 Apr 2018, Michael Johnson wrote: > > I echo Ben's question about what is the recommended replacement. >> > > It's a good question. Unfortunately I don't have a good answer. My > involvement in WSME is simply the result of submitting some bug fixes > in early 2015 and there being no one to review them. Lucas Gomes and > I were pressganged into becoming the sole core reviewers for a project > that was already languishing. > > A short answer could be this: There doesn't have to be a > replacement. There are people in the community who are active users > of WSME, if those people would like to become maintainers of WSME, > Lucas and I can make those people core and help them to shepherd the > project to an active state. It may be that nothing really needs to > change. The reason this is coming up now is because a code change > was proposed that failed the gate for unrelated reasons (the > pep8 python3 thing mentioned elsewhere). If the existing feature set > is sufficient the only real work to do is to keep those features > working as we move to python3. > I would like to see us move away from WSME. I'm not sure I have time to drive an effort in finding a replacement (and migration path) but I would certainly like to help. > > Any volunteers? > > For new projects, I think the standby is Flask + jsonschema. They > are both boring and common. > > I know some people really like django REST framework, but it appears > to have lots of magic and magic is bad. > > The longer answer is just opinion so if the above is enough of an > answer you can stop here before I go off on a ramble. > > I've never really been all that sure on what WSME is for. It > describes itself with "simplifies the writing of REST web services > by providing simple yet powerful typing, removing the need to > directly manipulate the request and the response objects." This is > pretty much exactly the opposite of what I want when writing a web > service. 
I want to be closely aware of the request and response and > not abstract away the details of HTTP because those details are what > makes a web service useful and maintainable. So I tend to avoid > typing systems like WSME and object dispatch systems like pecan in > favor of tools that are more explicit about the data (both headers > and body) coming in and going out, and that make the association > between URLs and code explicit rather than implicit. > > That is: you want to write code for the API layer so that future > maintainers of that code find it easy to trace the path through the > code that a request takes without having to make a lot of guesses or > de-serialize (in their heads) an object inheritance hierarchy. > > Flask can do that, if you choose to use it that way, but like many > tools it also allows you to do things in confusing ways too. > > I personally don't think that consistency of web framework across > OpenStack projects is important. What's important is: > > * The exposed HTTP APIs have some degree of consistency (that is, > they don't have glaring differences in grammar and semantics). > * The code is low on abstraction and high on scrutability so that > future maintainers aren't scratching their heads. > * Any frameworks chosen (if any) are maintained by the broader > Python community and are not OpenStack snowflakes. > > Committing to any particular framework is the same as committing to > being wrong and calcified in some fairly short amount of time. > > Who wants to volunteer to help maintain WSME? > > > -- > Chris Dent ٩◔̯◔۶ https://anticdent.org/ > freenode: cdent tw: @anticdent > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Wed Apr 11 12:02:30 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 11 Apr 2018 20:02:30 +0800 Subject: [openstack-dev] [Elections][TC] Announcing Rico Lin candidacy for TC Message-ID: Dear all, I'd like to announce my candidacy for a seat on the OpenStack Technical Committee. I'm Rico Lin, employed by EasyStack as a full-time OpenStacker. I have been in this community since 2014 and have been deeply involved in technical contributions [1], mostly around the Orchestration service, which allows me to work on integrating and managing resources across projects. Also, I have served as PTL for three cycles, which has helped me learn how we can bring users' and operators' experiences and requirements into the development workflow and technical decision processes. Here are my major goals for this seat on the TC: - Application: We've updated our resolution with [3], saying we care about what applications need on top of OpenStack. As people from a few projects are already taking on that role and thinking about what applications need, we should help by setting up community goals, making resolutions, or defining which top-priority applications (this can be a short-term definition) we need to focus on, then deriving action items/guidelines and finding weaknesses, so others from the community can follow (for those who agree with the goals but have no idea how they can help, IMO this will be good stuff). 
- Cooperate with Users, Operators, and Developers: We have been losing some communication across users, operators, and developers. And it's never a good thing when users can share use cases, operators can share experiences, and developers can share code, but none of it makes it to the others unless a user happens to bring it to developers themselves. Here, work like StoryBoard should be our first priority. We need a more solid way to get user feedback to developers, so we can actually learn what's working or not for each feature. It is also worth considering strengthening the communication between the TC and the UC (User Committee). - Diversity: The math is easy. [2] shows we get around one-third of users from Asia (with 75% of those users in China). Also IIRC, around the same percentage of developers. But we have zero of them on the TC. The actual work is hard. We need to carry our technical guidelines to developers in Asia and provide chances to get more feedback from them, so we can produce better technical resolutions that tie developers together. I think I'm a good candidate for this. - Reach out for new blood: With cloud getting more mature, it's normal that cloud developers need to work in multiple communities, and they may come and go (mostly based on their job definition from their employer), so we need more new developers. Most important is to provide more chances for them to stay. I know many newly joined developers struggle to find ways to fit into each project. We need ways to shorten their onboarding time, so they can do good work while they're in our community. - Paying the debt: Our community has done a great job of changing our resolutions and guidelines to adopt new trends and keep ourselves sharp. The TC tries really hard to migrate our path and do the magic. IMO, we need more effort on some specific jobs (like cross-project work for application infrastructure, or the StoryBoard migration). I would like to keep that going and close our technical debts, so we can have room for the new. Thank you for your consideration. Best Regards, Rico Lin (ricolin) [1] http://stackalytics.com/?release=all&user_id=rico-lin&metric=person-day [2] https://www.openstack.org/assets/survey/OpenStack-User-Survey-Nov17.pdf [3] https://review.openstack.org/#/c/447031/5/resolutions/20170317-cloud-applications-mission.rst -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Wed Apr 11 12:55:52 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 11 Apr 2018 08:55:52 -0400 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <20180411093423.7A48AB3350@mail.valinux.co.jp> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> <1523282186-sup-2@lrrr.local> <20180411093423.7A48AB3350@mail.valinux.co.jp> Message-ID: <1523451099-sup-3040@lrrr.local> Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900: > On Mon, 09 Apr 2018 22:58:28 +0900, > Doug Hellmann wrote: > > > > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000: > > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > > > > > > > My understanding of our use of upper constraints was that this should > > > > (almost) always be the case for (almost) all dependencies. We should > > > > rely on constraints instead of requirements caps. Capping libs like > > > > pbr or eventlet and any other that is in use globally is incredibly > > > > difficult to work with when you want to uncap it because you have to > > > > coordinate globally. Instead if using constraints you just bump the > > > > constraint and are done. > > > > > > Part of the reason that we have the caps it to prevent the tools that > > > auto-generate the constraints syncs from considering these versions and > > > then depending on the requirements team to strip that from the bot > > > change before committing (assuming it passes CI). > > > > > > Once the work Doug's doing is complete we could consider tweaking the > > > tools to use a different mechanism, but that's only part of the reason > > > for the caps in g-r. > > > > > > Yours Tony. > > > > Now that projects don't have to match the global requirements list > > entries exactly we should be able to remove caps from within the > > projects and keep caps in the global list for cases like this where we > > know we frequently encounter breaking changes in new releases. The > > changes to support that were part of > > https://review.openstack.org/#/c/555402/ > > As eventlet has been uncapped in g-r, requirements-check is > complaining on unrelated project-local requirement changes. > I'm not quite sure but doesn't seem to be a intended behavior. > > http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz > This error is related to the change in https://review.openstack.org/#/c/560050/ which applies the matching rules to all requirements settings any time any requirements-related file is touched. The change was made because we are less in-sync than we thought and because we're allowing "bad" settings to stay in place. To correct the problem in the log you linked to, remove the cap from eventlet in neutron. Doug From whayutin at redhat.com Wed Apr 11 12:58:13 2018 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 11 Apr 2018 12:58:13 +0000 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: Message-ID: On Tue, 10 Apr 2018 at 20:51 Emilien Macchi wrote: > Greetings, > > Steve Baker and I had a quick chat today about the work that is being done > around containers workflow in Rocky cycle. 
> > If you're not familiar with the topic, I suggest to first read the > blueprint to understand the context here: > https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow > > One of the great outcomes of this blueprint is that in Rocky, the operator > won't have to run all the "openstack overcloud container" commands to > prepare the container registry and upload the containers. Indeed, it'll be > driven by Heat and Mistral mostly. > > But today our discussion extended on 2 uses-cases that we're going to > explore and find how we can address them: > 1) I'm a developer and want to deploy a containerized undercloud with > customized containers (more or less related to the all-in-one discussions > on another thread [1]). > 2) I'm submitting a patch in tripleo-common (let's say a workflow) and > need my patch to be tested when the undercloud is containerized (see [2] > for an excellent example). > > Both cases would require additional things: > - The container registry needs to be deployed *before* actually installing > the undercloud. > - We need a tool to update containers from this registry and *before* > deploying them. We already have this tool in place in our CI for the > overcloud (see [3] and [4]). Now we need a similar thing for the undercloud. > > Next steps: > - Agree that we need to deploy the container-registry before the > undercloud. > - If agreed, we'll create a new Ansible role called > ansible-role-container-registry that for now will deploy exactly what we > have in TripleO, without extra feature. > - Drive the playbook runtime from tripleoclient to bootstrap the container > registry (which of course could be disabled in undercloud.conf). > - Create another Ansible role that would re-use container-check tool but > the idea is to provide a role to modify containers when needed, and we > could also control it from tripleoclient. The role would be using > the ContainerImagePrepare parameter, which Steve is working on right now. > This all looks really good Emilien, thanks for sending it out. Regarding the update of containers, we would just want to be 100% sure that we can control which yum repositories are in play for the update. Maybe it will be done by the user prior to running the command, or maybe with some flags to whatever command Steve is working on. FYI, we've noticed in CI that when the base OS updates (not baseos) are included, you tend to fail on at least one package download in one of the 50+ containers due to infra/network issues. In CI we only enable baseos, dlrn updates and the dependency change [1] Thanks [1] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109 > > Feedback is welcome, thanks. 
> > [1] All-In-One thread: > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html > [2] Bug report when undercloud is containerized > https://bugs.launchpad.net/tripleo/+bug/1762422 > [3] Tool to update containers if needed: > https://github.com/imain/container-check > [4] Container-check running in TripleO CI: > https://review.openstack.org/#/c/558885/ and > https://review.openstack.org/#/c/529399/ > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Wed Apr 11 13:09:29 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Wed, 11 Apr 2018 21:09:29 +0800 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523451099-sup-3040@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> <1523282186-sup-2@lrrr.local> <20180411093423.7A48AB3350@mail.valinux.co.jp> <1523451099-sup-3040@lrrr.local> Message-ID: Sorry, I didn't see any solution for the following error found in [1]. I just rechecked the patch; is this kind of issue already fixed? ubuntu-xenial | Requirement for package eventlet : Requirement (package=u'eventlet', location='', specifiers='!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2', markers=u'', comment=u'# MIT', extras=frozenset([])) does not match openstack/requirements value : set([Requirement(package='eventlet', location='', specifiers='!=0.18.3,!=0.20.1,>=0.18.2', markers='', comment='# MIT', extras=frozenset([]))]) [1] logs.openstack.org/87/523387/32/check/requirements-check/408e28c/job-output.txt.gz Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Doug Hellmann To: openstack-dev Date: 04/11/2018 08:56 PM Subject: Re: [openstack-dev] [all][requirements] uncapping eventlet Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900: > On Mon, 09 Apr 2018 22:58:28 +0900, > Doug Hellmann wrote: > > > > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000: > > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > > > > > > > My understanding of our use of upper constraints was that this should > > > > (almost) always be the case for (almost) all dependencies. We should > > > > rely on constraints instead of requirements caps. Capping libs like > > > > pbr or eventlet and any other that is in use globally is incredibly > > > > difficult to work with when you want to uncap it because you have to > > > > coordinate globally. Instead if using constraints you just bump the > > > > constraint and are done. > > > > > > Part of the reason that we have the caps it to prevent the tools that > > > auto-generate the constraints syncs from considering these versions and > > > then depending on the requirements team to strip that from the bot > > > change before committing (assuming it passes CI). 
> > > > > > > Once the work Doug's doing is complete we could consider tweaking the > > > tools to use a different mechanism, but that's only part of the reason > > > for the caps in g-r. > > > > > > Yours Tony. > > > > Now that projects don't have to match the global requirements list > > entries exactly we should be able to remove caps from within the > > projects and keep caps in the global list for cases like this where we > > know we frequently encounter breaking changes in new releases. The > > changes to support that were part of > > https://review.openstack.org/#/c/555402/ > > As eventlet has been uncapped in g-r, requirements-check is > complaining on unrelated project-local requirement changes. > I'm not quite sure, but this doesn't seem to be intended behavior. > > http://logs.openstack.org/57/451257/16/check/requirements-check/c32ee69/job-output.txt.gz > This error is related to the change in https://review.openstack.org/#/c/560050/ which applies the matching rules to all requirements settings any time any requirements-related file is touched. The change was made because we are less in-sync than we thought and because we're allowing "bad" settings to stay in place. To correct the problem in the log you linked to, remove the cap from eventlet in neutron. Doug __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From cdent+os at anticdent.org Wed Apr 11 13:21:28 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 11 Apr 2018 14:21:28 +0100 (BST) Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID: On Wed, 11 Apr 2018, Dougal Matthews wrote: > I would like to see us move away from WSME. I'm not sure I have time to > drive an effort in finding a replacement (and migration path) but I would > certainly like to help. Dougal and I talked about this in IRC and agreed that being able to merge changes in WSME would help the goal of establishing a migration path. So I've added him to WSME cores. 
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Wed Apr 11 13:23:45 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 11 Apr 2018 09:23:45 -0400 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <20180405192619.nk7ykeel6qnwsk2y@gentoo.org> <20180406163433.fyj6qnq5oegivb4t@gentoo.org> <1523032867.936315.1329051592.0E63BA5F@webmail.messagingengine.com> <20180409033928.GB28028@thor.bakeyournoodle.com> <1523282186-sup-2@lrrr.local> <20180411093423.7A48AB3350@mail.valinux.co.jp> <1523451099-sup-3040@lrrr.local> Message-ID: <1523452983-sup-7520@lrrr.local> Excerpts from Chen CH Ji's message of 2018-04-11 21:09:29 +0800: > sorry, I didn't see any solution for following error found in [1] > I just rechecked the patch and is this kind of issue already fixed? > > ubuntu-xenial | Requirement for package eventlet : Requirement > (package=u'eventlet', location='', > specifiers='!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2', markers=u'', comment=u'# > MIT', extras=frozenset([])) does not match openstack/requirements value : > set([Requirement(package='eventlet', location='', > specifiers='!=0.18.3,!=0.20.1,>=0.18.2', markers='', comment='# MIT', > extras=frozenset([]))]) The error message is correct. The requirements specification does not match and needs to be fixed by removing the cap from eventlet. Doug > > [1] > logs.openstack.org/87/523387/32/check/requirements-check/408e28c/job-output.txt.gz > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, > Beijing 100193, PRC > > > > From: Doug Hellmann > To: openstack-dev > Date: 04/11/2018 08:56 PM > Subject: Re: [openstack-dev] [all][requirements] uncapping eventlet > > Excerpts from IWAMOTO Toshihiro's message of 2018-04-11 18:19:02 +0900: > > On Mon, 09 Apr 2018 22:58:28 +0900, > > Doug Hellmann wrote: > > > > > > Excerpts from Tony Breeds's message of 2018-04-09 13:39:30 +1000: > > > > On Fri, Apr 06, 2018 at 09:41:07AM -0700, Clark Boylan wrote: > > > > > > > > > My understanding of our use of upper constraints was that this > should > > > > > (almost) always be the case for (almost) all dependencies. We > should > > > > > rely on constraints instead of requirements caps. Capping libs like > > > > > pbr or eventlet and any other that is in use globally is incredibly > > > > > difficult to work with when you want to uncap it because you have > to > > > > > coordinate globally. Instead if using constraints you just bump the > > > > > constraint and are done. > > > > > > > > Part of the reason that we have the caps it to prevent the tools that > > > > auto-generate the constraints syncs from considering these versions > and > > > > then depending on the requirements team to strip that from the bot > > > > change before committing (assuming it passes CI). > > > > > > > > Once the work Doug's doing is complete we could consider tweaking the > > > > tools to use a different mechanism, but that's only part of the > reason > > > > for the caps in g-r. > > > > > > > > Yours Tony. 
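To make the fix being discussed in this eventlet thread concrete: it amounts to dropping the upper bound from the project-local requirement while keeping the exclusions. The specifiers below are taken from the error message quoted above; that this lands in the project's requirements.txt is an assumption based on the thread:

    # Project-local requirements.txt; upper-constraints.txt continues
    # to pin the eventlet version actually installed in CI.
    # Before:
    eventlet!=0.18.3,!=0.20.1,<0.21.0,>=0.18.2 # MIT
    # After:
    eventlet!=0.18.3,!=0.20.1,>=0.18.2 # MIT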
> > > > > > Now that projects don't have to match the global requirements list > > > entries exactly we should be able to remove caps from within the > > > projects and keep caps in the global list for cases like this where we > > > know we frequently encounter breaking changes in new releases. The > > > changes to support that were part of > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_555402_&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=y9YWvP5nDDQCyw3QGNWGvQS-CVeHBeXA9rfHaLf3JpQ&s=P99Z7BlpiP8Sg9_5Ku4JMW_tJWXARpd2ldSvFFlFBpU&e= > > > > > As eventlet has been uncapped in g-r, requirements-check is > > complaining on unrelated project-local requirement changes. > > I'm not quite sure but doesn't seem to be a intended behavior. > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__logs.openstack.org_57_451257_16_check_requirements-2Dcheck_c32ee69_job-2Doutput.txt.gz&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=y9YWvP5nDDQCyw3QGNWGvQS-CVeHBeXA9rfHaLf3JpQ&s=6uHgERcFttsqFakjBTrjvKZhk5n-tZO-e0QMd7zj0nc&e= > > > > > This error is related to the change in > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_560050_&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=y9YWvP5nDDQCyw3QGNWGvQS-CVeHBeXA9rfHaLf3JpQ&s=1hIA6J9OfM1mhcTDq89NkGmoAQi_fDfhel7q5dgcwIA&e= > which applies the matching > rules to all requirements settings any time any requirements-related > file is touched. The change was made because we are less in-sync than we > thought and because we're allowing "bad" settings to stay in place. > > To correct the problem in the log you linked to, remove the cap from > eventlet in neutron. > > Doug > From thierry at openstack.org Wed Apr 11 13:27:15 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 11 Apr 2018 15:27:15 +0200 Subject: [openstack-dev] [election] [tc] TC candidacy (but not for chair) Message-ID: <1058ae13-66f6-7ea6-ac42-ef583ab8bfda@openstack.org> Hi everyone, Growing new leaders has been a focus of the Technical Committee over the last year: first discussed at the leadership workshop with Board members in March 2017, then included in the TC "vision for 2019"[1] adopted in June. As part of this objective, we actively looked for new stewards in our community, provided opportunities to step up, and rotated key roles to develop a deeper bench of ready leaders. But we never applied those ideas for the TC chair position itself: I have been the only candidate and holding that position since the creation of that governance body in 2012. The main reason for it is that tracking everything that's happening is a significant commitment, and the Foundation is happy with me investing that time in. That said, it's not ideal to have a role that only one person can fill, so it's time for a change. I am announcing my candidacy for a position on the OpenStack Technical Committee in the upcoming election. However, if I'm elected I won't be a candidate to the chair position for the upcoming TC session. To ensure a seamless transition I will actively support the person who will be chosen by the TC members. In all cases I'll be as involved with the TC activities as I've always been. In my opinion our vision for 2019[1] is still current. 
We have a lot of work ahead of us to fully implement it, especially around the concept of "Constellations" (representation of groups of OpenStack components that answer a specific use case). Beyond that, our main challenge is to continue to adapt OpenStack governance to the evolving needs of the project. Most of our processes and structures come from back when we doubled activity every year, when our main focus was to survive that meteoritic growth. With OpenStack getting more mature and having more adoption, we need to rethink those processes and structures with long-term sustainability in mind. Finally, we need to navigate a transition where everything produced by our community will no longer necessarily be called "OpenStack", starting with Zuul being given its own separate branding. If you're passionate about open source project governance and interested in tackling those challenges, please consider running for the Technical Committee ! Several of the current members won't be running for re-election, so seats are up for grabs. We track current proposed changes on a Tracker[2], track work items on StoryBoard[3], and usually meet in person at Summits and PTGs. You can read past weekly "TC status update" emails to get a better idea of the type of things we cover. I would say the time commitment is between 2 and 6 hours a week. Join us ! [1] https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html [2] https://wiki.openstack.org/wiki/Technical_Committee_Tracker [3] https://storyboard.openstack.org/#!/project/923 -- Thierry Carrez (ttx) From openstack at fried.cc Wed Apr 11 13:46:19 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 11 Apr 2018 08:46:19 -0500 Subject: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event In-Reply-To: References: Message-ID: Jichen was able to use this information immediately, to great benefit [1]. (If those paying attention could have a quick look at that to make sure he used it right, it would be appreciated; I'm not an expert here.) [1] https://review.openstack.org/#/c/527658/31..32/nova/virt/zvm/guest.py at 192 On 04/10/2018 09:06 PM, Chen CH Ji wrote: > Thanks for your info ,really helpful > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > Inactive hide details for Andreas Scheuring ---04/10/2018 10:19:21 > PM---Yes, that’s how it works! ---Andreas Scheuring ---04/10/2018 > 10:19:21 PM---Yes, that’s how it works! --- > > From: Andreas Scheuring > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 04/10/2018 10:19 PM > Subject: Re: [openstack-dev] [nova] Changes > toComputeVirtAPI.wait_for_instance_event > > ------------------------------------------------------------------------ > > > > Yes, that’s how it works! > > --- > Andreas Scheuring (andreas_s) > > > > On 10. Apr 2018, at 16:05, Matt Riedemann <_mriedemos at gmail.com_ > > wrote: > > On 4/9/2018 9:57 PM, Chen CH Ji wrote: > > Could you please help to share whether this kind of event is > sent by neutron-server or neutron agent ? I searched neutron code > from [1][2] this means the agent itself need tell neutron server > the device(VIF) is up then neutron server will send notification > to nova through REST API and in turn consumed by compute node? 
> [1]_https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct_ > > [2]_https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264_ > > > > I believe the neutron agent is the one that is getting (or polling) the > information from the underlying network backend when VIFs are plugged or > unplugged from a host, then route that information via RPC to the > neutron server which then sends an os-server-external-events request to > the compute REST API, which then routes the event information down to > the nova-compute host where the instance is currently running. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: _OpenStack-dev-request at lists.openstack.org_ > ?subject:unsubscribe_ > __http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev_ > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=tIntFpZ0ffp-_h5CsqN1I9tv64hW2xugxBXaxDn7Z_I&s=z2jOgMD7B3XFoNsUHTtIO6hWKYXH-Dm4L4P0-u-oSSw&e= > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mjturek at linux.vnet.ibm.com Wed Apr 11 13:47:02 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Wed, 11 Apr 2018 09:47:02 -0400 Subject: [openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC) Message-ID: <1233f954-1a90-a966-58ec-f7a20a89fc44@linux.vnet.ibm.com> Hey all, Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM (UTC) We will be meeting on Julia's bluejeans line: https://bluejeans.com/5548595878 Hope to see everyone there! Thanks, Mike Turek From openstack at nemebean.com Wed Apr 11 14:33:24 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 11 Apr 2018 09:33:24 -0500 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: Message-ID: <8327ba1d-347e-b847-5d96-434dda74f318@nemebean.com> On 04/11/2018 07:58 AM, Wesley Hayutin wrote: > > > On Tue, 10 Apr 2018 at 20:51 Emilien Macchi > wrote: > > Greetings, > > Steve Baker and I had a quick chat today about the work that is > being done around containers workflow in Rocky cycle. > > If you're not familiar with the topic, I suggest to first read the > blueprint to understand the context here: > https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow > > One of the great outcomes of this blueprint is that in Rocky, the > operator won't have to run all the "openstack overcloud container" > commands to prepare the container registry and upload the > containers. Indeed, it'll be driven by Heat and Mistral mostly. 
> But today our discussion extended on 2 uses-cases that we're going > to explore and find how we can address them: > 1) I'm a developer and want to deploy a containerized undercloud > with customized containers (more or less related to the all-in-one > discussions on another thread [1]). > 2) I'm submitting a patch in tripleo-common (let's say a workflow) > and need my patch to be tested when the undercloud is containerized > (see [2] for an excellent example). > > Both cases would require additional things: > - The container registry needs to be deployed *before* actually > installing the undercloud. > - We need a tool to update containers from this registry and > *before* deploying them. We already have this tool in place in our > CI for the overcloud (see [3] and [4]). Now we need a similar thing > for the undercloud. > > Next steps: > - Agree that we need to deploy the container-registry before the > undercloud. > - If agreed, we'll create a new Ansible role called > ansible-role-container-registry that for now will deploy exactly > what we have in TripleO, without extra feature. > - Drive the playbook runtime from tripleoclient to bootstrap the > container registry (which of course could be disabled in > undercloud.conf). > - Create another Ansible role that would re-use container-check tool > but the idea is to provide a role to modify containers when needed, > and we could also control it from tripleoclient. The role would be > using the ContainerImagePrepare parameter, which Steve is working on > right now. > > > This all looks really good Emilien, thanks for sending it out. > Regarding the update of containers, we would just want to be 100% sure > that we can control which yum repositories are in play for the update. > Maybe it will be done by the user prior to running the command, or maybe > with some flags to what ever command Steve is working on. > FYI.. we've noticed in CI that when the base os updates ( not baseos) > are included you tend to fail on at least on package download on one of > the 50+ containers due to infra/network.  In CI we only enable baseos, > dlrn updates and the dependency change [1] I will note that this was the sort of use case the -o parameter to tripleo-repos was intended to handle. It can write the configured repos to an arbitrary location that we could then mount into the containers so the update repos are independent from the underlying system. https://github.com/openstack/tripleo-repos/blob/8961edcd2d9dd1f2c50d3da51f4129daaad85ab0/tripleo_repos/main.py#L88 > > Thanks > > [1] > https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109 > > > Feedback is welcome, thanks. 
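A rough sketch of that approach (the output path, repo set, and image used here are hypothetical examples, not TripleO defaults):

    # Write the repo files to a location outside the host's /etc/yum.repos.d
    tripleo-repos -o /tmp/container-repos current
    # Bind-mount only those repos into the container for the update step,
    # so the host's repo configuration never leaks in
    docker run -v /tmp/container-repos:/etc/yum.repos.d:ro centos:7 yum update -y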
> > [1] All-In-One thread: > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html > [2] Bug report when undercloud is containeirzed > https://bugs.launchpad.net/tripleo/+bug/1762422 > [3] Tool to update containers if needed: > https://github.com/imain/container-check > [4] Container-check running in TripleO CI: > https://review.openstack.org/#/c/558885/ and > https://review.openstack.org/#/c/529399/ > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From lebre.adrien at free.fr Wed Apr 11 14:45:27 2018 From: lebre.adrien at free.fr (free) Date: Wed, 11 Apr 2018 16:45:27 +0200 Subject: [openstack-dev] [FEMDC ]Kubernetes IoT Edge Working Group Proposal Message-ID: <1D2DDA2B-B46D-42DD-A398-670FF90D4A73@free.fr> Dear all, I’m not sure the information has been shared on the MLs. https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/kubernetes-dev/sAFIvDsvUCI Regards, Ad_ri3n_ -------------- next part -------------- An HTML attachment was scrubbed... URL: From scheuran at linux.vnet.ibm.com Wed Apr 11 15:24:13 2018 From: scheuran at linux.vnet.ibm.com (Andreas Scheuring) Date: Wed, 11 Apr 2018 17:24:13 +0200 Subject: [openstack-dev] [nova] Changes toComputeVirtAPI.wait_for_instance_event In-Reply-To: References: Message-ID: <2C04972E-A38B-4912-92CE-EC6446DE1366@linux.vnet.ibm.com> Looks good IMO. --- Andreas Scheuring (andreas_s) On 11. Apr 2018, at 15:46, Eric Fried wrote: Jichen was able to use this information immediately, to great benefit [1]. (If those paying attention could have a quick look at that to make sure he used it right, it would be appreciated; I'm not an expert here.) [1] https://review.openstack.org/#/c/527658/31..32/nova/virt/zvm/guest.py at 192 On 04/10/2018 09:06 PM, Chen CH Ji wrote: > Thanks for your info ,really helpful > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > Inactive hide details for Andreas Scheuring ---04/10/2018 10:19:21 > PM---Yes, that’s how it works! ---Andreas Scheuring ---04/10/2018 > 10:19:21 PM---Yes, that’s how it works! --- > > From: Andreas Scheuring > To: "OpenStack Development Mailing List (not for usage questions)" > > Date: 04/10/2018 10:19 PM > Subject: Re: [openstack-dev] [nova] Changes > toComputeVirtAPI.wait_for_instance_event > > ------------------------------------------------------------------------ > > > > Yes, that’s how it works! > > --- > Andreas Scheuring (andreas_s) > > > > On 10. Apr 2018, at 16:05, Matt Riedemann <_mriedemos at gmail.com_ > > wrote: > > On 4/9/2018 9:57 PM, Chen CH Ji wrote: > > Could you please help to share whether this kind of event is > sent by neutron-server or neutron agent ? 
I searched neutron code > from [1][2] this means the agent itself need tell neutron server > the device(VIF) is up then neutron server will send notification > to nova through REST API and in turn consumed by compute node? > [1] https://github.com/openstack/neutron/tree/master/neutron/notify_port_active_direct > > [2] https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/rpc.py#L264 > > > > I believe the neutron agent is the one that is getting (or polling) the > information from the underlying network backend when VIFs are plugged or > unplugged from a host, then route that information via RPC to the > neutron server which then sends an os-server-external-events request to > the compute REST API, which then routes the event information down to > the nova-compute host where the instance is currently running. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Wed Apr 11 16:14:04 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 11 Apr 2018 19:14:04 +0300 Subject: [openstack-dev] [rally] Moving OpenStack plugins into separate repo Message-ID: Hi Stackers! Today I am happy to announce great news! Historically, Rally has been a testing (benchmarking) tool for OpenStack, but that has changed. More and more users want to use Rally for different platforms and environments. Our pluggable system allows doing this. To make the framework lightweight and simplify our release model, we decided to move the OpenStack plugins to a separate repository[1]. [1] https://git.openstack.org/cgit/openstack/rally-openstack We cut the first release 1.0.0 two weeks ago, and it is published to PyPI[2]. [2] https://pypi.python.org/pypi/rally-openstack If you are a Rally consumer and do not have custom plugins, the migration should be simple. Just install the rally-openstack package instead of rally and everything will work as previously. rally-openstack depends on rally, so you need nothing more than installing one package. If you have custom plugins, do not worry, the migration should be simple for you too. 
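As a sketch of what that plugin migration looks like, assuming (per this announcement) that module subpaths keep their structure under the new package; the scenario utils module named here is just a hypothetical example:

    # Before: plugins imported from the rally tree
    # from rally.plugins.openstack.scenarios.nova import utils
    # After: the same path rooted at the rally_openstack package
    from rally_openstack.scenarios.nova import utils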
The first release has the same structure as the plugins had in the rally repository. The only thing that needs to change is importing rally_openstack instead of rally.plugins.openstack.

--
Best regards,
Andrey Kurilin.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From doug at doughellmann.com Wed Apr 11 16:20:46 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 11 Apr 2018 12:20:46 -0400
Subject: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> Message-ID: <1523463552-sup-1950@lrrr.local>

Excerpts from Matthew Thode's message of 2018-04-05 10:47:37 -0500:
> eventlet-0.22.1 has been out for a while now, we should try and use it.
> Going to be fun times.
>
> I have a review projects can depend upon if they wish to test.
> https://review.openstack.org/533021

I have proposed a bunch of patches to projects to remove the cap for eventlet [1]. If they don't pass tests, please take them over and fix them up as needed (I anticipate some trouble with the new check-requirements rules, for example).

Doug

[1] https://review.openstack.org/#/q/topic:uncap-eventlet+(status:open+OR+status:merged)

From doug at doughellmann.com Wed Apr 11 16:26:37 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 11 Apr 2018 12:26:37 -0400
Subject: Re: [openstack-dev] [all][requirements] uncapping eventlet
In-Reply-To: <1523463552-sup-1950@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <1523463552-sup-1950@lrrr.local> Message-ID: <1523463956-sup-7092@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-04-11 12:20:46 -0400:
> Excerpts from Matthew Thode's message of 2018-04-05 10:47:37 -0500:
> > eventlet-0.22.1 has been out for a while now, we should try and use it.
> > Going to be fun times.
> >
> > I have a review projects can depend upon if they wish to test.
> > https://review.openstack.org/533021
>
> I have proposed a bunch of patches to projects to remove the cap
> for eventlet [1]. If they don't pass tests, please take them over
> and fix them up as needed (I anticipate some trouble with the new
> check-requirements rules, for example).
>
> Doug
>
> [1] https://review.openstack.org/#/q/topic:uncap-eventlet+(status:open+OR+status:merged)

And please go ahead and abandon any that are duplicates for patches that are already being worked on elsewhere. It was easier to just update everything than to script something to figure out when updates were needed.

Doug

From cdent+os at anticdent.org Wed Apr 11 16:40:01 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 11 Apr 2018 17:40:01 +0100 (BST)
Subject: [openstack-dev] [election] [tc] TC candidacy for cdent
Message-ID:

Hi,

I'm announcing my candidacy to continue as a member of the Technical Committee.

When I ran a year ago, one of my goals was to foster more, and more transparent, communication among the many parts of the OpenStack community. The TC has made progress by being more overt and intentional in reaching out to others and sharing information in an active way. I helped, with my weekly TC Reports and other writing related to the TC [1], but there is plenty more to do, especially as the infrastructure as a service community grows and mutates to include CI/CD, Edge and container-related activities. There is enough left to do that I would like to continue for another term.
The growth of projects under the OpenStack Foundation umbrella will present opportunities and challenges. We'll be able to deal with those most effectively by having good communication hygiene: over-communicating in a written and discoverable fashion. Changes in the shape of the community will impact the role of the TC and its members. The TC has been something of a high-level judiciary within the OpenStack technical community but increasingly will need to take on a role as a representative of the community that develops what has traditionally been known as "OpenStack" to the other nearby communities that are also now "OpenStack".

My candidacy note from last year [2] remains relevant and a good expression of my opinions about governance and the overarching themes that concern me: communication, openness, lowering boundaries between people and platforms, maintaining developer sanity [3].

If I'm elected again I intend to encourage engagement by continuing with the TC Report, making sure that we include the right people when making decisions, and using media that is accessible to people of many languages and time zones. I will also actively drive discussion and policy that leads to people who are users of OpenStack in the broadest sense finding it easier to be regularly active contributors to the open source projects which create OpenStack. We are making progress with this, but much of OpenStack is still the domain of (often overburdened) "professionals". Breaking into those domains needs to be simpler and encouraged for the benefit of all concerned.

If you would like to look at my past voting record on governance changes, it can be found here:

https://review.openstack.org/#/q/project:openstack/governance+reviewedby:%22Chris+Dent+%253Ccdent%2540anticdent.org%253E%22

If you would like me to continue, please vote for me in the upcoming elections. If you would like someone else, please vote for them. If you would like to give it a try yourself, then please run; you have until the end of the (UTC) day of April 17th to submit your candidacy. See the following for details:

https://governance.openstack.org/election/#how-to-submit-a-candidacy

Thanks for reading and your consideration.

[1] https://anticdent.org/tag/tc.html
[2] https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/TC/cdent.txt
[3] https://anticdent.org/openstack-developer-satisfaction.html

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From boris at pavlovic.me Wed Apr 11 17:30:20 2018
From: boris at pavlovic.me (Boris Pavlovic)
Date: Wed, 11 Apr 2018 10:30:20 -0700
Subject: Re: [openstack-dev] [rally] Moving OpenStack plugins into separate repo
In-Reply-To: References: Message-ID:

Andrey,

Great news!

Best regards,
Boris Pavlovic

On Wed, Apr 11, 2018 at 9:14 AM, Andrey Kurilin wrote:
> Hi Stackers!
>
> Today I am happy to announce great news!
>
> From a historical perspective, Rally has been a testing (benchmarking) tool for
> OpenStack, but that has changed. More and more users want to use Rally for
> different platforms and environments. Our pluggable system allows doing
> this.
> To make the framework lightweight and to simplify our release model, we
> decided to move the OpenStack plugins to a separate repository [1].
>
> [1] https://git.openstack.org/cgit/openstack/rally-openstack
>
> We cut the first release, 1.0.0, two weeks ago, and it is published to
> PyPI [2].
>
> [2] https://pypi.python.org/pypi/rally-openstack
>
> If you are a Rally consumer and do not have custom plugins, the migration
> should be simple.
> Just install the rally-openstack package instead of rally and
> everything will work as previously. rally-openstack depends on
> rally, so you need nothing more than installing that one package.
>
> If you have custom plugins, do not worry, the migration should be simple
> for you too. The first release has the same structure as the plugins had in the rally
> repository. The only thing that needs to change is importing
> rally_openstack instead of rally.plugins.openstack.
>
> --
> Best regards,
> Andrey Kurilin.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mjturek at linux.vnet.ibm.com Wed Apr 11 18:53:46 2018
From: mjturek at linux.vnet.ibm.com (Michael Turek)
Date: Wed, 11 Apr 2018 14:53:46 -0400
Subject: [openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC)
In-Reply-To: <1233f954-1a90-a966-58ec-f7a20a89fc44@linux.vnet.ibm.com> References: <1233f954-1a90-a966-58ec-f7a20a89fc44@linux.vnet.ibm.com> Message-ID: <2827152b-679f-be88-9172-6fb692140791@linux.vnet.ibm.com>

Sorry this is so late, but as for the format of the event I think we should do something like this:

1) Go through new bugs
    - This is doable in storyboard. Sort by creation date.
    - Should be a nice warm-up activity!
2) Go through oldest bugs
    - Again, doable in storyboard. Sort by last updated.
    - Older bugs are usually candidates for some clean up. We'll decide if bugs are still valid
      or if we need to reassign/poke owners.
3) Open Floor
    - If you have a bug that you'd like to discuss, bring it up here!
4) Storyboard discussion
    - One of the reasons we are doing this is to get our feet wet in storyboard. Let's spend
      10 to 20 minutes discussing what we need out of the tool after playing with it.

Originally I was hoping that we could sort by task priority, but that currently seems to be unavailable, or well hidden, in storyboard. If someone knows how to do this, please reply.

Does anyone else have any ideas on how to structure bug day?

Thanks!
Mike

On 4/11/18 9:47 AM, Michael Turek wrote:
> Hey all,
>
> Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM
> (UTC)
>
> We will be meeting on Julia's bluejeans line:
> https://bluejeans.com/5548595878
>
> Hope to see everyone there!
>
> Thanks,
> Mike Turek
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From e0ne at e0ne.info Wed Apr 11 18:58:22 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 11 Apr 2018 21:58:22 +0300
Subject: [openstack-dev] [horizon][plugins] Improve Horizon testing
Message-ID:

Hi all,

Let me introduce my proposal for Horizon testing improvements [1]. We started this discussion at the last PTG [2] and had a good conversation at the previous meeting [3].

The idea is simple: to have CI that verifies Horizon changes across supported plugins. As a side-effect of this activity, we'll have a list of maintained and supported plugins for each release.
For now, we have a static list in the Horizon Install Guide only [4].

We don't have working Selenium-based tests now: the selenium-headless job always reports success. Integration tests are totally broken and we don't even run them on the gates. We need to fix the selenium-headless job and the integration tests too.

It would be great to have a new gate job per plugin for any Horizon code change, to be sure that we don't break anything. The same job, with plugin-specific selenium or integration tests, should be executed against each Horizon plugin's change request. To make this happen, we need to fix horizon's selenium and integration tests first.

One of the first steps is to get rid of nose from Horizon and the plugins. Initially, I tried to use the Django Test Runner, but XMLTestRunner [5] looks better to me because it generates a report in xunit format. Ideally, it would be great to use pytest for this, but that requires more effort now. stestr requires some work to get it working with Django too.

I know that the Horizon team already introduced some new things in Rocky which require action from plugin developers, like moving to Mock (it's one of the community goals for this release for all projects) and supporting Django<2.0,>=1.11. That's why I'm ready to help plugins with the test runner migration and will propose a patch for each plugin in the list [4].

Since it's supposed to be a cross-project activity, I would like to get feedback from Horizon plugin developers.

[1] https://blueprints.launchpad.net/horizon/+spec/improve-horizon-testing
[2] https://etherpad.openstack.org/p/horizon-ptg-rocky
[3] http://eavesdrop.openstack.org/meetings/horizon/2018/horizon.2018-04-04-20.01.log.html#l-25
[4] https://docs.openstack.org/horizon/latest/install/plugin-registry.html
[5] https://review.openstack.org/#/c/544296/

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From johnsomor at gmail.com Wed Apr 11 20:21:20 2018
From: johnsomor at gmail.com (Michael Johnson)
Date: Wed, 11 Apr 2018 13:21:20 -0700
Subject: Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME
In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID:

I am willing to help with maintenance (patch reviews/gate fixes), but I cannot commit time to development work on it.

Michael

On Wed, Apr 11, 2018 at 6:21 AM, Chris Dent wrote:
> On Wed, 11 Apr 2018, Dougal Matthews wrote:
>
>> I would like to see us move away from WSME. I'm not sure I have time to
>> drive an effort in finding a replacement (and migration path) but I would
>> certainly like to help.
>
> Dougal and I talked about this in IRC and agreed that being able to
> merge changes in WSME would help the goal of establishing a
> migration path. So I've added him to WSME cores.
>
> --
> Chris Dent ٩◔̯◔۶ https://anticdent.org/
> freenode: cdent tw: @anticdent
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org Wed Apr 11 20:42:02 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 11 Apr 2018 21:42:02 +0100 (BST)
Subject: Re: [openstack-dev] [all] [api] Re-Reminder on the state of WSME
In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID:

On Wed, 11 Apr 2018, Michael Johnson wrote:
> I am willing to help with maintenance (patch reviews/gate fixes), but
> I cannot commit time to development work on it.

Michael and I also spoke in IRC and he too is now a WSME core.

Thanks to both of you for stepping up and being willing to help out.

--
Chris Dent ٩◔̯◔۶ https://anticdent.org/
freenode: cdent tw: @anticdent

From sean.mcginnis at gmx.com Wed Apr 11 20:47:16 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 11 Apr 2018 15:47:16 -0500
Subject: [openstack-dev] [all] Changes to direct tagging by projects under governance
Message-ID: <20180411204716.GA8852@sm-xps>

Hey all,

We've had a semi-official thing until now that when projects are accepted under governance, they then do all of their tagging and releases via our official release process by submitting patches to the openstack/releases repo.

From time to time we would come across projects that either were not aware of this, or had someone new that would push up new tags. This could cause some complications, or at least confusion.

Normally when a project came under governance, changes would be made to their gerrit ACLs, but that step was not always remembered.

This is really kind of a clean up, but I wanted to make sure everyone was aware of this just in case. The TC has officially updated the new project documentation [1], and we are now merging a patch to remove those leftover ACL rights from projects that should no longer have rights to push tags [2].

[1] https://review.openstack.org/#/c/557737
[2] https://review.openstack.org/#/c/557730/

If there are any questions about this, please let me know, or grab someone in the #openstack-releases channel.

Thanks!
Sean

From mikal at stillhq.com Wed Apr 11 22:09:44 2018
From: mikal at stillhq.com (Michael Still)
Date: Thu, 12 Apr 2018 08:09:44 +1000
Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt
Message-ID:

Hi,

https://review.openstack.org/#/c/523387 proposes adding a z/VM specific dependency to nova's requirements.txt. When I objected, the counter argument was that we have examples of windows specific dependencies (os-win) and powervm specific dependencies in that file already.

I think perhaps all three are a mistake and should be removed.

My recollection is that for drivers like ironic which may not be deployed by everyone, we have the dependency documented, and then loaded at runtime by the driver itself instead of adding it to requirements.txt. This is to stop pip from auto-installing the dependency for anyone who wants to run nova. I had assumed this was at the request of the deployer community.

So what do we do with z/VM? Do we clean this up? Or do we now allow dependencies that are only useful to a very small number of deployments into requirements.txt?
Michael
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sbaker at redhat.com Wed Apr 11 22:20:25 2018
From: sbaker at redhat.com (Steve Baker)
Date: Thu, 12 Apr 2018 10:20:25 +1200
Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: References: Message-ID:

On 12/04/18 00:58, Wesley Hayutin wrote:
>
> On Tue, 10 Apr 2018 at 20:51 Emilien Macchi wrote:
>
>     Greetings,
>
>     Steve Baker and I had a quick chat today about the work that is
>     being done around containers workflow in Rocky cycle.
>
>     If you're not familiar with the topic, I suggest to first read the
>     blueprint to understand the context here:
>     https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
>     One of the great outcomes of this blueprint is that in Rocky, the
>     operator won't have to run all the "openstack overcloud container"
>     commands to prepare the container registry and upload the
>     containers. Indeed, it'll be driven by Heat and Mistral mostly.
>     But today our discussion extended on 2 uses-cases that we're going
>     to explore and find how we can address them:
>     1) I'm a developer and want to deploy a containerized undercloud
>     with customized containers (more or less related to the all-in-one
>     discussions on another thread [1]).
>     2) I'm submitting a patch in tripleo-common (let's say a workflow)
>     and need my patch to be tested when the undercloud is
>     containerized (see [2] for an excellent example).
>
>     Both cases would require additional things:
>     - The container registry needs to be deployed *before* actually
>     installing the undercloud.
>     - We need a tool to update containers from this registry and
>     *before* deploying them. We already have this tool in place in our
>     CI for the overcloud (see [3] and [4]). Now we need a similar
>     thing for the undercloud.
>
>     Next steps:
>     - Agree that we need to deploy the container-registry before the
>     undercloud.
>     - If agreed, we'll create a new Ansible role called
>     ansible-role-container-registry that for now will deploy exactly
>     what we have in TripleO, without extra feature.
>     - Drive the playbook runtime from tripleoclient to bootstrap the
>     container registry (which of course could be disabled in
>     undercloud.conf).
>     - Create another Ansible role that would re-use container-check
>     tool but the idea is to provide a role to modify containers when
>     needed, and we could also control it from tripleoclient. The role
>     would be using the ContainerImagePrepare parameter, which Steve is
>     working on right now.
>
> This all looks really good Emilien, thanks for sending it out.
> Regarding the update of containers, we would just want to be 100% sure
> that we can control which yum repositories are in play for the
> update. Maybe it will be done by the user prior to running the
> command, or maybe with some flags to whatever command Steve is
> working on.

Is it enough to retain the existing container-check behavior of just mounting in the undercloud's /etc/yum.repos.d?

> FYI.. we've noticed in CI that when the base os updates (not baseos)
> are included you tend to fail on at least one package download on one
> of the 50+ containers due to infra/network. In CI we only enable
> baseos, dlrn updates and the dependency change [1]

It would be interesting to see what speed/reliability change there would be if the concurrency of container-check was disabled and the undercloud's /var/cache/yum was mounted into each container to avoid duplicate package download.
> Thanks
>
> [1] https://github.com/openstack/tripleo-quickstart-extras/blob/master/roles/overcloud-prep-containers/templates/overcloud-prep-containers.sh.j2#L104-L109
>
>     Feedback is welcome, thanks.
>
>     [1] All-In-One thread:
>     http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
>     [2] Bug report when undercloud is containerized
>     https://bugs.launchpad.net/tripleo/+bug/1762422
>     [3] Tool to update containers if needed:
>     https://github.com/imain/container-check
>     [4] Container-check running in TripleO CI:
>     https://review.openstack.org/#/c/558885/ and
>     https://review.openstack.org/#/c/529399/
>     --
>     Emilien Macchi
>     __________________________________________________________________________
>     OpenStack Development Mailing List (not for usage questions)
>     Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org Wed Apr 11 22:20:55 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 11 Apr 2018 15:20:55 -0700
Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt
In-Reply-To: References: Message-ID: <1523485255.330518.1334929280.391B0C51@webmail.messagingengine.com>

On Wed, Apr 11, 2018, at 3:09 PM, Michael Still wrote:
> Hi,
>
> https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
> dependency to nova's requirements.txt. When I objected, the counter argument
> was that we have examples of windows specific dependencies (os-win) and
> powervm specific dependencies in that file already.
>
> I think perhaps all three are a mistake and should be removed.
>
> My recollection is that for drivers like ironic which may not be deployed
> by everyone, we have the dependency documented, and then loaded at runtime
> by the driver itself instead of adding it to requirements.txt. This is to
> stop pip from auto-installing the dependency for anyone who wants to run
> nova. I had assumed this was at the request of the deployer community.
>
> So what do we do with z/VM? Do we clean this up? Or do we now allow
> dependencies that are only useful to a very small number of deployments
> into requirements.txt?
>
> Michael

I think there are two somewhat related issues here.

The first is being able to have platform specific dependencies so that nova can run on say python2 and python3, or linux and windows, using the same requirements list. To address this you should use environment markers [0] to specify when a specific environment needs additional or different packages to function, and those should probably all just go into requirements.txt.

The second issue is enabling optional functionality that a default install shouldn't reasonably have to worry about (and is install platform independent). For this you can use setuptools extras [1]. For an example of how this is used along with setup.cfg and PBR you can look at swiftclient [2]. Then users that know they want the extra features will execute something like `pip install python-swiftclient[keystone]`.
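As a rough sketch of both approaches (the version pins and the extra's name below are illustrative, not copied from nova's actual files), a platform specific dependency is a requirements.txt line with an environment marker:

    os-win>=3.0.0;sys_platform=='win32'

while optional functionality is an extras section in setup.cfg, in the PBR format swiftclient uses:

    [extras]
    zvm =
      zVMCloudConnector>=1.1.0

after which something like `pip install nova[zvm]` would pull in the optional library only for deployments that want it.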
[0] https://www.python.org/dev/peps/pep-0496/ [1] http://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies [2] https://git.openstack.org/cgit/openstack/python-swiftclient/tree/setup.cfg#n35 Hope this helps, Clark From mikal at stillhq.com Wed Apr 11 22:35:02 2018 From: mikal at stillhq.com (Michael Still) Date: Thu, 12 Apr 2018 08:35:02 +1000 Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format Message-ID: Heya, https://review.openstack.org/#/c/527658 is a z/VM patch which introduces their support for config drive. They do this by attaching a tarball to the instance, having pretended in the nova code that it is an iso9660. This worries me. In the past we've been concerned about adding new filesystem formats for config drives, and the long term support implications of that -- the filesystem formats for config drive that we use today were carefully selected as being universally supported by our guest operating systems. The previous example we've had of these issues is the parallels driver, which had similar "my hypervisor doesn't support these filesystem format" concerns. We worked around those concerns IIRC, and certainly virt.configdrive still only supports iso9660 and vfat. Discuss. Michael -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbaker at redhat.com Wed Apr 11 22:38:02 2018 From: sbaker at redhat.com (Steve Baker) Date: Thu, 12 Apr 2018 10:38:02 +1200 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: Message-ID: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> On 11/04/18 12:50, Emilien Macchi wrote: > Greetings, > > Steve Baker and I had a quick chat today about the work that is being > done around containers workflow in Rocky cycle. > > If you're not familiar with the topic, I suggest to first read the > blueprint to understand the context here: > https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow > > One of the great outcomes of this blueprint is that in Rocky, the > operator won't have to run all the "openstack overcloud container" > commands to prepare the container registry and upload the containers. > Indeed, it'll be driven by Heat and Mistral mostly. > But today our discussion extended on 2 uses-cases that we're going to > explore and find how we can address them: > 1) I'm a developer and want to deploy a containerized undercloud with > customized containers (more or less related to the all-in-one > discussions on another thread [1]). > 2) I'm submitting a patch in tripleo-common (let's say a workflow) and > need my patch to be tested when the undercloud is containerized (see > [2] for an excellent example). I'm fairly sure the only use cases for this will be developer or CI based. I think we need to be strongly encouraging image modifications for production deployments to go through some kind of image building pipeline. See Next Steps below for the implications of this. > Both cases would require additional things: > - The container registry needs to be deployed *before* actually > installing the undercloud. > - We need a tool to update containers from this registry and *before* > deploying them. We already have this tool in place in our CI for the > overcloud (see [3] and [4]). Now we need a similar thing for the > undercloud. One problem I see is that we use roles and environment files to filter the images to be pulled/modified/uploaded. 
Now we would need to assemble a list of undercloud *and* overcloud environments, and build some kind of aggregate role data for both. This would need to happen before the undercloud is even deployed, which is quite a different order from what quickstart does currently. Either that or we do no image filtering and just process every image regardless of whether it will be used. > Next steps: > - Agree that we need to deploy the container-registry before the > undercloud. > - If agreed, we'll create a new Ansible role called > ansible-role-container-registry that for now will deploy exactly what > we have in TripleO, without extra feature. +1 > - Drive the playbook runtime from tripleoclient to bootstrap the > container registry (which of course could be disabled in undercloud.conf). tripleoclient could switch to using this role instead of puppet-tripleo to install the registry, however since the only use-cases we have are dev/CI driven I wonder if quickstart/infrared can just invoke the role when required, before tripleoclient is involved. > - Create another Ansible role that would re-use container-check tool > but the idea is to provide a role to modify containers when needed, > and we could also control it from tripleoclient. The role would be > using the ContainerImagePrepare parameter, which Steve is working on > right now. > Since the use cases are all upstream CI/dev I do wonder if we should just have a dedicated container-check role inside tripleo-quickstart-extras which can continue to use the script[3] or whatever. Keeping the logic in quickstart will remove the temptation to use it instead of a proper image build pipeline for production deployments. Alternatively it could still be a standalone role which quickstart invokes, just to accommodate development workflows which don't use quickstart. > Feedback is welcome, thanks. > > [1] All-In-One thread: > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html > [2] Bug report when undercloud is containeirzed > https://bugs.launchpad.net/tripleo/+bug/1762422 > [3] Tool to update containers if needed: > https://github.com/imain/container-check > [4] Container-check running in TripleO CI: > https://review.openstack.org/#/c/558885/ and > https://review.openstack.org/#/c/529399/ > -- > Emilien Macchi > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Wed Apr 11 23:28:58 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 11 Apr 2018 16:28:58 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format In-Reply-To: (Michael Still's message of "Thu, 12 Apr 2018 08:35:02 +1000") References: Message-ID: > https://review.openstack.org/#/c/527658 is a z/VM patch which > introduces their support for config drive. They do this by attaching a > tarball to the instance, having pretended in the nova code that it is > an iso9660. This worries me. > > In the past we've been concerned about adding new filesystem formats > for config drives, and the long term support implications of that -- > the filesystem formats for config drive that we use today were > carefully selected as being universally supported by our guest > operating systems. 
>
> The previous example we've had of these issues is the parallels
> driver, which had similar "my hypervisor doesn't support these
> filesystem format" concerns. We worked around those concerns IIRC, and
> certainly virt.configdrive still only supports iso9660 and vfat.

Yeah, IIRC, the difference with the parallels driver was that it ends up mounted in the container automagically for the guest by the..uh..man behind the curtain. However, z/VM being much more VM-y I imagine that the guest is just expected to grab that blob and do something with it to extract it on local disk at runtime or something. That concerns me too.

In the past I've likened adding filesystem (or format, in this case) options to configdrive as a guest ABI change. I think the stability of what we present to guests is second only to our external API in terms of importance. I know z/VM is "weird" or "different", but I wouldn't want a more conventional hypervisor exposing the configdrive as a tarball, so I don't really think it's a precedent we should set. Both vfat and iso9660 are easily supportable by most everything on the planet so I don't think it's an unreasonable bar.

--Dan

From mikal at stillhq.com Wed Apr 11 23:31:45 2018
From: mikal at stillhq.com (Michael Still)
Date: Thu, 12 Apr 2018 09:31:45 +1000
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To: References: Message-ID:

The more I think about it, the more I dislike how the proposed driver also "lies" about it using iso9660. That's definitely wrong:

    if CONF.config_drive_format in ['iso9660']:
        # cloud-init only support iso9660 and vfat, but in z/VM
        # implementation, can't link a disk to VM as iso9660 before it's
        # boot ,so create a tgz file then send to the VM deployed, and
        # during startup process, the tgz file will be extracted and
        # mounted as iso9660 format then cloud-init is able to consume it
        self._make_tgz(path)
    else:
        raise exception.ConfigDriveUnknownFormat(
            format=CONF.config_drive_format)

Michael

On Thu, Apr 12, 2018 at 9:28 AM, Dan Smith wrote:
> > https://review.openstack.org/#/c/527658 is a z/VM patch which
> > introduces their support for config drive. They do this by attaching a
> > tarball to the instance, having pretended in the nova code that it is
> > an iso9660. This worries me.
> >
> > In the past we've been concerned about adding new filesystem formats
> > for config drives, and the long term support implications of that --
> > the filesystem formats for config drive that we use today were
> > carefully selected as being universally supported by our guest
> > operating systems.
> >
> > The previous example we've had of these issues is the parallels
> > driver, which had similar "my hypervisor doesn't support these
> > filesystem format" concerns. We worked around those concerns IIRC, and
> > certainly virt.configdrive still only supports iso9660 and vfat.
>
> Yeah, IIRC, the difference with the parallels driver was that it ends up
> mounted in the container automagically for the guest by the..uh..man
> behind the curtain. However, z/VM being much more VM-y I imagine that
> the guest is just expected to grab that blob and do something with it to
> extract it on local disk at runtime or something. That concerns me too.
>
> In the past I've likened adding filesystem (or format, in this case)
> options to configdrive as a guest ABI change. I think the stability of
> what we present to guests is second only to our external API in terms of
> importance.
> I know z/VM is "weird" or "different", but I wouldn't want a
> more conventional hypervisor exposing the configdrive as a tarball, so I
> don't really think it's a precedent we should set. Both vfat and iso9660
> are easily supportable by most everything on the planet so I don't think
> it's an unreasonable bar.
>
> --Dan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mriedemos at gmail.com Thu Apr 12 00:45:44 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Wed, 11 Apr 2018 19:45:44 -0500
Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependencies in requirements.txt
In-Reply-To: References: Message-ID: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com>

On 4/11/2018 5:09 PM, Michael Still wrote:
>
> https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
> dependency to nova's requirements.txt. When I objected, the counter
> argument was that we have examples of windows specific dependencies
> (os-win) and powervm specific dependencies in that file already.
>
> I think perhaps all three are a mistake and should be removed.
>
> My recollection is that for drivers like ironic which may not be
> deployed by everyone, we have the dependency documented, and then loaded
> at runtime by the driver itself instead of adding it to
> requirements.txt. This is to stop pip from auto-installing the dependency
> for anyone who wants to run nova. I had assumed this was at the request
> of the deployer community.
>
> So what do we do with z/VM? Do we clean this up? Or do we now allow
> dependencies that are only useful to a very small number of deployments
> into requirements.txt?

As Eric pointed out in the review, this came up when pypowervm was added:

https://review.openstack.org/#/c/438119/5/requirements.txt

And you're asking the same questions I did in there, which was: should it go into test-requirements.txt like oslo.vmware and python-ironicclient, or should it go under [extras], or go into requirements.txt like os-win (we also have the xenapi library now too).

I don't really think all of these optional packages should be in requirements.txt, but we should just be consistent with whatever we do, be that test-requirements.txt or [extras]. I remember caring more about this back in my rpm packaging days when we actually tracked what was in requirements.txt to decide what needed to go into the rpm spec, unlike Fedora rpm specs which just zero out requirements.txt and depend on their own knowledge of what needs to be installed (which is sometimes lacking or lagging master).

I also seem to remember that [extras] was less than user-friendly for some reason, but maybe that was just because of how our CI jobs are set up? Or I'm just making that up. I know it's pretty simple to install the stuff from extras for tox runs, it's just an extra set of dependencies to list in the tox.ini.

Having said all this, I don't have the energy to help push for consistency myself, but will happily watch you from the sidelines.

--
Thanks,
Matt

From zhipengh512 at gmail.com Thu Apr 12 02:12:10 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Thu, 12 Apr 2018 10:12:10 +0800
Subject: [openstack-dev] [election][tc]TC candidacy
Message-ID:

Hi all,

I'm announcing my candidacy for the OpenStack Technical Committee. I started following the OpenStack community at the Portland Summit in 2013, and have been an integral part of it ever since.
I'm currently serving as the PTL for the Cyborg project [0], which provides a general management framework for accelerators. I'm also serving as the co-chair of the Public Cloud WG [1] and an active member of the First Contact SIG [2], and was a contributor to the Interop WG throughout 2017 [3]. Outside of OpenStack, I'm one of the founding co-leads of the Kubernetes Policy WG [4] and the ecosystem lead for the OpenSDS community [5], and I served as the PTL of the OPNFV Parser project from 2014 to 2016 [6]. I've also been involved with the Open Service Broker API and SPDK communities, where my team members are working.

I would like to think my strengths are in areas like cross-community collaboration, community team building, and non-stop innovation. I believe these are also the areas that my future work on the Technical Committee should continue to bring forward.

** Cross Community Collaboration **

Those of you who are familiar with my work will know that I've always taken a full-stack approach to open source community work and strongly believe in the value of collaboration.

From the very start of building the *Cyborg project*, we collaborated with the OPNFV community and also had a concrete plan for working with communities like Kubernetes, Linaro, ONNX and so forth. In my work on *OpenSDS*, I've repeatedly emphasized the importance of being able to work with OpenStack and Kubernetes, rather than dropping something in and claiming it would be better to replace an existing module that has been built by a lot of community work. During our discussions in the *Kubernetes Policy WG* on multi-tenancy, I've also introduced the great work the Keystone team has done and tried to build synergy there.

Hence, if I were elected to the Technical Committee, I would like to push further on community collaboration in, but not limited to, the following areas:

*- Data model alignment regarding accelerators between OpenStack and Kubernetes, via the Cyborg project and the Resource Management SIG.*
*- Alignment regarding policy architecture between OpenStack and Kubernetes, via the Kubernetes Policy WG as well as the Keystone team.*

** Community Team Building **

With the hype bubble currently bursting, I've seen much commentary on how OpenStack "is getting outdated" and is not "technically cool" any more. Setting aside the absurdity of the technical claims, I think one of the core things people learn in the OpenStack community is the governance, the way we work here.

Take *Cyborg* for example: from day one I've strictly followed the Four Opens principles and tried to build a good team structure by learning from great teams like Nova, Cinder, Neutron, etc. The Cyborg project started from basically zero, and I intentionally avoided the kind of code dumping we've seen in many open source projects. We designed the specs through open discussion, wrote the code with public reviews, and continued on. When few people believed even this could work, we made it happen.

The reward has been awesome. For example, in the nova-cyborg collaboration, by not mandating a certain design philosophy, we have great Nova developers joining our project meetings from time to time, providing valuable comments on how to better design the interaction, and helping review the specs. For a new project, I dare say we've got the best and most logical architecture design with regard to the nova interaction.
With that said, community team building will be another important theme of my future work on the TC:

*- Leveraging the First Contact SIG to try to incubate or help more projects that know how to build their team in a community way instead of a corporate way.*
*- Continuing to build the Cyborg team structure, enabling reasonable sub-team work, and encouraging more developers to join and contribute.*
*- Enabling more collaboration between projects and WGs/SIGs. We have some good experience with Cyborg working with the Scientific SIG, as well as the Public Cloud WG working with the Nova/Keystone teams, and I think we could make further progress there.*

** Non Stop Innovation **

OpenStack offers the ultimate open source cloud computing infrastructure, and there are just so many exciting new things we could do with it. I've been experimenting with ideas for *how Cyborg could better support AI applications, and also the possibility of utilizing blockchain for the Passport Program [7]*. I plan to keep bringing new things like these forward, when given the opportunity to serve on the Technical Committee, to keep OpenStack's cutting edge as sharp as ever :)

Thank you for taking the time to read such a long letter, and please vote for me and any other candidate that you see value in. A great community could not exist without your important voice.

[0] https://governance.openstack.org/election/results/rocky/ptl.html
[1] https://wiki.openstack.org/wiki/PublicCloudWorkingGroup
[2] https://wiki.openstack.org/wiki/First_Contact_SIG
[3] https://review.openstack.org/#/q/project:openstack/interop+owner:%22Zhipeng+Huang+%253Chuangzhipeng%2540huawei.com%253E%22
[4] https://github.com/kubernetes/community/tree/master/wg-policy
[5] https://hannibalhuang.github.io/2017/12/27/opensds-official/
[6] https://hannibalhuang.github.io/2016/02/27/opnfv-parser/
[7] https://docs.google.com/presentation/d/1RYRq1YdYEoZ5KNKwlDDtnunMdoYRAHPjPslnng3VqcI/edit?usp=sharing

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pratapagoutham at gmail.com Thu Apr 12 04:44:47 2018
From: pratapagoutham at gmail.com (Goutham Pratapa)
Date: Thu, 12 Apr 2018 04:44:47 +0000
Subject: Re: [openstack-dev] [rally] Moving OpenStack plugins into separate repo
In-Reply-To: References: Message-ID:

Hi Andrey,

Great to hear this! Cheers, and I wish you all luck.

Goutham.

On Wed, 11 Apr 2018 at 11:00 PM, Boris Pavlovic wrote:
> Andrey,
>
> Great news!
>
> Best regards,
> Boris Pavlovic
>
> On Wed, Apr 11, 2018 at 9:14 AM, Andrey Kurilin wrote:
>
>> Hi Stackers!
>>
>> Today I am happy to announce great news!
>>
>> From a historical perspective, Rally has been a testing (benchmarking) tool for
>> OpenStack, but that has changed. More and more users want to use Rally for
>> different platforms and environments. Our pluggable system allows doing
>> this.
>> To make the framework lightweight and to simplify our release model, we
>> decided to move the OpenStack plugins to a separate repository [1].
>>
>> [1] https://git.openstack.org/cgit/openstack/rally-openstack
>>
>> We cut the first release, 1.0.0, two weeks ago, and it is published to
>> PyPI [2].
>>
>> [2] https://pypi.python.org/pypi/rally-openstack
>>
>> If you are a Rally consumer and do not have custom plugins, the migration
>> should be simple. Just install the rally-openstack package instead of rally and
>> everything will work as previously. rally-openstack depends on
>> rally, so you need nothing more than installing that one package.
>>
>> If you have custom plugins, do not worry, the migration should be simple
>> for you too. The first release has the same structure as the plugins had in the rally
>> repository. The only thing that needs to change is importing
>> rally_openstack instead of rally.plugins.openstack.
>>
>> --
>> Best regards,
>> Andrey Kurilin.
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
--
Cheers !!!
Goutham Pratapa
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sgolovat at redhat.com Thu Apr 12 08:08:54 2018
From: sgolovat at redhat.com (Sergii Golovatiuk)
Date: Thu, 12 Apr 2018 10:08:54 +0200
Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: References: Message-ID:

Hi,

Thank you very much for bringing up this topic.

On Wed, Apr 11, 2018 at 2:50 AM, Emilien Macchi wrote:
> Greetings,
>
> Steve Baker and I had a quick chat today about the work that is being done
> around containers workflow in Rocky cycle.
>
> If you're not familiar with the topic, I suggest to first read the blueprint
> to understand the context here:
> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>
> One of the great outcomes of this blueprint is that in Rocky, the operator
> won't have to run all the "openstack overcloud container" commands to
> prepare the container registry and upload the containers. Indeed, it'll be
> driven by Heat and Mistral mostly.

I am trying to think as an operator, and the name is very similar to 'openstack container', which is Swift. So it might be confusing I guess.

> But today our discussion extended on 2 uses-cases that we're going to
> explore and find how we can address them:
> 1) I'm a developer and want to deploy a containerized undercloud with
> customized containers (more or less related to the all-in-one discussions on
> another thread [1]).
> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and need
> my patch to be tested when the undercloud is containerized (see [2] for an
> excellent example).

That's a very nice initiative.

> Both cases would require additional things:
> - The container registry needs to be deployed *before* actually installing
> the undercloud.
> - We need a tool to update containers from this registry and *before*
> deploying them. We already have this tool in place in our CI for the
> overcloud (see [3] and [4]). Now we need a similar thing for the undercloud.

I would use an external registry in this case. Quay.io might be a good choice for rock-solid simplicity. It might not be good for CI, as it requires very strong connectivity, but it should be sufficient for developers.
> Next steps:
> - Agree that we need to deploy the container-registry before the undercloud.
> - If agreed, we'll create a new Ansible role called
> ansible-role-container-registry that for now will deploy exactly what we
> have in TripleO, without extra feature.

Deploy our own registry as part of the UC deployment, or use an external one. For instance, for production use I would like to have a cluster of 3-5 registries with HAProxy in front to speed up 1k-node deployments.

> - Drive the playbook runtime from tripleoclient to bootstrap the container
> registry (which of course could be disabled in undercloud.conf).
> - Create another Ansible role that would re-use container-check tool but the
> idea is to provide a role to modify containers when needed, and we could
> also control it from tripleoclient. The role would be using the
> ContainerImagePrepare parameter, which Steve is working on right now.
>
> Feedback is welcome, thanks.
>
> [1] All-In-One thread:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
> [2] Bug report when undercloud is containerized
> https://bugs.launchpad.net/tripleo/+bug/1762422
> [3] Tool to update containers if needed:
> https://github.com/imain/container-check
> [4] Container-check running in TripleO CI:
> https://review.openstack.org/#/c/558885/ and
> https://review.openstack.org/#/c/529399/
> --
> Emilien Macchi
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

--
Best Regards,
Sergii Golovatiuk

From bdobreli at redhat.com Thu Apr 12 08:10:37 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 12 Apr 2018 10:10:37 +0200
Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> Message-ID: <5f6aa0f7-9124-86cd-665b-66c1af81f117@redhat.com>

On 4/12/18 12:38 AM, Steve Baker wrote:
>
> On 11/04/18 12:50, Emilien Macchi wrote:
>> Greetings,
>>
>> Steve Baker and I had a quick chat today about the work that is being
>> done around containers workflow in Rocky cycle.
>>
>> If you're not familiar with the topic, I suggest to first read the
>> blueprint to understand the context here:
>> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>>
>> One of the great outcomes of this blueprint is that in Rocky, the
>> operator won't have to run all the "openstack overcloud container"
>> commands to prepare the container registry and upload the containers.
>> Indeed, it'll be driven by Heat and Mistral mostly.
>> But today our discussion extended on 2 uses-cases that we're going to
>> explore and find how we can address them:
>> 1) I'm a developer and want to deploy a containerized undercloud with
>> customized containers (more or less related to the all-in-one
>> discussions on another thread [1]).
>> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and
>> need my patch to be tested when the undercloud is containerized (see
>> [2] for an excellent example).
>
> I'm fairly sure the only use cases for this will be developer or CI
> based. I think we need to be strongly encouraging image modifications
> for production deployments to go through some kind of image building
> pipeline. See Next Steps below for the implications of this.

Yes, this.
I would love to see the container-check tool improving the CI and dev experience, and would not be happy to see it as a blessed part of the product architecture. Containers should be immutable, and nothing should be mutated at runtime, like updating packages et al.

>
>> Both cases would require additional things:
>> - The container registry needs to be deployed *before* actually
>> installing the undercloud.
>> - We need a tool to update containers from this registry and *before*
>> deploying them. We already have this tool in place in our CI for the
>> overcloud (see [3] and [4]). Now we need a similar thing for the
>> undercloud.
>
> One problem I see is that we use roles and environment files to filter
> the images to be pulled/modified/uploaded. Now we would need to assemble
> a list of undercloud *and* overcloud environments, and build some kind
> of aggregate role data for both. This would need to happen before the
> undercloud is even deployed, which is quite a different order from what
> quickstart does currently.
>
> Either that or we do no image filtering and just process every image
> regardless of whether it will be used.
>
>> Next steps:
>> - Agree that we need to deploy the container-registry before the
>> undercloud.
>> - If agreed, we'll create a new Ansible role called
>> ansible-role-container-registry that for now will deploy exactly what
>> we have in TripleO, without extra feature.
> +1
>> - Drive the playbook runtime from tripleoclient to bootstrap the
>> container registry (which of course could be disabled in undercloud.conf).
> tripleoclient could switch to using this role instead of puppet-tripleo
> to install the registry, however since the only use-cases we have are
> dev/CI driven I wonder if quickstart/infrared can just invoke the role
> when required, before tripleoclient is involved.
>
>> - Create another Ansible role that would re-use container-check tool
>> but the idea is to provide a role to modify containers when needed,
>> and we could also control it from tripleoclient. The role would be
>> using the ContainerImagePrepare parameter, which Steve is working on
>> right now.
>>
> Since the use cases are all upstream CI/dev I do wonder if we should
> just have a dedicated container-check role inside
> tripleo-quickstart-extras which can continue to use the script[3] or
> whatever. Keeping the logic in quickstart will remove the temptation to
> use it instead of a proper image build pipeline for production deployments.
>
> Alternatively it could still be a standalone role which quickstart
> invokes, just to accommodate development workflows which don't use
> quickstart.
>
>> Feedback is welcome, thanks.
>>
>> [1] All-In-One thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
>> [2] Bug report when undercloud is containerized
>> https://bugs.launchpad.net/tripleo/+bug/1762422
>> [3] Tool to update containers if needed:
>> https://github.com/imain/container-check
>> [4] Container-check running in TripleO CI:
>> https://review.openstack.org/#/c/558885/ and
>> https://review.openstack.org/#/c/529399/
>> --
>> Emilien Macchi
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From bdobreli at redhat.com Thu Apr 12 08:16:18 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 12 Apr 2018 10:16:18 +0200
Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: References: Message-ID: <25259749-6def-c1c9-2525-1dc97ec1a5a7@redhat.com>

On 4/12/18 10:08 AM, Sergii Golovatiuk wrote:
> Hi,
>
> Thank you very much for bringing up this topic.
>
> On Wed, Apr 11, 2018 at 2:50 AM, Emilien Macchi wrote:
>> Greetings,
>>
>> Steve Baker and I had a quick chat today about the work that is being done
>> around containers workflow in Rocky cycle.
>>
>> If you're not familiar with the topic, I suggest to first read the blueprint
>> to understand the context here:
>> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow
>>
>> One of the great outcomes of this blueprint is that in Rocky, the operator
>> won't have to run all the "openstack overcloud container" commands to
>> prepare the container registry and upload the containers. Indeed, it'll be
>> driven by Heat and Mistral mostly.
>
> I am trying to think as an operator, and the name is very similar to 'openstack
> container', which is Swift. So it might be confusing I guess.
>
>> But today our discussion extended on 2 uses-cases that we're going to
>> explore and find how we can address them:
>> 1) I'm a developer and want to deploy a containerized undercloud with
>> customized containers (more or less related to the all-in-one discussions on
>> another thread [1]).
>> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and need
>> my patch to be tested when the undercloud is containerized (see [2] for an
>> excellent example).
>
> That's a very nice initiative.
>
>> Both cases would require additional things:
>> - The container registry needs to be deployed *before* actually installing
>> the undercloud.
>> - We need a tool to update containers from this registry and *before*
>> deploying them. We already have this tool in place in our CI for the
>> overcloud (see [3] and [4]). Now we need a similar thing for the undercloud.
>
> I would use an external registry in this case. Quay.io might be a good
> choice for rock-solid simplicity. It might not be good for CI, as it
> requires very strong connectivity, but it should be sufficient for
> developers.
>
>> Next steps:
>> - Agree that we need to deploy the container-registry before the undercloud.
>> - If agreed, we'll create a new Ansible role called >> ansible-role-container-registry that for now will deploy exactly what we >> have in TripleO, without extra feature. > > Deploy own registry as part of UC deployment or use external. For > instance, for production use I would like to have a cluster of 3-5 > registries with HAProxy in front to speed up 1k nodes deployments. Note that this implies HA undercloud as well. Although, given that HA undercloud is goodness indeed, I would *not* invest time into reliable container registry deployment architecture for undercloud as we'll have it for free, once kubernetes/openshift control plane for openstack becomes adopted. There is a very strong notion of build pipelines, reliable containers registries et al. > >> - Drive the playbook runtime from tripleoclient to bootstrap the container >> registry (which of course could be disabled in undercloud.conf). >> - Create another Ansible role that would re-use container-check tool but the >> idea is to provide a role to modify containers when needed, and we could >> also control it from tripleoclient. The role would be using the >> ContainerImagePrepare parameter, which Steve is working on right now. >> >> Feedback is welcome, thanks. >> >> [1] All-In-One thread: >> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html >> [2] Bug report when undercloud is containeirzed >> https://bugs.launchpad.net/tripleo/+bug/1762422 >> [3] Tool to update containers if needed: >> https://github.com/imain/container-check >> [4] Container-check running in TripleO CI: >> https://review.openstack.org/#/c/558885/ and >> https://review.openstack.org/#/c/529399/ >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Thu Apr 12 08:23:21 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 12 Apr 2018 10:23:21 +0200 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> Message-ID: On 4/12/18 12:38 AM, Steve Baker wrote: > > > On 11/04/18 12:50, Emilien Macchi wrote: >> Greetings, >> >> Steve Baker and I had a quick chat today about the work that is being >> done around containers workflow in Rocky cycle. >> >> If you're not familiar with the topic, I suggest to first read the >> blueprint to understand the context here: >> https://blueprints.launchpad.net/tripleo/+spec/container-prepare-workflow >> >> One of the great outcomes of this blueprint is that in Rocky, the >> operator won't have to run all the "openstack overcloud container" >> commands to prepare the container registry and upload the containers. >> Indeed, it'll be driven by Heat and Mistral mostly. >> But today our discussion extended on 2 uses-cases that we're going to >> explore and find how we can address them: >> 1) I'm a developer and want to deploy a containerized undercloud with >> customized containers (more or less related to the all-in-one >> discussions on another thread [1]). 
>> 2) I'm submitting a patch in tripleo-common (let's say a workflow) and >> need my patch to be tested when the undercloud is containerized (see >> [2] for an excellent example). > > I'm fairly sure the only use cases for this will be developer or CI > based. I think we need to be strongly encouraging image modifications > for production deployments to go through some kind of image building > pipeline. See Next Steps below for the implications of this. > >> Both cases would require additional things: >> - The container registry needs to be deployed *before* actually >> installing the undercloud. >> - We need a tool to update containers from this registry and *before* >> deploying them. We already have this tool in place in our CI for the >> overcloud (see [3] and [4]). Now we need a similar thing for the >> undercloud. > > One problem I see is that we use roles and environment files to filter > the images to be pulled/modified/uploaded. Now we would need to assemble > a list of undercloud *and* overcloud environments, and build some kind > of aggregate role data for both. This would need to happen before the > undercloud is even deployed, which is quite a different order from what > quickstart does currently. > > Either that or we do no image filtering and just process every image > regardless of whether it will be used. > > >> Next steps: >> - Agree that we need to deploy the container-registry before the >> undercloud. >> - If agreed, we'll create a new Ansible role called >> ansible-role-container-registry that for now will deploy exactly what >> we have in TripleO, without extra feature. > +1 >> - Drive the playbook runtime from tripleoclient to bootstrap the >> container registry (which of course could be disabled in undercloud.conf). > tripleoclient could switch to using this role instead of puppet-tripleo > to install the registry, however since the only use-cases we have are > dev/CI driven I wonder if quickstart/infrared can just invoke the role > when required, before tripleoclient is involved. Please let's do that for tripleoclient and only make quickstart and other tools to invoke commands. We should keep being close to what users would do, which is only issuing client commands. > >> - Create another Ansible role that would re-use container-check tool >> but the idea is to provide a role to modify containers when needed, >> and we could also control it from tripleoclient. The role would be >> using the ContainerImagePrepare parameter, which Steve is working on >> right now. >> > Since the use cases are all upstream CI/dev I do wonder if we should > just have a dedicated container-check > role inside > tripleo-quickstart-extras which can continue to use the script[3] or > whatever. Keeping the logic in quickstart will remove the temptation to > use it instead of a proper image build pipeline for production deployments. > > Alternatively it could still be a standalone role which quickstart > invokes, just to accommodate development workflows which don't use > quickstart. > >> Feedback is welcome, thanks. 
>> >> [1] All-In-One thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
>> [2] Bug report when undercloud is containerized
>> https://bugs.launchpad.net/tripleo/+bug/1762422
>> [3] Tool to update containers if needed:
>> https://github.com/imain/container-check
>> [4] Container-check running in TripleO CI:
>> https://review.openstack.org/#/c/558885/ and
>> https://review.openstack.org/#/c/529399/
>> --
>> Emilien Macchi

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From bdobreli at redhat.com Thu Apr 12 08:26:30 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Thu, 12 Apr 2018 10:26:30 +0200
Subject: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com>
References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com>
Message-ID: 

On 4/12/18 12:38 AM, Steve Baker wrote:
> [...]
> Since the use cases are all upstream CI/dev I do wonder if we should
> just have a dedicated container-check role inside
> tripleo-quickstart-extras which can continue to use the script[3] or
> whatever. Keeping the logic in quickstart will remove the temptation to
> use it instead of a proper image build pipeline for production deployments.

+1 to put it in quickstart-extras to "hide" it from the production use cases.

>
> Alternatively it could still be a standalone role which quickstart
> invokes, just to accommodate development workflows which don't use
> quickstart.
>
>> Feedback is welcome, thanks.
>> >> [1] All-In-One thread:
>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128900.html
>> [2] Bug report when undercloud is containerized
>> https://bugs.launchpad.net/tripleo/+bug/1762422
>> [3] Tool to update containers if needed:
>> https://github.com/imain/container-check
>> [4] Container-check running in TripleO CI:
>> https://review.openstack.org/#/c/558885/ and
>> https://review.openstack.org/#/c/529399/
>> --
>> Emilien Macchi

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From jichenjc at cn.ibm.com Thu Apr 12 09:13:22 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Thu, 12 Apr 2018 17:13:22 +0800
Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt
In-Reply-To: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com>
References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com>
Message-ID: 

Thanks to Michael for raising this question, and to Clark for the detailed
information.

As indicated in the mail, xen, vmware etc. might already have this kind of
requirement (and I guess there might be more than that). Can we accept the
z/VM requirements first, following the existing examples, and then I can
create a BP to cover this kind of change request, referring to Clark's
comments, and submit patches to handle it? Thanks

Best Regards!

Kevin (Chen) Ji 纪 晨

Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Matt Riedemann 
To: openstack-dev at lists.openstack.org
Date: 04/12/2018 08:46 AM
Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt

On 4/11/2018 5:09 PM, Michael Still wrote:
>
> https://review.openstack.org/#/c/523387 proposes adding a z/VM specific
> dependency to nova's requirements.txt. When I objected the counter
> argument is that we have examples of windows specific dependencies
> (os-win) and powervm specific dependencies in that file already.
>
> I think perhaps all three are a mistake and should be removed.
>
> My recollection is that for drivers like ironic which may not be
> deployed by everyone, we have the dependency documented, and then loaded
> at runtime by the driver itself instead of adding it to
> requirements.txt. This is to stop pip from auto-installing the dependency
> for anyone who wants to run nova. I had assumed this was at the request
> of the deployer community.
>
> So what do we do with z/VM? Do we clean this up? Or do we now allow
> dependencies that are only useful to a very small number of deployments
> into requirements.txt?
As Eric pointed out in the review, this came up when pypowervm was added:

https://review.openstack.org/#/c/438119/5/requirements.txt

And you're asking the same questions I did in there, which was, should
it go into test-requirements.txt like oslo.vmware and
python-ironicclient, or should it go under [extras], or go into
requirements.txt like os-win (we also have the xenapi library now too).

I don't really think all of these optional packages should be in
requirements.txt, but we should just be consistent with whatever we do,
be that test-requirements.txt or [extras]. I remember caring more about
this back in my rpm packaging days when we actually tracked what was in
requirements.txt to base what needed to go into the rpm spec, unlike
Fedora rpm specs which just zero out requirements.txt and depend on
their own knowledge of what needs to be installed (which is sometimes
lacking or lagging master).

I also seem to remember that [extras] was less than user-friendly for
some reason, but maybe that was just because of how our CI jobs are
set up? Or I'm just making that up. I know it's pretty simple to install
the stuff from extras for tox runs, it's just an extra set of
dependencies to list in the tox.ini.

Having said all this, I don't have the energy to help push for
consistency myself, but will happily watch you from the sidelines.

--

Thanks,

Matt

From jpichon at redhat.com Thu Apr 12 09:55:20 2018
From: jpichon at redhat.com (Julie Pichon)
Date: Thu, 12 Apr 2018 10:55:20 +0100
Subject: [openstack-dev] [tripleo][ui] Dependency management
In-Reply-To: 
References: <20180119184342.bz5yzdn7t35xkzqu@localhost.localdomain>
Message-ID: 

On 2 March 2018 at 22:52, Alan Pevec wrote:
> On Mon, Jan 22, 2018 at 9:30 AM, Julie Pichon wrote:
>> On 19 January 2018 at 18:43, Honza Pokorny wrote:
>>> We've recently discovered an issue with the way we handle dependencies for
>>> tripleo-ui. This is an explanation of the problem, and a proposed solution.
>>> I'm looking for feedback.
>>>
>>> Before the upgrade to zuul v3 in TripleO CI, we had two types of jobs for
>>> tripleo-ui:
>>>
>>> * Native npm jobs
>>> * Undercloud integration jobs
>>>
>>> After the upgrade, the integration jobs went away. Our goal is to add them
>>> back.
>>>
>>> There is a difference in how these two types of jobs handle dependencies.
>>> Native npm jobs use the "npm install" command to acquire packages, and
>>> undercloud jobs use RPMs. The tripleo-ui project uses a separate RPM for
>>> dependencies called openstack-tripleo-ui-deps.
>>> >>> Because of the requirement to use a separate RPM for dependencies, there is some >>> extra work needed when a new dependency is introduced, or an existing one is >>> upgraded. Once the patch that introduces the dependency is merged, we have to >>> increment the version of the -deps package, and rebuild it. It then shows up in >>> the yum repos used by the undercloud. >>> >>> To make matters worse, we recently upgraded our infrastructure to nodejs 8.9.4 >>> and npm 5.6.0 (latest stable). This makes it so we can't write "purist" patches >>> that simply introduce a new dependency to package.json, and nothing more. The >>> code that uses the new dependency must be included. I tend to think that each >>> commit should work on its own so this can be seen as a plus. >>> >>> This presents a problem: you can't get a patch that introduces a new dependency >>> merged because it's not included in the RPM needed by the undercloud ci job. >>> >>> So, here is a proposal on how that might work: >>> >>> 1. Submit a patch for review that introduces the dependency, along with code >>> changes to support it and validate its inclusion >>> 2. Native npm jobs will pass >>> 3. Undercloud gate job will fail because the dependency isn't in -deps RPM >>> 4. We ask RDO to review for licensing >>> 5. Once reviewed, new -deps package is built >>> 6. Recheck >>> 7. All jobs pass >> >> Perhaps there should be a step after 3 or 4 to have the patch normally >> reviewed, and wait for it to have two +2s before building the new >> package? Otherwise we may end up with wasted work to get a new package >> ready for dependencies that were eventually dismissed. > > Thanks Julie for reminding me of this thread! > > I agree we can only build ui-deps package when the review is about to merge. > I've proposed https://github.com/rdo-common/openstack-tripleo-ui-deps/pull/19 > which allows us to build the package with the review and patchset > numbers, before it's merged. > Please review and we can try it on the next deps update! Thanks Alan! Let's do that :) Glad to see the pull request merged. If we're happy with the new suggested process here, I proposed a patch to update the docs with it at [1]. Hopefully we can move ahead with this and also merge the patch to reenable the undercloud job [2] to get back minimal sanity checking on the UI rpms. Thanks! Julie [1] https://review.openstack.org/#/c/560846/ [2] https://review.openstack.org/#/c/526430/ From sergey.glazyrin.dev at gmail.com Thu Apr 12 12:30:35 2018 From: sergey.glazyrin.dev at gmail.com (Sergey Glazyrin) Date: Thu, 12 Apr 2018 14:30:35 +0200 Subject: [openstack-dev] service/package dependencies Message-ID: Hello guys. Is there a way to automatically find out the dependencies (build tree of dependencies) of openstack services: for example, ceilometer depends on rabbitmq, etc. We are developing a troubleshooting system for openstack and we want to let the user know when some service/package dependency broken that this service/package at risk. We may hardcode such dependencies but I prefer to have some automatic solution. -- Best, Sergey -------------- next part -------------- An HTML attachment was scrubbed... 
From opensrloo at gmail.com Thu Apr 12 12:32:02 2018
From: opensrloo at gmail.com (Ruby Loo)
Date: Thu, 12 Apr 2018 08:32:02 -0400
Subject: [openstack-dev] [ironic] Ironic Bug Day on Thursday April 12th @ 1:00 PM - 3:00 PM (UTC)
In-Reply-To: <2827152b-679f-be88-9172-6fb692140791@linux.vnet.ibm.com>
References: <1233f954-1a90-a966-58ec-f7a20a89fc44@linux.vnet.ibm.com> <2827152b-679f-be88-9172-6fb692140791@linux.vnet.ibm.com>
Message-ID: 

Hi Mike,

This works for me. We can refine/discuss at the bug squashing event. Thanks!

--ruby

On Wed, Apr 11, 2018 at 2:53 PM, Michael Turek wrote:
> Sorry this is so late but as for the format of the event I think we should
> do something like this:
>
> 1) Go through new bugs
>    - This is doable in storyboard. Sort by creation date
>    - Should be a nice warm up activity!
> 2) Go through oldest bugs
>    - Again, doable in storyboard. Sort by last updated.
>    - Older bugs are usually candidates for some clean up. We'll decide if
>      bugs are still valid or if we need to reassign/poke owners.
> 3) Open Floor
>    - If you have a bug that you'd like to discuss, bring it up here!
> 4) Storyboard discussion
>    - One of the reasons we are doing this is to get our feet wet in
>      storyboard. Let's spend 10 to 20 minutes discussing what we need
>      out of the tool after playing with it.
>
> Originally I was hoping that we could sort by task priority but that
> currently seems to be unavailable, or well hidden, in storyboard. If
> someone knows how to do this, please reply.
>
> Does anyone else have any ideas on how to structure bug day?
>
> Thanks!
> Mike
>
>
> On 4/11/18 9:47 AM, Michael Turek wrote:
>> Hey all,
>>
>> Ironic Bug Day is happening tomorrow, April 12th at 1:00 PM - 3:00 PM
>> (UTC)
>>
>> We will be meeting on Julia's bluejeans line:
>> https://bluejeans.com/5548595878
>>
>> Hope to see everyone there!
>>
>> Thanks,
>> Mike Turek

From openstack at fried.cc Thu Apr 12 12:42:59 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 12 Apr 2018 07:42:59 -0500
Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt
In-Reply-To: 
References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com>
Message-ID: <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc>

+1

This sounds reasonable to me. I'm glad the issue was raised, but IMO it
shouldn't derail progress on an approved blueprint with ready code.

Jichen, would you please go ahead and file that blueprint template (no
need to write a spec yet) and link it in a review comment on the bottom
zvm patch so we have a paper trail? I'm thinking something like
"Consistent platform-specific and optional requirements" -- that leaves
us open to decide *how* we're going to "handle" them.
Thanks, efried On 04/12/2018 04:13 AM, Chen CH Ji wrote: > Thanks for Michael for raising this question and detailed information > from Clark > > As indicated in the mail, xen, vmware etc might already have this kind > of requirements (and I guess might be more than that) , > can we accept z/VM requirements first by following other existing ones > then next I can create a BP later to indicate this kind > of change request by referring to Clark's comments and submit patches to > handle it ? Thanks > > Best Regards! > > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > Inactive hide details for Matt Riedemann ---04/12/2018 08:46:25 AM---On > 4/11/2018 5:09 PM, Michael Still wrote: >Matt Riedemann ---04/12/2018 > 08:46:25 AM---On 4/11/2018 5:09 PM, Michael Still wrote: > > > From: Matt Riedemann > To: openstack-dev at lists.openstack.org > Date: 04/12/2018 08:46 AM > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform > specific, dependancies in requirements.txt > > ------------------------------------------------------------------------ > > > > On 4/11/2018 5:09 PM, Michael Still wrote: >> >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_523387&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=CNosrTHnAR21zOI52fnDRfTqu2zPiAn2oW9f67Qijo4&e= proposes > adding a z/VM specific >> dependancy to nova's requirements.txt. When I objected the counter >> argument is that we have examples of windows specific dependancies >> (os-win) and powervm specific dependancies in that file already. >> >> I think perhaps all three are a mistake and should be removed. >> >> My recollection is that for drivers like ironic which may not be >> deployed by everyone, we have the dependancy documented, and then loaded >> at runtime by the driver itself instead of adding it to >> requirements.txt. This is to stop pip for auto-installing the dependancy >> for anyone who wants to run nova. I had assumed this was at the request >> of the deployer community. >> >> So what do we do with z/VM? Do we clean this up? Or do we now allow >> dependancies that are only useful to a very small number of deployments >> into requirements.txt? > > As Eric pointed out in the review, this came up when pypowervm was added: > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_438119_5_requirements.txt&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=iyKxF-CcGAFmnQs8B7d5u2zwEiJqq8ivETmrgB77PEg&e= > > And you're asking the same questions I did in there, which was, should > it go into test-requirements.txt like oslo.vmware and > python-ironicclient, or should it go under [extras], or go into > requirements.txt like os-win (we also have the xenapi library now too). > > I don't really think all of these optional packages should be in > requirements.txt, but we should just be consistent with whatever we do, > be that test-requirements.txt or [extras]. 
I remember caring more about > this back in my rpm packaging days when we actually tracked what was in > requirements.txt to base what needed to go into the rpm spec, unlike > Fedora rpm specs which just zero out requirements.txt and depend on > their own knowledge of what needs to be installed (which is sometimes > lacking or lagging master). > > I also seem to remember that [extras] was less than user-friendly for > some reason, but maybe that was just because of how our CI jobs are > setup? Or I'm just making that up. I know it's pretty simple to install > the stuff from extras for tox runs, it's just an extra set of > dependencies to list in the tox.ini. > > Having said all this, I don't have the energy to help push for > consistency myself, but will happily watch you from the sidelines. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=2FioyzCRtztysjjEqCrBTkpQs_wwfs3Mt2wGDkrft-s&e= > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From aakashkt0 at gmail.com Thu Apr 12 14:03:46 2018 From: aakashkt0 at gmail.com (Aakash Kt) Date: Thu, 12 Apr 2018 19:33:46 +0530 Subject: [openstack-dev] [openstack][charms] Openstack + OVN In-Reply-To: References: Message-ID: Hello, Any update on getting to the development of this charm? I need some guidance on this. Thank you, Aakash On Tue, Mar 27, 2018 at 10:27 PM, Aakash Kt wrote: > Hello, > > So an update about current status. The charm spec for charm-os-ovn has > been merged (queens/backlog). I don't know what the process is after this, > but I had a couple of questions for the development of the charm : > > - I was wondering whether I need to use the charms.openstack package? Or > can I just write using the reactive framework as is? > - If we do have to use charms.openstack, where can I find good > documentation of the package? I searched online and could not find much to > go on with. > - How much time do you think this will take to develop (not including test > cases) ? > > Do guide me on the further steps to bring this charm to completion :-) > > Thank you, > Aakash > > > On Mon, Mar 19, 2018 at 5:37 PM, Aakash Kt wrote: > >> Hi James, >> >> Thank you for the previous code review. >> I have pushed another patch. Also, I do not know how to reply to your >> review comments on gerrit, so I will reply to them here. >> >> About the signed-off-message, I did not know that it wasn't a requirement >> for OpenStack, I assumed it was. I have removed it from the updated patch. 
>> >> Thank you, >> Aakash >> >> >> On Thu, Mar 15, 2018 at 11:34 AM, Aakash Kt wrote: >> >>> Hi James, >>> >>> Just a small reminder that I have pushed a patch for review, according >>> to changes you suggested :-) >>> >>> Thanks, >>> Aakash >>> >>> On Mon, Mar 12, 2018 at 2:38 PM, James Page >>> wrote: >>> >>>> Hi Aakash >>>> >>>> On Sun, 11 Mar 2018 at 19:01 Aakash Kt wrote: >>>> >>>>> Hi, >>>>> >>>>> I had previously put in a mail about the development for openstack-ovn >>>>> charm. Sorry it took me this long to get back, was involved in other >>>>> projects. >>>>> >>>>> I have submitted a charm spec for the above charm. >>>>> Here is the review link : https://review.openstack.org/#/c/551800/ >>>>> >>>>> Please look in to it and we can further discuss how to proceed. >>>>> >>>> >>>> I'll feedback directly on the review. >>>> >>>> Thanks! >>>> >>>> James >>>> >>>> ____________________________________________________________ >>>> ______________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.op >>>> enstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Thu Apr 12 14:25:50 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 12 Apr 2018 09:25:50 -0500 Subject: [openstack-dev] Forum Submissions Reminder + Vancouver Info Message-ID: <5ACF6C6E.6070705@openstack.org> Hello! A quick reminder that the Vancouver Forum Submission deadline is this coming Sunday, April 15th. Submission Process Please proceed to http://forumtopics.openstack.org/ to submit your topics. What is the Forum? If you'd like more details about the Forum, go to https://wiki.openstack.org/wiki/Forum Where do I register for the Summit in Vancouver? https://www.eventbrite.com/e/openstack-summit-may-2018-vancouver-tickets-40845826968?aff=YVRSummit2018 Now get a hotel room for up to 55% off the standard Vancouver rates https://www.openstack.org/summit/vancouver-2018/travel/ Thanks and we look forward to seeing you all in Vancouver! Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Apr 12 14:37:07 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 12 Apr 2018 10:37:07 -0400 Subject: [openstack-dev] Fwd: Summary of PyPI overhaul in new LWN article Message-ID: <1523543639-sup-8235@lrrr.local> I thought some folks from our community would be interested in the ongoing work on the Python Package Index (PyPI). The article mentioned in this post to the distutils mailing list provides a good history and a description of the new and planned features for "Warehouse". Doug --- Begin forwarded message from Sumana Harihareswara --- From: Sumana Harihareswara To: pypa-dev , DistUtils mailing list Date: Wed, 11 Apr 2018 22:30:49 -0400 Subject: [Distutils] Summary of PyPI overhaul in new LWN article Today, LWN published my new article "A new package index for Python". https://lwn.net/Articles/751458/ In it, I discuss security, policy, UX and developer experience changes in the 15+ years since PyPI's founding, new features (and deprecated old features) in Warehouse, and future plans. Plus: screenshots! If you aren't already an LWN subscriber, you can use this subscriber link for the next week to read the article despite the LWN paywall. 
https://lwn.net/SubscriberLink/751458/81b2759e7025d6b9/ This summary should help occasional Python programmers -- and frequent Pythonists who don't follow packaging/distro discussions closely -- understand why a new application is necessary, what's new, what features are going away, and what to expect in the near future. I also hope it catches the attention of downstreams that ought to migrate. --- End forwarded message --- From openstack at nemebean.com Thu Apr 12 14:42:10 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 12 Apr 2018 09:42:10 -0500 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> Message-ID: <1d841b5f-d008-b9af-9e13-ba7fd9ce105f@nemebean.com> On 04/12/2018 07:42 AM, Eric Fried wrote: > +1 > > This sounds reasonable to me. I'm glad the issue was raised, but IMO it > shouldn't derail progress on an approved blueprint with ready code. The one thing I will note, because we're dealing with it in oslo.messaging right now, is that there's no clear path to removing a requirement from the unconditional list and moving it to extras. There isn't really a deprecation method for requirements where we can notify users that they'll need to start installing things with [zvm] or whatever added as extras. Our current approach in oslo.messaging is to leave the existing requirements and add new ones as extras. It's not perfect (someone using kafka doesn't need the rabbit deps, but they'll still get them), but it's a step in the right direction. > > Jichen, would you please go ahead and file that blueprint template (no > need to write a spec yet) and link it in a review comment on the bottom > zvm patch so we have a paper trail? I'm thinking something like > "Consistent platform-specific and optional requirements" -- that leaves > us open to decide *how* we're going to "handle" them. > > Thanks, > efried > > On 04/12/2018 04:13 AM, Chen CH Ji wrote: >> Thanks for Michael for raising this question and detailed information >> from Clark >> >> As indicated in the mail, xen, vmware etc might already have this kind >> of requirements (and I guess might be more than that) , >> can we accept z/VM requirements first by following other existing ones >> then next I can create a BP later to indicate this kind >> of change request by referring to Clark's comments and submit patches to >> handle it ? Thanks >> >> Best Regards! 
>> >> Kevin (Chen) Ji 纪 晨 >> >> Engineer, zVM Development, CSTL >> Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com >> Phone: +86-10-82451493 >> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian >> District, Beijing 100193, PRC >> >> Inactive hide details for Matt Riedemann ---04/12/2018 08:46:25 AM---On >> 4/11/2018 5:09 PM, Michael Still wrote: >Matt Riedemann ---04/12/2018 >> 08:46:25 AM---On 4/11/2018 5:09 PM, Michael Still wrote: > >> >> From: Matt Riedemann >> To: openstack-dev at lists.openstack.org >> Date: 04/12/2018 08:46 AM >> Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform >> specific, dependancies in requirements.txt >> >> ------------------------------------------------------------------------ >> >> >> >> On 4/11/2018 5:09 PM, Michael Still wrote: >>> >>> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_523387&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=CNosrTHnAR21zOI52fnDRfTqu2zPiAn2oW9f67Qijo4&e= proposes >> adding a z/VM specific >>> dependancy to nova's requirements.txt. When I objected the counter >>> argument is that we have examples of windows specific dependancies >>> (os-win) and powervm specific dependancies in that file already. >>> >>> I think perhaps all three are a mistake and should be removed. >>> >>> My recollection is that for drivers like ironic which may not be >>> deployed by everyone, we have the dependancy documented, and then loaded >>> at runtime by the driver itself instead of adding it to >>> requirements.txt. This is to stop pip for auto-installing the dependancy >>> for anyone who wants to run nova. I had assumed this was at the request >>> of the deployer community. >>> >>> So what do we do with z/VM? Do we clean this up? Or do we now allow >>> dependancies that are only useful to a very small number of deployments >>> into requirements.txt? >> >> As Eric pointed out in the review, this came up when pypowervm was added: >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_438119_5_requirements.txt&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=iyKxF-CcGAFmnQs8B7d5u2zwEiJqq8ivETmrgB77PEg&e= >> >> And you're asking the same questions I did in there, which was, should >> it go into test-requirements.txt like oslo.vmware and >> python-ironicclient, or should it go under [extras], or go into >> requirements.txt like os-win (we also have the xenapi library now too). >> >> I don't really think all of these optional packages should be in >> requirements.txt, but we should just be consistent with whatever we do, >> be that test-requirements.txt or [extras]. I remember caring more about >> this back in my rpm packaging days when we actually tracked what was in >> requirements.txt to base what needed to go into the rpm spec, unlike >> Fedora rpm specs which just zero out requirements.txt and depend on >> their own knowledge of what needs to be installed (which is sometimes >> lacking or lagging master). >> >> I also seem to remember that [extras] was less than user-friendly for >> some reason, but maybe that was just because of how our CI jobs are >> setup? Or I'm just making that up. I know it's pretty simple to install >> the stuff from extras for tox runs, it's just an extra set of >> dependencies to list in the tox.ini. 
>> >> Having said all this, I don't have the energy to help push for >> consistency myself, but will happily watch you from the sidelines. >> >> -- >> >> Thanks, >> >> Matt >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=2FioyzCRtztysjjEqCrBTkpQs_wwfs3Mt2wGDkrft-s&e= >> >> >> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Thu Apr 12 15:44:53 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 12 Apr 2018 08:44:53 -0700 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: Message-ID: On Thu, Apr 12, 2018 at 1:08 AM, Sergii Golovatiuk wrote: [...] > > One of the great outcomes of this blueprint is that in Rocky, the > operator > > won't have to run all the "openstack overcloud container" commands to > > prepare the container registry and upload the containers. Indeed, it'll > be > > driven by Heat and Mistral mostly. > > I am trying to think as operator and it's very similar to 'openstack > container' which is swift. So it might be confusing I guess. "openstack overcloud container" was already in Pike, Queens for your information. [...] > > - We need a tool to update containers from this registry and *before* > > deploying them. We already have this tool in place in our CI for the > > overcloud (see [3] and [4]). Now we need a similar thing for the > undercloud. > > I would use external registry in this case. Quay.io might be a good > choice to have rock solid simplicity. It might not be good for CI as > requires very strong connectivity but it should be sufficient for > developers. > No. We'll use docker-distribution for now, and will consider more support in the future but what we want right now is parity. [...] -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Apr 12 15:46:14 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 12 Apr 2018 08:46:14 -0700 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: <25259749-6def-c1c9-2525-1dc97ec1a5a7@redhat.com> References: <25259749-6def-c1c9-2525-1dc97ec1a5a7@redhat.com> Message-ID: On Thu, Apr 12, 2018 at 1:16 AM, Bogdan Dobrelya wrote: [...] > Deploy own registry as part of UC deployment or use external. For >> instance, for production use I would like to have a cluster of 3-5 >> registries with HAProxy in front to speed up 1k nodes deployments. >> > > Note that this implies HA undercloud as well. 
Although, given that HA > undercloud is goodness indeed, I would *not* invest time into reliable > container registry deployment architecture for undercloud as we'll have it > for free, once kubernetes/openshift control plane for openstack becomes > adopted. There is a very strong notion of build pipelines, reliable > containers registries et al. Right. HA undercloud is out of context now. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Apr 12 16:27:01 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 12 Apr 2018 09:27:01 -0700 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> Message-ID: <1523550421.2763190.1335865000.75062D24@webmail.messagingengine.com> On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote: > I also seem to remember that [extras] was less than user-friendly for > some reason, but maybe that was just because of how our CI jobs are > setup? Or I'm just making that up. I know it's pretty simple to install > the stuff from extras for tox runs, it's just an extra set of > dependencies to list in the tox.ini. One concern I have as a user is that extras are not very discoverable without reading the source setup.cfg file. This can be addressed by improving installation docs to explain what the extras options are and why you might want to use them. Another idea was to add a 'all' extras that installed all of the more fine grained extra's options. That way a user can just say give me all the features I don't care even if I can't use them all I know the ones I can use will be properly installed. As for the CI jobs its just a matter of listing the extras in the appropriate requirements files or explicitly installing them. Clark From ed at leafe.com Thu Apr 12 16:39:31 2018 From: ed at leafe.com (Ed Leafe) Date: Thu, 12 Apr 2018 11:39:31 -0500 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: <42252A9A-B51E-400B-B81F-54C43FD68726@leafe.com> Greetings OpenStack community, It was a fairly quick meeting today, as we weren't able to find anything to argue about. That doesn't happen too often. :) We agreed that the revamped HTTP guidelines [8] should be merged, as they were strictly formatting changes, and no content change. We also merged the change to update the errors guidance [9] to use service-type instead of service-name, as that had been frozen last week, with no negative feedback since then. We still have not gotten a lot of feedback from the SDK community about topics to discuss at the upcoming Vancouver Forum. If you are involved with SDK development and have something you'd like to discuss there, please reply to the openstack-dev mailing list thread [7] with your thoughts. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. 
# Newly Published Guidelines * Update the errors guidance to use service-type for code https://review.openstack.org/#/c/554921/ * Break up the HTTP guideline into smaller documents https://review.openstack.org/#/c/554234/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. None # Guidelines Currently Under Review [3] * Add guidance on needing cache-control headers https://review.openstack.org/550468 * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000353.html [8] https://review.openstack.org/#/c/554234/ [9] https://review.openstack.org/#/c/554921/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Ed Leafe From mriedemos at gmail.com Thu Apr 12 17:23:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 12 Apr 2018 12:23:48 -0500 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> Message-ID: <27218cea-74f4-dd54-fa91-0c090f190d6d@gmail.com> On 4/12/2018 7:42 AM, Eric Fried wrote: > This sounds reasonable to me. I'm glad the issue was raised, but IMO it > shouldn't derail progress on an approved blueprint with ready code. > > Jichen, would you please go ahead and file that blueprint template (no > need to write a spec yet) and link it in a review comment on the bottom > zvm patch so we have a paper trail? I'm thinking something like > "Consistent platform-specific and optional requirements" -- that leaves > us open to decide*how* we're going to "handle" them. FWIW I'm also OK with deferring debate on this and not blocking the zvm stuff for this specific issue, because we can really go down a rabbit hole if we want to be pedantic on this, for example, os-brick is only used by a couple of virt drivers, taskflow is only used by powervm, castellan is optional since we don't require a real key manager, etc. 
-- Thanks, Matt From emilien at redhat.com Thu Apr 12 18:17:13 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 12 Apr 2018 11:17:13 -0700 Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: <1028378685.11422536.1521566925853.JavaMail.zimbra@redhat.com> References: <1028378685.11422536.1521566925853.JavaMail.zimbra@redhat.com> Message-ID: On Tue, Mar 20, 2018 at 10:28 AM, Javier Pena wrote: > One point we should add here: to test Python 3 we need some base operating system to work on. For now, our plan is to create a set of stabilized Fedora 28 repositories and use them only for CI jobs. See [1] for details on this plan. Javier, Alfredo, where are we regarding this topic? Have we made some progress on f28 repos? I'm interested to know about the next steps, I really want us to make some progress on python3 testing here. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Thu Apr 12 18:44:14 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 12 Apr 2018 13:44:14 -0500 Subject: [openstack-dev] [release] Release countdown for week R-19, April 16-20 Message-ID: <20180412184413.GA22342@sm-xps> Welcome to our regular release countdown email. Development Focus ----------------- Team focus should be on spec approval and implementation for priority features. The first Rocky milestone is this coming Thursday, the 19th. While there aren't any OpenStack-wide deadlines for this milestone, individual projects do have some time critical requirements. Please be aware of any project specific deadlines that may impact you. General Information ------------------- PTLs and release liaisons of cycle-with-milestones projects, no later than Thursday you will need to prepare a release request for your project(s) using deliverables/rocky/$project.yaml in openstack/releases. The initial release number should be $MAJOR.0.0.0b1, where $MAJOR is incremented from the Queens version. Please ask in the #openstack-release channel if you have any questions about this. Reminder to pay attention to the work being done in support of the Rocky cycle goals [1]. [1] https://governance.openstack.org/tc/goals/rocky/index.html The TC elections start on the April 23rd. The nomination period is open until the 17th, so if you have any interest, please consider putting your name in for the election. There will be a week of "campaigning" after the nomination period is over and before voting begins. Please participate in any discussions on the openstack-dev mailing list to give everyone a chance to learn more about the candidates and their opinions. You can check out the candidates for this election and get details on the election page [2]. [2] https://governance.openstack.org/election/ Even if you don't have a strong opinion on candidates or their plans with the TC, please consider voting for your preferred candidates. We need the participation of all of the OpenStack community. Your vote helps and does make a difference. 
Upcoming Deadlines & Dates -------------------------- TC Nomination Deadline: April 17 TC Campaigning: April 17-22 TC Election: April 23-30 Rocky-1 milestone: April 19 (R-19 week) Forum at OpenStack Summit in Vancouver: May 21-24 -- Sean McGinnis (smcginnis) From mordred at inaugust.com Thu Apr 12 18:54:46 2018 From: mordred at inaugust.com (Monty Taylor) Date: Thu, 12 Apr 2018 13:54:46 -0500 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1523550421.2763190.1335865000.75062D24@webmail.messagingengine.com> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1523550421.2763190.1335865000.75062D24@webmail.messagingengine.com> Message-ID: <087c82c8-87a8-4834-de83-c3d0f7ad1133@inaugust.com> On 04/12/2018 11:27 AM, Clark Boylan wrote: > On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote: >> I also seem to remember that [extras] was less than user-friendly for >> some reason, but maybe that was just because of how our CI jobs are >> setup? Or I'm just making that up. I know it's pretty simple to install >> the stuff from extras for tox runs, it's just an extra set of >> dependencies to list in the tox.ini. > > One concern I have as a user is that extras are not very discoverable without reading the source setup.cfg file. This can be addressed by improving installation docs to explain what the extras options are and why you might want to use them. Yeah - they're kind of an advanced feature that most python people don't seem to know exists at all. I'm honestly worried about us expanding our use of them and would prefer we got rid of our usage. I don't think the upcoming Pipfile stuff supports them at all - and I believe that's on purpose. > Another idea was to add a 'all' extras that installed all of the more fine grained extra's options. That way a user can just say give me all the features I don't care even if I can't use them all I know the ones I can use will be properly installed. > > As for the CI jobs its just a matter of listing the extras in the appropriate requirements files or explicitly installing them. How about instead of extras we just make some additional packages? Like, for instance make a "nova-zvm-support" repo that contains the extra requirements in it and that we publish to PyPI. Then a user could do "pip install nova nova-zvm-support" instead of "pip install nova[zvm]". That way we can avoid installing optional things for the common case when they're not going to be used (including in the gate where we have no Z machines) but still provide a mechanism for users to easily install the software they need. It would also let a 3rd-party ci that DOES have some Z to test against to set up a zuul job that puts nova-zvm-support into its required-projects and test the combination of the two. We could do a similar thing for the extras in keystoneauth. Make a keystoneauth-kerberos and a keystoneauth-saml2 and a keystoneauth-oauth1. Just a thought... Monty From melwittt at gmail.com Thu Apr 12 19:39:11 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 12 Apr 2018 12:39:11 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format In-Reply-To: References: Message-ID: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> On Thu, 12 Apr 2018 09:31:45 +1000, Michael Still wrote: > The more I think about it, the more I dislike how the proposed driver > also "lies" about it using iso9660. 
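
For readers following the [extras] side of this thread, here is a minimal
sketch of how the fine-grained options discussed above, plus the aggregate
'all' option Clark suggested, could be declared in a pbr-style setup.cfg.
The extra names, library names and version pins below are illustrative
assumptions only, not nova's actual packaging:

    # setup.cfg fragment -- illustrative sketch, not nova's real metadata
    [extras]
    # each key becomes an installable option, e.g. "pip install nova[zvm]"
    zvm =
        zVMCloudConnector>=1.1.0  # assumed library/pin for the z/VM driver
    powervm =
        pypowervm>=1.1.1  # assumed pin
    # aggregate option: simply the union of the fine-grained lists
    all =
        zVMCloudConnector>=1.1.0
        pypowervm>=1.1.1

With something like this in place, a deployer opts in explicitly with
"pip install nova[zvm]" (or grabs everything via "nova[all]"), while a
plain "pip install nova" never pulls in the platform-specific libraries.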
That's definitely wrong: > >         if CONF.config_drive_format in ['iso9660']: >             # cloud-init only support iso9660 and vfat, but in z/VM >             # implementation, can't link a disk to VM as iso9660 before > it's >             # boot ,so create a tgz file then send to the VM deployed, and >             # during startup process, the tgz file will be extracted and >             # mounted as iso9660 format then cloud-init is able to > consume it >             self._make_tgz(path) >         else: >             raise exception.ConfigDriveUnknownFormat( >                 format=CONF.config_drive_format) I've asked for more information on the review about how this works -- is it the z/VM library that extracts the tarball and mounts it as iso9660 before the guest boots or is it expected that the guest is running some special software that will do that before cloud-init runs, or what? I also noticed that the z/VM CI has disabled ssh validation of guests by setting '[validation]run_validation=False' in tempest.conf [0]. This means we're unable to see the running guest successfully consume the config drive using this approach. This is the tempest test that verifies functionality when run_validation=True [1]. I think we need to understand more about how this config drive approach is supposed to work and be able to see running instances successfully start up using it in the CI runs. -melanie [0] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-16244/logs/tempest_conf [1] https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py From doug at doughellmann.com Thu Apr 12 19:45:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 12 Apr 2018 15:45:43 -0400 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <087c82c8-87a8-4834-de83-c3d0f7ad1133@inaugust.com> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1523550421.2763190.1335865000.75062D24@webmail.messagingengine.com> <087c82c8-87a8-4834-de83-c3d0f7ad1133@inaugust.com> Message-ID: <1523561985-sup-3578@lrrr.local> Excerpts from Monty Taylor's message of 2018-04-12 13:54:46 -0500: > On 04/12/2018 11:27 AM, Clark Boylan wrote: > > On Wed, Apr 11, 2018, at 5:45 PM, Matt Riedemann wrote: > >> I also seem to remember that [extras] was less than user-friendly for > >> some reason, but maybe that was just because of how our CI jobs are > >> setup? Or I'm just making that up. I know it's pretty simple to install > >> the stuff from extras for tox runs, it's just an extra set of > >> dependencies to list in the tox.ini. > > > > One concern I have as a user is that extras are not very discoverable without reading the source setup.cfg file. This can be addressed by improving installation docs to explain what the extras options are and why you might want to use them. > > Yeah - they're kind of an advanced feature that most python people don't > seem to know exists at all. > > I'm honestly worried about us expanding our use of them and would prefer > we got rid of our usage. I don't think the upcoming Pipfile stuff > supports them at all - and I believe that's on purpose. Pipfile is being created as a replacement for requirements.txt but not in the way that we use the file. 
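For reference, a Pipfile consumes a dependency's extras with syntax along these lines (an illustrative snippet, not taken from our tooling):

    [packages]
    requests = { version = "*", extras = ["socks"] }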
So it is possible to express via a Pipfile that something needs to install extras (see the example in https://github.com/pypa/pipfile) but it is not possible to express those extras there because that's not what that file is meant to be used for (as I think you've pointed out in the thread about pbr/pipfile integration). > > > Another idea was to add a 'all' extras that installed all of the more fine grained extra's options. That way a user can just say give me all the features I don't care even if I can't use them all I know the ones I can use will be properly installed. > > > > As for the CI jobs its just a matter of listing the extras in the appropriate requirements files or explicitly installing them. > > How about instead of extras we just make some additional packages? Like, > for instance make a "nova-zvm-support" repo that contains the extra > requirements in it and that we publish to PyPI. Then a user could do > "pip install nova nova-zvm-support" instead of "pip install nova[zvm]". So the driver would still live in the nova tree, but the dependencies for it would be expressed by a package that is built elsewhere? It feels like that's likely to require some extra care for ordering changes when a dependency has to be updated. > That way we can avoid installing optional things for the common case > when they're not going to be used (including in the gate where we have > no Z machines) but still provide a mechanism for users to easily install > the software they need. It would also let a 3rd-party ci that DOES have > some Z to test against to set up a zuul job that puts nova-zvm-support > into its required-projects and test the combination of the two. All of that is technically true. I'm not sure how a separate package is more discoverable than using extras, though. Doug From mikal at stillhq.com Thu Apr 12 20:28:16 2018 From: mikal at stillhq.com (Michael Still) Date: Fri, 13 Apr 2018 06:28:16 +1000 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> Message-ID: I don't understand why you think the alternative is so hard. Here's how ironic does it: global ironic if ironic is None: ironic = importutils.import_module('ironicclient') Is avoiding three lines of code really worth making future cleanup harder? Is a three line change really blocking "an approved blueprint with ready code"? Michael On Thu, Apr 12, 2018 at 10:42 PM, Eric Fried wrote: > +1 > > This sounds reasonable to me. I'm glad the issue was raised, but IMO it > shouldn't derail progress on an approved blueprint with ready code. > > Jichen, would you please go ahead and file that blueprint template (no > need to write a spec yet) and link it in a review comment on the bottom > zvm patch so we have a paper trail? I'm thinking something like > "Consistent platform-specific and optional requirements" -- that leaves > us open to decide *how* we're going to "handle" them. 
> > Thanks, > efried > > On 04/12/2018 04:13 AM, Chen CH Ji wrote: > > Thanks for Michael for raising this question and detailed information > > from Clark > > > > As indicated in the mail, xen, vmware etc might already have this kind > > of requirements (and I guess might be more than that) , > > can we accept z/VM requirements first by following other existing ones > > then next I can create a BP later to indicate this kind > > of change request by referring to Clark's comments and submit patches to > > handle it ? Thanks > > > > Best Regards! > > > > Kevin (Chen) Ji 纪 晨 > > > > Engineer, zVM Development, CSTL > > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > > Phone: +86-10-82451493 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > > District, Beijing 100193, PRC > > > > Inactive hide details for Matt Riedemann ---04/12/2018 08:46:25 AM---On > > 4/11/2018 5:09 PM, Michael Still wrote: >Matt Riedemann ---04/12/2018 > > 08:46:25 AM---On 4/11/2018 5:09 PM, Michael Still wrote: > > > > > From: Matt Riedemann > > To: openstack-dev at lists.openstack.org > > Date: 04/12/2018 08:46 AM > > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform > > specific, dependancies in requirements.txt > > > > ------------------------------------------------------------------------ > > > > > > > > On 4/11/2018 5:09 PM, Michael Still wrote: > >> > >> > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review. > openstack.org_-23_c_523387&d=DwIGaQ&c=jf_iaSHvJObTbx- > siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m= > 212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s= > CNosrTHnAR21zOI52fnDRfTqu2zPiAn2oW9f67Qijo4&e= proposes > > adding a z/VM specific > >> dependancy to nova's requirements.txt. When I objected the counter > >> argument is that we have examples of windows specific dependancies > >> (os-win) and powervm specific dependancies in that file already. > >> > >> I think perhaps all three are a mistake and should be removed. > >> > >> My recollection is that for drivers like ironic which may not be > >> deployed by everyone, we have the dependancy documented, and then loaded > >> at runtime by the driver itself instead of adding it to > >> requirements.txt. This is to stop pip for auto-installing the dependancy > >> for anyone who wants to run nova. I had assumed this was at the request > >> of the deployer community. > >> > >> So what do we do with z/VM? Do we clean this up? Or do we now allow > >> dependancies that are only useful to a very small number of deployments > >> into requirements.txt? > > > > As Eric pointed out in the review, this came up when pypowervm was added: > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review. > openstack.org_-23_c_438119_5_requirements.txt&d=DwIGaQ&c= > jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m= > 212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=iyKxF- > CcGAFmnQs8B7d5u2zwEiJqq8ivETmrgB77PEg&e= > > > > And you're asking the same questions I did in there, which was, should > > it go into test-requirements.txt like oslo.vmware and > > python-ironicclient, or should it go under [extras], or go into > > requirements.txt like os-win (we also have the xenapi library now too). > > > > I don't really think all of these optional packages should be in > > requirements.txt, but we should just be consistent with whatever we do, > > be that test-requirements.txt or [extras]. 
I remember caring more about > > this back in my rpm packaging days when we actually tracked what was in > > requirements.txt to base what needed to go into the rpm spec, unlike > > Fedora rpm specs which just zero out requirements.txt and depend on > > their own knowledge of what needs to be installed (which is sometimes > > lacking or lagging master). > > > > I also seem to remember that [extras] was less than user-friendly for > > some reason, but maybe that was just because of how our CI jobs are > > setup? Or I'm just making that up. I know it's pretty simple to install > > the stuff from extras for tox runs, it's just an extra set of > > dependencies to list in the tox.ini. > > > > Having said all this, I don't have the energy to help push for > > consistency myself, but will happily watch you from the sidelines. > > > > -- > > > > Thanks, > > > > Matt > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists. > openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_ > iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m= > 212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=2FioyzCRtztysjjEqCrBTkpQs_ > wwfs3Mt2wGDkrft-s&e= > > > > > > > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Apr 12 20:56:39 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 12 Apr 2018 15:56:39 -0500 Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> Message-ID: <5a7495b8-8332-327e-0a1e-c0b3448f265b@fried.cc> > Is avoiding three lines of code really worth making future cleanup > harder? Is a three line change really blocking "an approved blueprint > with ready code"? Nope. What's blocking is deciding that that's the right thing to do. Which we clearly don't have consensus on, based on what's happening in this thread. > global ironic > if ironic is None: > ironic = importutils.import_module('ironicclient') I have a pretty strong dislike for this mechanism. For one thing, I'm frustrated when I can't use hotkeys to jump to an ironicclient method because my IDE doesn't recognize that dynamic import. I have to go look up the symbol some other way (and hope I'm getting the right one). To me (with my bias as a dev rather than a deployer) that's way worse than having the 704KB python-ironicclient installed on my machine even though I've never spawned an ironic VM in my life. It should also be noted that python-ironicclient is in test-requirements.txt. 
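For reference, the [extras] mechanism being debated here is just a section in setup.cfg (pbr syntax); a hypothetical zvm extra might look like this, with the zVMCloudConnector requirement purely illustrative:

    [extras]
    zvm =
      zVMCloudConnector>=1.1.0

Deployers who want the driver would then opt in explicitly with "pip install nova[zvm]", and a tox environment can pull in the same set through its deps (e.g. ".[zvm]").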
Not that my personal preference ought to dictate or even influence what we decide to do here. But dynamic import is not the obviously correct choice. -efried On 04/12/2018 03:28 PM, Michael Still wrote: > I don't understand why you think the alternative is so hard. Here's how > ironic does it: > >         global ironic > >         if ironic is None: > >             ironic = importutils.import_module('ironicclient') > > > Is avoiding three lines of code really worth making future cleanup > harder? Is a three line change really blocking "an approved blueprint > with ready code"? > > Michael > > > > On Thu, Apr 12, 2018 at 10:42 PM, Eric Fried > wrote: > > +1 > > This sounds reasonable to me.  I'm glad the issue was raised, but IMO it > shouldn't derail progress on an approved blueprint with ready code. > > Jichen, would you please go ahead and file that blueprint template (no > need to write a spec yet) and link it in a review comment on the bottom > zvm patch so we have a paper trail?  I'm thinking something like > "Consistent platform-specific and optional requirements" -- that leaves > us open to decide *how* we're going to "handle" them. > > Thanks, > efried > > On 04/12/2018 04:13 AM, Chen CH Ji wrote: > > Thanks for Michael for raising this question and detailed information > > from Clark > > > > As indicated in the mail, xen, vmware etc might already have this kind > > of requirements (and I guess might be more than that) , > > can we accept z/VM requirements first by following other existing ones > > then next I can create a BP later to indicate this kind > > of change request by referring to Clark's comments and submit patches to > > handle it ? Thanks > > > > Best Regards! > > > > Kevin (Chen) Ji 纪 晨 > > > > Engineer, zVM Development, CSTL > > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > > Phone: +86-10-82451493 > > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > > District, Beijing 100193, PRC > > > > Inactive hide details for Matt Riedemann ---04/12/2018 08:46:25 AM---On > > 4/11/2018 5:09 PM, Michael Still wrote: >Matt Riedemann ---04/12/2018 > > 08:46:25 AM---On 4/11/2018 5:09 PM, Michael Still wrote: > > > > > From: Matt Riedemann > > > To: openstack-dev at lists.openstack.org > > > Date: 04/12/2018 08:46 AM > > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform > > specific, dependancies in requirements.txt > > > > > ------------------------------------------------------------------------ > > > > > > > > On 4/11/2018 5:09 PM, Michael Still wrote: > >> > >> > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_523387&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=CNosrTHnAR21zOI52fnDRfTqu2zPiAn2oW9f67Qijo4&e= >  proposes > > adding a z/VM specific > >> dependancy to nova's requirements.txt. When I objected the counter > >> argument is that we have examples of windows specific dependancies > >> (os-win) and powervm specific dependancies in that file already. > >> > >> I think perhaps all three are a mistake and should be removed. > >> > >> My recollection is that for drivers like ironic which may not be > >> deployed by everyone, we have the dependancy documented, and then > loaded > >> at runtime by the driver itself instead of adding it to > >> requirements.txt. This is to stop pip for auto-installing the > dependancy > >> for anyone who wants to run nova. 
I had assumed this was at the > request > >> of the deployer community. > >> > >> So what do we do with z/VM? Do we clean this up? Or do we now allow > >> dependancies that are only useful to a very small number of > deployments > >> into requirements.txt? > > > > As Eric pointed out in the review, this came up when pypowervm was > added: > > > > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_438119_5_requirements.txt&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=iyKxF-CcGAFmnQs8B7d5u2zwEiJqq8ivETmrgB77PEg&e= > > > > > And you're asking the same questions I did in there, which was, should > > it go into test-requirements.txt like oslo.vmware and > > python-ironicclient, or should it go under [extras], or go into > > requirements.txt like os-win (we also have the xenapi library now > too). > > > > I don't really think all of these optional packages should be in > > requirements.txt, but we should just be consistent with whatever > we do, > > be that test-requirements.txt or [extras]. I remember caring more > about > > this back in my rpm packaging days when we actually tracked what > was in > > requirements.txt to base what needed to go into the rpm spec, unlike > > Fedora rpm specs which just zero out requirements.txt and depend on > > their own knowledge of what needs to be installed (which is sometimes > > lacking or lagging master). > > > > I also seem to remember that [extras] was less than user-friendly for > > some reason, but maybe that was just because of how our CI jobs are > > setup? Or I'm just making that up. I know it's pretty simple to > install > > the stuff from extras for tox runs, it's just an extra set of > > dependencies to list in the tox.ini. > > > > Having said all this, I don't have the energy to help push for > > consistency myself, but will happily watch you from the sidelines. 
> > > > -- > > > > Thanks, > > > > Matt > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=2FioyzCRtztysjjEqCrBTkpQs_wwfs3Mt2wGDkrft-s&e= > > > > > > > > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From allison at openstack.org Thu Apr 12 21:27:34 2018 From: allison at openstack.org (Allison Price) Date: Thu, 12 Apr 2018 16:27:34 -0500 Subject: [openstack-dev] Save $500 on OpenStack Summit Vancouver Hotel + Ticket Message-ID: Hi everyone, For a limited time, you can now purchase a discounted package including a Vancouver Summit ticket and hotel stay at the beautiful Pan Pacific Hotel for savings of more than $500 USD! This discount runs until April 25 pending availability - book your ticket & hotel room now for maximum savings: 4-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit Pass: $1,859 USD—$500 in savings per person 5-night stay at the Pan Pacific Hotel & Weeklong Vancouver Summit Pass: $2,149 USD—$550 in savings per person REGISTER HERE After you've registered we will book your hotel room for you, and follow-up with your confirmed hotel information in early May. Please email summit at openstack.org if you have any questions. Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Thu Apr 12 22:05:26 2018 From: melwittt at gmail.com (melanie witt) Date: Thu, 12 Apr 2018 15:05:26 -0700 Subject: [openstack-dev] [nova] meeting log from today 2018-04-12 at 21:00 UTC Message-ID: <8c906605-5641-b4e5-7eb9-80fe5c04227e@gmail.com> Howdy everyone, The meetbot was restarted in the middle of our meeting, so the log and minutes could not be collected (after the restart) and will not be found at http://eavesdrop.openstack.org/meetings/nova/2018/. 
Here's a link to the #openstack-meeting channel log for the meeting if you are looking for the minutes:

http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2018-04-12.log.html#t2018-04-12T21:00:21

Cheers, -melanie

From xianjun666 at dcn.ssu.ac.kr Fri Apr 13 02:17:42 2018 From: xianjun666 at dcn.ssu.ac.kr (=?ks_c_5601-1987?B?yKu8sbG6?=) Date: Fri, 13 Apr 2018 11:17:42 +0900 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action Message-ID: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr>

Hello Mistral team. I'm doing some work on K8S, but I observed that there is only a Docker action in Mistral. I would like to ask the Mistral team why there is no K8S action in Mistral; I think one would be useful. If you feel it's necessary, could I add a K8S action to Mistral?

Regards, Xian Jun Hong -------------- next part -------------- An HTML attachment was scrubbed... URL:

From lijunbo at fiberhome.com Fri Apr 13 02:25:22 2018 From: lijunbo at fiberhome.com (=?utf-8?B?5p2O5L+K5rOi?=) Date: Fri, 13 Apr 2018 10:25:22 +0800 Subject: [openstack-dev] [cinder][nova] RBD multi-attach Message-ID: <006701d3d2ce$accb0ac0$06612040$@com>

Hello Nova, Cinder developers,

I would like to ask you a question concerning a Cinder patch [1]. The patch mentioned that RBD features were incompatible with multi-attach, and it disabled multi-attach for RBD. I would like to know which RBD features are incompatible. In the bug [2], yao ning also raised this question, and in his environment it proved that they did not find any problems when enabling this feature. So, I would also like to know which features in Ceph make this feature unsafe.

[1] https://review.openstack.org/#/c/283695/ [2] https://bugs.launchpad.net/cinder/+bug/1535815

Best wishes and Regards junboli -------------- next part -------------- An HTML attachment was scrubbed... URL:

From liliueecg at gmail.com Fri Apr 13 03:08:54 2018 From: liliueecg at gmail.com (Li Liu) Date: Thu, 12 Apr 2018 23:08:54 -0400 Subject: [openstack-dev] Initiate the discussion for FPGA reconfigurability Message-ID:

Hi Team,

While wrapping up the spec for FPGA programmability, I think we are still missing the reconfigurability part of accelerators. For instance, in the FPGA case, after the bitstream is loaded, a user might still need to tune the clock frequency, VF numbers, do resets, etc. These reconfigurations can be arbitrary. Unfortunately, none of the APIs we have right now can handle them properly. I suggest having another spec for a couple of new APIs dedicated to reconfiguring accelerators: 1. A REST API 2. A driver API

I want to gather more ideas from you guys, especially from our vendor folks :)

-- Thank you Regards Li Liu -------------- next part -------------- An HTML attachment was scrubbed... URL:

From luo.lujin at jp.fujitsu.com Fri Apr 13 04:44:53 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Fri, 13 Apr 2018 04:44:53 +0000 Subject: [openstack-dev] [sig][upgrades] Upgrade SIG IRC meeting poll Message-ID:

Hello everyone,

Sorry for keeping you waiting! Since we have launched the Upgrade SIG [1], we are now happy to invite everyone who is interested to take a vote so that we can find a good time for our regular IRC meeting. Please kindly look at the weekdays in the poll only, not the actual dates.

Odd week: https://doodle.com/poll/q8qr9iza9kmwax2z Even week: https://doodle.com/poll/ude4rmacmbp4k5xg

We expect to alternate meeting times between odd and even weeks to cover different time zones. We'd love it if people can vote before Apr.
22nd. Best, Lujin [1] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128426.html From renat.akhmerov at gmail.com Fri Apr 13 04:47:52 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 13 Apr 2018 11:47:52 +0700 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> Message-ID: <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> Hi, I completely agree with you that having such an action would be useful. However, I don’t think this kind of action should be provided by Mistral out of the box. Actions and triggers are integration pieces for Mistral and are natively external to Mistral code base. In other words, this action can be implemented anywhere and plugged into a concrete Mistral installation where needed. As a home for this action I’d propose 'mistral-extra’ repo where we are planning to move OpenStack actions and some more. Also, if you’d like to contribute you’re very welcome. Thanks Renat Akhmerov @Nokia On 13 Apr 2018, 09:18 +0700, 홍선군 , wrote: > Hello  Mistral team. > I'm doing some work on the K8S but I observed that there is only Docker's action in Mistral. > I would like to ask Mistral Team, why there is no K8S action in the mistral. > I think it would be useful in Mistral. > If you feel it's necessary, could I add K8S action to mistral? > > Regards, > Xian Jun Hong > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From xianjun666 at dcn.ssu.ac.kr Fri Apr 13 05:40:47 2018 From: xianjun666 at dcn.ssu.ac.kr (=?utf-8?B?7ZmN7ISg6rWw?=) Date: Fri, 13 Apr 2018 14:40:47 +0900 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> Message-ID: <5ac401d3d2e9$fa8a31d0$ef9e9570$@dcn.ssu.ac.kr> Thanks for your reply. I will continue to pay attention. Regards, Xian Jun Hong From: Renat Akhmerov Sent: Friday, April 13, 2018 1:48 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Mistral]I think Mistral need K8S action Hi, I completely agree with you that having such an action would be useful. However, I don’t think this kind of action should be provided by Mistral out of the box. Actions and triggers are integration pieces for Mistral and are natively external to Mistral code base. In other words, this action can be implemented anywhere and plugged into a concrete Mistral installation where needed. As a home for this action I’d propose 'mistral-extra’ repo where we are planning to move OpenStack actions and some more. Also, if you’d like to contribute you’re very welcome. Thanks Renat Akhmerov @Nokia On 13 Apr 2018, 09:18 +0700, >, wrote: Hello Mistral team. I'm doing some work on the K8S but I observed that there is only Docker's action in Mistral. I would like to ask Mistral Team, why there is no K8S action in the mistral. I think it would be useful in Mistral. If you feel it's necessary, could I add K8S action to mistral? 
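For what it's worth, the pluggable-action mechanism Renat describes only requires subclassing the mistral_lib Action base class and registering the class under the 'mistral.actions' entry point. A rough, purely illustrative sketch (it assumes the official 'kubernetes' Python client and uses invented names):

    # Hypothetical custom Mistral action that lists pods in a namespace.
    from kubernetes import client, config

    from mistral_lib import actions


    class K8sListPodsAction(actions.Action):
        def __init__(self, namespace='default'):
            self.namespace = namespace

        def run(self, context):
            # Load credentials from ~/.kube/config; inside a pod,
            # load_incluster_config() would be used instead.
            config.load_kube_config()
            core = client.CoreV1Api()
            pods = core.list_namespaced_pod(self.namespace)
            return [pod.metadata.name for pod in pods.items]

The plugin package would then advertise the action in its setup.cfg, e.g. under [entry_points]: mistral.actions = kubernetes.list_pods = mypackage.actions:K8sListPodsAction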
Regards, Xian Jun Hong __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Fri Apr 13 08:59:10 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 13 Apr 2018 10:59:10 +0200 Subject: [openstack-dev] [tc] Technical Committee Status update, April 13th Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of currently-considered changes at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recently-approved changes == * Official projects should not keep tagging rights [1] * Add vulnerability:managed tag to keystonemiddleware [1] https://review.openstack.org/#/c/557737/ We just entered election season, which is usually a calmer period when it comes to changes. The main change this week is a change to new project requirements setting an early expectation that official projects will have to drop direct tagging (or branching) rights in their Gerrit ACLs once they are made official, as those actions will be handled by the Release Management team through the openstack/releases repository: https://governance.openstack.org/tc/reference/new-projects-requirements.html == Election season == We are renewing 7 seats from the Technical Committee's 13 seats. Nomination period runs until EOD, Tuesday April 17th. We only have 5 candidates so far, so if you're interested in tackling OpenStack governance issues and helping stewarding our community, please consider running ! You can find details on the process at: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129260.html == Voting in progress == Having only 3 weeks between TC election and summit makes it difficult to plan travel and content for that Summit for new members. A TC charter change was proposed to move the date to 6 weeks before Summit instead. As all charter changes this one will require at least 9 TC members to approve it, and it is still short of a couple of votes. Please see: https://review.openstack.org/#/c/560002/ We have a proposal up to update the OpenStack Project Testing Interface (PTI) info around docs job to match the current state of the art. It is still short of a couple of votes, but shall be approved soon. Please see: https://review.openstack.org/#/c/556576/ == Under discussion == The discussion on the review proposing the split of the kolla-kubernetes deliverable out of the Kolla team has slowed down this week. Current consensus seems to be that we should proceed with the proposed change, if that is the will of the Kolla PTL. The resulting Kolla-K8s (or Koltes) team might want to decide on their PTL first, though. We also still need to have a larger discussion around Kolla team governance: would it be better to go all the way into splitting Kolla from its deployment mechanisms, and therefore also split Kolla and Kolla-Ansible? 
If you have an opinion on that, please chime in on the review or the ML thread:

https://review.openstack.org/#/c/552531/ http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html

The discussion around the proposed Adjutant project team addition also slowed down as we entered election season. At this point the discussion is expected to restart after the election, and culminate in a Forum session in Vancouver where we hope the various involved parties will be able to discuss more directly. You can jump in the discussion here:

https://review.openstack.org/#/c/553643/

== TC member actions/focus/discussions for the coming week(s) ==

Election season, with encouragements to new leaders to step up and run for election, followed by a short campaigning period, should be the focus of next week. We'll also start preparing the joint Board+TC+UC+Staff meeting by asking the broader community what topics should be on the agenda.

== Office hours ==

To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc:

* 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers, -- Thierry Carrez (ttx)

From jichenjc at cn.ibm.com Fri Apr 13 09:58:23 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 13 Apr 2018 17:58:23 +0800 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> Message-ID:

Thanks for raising this question -- it is really helpful for improving our driver.

On the run_validation=False issue, you are right: because the z/VM driver only supports config drive and doesn't support the metadata service, we made a bad assumption and wrongly disabled the whole ssh check. According to [1], we should only disable CONF.compute_feature_enabled.metadata_service but keep both self.run_ssh and CONF.compute_feature_enabled.config_drive as True, so that the config drive validation takes effect. Our CI will handle that.

For the tgz/iso9660 question below: this came from wrong information we got from low-layer component folks back in 2012. After discussing with some experts again, we actually can create the iso9660 image in the driver layer and pass it down to the spawned virtual machine; during the startup process, the VM itself will mount the iso file and consume it. From the Linux perspective, either tgz or iso9660 would do -- we only need some files in order to transfer the information from the OpenStack compute node to the spawned VM. So our action is to change the format from tgz to iso9660 and stay consistent with the other drivers.
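For reference, the in-tree drivers this will be consistent with build the image via nova's ConfigDriveBuilder, roughly as in the sketch below (instance, injected_files, extra_md and the target path are assumed to be in scope; with CONF.config_drive_format=iso9660 this ends up shelling out to genisoimage/mkisofs):

    from nova.api.metadata import base as instance_metadata
    from nova.virt import configdrive

    # Gather the instance metadata and write it out as an iso9660 (or
    # vfat) image that can then be attached to the guest as a block device.
    inst_md = instance_metadata.InstanceMetadata(
        instance, content=injected_files, extra_md=extra_md)
    with configdrive.ConfigDriveBuilder(instance_md=inst_md) as cdb:
        cdb.make_drive('/path/to/instance/disk.config')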
For the question about how the config drive works: according to [2], z/VM is a Type 1 hypervisor, while QEMU/KVM are most likely considered Type 2. There is no file system in the z/VM hypervisor (I omit a lot of detail here), so we can't keep a file such as a qcow2 image in a host operating system the way a Linux host does. What we do instead is use a special file pool to store the config drive; during the VM init process, we read that file from the special device and attach it to the VM in iso9660 format, and then cloud-init handles the follow-up. The cloud-init handling process is identical to other platforms.

Again, the tgz format was only used because of that wrong early understanding. We already have some existing customers and a public OpenStack cloud [3] running on LinuxONE (System z) [4], so the config drive support in the z/VM driver does work, and we will modify our code in our patch set to be consistent with the community.

Please let us know any further questions, thanks.

[1] https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py#L68 [2] https://en.wikipedia.org/wiki/Hypervisor [3] https://linuxone20.cloud.marist.edu/cloud/ [4] https://www.zdnet.com/article/linuxone-ibms-new-linux-mainframes/

Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: melanie witt To: openstack-dev at lists.openstack.org Date: 04/13/2018 03:39 AM Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

On Thu, 12 Apr 2018 09:31:45 +1000, Michael Still wrote:
> The more I think about it, the more I dislike how the proposed driver
> also "lies" about it using iso9660. That's definitely wrong:
>
>     if CONF.config_drive_format in ['iso9660']:
>         # cloud-init only support iso9660 and vfat, but in z/VM
>         # implementation, can't link a disk to VM as iso9660 before it's
>         # boot, so create a tgz file then send to the VM deployed, and
>         # during startup process, the tgz file will be extracted and
>         # mounted as iso9660 format then cloud-init is able to consume it
>         self._make_tgz(path)
>     else:
>         raise exception.ConfigDriveUnknownFormat(
>             format=CONF.config_drive_format)

I've asked for more information on the review about how this works -- is it the z/VM library that extracts the tarball and mounts it as iso9660 before the guest boots or is it expected that the guest is running some special software that will do that before cloud-init runs, or what?

I also noticed that the z/VM CI has disabled ssh validation of guests by setting '[validation]run_validation=False' in tempest.conf [0]. This means we're unable to see the running guest successfully consume the config drive using this approach. This is the tempest test that verifies functionality when run_validation=True [1]. I think we need to understand more about how this config drive approach is supposed to work and be able to see running instances successfully start up using it in the CI runs.
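Concretely, the CI configuration change described above would amount to something like this in tempest.conf (a sketch using the option names current tempest understands):

    [validation]
    run_validation = True

    [compute-feature-enabled]
    config_drive = True
    metadata_service = False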
-melanie

[0] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-16244/logs/tempest_conf
[1] https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:

From jichenjc at cn.ibm.com Fri Apr 13 10:03:32 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 13 Apr 2018 18:03:32 +0800 Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt In-Reply-To: <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> Message-ID:

https://blueprints.launchpad.net/nova/+spec/optional-requirements-packages is the one I created. I agree with you and tend to think it's a specless blueprint, unless someone wants a spec on it. I also saw that there is a set of further discussions in the ML, so again, agreed: let's watch and see what really needs to be changed, then update the blueprint.

Thanks for your info and support.

Best Regards!
> > Kevin (Chen) Ji 纪 晨 > > Engineer, zVM Development, CSTL > Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com > Phone: +86-10-82451493 > Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian > District, Beijing 100193, PRC > > Inactive hide details for Matt Riedemann ---04/12/2018 08:46:25 AM---On > 4/11/2018 5:09 PM, Michael Still wrote: >Matt Riedemann ---04/12/2018 > 08:46:25 AM---On 4/11/2018 5:09 PM, Michael Still wrote: > > > From: Matt Riedemann > To: openstack-dev at lists.openstack.org > Date: 04/12/2018 08:46 AM > Subject: Re: [openstack-dev] [Nova][Deployers] Optional, platform > specific, dependancies in requirements.txt > > ------------------------------------------------------------------------ > > > > On 4/11/2018 5:09 PM, Michael Still wrote: >> >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_523387&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=CNosrTHnAR21zOI52fnDRfTqu2zPiAn2oW9f67Qijo4&e= proposes > adding a z/VM specific >> dependancy to nova's requirements.txt. When I objected the counter >> argument is that we have examples of windows specific dependancies >> (os-win) and powervm specific dependancies in that file already. >> >> I think perhaps all three are a mistake and should be removed. >> >> My recollection is that for drivers like ironic which may not be >> deployed by everyone, we have the dependancy documented, and then loaded >> at runtime by the driver itself instead of adding it to >> requirements.txt. This is to stop pip for auto-installing the dependancy >> for anyone who wants to run nova. I had assumed this was at the request >> of the deployer community. >> >> So what do we do with z/VM? Do we clean this up? Or do we now allow >> dependancies that are only useful to a very small number of deployments >> into requirements.txt? > > As Eric pointed out in the review, this came up when pypowervm was added: > > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_438119_5_requirements.txt&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=iyKxF-CcGAFmnQs8B7d5u2zwEiJqq8ivETmrgB77PEg&e= > > And you're asking the same questions I did in there, which was, should > it go into test-requirements.txt like oslo.vmware and > python-ironicclient, or should it go under [extras], or go into > requirements.txt like os-win (we also have the xenapi library now too). > > I don't really think all of these optional packages should be in > requirements.txt, but we should just be consistent with whatever we do, > be that test-requirements.txt or [extras]. I remember caring more about > this back in my rpm packaging days when we actually tracked what was in > requirements.txt to base what needed to go into the rpm spec, unlike > Fedora rpm specs which just zero out requirements.txt and depend on > their own knowledge of what needs to be installed (which is sometimes > lacking or lagging master). > > I also seem to remember that [extras] was less than user-friendly for > some reason, but maybe that was just because of how our CI jobs are > setup? Or I'm just making that up. I know it's pretty simple to install > the stuff from extras for tox runs, it's just an extra set of > dependencies to list in the tox.ini. 
> > Having said all this, I don't have the energy to help push for > consistency myself, but will happily watch you from the sidelines. > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=212PUwLYOBlJZ3BiZNuJIFkRfqXoBPJDcWYCDk7vCHg&s=2FioyzCRtztysjjEqCrBTkpQs_wwfs3Mt2wGDkrft-s&e= > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=9cPFSQlrAGTIS7x9O7dhxGFALDYV3Seub-sXD2DCrTU&s=lUPkxIEZrxiuKhJbLkU01LqAARcIVXal0mWjmdV5ksE&e= > __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=9cPFSQlrAGTIS7x9O7dhxGFALDYV3Seub-sXD2DCrTU&s=lUPkxIEZrxiuKhJbLkU01LqAARcIVXal0mWjmdV5ksE&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From liliueecg at gmail.com Fri Apr 13 11:26:34 2018 From: liliueecg at gmail.com (Li Liu) Date: Fri, 13 Apr 2018 11:26:34 +0000 Subject: [openstack-dev] [Cyborg] Initiate the discussion for FPGA reconfigurability Message-ID: Hi Team, While wrapping up spec for FPGA programmability, I think we still miss the reconfigurability part of Accelerators For instance, in the FPGA case, after the bitstream is loaded, a user might still need to tune the clock frequency, VF numbers, do reset, etc. These reconfigurations can be arbitory. Unfortunately, none of the APIs we have right can handle them properly. I suggest having another spec for a couple of new APIs dedicated to reconfiguring accelerators. 1. A rest API 2. A driver API I want to gather more ideas from you guys especially from our vendor folks :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Apr 13 11:49:47 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 13 Apr 2018 13:49:47 +0200 Subject: [openstack-dev] Removing networking-mlnx from Debian? Message-ID: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> Hi, Is networking-mlnx actively maintained? It doesn't look like it to me, there's still no Queens release. It also fails to build in Debian, with apparently no Python 3 support. Without any reply from an active maintainer, I'll ask for this package to be removed from Debian. 
Please let me know, Cheers, Thomas Goirand (zigo)

From fungi at yuggoth.org Fri Apr 13 12:08:04 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 13 Apr 2018 12:08:04 +0000 Subject: Re: [openstack-dev] Removing networking-mlnx from Debian? In-Reply-To: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> Message-ID: <20180413120804.tag6zt4vntmk7jxe@yuggoth.org>

On 2018-04-13 13:49:47 +0200 (+0200), Thomas Goirand wrote: > Is networking-mlnx actively maintained? It doesn't look like it to > me, there's still no Queens release.

It looks like they were merging changes to master and backporting to stable/queens as recently as three weeks ago, but I agree they don't seem to have tagged their 9.0.0 release yet. They're not part of any official project though, so it's hard to guess what their release timeframe might be.

> It also fails to build in Debian, with apparently no Python 3 > support. [...]

Right, from what I can see they're not testing Python 3 support for their changes, not even in a non-voting capacity. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From zigo at debian.org Fri Apr 13 12:48:42 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 13 Apr 2018 14:48:42 +0200 Subject: Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: References: Message-ID: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org>

On 03/17/2018 09:34 AM, Emilien Macchi wrote: > ## Challenges > > - Some services aren't fully Python 3

In my experience switching everything to Py3 in Debian, the only issues were:

- manila-ui - networking-mlnx

The Mellanox driver will probably be dropped from Debian, so the only collateral is manila-ui, which is being worked on upstream. The other one that isn't Py3 ready *in stable* is trove-dashboard. I have sent backport patches, but they were not approved because of the stable gate having issues:

https://review.openstack.org/#/c/554680/ https://review.openstack.org/#/c/554681/ https://review.openstack.org/#/c/554682/ https://review.openstack.org/#/c/554683/

The team had plans to make this pass (by temporarily fixing the gate) but so far, it hasn't happened.
> > > > [1] https://review.openstack.org/#/c/283695/ > > [2] https://bugs.launchpad.net/cinder/+bug/1535815 > > > > > > Best wishes and Regards > > junboli > > Hi, As noted in the comment in the code [1] -- the exclusive lock feature must be disabled. However, this feature is required for RBD mirroring [2], which will be the basis of Cinder volume replication for RBD. We are currently prioritizing completing support for replication over multi-attach for this driver, since there is more demand for that feature. After that, we will look more at multi-attach and how to let deployers choose to enable replication or multi-attach. [1] https://git.openstack.org/cgit/openstack/cinder/tree/cinder/volume/drivers/rbd.py?id=d1bae7462e3bc#n485 [2] http://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-image-journaling-support Thanks, Eric From vstinner at redhat.com Fri Apr 13 13:04:12 2018 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 13 Apr 2018 15:04:12 +0200 Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> References: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> Message-ID: 2018-04-13 14:48 GMT+02:00 Thomas Goirand : > On 03/17/2018 09:34 AM, Emilien Macchi wrote: >> ## Challenges >> >> - Some services aren't fully Python 3 > > To my experience switching everything to Py3 in Debian, the only issues > were: > > - manila-ui > - networking-mlnx What about swift? Victor From zigo at debian.org Fri Apr 13 13:07:42 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 13 Apr 2018 15:07:42 +0200 Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> References: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> Message-ID: <26f528a1-8968-326d-14ec-c8645a186141@debian.org> On 04/13/2018 02:48 PM, Thomas Goirand wrote: > On 03/17/2018 09:34 AM, Emilien Macchi wrote: >> ## Challenges >> >> - Some services aren't fully Python 3 > > To my experience switching everything to Py3 in Debian, the only issues > were: > > - manila-ui > - networking-mlnx Of course, I also forgot Swift, which isn't Py3 ready. But that's so famous that I didn't mention it. BTW, any progress on upstream Swift WRT Py3 support? Cheers, Thomas Goirand (zigo) From vstinner at redhat.com Fri Apr 13 13:37:59 2018 From: vstinner at redhat.com (Victor Stinner) Date: Fri, 13 Apr 2018 15:37:59 +0200 Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: <26f528a1-8968-326d-14ec-c8645a186141@debian.org> References: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> <26f528a1-8968-326d-14ec-c8645a186141@debian.org> Message-ID: 2018-04-13 15:07 GMT+02:00 Thomas Goirand : > BTW, any progress on upstream Swift WRT Py3 support? There is a voting Python 3.4 gate which runs 902 unit tests. The Python 2.7 gate runs 5,902 unit tests. I compute that 15% of unit tests pass on Python 3.4. 
I tested locally with Python 3.5 (tox -e py35): 876 tests pass on Python 3.5, 57 skipped and 1 failure: "FAIL: test_get_logger_sysloghandler_plumbing (test.unit.common.test_utils.TestUtils)" Victor From dms at danplanet.com Fri Apr 13 13:45:26 2018 From: dms at danplanet.com (Dan Smith) Date: Fri, 13 Apr 2018 06:45:26 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: (Chen CH Ji's message of "Fri, 13 Apr 2018 17:58:23 +0800") References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> Message-ID: > for the run_validation=False issue, you are right, because z/VM driver > only support config drive and don't support metadata service ,we made > bad assumption and took wrong action to disabled the whole ssh check, > actually according to [1] , we should only disable > CONF.compute_feature_enabled.metadata_service but keep both > self.run_ssh and CONF.compute_feature_enabled.config_drive as True in > order to make config drive test validation take effect, our CI will > handle that Why don't you support the metadata service? That's a pretty fundamental mechanism for nova and openstack. It's the only way you can get a live copy of metadata, and it's the only way you can get access to device tags when you hot-attach something. Personally, I think that it's something that needs to work. > For the tgz/iso9660 question below, this is because we got wrong info > from low layer component folks back to 2012 and after discuss with > some experts again, actually we can create iso9660 in the driver layer > and pass down to the spawned virtual machine and during startup > process, the VM itself will mount the iso file and consume it, because > from linux perspective, either tgz or iso9660 doesn't matter , only > need some files in order to transfer the information from openstack > compute node to the spawned VM. so our action is to change the format > from tgz to iso9660 and keep consistent to other drivers. The "iso file" will not be inside the guest, but rather passed to the guest as a block device, right? > For the config drive working mechanism question, according to [2] z/VM > is Type 1 hypervisor while Qemu/KVM are mostly likely to be Type 2 > hypervisor, there is no file system in z/VM hypervisor (I omit too > much detail here) , so we can't do something like linux operation > system to keep a file as qcow2 image in the host operating system, I'm not sure what the type-1-ness has to do with this. The hypervisor doesn't need to support any specific filesystem for this to work. Many drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm) and you can argue that KVM is type-1-ish. They support configdrive. > what we do is use a special file pool to store the config drive and > during VM init process, we read that file from special device and > attach to VM as iso9660 format then cloud-init will handle the follow > up, the cloud-init handle process is identical to other platform This and the previous mention of this sort of behavior has me concerned. Are you describing some sort of process that runs when the instance is starting to initialize its environment, or something that runs *inside* the instance and thus functionality that has to exist in the *image* to work? 
--Dan

From dms at danplanet.com  Fri Apr 13 13:49:08 2018
From: dms at danplanet.com (Dan Smith)
Date: Fri, 13 Apr 2018 06:49:08 -0700
Subject: [openstack-dev] [Nova][Deployers] Optional, platform specific, dependancies in requirements.txt
In-Reply-To: <5a7495b8-8332-327e-0a1e-c0b3448f265b@fried.cc> (Eric Fried's message of "Thu, 12 Apr 2018 15:56:39 -0500")
References: <1a9a4e75-9b89-9474-38d7-910f08fbd1b9@gmail.com> <1e44b8bc-9855-e7f6-4ef8-2762dd1fbf0d@fried.cc> <5a7495b8-8332-327e-0a1e-c0b3448f265b@fried.cc>
Message-ID:

>>     global ironic
>>     if ironic is None:
>>         ironic = importutils.import_module('ironicclient')

I believe ironic was an early example of a client library we hot-loaded, and I believe at the time we said this was a pattern we were going to follow. Personally, I think this makes plenty of sense and I think that even moving things like the python-libvirt load out to something like this to avoid hyperv people having to nuke it from requirements makes sense.

> I have a pretty strong dislike for this mechanism. For one thing, I'm
> frustrated when I can't use hotkeys to jump to an ironicclient method
> because my IDE doesn't recognize that dynamic import. I have to go look
> up the symbol some other way (and hope I'm getting the right one). To
> me (with my bias as a dev rather than a deployer) that's way worse than
> having the 704KB python-ironicclient installed on my machine even though

This seems like a terrible reason to make everyone install ironicclient (or the z/vm client) on their systems at runtime.
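For reference, the hot-load pattern under discussion looks roughly like this (a minimal sketch using oslo.utils; the module-level cache and helper name are illustrative, not nova's actual code):

    from oslo_utils import importutils

    ironicclient = None

    def _get_ironicclient():
        # Import python-ironicclient the first time the driver actually
        # needs it, so deployments that never enable this driver never
        # need the package installed.
        global ironicclient
        if ironicclient is None:
            ironicclient = importutils.import_module('ironicclient')
        return ironicclient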
--Dan

From cdent+os at anticdent.org  Fri Apr 13 13:57:29 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Fri, 13 Apr 2018 14:57:29 +0100 (BST)
Subject: [openstack-dev] [nova] [placement] placement update 18-15
Message-ID:

This is an "expand" update, meaning I've searched out new stuff to add to lists. New stuff is added to the end of lists (of specs and other).

# Most Important

Forbidden traits are now in the runway (until April 26th). Nested providers on allocation candidates (and associated changes related to nested representations) are key to many things. Consumer generations is actively underway.

# What's Changed

Handling in the report client to fail over when an expected microversion is not there and a 406 is returned has been removed. We now expect placement to be upgraded first and for the report client to be able to expect a certain minimum version.

The functional test for the report client has moved out of the placement namespace.

Spec for consumer generations merged. Spec for mirror host aggregates to placement merged. Error codes for placement API spec merged.

An #openstack-placement IRC channel has been created. Please join if you are tracking or working on placement related activity.

# Questions

* Is anyone already on the hook to implement the multiple member_of
  support described by this spec amendment:
  https://review.openstack.org/#/c/555413/ ?

# Bugs

* Placement related bugs not yet in progress: https://goo.gl/TgiPXb 14, -1 on last week
* In progress placement bugs: https://goo.gl/vzGGDQ 13, +0 on last week

# Specs

(new(ly discovered) things are added at the end of this list)

Some of these look like they could be abandoned. Others are hanging around for a long time because it seems we struggle to make progress on things related to representing complex architectures with nested providers. Apparently a hard problem.

* https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree)
* https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs
* https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement
* https://review.openstack.org/#/c/552927/ Spec for isolating configuration of placement database (This has a strong +2 on it but needs one more.)
* https://review.openstack.org/#/c/552105/ Support default allocation ratios
* https://review.openstack.org/#/c/438640/ Spec on preemptible servers
* https://review.openstack.org/#/c/556873/ Handle nested providers for allocation candidates
* https://review.openstack.org/#/c/557065/ Proposes Multiple GPU types
* https://review.openstack.org/#/c/555081/ Standardize CPU resource tracking
* https://review.openstack.org/#/c/502306/ Network bandwidth resource provider
* https://review.openstack.org/#/c/509042/ Propose counting quota usage from placement
* https://review.openstack.org/#/c/560174/ Add history behind nullable project_id and user_id
* https://review.openstack.org/#/c/559466/ Return resources of entire trees in Placement
* https://review.openstack.org/#/c/560974/ Numbered request groups use different providers

# Main Themes

## Update Provider Tree

The main body of this work has been merged. There are some tweaks and cleanups at https://review.openstack.org/#/q/topic:bp/update-provider-tree as well as some related work:

* https://review.openstack.org/#/c/560444/ libvirt using update provider tree

## Nested providers in allocation candidates

Representing nested providers in the response to GET /allocation_candidates is required to actually make use of all the topology that update provider tree will report. That work is in progress at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers
https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates.

https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

## Forbidden Traits

A way of expressing "I'd like resources that do _not_ have trait X". This is ready for review:

https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits

(This is in the current runway and already has one +2 across the 4 patches.)

## Consumer Generations

This allows multiple agents to "safely" update allocations for a single consumer. The spec for this has merged and code is in progress:

https://review.openstack.org/#/q/topic:bp/add-consumer-generation

We had some extensive discussion in IRC on how to manage data in the face of three (somewhat conflicting) data models and flows in each of the microversions associated with PUT /allocations/{consumer_id}.

# Extraction

Small bits of work on extraction continue on the bp/placement-extract topic: https://review.openstack.org/#/q/topic:bp/placement-extract

The spec for optional database handling got some nice support but needs more attention: https://review.openstack.org/#/c/552927/

Jay is going to work on an os-resource-classes library (or perhaps a merging of that functionality and os-traits into an os-placement library) but is waiting on the discussion related to the cpu-resources spec to resolve (which will drive some of the standard resource classes).
A forum topic has been proposed about extracting placement: http://forumtopics.openstack.org/cfp/details/88

To repeat a bit from last week: A recent experiment with shrinking the repo to just the placement dir reinforced a few things we already know:

* The placement tests need their own base test to avoid 'from nova import test'
* That will need to provide database and other fixtures (such as config and the self.flags feature).
* And, of course, eventually, config handling. The container experiments above demonstrate just how little config placement actually needs (by design, let's keep it that way).

# Other

This is an expand week, so new stuff has been added to the end of this list. 18 entries, +4 on last week. I thought there would be more, but I guess many of the additions are in specs, and some stuff has merged.

* https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts
* https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement
* https://review.openstack.org/#/c/546713/ Add compute capabilities traits (to os-traits)
* https://review.openstack.org/#/c/524425/ General policy sample file for placement
* https://review.openstack.org/#/c/546177/ Provide framework for setting placement error codes
* https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement)
* https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent
* https://review.openstack.org/#/c/556669/ Handle agg generation conflict in report client
* https://review.openstack.org/#/c/557086/ Remove usage of [placement]os_region_name
* https://review.openstack.org/#/c/537614/ Add unit test for non-placement resize
* https://review.openstack.org/#/c/554357/ Address issues raised in adding member_of to GET /a-c
* https://review.openstack.org/#/c/493865/ cover migration cases with functional tests
* https://review.openstack.org/#/c/558089/ Update check to ensure compute is using placement
* https://review.openstack.org/#/q/topic:bug/1732731 Bug fixes for sharing resource providers
* https://review.openstack.org/#/c/560107/ normalize_name helper (in os-traits)
* https://review.openstack.org/#/q/topic:bug/1762789 Fix issues with unicode uppercasing in normalizing resource classes
* https://review.openstack.org/#/c/517757/ WIP at granular in allocation candidates
* https://review.openstack.org/#/q/topic:bug/1760322 Fix a bug with syncing traits. It can fail, ruining the whole service.

# End

Weeee Ha Hootay.

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                         tw: @anticdent

From jaypipes at gmail.com  Fri Apr 13 14:10:42 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 13 Apr 2018 10:10:42 -0400
Subject: [openstack-dev] [cyborg] Initiate the discussion for FPGA reconfigurability
In-Reply-To:
References:
Message-ID: <706b8576-2299-e422-6a37-3d48b0ac39e1@gmail.com>

Hi Li, please do remember to use a [cyborg] topic marker in your subject line. (I've added one). Comments inline.

On 04/12/2018 11:08 PM, Li Liu wrote:
> Hi Team,
>
> While wrapping up the spec for FPGA programmability, I think we still
> miss the reconfigurability part of accelerators.
>
> For instance, in the FPGA case, after the bitstream is loaded, a user
> might still need to tune the clock frequency, VF numbers, do reset, etc.

When you say "user" above, are you referring to a normal unprivileged user or are you referring to a privileged user like an admin or MANO system?
I'm not entirely sure why an unprivileged user would need to change the clock frequency or VF numbers for the FPGA, so I presume you are referring to a privileged user (admin)?

> These reconfigurations can be arbitrary. Unfortunately, none of the
> APIs we have right now can handle them properly.
>
> I suggest having another spec for a couple of new APIs dedicated
> to reconfiguring accelerators.
>
> 1. A rest API
> 2. A driver API

If my presumption from above is correct -- that you are referring to privileged users (and not the unprivileged users that are spinning up workloads that utilize the FPGA) -- then I believe a non-REST API is appropriate. REST APIs are typically more appropriate when trying to provide a publicly-accessible endpoint for unprivileged users to perform actions against something. It's also easier to modify a driver API vs a REST API due to not having to be as concerned about backwards compatibility and things like microversions.

Best,
-jay

> I want to gather more ideas from you guys especially from our vendor
> folks :)
>
> --
> Thank you
>
> Regards
>
> Li Liu
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From moshele at mellanox.com  Fri Apr 13 14:13:49 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Fri, 13 Apr 2018 14:13:49 +0000
Subject: [openstack-dev] Removing networking-mlnx from Debian?
In-Reply-To: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org>
References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org>
Message-ID:

Hi Thomas,

Networking-mlnx is still maintained. We will fix all the issues next week and I will create a tag for it.

> -----Original Message-----
> From: Thomas Goirand [mailto:zigo at debian.org]
> Sent: Friday, April 13, 2018 2:50 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Subject: [openstack-dev] Removing networking-mlnx from Debian?
>
> Hi,
>
> Is networking-mlnx actively maintained? It doesn't look like it to me, there's
> still no Queens release. It also fails to build in Debian, with apparently no
> Python 3 support.
>
> Without any reply from an active maintainer, I'll ask for this package to be
> removed from Debian.
>
> Please let me know,
> Cheers,
>
> Thomas Goirand (zigo)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From moshele at mellanox.com  Fri Apr 13 14:17:29 2018
From: moshele at mellanox.com (Moshe Levi)
Date: Fri, 13 Apr 2018 14:17:29 +0000
Subject: Re: [openstack-dev] Removing networking-mlnx from Debian?
In-Reply-To: <20180413120804.tag6zt4vntmk7jxe@yuggoth.org>
References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> <20180413120804.tag6zt4vntmk7jxe@yuggoth.org>
Message-ID:

> -----Original Message-----
> From: Jeremy Stanley [mailto:fungi at yuggoth.org]
> Sent: Friday, April 13, 2018 3:08 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] Removing networking-mlnx from Debian?
>
> On 2018-04-13 13:49:47 +0200 (+0200), Thomas Goirand wrote:
> > Is networking-mlnx actively maintained? It doesn't look like it to me,
> > there's still no Queens release.
>
> It looks like they were merging changes to master and backporting to
> stable/queens as recently as three weeks ago, but I agree they don't seem
> to have tagged their 9.0.0 release yet. They're not part of any official project
> though, so it's hard to guess what their release timeframe might be.
>
> > It also fails to build in Debian, with apparently no Python 3 support.
> [...]
>
> Right, from what I can see they're not testing Python 3 support for their
> changes, not even in a non-voting capacity.

Yes. How can we add a python3 job in Zuul to test it?

> --
> Jeremy Stanley

From melwittt at gmail.com  Fri Apr 13 15:00:31 2018
From: melwittt at gmail.com (melanie witt)
Date: Fri, 13 Apr 2018 08:00:31 -0700
Subject: [openstack-dev] [nova] Rocky forum topics brainstorming
In-Reply-To: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com>
References: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com>
Message-ID: <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com>

+openstack-operators (apologies that I forgot to add originally)

On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote:
> Hey everyone,
>
> Let's collect forum topic brainstorming ideas for the Forum sessions in
> Vancouver in this etherpad [0]. Once we've brainstormed, we'll select
> and submit our topic proposals for consideration at the end of this
> week. The deadline for submissions is Sunday April 15.
>
> Thanks,
> -melanie
>
> [0] https://etherpad.openstack.org/p/YVR-nova-brainstorming

Just a reminder that we're collecting forum topic ideas to propose for Vancouver and input from operators is especially important. Please add your topics and/or comments to the etherpad [0] and we'll submit proposals before the Sunday deadline.

Thanks all,
-melanie

From dougal at redhat.com  Fri Apr 13 16:00:09 2018
From: dougal at redhat.com (Dougal Matthews)
Date: Fri, 13 Apr 2018 17:00:09 +0100
Subject: [openstack-dev] [Mistral] I think Mistral need K8S action
In-Reply-To: <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark>
References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark>
Message-ID:

On 13 April 2018 at 05:47, Renat Akhmerov wrote:
> Hi,
>
> I completely agree with you that having such an action would be useful.
> However, I don't think this kind of action should be provided by Mistral
> out of the box. Actions and triggers are integration pieces for Mistral and
> are natively external to Mistral code base. In other words, this action can
> be implemented anywhere and plugged into a concrete Mistral installation
> where needed.
>
> As a home for this action I'd propose the 'mistral-extra' repo where we are
> planning to move OpenStack actions and some more.
> Also, if you'd like to contribute you're very welcome.

I would recommend developing actions for K8s somewhere externally; then, when mistral-extra is ready, we can move them over. This is the approach that I took for the Ansible actions [1], and they will likely be one of the first additions to mistral-extra.

[1]: https://github.com/d0ugal/mistral-ansible-actions
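For anyone curious what that involves: an external action plugin is just a small Python class registered under the 'mistral.actions' entry point of the plugin package. A minimal sketch (the class name, parameters, and return value are illustrative, not a real K8s binding):

    from mistral_lib import actions

    class CreateDeployment(actions.Action):
        # Hypothetical K8s action: create a Deployment object.

        def __init__(self, name, image):
            self.name = name
            self.image = image

        def run(self, context):
            # A real implementation would call the Kubernetes API here;
            # whatever is returned becomes the task result, and raising
            # an exception marks the task as failed.
            return {'name': self.name, 'image': self.image}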
> Thanks
>
> Renat Akhmerov
> @Nokia
>
> On 13 Apr 2018, 09:18 +0700, 홍선군 , wrote:
>
> Hello Mistral team.
>
> I'm doing some work on K8S, but I noticed that there is only a Docker
> action in Mistral.
>
> I would like to ask the Mistral team why there is no K8S action in
> Mistral.
>
> I think it would be useful in Mistral.
>
> If you feel it's necessary, could I add a K8S action to Mistral?
>
> Regards,
>
> Xian Jun Hong
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From thiago at redhat.com  Fri Apr 13 16:05:01 2018
From: thiago at redhat.com (Thiago da Silva)
Date: Fri, 13 Apr 2018 12:05:01 -0400
Subject: Re: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG
In-Reply-To:
References: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> <26f528a1-8968-326d-14ec-c8645a186141@debian.org>
Message-ID:

On Fri, Apr 13, 2018 at 9:37 AM, Victor Stinner wrote:
> 2018-04-13 15:07 GMT+02:00 Thomas Goirand :
> > BTW, any progress on upstream Swift WRT Py3 support?
>
> There is a voting Python 3.4 gate which runs 902 unit tests. The
> Python 2.7 gate runs 5,902 unit tests. I compute that 15% of unit
> tests pass on Python 3.4.

There's been some continued effort to port Swift to py3. Our current goal has been to focus on running the proxy under py3; that way we can also start running Swift's functional tests. Once the proxy server and unit/functional tests have been ported, we could shift focus to the account, container, and object servers and finally to the background daemons.

So yes, there's still a lot of work to do, but we are making progress.

Thiago

> Victor
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From miguel at mlavalle.com  Fri Apr 13 16:13:05 2018
From: miguel at mlavalle.com (Miguel Lavalle)
Date: Fri, 13 Apr 2018 11:13:05 -0500
Subject: [openstack-dev] [neutron] [release]
Message-ID:

Hi everybody,

This message is to announce that, effective now, Akihiro Amotoki has accepted to be the new Neutron liaison with the Release team.
As such, Akihiro will be responsible for coordinating the releases of the Neutron project and the Neutron Stadium projects, starting with the upcoming Rocky-1 release.

I also want to take this opportunity to thank Armando Migliaccio for the support he has provided over the past few cycles releasing Neutron to the community. This is only a tiny fraction of the many great contributions that Armando has made to OpenStack over many years. Thank you and good luck!

Best regards

Miguel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jaypipes at gmail.com  Fri Apr 13 17:20:24 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Fri, 13 Apr 2018 13:20:24 -0400
Subject: Re: [openstack-dev] [nova] [placement] placement update 18-15
In-Reply-To:
References:
Message-ID:

On 04/13/2018 09:57 AM, Chris Dent wrote:
> # Questions
>
> * Is anyone already on the hook to implement the multiple member_of
>   support described by this spec amendment:
>   https://review.openstack.org/#/c/555413/ ?

I got this. Should have code up today for it.

Best,
-jay

From cboylan at sapwetik.org  Fri Apr 13 17:20:47 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Fri, 13 Apr 2018 10:20:47 -0700
Subject: [openstack-dev] [all] Zuul job definitions and the branches attribute
Message-ID: <1523640047.1975705.1337157000.528BA80D@webmail.messagingengine.com>

Hello everyone,

Nova recently discovered that if you use the job.branches attribute in job definitions the results may not be as expected, particularly if you are porting jobs from openstack-zuul-jobs into your project repos.

The problem is that the openstack-zuul-jobs project is "branchless": it only has a master branch. This means that, for jobs defined in that repo, the job.branches attribute was the only way to restrict which branches a job runs against. When ported to "branched" repos like Nova, this job.branches attribute has a slightly different behavior: it applies the config on the current branch to all branches matching job.branches. In the Nova case this meant the stable/queens job definition was being applied to the master job definition for the job with the same name.

Instead, the job.branches attribute should be dropped and you should use the per-branch job definition to control branch specific attributes. If you want to stop running a job on a branch, delete the job's definition from that branch.

TL;DR if you have job definitions that have a branches attribute like Nova did [0], you should consider removing that and use the per-branch definitions to control where and when jobs run.

[0] https://git.openstack.org/cgit/openstack/nova/tree/.zuul.yaml?id=cb6c8ca1a7a5abc4d0079e285f877c18c49acaf2#n99

If you have any questions feel free to reach out to the infra team either here or on IRC.
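To illustrate the difference, roughly (the job name here is made up):

    # In a branchless repo like openstack-zuul-jobs, this was the only
    # way to restrict where a job runs:
    - job:
        name: example-dsvm-job
        branches: stable/queens

    # In a branched repo, drop the branches attribute entirely:
    - job:
        name: example-dsvm-job

    # ...and let each branch's .zuul.yaml carry its own definition of
    # example-dsvm-job. Deleting the definition from a branch stops the
    # job from running there.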
Clark

From lbragstad at gmail.com  Fri Apr 13 19:43:51 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Fri, 13 Apr 2018 14:43:51 -0500
Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 9 April 2018
Message-ID: <09647cb2-ac2d-6653-ece4-63cc4a8d6a7e@gmail.com>

# Keystone Team Update - Week of 9 April 2018

## News

This week was another quiet week with our primary focus being specification reviews. We did reach consensus on the application credentials specification [0], which landed on Tuesday. Even though it hasn't landed yet, there's been a bunch of good discussion on the cross-project default roles specification, which is shaping up nicely [1].

Keystonemiddleware is now officially VMT managed [2], meaning keystonemiddleware is treated as more of a first-class citizen when dealing with vulnerabilities. This effort has been underway for over a year and it finally became official this week.

[0] http://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/capabilities-app-creds.html
[1] https://review.openstack.org/#/c/523973/
[2] https://review.openstack.org/#/c/555934/

## Open Specs

Search query: https://goo.gl/eyTktx

No new specifications were proposed this week, but there are several that still need reviews, specifically default roles [3], unified limits [4][5] and JWT [6]. Those specifications would really benefit from some more eyes. If you have questions, let us know in IRC and we can discuss them.

[3] https://review.openstack.org/#/c/523973/
[4] https://review.openstack.org/#/c/540803/
[5] https://review.openstack.org/#/c/549766/
[6] https://review.openstack.org/#/c/541903/

## Recently Merged Changes

Search query: https://goo.gl/hdD9Kw

We merged 6 changes this week, mainly dealing with PTI changes, updated pysaml2 requirements, and a bug fix or two.

## Changes that need Attention

Search query: https://goo.gl/tW5PiH

There are 51 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. Please have a look if you have time. If you're curious about what to review or have questions about a change, please stop by #openstack-keystone or check out the priorities in Trello [7].

[7] https://trello.com/b/wmyzbFq5/keystone-rocky-roadmap

## Bugs

There are a total of 127 bugs open against openstack/keystone, with a pretty even distribution ranging from High through Undecided. If you see something that interests you, please don't hesitate to communicate it with us. We do set aside time every week during office hours to help wrangle bugs.

Total: 127
High (14): https://bit.ly/2v7yedC
Medium (33): https://bit.ly/2qvb23L
Low (32): https://bit.ly/2JEjoyJ
Wishlist (32): https://bit.ly/2JFGa9c
Undecided (16): https://bit.ly/2qsSjXe

## Milestone Outlook

Next Friday is specification proposal freeze. Our next deadline is June 8th, which is specification freeze. We should be focused on finishing up our specification reviews so that we can get implementations moving for the release.

https://releases.openstack.org/rocky/schedule.html

## Shout-outs

Big thanks to Gage and Kristi for the work they did to get keystonemiddleware VMT managed! This took a long time and it easily could have been dropped. Thanks for pushing this forward!

## Help with this newsletter

Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sean.mcginnis at gmx.com  Fri Apr 13 19:46:59 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 13 Apr 2018 14:46:59 -0500
Subject: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights
Message-ID: <20180413194659.GA9657@sm-xps>

Hello teams,

I am following up on some recently announced changes regarding governed projects and tagging rights. See [1] for background. Before that change, it was mostly just convention that when a project came under official governance, all tagging and releases would move to the openstack/releases repo and associated automation.
It was not officially stated until recently that this was one of the steps of coming under governance, so there were a few projects that became official but continued to do their own releases.

We've cleaned up most projects' rights to push tags, but for the ones listed here we waited:

- rally
- dragonflow
- ec2-api
- networking-powervm
- nova-powervm
- yaql

We would like to finish cleaning up the ACLs for these, but I wanted to check with the teams to make sure there wasn't a reason why these repos had continued tagging separately.

Please let me know, either here or in the #openstack-release channel, if there is something we are overlooking.

Thanks for your attention.

---
Sean (smcginnis)

From sean.mcginnis at gmx.com  Fri Apr 13 19:53:35 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Fri, 13 Apr 2018 14:53:35 -0500
Subject: Re: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights
In-Reply-To: <20180413194659.GA9657@sm-xps>
References: <20180413194659.GA9657@sm-xps>
Message-ID: <20180413195334.GA10074@sm-xps>

On Fri, Apr 13, 2018 at 02:46:59PM -0500, Sean McGinnis wrote:
> Hello teams,
>
> I am following up on some recently announced changes regarding governed
> projects and tagging rights. See [1] for background.
>

[1] https://review.openstack.org/#/c/557737/

From myoung at redhat.com  Fri Apr 13 22:24:00 2018
From: myoung at redhat.com (Matt Young)
Date: Fri, 13 Apr 2018 18:24:00 -0400
Subject: [openstack-dev] [tripleo] CI / Tempest Sprint 11 Summary
Message-ID:

Greetings,

The TripleO squads for CI and Tempest have just completed Sprint 11. The following is a summary of activities during this sprint. The newly formed Tempest Squad has completed its first sprint. Details on the team structure can be found in the spec [1].

Sprint 11 Epic (CI Squad): Upgrades

Epic Card: https://trello.com/c/8pbRwBps/549-upstream-upgrade-ci

This is the second sprint in which the team focused on CI for upgrades. We expect additional sprints focused on upgrades will be needed, and we have a number of backlog tasks remaining as well [2]. We did the following:

* Prune and remove old / irrelevant jobs from CI
* Assess the current state of existing jobs to determine status and issues.
* Ensure the reproducer script enables the correct branches of tripleo-upgrade
* Implement "Keystone Only" CI job. This is a minimal deployment with the smallest set of services (keystone + deps) in play.
* tripleo-ci-centos-7-scenario000-multinode-oooq-container-updates
* Consolidate docker namespaces between docker.io, rdoproject.org

Sprint 11 Epic (Tempest Squad): Containerize Tempest

Epic Card: https://trello.com/c/066JFJjf/537-epic-containerize-tempest

As noted above, this is the first sprint for the newly formed Tempest Squad. The work was a combination of the sprint epic and team members' pre-existing work that is nearing completion. We did the following:

* Fix tempest plugins upgrade issue (RHOS 10>11>12>13)
* Switch to stestr to run tempest beginning with queens
* Move neutron CLI calls to openstack CLI
* Containerize tempest on featureset027 (UC idempotency)

We made progress on the following, but work remains and continues in Sprint 12:

* Refactor validate-tempest CI role for UC and containers (reviews in flight)
* Updates to ansible-role-openstack-certification playbook & CI jobs that use it.
* Upstream documentation covering above work

Note: We have added a new trello board [3] to archive completed sprint cards.
Previously we were archiving (trello operation) the cards, making it difficult to analyze/search the past.

Ruck and Rover

Each sprint, two of the team members assume the roles of Ruck and Rover (each for half of the sprint).

* Ruck is responsible for monitoring the CI, checking for failures, opening bugs, and participating in meetings; this is your focal point for any CI issues.
* Rover is responsible for working on these bugs and fixing problems, while the rest of the team stays focused on the sprint.

For more information about our structure, check [1]

Ruck & Rover (Sprint 11), Etherpad [4]:
* Arx Cruz (arxcruz)
* Rafael Folco (rfolco)

Two issues in particular where substantial time was spent were:

http://bugs.launchpad.net/bugs/1757556 (SSH timeouts)
https://bugs.launchpad.net/tripleo/+bug/1760189 (AMQP issues)

The full list of bugs open or worked on were:

https://bugs.launchpad.net/tripleo/+bug/1763009
https://bugs.launchpad.net/tripleo/+bug/1762419
https://bugs.launchpad.net/tripleo/+bug/1762351
https://bugs.launchpad.net/tripleo/+bug/1761171
https://bugs.launchpad.net/tripleo/+bug/1760189
https://bugs.launchpad.net/bugs/1757556
https://bugs.launchpad.net/tripleo/+bug/1759868
https://bugs.launchpad.net/tripleo/+bug/1759876
https://bugs.launchpad.net/tripleo/+bug/1759583
https://bugs.launchpad.net/tripleo/+bug/1758143
https://bugs.launchpad.net/tripleo/+bug/1757134
https://bugs.launchpad.net/tripleo/+bug/1755485
https://bugs.launchpad.net/tripleo/+bug/1758932
https://bugs.launchpad.net/tripleo/+bug/1751180

If you have any questions and/or suggestions, please contact us in #tripleo

Thanks,

Matt

[1] https://specs.openstack.org/openstack/tripleo-specs/specs/policy/ci-team-structure.html
[2] https://trello.com/b/U1ITy0cu/tripleo-and-rdo-ci?menu=filter&filter=label:upgrades
[3] https://trello.com/b/BjcIIp0f/tripleo-and-rdo-ci-archive
[4] https://review.rdoproject.org/etherpad/p/ruckrover-sprint11
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From emilien at redhat.com  Sat Apr 14 00:58:20 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 13 Apr 2018 17:58:20 -0700
Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com>
References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com>
Message-ID:

On Wed, Apr 11, 2018 at 3:38 PM, Steve Baker wrote:
>
> - If agreed, we'll create a new Ansible role called ansible-role-container-registry
> that for now will deploy exactly what we have in TripleO, without extra
> features.
>
> +1

A bit of progress today, I prototyped an Ansible role for that purpose:
https://github.com/EmilienM/ansible-role-container-registry

The next step: I'm going to investigate whether we can deploy Docker and Docker Distribution (to deploy the registry v2) via the existing composable services in THT, maybe by using external_deploy_tasks (or another interface). The idea is really to have the registry ready before actually deploying the undercloud containers, so we can modify them in the middle (run container-check in our case).

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhang.lei.fly at gmail.com  Sat Apr 14 03:02:54 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Sat, 14 Apr 2018 11:02:54 +0800
Subject: [openstack-dev] [stable][kolla] tagging newton EOL
Message-ID:

hi stable team,

Kolla project is ready for Newton EOL.
kolla-ansible was split from kolla in the ocata cycle, so there is no newton branch in kolla-ansible. Please make the following repo EOL:

openstack/kolla

Thanks a lot.

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mordred at inaugust.com  Sat Apr 14 16:37:46 2018
From: mordred at inaugust.com (Monty Taylor)
Date: Sat, 14 Apr 2018 11:37:46 -0500
Subject: [openstack-dev] [sdk][osc][openstackclient] Migration to storyboard complete
Message-ID: <876abe58-d86a-8717-6bb5-7c7b5f7957f9@inaugust.com>

Hey everybody,

The migration of the openstacksdk and python-openstackclient repositories to storyboard has been completed. Each of the repos owned by those teams has been migrated, and project groups now also exist for each.

python-openstackclient group: https://storyboard.openstack.org/#!/project_group/80
python-openstackclient https://storyboard.openstack.org/#!/project/975
cliff https://storyboard.openstack.org/#!/project/977
osc-lib https://storyboard.openstack.org/#!/project/974
openstackclient https://storyboard.openstack.org/#!/project/971

openstacksdk group: https://storyboard.openstack.org/#!/project_group/78
openstacksdk https://storyboard.openstack.org/#!/project/972
os-client-config https://storyboard.openstack.org/#!/project/973
os-service-types https://storyboard.openstack.org/#!/project/904
requestsexceptions https://storyboard.openstack.org/#!/project/835
shade https://storyboard.openstack.org/#!/project/760

Happy storyboarding.

Monty

From sean.mcginnis at gmx.com  Sat Apr 14 19:29:42 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Sat, 14 Apr 2018 14:29:42 -0500
Subject: Re: [openstack-dev] [sdk][osc][openstackclient] Migration to storyboard complete
In-Reply-To: <876abe58-d86a-8717-6bb5-7c7b5f7957f9@inaugust.com>
References: <876abe58-d86a-8717-6bb5-7c7b5f7957f9@inaugust.com>
Message-ID: <20180414192942.GA15758@sm-xps>

On Sat, Apr 14, 2018 at 11:37:46AM -0500, Monty Taylor wrote:
> Hey everybody,
>
> The migration of the openstacksdk and python-openstackclient repositories to
> storyboard has been completed. Each of the repos owned by those teams has
> been migrated, and project groups now also exist for each.
>

I just noticed that on python-openstackclient, the repo's README file still points people to launchpad for bug and blueprint tracking. Just one more transition housekeeping item folks need to keep in mind when making this switch.
Sean

From gkotton at vmware.com  Sun Apr 15 09:06:03 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Sun, 15 Apr 2018 09:06:03 +0000
Subject: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
Message-ID: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com>

Hi,
It seems like the RYU import is breaking the project:

2018-04-15 08:41:34.654681 | ubuntu-xenial | b'--- import errors ---\nFailed to import test module: neutron_dynamic_routing.tests.unit.services.bgp.driver.ryu.test_driver\nTraceback (most recent call last):\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 456, in _find_test_path\n module = self._get_module_from_name(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 395, in _get_module_from_name\n __import__(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/driver/ryu/test_driver.py", line 21, in \n from ryu.services.protocols.bgp import bgpspeaker\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/services/protocols/bgp/bgpspeaker.py", line 21, in \n from ryu.lib.packet.bgp import (\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/__init__.py", line 6, in \n from . import (ethernet, arp, icmp, icmpv6, ipv4, ipv6, lldp, mpls, packet,\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ethernet.py", line 18, in \n from . import vlan\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/vlan.py", line 21, in \n from . import ipv4\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ipv4.py", line 23, in \n from . import tcp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/tcp.py", line 24, in \n from . import bgp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/bgp.py", line 52, in \n from ryu.utils import binary_str\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/utils.py", line 23, in \n from pip import req as pip_req\nImportError: cannot import name \'req\'\n'

Any suggestions?
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From amotoki at gmail.com  Sun Apr 15 09:42:52 2018
From: amotoki at gmail.com (Akihiro Motoki)
Date: Sun, 15 Apr 2018 09:42:52 +0000
Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
In-Reply-To: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com>
References: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com>
Message-ID:

Gary,

I think this is caused by the recent pip change: pip internals can no longer be imported from code. The right solution seems to be bumping the minimum version of ryu.

Thought?

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html

Akihiro
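For illustration, the bump would look something like this in neutron-dynamic-routing's requirements files (4.24 is the floor suggested later in this thread; treat the exact version and the license comment as assumptions to verify against global-requirements):

    # requirements.txt
    ryu>=4.24  # Apache-2.0

    # lower-constraints.txt
    ryu==4.24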
2018/04/15 6:06 PM, "Gary Kotton" <gkotton at vmware.com>:

> Hi,
> It seems like the RYU import is breaking the project:
>
> [traceback snipped; see the original message above]
>
> Any suggestions?
> Thanks
> Gary
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gkotton at vmware.com  Sun Apr 15 11:32:18 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Sun, 15 Apr 2018 11:32:18 +0000
Subject: [openstack-dev] [devstack][infra] pip vs psutil
Message-ID: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com>

Hi,
The gate is currently broken with https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/ can unblock us in the short term. Any other ideas?
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gkotton at vmware.com  Sun Apr 15 12:02:42 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Sun, 15 Apr 2018 12:02:42 +0000
Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
In-Reply-To:
Message-ID: <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com>

Hi,
That sounds reasonable. I wonder if the RYU folk can chime in here.
Thanks
Gary

From: Akihiro MOTOKI
Reply-To: OpenStack List
Date: Sunday, April 15, 2018 at 12:43 PM
To: OpenStack List
Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints

Gary,

I think this is caused by the recent pip change: pip internals can no longer be imported from code. The right solution seems to be bumping the minimum version of ryu.

Thought?

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html

Akihiro

2018/04/15 6:06 PM, "Gary Kotton" <gkotton at vmware.com>:

Hi,
It seems like the RYU import is breaking the project:

[traceback snipped; see the original message above]

Any suggestions?
Thanks
Gary
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gaetan at xeberon.net  Sun Apr 15 14:42:13 2018
From: gaetan at xeberon.net (Gaetan)
Date: Sun, 15 Apr 2018 14:42:13 +0000
Subject: Re: [openstack-dev] PBR and Pipfile
In-Reply-To:
References:
Message-ID:

Hello,

Thank you for this response. It took me quite some time to carefully read it; it was far beyond my expectations! So thanks a lot, there is a lot to digest.

> There are actually three different relevant use cases here, with some
> patterns available to draw from. I'm going to spell them out to just
> make sure we're on the same page.
>
> * Library
> * Application
> * Suite of Coordinated Applications

[...]

Can we say that packaging a python application for a linux distribution may fall into this "Suite of Coordinated Applications" category? I really liked your way of describing these differences; I actually started using it as the basis for the internal Python courses I give in my company on a regular basis :)

As maintainer of a small project (Guake), I am in contact with package maintainers (Debian, Arch, now Fedora...) who relate these kinds of issues to me (Guake mainly has system python dependencies such as pygtk). They (the maintainers) have similar issues to the ones OpenStack has, in that they need to sync all dependencies for a given distribution version. Of course, they do not use an upper-constraints requirements file as a way of fixing the same dependencies for all apps.

> For Pipfile, what I believe we'd want to see is adding support for
> --constraints to pipenv install - so that we can update our
> Pipfile.lock file for each application in the context of the global
> constraints file. This can be simulated today without any support from
> pipenv directly like this:
>
> pipenv install
> $(pipenv --venv)/bin/pip install -U -c
> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
> -r requirements.txt
> pipenv lock

pipenv lock will not lock the dependencies added manually by pip, unfortunately. What you can do is open a bug on the pipenv project [1] with your need and assign me to it. I am not sure how this would need to be handled, but it is worth notifying the pipenv project of your needs (I have never used the -c option of pip install).

> > There is also PEP work around pyproject.toml ([4]), which looks quite
> > similar to PBR's setup.cfg. What do you think about it?
>
> It's a bit different. There is also a philosophical disagreement about
> the use of TOML that's not worth going in to here

we agree on TOML

> - but from a pbr perspective I'd like to minimize use of pyproject.toml
> to the bare minimum needed to bootstrap things into pbr's control. In
> the first phase I expect to replace our current setup.py boilerplate:
>
> setuptools.setup(
>     setup_requires=['pbr'],
>     pbr=True)
>
> with:
>
> setuptools.setup(pbr=True)
>
> and add pyproject.toml files with:
>
> [build-system]
> requires = ["setuptools", "wheel", "pbr"]

[...]

That would indeed go way beyond what I wanted to work on for pbr. Do you have a plan for this support in pbr?
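Putting those two quoted snippets together, the phase-one layout for a pbr project would presumably be nothing more than this (it just restates what is quoted above):

    # setup.py
    import setuptools

    setuptools.setup(pbr=True)

    # pyproject.toml
    [build-system]
    requires = ["setuptools", "wheel", "pbr"]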
> > My opinion is this difference in behaviour between lib and app has
> > technical reasons, but as a community we would gain a lot by unifying
> > both workflows. I am using PBR + a few hacks [5], and I am pretty
> > satisfied with the overall result.
>
> There are two topics your pbr patch opens up that need to be covered:
>
> * pbr behavior
> * dependencies
>
> ** pbr behavior **
>
> I appreciate what you're saying about unifying the lib and app
> workflow, but I think the general pattern across the language
> communities (javascript and rust both have similar patterns to Pipfile)
> is that the two different options are important. We may just need to
> have a better document - rust has an excellent description:
>
> https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html

There is a lot of discussion in the pipenv community at the moment; we ended up with these best practices for Python:

- app: Pipfile and Pipfile.lock tracked in git
- library: do not track Pipfile*, but use 'pipenv install -e .'

I am trying to amend that with pbr support for libs (and there are actually several people using pbr).

> In any case, I think what pbr should do with pipfiles is:
>
> * If pbr discovers a Pipfile and no Pipfile.lock, it should treat the
> content in the packages section of Pipfile as it currently does with
> requirements.txt (and how you have done in the current patch)

That's my goal in my patch. If I understand how it works (please tell me if I am wrong), when building a distribution package/wheel, pbr looks for requirements.txt and injects its entries as dependencies. The major change in my patch is having to vendor the basic toml parser.

> * If pbr discovers a Pipfile.lock, it should treat the content in
> Pipfile.lock as it currently does requirements.txt.

This is actually the "normal" behavior of pipenv, and more generally of the Pipfile parser: prefer the lock file over the Pipfile. The ultimate source of truth is the lock file; the Pipfile is only a convenient way of describing the dependencies, at least for apps.

> Then, we either need to:
>
> * Add support to pipenv install for specifying a pip-style constraints
> file

I am not sure I'll be the right dev to handle the constraints feature...

> * Add support to pipenv install for specifying a constraints file that
> is in the format of a Pipfile.lock - but which does the same thing.
>
> * Write a pbr utility subcommand for generating a Pipfile.lock from a
> Pipfile taking a provided constraints file into account.

Ok, you mean that, as OpenStack will keep the requirements constraints, pbr needs to support them to be compatible. Pipenv is already able to read from requirements files, but it reflects that in the Pipfile (that's probably not wanted).

> We may also want to write a utility for creating a Pipfile and/or lock
> from a pbr-oriented requirements.txt/test-requirements.txt. (it should
> use pipfile on the backend of course) that can do the appropriate
> initial dance.

Do you mean a constraints file or any requirements.txt? Pipenv is already able to convert a requirements.txt to a Pipfile.

> ** dependencies **
>
> The pep518 support in pip10 is really important here. Because ...
>
> We should not vendor code into pbr.

Oops, I was exactly thinking this would cause problems. I actually kind of vendor 2 deps, the "pipfile" parser and the "toml" parser.

> While vendoring code has been found to be acceptable by other portions
> of the Python community, it is not acceptable here.

There are intense discussions about this vendoring issue in pipenv; for example, some deps are actually vendored 3 times (requests, if I remember correctly). So I agree this is not acceptable, but I need to better understand how pip 10 works.

> Once pip10 is released next week with pyproject.toml support, as
> mentioned earlier, we'll be able to start using (a reasonable set) of
> dependencies as is appropriate. In order to ensure backwards compat, I
> would recommend we do the following:
>
> * Add toml and pipfile as depends to pbr
> * Protect their imports with a try/except (for people using old pip
> which won't install any depends pbr has)
> * Declare that pbr support for Pipfile will only work with people using
> pip>=10 and for projects that add a pyproject.toml to their project
> containing
>
> [build-system]
> requires = ["pbr"]
>
> * If pbr tries to import toml/pipfile and fails, it should fall back to
> reading requirements.txt (this allows us to keep backwards compat until
> it's reasonable to expect everyone to be on pip10)

This seems much more like what I was expecting for the Pipfile support; if you (the openstack community and pbr developers) can guide me on this, I think I can work on this full pip10+Pipfile support. The thing is, for the moment the current situation works pretty well for us; pbr does a marvelous job on both our libraries (though it requires generating the requirements.txt with pipenv-to-requirements). I started using reno and its Sphinx plug-in (see the Guake source code :) ).
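As a rough sketch, the guarded import described above could look like this (the helper name is mine, and the pipfile.load() call assumes the pypa/pipfile API; none of this is pbr's actual code):

    import os

    try:
        import pipfile
    except ImportError:
        # Old pip that did not install pbr's new dependencies.
        pipfile = None

    def _read_dependencies():
        if pipfile is not None and os.path.exists('Pipfile'):
            # A fuller version would prefer Pipfile.lock when present,
            # per the behaviour described above.
            return pipfile.load('Pipfile').data
        # Fall back to the classic requirements.txt handling.
        return None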
> To support that last point, we should write a utility function, let's
> call it 'pbr lock', with the following behavior:
>
> * If a Pipfile and a Pipfile.lock are present, it runs:
>
>     pipenv lock -r

I am not sure why this would be needed; if the project uses pipenv, the normal way is to run pipenv lock directly to generate Pipfile.lock. I don't understand the need for generating requirements.txt with pipenv lock -r. Is it to support projects that use a Pipfile but run on an old pip (<10)? Is there a reason not to update pip to the latest version?

> * If there is no Pipfile.lock, simply read the Pipfile and write the
> specifiers into requirements.txt in non-pinned format.
>
> This will allow pbr users to maintain their projects in such a way as
> to be backwards compatible while they start to use Pipfile/Pipfile.lock
>
> We MAY want to consider adding an option flag to setup.cfg, like:
>
> [pbr]
> type = application
>
> or
>
> [pbr]
> type = library
>
> for declaring to pbr which of Pipfile / Pipfile.lock should pbr pay
> attention to, regardless of which files might be present.

I really liked this feature. May I suggest another setting in pbr: allow or disallow v-prefixed versions. I have added support for versions with a "v" or "V" prefix, which works really great with Gitlab's protected tag regular expressions. I would like my project, for example, to fail if someone creates a tag on master without the v prefix. It is not mandatory, but when the project is handed over to new maintainers, they may make the mistake of using a version without v/V and it will work silently.
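Combined with the v-prefix idea, that proposed section might look like this (purely illustrative; neither option exists in pbr today):

    [pbr]
    type = application
    # hypothetical flag for the tag-prefix suggestion above:
    allow_v_prefixed_version = true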
Gaetan > [1] pipenv: https://github.com/pypa/pipenv -- ----- Gaetan -------------- next part -------------- An HTML attachment was scrubbed... URL: From iwamoto at valinux.co.jp Sun Apr 15 23:30:32 2018 From: iwamoto at valinux.co.jp (IWAMOTO Toshihiro) Date: Mon, 16 Apr 2018 08:30:32 +0900 Subject: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints In-Reply-To: <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com> References: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com> <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com> Message-ID: <20180415233032.9ABE6B32D1@mail.valinux.co.jp> On Sun, 15 Apr 2018 21:02:42 +0900, Gary Kotton wrote: > > [1 ] > [1.1 ] > Hi, > That sounds reasonable. I wonder if the RYU folk can chime in here. > Thanks I don't fully understand the recent g-r change yet, but I guess neutron-dynamic-routing should also have ryu>=4.24. I'll check this tommorrow. > From: Akihiro MOTOKI > Reply-To: OpenStack List > Date: Sunday, April 15, 2018 at 12:43 PM > To: OpenStack List > Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints > > Gary, > > I think this is caused by the recent pip change and pip no longer cannot import pip from code. The right solution seems to bump the minimum version of ryu. > > Thought? > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html > > Akihiro > > 2018/04/15 午後6:06 "Gary Kotton" >: > Hi, > It seems like ther RYU import is breaking the project: > > > 2018-04-15 08:41:34.654681 | ubuntu-xenial | b'--- import errors ---\nFailed to import test module: neutron_dynamic_routing.tests.unit.services.bgp.driver.ryu.test_driver\nTraceback (most recent call last):\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 456, in _find_test_path\n module = self._get_module_from_name(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 395, in _get_modu le_from_name\n __import__(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/driver/ryu/test_driver.py", line 21, in \n from ryu.services.protocols.bgp import bgpspeaker\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/services/protocols/bgp/bgpspeaker.py", line 21, in \n from ryu.lib.packet.bgp import (\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/__init__.py", line 6, in \n from . import (ethernet, arp, icmp, icmpv6, ipv4, ipv6, lldp, mpls, packet,\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ethernet.py", line 18, in \n from . import vlan\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/vlan.py", line 21, in \n from . import ipv4\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ipv4.py", line 23, in \n from . import tcp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/tcp.py", line 24, in \n from . 
import bgp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/bgp.py", line 52, in \n from ryu.utils import binary_str\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/utils.py", line 23, in \n from pip import req as pip_req\nImportError: cannot import name \'req\'\n' > > Any suggestions? > Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > [1.2 ] > [2 ] > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Mon Apr 16 02:24:58 2018 From: emilien at redhat.com (Emilien Macchi) Date: Sun, 15 Apr 2018 19:24:58 -0700 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> Message-ID: On Fri, Apr 13, 2018 at 5:58 PM, Emilien Macchi wrote: > > A bit of progress today, I prototyped an Ansible role for that purpose: > https://github.com/EmilienM/ansible-role-container-registry > > The next step is, I'm going to investigate if we can deploy Docker and > Docker Distribution (to deploy the registry v2) via the existing composable > services in THT by using external_deploy_tasks maybe (or another interface). > The idea is really to have the registry ready before actually deploying > the undercloud containers, so we can modify them in the middle (run > container-check in our case). > This patch: https://review.openstack.org/#/c/561377 is deploying Docker and Docker Registry v2 *before* containers deployment in the docker_steps. It's using the external_deploy_tasks interface that runs right after the host_prep_tasks, so still before starting containers. It's using the Ansible role that was prototyped on Friday, please take a look and raise any concern. Now I would like to investigate how we can run container workflows between the deployment and docker and containers deployments. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Mon Apr 16 02:56:52 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 16 Apr 2018 09:56:52 +0700 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> Message-ID: <70599721-5d72-45ba-b8f8-00e6b0963509@Spark> On 13 Apr 2018, 23:01 +0700, Dougal Matthews , wrote: > > I would recommend developing actions for K8s somewhere externally, then when mistral-extra is ready we can move them over. This is the approach that I took for the Ansible actions[1] and they will likely be one of the first additions to mistral-extra. > > > > [1]: https://github.com/d0ugal/mistral-ansible-actions > > Yes, I agree. Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Mon Apr 16 03:22:03 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 16 Apr 2018 03:22:03 +0000 Subject: [openstack-dev] [All][Election] Last Days for TC Nominations Message-ID: Hello Everyone, A quick reminder that we are in the last hours for TC candidate announcements. Nominations are open until Apr 17, 2018 23:45 UTC. If you want to stand for TC, don't delay, follow the instructions at [1] to make sure the community knows your intentions. Make sure your nomination has been submitted to the openstack/election repository and approved by election officials. Thank you, -Kendall Nelson (diablo_rojo) [1] http://governance.openstack.org/election/#how-to-submit-your-candidacy -------------- next part -------------- An HTML attachment was scrubbed... URL: From xianjun666 at dcn.ssu.ac.kr Mon Apr 16 04:08:54 2018 From: xianjun666 at dcn.ssu.ac.kr (=?utf-8?B?7ZmN7ISg6rWw?=) Date: Mon, 16 Apr 2018 13:08:54 +0900 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> Message-ID: <734a01d3d538$a3b688d0$eb239a70$@dcn.ssu.ac.kr> Thanks for your reply. I will refer to this Ansible action and developing actions for K8S somewhere externally. Regards, Xian Jun Hong From: Dougal Matthews Sent: Saturday, April 14, 2018 1:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Mistral]I think Mistral need K8S action On 13 April 2018 at 05:47, Renat Akhmerov > wrote: Hi, I completely agree with you that having such an action would be useful. However, I don’t think this kind of action should be provided by Mistral out of the box. Actions and triggers are integration pieces for Mistral and are natively external to Mistral code base. In other words, this action can be implemented anywhere and plugged into a concrete Mistral installation where needed. As a home for this action I’d propose 'mistral-extra’ repo where we are planning to move OpenStack actions and some more. Also, if you’d like to contribute you’re very welcome. I would recommend developing actions for K8s somewhere externally, then when mistral-extra is ready we can move them over. This is the approach that I took for the Ansible actions[1] and they will likely be one of the first additions to mistral-extra. [1]: https://github.com/d0ugal/mistral-ansible-actions Thanks Renat Akhmerov @Nokia On 13 Apr 2018, 09:18 +0700, >, wrote: Hello Mistral team. I'm doing some work on the K8S but I observed that there is only Docker's action in Mistral. I would like to ask Mistral Team, why there is no K8S action in the mistral. I think it would be useful in Mistral. If you feel it's necessary, could I add K8S action to mistral? Regards, Xian Jun Hong __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
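For readers following the Mistral thread above: an externally maintained Kubernetes action in the style of mistral-ansible-actions might look roughly like the following sketch. The class name and parameters are invented; it assumes mistral-lib and the official kubernetes Python client are available.

    from mistral_lib import actions

    class ListPodsAction(actions.Action):
        """Hypothetical custom action returning the pod names in a namespace."""

        def __init__(self, namespace="default"):
            self.namespace = namespace

        def run(self, context):
            from kubernetes import client, config
            config.load_kube_config()  # assumes a kubeconfig on the executor host
            v1 = client.CoreV1Api()
            pods = v1.list_namespaced_pod(self.namespace)
            return [pod.metadata.name for pod in pods.items]

Such an action would then be exposed through a setuptools entry point, the same way mistral-ansible-actions does it, and plugged into a concrete Mistral installation where needed.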
From jichenjc at cn.ibm.com  Mon Apr 16 06:56:06 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Mon, 16 Apr 2018 14:56:06 +0800
Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To: 
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com>
Message-ID: 

Thanks for the question and comments.

>>> metadata service question

We fully agree that the metadata service is something we need to support. As it needs some network setup (NAT), as you pointed out, some functions are missing without it, so it is already on our support plan: currently we plan to use the config drive, and later (along with the enhancements to our neutron support) to support the metadata service.

>>> The "iso file" will not be inside the guest, but rather passed to the guest as a block device, right?

Cloud-init expects to find a config drive that meets the requirements in [1]. In order for cloud-init to be able to consume the config drive, we have to be able to prepare it. On some hypervisors you can define such a device on the VM directly and the VM is able to consume it at startup, but in the z/VM case disks are created during the VM create (define) stage with no disk format set; it is the operating system's responsibility to define the purpose of each disk. So what we do is:

1) When we build the image, we add a small AE (activation engine), similar to cloud-init, whose only purpose is to get files from the z/VM internal pipe and handle the config drive case.

2) During spawn, we create the config drive on the nova-compute side, then send the file to z/VM through the z/VM internal pipe (details omitted here).

3) During startup of the virtual machine, the small AE mounts the file as a loop device (a rough sketch follows below), and in turn cloud-init is able to handle it.

Because this is a special case for us, we don't want to push it upstream to the cloud-init community, given its uniqueness; and as far as we can tell, cloud-init provides no hook mechanism that would let us do the 'mount -o loop' either. Also, from the OpenStack point of view, apart from this small AE (which is well documented) there is nothing special or inconsistent with other drivers.

[1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN   Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Dan Smith
To: "Chen CH Ji"
Cc: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/13/2018 09:46 PM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

> for the run_validation=False issue, you are right: because the z/VM driver
> only supports the config drive and doesn't support the metadata service, we
> made a bad assumption and took the wrong action of disabling the whole ssh
> check. Actually, according to [1], we should only disable
> CONF.compute_feature_enabled.metadata_service but keep both
> self.run_ssh and CONF.compute_feature_enabled.config_drive as True in
> order to make the config drive test validation take effect. Our CI will
> handle that.

Why don't you support the metadata service? That's a pretty fundamental
mechanism for nova and openstack. It's the only way you can get a live
copy of metadata, and it's the only way you can get access to device
tags when you hot-attach something. Personally, I think that it's
something that needs to work.
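To make the activation-engine step Chen describes above concrete, here is a rough illustration of what such an AE would do at guest boot. The paths and names are assumptions for the sketch, not IBM's actual code:

    import subprocess

    # ISO9660 config drive image delivered through the z/VM internal pipe
    # at spawn time (path is hypothetical).
    CONFIG_DRIVE_IMAGE = "/var/lib/zvm/cfgdrive.iso"
    MOUNT_POINT = "/mnt/config-drive"

    def attach_config_drive():
        subprocess.check_call(["mkdir", "-p", MOUNT_POINT])
        # Loop-mount the image read-only; cloud-init's config drive data
        # source can then find openstack/latest/meta_data.json under it.
        subprocess.check_call(
            ["mount", "-o", "loop,ro", CONFIG_DRIVE_IMAGE, MOUNT_POINT])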
> For the tgz/iso9660 question below, this is because we got wrong info > from low layer component folks back to 2012 and after discuss with > some experts again, actually we can create iso9660 in the driver layer > and pass down to the spawned virtual machine and during startup > process, the VM itself will mount the iso file and consume it, because > from linux perspective, either tgz or iso9660 doesn't matter , only > need some files in order to transfer the information from openstack > compute node to the spawned VM. so our action is to change the format > from tgz to iso9660 and keep consistent to other drivers. The "iso file" will not be inside the guest, but rather passed to the guest as a block device, right? > For the config drive working mechanism question, according to [2] z/VM > is Type 1 hypervisor while Qemu/KVM are mostly likely to be Type 2 > hypervisor, there is no file system in z/VM hypervisor (I omit too > much detail here) , so we can't do something like linux operation > system to keep a file as qcow2 image in the host operating system, I'm not sure what the type-1-ness has to do with this. The hypervisor doesn't need to support any specific filesystem for this to work. Many drivers we have in the tree are type-1 (xen, vmware, hyperv, powervm) and you can argue that KVM is type-1-ish. They support configdrive. > what we do is use a special file pool to store the config drive and > during VM init process, we read that file from special device and > attach to VM as iso9660 format then cloud-init will handle the follow > up, the cloud-init handle process is identical to other platform This and the previous mention of this sort of behavior has me concerned. Are you describing some sort of process that runs when the instance is starting to initialize its environment, or something that runs *inside* the instance and thus functionality that has to exist in the *image* to work? --Dan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From skaplons at redhat.com Mon Apr 16 07:13:04 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 16 Apr 2018 09:13:04 +0200 Subject: [openstack-dev] [devstack][infra] pip vs psutil In-Reply-To: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> References: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> Message-ID: Hi, I just wanted to ask if there is any ongoing work on https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It looks that e.g. all grenade jobs in neutron are broken currently :/ > Wiadomość napisana przez Gary Kotton w dniu 15.04.2018, o godz. 13:32: > > Hi, > The gate is currently broken with https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any other ideas? 
> Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski skaplons at redhat.com From gkotton at vmware.com Mon Apr 16 07:13:33 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 16 Apr 2018 07:13:33 +0000 Subject: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints In-Reply-To: <20180415233032.9ABE6B32D1@mail.valinux.co.jp> References: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com> <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com> <20180415233032.9ABE6B32D1@mail.valinux.co.jp> Message-ID: Please see https://review.openstack.org/561443 On 4/16/18, 2:31 AM, "IWAMOTO Toshihiro" wrote: On Sun, 15 Apr 2018 21:02:42 +0900, Gary Kotton wrote: > > [1 ] > [1.1 ] > Hi, > That sounds reasonable. I wonder if the RYU folk can chime in here. > Thanks I don't fully understand the recent g-r change yet, but I guess neutron-dynamic-routing should also have ryu>=4.24. I'll check this tommorrow. > From: Akihiro MOTOKI > Reply-To: OpenStack List > Date: Sunday, April 15, 2018 at 12:43 PM > To: OpenStack List > Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints > > Gary, > > I think this is caused by the recent pip change and pip no longer cannot import pip from code. The right solution seems to bump the minimum version of ryu. > > Thought? > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html > > Akihiro > > 2018/04/15 午後6:06 "Gary Kotton" >: > Hi, > It seems like ther RYU import is breaking the project: > > > 2018-04-15 08:41:34.654681 | ubuntu-xenial | b'--- import errors ---\nFailed to import test module: neutron_dynamic_routing.tests.unit.services.bgp.driver.ryu.test_driver\nTraceback (most recent call last):\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 456, in _find_test_path\n module = self._get_module_from_name(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 395, in _get_modu le_from_name\n __import__(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/driver/ryu/test_driver.py", line 21, in \n from ryu.services.protocols.bgp import bgpspeaker\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/services/protocols/bgp/bgpspeaker.py", line 21, in \n from ryu.lib.packet.bgp import (\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/__init__.py", line 6, in \n from . import (ethernet, arp, icmp, icmpv6, ipv4, ipv6, lldp, mpls, packet,\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ethernet.py", line 18, in \n from . import vlan\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/vlan.py", line 21, in \n from . 
import ipv4\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ipv4.py", line 23, in \n from . import tcp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/tcp.py", line 24, in \n from . import bgp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/bgp.py", line 52, in \n from ryu.utils import binary_str\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/utils.py", line 23, in \n from pip import req as pip_req\nImportError: cannot import name \'req\'\n' > > Any suggestions? > Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > [1.2 ] > [2 ] > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From gkotton at vmware.com Mon Apr 16 07:14:10 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 16 Apr 2018 07:14:10 +0000 Subject: [openstack-dev] [devstack][infra] pip vs psutil In-Reply-To: References: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> Message-ID: <822EBA11-EAF4-4423-856F-9B7CE769F74D@vmware.com> Hi, I think that we need https://review.openstack.org/561471 until we have a proper solution. Thanks Gary On 4/16/18, 10:13 AM, "Slawomir Kaplonski" wrote: Hi, I just wanted to ask if there is any ongoing work on https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It looks that e.g. all grenade jobs in neutron are broken currently :/ > Wiadomość napisana przez Gary Kotton w dniu 15.04.2018, o godz. 13:32: > > Hi, > The gate is currently broken with https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any other ideas? 
> Thanks > Gary > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski skaplons at redhat.com __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From skaplons at redhat.com Mon Apr 16 07:39:30 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 16 Apr 2018 09:39:30 +0200 Subject: [openstack-dev] [devstack][infra] pip vs psutil In-Reply-To: <822EBA11-EAF4-4423-856F-9B7CE769F74D@vmware.com> References: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> <822EBA11-EAF4-4423-856F-9B7CE769F74D@vmware.com> Message-ID: <5FBD1851-9A5F-444C-8C1C-F28949B87CA4@redhat.com> Right. Thx Gary :) > Wiadomość napisana przez Gary Kotton w dniu 16.04.2018, o godz. 09:14: > > Hi, > I think that we need https://review.openstack.org/561471 until we have a proper solution. > Thanks > Gary > > On 4/16/18, 10:13 AM, "Slawomir Kaplonski" wrote: > > Hi, > > I just wanted to ask if there is any ongoing work on https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It looks that e.g. all grenade jobs in neutron are broken currently :/ > >> Wiadomość napisana przez Gary Kotton w dniu 15.04.2018, o godz. 13:32: >> >> Hi, >> The gate is currently broken with https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any other ideas? >> Thanks >> Gary >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > — > Best regards > Slawek Kaplonski > skaplons at redhat.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev — Best regards Slawek Kaplonski skaplons at redhat.com From iwienand at redhat.com Mon Apr 16 07:46:54 2018 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 16 Apr 2018 17:46:54 +1000 Subject: [openstack-dev] [devstack][infra] pip vs psutil In-Reply-To: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> References: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> Message-ID: On 04/15/2018 09:32 PM, Gary Kotton wrote: > The gate is currently broken with > https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/ > Can unblock us in the short term. Any other ideas? I'm thinking this is probably along the lines of the best idea. 
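For reference, the ryu failure quoted in the neutron-dynamic-routing thread is a sibling of this pip 10 issue: pip 10 moved its code under pip._internal, so any library doing 'from pip import req' breaks. A hedged illustration of the pattern follows; the fix that thread converges on is simply requiring ryu>=4.24, which no longer imports pip at all.

    # Pattern that broke with pip 10 (illustration only; relying on either
    # import is unsupported, since pip's internals are private API):
    try:
        from pip import req as pip_req             # pip < 10
    except ImportError:
        from pip._internal import req as pip_req   # pip >= 10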
I left a fairly long comment on this in [1], but the root issue here is
that if a system package was created using distutils (rather than
setuptools) we end up with this problem under pip10.

That means the problem occurs when we a) try to overwrite a system
package and b) that package was created using distutils (a rough
detection sketch follows at the end of this message). This means it is
a small(er) subset of packages that cause this problem. Ergo, our best
option might be to see if we can avoid such packages on a one-by-one
basis, like here.

In some cases, we could just delete the .egg-info file, which is
approximately what was happening before anyway.

In this particular case, the psutil package is used by glance & the
peakmem tracker. Under USE_PYTHON3, devstack's pip_install_gr only
installs the python3 library; however the peakmem tracker always uses
python2 -- leading to the missing-library failures in [2]. I have two
thoughts; either install for both python2 & 3 always [3], or make the
peakmem tracker obey USE_PYTHON3 [4]. We can discuss the approach in
the reviews.

The other option is to move everything to virtualenvs, so we never
conflict with a system package, as suggested by clarkb [5] or
pabelanger [6]. These are more invasive changes, but also arguably
more correct.

Note diskimage-builder, and hence our image generation for some
platforms, is also broken. Working on that in [7].

-i

[1] https://github.com/pypa/pip/issues/4805#issuecomment-340987536
[2] https://review.openstack.org/561427
[3] https://review.openstack.org/561524
[4] https://review.openstack.org/561525
[5] https://review.openstack.org/558930
[6] https://review.openstack.org/#/c/552939
[7] https://review.openstack.org/#/c/561479/

From zhaochao1984 at gmail.com  Mon Apr 16 08:04:00 2018
From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=)
Date: Mon, 16 Apr 2018 16:04:00 +0800
Subject: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date
Message-ID: 

Hi, core stable team,

There are some patches for the stable branches of the different trove repos, and they are always progressing slowly, because none of the current trove team core members are in trove-stable-maint. I tried to contact the previous PTLs about expanding the 'trove-stable-maint' group and keeping the group up-to-date, but have got no response yet.

I noticed that 'stable-maint-core' is always included in the individual project -stable-maint groups. Could the core stable team help to update the 'trove-stable-maint' group (adding me to it could be sufficient for now)?

Thanks!

-- 
To be free as in freedom.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lhinds at redhat.com  Mon Apr 16 08:45:24 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Mon, 16 Apr 2018 09:45:24 +0100
Subject: [openstack-dev] [bandit] Migration to PyCQA
Message-ID: 

Hi All,

As most of you are aware, a decision was made to migrate the maintenance of Bandit to PyCQA [0].

In order to kick things off, I have started a pad [1] to make sure we capture all the steps needed for a seamless migration. Please do look at this and provide input / feedback.

@Jeremy, including you for the zuul side of things.

We will start covering this as a topic each Thursday in the security-sig meeting, until we are confident enough to 'hit the button' and move the project over.

[0] https://github.com/PyCQA
[1] https://etherpad.openstack.org/p/bandit-migration

Please be mindful of including Ian on replies, as he may not be subscribed to the -dev list.
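A hedged sketch of the distutils distinction Ian describes above: a distutils-created system package ships only a bare .egg-info file with no record of installed files, so pip 10 refuses to overwrite it because it cannot uninstall it safely. A rough heuristic for spotting such packages (illustrative only, not pip's or devstack's actual logic):

    import pkg_resources

    def looks_distutils_installed(name):
        # pip-installed packages record their files in RECORD (wheels) or
        # installed-files.txt (setuptools); plain distutils installs have
        # neither, which is the pip 10 failure case discussed above.
        dist = pkg_resources.get_distribution(name)
        return not (dist.has_metadata("RECORD")
                    or dist.has_metadata("installed-files.txt"))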
-- Luke Hinds -------------- next part -------------- An HTML attachment was scrubbed... URL: From e0ne at e0ne.info Mon Apr 16 09:01:51 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Mon, 16 Apr 2018 12:01:51 +0300 Subject: [openstack-dev] [horizon] Meeting time and location are changed Message-ID: Hi team, Please be informed that Horizon meeting time has been changed [1]. We'll have our weekly meetings at 15.00 UTC starting this week at 'openstack-meeting-alt' channel. We had to change meeting channel too due to the conflict with others. [1] https://review.openstack.org/#/c/560979/ Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekuvaja at redhat.com Mon Apr 16 09:10:30 2018 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Mon, 16 Apr 2018 10:10:30 +0100 Subject: [openstack-dev] [glance] Priorities for WC 16th of April Message-ID: Hi all, Now when client release is out and milestone 1 is approaching quickly I'd like to draw your attention to few things in Glance development. 1) reviews of outstanding specs 2) work towards removal of the Images API v1 3) Pending delete rollback ability Lets get the reviews to specs so we have clear picture by the summit next month what we are doing, work towards removing the Images API v1 endpoints as early as possible so we avoid nasty surprises at the end of the cycle and get the first new features merged in. Thanks all for your continuous support driving towards another great Glance release! - Erno -jokke- Kuvaja From zhang.lei.fly at gmail.com Mon Apr 16 09:52:50 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 16 Apr 2018 17:52:50 +0800 Subject: [openstack-dev] [kolla][stable][tc] Kolla deployment guide link is missing on docs.o.o Message-ID: Seems kolla deployment guide doc link is missing here[0]. But it exists on pike[1] and ocata[2] How could we fix this? [0] https://docs.openstack.org/queens/deploy/ [1] https://docs.openstack.org/pike/deploy/ ​[2] https://docs.openstack.org/ocata/deploy/​ -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From moshele at mellanox.com Mon Apr 16 09:56:58 2018 From: moshele at mellanox.com (Moshe Levi) Date: Mon, 16 Apr 2018 09:56:58 +0000 Subject: [openstack-dev] Removing networking-mlnx from Debian? References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> Message-ID: What are the python3 issues that you see? tox -epy34 and all the unit test are passing? 
(tested on CentOS 7.4)

[1]
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_port_processing_network 2.642
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_network_filter_phynset 2.572
networking_mlnx.tests.unit.ml2.drivers.mlnx.test_mech_mlnx.MlnxMechanismIbPortTestCase.test_precommit_ib_config_dont_update 2.310
networking_mlnx.tests.unit.ml2.drivers.mlnx.test_mech_mlnx.MlnxMechanismIbPortTestCase.test_precommit_ib_port_deleted_port 2.291
networking_mlnx.tests.unit.ml2.drivers.mlnx.test_mech_mlnx.MlnxMechanismIbPortTestCase.test_precommit_ib_port_non_migration 2.252
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_port_delete_pending_port_update 2.250
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_port_filter_phynset 2.237
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_driver 2.201
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_network_update_pending_network_create 2.135
networking_mlnx.tests.unit.ml2.drivers.sdn.test_mechanism_sdn.SdnDriverTestCase.test_network 1.981

__________ summary __________
py34: commands succeeded
congratulations :)

> -----Original Message-----
> From: Moshe Levi
> Sent: Friday, April 13, 2018 5:14 PM
> To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> Subject: RE: [openstack-dev] Removing networking-mlnx from Debian?
>
> Hi Thomas,
>
> Networking-mlnx is still maintained.
> We will fix all the issues next week and I will create a tag for it.
>
> > -----Original Message-----
> > From: Thomas Goirand [mailto:zigo at debian.org]
> > Sent: Friday, April 13, 2018 2:50 PM
> > To: OpenStack Development Mailing List <openstack-dev at lists.openstack.org>
> > Subject: [openstack-dev] Removing networking-mlnx from Debian?
> >
> > Hi,
> >
> > Is networking-mlnx actively maintained? It doesn't look like it to me;
> > there's still no Queens release. It also fails to build in Debian,
> > with apparently no Python 3 support.
> >
> > Without any reply from an active maintainer, I'll ask for this package
> > to be removed from Debian.
> >
> > Please let me know,
> > Cheers,
> >
> > Thomas Goirand (zigo)
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From aj at suse.com  Mon Apr 16 10:02:35 2018
From: aj at suse.com (Andreas Jaeger)
Date: Mon, 16 Apr 2018 12:02:35 +0200
Subject: [openstack-dev] [kolla][stable][tc] Kolla deployment guide link is missing on docs.o.o
In-Reply-To: 
References: 
Message-ID: <4e5a092f-1b64-1fb2-1a2d-1394ec35fc14@suse.com>

On 2018-04-16 11:52, Jeffrey Zhang wrote:
> Seems kolla deployment guide doc link is missing here[0]. But it exists
> on pike[1] and ocata[2]
>
> How could we fix this?
See https://docs.openstack.org/doc-contrib-guide/doc-index.html and sent a patch for openstack-manuals repository, Andreas > [0] https://docs.openstack.org/queens/deploy/ > [1] https://docs.openstack.org/pike/deploy/ > ​[2] https://docs.openstack.org/ocata/deploy/​ > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg) GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 From zhang.lei.fly at gmail.com Mon Apr 16 10:39:49 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 16 Apr 2018 18:39:49 +0800 Subject: [openstack-dev] [kolla][stable][tc] Kolla deployment guide link is missing on docs.o.o In-Reply-To: <4e5a092f-1b64-1fb2-1a2d-1394ec35fc14@suse.com> References: <4e5a092f-1b64-1fb2-1a2d-1394ec35fc14@suse.com> Message-ID: Thanks Andreas, patch is pushed, please check https://review.openstack.org/#/c/561578/ On Mon, Apr 16, 2018 at 6:02 PM, Andreas Jaeger wrote: > On 2018-04-16 11:52, Jeffrey Zhang wrote: > > Seems kolla deployment guide doc link is missing here[0]. But it exists > > on pike[1] and ocata[2] > > > > How could we fix this? > > See https://docs.openstack.org/doc-contrib-guide/doc-index.html and sent > a patch for openstack-manuals repository, > > Andreas > > > [0] https://docs.openstack.org/queens/deploy/ > > [1] https://docs.openstack.org/pike/deploy/ > > ​[2] https://docs.openstack.org/ocata/deploy/​ > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi > SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany > GF: Felix Imendörffer, Jane Smithard, Graham Norton, > HRB 21284 (AG Nürnberg) > GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From skaplons at redhat.com  Mon Apr 16 10:41:53 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Mon, 16 Apr 2018 12:41:53 +0200
Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
In-Reply-To: 
References: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com> <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com> <20180415233032.9ABE6B32D1@mail.valinux.co.jp>
Message-ID: <97B996A6-F453-495C-BA95-1FDDAD31A8EF@redhat.com>

I just sent a patch to bump the Ryu version in Neutron's requirements, to fix the lower-constraints job there as well: https://review.openstack.org/#/c/561579/

> Message written by Gary Kotton on 16.04.2018, at 09:13:
>
> Please see https://review.openstack.org/561443
>
> On 4/16/18, 2:31 AM, "IWAMOTO Toshihiro" wrote:
>
>    On Sun, 15 Apr 2018 21:02:42 +0900, Gary Kotton wrote:
>>
>> Hi,
>> That sounds reasonable. I wonder if the RYU folk can chime in here.
>> Thanks
>
>    I don't fully understand the recent g-r change yet, but I guess
>    neutron-dynamic-routing should also have ryu>=4.24.
>    I'll check this tomorrow.
>
>> From: Akihiro MOTOKI
>> Reply-To: OpenStack List
>> Date: Sunday, April 15, 2018 at 12:43 PM
>> To: OpenStack List
>> Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
>>
>> Gary,
>>
>> I think this is caused by the recent pip change: code can no longer import from pip's internals. The right solution seems to be bumping the minimum version of ryu.
>>
>> Thoughts?
>>
>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html
>>
>> Akihiro
>>
>> 2018/04/15 6:06 PM "Gary Kotton":
>> Hi,
>> It seems like the RYU import is breaking the project:
>>
>> 2018-04-15 08:41:34.654681 | ubuntu-xenial | b'--- import errors ---\nFailed to import test module: neutron_dynamic_routing.tests.unit.services.bgp.driver.ryu.test_driver\n[same traceback as quoted earlier in this thread, snipped; it ends:]\n  File ".tox/lower-constraints/lib/python3.5/site-packages/ryu/utils.py", line 23, in \n    from pip import req as pip_req\nImportError: cannot import name \'req\'\n'
>>
>> Any suggestions?
>> Thanks
>> Gary
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skaplons at redhat.com

From gkotton at vmware.com  Mon Apr 16 10:49:17 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 16 Apr 2018 10:49:17 +0000
Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx] Stable/queens build sphinx docs broken
Message-ID: <313D8448-4CE2-45FC-A426-01EE4A5BB167@vmware.com>

Hi,
We have seen on a number of stable projects that the sphinx docs job is broken. The gate job returns 'retry limit'. An example of the error is http://logs.openstack.org/22/561522/1/check/build-openstack-sphinx-docs/cd99af8/job-output.txt.gz
Does anyone have any idea how to address this?
Thanks
Gary
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gkotton at vmware.com  Mon Apr 16 11:27:49 2018
From: gkotton at vmware.com (Gary Kotton)
Date: Mon, 16 Apr 2018 11:27:49 +0000
Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx][ovn] Stable/queens build sphinx docs broken
Message-ID: <35369789-CB08-4C19-A172-8B4E3A84FB0D@vmware.com>

Hi,
OVN too. Things were working on the 12th of April and something has changed since then.
Thanks Gary From: Gary Kotton Reply-To: OpenStack List Date: Monday, April 16, 2018 at 1:49 PM To: OpenStack List Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx] Stable/queens build sphinx docs broken Hi, We have seen that a number of stable projects that the sphinx docs is broken. The gate job returns ‘retry limit’. An example of the error is http://logs.openstack.org/22/561522/1/check/build-openstack-sphinx-docs/cd99af8/job-output.txt.gz Does anyone have any idea how to address this? Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Mon Apr 16 11:31:52 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 16 Apr 2018 11:31:52 +0000 Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx][ovn] Stable/queens build sphinx docs broken In-Reply-To: <35369789-CB08-4C19-A172-8B4E3A84FB0D@vmware.com> References: <35369789-CB08-4C19-A172-8B4E3A84FB0D@vmware.com> Message-ID: <6316ECFE-804A-4C76-A779-72ADD9BA6A22@vmware.com> Hi, Here is an example - https://review.openstack.org/#/c/560893/ Thanks Gary From: Gary Kotton Reply-To: OpenStack List Date: Monday, April 16, 2018 at 2:28 PM To: OpenStack List Subject: Re: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx][ovn] Stable/queens build sphinx docs broken Hi, OVN too. Things were working on the 12th of April and something has changed since then. Thanks Gary From: Gary Kotton Reply-To: OpenStack List Date: Monday, April 16, 2018 at 1:49 PM To: OpenStack List Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx] Stable/queens build sphinx docs broken Hi, We have seen that a number of stable projects that the sphinx docs is broken. The gate job returns ‘retry limit’. An example of the error is http://logs.openstack.org/22/561522/1/check/build-openstack-sphinx-docs/cd99af8/job-output.txt.gz Does anyone have any idea how to address this? Thanks Gary -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Mon Apr 16 12:36:51 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 16 Apr 2018 14:36:51 +0200 Subject: [openstack-dev] [nova][xenapi] does get_all_bw_counters driver call nova-network only? Message-ID: <1523882211.27744.1@smtp.office365.com> Hi, The get_all_bw_counters() virt driver [1] is only supported by xenapi today. However Matt raised the question [2] if this is a nova-network only feature. As in that case we can simply remove it. Cheers, gibi [1] https://github.com/openstack/nova/blob/68afe71e26e60a3e4ad30083cc244c57540d4da9/nova/virt/xenapi/driver.py#L383 [2] https://review.openstack.org/#/c/403660/78/nova/compute/manager.py at 6855 From balazs.gibizer at ericsson.com Mon Apr 16 13:05:19 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 16 Apr 2018 15:05:19 +0200 Subject: [openstack-dev] [nova] Notification update week 16 Message-ID: <1523883919.27744.2@smtp.office365.com> Hi, After the long silence here is the current notification status info. 
Bugs
----

New bugs
~~~~~~~~

[Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending sometimes hits the keystone API to get glance endpoints
As the versioned notifications do not use the glance endpoint info, we can avoid hitting the keystone API if notification_format is set to 'versioned'.

[Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit when notifications are sent during live migration
We need to go through the live migration codepath and make sure that the different live migration notifications are sent at the proper times.

[Low] https://bugs.launchpad.net/nova/+bug/1761405 impossible to disable notifications
The way to turn off emitting notifications from nova is to set the oslo_messaging_notifications.driver config option to 'noop'. We need to document this better in the notification devref and in the notification_format config option.

There are two follow-up bugs opened based on Matt's review comments in https://review.openstack.org/#/c/403660:

[Low] https://bugs.launchpad.net/nova/+bug/1764390 Replace passing system_metadata to notification functions with instance.system_metadata usage

[Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth usage db query in notifications when the virt driver does not support collecting such data

Old bugs
~~~~~~~~

[High] https://bugs.launchpad.net/nova/+bug/1737201 TypeError when sending notification during attach_interface
Fix merged to most of the stable branches. The backport for ocata is still open but has +2 from Tony. https://review.openstack.org/#/c/531746/

[High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields
No progress. We still need to understand how this problem happens in order to find the proper solution.

[Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS
Old abandoned patches exist but need somebody to pick them up:
* https://review.openstack.org/#/c/215308/
* https://review.openstack.org/#/c/388345/

Versioned notification transformation
-------------------------------------
https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open
There are some patches that only need a second +2:
* https://review.openstack.org/#/c/460625 Transform aggregate.update_metadata notification
* https://review.openstack.org/#/c/403660 Transform instance.exists notification

Introduce instance.lock and instance.unlock notifications
---------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances
Implementation proposed but needs some work: https://review.openstack.org/#/c/526251/

Add the user id and project id of the user who initiated the instance action to the notification
-----------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications
Implementation patch exists but still needs work https://review.openstack.org/#/c/536243/

Add request_id to the InstanceAction versioned notifications
------------------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications
Implementation needs a rebase and review https://review.openstack.org/#/c/553288/

Sending full traceback in versioned notifications
-------------------------------------------------
https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications
I have to propose the implementation.

Add versioned notifications for removing a member from a server group
---------------------------------------------------------------------
The specless bp https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications is pending approval, as we would like to see the POC code first. Takashi has proposed the POC code https://review.openstack.org/#/c/559076/ so we have to look at it.

Factor out duplicated notification samples
------------------------------------------
https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open
Kevin has proposed a lot of patches. \o/ Now I have to go and review them.

Weekly meeting
--------------
The next meeting will be held on the 17th of April on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180417T170000

Cheers,
gibi

From maciej.szwed at intel.com  Mon Apr 16 13:51:45 2018
From: maciej.szwed at intel.com (Szwed, Maciej)
Date: Mon, 16 Apr 2018 13:51:45 +0000
Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string
Message-ID: <122B872DCF83AB4DB816E25A2C1AD08D8B9242BF@IRSMSX102.ger.corp.intel.com>

Hi,

I'm wondering why the os-brick implementation of NVMe-oF, in os_brick/initiator/connectors/nvme.py, line 97, does a split on 'nqn'. The connection properties, including 'nqn', are provided by the Cinder driver, and when someone wants to implement a new driver that uses NVMe-oF, he/she needs to build the NQN string with an additional string and a dot preceding the desired NQN string. This additional prefix is unused across the whole NVMe-oF implementation, which creates confusion for people writing new Cinder drivers. What was its purpose? Can we drop that split? An illustration of the behaviour follows below.
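A hedged reconstruction of the behaviour Maciej describes (illustrative, not a verbatim copy of the os-brick code): only what follows the first dot of the 'nqn' connection property survives, so drivers end up prepending a throwaway prefix.

    # Hypothetical connection properties as a Cinder driver would build them.
    connection_properties = {
        "nqn": "prefix.nqn.2014-08.org.nvmexpress:uuid:1234",
    }

    # Keep only the part after the first '.'; the "prefix" component is
    # never used again, hence the question above.
    target_nqn = connection_properties["nqn"].split(".", 1)[1]
    # target_nqn == "nqn.2014-08.org.nvmexpress:uuid:1234"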
Regards, Maciej -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Mon Apr 16 14:13:45 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 16 Apr 2018 15:13:45 +0100 Subject: [openstack-dev] [election][tc] TC Candidacy for Graham Hayes Message-ID: <6f7bc009-5843-a2a5-4283-bb88427099af@ham.ie> Hi, I am submitting my candidacy for the OpenStack Technical Committee. I have been contributing to OpenStack since the Havana cycle [1][2], mainly in Designate. I have also been involved with the TC, and its meetings since Designate applied for incubation all the way back in Atlanta (the first time we were there). Over the last 6 months I have become more involved in the TC, and have been an active contributor to TC discussions (both on IRC and in person) and governance [3]. I have been PTL for Designate for Mitaka, Newton, Ocata, Queens and Rocky cycles, and a core for a longer period. I believe my experience working in a younger, smaller project within OpenStack is a benefit. Along with the experience of working on software as an end user of OpenStack I can help us ensure the Technical Committee is mindful of the unique challenges these projects and users can face. With the broadening of the scope of the OpenStack Foundation, I believe that it is an important part of the TC's role to have robust, and frank discussions with the Board of Directors and I believe that I have done a reasonable job of summarizing [4][5] what happens at the Board of Directors meetings to the community over the last 6 months. The need for the candid discussions is not restricted to the Foundation and the board - the new strategic focus areas that the foundation is expanding into need our technical leadership to engage with them and ensure that we are all working towards the overall goal of the foundation and promoting open infrastructure. We need to make collaborating, and sharing resources and expertise where it makes sense a priority. What it does not mean is changing what OpenStack is nor changing OpenStack to cater for a single use case. This is a situation where better education of how OpenStack and it components can be used and orchestrated is needed, and a lot of this work should be directed by the TC. I don't think the TC will always (or even most of the time) be the correct people to engage, but I think we should lead the way by finding the correct people with the knowledge and experience, and helping support them and provide them with a platform to provide guidance to these groups. When it comes to pushing forward the TC vision[6] I think the community has made great steps forward to realizing it, on all but one section. We engage with groups like the CNCF, and specifically Kubernetes and help drive OpenStack adoption with first class support for the OpenStack Cloud Provider, the Cinder Container Storage Interface and other projects. We still need to find a way to show the world what a top tier private open source infrastructure of components like OpenStack, Kubernetes, Cloud Foundry or OpenShift looks like, and helping companies understand why this is the way forward for their infrastructure. Unfortunately, helping users, deployers and C(T|I)Os understand this would be easier with well written and and clearly documented "constellations" - I have always found talking in the abstract is a lot more difficult than discussing something tangible. 
For the last 5 years I have worked on product teams building products based on OpenStack, Kubernetes and Cloud Foundry, and I think this experience will be a great asset in developing our first generation of constellations, which is something I think we need to focus on for the next term of the TC. I think that having constellations will also help us solve the perennial question of what OpenStack is. By having sets of projects, we can show that OpenStack is extremely flexible - and that there are projects for different use cases. Far too much time is spent circling back to the "What is OpenStack" question - which I foresee getting even more complex as the OpenStack Foundation grows beyond the OpenStack Project, and having a solid, stable answer to what we are is going to be vital. I would like to thank you for taking the time to read my thoughts - and ask you to consider me for your vote. If elected I will strive to be vocal for the community that I have gotten so much from. I want to give some more back to them and ensure that the OpenStack Project continues to be the go-to software for Infrastructure as a Service. Thanks again, - Graham 1 - http://stackalytics.com/?release=all&metric=commits&user_id=grahamhayes 2 - https://www.openstack.org/community/members/profile/12766/graham-hayes 3 - https://review.openstack.org/#/q/project:openstack/governance+(commentby:%22Graham+Hayes+%253Cgr%2540ham.ie%253E%22+OR+reviewedby:%22Graham+Hayes+%253Cgr%2540ham.ie%253E%22++OR+owner:%22Graham+Hayes+%253Cgr%2540ham.ie%253E%22) 4 - http://graham.hayes.ie/posts/dublin-ptg-summary/#board-of-directors-meeting 5 - http://graham.hayes.ie/posts/sydney-openstack-summit/#sunday-board-joint-leadership-meeting 6 - https://governance.openstack.org/tc/resolutions/20170404-vision-2019.html -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From j.harbott at x-ion.de Mon Apr 16 14:33:44 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Mon, 16 Apr 2018 14:33:44 +0000 Subject: [openstack-dev] [devstack][infra] pip vs psutil In-Reply-To: References: <1BA67F39-62A8-4203-A40E-23B885E1F284@vmware.com> Message-ID: 2018-04-16 7:46 GMT+00:00 Ian Wienand : > On 04/15/2018 09:32 PM, Gary Kotton wrote: >> >> The gate is currently broken with >> https://launchpad.net/bugs/1763966. >> https://review.openstack.org/#/c/561427/ >> Can unblock us in the short term. Any other ideas? > > > I'm thinking this is probably along the lines of the best idea. I > left a fairly long comment on this in [1], but the root issue here is > that if a system package is created using distutils (rather than > setuptools) we end up with this problem with pip 10. > > That means the problem occurs when we a) try to overwrite a system > package and b) that package has been created using distutils. This > means it is a small(er) subset of packages that cause this problem. > Ergo, our best option might be to see if we can avoid such packages on > a one-by-one basis, like here. > > In some cases, we could just delete the .egg-info file, which is > approximately what was happening before anyway. > > In this particular case, the psutil package is used by glance & the > peakmem tracker. Under USE_PYTHON3, devstack's pip_install_gr only > installs the python3 library; however the peakmem tracker always uses > python2 -- leading to the missing-library failures in [2].
I have two > thoughts; either install for both python2 & 3 always [3] or make > peakmem tracker obey USE_PYTHON3 [4]. We can discuss the approach in > the reviews. > > The other option is to move everything to virtualenvs, so we never > conflict with a system package, as suggested by clarkb [5] or > pabelanger [6]. These are more invasive changes, but also arguably > more correct. > > Note diskimage-builder, and hence our image generation for some > platforms, is also broken. Working on that in [7]. The cap in devstack has been merged in master and stable/queens; other merges are being held up by unstable volume checks, or so it seems. There is also another issue caused by pip 10 treating some former warnings as errors now. I've tried to list all "global" (Infra+QA) related issues in [8], feel free to amend as needed. [8] https://etherpad.openstack.org/p/pip10-mitigation From fungi at yuggoth.org Mon Apr 16 14:34:28 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 16 Apr 2018 14:34:28 +0000 Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx] Stable/queens build sphinx docs broken In-Reply-To: <313D8448-4CE2-45FC-A426-01EE4A5BB167@vmware.com> References: <313D8448-4CE2-45FC-A426-01EE4A5BB167@vmware.com> Message-ID: <20180416143428.zq473nmwhvvucsb6@yuggoth.org> On 2018-04-16 10:49:17 +0000 (+0000), Gary Kotton wrote: > We have seen that for a number of stable projects the sphinx docs > build is broken. The gate job returns ‘retry limit’. An example of the > error is > http://logs.openstack.org/22/561522/1/check/build-openstack-sphinx-docs/cd99af8/job-output.txt.gz > Does anyone have any idea how to address this? Potential fixes seem to be adjusting the tools/tox_install.sh in each of these projects to stop erroring when passed only a single argument, or switching to relying on tox-siblings in those jobs so that the neutron-horizon-hack role can be dropped from them entirely. There is some discussion in https://review.openstack.org/561593 but a centralized temporary workaround is somewhat risky, since the people in charge of reviewing any eventual revert will have a hard time knowing when it's finally safe to do so. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Mon Apr 16 17:17:34 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 16 Apr 2018 12:17:34 -0500 Subject: [openstack-dev] [sdk][osc][openstackclient] Migration to storyboard complete In-Reply-To: <20180414192942.GA15758@sm-xps> References: <876abe58-d86a-8717-6bb5-7c7b5f7957f9@inaugust.com> <20180414192942.GA15758@sm-xps> Message-ID: <44e55688-a0ba-9093-b84e-91b0a27aa6c8@nemebean.com> On 04/14/2018 02:29 PM, Sean McGinnis wrote: > On Sat, Apr 14, 2018 at 11:37:46AM -0500, Monty Taylor wrote: >> Hey everybody, >> >> The migration of the openstacksdk and python-openstackclient repositories to >> storyboard has been completed. Each of the repos owned by those teams has >> been migrated, and project groups now also exist for each. >> > > I just noticed on python-openstackclient, in the repo's README file it still > points people to launchpad for bug and blueprint tracking. > > Just one more transition housekeeping item folks need to keep in mind when > making this switch. Has anybody been making a checklist as projects go through this process, and if not can we start on one?
-Ben From kennelson11 at gmail.com Mon Apr 16 17:30:14 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 16 Apr 2018 17:30:14 +0000 Subject: [openstack-dev] [sdk][osc][openstackclient] Migration to storyboard complete In-Reply-To: <44e55688-a0ba-9093-b84e-91b0a27aa6c8@nemebean.com> References: <876abe58-d86a-8717-6bb5-7c7b5f7957f9@inaugust.com> <20180414192942.GA15758@sm-xps> <44e55688-a0ba-9093-b84e-91b0a27aa6c8@nemebean.com> Message-ID: Hello Ben :) As I've been the one poking at PTLs and doing most of the test migrations, I've been tracking the work here[1] on this StoryBoard board. There's a story for it all too, but it's much easier to see progress and status on the board. If you or anyone has questions please ping me directly or in #storyboard! -Kendall Nelson (diablo_rojo) [1] https://storyboard.openstack.org/#!/board/45 On Mon, Apr 16, 2018 at 10:18 AM Ben Nemec wrote: > > > On 04/14/2018 02:29 PM, Sean McGinnis wrote: > > On Sat, Apr 14, 2018 at 11:37:46AM -0500, Monty Taylor wrote: > >> Hey everybody, > >> > >> The migration of the openstacksdk and python-openstackclient > repositories to > >> storyboard has been completed. Each of the repos owned by those teams > has > >> been migrated, and project groups now also exist for each. > >> > > > > I just noticed on python-openstackclient, in the repo's README file it > still > > points people to launchpad for bug and blueprint tracking. > > > > Just one more transition housekeeping item folks need to keep in mind > when > > making this switch. > > Has anybody been making a checklist as projects go through this process, > and if not can we start on one? > > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Mon Apr 16 17:36:44 2018 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Mon, 16 Apr 2018 20:36:44 +0300 Subject: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights In-Reply-To: <20180413195334.GA10074@sm-xps> References: <20180413194659.GA9657@sm-xps> <20180413195334.GA10074@sm-xps> Message-ID: Hi Sean! Thanks for raising this question. As for the Rally team, we are using the self-tagging approach for several reasons: - Release notes Check the difference between https://github.com/openstack/nova/releases/tag/17.0.2 and https://github.com/openstack/rally-openstack/releases/tag/1.0.0. The first one includes just autogenerated metadata. The second one includes user-friendly notes (they are not ideal, but we are working on making them better). I have not found a way to add custom release notes via the openstack/releases project. - Time Self-tagging the repo allows me to schedule/reschedule the release in whatever timeframe I decide, without pinging anyone and waiting for folks to return from a summit/PTG. I do not want to offend anyone, but we all know that such events take much time for preparation, attendance and resting afterwards. Since there are no official OpenStack projects built on top of Rally, launching any "integration" jobs while making a Rally release is a waste of time and money (resources). Also, such jobs can block making a release.
I remember it can sometimes take weeks to pass all the gates, with tons of rechecks https://github.com/openstack/releases#release-approval == "Freezes and no late releases". It is open source, and I want to be able to make releases on weekends if there is any reason for doing so (a critical fix, the last blocking feature being merged, or whatever). 2018-04-13 22:53 GMT+03:00 Sean McGinnis : > On Fri, Apr 13, 2018 at 02:46:59PM -0500, Sean McGinnis wrote: > > Hello teams, > > > > I am following up on some recently announced changes regarding governed > > projects and tagging rights. See [1] for background. > > > > [1] https://review.openstack.org/#/c/557737/ > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gkotton at vmware.com Mon Apr 16 17:36:56 2018 From: gkotton at vmware.com (Gary Kotton) Date: Mon, 16 Apr 2018 17:36:56 +0000 Subject: [openstack-dev] [neutron][vpnaas][fwaas][vmware-nsx] Stable/queens build sphinx docs broken In-Reply-To: <20180416143428.zq473nmwhvvucsb6@yuggoth.org> References: <313D8448-4CE2-45FC-A426-01EE4A5BB167@vmware.com> <20180416143428.zq473nmwhvvucsb6@yuggoth.org> Message-ID: <1188C47E-92B2-453B-8D29-7EF166A5DE46@vmware.com> Maybe we should consider unblocking the stable branches for now, which would then let us address this in each project. On 4/16/18, 5:34 PM, "Jeremy Stanley" wrote: On 2018-04-16 10:49:17 +0000 (+0000), Gary Kotton wrote: > We have seen that for a number of stable projects the sphinx docs > build is broken. The gate job returns ‘retry limit’. An example of the > error is > http://logs.openstack.org/22/561522/1/check/build-openstack-sphinx-docs/cd99af8/job-output.txt.gz > Does anyone have any idea how to address this? Potential fixes seem to be adjusting the tools/tox_install.sh in each of these projects to stop erroring when passed only a single argument, or switching to relying on tox-siblings in those jobs so that the neutron-horizon-hack role can be dropped from them entirely. There is some discussion in https://review.openstack.org/561593 but a centralized temporary workaround is somewhat risky, since the people in charge of reviewing any eventual revert will have a hard time knowing when it's finally safe to do so. -- Jeremy Stanley From ramamani.yeleswarapu at intel.com Mon Apr 16 18:14:22 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 16 Apr 2018 18:14:22 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted.
This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Python-ironicclient things - Accept a version on set_provision_state - https://review.openstack.org/#/c/558027/ - Wire in header microversion into client negotiation - https://review.openstack.org/#/c/557850/ - Remaining Rescue patches - https://review.openstack.org/#/c/499050/ - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) Needs to be rebased. - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ - Bios interface support - https://review.openstack.org/#/c/511162/ - https://review.openstack.org/#/c/528609/ - db api - https://review.openstack.org/#/c/511402/ - Bug fixes: - https://review.openstack.org/#/c/556748 - House Keeping: - https://review.openstack.org/#/c/557441/ Vendor priorities ----------------- cisco-ucs: Patches in the works for the SDK update, but not posted yet; currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: None irmc: None - a few items are work in progress oneview: None at this time - No subteam at present. xclarity: None at this time - No subteam at present. Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats. - Stats (diff between 12 Mar 2018 and 19 Mar 2018) - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2) - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete - Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high - critical: - sushy: https://bugs.launchpad.net/sushy/+bug/1754514 (basic auth broken when SessionService is not present) - Queens backport release: https://review.openstack.org/#/c/558799/ MERGED. - the dashboard was abruptly deleted and needs a new home :( - use it locally with `tox -erun` if you need to - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ - (TheJulia) Currently WF-1, as revision is required for deprecation.
Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - status as of 16 April 2018: - spec for deployment steps framework has merged: https://review.openstack.org/#/c/549493/ - waiting for code from rloo, no timeframe yet BIOS config framework (zshi, yolanda, mgoddard, hshiina) ------------------------------------------------------- - status as of 16 April 2018: - Spec has merged: https://review.openstack.org/#/c/496481/ - List of ordered patches: - BIOS Settings: Add DB model: https://review.openstack.org/511162 agreed that the column type of the bios setting value is string; blocked by the gate failure - Add bios_interface db field https://review.openstack.org/528609 many +2s, can be merged soon after the patch above is merged - BIOS Settings: Add DB API: https://review.openstack.org/511402 1x +1, actively reviewed and updated - BIOS Settings: Add RPC object https://review.openstack.org/511714 - Add BIOSInterface to base driver class https://review.openstack.org/507793 - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - (April 16) spec has good feedback, one issue to resolve, should be able to land this week - https://review.openstack.org/#/c/559420/ Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001745 - status as of 16 April 2018: - Dublin PTG consensus was to start with small architectural building blocks. - list of cases from the Denver PTG - see in the story - nothing new this week Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - status as of 16 Apr 2018: - No update - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ - needs update, address comments - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console Neutron event processing (vdrok) -------------------------------- - status as of 16 April 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards Goals ===== Updating nova virt to use REST API (TheJulia) --------------------------------------------- Status as of 16 APR 2018: (TheJulia) We need python-ironicclient reviews, which would supersede this idea for now. Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of Apr 16th. - Done with moving data.
- dtantsur to rewrite the bug dashboard Management interface refactoring (etingof, dtantsur) ---------------------------------------------------- - Status as of 9 Apr: - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ 2x-1 Getting clean steps (rloo, TheJulia) ------------------------------------ - Status as of April 2nd 2018 - No update - Status as of March 26th: - Cleanhold specification updated - https://review.openstack.org/#/c/507910/ Project vision (jroll, TheJulia) -------------------------------- - Status as of April 16: - jroll still trying to find time to collect enough thoughts for an email SIGHUP support (rloo) --------------------- - Status as of April 16 - ironic Done - ironic-inspector: https://review.openstack.org/560243 Need Reviews - doesn't use oslo.service because not sure if flask can be used with it - networking-baremetal: https://review.openstack.org/561257 Need Reviews Stretch Goals ============= NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in place Classic driver removal formerly Classic drivers deprecation (dtantsur) ---------------------------------------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 26 Mar 2018: - switch documentation to hardware types: - api-ref examples: TODO - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO - or should we kill it with fire in favour of the docs? - ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ MERGED - backport: https://review.openstack.org/#/c/554586/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: https://review.openstack.org/553431 MERGED - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED - python-ironicclient: https://review.openstack.org/552543 MERGED - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html Redfish OOB inspection (etingof, deray, stendulker) Zuul v3 playbook refactoring (sambetts, pas-ha) Before Rocky ============ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarballs.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Queens High Priorities ====================== Routed network support (sambetts, vsaienk0, bfournie,
hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with the integration to Placement, stemming from the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment and adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an RFE. - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is a minor enhancement to improve operation. Rescue mode (rloo, stendulker) ------------------------------ - Status as of 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky.
- To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirements (this patch needs to be posted manually) Posted https://review.openstack.org/554673 - code patch: https://review.openstack.org/#/c/416487/ Needs revision - CI is needed for the nova part to land - tiendc is working on the CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal, rloo) ------------------------------------------------------- - etherpad tracking zuul v3 -> intree: https://etherpad.openstack.org/p/ironic-zuulv3-intree-tracking - cleaning up/centralizing job descriptions (eg 'irrelevant-files'): DONE - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ OpenStack Priorities ==================== Mox --- - TheJulia needs to just declare this done. Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this includes all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector the job is enabled by disabling swift, as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug opened upstream https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them - Right now the patch passes all gate jobs except the agent_* drivers. - (TheJulia) It seems we might not have py3 compatibility with swift until the T cycle. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done? Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also, it seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- OneView (???) ~~~~~~~~~~~~~ - Oneview presently does not have a subteam.
Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... Until next week, --rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Apr 16 18:58:10 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Apr 2018 14:58:10 -0400 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> Message-ID: <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> Sorry it took so long to respond. Comments inline. On 03/30/2018 08:34 PM, Eric Fried wrote: > Folks who care about placement (but especially Jay and Tetsuro)- > > I was reviewing [1] and was at first very unsatisfied that we were not > returning the anchor providers in the results. But as I started digging > into what it would take to fix it, I realized it's going to be > nontrivial. I wanted to dump my thoughts before the weekend. > > > It should be legal to have a configuration like: > > # CN1 (VCPU, MEMORY_MB) > # / \ > # /agg1 \agg2 > # / \ > # SS1 SS2 > # (DISK_GB) (IPV4_ADDRESS) > > And make a request for DISK_GB,IPV4_ADDRESS; > And have it return a candidate including SS1 and SS2. > > The CN1 resource provider acts as an "anchor" or "relay": a provider > that doesn't provide any of the requested resource, but connects to one > or more sharing providers that do so. To be honest, such a request just doesn't make much sense to me. Think about what that is requesting. I want some DISK_GB resources and an IP address. For what? What is going to be *using* those resources? Ah... a virtual machine. In other words, something that would *also* be requesting some CPU and memory resources as well. So, the request is just fatally flawed, IMHO. It doesn't represent a use case from the real world. I don't believe we should be changing placement (either the REST API or the implementation of allocation candidate retrieval) for use cases that don't represent real-world requests. Best, -jay From davanum at gmail.com Mon Apr 16 19:18:33 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Mon, 16 Apr 2018 15:18:33 -0400 Subject: [openstack-dev] [Elections][TC] Announcing Davanum Srinivas (dims) candidacy for TC Message-ID: Team, Please consider my candidacy for Rocky. Please see my previous two candidacy statements [1][2], participation in TC deliberations [3] and recent bootstrap of SIG-OpenStack/SIG-Kubernetes cross community collaboration [4] with the Kubernetes/CNCF communities. With the OpenLab initiative, we have been able to get our sister projects like GopherCloud, Terraform, etc. some solid testing, and we are well on our way to co-testing master of Kubernetes and OpenStack as well. While I have been around long enough to be considering turning over leadership roles to other folks, I do feel like I have some unfinished business that I would like to concentrate on for the next year. I spent quite a bit of time with things that use OpenStack and have a new appreciation for the challenges in the field and in practice. Just as an example, the need for a "validator" by our partners in CF [5] illustrates the kind of challenges our users face. I would like to spend some time working on / tackling these kinds of issues.
Other things that I have promised but not yet acted on to my satisfaction are drafting some initial constellation(s), making it easier on part-time contributors, easing the pain of working in the community for folks in other geographies, etc. I hope you consider my candidacy. I will be working on these things irrespective of whether I am elected or not :) Thanks, Dims [1] https://git.openstack.org/cgit/openstack/election/plain/candidates/newton/TC/Davanum_Srinivas.txt [2] https://git.openstack.org/cgit/openstack/election/plain/candidates/pike/TC/dims.txt [3] https://review.openstack.org/#/q/project:openstack/governance+reviewedby:%22Davanum+Srinivas+(dims)+%253Cdavanum%2540gmail.com%253E%22 [4] https://github.com/kubernetes/cloud-provider-openstack [5] https://github.com/cloudfoundry-incubator/cf-openstack-validator -- Davanum Srinivas :: https://twitter.com/dims From fungi at yuggoth.org Mon Apr 16 19:43:52 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 16 Apr 2018 19:43:52 +0000 Subject: [openstack-dev] Removing networking-mlnx from Debian? In-Reply-To: References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> <20180413120804.tag6zt4vntmk7jxe@yuggoth.org> Message-ID: <20180416194352.w35vj42i7hkc2ygo@yuggoth.org> On 2018-04-13 14:17:29 +0000 (+0000), Moshe Levi wrote: [...] > Yes, How can we add python3 job in zuul for testing it? I've proposed https://review.openstack.org/561703 to add the corresponding Python3.5 version of your unit test jobs. Looks like the openstack-tox-py35 job is passing (Ran: 176 tests in 8.8742 sec.), if you feel like approving. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Mon Apr 16 20:16:27 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 16 Apr 2018 15:16:27 -0500 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> Message-ID: I was presenting an example using VM-ish resource classes, because I can write them down and everybody knows what I'm talking about without me having to explain what they are. But remember we want placement to be usable outside of Nova as well. But also, I thought we had situations where the VCPU and MEMORY_MB were themselves provided by sharing providers, associated with a compute host RP that may be itself devoid of inventory. (This may even be a viable way to model VMWare's clustery things today.) -efried On 04/16/2018 01:58 PM, Jay Pipes wrote: > Sorry it took so long to respond. Comments inline. > > On 03/30/2018 08:34 PM, Eric Fried wrote: >> Folks who care about placement (but especially Jay and Tetsuro)- >> >> I was reviewing [1] and was at first very unsatisfied that we were not >> returning the anchor providers in the results. But as I started digging >> into what it would take to fix it, I realized it's going to be >> nontrivial. I wanted to dump my thoughts before the weekend.
>> >> >> It should be legal to have a configuration like: >> >> # CN1 (VCPU, MEMORY_MB) >> # / \ >> # /agg1 \agg2 >> # / \ >> # SS1 SS2 >> # (DISK_GB) (IPV4_ADDRESS) >> >> And make a request for DISK_GB,IPV4_ADDRESS; >> And have it return a candidate including SS1 and SS2. >> >> The CN1 resource provider acts as an "anchor" or "relay": a provider >> that doesn't provide any of the requested resource, but connects to one >> or more sharing providers that do so. > > To be honest, such a request just doesn't make much sense to me. > > Think about what that is requesting. I want some DISK_GB resources and > an IP address. For what? What is going to be *using* those resources? > > Ah... a virtual machine. In other words, something that would *also* be > requesting some CPU and memory resources as well. > > So, the request is just fatally flawed, IMHO. It doesn't represent a use > case from the real world. > > I don't believe we should be changing placement (either the REST API or > the implementation of allocation candidate retrieval) for use cases that > don't represent real-world requests. > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Mon Apr 16 20:33:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Apr 2018 16:33:39 -0400 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> Message-ID: <7d0b4890-062a-49f3-0642-12df717ac0b6@gmail.com> On 04/16/2018 04:16 PM, Eric Fried wrote: > I was presenting an example using VM-ish resource classes, because I can > write them down and everybody knows what I'm talking about without me > having to explain what they are. But remember we want placement to be > usable outside of Nova as well. > > But also, I thought we had situations where the VCPU and MEMORY_MB were > themselves provided by sharing providers, associated with a compute host > RP that may be itself devoid of inventory. (This may even be a viable > way to model VMWare's clustery things today.) I still don't see a use in returning the root providers in the allocation requests -- since there is nothing consuming resources from those providers. And we already return the root_provider_uuid for all providers involved in allocation requests within the provider_summaries section. So, I can kind of see where we might want to change *this* line of the nova scheduler: https://github.com/openstack/nova/blob/stable/pike/nova/scheduler/filter_scheduler.py#L349 from this: compute_uuids = list(provider_summaries.keys()) to this: compute_uuids = set([ ps['root_provider_uuid'] for ps in provider_summaries.values() ]) But other than that, I don't see a reason to change the response from GET /allocation_candidates at this time. Best, -jay > On 04/16/2018 01:58 PM, Jay Pipes wrote: >> Sorry it took so long to respond. Comments inline. >> >> On 03/30/2018 08:34 PM, Eric Fried wrote: >>> Folks who care about placement (but especially Jay and Tetsuro)- >>> >>> I was reviewing [1] and was at first very unsatisfied that we were not >>> returning the anchor providers in the results.
But as I started digging >>> into what it would take to fix it, I realized it's going to be >>> nontrivial. I wanted to dump my thoughts before the weekend. >>> >>> >>> It should be legal to have a configuration like: >>> >>> # CN1 (VCPU, MEMORY_MB) >>> # / \ >>> # /agg1 \agg2 >>> # / \ >>> # SS1 SS2 >>> # (DISK_GB) (IPV4_ADDRESS) >>> >>> And make a request for DISK_GB,IPV4_ADDRESS; >>> And have it return a candidate including SS1 and SS2. >>> >>> The CN1 resource provider acts as an "anchor" or "relay": a provider >>> that doesn't provide any of the requested resource, but connects to one >>> or more sharing providers that do so. >> >> To be honest, such a request just doesn't make much sense to me. >> >> Think about what that is requesting. I want some DISK_GB resources and >> an IP address. For what? What is going to be *using* those resources? >> >> Ah... a virtual machine. In other words, something that would *also* be >> requesting some CPU and memory resources as well. >> >> So, the request is just fatally flawed, IMHO. It doesn't represent a use >> case from the real world. >> >> I don't believe we should be changing placement (either the REST API or >> the implementation of allocation candidate retrieval) for use cases that >> don't represent real-world requests. >> >> Best, >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dougal at redhat.com Mon Apr 16 21:00:27 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 16 Apr 2018 22:00:27 +0100 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: <734a01d3d538$a3b688d0$eb239a70$@dcn.ssu.ac.kr> References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> <734a01d3d538$a3b688d0$eb239a70$@dcn.ssu.ac.kr> Message-ID: On 16 April 2018 at 05:08, 홍선군 wrote: > Thanks for your reply. > > > > I will refer to this Ansible action and develop actions for K8S > somewhere externally. > Great. Do let us know when you start something - I would be interested in giving feedback and testing or possibly helping out too. > > > Regards, > > Xian Jun Hong > > > > > > *From:* Dougal Matthews > *Sent:* Saturday, April 14, 2018 1:00 AM > *To:* OpenStack Development Mailing List (not for usage questions) < > openstack-dev at lists.openstack.org> > *Subject:* Re: [openstack-dev] [Mistral]I think Mistral need K8S action > > > > > > > > On 13 April 2018 at 05:47, Renat Akhmerov > wrote: > > Hi, > > I completely agree with you that having such an action would be useful. > However, I don't think this kind of action should be provided by Mistral > out of the box. Actions and triggers are integration pieces for Mistral and > are natively external to the Mistral code base. In other words, this action can > be implemented anywhere and plugged into a concrete Mistral installation > where needed.
> > > As a home for this action I'd propose the 'mistral-extra' repo where we are > planning to move OpenStack actions and some more. > > Also, if you'd like to contribute you're very welcome. > > I would recommend developing actions for K8s somewhere externally, then > when mistral-extra is ready we can move them over. This is the approach > that I took for the Ansible actions[1] and they will likely be one of the > first additions to mistral-extra. > > [1]: https://github.com/d0ugal/mistral-ansible-actions > > > > > > Thanks > > Renat Akhmerov > @Nokia > > > On 13 Apr 2018, 09:18 +0700, , wrote: > > Hello Mistral team. > > I'm doing some work on K8S but I observed that there is only Docker's > action in Mistral. > > I would like to ask the Mistral team why there is no K8S action in > Mistral. > > I think it would be useful in Mistral. > > If you feel it's necessary, could I add a K8S action to Mistral? > > > > Regards, > > Xian Jun Hong > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan at xeberon.net Mon Apr 16 22:01:58 2018 From: gaetan at xeberon.net (Gaetan) Date: Tue, 17 Apr 2018 00:01:58 +0200 Subject: [openstack-dev] [PBR] Patches on PBR Message-ID: Hello, I have started a few other patches on PBR, can anyone give me some feedback on them? https://review.openstack.org/558181 https://review.openstack.org/561731 https://review.openstack.org/559484 They don't do that much, but they can help the dev workflow. Thanks a lot! ----- Gaetan 2018-04-08 9:39 GMT+02:00 Gaetan : > Hello, > > I have started a few patches on PBR which fail, but I am not sure of the > reason; they seem related to something external to my changes: > > - https://review.openstack.org/#/c/559484/6: 'pbr boostrap' command. > Error seems:"testtools.matchers._impl.MismatchError: b'STARTING test > server pbr_testpackage.wsgi' not in b''" > - https://review.openstack.org/#/c/558181/: proposal for update of > sem-ver 3 doc > - https://review.openstack.org/#/c/524436/: Pipfile support (still WIP) > > Can you review them? > Thanks, > > ----- > Gaetan > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From openstack at fried.cc Mon Apr 16 22:23:46 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 16 Apr 2018 17:23:46 -0500 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: <7d0b4890-062a-49f3-0642-12df717ac0b6@gmail.com> References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> <7d0b4890-062a-49f3-0642-12df717ac0b6@gmail.com> Message-ID: > I still don't see a use in returning the root providers in the > allocation requests -- since there is nothing consuming resources from > those providers. > > And we already return the root_provider_uuid for all providers involved > in allocation requests within the provider_summaries section. > > So, I can kind of see where we might want to change *this* line of the > nova scheduler: > > https://github.com/openstack/nova/blob/stable/pike/nova/scheduler/filter_scheduler.py#L349 > > > from this: > > compute_uuids = list(provider_summaries.keys()) > > to this: > > compute_uuids = set([ > ps['root_provider_uuid'] for ps in provider_summaries.values() > ]) If we're granting that it's possible to get all your resources from sharing providers, the above doesn't help you to know which of your compute_uuids belongs to which of those sharing-only allocation requests. I'm fine deferring this part until we have a use case for sharing-only allocation requests that aren't prompted by an "attach-*" case where we already know the target host/consumer. But I'd like to point out that there's nothing in the API that prevents us from getting such results. -efried From jaypipes at gmail.com Mon Apr 16 22:50:39 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 16 Apr 2018 18:50:39 -0400 Subject: [openstack-dev] [placement] Anchor/Relay Providers In-Reply-To: References: <1e51904e-f100-da32-966d-316d9fb7a87f@fried.cc> <4374f7d6-cf6c-a61e-305f-b409a4ab8c59@gmail.com> <7d0b4890-062a-49f3-0642-12df717ac0b6@gmail.com> Message-ID: On 04/16/2018 06:23 PM, Eric Fried wrote: >> I still don't see a use in returning the root providers in the >> allocation requests -- since there is nothing consuming resources from >> those providers. >> >> And we already return the root_provider_uuid for all providers involved >> in allocation requests within the provider_summaries section. >> >> So, I can kind of see where we might want to change *this* line of the >> nova scheduler: >> >> https://github.com/openstack/nova/blob/stable/pike/nova/scheduler/filter_scheduler.py#L349 >> >> >> from this: >> >> compute_uuids = list(provider_summaries.keys()) >> >> to this: >> >> compute_uuids = set([ >> ps['root_provider_uuid'] for ps in provider_summaries.values() >> ]) > > If we're granting that it's possible to get all your resources from > sharing providers, the above doesn't help you to know which of your > compute_uuids belongs to which of those sharing-only allocation requests. > > I'm fine deferring this part until we have a use case for sharing-only > allocation requests that aren't prompted by an "attach-*" case where we > already know the target host/consumer. But I'd like to point out that > there's nothing in the API that prevents us from getting such results. And I'd like to point out that I originally made the GET /allocation_candidates API not return allocation requests when there were only sharing providers. Because... well, there's just no viable use case for it.
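For concreteness, a toy sketch of the point about provider_summaries already carrying the anchor information. The payload below is invented and heavily trimmed -- only the root_provider_uuid field matters here -- and it reuses Eric's CN1/SS1/SS2 example from earlier in the thread:

    # Simplified stand-in for the provider_summaries section of a
    # GET /allocation_candidates response (shape trimmed for brevity).
    provider_summaries = {
        'ss1-uuid': {
            'resources': {'DISK_GB': {'capacity': 1000, 'used': 100}},
            'root_provider_uuid': 'cn1-uuid',
        },
        'ss2-uuid': {
            'resources': {'IPV4_ADDRESS': {'capacity': 250, 'used': 10}},
            'root_provider_uuid': 'cn1-uuid',
        },
    }

    # A caller can already recover the anchor/root providers:
    compute_uuids = set(
        ps['root_provider_uuid'] for ps in provider_summaries.values()
    )
    print(compute_uuids)  # {'cn1-uuid'}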
-jay From gdubreui at redhat.com Mon Apr 16 23:49:59 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Tue, 17 Apr 2018 09:49:59 +1000 Subject: [openstack-dev] [api] Adding a SDK to developer.openstack.org pages In-Reply-To: <20180406123753.ljldunkd3cxn7z6i@yuggoth.org> References: <5cf52faf-9755-2ddd-4ba3-d19f1a4d4490@redhat.com> <20180406123753.ljldunkd3cxn7z6i@yuggoth.org> Message-ID: <8c2d5df0-b343-dc0f-bd89-8cb1ab25c974@redhat.com> On 06/04/18 22:37, Jeremy Stanley wrote: > On 2018-04-06 12:00:24 +1000 (+1000), Gilles Dubreuil wrote: >> I'd like to update developer.openstack.org to add details about a new >> SDK. >> >> What would be the corresponding repo? My searches landed me in >> https://docs.openstack.org/doc-contrib-guide/ which is about updating >> docs.openstack.org but not developer.openstack.org. Is the developer section >> inside the docs section? > Looks like we could do a better job of linking to the relevant git > repositories from some documents. > > I think the file you're looking for is probably: > > https://git.openstack.org/cgit/openstack/api-site/tree/www/index.html > > Happy hacking! > That's the one! Thank you From moshele at mellanox.com Tue Apr 17 00:07:11 2018 From: moshele at mellanox.com (Moshe Levi) Date: Tue, 17 Apr 2018 00:07:11 +0000 Subject: [openstack-dev] Removing networking-mlnx from Debian? In-Reply-To: <20180416194352.w35vj42i7hkc2ygo@yuggoth.org> References: <94573577-2216-3f87-aeba-e494f6d3d974@debian.org> <20180413120804.tag6zt4vntmk7jxe@yuggoth.org> <20180416194352.w35vj42i7hkc2ygo@yuggoth.org> Message-ID: > -----Original Message----- > From: Jeremy Stanley [mailto:fungi at yuggoth.org] > Sent: Monday, April 16, 2018 10:44 PM > To: OpenStack Development Mailing List (not for usage questions) > > Subject: Re: [openstack-dev] Removing networking-mlnx from Debian? > > On 2018-04-13 14:17:29 +0000 (+0000), Moshe Levi wrote: > [...] > > Yes, How can we add python3 job in zuul for testing it? > > I've proposed > https://review.openstack.org/561703 to add the corresponding > Python3.5 version of your unit test jobs. Looks like the openstack-tox-py35 job is > passing (Ran: 176 tests in 8.8742 sec.), if you feel like > approving. Approved. Thanks :) > -- > Jeremy Stanley From aaronzhu1121 at gmail.com Tue Apr 17 00:27:06 2018 From: aaronzhu1121 at gmail.com (Rong Zhu) Date: Tue, 17 Apr 2018 08:27:06 +0800 Subject: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights Message-ID: Hi Sean! yaql is OK with this. On Sat, Apr 14, 2018 at 3:46 AM, Sean McGinnis wrote: > Hello teams, > > I am following up on some recently announced changes regarding governed > projects and tagging rights. See [1] for background. > > It was mostly followed before that, when a project came under official > governance, all tagging and releases would then move to using the > openstack/releases repo and associated automation. It was not officially > stated > until recently that this was one of the steps of coming under governance, > so > there were a few projects that became official but that continued to do > their > own releases.
> > We've cleaned up most projects' rights to push tags, but for the ones > listed > here we waited: > > - rally > - dragonflow > - ec2-api > - networking-powervm > - nova-powervm > - yaql > > We would like to finish cleaning up the ACLs for these, but I wanted to > check > with the teams to make sure there wasn't a reason why these repos had > continued > tagging separately. Please let me know, either here or in the > #openstack-release channel, if there is something we are overlooking. > > Thanks for your attention. > > --- > Sean (smcginnis) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Thanks, Rong Zhu -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Tue Apr 17 01:20:20 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 16 Apr 2018 18:20:20 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> Message-ID: <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote: > >>>The "iso file" will not be inside the guest, but rather passed to > the guest as a block device, right? > Cloud-init expects to find a config drive with the following requirements > [1]; in order to make cloud-init able to consume the config drive, we > should be able to prepare it. > On some hypervisors, you can define something like the following for the VM, > and then the VM is able to consume it at startup. > > But in the z/VM case, a disk can be created during the VM create (define) > stage with no disk format set; it's the operating system's > responsibility to define the purpose of the > disk, so what we do is > 1) first, when we build the image, we create a small AE like cloud-init whose > only purpose is to get files from the z/VM internal pipe and handle the config > drive case
IMHO, if the z/VM driver can't be fixed to provide proper config drive support, we won't be able to approve the implementation patches. I would like to hear other opinions about it. I propose that we remove the z/VM driver blueprint from the runway at this time and place it back into the queue while work on the driver continues. At a minimum, we need to see z/VM CI running with [validation]run_validation = True in tempest.conf before we add the z/VM driver blueprint back into a runway in the future. Cheers, -melanie [0] https://github.com/openstack/nova/blob/888cd51/nova/virt/hyperv/vmops.py#L661 [1] https://github.com/openstack/nova/blob/888cd51/nova/virt/ironic/driver.py#L974 [2] https://github.com/openstack/nova/blob/888cd51/nova/virt/libvirt/driver.py#L3595 [3] https://github.com/openstack/nova/blob/888cd51/nova/virt/powervm/media.py#L120 [4] https://github.com/openstack/nova/blob/888cd51/nova/virt/vmwareapi/vmops.py#L854 [5] https://github.com/openstack/nova/blob/888cd51/nova/virt/xenapi/vm_utils.py#L1151 From xianjun666 at dcn.ssu.ac.kr Tue Apr 17 01:50:36 2018 From: xianjun666 at dcn.ssu.ac.kr (=?utf-8?B?7ZmN7ISg6rWw?=) Date: Tue, 17 Apr 2018 10:50:36 +0900 Subject: [openstack-dev] [Mistral]I think Mistral need K8S action In-Reply-To: References: <58c101d3d2cd$9bbb7a40$d3326ec0$@dcn.ssu.ac.kr> <578d70ca-25e5-441a-9211-0c7986bf2f16@Spark> <734a01d3d538$a3b688d0$eb239a70$@dcn.ssu.ac.kr> Message-ID: <7b8801d3d5ee$7c2923c0$747b6b40$@dcn.ssu.ac.kr> Thank you very much for your help. I'm doing some tests now and I'll let you know if I start the K8S action. Thanks again, Xian Jun Hong From: Dougal Matthews Sent: Tuesday, April 17, 2018 6:00 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Mistral]I think Mistral need K8S action On 16 April 2018 at 05:08, > wrote: Thanks for your reply. I will refer to this Ansible action and developing actions for K8S somewhere externally. Great. Do let us know when you start someting - I would be interested in giving feedback and testing or possibly helping out too. Regards, Xian Jun Hong From: Dougal Matthews > Sent: Saturday, April 14, 2018 1:00 AM To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [Mistral]I think Mistral need K8S action On 13 April 2018 at 05:47, Renat Akhmerov > wrote: Hi, I completely agree with you that having such an action would be useful. However, I don’t think this kind of action should be provided by Mistral out of the box. Actions and triggers are integration pieces for Mistral and are natively external to Mistral code base. In other words, this action can be implemented anywhere and plugged into a concrete Mistral installation where needed. As a home for this action I’d propose 'mistral-extra’ repo where we are planning to move OpenStack actions and some more. Also, if you’d like to contribute you’re very welcome. I would recommend developing actions for K8s somewhere externally, then when mistral-extra is ready we can move them over. This is the approach that I took for the Ansible actions[1] and they will likely be one of the first additions to mistral-extra. [1]: https://github.com/d0ugal/mistral-ansible-actions Thanks Renat Akhmerov @Nokia On 13 Apr 2018, 09:18 +0700, < xianjun666 at dcn.ssu.ac.kr>, wrote: Hello Mistral team. I'm doing some work on the K8S but I observed that there is only Docker's action in Mistral. I would like to ask Mistral Team, why there is no K8S action in the mistral. 
I think it would be useful in Mistral. If you feel it's necessary, could I add a K8S action to Mistral? Regards, Xian Jun Hong
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From sangho at opennetworking.org Tue Apr 17 02:00:30 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Tue, 17 Apr 2018 11:00:30 +0900 Subject: [openstack-dev] [openstack-infra] How to take over a project? Message-ID:

Dear OpenStack Infra team,

I would like to know how to take over an OpenStack project. I am a committer of the networking-onos project (https://github.com/openstack/networking-onos), and I would like to take over the project. The current maintainer (cc’d) has already agreed to that.

Please let me know the process to take over (or change the maintainer of) the project.

BTW, it looks like even the current maintainer cannot create a new branch of the code. How can we get the authority to create a new branch?

Thank you,

Sangho
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From iwienand at redhat.com Tue Apr 17 06:03:30 2018 From: iwienand at redhat.com (Ian Wienand) Date: Tue, 17 Apr 2018 16:03:30 +1000 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: References: Message-ID:

On 04/17/2018 12:00 PM, Sangho Shin wrote:
> I would like to know how to take over an OpenStack project. I am a
> committer of the networking-onos project
> (https://github.com/openstack/networking-onos), and I would like to
> take over the project.
> The current maintainer (cc’d) has already agreed to that.

Are you talking about the github project or the gerrit project? Github is a read-only mirror of the project from gerrit. You appear to already be a member of networking-onos-core [1] so you have permissions to approve and reject changes.

> BTW, it looks like even the current maintainer cannot create a new
> branch of the code. How can we get the authority to create a new
> branch?

Are you following something like [2]?
-i

[1] https://review.openstack.org/#/admin/groups/1001,members
[2] https://docs.openstack.org/infra/manual/drivers.html#feature-branches

From adriant at catalyst.net.nz Tue Apr 17 06:10:25 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Tue, 17 Apr 2018 18:10:25 +1200 Subject: [openstack-dev] [all] How to handle python3 only projects Message-ID: <8c07821c-3546-7c6e-8288-e42f4847e36d@catalyst.net.nz>

Hello devs,

The python27 clock of doom ticks closer to zero (https://pythonclock.org/) and officially dropping python27 support is going to have to happen eventually, that though is a bigger topic.

Before we get there outright what we should think about is what place python3 only projects have in OpenStack alongside ones that support both.

Given that python27's life is nearing the end, we should probably support a project either transitioning to only python3 or new projects that are only python3. Not to mention the potential inclusion of python3 only libraries in global-requirements.

Potentially we should even encourage python3 only projects, and encourage deployers and distro providers to focus on python3 only (do we?). Python3 only projects are now a reality, python3 only libraries are now a reality, and most of OpenStack already supports python3. Major libraries are dropping python27 support in newer versions, and we should think about how we want to do it too.

So where do projects that want to stop supporting python27 fit in the OpenStack ecosystem? Or given the impending end of python27, why should new projects be required to support it at all, or should we heavily encourage new projects to be python3 only (if not require it)?

It's not an easy topic, and there are likely lots of opinions on the matter, but it's something to start considering.

Cheers!

- Adrian Turjak

From skaplons at redhat.com Tue Apr 17 06:55:35 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 17 Apr 2018 08:55:35 +0200 Subject: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints In-Reply-To: <97B996A6-F453-495C-BA95-1FDDAD31A8EF@redhat.com> References: <94BD147E-C0B0-4F84-BADE-C39469022654@vmware.com> <050C7C89-8A69-4D41-81DC-9D029E09FFEE@vmware.com> <20180415233032.9ABE6B32D1@mail.valinux.co.jp> <97B996A6-F453-495C-BA95-1FDDAD31A8EF@redhat.com> Message-ID: <75F0E8EC-918C-4491-8969-BD592CDF6FC1@redhat.com>

Hi,

Just for information to all who send patches for Neutron. Patch [1] is now merged, so if the lower-constraints job in your patch is still failing with an error like the one in this thread, please rebase your patch on top of [1] instead of just rechecking, which probably will not help.

[1] https://review.openstack.org/#/c/561579/

> On 16.04.2018, at 12:41, Slawomir Kaplonski wrote:
>
> I just sent a patch to bump the Ryu version in Neutron's requirements to fix the lower constraints job there also: https://review.openstack.org/#/c/561579/
>
>
>> On 16.04.2018, at 09:13, Gary Kotton wrote:
>>
>> Please see https://review.openstack.org/561443
>>
>> On 4/16/18, 2:31 AM, "IWAMOTO Toshihiro" wrote:
>>
>> On Sun, 15 Apr 2018 21:02:42 +0900,
>> Gary Kotton wrote:
>>>
>>> [1 ]
>>> [1.1 ]
>>> Hi,
>>> That sounds reasonable. I wonder if the RYU folk can chime in here.
>>> Thanks
>>
>> I don't fully understand the recent g-r change yet, but
>> I guess neutron-dynamic-routing should also have ryu>=4.24.
>> I'll check this tomorrow.
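(For the record: the failing import is ryu's utils.py doing "from pip import req" at module import time, as the traceback below shows. pip 10 moved its internals to pip._internal, so that import now raises ImportError; ryu 4.24 and later no longer import pip there. That is why the whole fix is just raising the minimum version, roughly:

    # requirements.txt
    ryu>=4.24  # Apache-2.0

    # lower-constraints.txt
    ryu==4.24

with no neutron code change needed.)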
>> >>> From: Akihiro MOTOKI
>>> Reply-To: OpenStack List
>>> Date: Sunday, April 15, 2018 at 12:43 PM
>>> To: OpenStack List
>>> Subject: Re: [openstack-dev] [neutron][dynamic routing] RYU Breaks lower constraints
>>>
>>> Gary,
>>>
>>> I think this is caused by the recent pip change: pip can no longer be imported from code. The right solution seems to be to bump the minimum version of ryu.
>>>
>>> Thought?
>>>
>>> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128939.html
>>>
>>> Akihiro
>>>
>>> On 2018/04/15 at 6:06 PM, "Gary Kotton" > wrote:
>>> Hi,
>>> It seems like the RYU import is breaking the project:
>>>
>>>
>>> 2018-04-15 08:41:34.654681 | ubuntu-xenial | b'--- import errors ---\nFailed to import test module: neutron_dynamic_routing.tests.unit.services.bgp.driver.ryu.test_driver\nTraceback (most recent call last):\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 456, in _find_test_path\n module = self._get_module_from_name(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/unittest2/loader.py", line 395, in _get_module_from_name\n __import__(name)\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/neutron_dynamic_routing/tests/unit/services/bgp/driver/ryu/test_driver.py", line 21, in \n from ryu.services.protocols.bgp import bgpspeaker\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/services/protocols/bgp/bgpspeaker.py", line 21, in \n from ryu.lib.packet.bgp import (\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/__init__.py", line 6, in \n from . import (ethernet, arp, icmp, icmpv6, ipv4, ipv6, lldp, mpls, packet,\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ethernet.py", line 18, in \n from . import vlan\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/vlan.py", line 21, in \n from . import ipv4\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/ipv4.py", line 23, in \n from . import tcp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/tcp.py", line 24, in \n from . import bgp\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/lib/packet/bgp.py", line 52, in \n from ryu.utils import binary_str\n File "/home/zuul/src/git.openstack.org/openstack/neutron-dynamic-routing/.tox/lower-constraints/lib/python3.5/site-packages/ryu/utils.py", line 23, in \n from pip import req as pip_req\nImportError: cannot import name \'req\'\n'
>>>
>>> Any suggestions?
>>> Thanks
>>> Gary
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>>
>>> [1.2 ]
>>> [2 ]
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> —
> Best regards
> Slawek Kaplonski
> skaplons at redhat.com
>

—
Best regards
Slawek Kaplonski
skaplons at redhat.com

From jichenjc at cn.ibm.com Tue Apr 17 08:58:22 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Tue, 17 Apr 2018 16:58:22 +0800 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID:

For the question on the AE documentation, it's open source at [1] and the documentation for how to build and use it is at [2]. Once our code is upstream, there is a set of documentation changes which will cover this image build process by adding some links there [3].

You are right, we need the image to have our Activation Engine (AE). I think different arches and platforms might have their own unique requirements, and our Activation Engine is very similar to cloud-init, so there is no harm in adding it from the user's perspective. I think later we can upload an image somewhere so anyone is able to consume it as a test image if they like, because different arches' images (e.g. x86 and s390x) can't be shared anyway.

For the config drive format you mentioned: actually, as per the previous explanation and discussion with Michael and Dan, we found that iso9660 can be used (previously we made a bad assumption) and we already changed the patch in [4], so it's exactly the same as the other virt drivers you mentioned; we don't need a special format and iso9660 works perfectly for our driver.

It makes sense to me that we are temporarily moved out of the runway. I suppose we can adjust the CI to enable run_ssh = true with the config drive functionality very soon, and we will apply for review after that with the test results requested in our CI log.

Thanks

[1] https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure
[2] http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux
[3] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky
[4] https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.py line 104

Best Regards!
Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: melanie witt To: openstack-dev at lists.openstack.org Date: 04/17/2018 09:21 AM Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat

On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote:
> >>>The "iso file" will not be inside the guest, but rather passed to
> the guest as a block device, right?
> Cloud init expects to find a config drive with following requirements
> [1], in order to make cloud init able to consume config drive , we
> should be able to prepare it,
> in some hypervisor, you can define something like following to the VM
> then VM startup is able to consume it
>
> but for z/VM case it allows disk to be created during VM create (define
> )stage but no disk format set, it's the operating system's
> responsibility to define the purpose of the
> disk, so what we do is
> 1) first when we build image ,we create a small AE like cloud-init but
> only purpose is to get files from z/VM internal pipe and handle config
> drive case

What does AE stand for?

So, this means in order to use the z/VM driver, users must have special images that will ensure the config drive will be readable by cloud-init. They can't use standard cloud images.

> 2) During spawn we create config drive in nova-compute side then send
> the file to z/VM through z/VM internal pipe (omit detail here)
> 3) During startup of the virtual machine, the small AE is able to mount
> the file as loop device and then in turn cloud-init is able to handle it
>
> because this is our special case, we don't want to upload to cloud-init
> community because of uniqueness and as far as we can tell, no hook in
> cloud-init mechanism allowed as well
> to let us 'mount -o loop' ; also, from openstack point of view except
> this small AE (which is documented well) no special thing and
> inconsistent to other drivers
>
> [1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225

Where is the AE documented? How do users obtain it? What tools are they supposed to use to build images to use with the z/VM driver?

That aside, from what I can see, the z/VM driver behaves unlike any other in-tree driver [0-5] in how it handles config drive. Drivers are expected to create the config drive and present it to the guest in iso9660 or vfat format without requirement of a custom image and the existing drivers are doing that.

IMHO, if the z/VM driver can't be fixed to provide proper config drive support, we won't be able to approve the implementation patches. I would like to hear other opinions about it.

I propose that we remove the z/VM driver blueprint from the runway at this time and place it back into the queue while work on the driver continues. At a minimum, we need to see z/VM CI running with [validation]run_validation = True in tempest.conf before we add the z/VM driver blueprint back into a runway in the future.
Cheers,
-melanie

[0] https://github.com/openstack/nova/blob/888cd51/nova/virt/hyperv/vmops.py#L661
[1] https://github.com/openstack/nova/blob/888cd51/nova/virt/ironic/driver.py#L974
[2] https://github.com/openstack/nova/blob/888cd51/nova/virt/libvirt/driver.py#L3595
[3] https://github.com/openstack/nova/blob/888cd51/nova/virt/powervm/media.py#L120
[4] https://github.com/openstack/nova/blob/888cd51/nova/virt/vmwareapi/vmops.py#L854
[5] https://github.com/openstack/nova/blob/888cd51/nova/virt/xenapi/vm_utils.py#L1151

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
-------------- next part -------------- An HTML attachment was scrubbed... URL:
-------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL:

From trinath.somanchi at nxp.com Tue Apr 17 09:13:24 2018 From: trinath.somanchi at nxp.com (Trinath Somanchi) Date: Tue, 17 Apr 2018 09:13:24 +0000 Subject: [openstack-dev] [openstack][charms] Openstack + OVN In-Reply-To: References: Message-ID:

Hi Openstack-Charms team-

Please help us with your guidance to submit the openstack-ovn charm.

/Trinath | NXP

From: Aakash Kt [mailto:aakashkt0 at gmail.com] Sent: Thursday, April 12, 2018 7:34 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [openstack][charms] Openstack + OVN

Hello, Any update on getting to the development of this charm? I need some guidance on this. Thank you, Aakash

On Tue, Mar 27, 2018 at 10:27 PM, Aakash Kt > wrote: Hello, So, an update on the current status. The charm spec for charm-os-ovn has been merged (queens/backlog).
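For anyone wanting a concrete picture of where I am heading: a minimal skeleton with the bare reactive framework (illustrative only -- this assumes charms.reactive, and the handler and flag names below are made up) would look something like:

    from charms.reactive import when_not, set_flag

    @when_not('ovn.installed')
    def install_ovn():
        # fetch and install the OVN payload, render the initial config
        set_flag('ovn.installed')

with the real install/config logic hung off each flag.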
I don't know what the process is after this, but I had a couple of questions for the development of the charm : - I was wondering whether I need to use the charms.openstack package? Or can I just write using the reactive framework as is? - If we do have to use charms.openstack, where can I find good documentation of the package? I searched online and could not find much to go on with. - How much time do you think this will take to develop (not including test cases) ? Do guide me on the further steps to bring this charm to completion :-) Thank you, Aakash On Mon, Mar 19, 2018 at 5:37 PM, Aakash Kt > wrote: Hi James, Thank you for the previous code review. I have pushed another patch. Also, I do not know how to reply to your review comments on gerrit, so I will reply to them here. About the signed-off-message, I did not know that it wasn't a requirement for OpenStack, I assumed it was. I have removed it from the updated patch. Thank you, Aakash On Thu, Mar 15, 2018 at 11:34 AM, Aakash Kt > wrote: Hi James, Just a small reminder that I have pushed a patch for review, according to changes you suggested :-) Thanks, Aakash On Mon, Mar 12, 2018 at 2:38 PM, James Page > wrote: Hi Aakash On Sun, 11 Mar 2018 at 19:01 Aakash Kt > wrote: Hi, I had previously put in a mail about the development for openstack-ovn charm. Sorry it took me this long to get back, was involved in other projects. I have submitted a charm spec for the above charm. Here is the review link : https://review.openstack.org/#/c/551800/ Please look in to it and we can further discuss how to proceed. I'll feedback directly on the review. Thanks! James __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Apr 17 09:24:48 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 17 Apr 2018 11:24:48 +0200 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver Message-ID: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> Hi everyone, As you know the Technical Committee (the governance body representing contributors producing OpenStack software) meets with other OpenStack governance bodies (Board of Directors and User Committee) on the Sunday before every Summit, and Vancouver will be no exception. At the TC retrospective Forum session in Sydney we decided we should more broadly ask our constituency for topics they would like us to cover in that discussion. Once the current election cycle is over and the new TC chair is picked, we'll come up with a proposed agenda and submit it to the Chairman of the Board for consideration. So... Is there any specific topic you think we should cover in that meeting ? -- Thierry Carrez (ttx) From jean-philippe at evrard.me Tue Apr 17 09:36:11 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 17 Apr 2018 10:36:11 +0100 Subject: [openstack-dev] [openstack-ansible] We need to change! Message-ID: Dear community, Starting at the end of this month, I won't be able to work full time on OpenStack-Ansible anymore. I want to highlight the following: Our current way of working is not sustainable in the long run, as a lot of work (and therefore pressure) is concentrated on a few individuals. 
I managed to get more people working on some parts of our code (becoming cores on specific areas of knowledge, like mbuil on networking, mnaser and gokhan on telemetry, johnsom on octavia, mugsie on designate), but at the same time we have lost a core reviewer on all our code base (mhayden).

I like the fact we are still innovating with our own deployment tooling, bringing more features in, changing the deployment models to be always more stable, more user-friendly. But new features aren't everything. We need people actively looking at the quality of existing deliverables. We need to stretch those responsibilities across more people.

I would be very happy if some people using OpenStack-Ansible would help on:

* Bugs. We are reaching an all-time high in pending bugs. We need people actively cleaning those up. We need someone to organize a bug smash. We need people willing to lead the bug triage process too.

* Releases. Our current release process is manual. People interested in how releases are handled should step in there (for example, what goes in, and at what time). We also need to coordinate with the releases team, and improve our way of releasing.

* Jobs/state monitoring. I have spent an insane amount of time cleaning up after other people. That cannot be done any longer. If you're breaking a job, whether it's part of the openstack-ansible gates or not, you should be fixing it. Even if it's a non-voting job, or a periodic job. I'd like everyone to monitor our zuul dashboard, and take action based on that. When queens was close to release, every job was green on the zuul dashboard. I did an experiment of 1 month without me fixing the upgrade jobs, and guess what: ALL (or almost ALL) the upgrade jobs are now broken. Please monitor [1] and actively help fix the jobs. Remember, if everyone works on this, it gives great feedback to new users, and it becomes a virtuous cycle.

* Reduce technical debt. We have so many variables, so many remnants of the past. This cycle is planned to be a cleanup. Let's simplify all of this, making sure the deployment of openstack with openstack-ansible ends up as a KISS system.

* Increasing voting test coverage. We need more code paths tested, and we need those code paths to prevent bad patches from merging. It makes the reduction of technical debt easier.

Really, thank you for your understanding.

Best regards, Jean-Philippe (evrardjp)

[1]: http://zuul.openstack.org/builds.html?pipeline=periodic&project=openstack%2Fopenstack-ansible

From gong.yongsheng at 99cloud.net Tue Apr 17 09:47:22 2018 From: gong.yongsheng at 99cloud.net (=?GBK?B?uajTwMn6?=) Date: Tue, 17 Apr 2018 17:47:22 +0800 (CST) Subject: [openstack-dev] [election][tc] TC Candidacy for gong yongsheng Message-ID: <2f3e8b5d.4e28.162d3006b48.Coremail.gong.yongsheng@99cloud.net>

Hi, I am announcing my candidacy for a member of the Technical Committee.

I am the CTO of 99cloud Inc., China. After discussing with my team (my CEO, COO and R&D department), I made up my mind. This means my company will continue devoting itself to the OpenStack community, helping OpenStack meet customers' needs and pushing OpenStack to a new high.

I was previously a Neutron core member and am the PTL of the Tacker project. Besides this, I am also contributing to the OpenCord project, which is the edge platform for the telecom central office. The OpenCord project uses OpenStack as the platform to instantiate the VNFs.
One more open project I am leading my company's R&D into is Akraino Edge Stack, a Linux Foundation project in formation, which will create an open source software stack to improve the state of edge cloud infrastructure for carrier, provider, and IoT networks. I think OpenStack can play a good role in OpenCord and Akraino as the infrastructure manager.

I agree with Thierry Carrez on the concept of "Constellations" (representations of groups of OpenStack components that answer a specific use case). For example, in the case of a MANO system, we need to improve Tacker with OpenStack as the NFVI. For DEVOPS, Zuul and the whole of OpenStack's CI infrastructure are great; we can also introduce Kubernetes into this constellation so that we can meet the DEVOPS needs of container-based or VM-based applications.

I am new to TC governance and have the passion to join the TC. I am a technical guy and hope I can help to glue the OpenStack projects together and keep my eye on their development. As a member of a management team, I like to make others succeed and then enjoy their success. So, I will support the OpenStack project teams to make OpenStack great again.

Thank you for your consideration.

Best Regards, Gong Yongsheng (gongysh)
[1] http://stackalytics.com/?metric=commits&company=99cloud&user_id=gongysh&release=all
[2] http://stackalytics.com/?metric=commits&company=99cloud&release=all
[3] https://www.akraino.org/
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From gkotton at vmware.com Tue Apr 17 10:47:11 2018 From: gkotton at vmware.com (Gary Kotton) Date: Tue, 17 Apr 2018 10:47:11 +0000 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: References: Message-ID: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com>

Hi,
You either need one of the onos core team or the neutron release team to add you. FYI - https://review.openstack.org/#/admin/groups/1001,members
Thanks
Gary

From: Sangho Shin Reply-To: OpenStack List Date: Tuesday, April 17, 2018 at 5:01 AM To: OpenStack List Subject: [openstack-dev] [openstack-infra] How to take over a project?

Dear OpenStack Infra team,

I would like to know how to take over an OpenStack project. I am a committer of the networking-onos project (https://github.com/openstack/networking-onos), and I would like to take over the project. The current maintainer (cc’d) has already agreed to that.

Please let me know the process to take over (or change the maintainer of) the project.

BTW, it looks like even the current maintainer cannot create a new branch of the code. How can we get the authority to create a new branch?

Thank you,

Sangho
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From hamdyk at mellanox.com Tue Apr 17 10:51:18 2018 From: hamdyk at mellanox.com (Hamdy Khader) Date: Tue, 17 Apr 2018 10:51:18 +0000 Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string In-Reply-To: <122B872DCF83AB4DB816E25A2C1AD08D8B9242BF@IRSMSX102.ger.corp.intel.com> References: <122B872DCF83AB4DB816E25A2C1AD08D8B9242BF@IRSMSX102.ger.corp.intel.com> Message-ID:

Hi,

I think you're right, will drop the split and push a change soon.

Regards,
Hamdy
________________________________
From: Szwed, Maciej Sent: Monday, April 16, 2018 4:51 PM To: OpenStack-dev at lists.openstack.org Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string

Hi,

I'm wondering why the os-brick implementation of NVMe-oF does a split on 'nqn' at line 97 of os_brick/initiator/connectors/nvme.py.
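(Quoting the connector from memory, the line is roughly

    conn_nqn = connection_properties['nqn'].split('.', 1)[1]

i.e. everything up to the first dot is discarded, so a driver has to prepend a dummy token and a dot just to survive the split; dropping it would leave simply conn_nqn = connection_properties['nqn'].)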
Connection properties, including 'nqn', are provided by the Cinder driver, and when a user wants to implement a new driver that uses NVMe-oF, he/she needs to create the NQN string with an additional token and a dot preceding the desired NQN string. This additional prefix is unused across the whole NVMe-oF implementation, and it creates confusion for people creating new Cinder drivers. What was its purpose? Can we drop that split?

Regards,

Maciej
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jaypipes at gmail.com Tue Apr 17 12:34:41 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 17 Apr 2018 08:34:41 -0400 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: <132dd688-ccfd-22f6-fc94-6901f2038479@gmail.com>

On 04/16/2018 09:20 PM, melanie witt wrote:
> I propose that we remove the z/VM driver blueprint from the runway at
> this time and place it back into the queue while work on the driver
> continues. At a minimum, we need to see z/VM CI running with
> [validation]run_validation = True in tempest.conf before we add the z/VM
> driver blueprint back into a runway in the future.

Seems reasonable to me.

-jay

From gr at ham.ie Tue Apr 17 12:40:05 2018 From: gr at ham.ie (Graham Hayes) Date: Tue, 17 Apr 2018 13:40:05 +0100 Subject: [openstack-dev] [all] How to handle python3 only projects In-Reply-To: <8c07821c-3546-7c6e-8288-e42f4847e36d@catalyst.net.nz> References: <8c07821c-3546-7c6e-8288-e42f4847e36d@catalyst.net.nz> Message-ID:

On 17/04/18 07:10, Adrian Turjak wrote:
> Hello devs,
>
> The python27 clock of doom ticks closer to zero
> (https://pythonclock.org/) and officially dropping python27 support is
> going to have to happen eventually, that though is a bigger topic.
>
> Before we get there outright what we should think about is what place
> python3 only projects have in OpenStack alongside ones that support both.
>
> Given that python27's life is nearing the end, we should probably
> support a project either transitioning to only python3 or new projects
> that are only python3. Not to mention the potential inclusion of python3
> only libraries in global-requirements.
>
> Potentially we should even encourage python3 only projects, and
> encourage deployers and distro providers to focus on python3 only (do
> we?). Python3 only projects are now a reality, python3 only libraries
> are now a reality, and most of OpenStack already supports python3. Major
> libraries are dropping python27 support in newer versions, and we should
> think about how we want to do it too.
>
> So where do projects that want to stop supporting python27 fit in the
> OpenStack ecosystem? Or given the impending end of python27, why should
> new projects be required to support it at all, or should we heavily
> encourage new projects to be python3 only (if not require it)?
>
> It's not an easy topic, and there are likely lots of opinions on the
> matter, but it's something to start considering.
>
> Cheers!
>
> - Adrian Turjak

I think the time has definitely come to allow projects to be py3 only.

https://review.openstack.org/#/c/561922/ should allow that from a governance perspective.

- Graham
-------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL:

From periyasamy.palanisamy at ericsson.com Tue Apr 17 13:07:48 2018 From: periyasamy.palanisamy at ericsson.com (Periyasamy Palanisamy) Date: Tue, 17 Apr 2018 13:07:48 +0000 Subject: [openstack-dev] [openstack-ansible] Problems with Openstack services while migrating VMs Message-ID:

Hi,

I'm trying to migrate controller and compute VMs installed with OpenStack-Ansible across systems with the following approach. This is mainly to minimize the deployment time in the Jenkins CI environment.

Export steps:
1. Power off the VMs gracefully.
2. virsh dumpxml ${node} > $EXPORT_PATH/${node}.xml
3. cp /var/lib/libvirt/images/${node}.qcow2 $EXPORT_PATH/$node.qcow2
4. Create a tarball of the XMLs and qcow2 images.

Import steps:
1. cp ${node}.qcow2 /var/lib/libvirt/images/
2. virsh define ${node}.xml
3. virsh start ${node}

After the VMs are imported, the OpenStack services (neutron-server, DHCP agent, Metering agent, Metadata agent, L3 agent, Open vSwitch agent, nova-conductor and nova-compute) are started in random order. As a result, neutron and nova are not able to find the DHCP agent and the compute service, so bringing up a tenant VM fails with the error in [1]. I have also tried to boot the compute VM followed by the controller VM; that doesn't help either. Could you please let me know what is going wrong here?

[1] https://paste.ubuntu.com/p/YNg2NnjvpS/ (fault section)

Thanks,
Periyasamy
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cdent+os at anticdent.org Tue Apr 17 13:22:51 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 17 Apr 2018 14:22:51 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-16 Message-ID:

HTML: https://anticdent.org/tc-report-18-16.html

This is the 16th week of the year, meaning I've been making these reports for a full year. The [first one](https://anticdent.org/tc-report-17.html) was in the 17th week of 2017. The reports have changed quite a bit since then: Back then there was still a once weekly TC meeting. That's since been replaced by less formal office hours, three times a week. That's had mixed results, reflected in the tone and content of these reports. To some extent the decompression of TC activity has meant less drama, intensity and apparent depth in the reports. But it may also be the case that the TC hasn't done as much as it could or should.

With elections in progress, we could take advantage of this time to reflect on the role of the TC and work to make sure the next term is more active in shaping and sustaining OpenStack. At the time of this writing there are ten hours left if you would like to nominate yourself. Info on the [election page](https://governance.openstack.org/election/). There are nine candidates (so far) for seven slots. When nominations close, there will be a week of "campaigning". This is an opportunity for community members to question the candidates about any concerns.

(I'm running for reelection, you can read my [nomination](https://git.openstack.org/cgit/openstack/election/plain/candidates/rocky/TC/cdent%40anticdent.org) if you like. Please ask me any questions you may have.)

# Leadership Shadowing

Last [Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-11.log.html#t2018-04-11T14:53:11) there was some comparison between the Kubernetes and OpenStack styles of "growing new leaders".
In Kubernetes there is a shadowing system that has mixed success, depending on the group.

# Forum and Summit

On [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-12.log.html#t2018-04-12T15:00:50) there was final discussion on submitting sessions to [the forum](http://forumtopics.openstack.org/).

Prior to the summit proper there will be a joint leadership meeting. Thierry has started [a thread](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129428.html) seeking topics that the community would like to see raised at that meeting. These meetings are the most significant formal engagement the TC has with the board throughout the year.

# More on Kolla

Also on Thursday there was [more discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-12.log.html#t2018-04-12T15:27:07) on how kolla-k8s might evolve.

-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent

From dms at danplanet.com Tue Apr 17 13:40:35 2018 From: dms at danplanet.com (Dan Smith) Date: Tue, 17 Apr 2018 06:40:35 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> (melanie witt's message of "Mon, 16 Apr 2018 18:20:20 -0700") References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID:

> I propose that we remove the z/VM driver blueprint from the runway at
> this time and place it back into the queue while work on the driver
> continues. At a minimum, we need to see z/VM CI running with
> [validation]run_validation = True in tempest.conf before we add the
> z/VM driver blueprint back into a runway in the future.

Agreed. I also want to see the CI reporting cleaned up so that it's readable and consistent. Yesterday I pointed out some issues with the fact that the actual config files being used are not the ones being uploaded. There are also duplicate (but not actually identical) logs from all services being uploaded, including things like a full compute log from starting with the libvirt driver.

I'm also pretty troubled by the total lack of support for the metadata service. I know it's technically optional on our matrix, but it's a pretty important feature for a lot of scenarios, and it's also a dependency for other features that we'd like to have wider support for (like attached device metadata).

Going back to the spec, I see very little detail on some of the things raised here, and very (very) little review back when it was first approved. I'd also like to see more detail be added to the spec about all of these things, especially around required special changes like this extra AE agent.

--Dan

From zbitter at redhat.com Tue Apr 17 13:49:58 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 17 Apr 2018 09:49:58 -0400 Subject: [openstack-dev] [TC][election] TC Candidacy Message-ID:

Hello friends,

I've been working full-time on OpenStack for 6 years now, since the very early days of the Heat project back in 2012. Along the way I have served as PTL of Heat, where I am still a member of the core team, and collaborated with developers from many other projects, such as Mistral, Zaqar, Telemetry, and Keystone. I also worked on TripleO for a while, from which I learned a lot about both deploying OpenStack itself and deploying complex applications using OpenStack (since it uses an OpenStack undercloud to deploy OpenStack as an application).
Last year I wrote, and the TC approved, a resolution on the importance of catering to applications that autonomously make use of OpenStack APIs if we are to achieve OpenStack's mission: https://governance.openstack.org/tc/resolutions/20170317-cloud-applications-mission.html (Since then a lot of great progress[1] has been made, with more coming[2].) Afterwards, a number of people remarked that up until that point, despite being familiar with all of the pieces, they had never really connected the dots to realise that there was no secure way for an application to authenticate to the OpenStack cloud it is running in without extensive manual intervention from the cloud operator. I'm running for election to the Technical Committee because I think it's important that we have a TC that can, collectively, connect the dots in as many different ways as possible, to cater to the many different users and potential users of OpenStack. There are important discussions ahead -- both within the technical community and between the TC and the Board -- about where to draw the boundaries of OpenStack; the more user viewpoints that are represented, the better the result will be. We don't get as much feedback from developers of cloud-aware applications as we do from other end users, because in many cases OpenStack doesn't yet meet their minimum requirements. That is the gap I am hoping to bridge. If we succeed, OpenStack will not only gain a lot more users, but I expect more users will become contributors. I know from long experience that keeping up with the activity of the TC requires a substantial time commitment; I am fortunate to be in a position to contribute and I hope to be able to represent many of y'all who are unable to devote that amount of time. I also plan to work with the TC to find more ways to guide projects toward maturity once they have joined the OpenStack community -- something we largely lost when the old incubation process went away. Questions and comments are welcome! thanks, Zane. [1] https://docs.openstack.org/keystone/queens/user/application_credentials.html [2] https://specs.openstack.org/openstack/keystone-specs/specs/keystone/rocky/capabilities-app-creds.html From doug at doughellmann.com Tue Apr 17 13:55:56 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Tue, 17 Apr 2018 09:55:56 -0400 Subject: [openstack-dev] [all] How to handle python3 only projects In-Reply-To: References: <8c07821c-3546-7c6e-8288-e42f4847e36d@catalyst.net.nz> Message-ID: <1523973236-sup-8187@lrrr.local> Excerpts from Graham Hayes's message of 2018-04-17 13:40:05 +0100: > On 17/04/18 07:10, Adrian Turjak wrote: > > Hello devs, > > > > The python27 clock of doom ticks closer to zero > > (https://pythonclock.org/) and officially dropping python27 support is > > going to have to happen eventually, that though is a bigger topic. > > > > Before we get there outright what we should think about is what place > > python3 only projects have in OpenStack alongside ones that support both. > > > > Given that python27's life is nearing the end, we should probably > > support a project either transitioning to only python3 or new projects > > that are only python3. Not to mention the potential inclusion of python3 > > only libraries in global-requirements. > > > > Potentially we should even encourage python3 only projects, and > > encourage deployers and distro providers to focus on python3 only (do > > we?). 
Python3 only projects are now a reality, python3 only libraries > > are now a reality, and most of OpenStack already supports python3. Major > > libraries are dropping python27 support in newer versions, and we should > > think about how we want to do it too. > > > > So where do projects that want to stop supporting python27 fit in the > > OpenStack ecosystem? Or given the impending end of python27, why should > > new projects be required to support it at all, or should we heavily > > encourage new projects to be python3 only (if not require it)? > > > > It's not an easy topic, and there are likely lots of opinions on the > > matter, but it's something to start considering. > > > > Cheers! > > > > - Adrian Turjak > > I think the time has definitely come to allow projects be py3 only. > > https://review.openstack.org/#/c/561922/ should allow that from a > governance perspective. > > - Graham (I replied on the patch because I saw that first, but I'll repeat myself here for continuity.) I don't think we're ready to make python 2 support optional. I do think we should shift to emphasizing python 3 "first", though. In order to allow projects to drop python 2 support I think we need to wait for both distributions we claim to support to have good python 3 support. Ubuntu has 3.5, but CentOS/RHEL does not, yet. Red Hat has announced in the release notes for RHEL 7.5 that "the next major release" of RHEL will not include python 2 and will include python 3. At that point we can reasonably expect deployers to start deploying OpenStack on python 3, and dropping python 2 support will become a realistic option. I should add that we have plenty of constructive things to do yet to prepare for that date. The Oslo team is currently working on making python 3 the default for all of our secondary jobs (docs, release notes, pep8, etc.). The next step will be to update the functional test jobs to ensure they are all running under python 3 (at least). Doug From ifat.afek at nokia.com Tue Apr 17 14:23:10 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 17 Apr 2018 14:23:10 +0000 Subject: [openstack-dev] [vitrage] No IRC meeting this week Message-ID: <1010559F-8915-47E3-9B30-FC82D4C24D3D@nokia.com> Hi, The IRC meeting tomorrow (April 18th) is canceled, as many Vitrage developers will be on vacation. See you next week, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From bob.ball at citrix.com Tue Apr 17 15:03:25 2018 From: bob.ball at citrix.com (Bob Ball) Date: Tue, 17 Apr 2018 15:03:25 +0000 Subject: [openstack-dev] [nova][xenapi] does get_all_bw_counters driver call nova-network only? In-Reply-To: <1523882211.27744.1@smtp.office365.com> References: <1523882211.27744.1@smtp.office365.com> Message-ID: <78e434279da84fa6bc50681831949a3c@AMSPEX02CL01.citrite.net> As far as I remember this isn't a nova-network only feature; but I may be missing something. I believe the bandwidth counters may be being used at Rackspace. Bob -----Original Message----- From: Balázs Gibizer [mailto:balazs.gibizer at ericsson.com] Sent: 16 April 2018 13:37 To: OpenStack-dev Subject: [openstack-dev] [nova][xenapi] does get_all_bw_counters driver call nova-network only? Hi, The get_all_bw_counters() virt driver [1] is only supported by xenapi today. However Matt raised the question [2] if this is a nova-network only feature. As in that case we can simply remove it. 
Cheers,
gibi

[1] https://github.com/openstack/nova/blob/68afe71e26e60a3e4ad30083cc244c57540d4da9/nova/virt/xenapi/driver.py#L383
[2] https://review.openstack.org/#/c/403660/78/nova/compute/manager.py@6855

__________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From skaplons at redhat.com Tue Apr 17 15:12:11 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Tue, 17 Apr 2018 17:12:11 +0200 Subject: [openstack-dev] [neutron] Sum up about "Enable mutable configuration" goal in Neutron Message-ID: <48E5DFAF-B3B0-4089-83B6-2F2369AFDFCB@redhat.com>

Hi,

I was working to implement goal [1] in Neutron and it's now done with [2]. I also checked with code search whether other neutron-related projects will need any changes. I looked with [3] for projects which might need such changes. According to this list, it looks like the projects that may require some changes are:

openstack/neutron-lbaas
openstack/networking-cisco
openstack/networking-dpm
openstack/networking-infoblox
openstack/networking-l2gw
openstack/networking-lagopus
openstack/neutron-dynamic-routing
openstack/networking-avaya
openstack/networking-6wind

Based on mail [4] I also updated the neutron-dynamic-routing project [5] and proposed a patch for neutron-lbaas [6], but it will probably not be merged as neutron-lbaas is deprecated.

If you are a maintainer of one of the other projects listed above (or another neutron-related project), please take care of this goal on your side.

[1] https://governance.openstack.org/tc/goals/rocky/enable-mutable-configuration.html
[2] https://review.openstack.org/#/c/554259/
[3] http://codesearch.openstack.org/?q=service.launch&i=nope&files=&repos=networking-6wind,networking-ale-omniswitch,networking-arista,networking-avaya,networking-bagpipe,networking-baremetal,networking-bgpvpn,networking-bigswitch,networking-brocade,networking-calico,networking-cisco,networking-cumulus,networking-dpm,networking-edge-vpn,networking-extreme,networking-fortinet,networking-fujitsu,networking-generic-switch,networking-generic-switch-tempest-plugin,networking-gluon,networking-h3c,networking-hpe,networking-huawei,networking-hyperv,networking-icc,networking-infoblox,networking-l2gw,networking-l2gw-tempest-plugin,networking-lagopus,networking-lenovo,networking-midonet,networking-mlnx,networking-nec,networking-odl,networking-onos,networking-opencontrail,networking-ovn,networking-ovs-dpdk,networking-peregrine,networking-plumgrid,networking-powervm,networking-sfc,networking-spp,networking-vpp,networking-vsphere,networking-zte,networking-zvm,neutron-classifier,neutron-dynamic-routing,neutron-fwaas,neutron-fwaas-dashboard,neutron-lbaas,neutron-lbaas-dashboard,neutron-lib,neutron-specs,neutron-tempest-plugin,neutron-vpnaas,neutron-vpnaas-dashboard
[4] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129137.html
[5] https://review.openstack.org/#/c/559309/
[6] https://review.openstack.org/#/c/559412/

—
Best regards
Slawek Kaplonski
skaplons at redhat.com

From mriedemos at gmail.com Tue Apr 17 16:18:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 17 Apr 2018 11:18:33 -0500 Subject: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date In-Reply-To: References: Message-ID: <149d6e3a-98f1-1415-a087-99e6d1ec2cb6@gmail.com>

On 4/16/2018 3:04 AM, 赵超 wrote:
> > There are some patches
to stable branches to the different trove repos, > and they are always progressing slowly ,because none of the current > trove team core members are in the trove-stable-maint. I tried to > contact with the previous PTLs about expanding the 'trove-stable-maint' > group and keep the group up-to-date, however got no response yet. > > I noticed that 'stable-maint-core' is always included in the individual > project -stable-maint group, could the core stable team help to update > the 'trove-stable-maint' group (adding me to it could be sufficient by > now)? I've gone through the stable branch proposed changes for python-troveclient and trove. The reason the core team from a project isn't automatically core on the stable branches for that project is because the review criteria and what's appropriate for stable branches is different from changes on the master branch. The details are in the stable branch guide [1]. Until it becomes clear that there are people that are reviewing stable branch patches and understand the rules, they don't get added to the core team for stable. Until then, you can make review requests for stable patches in the ML like you have here, or in the #openstack-stable freenode IRC channel. I think over time once the stable-maint-core team can identify some people that have done a good job of doing early reviews and +1 (and -1 inappropriate changes) then they can be added to the stable branch core team for the project. [1] https://docs.openstack.org/project-team-guide/stable-branches.html -- Thanks, Matt From lbragstad at gmail.com Tue Apr 17 16:57:19 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 17 Apr 2018 11:57:19 -0500 Subject: [openstack-dev] [keystone] rocky-1 retrospective Message-ID: <6c7a0986-75d0-e0af-6f97-72e9d5e2eb85@gmail.com> Hi all, As discussed in the keystone meeting today [0], we'll be holding our retrospective for rocky-1 next Tuesday, April 24th immediately after the keystone meeting at 1600 UTC. We're going to coordinate tools and what-not prior to the retrospective in the #openstack-keystone channel. See you next week, Lance [0] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-04-17-16.00.log.html#l-137 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From melwittt at gmail.com Tue Apr 17 17:40:48 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 17 Apr 2018 10:40:48 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> On Tue, 17 Apr 2018 16:58:22 +0800, Chen Ch Ji wrote: > For the question on AE documentation, it's open source in [1] and the > documentation for how to build and use is [2] > once our code is upstream, there are a set of documentation change which > will cover this image build process by > adding some links to there [3] Thanks, that is good info. 
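For others following the thread: judging by the linked script, the AE amounts to a small boot-time step that loop-mounts the file delivered over the z/VM pipe so that cloud-init sees an ordinary config drive. Roughly, with illustrative paths (the real zvmguestconfigure is a shell script):

    import os
    import subprocess

    CFG = '/var/opt/zvm/cfgdrive.iso'         # file sent over the z/VM pipe
    MNT = '/var/lib/cloud/seed/config_drive'  # a location cloud-init checks

    os.makedirs(MNT, exist_ok=True)
    subprocess.check_call(['mount', '-o', 'loop,ro', CFG, MNT])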
> You are right, we need image to have our Active Engine, I think > different arch and platform might have their unique > requirements and our solution our Active Engine is very like to > cloud-init so no harm to add it from user's perspective > I think later we can upload image to some place so anyone is able to > consume it as test image if they like > because different arch's image (e.g x86 and s390x) can't be shared anyway. > > For the config drive format you mentioned, actually, as previous > explanation and discussion witho Michael and Dan, > We found the iso9660 can be used (previously we made a bad assumption) > and we already changed the patch in [4], > so it's exactly same to other virt drivers you mentioned , we don't need > special format and iso9660 works perfect for our driver That's good news, I'm glad that got resolved. > It make sense to me we are temply moved out from runway, I suppose we > can adjust the CI to enable the run_ssh = true > with config drive functionalities very soon and we will apply for review > after that with the test result requested in our CI log. Okay, sounds good. Since you expect to be up and running with [validation]run_validation = True soon, I'm going to move the z/VM driver blueprint back to the front of the queue and put the next blueprint in line into the runway. Then, when the next blueprint end date arrives (currently 2018-04-30), if the z/VM CI is ready with cleaned up, human readable log files and is running with run_ssh = True with the test_server_basic_ops test to verify config drive operation, we will add the z/VM driver blueprint back to a runway for dedicated review. Let us know when the z/VM CI is ready, in case other runway reviews are completed early. If other runway reviews complete early, a runway space might be available earlier than 2018-04-30. Thanks, -melanie > Thanks > > [1] > https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure > [2] > http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux > [3] > https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky > [4] > https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.pyline 104 From melwittt at gmail.com Tue Apr 17 17:46:56 2018 From: melwittt at gmail.com (melanie witt) Date: Tue, 17 Apr 2018 10:46:56 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: On Tue, 17 Apr 2018 06:40:35 -0700, Dan Smith wrote: >> I propose that we remove the z/VM driver blueprint from the runway at >> this time and place it back into the queue while work on the driver >> continues. At a minimum, we need to see z/VM CI running with >> [validation]run_validation = True in tempest.conf before we add the >> z/VM driver blueprint back into a runway in the future. > > Agreed. I also want to see the CI reporting cleaned up so that it's > readable and consistent. Yesterday I pointed out some issues with the > fact that the actual config files being used are not the ones being > uploaded. There are also duplicate (but not actually identical) logs > from all services being uploaded, including things like a full compute > log from starting with the libvirt driver. Yes, we definitely need to see all of these issues fixed. > I'm also pretty troubled by the total lack of support for the metadata > service. 
> I'm also pretty troubled by the total lack of support for the metadata
> service. I know it's technically optional on our matrix, but it's a
> pretty important feature for a lot of scenarios, and it's also a
> dependency for other features that we'd like to have wider support for
> (like attached device metadata).
>
> Going back to the spec, I see very little detail on some of the things
> raised here, and very (very) little review back when it was first
> approved. I'd also like to see more detail be added to the spec about
> all of these things, especially around required special changes like
> this extra AE agent.

Agreed, can someone from the z/VM team please propose an update to the driver spec to document these details?

Thanks,
-melanie

From kennelson11 at gmail.com Tue Apr 17 23:45:52 2018
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 17 Apr 2018 23:45:52 +0000
Subject: [openstack-dev] [All] [Election] End TC Nominations & Begin Campaigning Period
Message-ID:

Hello All,

The TC Nomination period is now over. The official candidate list is available on the election website[0].

Now begins the campaigning period where candidates and electorate may debate their statements. Polling will start 2018-04-23T23:45.

Thank you,

[0] http://governance.openstack.org/election/#rocky-tc-candidates

From sangho at opennetworking.org Wed Apr 18 00:00:58 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 18 Apr 2018 09:00:58 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com>
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com>
Message-ID:

Ian and Gary,

Thank you so much for your answer. I will try what you suggested.

Thank you,

Sangho

> On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:
>
> Hi,
> You either need one of the onos core team or the neutron release team to add you. FYI - https://review.openstack.org/#/admin/groups/1001,members
> Thanks
> Gary
>
> From: Sangho Shin
> Reply-To: OpenStack List
> Date: Tuesday, April 17, 2018 at 5:01 AM
> To: OpenStack List
> Subject: [openstack-dev] [openstack-infra] How to take over a project?
>
> Dear OpenStack Infra team,
>
> I would like to know how to take over an OpenStack project.
> I am a committer of the networking-onos project (https://github.com/openstack/networking-onos), and I would like to take over the project.
> The current maintainer (cc'd) has already agreed with that.
>
> Please let me know the process to take over (or change the maintainer of) the project.
>
> BTW, it looks like even the current maintainer cannot create a new branch of the code. How can we get the authority to create a new branch?
>
> Thank you,
>
> Sangho

From wangpeihuixyz at 126.com Wed Apr 18 00:29:29 2018
From: wangpeihuixyz at 126.com (Frank Wang)
Date: Wed, 18 Apr 2018 08:29:29 +0800 (CST)
Subject: [openstack-dev] [openstack-infra] How to take over a project?
Message-ID: <3790b7cd.c9b.162d62802bb.Coremail.wangpeihuixyz@126.com>

Hi Sangho,

I'm excited to see the networking-onos project moving forward again.
That's very cool; please let me know if there are any features that need to be done. Hope you can get rid of this problem quickly.

Thanks,
Frank.

At 2018-04-18 08:00:58, "Sangho Shin" wrote:

> Ian and Gary,
>
> Thank you so much for your answer. I will try what you suggested.
>
> Thank you,
>
> Sangho
>
>> On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:
>>
>> Hi,
>> You either need one of the onos core team or the neutron release team to add you. FYI - https://review.openstack.org/#/admin/groups/1001,members
>> Thanks
>> Gary
>>
>> From: Sangho Shin
>> Reply-To: OpenStack List
>> Date: Tuesday, April 17, 2018 at 5:01 AM
>> To: OpenStack List
>> Subject: [openstack-dev] [openstack-infra] How to take over a project?
>>
>> Dear OpenStack Infra team,
>>
>> I would like to know how to take over an OpenStack project.
>> I am a committer of the networking-onos project (https://github.com/openstack/networking-onos), and I would like to take over the project.
>> The current maintainer (cc'd) has already agreed with that.
>>
>> Please let me know the process to take over (or change the maintainer of) the project.
>>
>> BTW, it looks like even the current maintainer cannot create a new branch of the code. How can we get the authority to create a new branch?
>>
>> Thank you,
>>
>> Sangho

From emilien at redhat.com Wed Apr 18 01:43:52 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Tue, 17 Apr 2018 18:43:52 -0700
Subject: [openstack-dev] [tripleo] The Weekly Owl - 17th Edition
Message-ID:

Note: this is the seventeenth edition of a weekly update of what happens in TripleO.
The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing.
Any contributions and feedback are welcome.
Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129255.html

+---------------------------------+
| General announcements |
+---------------------------------+

+--> Rocky milestone 1 will be released this week (probably tomorrow)!
+--> (reminder) if you're looking at reproducing a CI job, check out: https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html

+------------------------------+
| Continuous Integration |
+------------------------------+

+--> Ruck is quiquell and Rover is panda. Please let them know any new CI issue.
+--> Master promotion is 1 day, Queens is 2 days, Pike is 4 days and Ocata is 5 days.
+--> Efforts around libvirt based multinode reproducer, see https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt
+--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP

+-------------+
| Upgrades |
+-------------+

+--> Progress on FFU CLI in tripleoclient, need reviews.
+--> Work for containerized undercloud upgrades has been merged. Testing will make progress after rocky-m1 (with new tags).
+--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status

+---------------+
| Containers |
+---------------+

+--> Still working on UX problems
+--> Still working on container workflow, good progress last week where container prepare isn't needed. Now working on container updates.
+--> Investigating how to bootstrap Docker + Registry before deploying containers
+--> Progress on routed networks support
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status

+----------------------+
| config-download |
+----------------------+

+--> Moving to config-download by default is coming very soon (once Ceph patches land).
+--> Ceph was migrated and all patches are going to merge this week.
+--> octavia/skydive migration is wip.
+--> Improving deploy-steps-tasks.j2 to improve playbook readability and memory consumption
+--> UI work is work in progress.
+--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status

+--------------+
| Integration |
+--------------+

+--> No updates.
+--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status

+---------+
| UI/CLI |
+---------+

+--> Efforts on config-download integration
+--> Added type to ansible-playbook messages (feedback needed)
+--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status

+---------------+
| Validations |
+---------------+

+--> No updates.
+--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status

+---------------+
| Networking |
+---------------+

+--> No updates this week.
+--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status

+--------------+
| Workflows |
+--------------+

+--> Need reviews, see etherpad.
+--> Working on workflows v2
+--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status

+-----------+
| Security |
+-----------+

+--> Tomorrow's meeting is about Storyboard migration and Secret management.
+--> More: https://etherpad.openstack.org/p/tripleo-security-squad

+------------+
| Owl fact |
+------------+

Did you know owls were watching you while working on TripleO?
Check this out: https://www.reddit.com/r/pics/comments/8cz8v0/owls_born_outside_of_office_window_wont_stop/
(Thanks Wes for the link)

Thanks all for reading and stay tuned!
--
Your fellow reporter, Emilien Macchi

From zhaochao1984 at gmail.com Wed Apr 18 01:49:52 2018
From: zhaochao1984 at gmail.com (赵超)
Date: Wed, 18 Apr 2018 09:49:52 +0800
Subject: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date
In-Reply-To: <149d6e3a-98f1-1415-a087-99e6d1ec2cb6@gmail.com>
References: <149d6e3a-98f1-1415-a087-99e6d1ec2cb6@gmail.com>
Message-ID:

Matt,

Thanks for the explanation. Currently we don't have much participation in the Trove project (so much fewer resources for the stable branches); from time to time some people would ask about or submit patches to the stable branches. I think it would be sufficient to ask the stable-maint-core team for help after the trove team has agreed on accepting them.

Thanks for approving the stable branch patches of trove and python-troveclient; we also have some in the trove-dashboard. Some check and gate jobs of python-troveclient and trove-dashboard in the stable branches don't work properly, so some patches couldn't get merged. I'll try to find ways to fix them.

On Wed, Apr 18, 2018 at 12:18 AM, Matt Riedemann wrote:
> On 4/16/2018 3:04 AM, 赵超 wrote:
>>
>> There are some patches to stable branches to the different trove repos,
>> and they are always progressing slowly, because none of the current trove
>> team core members are in the trove-stable-maint.
>> I tried to contact the previous PTLs about expanding the 'trove-stable-maint'
>> group and keeping the group up-to-date, however I got no response yet.
>>
>> I noticed that 'stable-maint-core' is always included in the individual
>> project -stable-maint group; could the core stable team help to update the
>> 'trove-stable-maint' group (adding me to it could be sufficient by now)?
>
> I've gone through the stable branch proposed changes for python-troveclient and trove.
>
> The reason the core team from a project isn't automatically core on the
> stable branches for that project is because the review criteria and what's
> appropriate for stable branches is different from changes on the master
> branch. The details are in the stable branch guide [1]. Until it becomes
> clear that there are people that are reviewing stable branch patches and
> understand the rules, they don't get added to the core team for stable.
>
> Until then, you can make review requests for stable patches in the ML like
> you have here, or in the #openstack-stable freenode IRC channel.
>
> I think over time once the stable-maint-core team can identify some people
> that have done a good job of doing early reviews and +1 (and -1
> inappropriate changes) then they can be added to the stable branch core
> team for the project.
>
> [1] https://docs.openstack.org/project-team-guide/stable-branches.html
>
> --
>
> Thanks,
>
> Matt

--
To be free as in freedom.

From zhang.lei.fly at gmail.com Wed Apr 18 01:51:51 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Wed, 18 Apr 2018 09:51:51 +0800
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project
Message-ID:

Since many of the contributors to the kolla-kubernetes project have moved on to other things, there has been no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and less wasted community resources, I propose to retire the kolla-kubernetes project.

More discussion about this can be found in the mail[0] and patch[1].

Please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018.

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
[1] https://review.openstack.org/552531

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

From whayutin at redhat.com Wed Apr 18 01:54:32 2018
From: whayutin at redhat.com (Wesley Hayutin)
Date: Wed, 18 Apr 2018 01:54:32 +0000
Subject: [openstack-dev] [tripleo] The Weekly Owl - 17th Edition
In-Reply-To:
Message-ID:

On Tue, Apr 17, 2018 at 9:44 PM Emilien Macchi wrote:

> Note: this is the seventeenth edition of a weekly update of what happens in TripleO.
> The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing.
> Any contributions and feedback are welcome.
> Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129255.html
>
> +---------------------------------+
> | General announcements |
> +---------------------------------+
>
> +--> Rocky milestone 1 will be released this week (probably tomorrow)!
> +--> (reminder) if you're looking at reproducing a CI job, check out: https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html
>
> +------------------------------+
> | Continuous Integration |
> +------------------------------+
>
> +--> Ruck is quiquell and Rover is panda. Please let them know any new CI issue.
> +--> Master promotion is 1 day, Queens is 2 days, Pike is 4 days and Ocata is 5 days.
> +--> Efforts around libvirt based multinode reproducer, see https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt
> +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting and https://goo.gl/D4WuBP

So, just to add some context. We would like to be able to set up libvirt guests in the same way nodepool nodes are set up, to allow the CI team and others to re-execute upstream CI jobs on libvirt using the exact workflow that upstream jobs take.

A reminder: the current reproduce scripts are documented here [1]. We plan on updating the current doc with our libvirt work when it is ready. Thanks all

[1] http://tripleo.org/contributor/reproduce-ci.html

> +-------------+
> | Upgrades |
> +-------------+
>
> +--> Progress on FFU CLI in tripleoclient, need reviews.
> +--> Work for containerized undercloud upgrades has been merged. Testing will make progress after rocky-m1 (with new tags).
> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
>
> +---------------+
> | Containers |
> +---------------+
>
> +--> Still working on UX problems
> +--> Still working on container workflow, good progress last week where container prepare isn't needed. Now working on container updates.
> +--> Investigating how to bootstrap Docker + Registry before deploying containers
> +--> Progress on routed networks support
> +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status
>
> +----------------------+
> | config-download |
> +----------------------+
>
> +--> Moving to config-download by default is coming very soon (once Ceph patches land).
> +--> Ceph was migrated and all patches are going to merge this week.
> +--> octavia/skydive migration is wip.
> +--> Improving deploy-steps-tasks.j2 to improve playbook readability and memory consumption
> +--> UI work is work in progress.
> +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status
>
> +--------------+
> | Integration |
> +--------------+
>
> +--> No updates.
> +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status
>
> +---------+
> | UI/CLI |
> +---------+
>
> +--> Efforts on config-download integration
> +--> Added type to ansible-playbook messages (feedback needed)
> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
>
> +---------------+
> | Validations |
> +---------------+
>
> +--> No updates.
> +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status
>
> +---------------+
> | Networking |
> +---------------+
>
> +--> No updates this week.
> +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status
>
> +--------------+
> | Workflows |
> +--------------+
>
> +--> Need reviews, see etherpad.
> +--> Working on workflows v2
> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status
>
> +-----------+
> | Security |
> +-----------+
>
> +--> Tomorrow's meeting is about Storyboard migration and Secret management.
> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad
>
> +------------+
> | Owl fact |
> +------------+
>
> Did you know owls were watching you while working on TripleO?
> Check this out: https://www.reddit.com/r/pics/comments/8cz8v0/owls_born_outside_of_office_window_wont_stop/
> (Thanks Wes for the link)
>
> Thanks all for reading and stay tuned!
> --
> Your fellow reporter, Emilien Macchi

From richwellum at gmail.com Wed Apr 18 02:10:33 2018
From: richwellum at gmail.com (Richard Wellum)
Date: Wed, 18 Apr 2018 02:10:33 +0000
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project
In-Reply-To:
Message-ID:

+1

On Tue, Apr 17, 2018 at 9:52 PM Jeffrey Zhang wrote:
> Since many of the contributors to the kolla-kubernetes project have moved on to other things, there has been no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and less wasted community resources, I propose to retire the kolla-kubernetes project.
>
> More discussion about this can be found in the mail[0] and patch[1].
>
> Please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018.
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
> [1] https://review.openstack.org/552531
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me

From zhang.lei.fly at gmail.com Wed Apr 18 02:48:15 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Wed, 18 Apr 2018 10:48:15 +0800
Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install
Message-ID:

Recently, the networking-odl package broke kolla's gate[0]. The direct cause is that ceilometer was added to networking-odl's requirements.txt file[1].

Then, when installing networking-odl with the upper-constraints.txt file, it raises an error like:

$ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt ./networking-odl
...
Collecting networking-bgpvpn>=8.0.0 (from networking-odl==12.0.1.dev54)
  Downloading http://pypi.doubanio.com/packages/5a/e5/995be0d53d472f739a7a0bb6c9d9fecbc4936148651aaf56d39f3b65b1f1/networking_bgpvpn-8.0.0-py2-none-any.whl (172kB)
    100% |████████████████████████████████| 174kB 12.0MB/s
Collecting ceilometer (from networking-odl==12.0.1.dev54)
Could not find a version that satisfies the requirement ceilometer (from networking-odl==12.0.1.dev54) (from versions: )
No matching distribution found for ceilometer (from networking-odl==12.0.1.dev54)

But if you just install from networking-odl's requirements.txt file, it works:

$ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r ./networking-odl/requirements.txt
...
Obtaining ceilometer from git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer (from -r networking-odl/requirements.txt (line 21))
  Cloning https://git.openstack.org/openstack/ceilometer (to revision master) to /home/jeffrey/.dotfiles/virtualenvs/test/src/ceilometer
...

Is this expected, and how could we fix this?

[0] https://bugs.launchpad.net/kolla/+bug/1764621
[1] https://github.com/openstack/networking-odl/blob/master/requirements.txt#L21

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

From hswayne77 at gmail.com Wed Apr 18 02:53:14 2018
From: hswayne77 at gmail.com (楊睿豪)
Date: Wed, 18 Apr 2018 10:53:14 +0800
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project
In-Reply-To:
Message-ID: <103AF24E-A222-4D80-8E1D-F9E8F27D41A3@gmail.com>

+1

Sent from my iPhone

> On 18 Apr 2018, at 10:10, Richard Wellum wrote:
>
> +1
>
>> On Tue, Apr 17, 2018 at 9:52 PM Jeffrey Zhang wrote:
>> Since many of the contributors to the kolla-kubernetes project have moved on to other things, there has been no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and less wasted community resources, I propose to retire the kolla-kubernetes project.
>>
>> More discussion about this can be found in the mail[0] and patch[1].
>>
>> Please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018.
>>
>> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
>> [1] https://review.openstack.org/552531
>>
>> --
>> Regards,
>> Jeffrey Zhang
>> Blog: http://xcodest.me
From chenxingcampus at outlook.com Wed Apr 18 03:37:17 2018
From: chenxingcampus at outlook.com (Chan Chason)
Date: Wed, 18 Apr 2018 03:37:17 +0000
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project
In-Reply-To:
References:
Message-ID:

+1

On 18 Apr 2018, at 09:51, Jeffrey Zhang wrote:

Since many of the contributors to the kolla-kubernetes project have moved on to other things, there has been no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and less wasted community resources, I propose to retire the kolla-kubernetes project.

More discussion about this can be found in the mail[0] and patch[1].

Please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018.

[0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
[1] https://review.openstack.org/552531

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me

From sangho at opennetworking.org Wed Apr 18 04:51:09 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 18 Apr 2018 13:51:09 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To: <3790b7cd.c9b.162d62802bb.Coremail.wangpeihuixyz@126.com>
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <3790b7cd.c9b.162d62802bb.Coremail.wangpeihuixyz@126.com>
Message-ID: <64A28203-EEF3-4FE2-AED5-6382306EAA97@opennetworking.org>

Frank,

Thank you for your support. I will make it work as quickly as possible and add many features, like ODL has. I will let you know if I need your help.

Sangho

> On 18 Apr 2018, at 9:29 AM, Frank Wang wrote:
>
> Hi Sangho,
>
> I'm excited to see the networking-onos project moving forward again. That's very cool; please let me know if there are any features that need to be done. Hope you can get rid of this problem quickly.
>
> Thanks,
> Frank.
>
> At 2018-04-18 08:00:58, "Sangho Shin" wrote:
> Ian and Gary,
> Thank you so much for your answer. I will try what you suggested.
> Thank you,
> Sangho
>
>> On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:
>> Hi,
>> You either need one of the onos core team or the neutron release team to add you. FYI - https://review.openstack.org/#/admin/groups/1001,members
>> Thanks
>> Gary
>>
>> From: Sangho Shin
>> Reply-To: OpenStack List
>> Date: Tuesday, April 17, 2018 at 5:01 AM
>> To: OpenStack List
>> Subject: [openstack-dev] [openstack-infra] How to take over a project?
>>
>> Dear OpenStack Infra team,
>>
>> I would like to know how to take over an OpenStack project.
>> I am a committer of the networking-onos project (https://github.com/openstack/networking-onos), and I would like to take over the project.
>> The current maintainer (cc'd) has already agreed with that.
>>
>> Please let me know the process to take over (or change the maintainer of) the project.
>>
>> BTW, it looks like even the current maintainer cannot create a new branch of the code. How can we get the authority to create a new branch?
>>
>> Thank you,
>>
>> Sangho
URL: From tasogabe at yahoo-corp.jp Wed Apr 18 06:36:59 2018 From: tasogabe at yahoo-corp.jp (Takashi Sogabe) Date: Wed, 18 Apr 2018 06:36:59 +0000 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: Message-ID: +1 From: Jeffrey Zhang [mailto:zhang.lei.fly at gmail.com] Sent: Wednesday, April 18, 2018 10:52 AM To: OpenStack Development Mailing List Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project Since many of the contributors in the kolla-kubernetes project are moved to other things. And there is no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and disruptive community resource, I propose to retire the kolla-kubernetes project. More discussion about this you can check the mail[0] and patch[1] please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018. [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html [1] https://review.openstack.org/552531 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Apr 18 06:42:45 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 18 Apr 2018 07:42:45 +0100 Subject: [openstack-dev] [openstack-ansible] Problems with Openstack services while migrating VMs In-Reply-To: References: Message-ID: Maybe worth posting on operators, but it looks like the scheduling of the action fails, which let me think that nova is not running fine somewhere. Why is the restart in a random order? That can cause issues, and that's the whole reason why we are orchestrating the deploys/upgrade with ansible. Also, why don't you follow our operations guide for recovering for a failure? Is there something wrong there? Regards, JP On 17 April 2018 at 14:07, Periyasamy Palanisamy wrote: > Hi, > > > > I’m trying to migrate controller and compute VMs installed with > Openstack-Ansible across systems with following approach. > > This is mainly to minimize the deployment time in the Jenkins CI > environment. > > > > Export steps: > > Power off the VMs gracefully. > virsh dumpxml ${node} > $EXPORT_PATH/${node}.xml > cp /var/lib/libvirt/images/${node}.qcow2 $EXPORT_PATH/$node.qcow2 > create a tar ball for the xml’s and qcow2 images. > > > > Import steps: > > cp ${node}.qcow2 /var/lib/libvirt/images/ > virsh define ${node}.xml > virsh start ${node} > > > > After the import of the VMs, The openstack services (neutron-server, DHCP > agent, Metering agent, Metadata agent, L3 agent, Open vSwitch agent, > nova-conductor and nova-comute) are started in random order. > > This causes neutron and nova is not able to find DHCP agent and compute > accordingly to bring up the tenant VM and throws the error [1]. > > > > I have also tried to boot compute VM followed by controller VM. It also > doesn’t help. > > Could you please let me know what is going wrong here ? 
From thomas.morin at orange.com Wed Apr 18 07:31:49 2018
From: thomas.morin at orange.com (thomas.morin at orange.com)
Date: Wed, 18 Apr 2018 09:31:49 +0200
Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install
In-Reply-To:
References:
Message-ID: <2096_1524036709_5AD6F465_2096_421_2_10f66045-ff65-2e11-7980-78b03b78488a@orange.com>
> > > [0] https://bugs.launchpad.net/kolla/+bug/1764621 > [1] > https://github.com/openstack/networking-odl/blob/master/requirements.txt#L21 > > - > ​​ > - > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. From bdobreli at redhat.com Wed Apr 18 08:16:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Wed, 18 Apr 2018 10:16:55 +0200 Subject: [openstack-dev] [tripleo] The Weekly Owl - 17th Edition In-Reply-To: References: Message-ID: <63079baf-398d-1844-8a17-b595b07653fd@redhat.com> On 4/18/18 3:54 AM, Wesley Hayutin wrote: > > > On Tue, Apr 17, 2018 at 9:44 PM Emilien Macchi > wrote: > > Note: this is the seventeeth edition of a weekly update of what > happens in TripleO. > The goal is to provide a short reading (less than 5 minutes) to > learn where we are and what we're doing. > Any contributions and feedback are welcome. > Link to the previous version: > http://lists.openstack.org/pipermail/openstack-dev/2018-April/129255.html > > +---------------------------------+ > | General announcements | > +---------------------------------+ > > +--> Rocky milestone 1 will be released this week (probably tomorrow)! > +--> (reminder) if you're looking at reproducing a CI job, checkout: > https://docs.openstack.org/tripleo-docs/latest/contributor/reproduce-ci.html > > +------------------------------+ > | Continuous Integration | > +------------------------------+ > > +--> Ruck is quiquell and Rover is panda. Please let them know any > new CI issue. > +--> Master promotion is 1 day, Queens is 2 days, Pike is 4 days and > Ocata is 5 days. > +--> Efforts around libvirt based multinode reproducer, see > https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt > +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meet > ing and https://goo.gl/D4WuBP > > > So, just to add some context.  We would like to be able to setup libvirt > guests in the same way nodepool nodes are setup to allow the ci team and > others to reexecute upstream ci jobs on libvirt using the exact workflow > that upstream jobs take. > > A reminder the current reproduce scripts are documented here [1].  We > plan on updating the current doc with our libvirt work when it is > ready.   Thanks all This is really great effort! Thank you for doing this. 
Will this also bring the deployed servers feature into libvirt setups?

> [1] http://tripleo.org/contributor/reproduce-ci.html
>
>> +-------------+
>> | Upgrades |
>> +-------------+
>>
>> +--> Progress on FFU CLI in tripleoclient, need reviews.
>> +--> Work for containerized undercloud upgrades has been merged.
>> Testing will make progress after rocky-m1 (with new tags).
>> +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status
>>
>> +---------------+
>> | Containers |
>> +---------------+
>>
>> +--> Still working on UX problems
>> +--> Still working on container workflow, good progress last week
>> where container prepare isn't needed. Now working on container updates.
>> +--> Investigating how to bootstrap Docker + Registry before
>> deploying containers
>> +--> Progress on routed networks support
>> +--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status
>>
>> +----------------------+
>> | config-download |
>> +----------------------+
>>
>> +--> Moving to config-download by default is coming very soon (once
>> Ceph patches land).
>> +--> Ceph was migrated and all patches are going to merge this week.
>> +--> octavia/skydive migration is wip.
>> +--> Improving deploy-steps-tasks.j2 to improve playbook readability
>> and memory consumption
>> +--> UI work is work in progress.
>> +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status
>>
>> +--------------+
>> | Integration |
>> +--------------+
>>
>> +--> No updates.
>> +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status
>>
>> +---------+
>> | UI/CLI |
>> +---------+
>>
>> +--> Efforts on config-download integration
>> +--> Added type to ansible-playbook messages (feedback needed)
>> +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status
>>
>> +---------------+
>> | Validations |
>> +---------------+
>>
>> +--> No updates.
>> +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status
>>
>> +---------------+
>> | Networking |
>> +---------------+
>>
>> +--> No updates this week.
>> +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status
>>
>> +--------------+
>> | Workflows |
>> +--------------+
>>
>> +--> Need reviews, see etherpad.
>> +--> Working on workflows v2
>> +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status
>>
>> +-----------+
>> | Security |
>> +-----------+
>>
>> +--> Tomorrow's meeting is about Storyboard migration and Secret
>> management.
>> +--> More: https://etherpad.openstack.org/p/tripleo-security-squad
>>
>> +------------+
>> | Owl fact |
>> +------------+
>>
>> Did you know owls were watching you while working on TripleO?
>> Check this out:
>> https://www.reddit.com/r/pics/comments/8cz8v0/owls_born_outside_of_office_window_wont_stop/
>> (Thanks Wes for the link)
>>
>> Thanks all for reading and stay tuned!
>> --
>> Your fellow reporter, Emilien Macchi

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando

From michel at redhat.com Wed Apr 18 09:02:10 2018
From: michel at redhat.com (Michel Peterson)
Date: Wed, 18 Apr 2018 12:02:10 +0300
Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install
In-Reply-To:
References:
Message-ID:

Hi, I'm one of the networking-odl core devs.

On Wed, Apr 18, 2018 at 5:48 AM, Jeffrey Zhang wrote:
>
> Recently, the networking-odl package broke kolla's gate[0]. The direct
> cause is that ceilometer was added to networking-odl's requirements.txt file[1].

This is an issue that concerns me too. First off, let me start with a simple solution, which is to install ceilometer from git before requiring networking-odl. Also, if networking-odl is installed through devstack's enable_plugin, this issue wouldn't arise (as the plugin.sh takes care of installing ceilometer before installing networking-odl).

Still, I see this as a problem; I just didn't find a way to solve it in general, short of ceilometer being published to PyPI. What happened then is I got caught up in other priorities that took bandwidth away from it and kind of forgot about it.

> Then, when installing networking-odl with the upper-constraints.txt file,
> it raises an error like:
>
> $ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt ./networking-odl
> ...
> Collecting networking-bgpvpn>=8.0.0 (from networking-odl==12.0.1.dev54)
> ...
> Collecting ceilometer (from networking-odl==12.0.1.dev54)
> Could not find a version that satisfies the requirement ceilometer
> (from networking-odl==12.0.1.dev54) (from versions: )
> No matching distribution found for ceilometer (from networking-odl==12.0.1.dev54)
>
> But if you just install from networking-odl's requirements.txt file, it works:
>
> $ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt -r ./networking-odl/requirements.txt
> ...
> Obtaining ceilometer from git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer
> (from -r networking-odl/requirements.txt (line 21))
>   Cloning https://git.openstack.org/openstack/ceilometer (to revision master) to /home/jeffrey/.dotfiles/virtualenvs/test/src/ceilometer
> ...
>
> Is this expected, and how could we fix this?

This is an interesting case of how pip works differently when installing from a requirements file versus from a folder (as happens with -e or the first command you issued). While in the former it knows how to resolve the dependencies correctly, in the latter it actually relies on the setup.py file to install.
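For concreteness, the requirements.txt line in question has this shape (reconstructed from the pip output quoted above, where the list archive rewrote the '@' as ' at '):

    -e git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer

When pip reads this from a requirements file, it clones the repository itself; 'pip install ./networking-odl' never reads the requirements file and instead hands dependency resolution to setup.py.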
That means it goes into pbr's realm and does not use the requirements at all. So let's analyse what happens in pbr.

Internally, what pbr does is read the requirements.txt, find the -e line, read its fragment that says #egg=ceilometer, and add that as a requirement [1]. What it fails to do, though, is instruct pip to fetch it from the git repository (as the requirements file would do). Sadly, this is not only a problem of pbr but also a limitation of the current state of pip and the corresponding PEPs, which apparently is already addressed for the long term with new PEPs and upcoming changes to pip.

How can we fix this? There are several ways I can think of off the top of my head:

1. When encountering edge cases like this one, first install that dependency with a manual pip run [2]
2. Modify pbr to handle these situations by handling the installation of those dependencies differently, with a workaround for the current functionality of pip
3. Leverage the work of corvus [3] to not only do what that patch is doing, but also include the checked-out path of the dependency in PIP_FIND_LINKS; that way pip knows how to solve the issue.

All these solutions have different sets of pros and cons, but I favor #3 as the long-term solution and #1 as the short term, and I think #2 requires further analysis by the pbr team.

Hope my contribution helped to clarify this issue.

[1]: https://github.com/openstack-dev/pbr/blob/7767c44ab1289ed7d1cc4f9e12986bef07865d5c/pbr/packaging.py#L168
[2]: https://github.com/openstack/networking-odl/blob/aa3acb23a5736f128fee0a514a588b9035551d88/devstack/entry_points#L259
[3]: https://review.openstack.org/549252/
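A minimal sketch of the short-term workaround (#1 above), assuming the environment can reach git.openstack.org and uses the same constraints file as before:

    $ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt \
          git+https://git.openstack.org/openstack/ceilometer@master#egg=ceilometer
    $ pip install -c https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt \
          ./networking-odl

Once ceilometer is already present in the environment, the requirement pbr generates is satisfied, and the folder install no longer tries (and fails) to fetch ceilometer from PyPI.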
From mikal at stillhq.com Wed Apr 18 09:07:14 2018
From: mikal at stillhq.com (Michael Still)
Date: Wed, 18 Apr 2018 19:07:14 +1000
Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat
In-Reply-To:
Message-ID:

I'm confused about the design of AE to be honest. Is there a good reason that this functionality couldn't be provided by cloud-init? I think there's a lot of cost in deviating from the industry standard, so the reasons to do so have to be really solid.

I'm also a bit confused by what seems to be support for streaming configuration. Is there any documentation on the design of AE anywhere?

Thanks,
Michael

On Tue, Apr 17, 2018 at 6:58 PM, Chen CH Ji wrote:

> For the question on AE documentation, it's open source in [1] and the
> documentation for how to build and use it is [2].
> Once our code is upstream, there is a set of documentation changes which
> will cover this image build process by adding some links there [3].
>
> You are right, we need the image to have our Active Engine. I think different
> arches and platforms might have their unique requirements, and our Active
> Engine is very similar to cloud-init, so there is no harm in adding it from
> the user's perspective.
> I think later we can upload an image to some place so anyone is able to
> consume it as a test image if they like,
> because different arches' images (e.g. x86 and s390x) can't be shared anyway.
>
> For the config drive format you mentioned, actually, as per the previous
> explanation and discussion with Michael and Dan,
> we found that iso9660 can be used (previously we made a bad assumption)
> and we already changed the patch in [4],
> so it's exactly the same as the other virt drivers you mentioned; we don't
> need a special format and iso9660 works perfectly for our driver.
>
> It makes sense to me that we are temporarily moved out of the runway. I
> suppose we can adjust the CI to enable run_ssh = true with config drive
> functionality very soon, and we will apply for review after that with the
> test results requested in our CI log.
>
> Thanks
>
> [1] https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure
> [2] http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux
> [3] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky
> [4] https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.py (line 104)
>
> Best Regards!
>
> Kevin (Chen) Ji 纪 晨
>
> Engineer, zVM Development, CSTL
> Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com
> Phone: +86-10-82451493
> Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District,
> Beijing 100193, PRC
>
> From: melanie witt
> To: openstack-dev at lists.openstack.org
> Date: 04/17/2018 09:21 AM
> Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat
>
> On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote:
> > >>>The "iso file" will not be inside the guest, but rather passed to
> > the guest as a block device, right?
> > Cloud-init expects to find a config drive with the following requirements
> > [1]; in order to make cloud-init able to consume a config drive, we
> > should be able to prepare it.
> > In some hypervisors, you can define something like the following for the
> > VM, and then the VM at startup is able to consume it,
> >
> > but in the z/VM case, it allows disks to be created during the VM create
> > (define) stage with no disk format set; it's the operating system's
> > responsibility to define the purpose of the disk. So what we do is:
> > 1) first, when we build the image, we create a small AE like cloud-init
> > whose only purpose is to get files from the z/VM internal pipe and handle
> > the config drive case
>
> What does AE stand for? So, this means in order to use the z/VM driver,
> users must have special images that will ensure the config drive will be
> readable by cloud-init. They can't use standard cloud images.
> > 2) During spawn we create the config drive on the nova-compute side, then
> > send the file to z/VM through the z/VM internal pipe (details omitted here)
> > 3) During startup of the virtual machine, the small AE is able to mount
> > the file as a loop device, and then in turn cloud-init is able to handle it
> >
> > Because this is our special case, we don't want to upload it to the
> > cloud-init community because of its uniqueness, and as far as we can tell,
> > no hook in the cloud-init mechanism is allowed either to let us
> > 'mount -o loop'; also, from the OpenStack point of view, except for this
> > small AE (which is documented well), there is nothing special or
> > inconsistent with other drivers.
> >
> > [1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225
>
> Where is the AE documented? How do users obtain it? What tools are they
> supposed to use to build images to use with the z/VM driver?
>
> That aside, from what I can see, the z/VM driver behaves unlike any other
> in-tree driver [0-5] in how it handles config drive. Drivers are expected
> to create the config drive and present it to the guest in iso9660 or vfat
> format without requiring a custom image, and the existing drivers are
> doing that.
>
> IMHO, if the z/VM driver can't be fixed to provide proper config drive
> support, we won't be able to approve the implementation patches. I would
> like to hear other opinions about it.
>
> I propose that we remove the z/VM driver blueprint from the runway at
> this time and place it back into the queue while work on the driver
> continues. At a minimum, we need to see z/VM CI running with
> [validation]run_validation = True in tempest.conf before we add the z/VM
> driver blueprint back into a runway in the future.
>
> Cheers,
> -melanie
>
> [0] https://github.com/openstack/nova/blob/888cd51/nova/virt/hyperv/vmops.py#L661
> [1] https://github.com/openstack/nova/blob/888cd51/nova/virt/ironic/driver.py#L974
> [2] https://github.com/openstack/nova/blob/888cd51/nova/virt/libvirt/driver.py#L3595
> [3] https://github.com/openstack/nova/blob/888cd51/nova/virt/powervm/media.py#L120
> [4] https://github.com/openstack/nova/blob/888cd51/nova/virt/vmwareapi/vmops.py#L854
> [5] https://github.com/openstack/nova/blob/888cd51/nova/virt/xenapi/vm_utils.py#L1151

--
Did this email leave you hoping to cause me pain? Good news! Sponsor me in city2surf 2018 and I promise to suffer greatly. http://www.madebymikal.com/city2surf-2018/

From michel at redhat.com Wed Apr 18 09:09:49 2018
From: michel at redhat.com (Michel Peterson)
Date: Wed, 18 Apr 2018 12:09:49 +0300
Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install
In-Reply-To:
Message-ID:

On Wed, Apr 18, 2018 at 12:02 PM, Michel Peterson wrote:
> How can we fix this? There are several ways I can think of off the top of my head:
>
> 1. When encountering edge cases like this one, first install that
> dependency with a manual pip run [2]
> 2. Modify pbr to handle these situations by handling the installation
> of those dependencies differently, with a workaround for the current
> functionality of pip
> 3. Leverage the work of corvus [3] to not only do what that patch
> is doing, but also include the checked-out path of the dependency in
> PIP_FIND_LINKS; that way pip knows how to solve the issue.
>
> All these solutions have different sets of pros and cons, but I favor #3 as
> the long-term solution and #1 as the short term, and I think #2 requires
> further analysis by the pbr team.

I forgot to add the reference on where to add the PIP_FIND_LINKS for solution #3, here you go:
https://github.com/openstack-dev/devstack/blob/f99d1771ba1882dfbb69186212a197edae3ef02c/inc/python#L362
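To illustrate how #3 could fit together end to end, a rough sketch (not the actual patch; the paths are hypothetical, and it assumes the sibling checkout is packaged first so pip's finder can see it):

    # package the sibling ceilometer checkout into a local sdist
    (cd /opt/stack/ceilometer && python setup.py sdist --dist-dir /tmp/local-dists)
    # PIP_FIND_LINKS is the environment equivalent of pip's --find-links option
    export PIP_FIND_LINKS=/tmp/local-dists
    pip install -c upper-constraints.txt ./networking-odl

With that in place, when pbr asks for 'ceilometer', pip can satisfy it from the local sdist instead of trying PyPI.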
From paul.bourke at oracle.com  Wed Apr 18 09:17:30 2018
From: paul.bourke at oracle.com (Paul Bourke)
Date: Wed, 18 Apr 2018 10:17:30 +0100
Subject: [openstack-dev] [kolla][vote] Retire kolla-kubernetes project
In-Reply-To:
References:
Message-ID:

+1

On 18/04/18 02:51, Jeffrey Zhang wrote:
> Since many of the contributors in the kolla-kubernetes project have moved
> on to other things, there has been no active contributor for months. On
> the other hand, there is another comparable project, openstack-helm, in
> the community. For less confusion and less division of community
> resources, I propose to retire the kolla-kubernetes project.
>
> More discussion about this can be found in the mail [0] and patch [1].
>
> Please vote +1 to retire the repo, or -1 not to retire the repo. The
> vote will be open until everyone has voted, or for 1 week, until April
> 25th, 2018.
>
> [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html
> [1] https://review.openstack.org/552531
>
> --
> Regards,
> Jeffrey Zhang
> Blog: http://xcodest.me
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From zhipengh512 at gmail.com  Wed Apr 18 09:26:58 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 18 Apr 2018 17:26:58 +0800
Subject: [openstack-dev] [cyborg] Weekly Team Meeting 2018.04.18
Message-ID:

Hi Team,

Weekly meeting as usual, starting at UTC 1400 in #openstack-cyborg. Initial
agenda as follows:

1. MS1 preparation
2. bug report on storyboard
3. Rocky critical spec review
4. open patches discussion

--
Zhipeng (Howard) Huang

Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From jichenjc at cn.ibm.com  Wed Apr 18 09:39:47 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 18 Apr 2018 17:39:47 +0800
Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To:
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

Added an update to the spec for the issues that were raised,
https://review.openstack.org/#/c/562154/, including:

1) how the config drive (metadata) is defined
2) the reason for the special AE and why it's needed, with some
   documentation and source code links
3) the neutron agent for z/VM

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: melanie witt
To: Dan Smith
Cc: openstack-dev at lists.openstack.org
Date: 04/18/2018 01:47 AM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

On Tue, 17 Apr 2018 06:40:35 -0700, Dan Smith wrote:
>> I propose that we remove the z/VM driver blueprint from the runway at
>> this time and place it back into the queue while work on the driver
>> continues.
>> At a minimum, we need to see z/VM CI running with
>> [validation]run_validation = True in tempest.conf before we add the
>> z/VM driver blueprint back into a runway in the future.
>
> Agreed. I also want to see the CI reporting cleaned up so that it's
> readable and consistent. Yesterday I pointed out some issues with the
> fact that the actual config files being used are not the ones being
> uploaded. There are also duplicate (but not actually identical) logs
> from all services being uploaded, including things like a full compute
> log from starting with the libvirt driver.

Yes, we definitely need to see all of these issues fixed.

> I'm also pretty troubled by the total lack of support for the metadata
> service. I know it's technically optional on our matrix, but it's a
> pretty important feature for a lot of scenarios, and it's also a
> dependency for other features that we'd like to have wider support for
> (like attached device metadata).
>
> Going back to the spec, I see very little detail on some of the things
> raised here, and very (very) little review back when it was first
> approved. I'd also like to see more detail be added to the spec about
> all of these things, especially around required special changes like
> this extra AE agent.

Agreed, can someone from the z/VM team please propose an update to the
driver spec to document these details?

Thanks,
-melanie

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jichenjc at cn.ibm.com  Wed Apr 18 09:44:30 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Wed, 18 Apr 2018 17:44:30 +0800
Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To:
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

Thanks for the concern, and we fully understand it. The major reason is
that cloud-init doesn't have a hook or plugin that runs before it starts to
read the config drive (ISO disk). z/VM is an old hypervisor, and there is
no way to do something like libvirt does and define an ISO-format disk in
an XML definition; instead, disks are defined in the definition of the
virtual machine, and it is left to the VM to decide their format.

So we need a way to tell cloud-init where to find the ISO file before
cloud-init starts, but without the AE we can't handle that. Some updates to
the spec with further information are here:
https://review.openstack.org/#/c/562154/

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Michael Still
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/18/2018 05:08 PM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

I'm confused about the design of AE to be honest.
Is there a good reason that this functionality couldn't be provided by
cloud-init? I think there's a lot of cost in deviating from the industry
standard, so the reasons to do so have to be really solid.

I'm also a bit confused by what seems to be support for streaming
configuration.

Is there any documentation on the design of AE anywhere?

Thanks,
Michael

On Tue, Apr 17, 2018 at 6:58 PM, Chen CH Ji wrote:

For the question on AE documentation, it's open source in [1], and the
documentation for how to build and use it is [2]. Once our code is
upstream, there is a set of documentation changes which will cover this
image build process by adding some links there [3].

You are right, we need the image to have our Active Engine. I think
different arches and platforms might have their own unique requirements,
and our Active Engine is very like cloud-init, so there is no harm in
adding it from the user's perspective. I think later we can upload an image
to some place so anyone is able to consume it as a test image if they like,
because different arches' images (e.g. x86 and s390x) can't be shared
anyway.

For the config drive format you mentioned: actually, per the previous
explanation and discussion with Michael and Dan, we found that iso9660 can
be used (previously we made a bad assumption) and we already changed the
patch in [4], so it's exactly the same as the other virt drivers you
mentioned; we don't need a special format, and iso9660 works perfectly for
our driver.

It makes sense to me that we are temporarily moved out of the runway. I
suppose we can adjust the CI to enable run_ssh = true with the config drive
functionality very soon, and we will apply for review after that with the
test results requested in our CI log.

Thanks

[1] https://github.com/mfcloud/python-zvm-sdk/blob/master/tools/share/zvmguestconfigure
[2] http://cloudlib4zvm.readthedocs.io/en/latest/makeimage.html#configuration-of-activation-engine-ae-in-zlinux
[3] https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/add-zvm-driver-rocky
[4] https://review.openstack.org/#/c/527658/33/nova/virt/zvm/utils.py line 104

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: melanie witt
To: openstack-dev at lists.openstack.org
Date: 04/17/2018 09:21 AM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

On Mon, 16 Apr 2018 14:56:06 +0800, Chen Ch Ji wrote:
> >>> The "iso file" will not be inside the guest, but rather passed to
> >>> the guest as a block device, right?
> Cloud-init expects to find a config drive with the following requirements
> [1]. In order to make cloud-init able to consume the config drive, we
> should be able to prepare it. On some hypervisors you can define something
> like the following for the VM, and then the VM on startup is able to
> consume it.
>
> But in the z/VM case, it allows disks to be created during the VM create
> (define) stage with no disk format set; it's the operating system's
> responsibility to define the purpose of the disk. So what we do is:
> 1) First, when we build the image, we create a small AE, like cloud-init,
> whose only purpose is to get files from the z/VM internal pipe and handle
> the config drive case.

What does AE stand for?

So, this means in order to use the z/VM driver, users must have special
images that will ensure the config drive will be readable by cloud-init.
They can't use standard cloud images.

> 2) During spawn we create the config drive on the nova-compute side, then
> send the file to z/VM through the z/VM internal pipe (detail omitted here).
> 3) During startup of the virtual machine, the small AE is able to mount
> the file as a loop device, and then in turn cloud-init is able to handle
> it.
>
> Because this is our special case, we don't want to upload it to the
> cloud-init community because of its uniqueness, and as far as we can tell,
> no hook in the cloud-init mechanism is allowed either to let us
> 'mount -o loop'. Also, from the openstack point of view, except for this
> small AE (which is documented well) there is no special thing and nothing
> inconsistent with other drivers.
>
> [1] https://github.com/number5/cloud-init/blob/master/cloudinit/sources/DataSourceConfigDrive.py#L225

Where is the AE documented? How do users obtain it? What tools are they
supposed to use to build images to use with the z/VM driver?

That aside, from what I can see, the z/VM driver behaves unlike any other
in-tree driver [0-5] in how it handles config drive. Drivers are expected
to create the config drive and present it to the guest in iso9660 or vfat
format without requiring a custom image, and the existing drivers are
doing that.

IMHO, if the z/VM driver can't be fixed to provide proper config drive
support, we won't be able to approve the implementation patches. I would
like to hear other opinions about it.

I propose that we remove the z/VM driver blueprint from the runway at this
time and place it back into the queue while work on the driver continues.
At a minimum, we need to see z/VM CI running with
[validation]run_validation = True in tempest.conf before we add the z/VM
driver blueprint back into a runway in the future.
Cheers,
-melanie

[0] https://github.com/openstack/nova/blob/888cd51/nova/virt/hyperv/vmops.py#L661
[1] https://github.com/openstack/nova/blob/888cd51/nova/virt/ironic/driver.py#L974
[2] https://github.com/openstack/nova/blob/888cd51/nova/virt/libvirt/driver.py#L3595
[3] https://github.com/openstack/nova/blob/888cd51/nova/virt/powervm/media.py#L120
[4] https://github.com/openstack/nova/blob/888cd51/nova/virt/vmwareapi/vmops.py#L854
[5] https://github.com/openstack/nova/blob/888cd51/nova/virt/xenapi/vm_utils.py#L1151

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Did this email leave you hoping to cause me pain? Good news! Sponsor me in
city2surf 2018 and I promise to suffer greatly.
http://www.madebymikal.com/city2surf-2018/

From derekh at redhat.com  Wed Apr 18 10:07:45 2018
From: derekh at redhat.com (Derek Higgins)
Date: Wed, 18 Apr 2018 11:07:45 +0100
Subject: [openstack-dev] [tripleo] Ironic Inspector in the overcloud
Message-ID:

Hi All,

I've been testing the ironic inspector containerised service in the
overcloud. The service essentially works, but there are a couple of hurdles
to tackle to set it up; the first of these is how to get the IPA kernel and
ramdisk where they need to be.

These need to be present in the ironic_pxe_http container to be served out
over http. What's the best way to get them there?

On the undercloud this is done by copying the files across the
filesystem [1][2] to /httpboot when we run "openstack overcloud image
upload", but on the overcloud an alternative is required. Could the files
be pulled into the container during setup?

thanks,
Derek

1 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L421-L433
2 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L181
From cdent+os at anticdent.org  Wed Apr 18 10:38:35 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Wed, 18 Apr 2018 11:38:35 +0100 (BST)
Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver
In-Reply-To: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org>
References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org>
Message-ID:

On Tue, 17 Apr 2018, Thierry Carrez wrote:

> So... Is there any specific topic you think we should cover in that
> meeting ?

I'll bite. I've got two topics that I think are pretty critical to address
with the various segments of the community that are the source of code
commits and reviews. Neither of these is specifically a Board issue, but
both are things that I think are pretty critical to discuss and address,
and topics that corporate members of the foundation ought to be worried
about.

These aren't fully formed ideas or questions, but I hope that before we
get to Vancouver they might evolve into concrete agenda items with the
usual feedback loops in email. I figure it is better to get the ball
rolling early than wait for perfection.

In the past on topics like this we've said "usually it's not the right
people at the board meeting to make headway on these kinds of things".
That's not our problem nor our responsibility. If the people at the board
meetings are designated representatives of the corporate members, it's
their responsibility to hear our issues and respond appropriately (even if
that means, over the long term, changing the people that are there). The
health and productivity of the community is what we should be concerned
with.

The topics:

1. What are we to do, as a community, when external pressures for results
are not matched by contribution of resources to produce those results?

There are probably several examples of this, but one that I'm particularly
familiar with is the drive to be able to satisfy complex hardware
topologies demanded by virtual network functions and related NFV use
cases. Within nova, and I suspect other projects, there is intense
pressure to make progress and intense effort that is removing resources
from other areas. But the amount of daily, visible contribution from the
interested companies [1] is _sometimes_ limited.

There are many factors in this, and obviously "throw more people at it" is
not a silver bullet, but there are things to talk about here that need the
input from all the segments.

2. We've made progress of late with acknowledging the concepts and
importance of casual contribution and "drive-by bug fixing" in our
changing environment. But we've not yet made enough progress in changing
the way we do work. Corporate foundation members need to be more aware and
more accepting that the people they provide to work "mostly upstream" need
to be focused on making other people capable of contribution, not on
getting features done. And those of us who do have the privilege of being
"mostly upstream" need to adjust our priorities.

Somewhere in that screed are, I think, some things worth talking about,
but they need to be distilled out.

[1] http://superuser.openstack.org/articles/5g-open-source-att/

--
Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
freenode: cdent                                    tw: @anticdent

From bdobreli at redhat.com  Wed Apr 18 13:22:04 2018
From: bdobreli at redhat.com (Bogdan Dobrelya)
Date: Wed, 18 Apr 2018 15:22:04 +0200
Subject: Re: [openstack-dev] [tripleo] Ironic Inspector in the overcloud
In-Reply-To:
References:
Message-ID: <4b7e509e-3c1c-6ba1-be1c-59708d22919a@redhat.com>

On 4/18/18 12:07 PM, Derek Higgins wrote:
> Hi All,
>
> I've been testing the ironic inspector containerised service in the
> overcloud. The service essentially works, but there are a couple of
> hurdles to tackle to set it up; the first of these is how to get the
> IPA kernel and ramdisk where they need to be.
>
> These need to be present in the ironic_pxe_http container to be served
> out over http. What's the best way to get them there?
>
> On the undercloud this is done by copying the files across the
> filesystem [1][2] to /httpboot when we run "openstack overcloud image
> upload", but on the overcloud an alternative is required. Could the
> files be pulled into the container during setup?

I'd prefer to keep bind-mounting the IPA kernel and ramdisk into a
container via the /var/lib/ironic/httpboot host path. So the question then
becomes how to deliver those to that path on overcloud nodes.

>
> thanks,
> Derek
>
> 1 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L421-L433
> 2 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L181

--
Best regards,
Bogdan Dobrelya,
Irc #bogdando
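A minimal sketch of the bind-mount Bogdan describes (illustrative only --
the image name is invented and TripleO drives containers through its own
tooling rather than a bare docker run):

    import subprocess

    subprocess.check_call([
        "docker", "run", "--detach", "--name", "ironic_pxe_http",
        # Host path holding the IPA kernel/ramdisk; whatever lands in
        # /var/lib/ironic/httpboot on the host is served by the container.
        "--volume", "/var/lib/ironic/httpboot:/var/lib/ironic/httpboot:ro",
        "ironic-pxe-image",  # illustrative image name
    ])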
From lbragstad at gmail.com  Wed Apr 18 13:41:12 2018
From: lbragstad at gmail.com (Lance Bragstad)
Date: Wed, 18 Apr 2018 08:41:12 -0500
Subject: [openstack-dev] [keystone] [all] openstack-specs process
Message-ID: <82d1c01d-1d46-0c34-c1c0-2395241dc871@gmail.com>

Hi all,

There is a specification proposed to openstack/openstack-specs that
summarizes some outcomes from the PTG in Dublin [0]. The keystone team had
some questions about what happens next regarding that specification in
this week's meeting [1].

What is the process for that repository? Is there a schedule? The Rocky
release schedule doesn't seem to have any deadlines for OpenStack-specific
specs [2]. I dug through the documentation in the repository, but I didn't
find anything describing the process [3][4].

[0] https://review.openstack.org/#/c/523973/
[1] http://eavesdrop.openstack.org/meetings/keystone/2018/keystone.2018-04-17-16.00.log.html#l-66
[2] https://releases.openstack.org/rocky/schedule.html
[3] https://specs.openstack.org/openstack/openstack-specs/readme.html
[4] https://specs.openstack.org/openstack/openstack-specs/contributing.html

From pkovar at redhat.com  Wed Apr 18 13:41:44 2018
From: pkovar at redhat.com (Petr Kovar)
Date: Wed, 18 Apr 2018 15:41:44 +0200
Subject: [openstack-dev] [docs] Documentation meeting today
Message-ID: <20180418154144.a1ed381823db95102c3ef8aa@redhat.com>

Hi all,

The docs meeting will continue today at 16:00 UTC in #openstack-doc, as
scheduled. For more details, see the meeting page:

https://wiki.openstack.org/wiki/Meetings/DocTeamMeeting

Cheers,
pk

From e0ne at e0ne.info  Wed Apr 18 13:51:46 2018
From: e0ne at e0ne.info (Ivan Kolodyazhny)
Date: Wed, 18 Apr 2018 16:51:46 +0300
Subject: [openstack-dev] [horizon] Meeting time and location are changed
In-Reply-To:
References:
Message-ID:

Hi,

It's just a reminder that we've got our meeting today at 15.00 UTC in the
openstack-meeting-alt channel.

Regards,
Ivan Kolodyazhny,
http://blog.e0ne.info/

On Mon, Apr 16, 2018 at 12:01 PM, Ivan Kolodyazhny wrote:

> Hi team,
>
> Please be informed that the Horizon meeting time has been changed [1].
> We'll have our weekly meetings at 15.00 UTC starting this week in the
> 'openstack-meeting-alt' channel. We had to change the meeting channel
> too, due to a conflict with others.
>
> [1] https://review.openstack.org/#/c/560979/
>
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/

From jaypipes at gmail.com  Wed Apr 18 14:06:21 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 18 Apr 2018 10:06:21 -0400
Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources
Message-ID:

Stackers,

Eric Fried and I are currently at an impasse regarding a decision that will
have far-reaching (and end-user facing) impacts to the placement API and
how nova interacts with the placement service from the nova scheduler.

We need to make a decision regarding the following question:

"By default, should resources/traits submitted in different numbered
request groups be supplied by separate resource providers?"

There are two competing proposals right now (both being amendments to the
original granular request groups spec [1]) which outline two different
viewpoints.

Viewpoint A [2], from me, is that like resources listed in different
granular request groups should mean that those resources will be sourced
from *different* resource providers.

In other words, if I issue the following request:

GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1

Then I am assured of getting allocation candidates that contain 2 distinct
resource providers consuming 1 VCPU from each provider.

Viewpoint B [3], from Eric, is that like resources listed in different
granular request groups should not necessarily mean that those resources
will be sourced from different resource providers. They *could* be sourced
from different providers, or they could be sourced from the same provider.
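As a schematic illustration (this is not the real placement API response
format, and the provider names are invented), consider that request
against a host modeled as two NUMA-node providers, numa0 and numa1, each
with VCPU inventory:

    # GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1

    # Viewpoint A: the two numbered groups must land on distinct providers.
    candidates_viewpoint_a = [
        {"numa0": {"VCPU": 1}, "numa1": {"VCPU": 1}},
    ]

    # Viewpoint B: the same split is allowed, but the groups may also
    # collapse onto a single provider.
    candidates_viewpoint_b = [
        {"numa0": {"VCPU": 1}, "numa1": {"VCPU": 1}},
        {"numa0": {"VCPU": 2}},  # both groups satisfied by numa0
        {"numa1": {"VCPU": 2}},  # both groups satisfied by numa1
    ]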
Both proposals include ways to specify whether certain resources or whole
request groups can be forced to be sourced from either a single provider
or from different providers.

In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2
query parameter that would indicate which resource classes in the
unnumbered request group may be split across multiple providers (remember
that viewpoint A considers different request groups to explicitly mean
different providers, so it doesn't make sense to have a can_split query
parameter for numbered request groups).

In Viewpoint B, the proposal is to have a separate_providers=1,2 query
parameter that would indicate that the identified request groups should be
sourced from separate providers. Request groups that are not listed in the
separate_providers query parameter are not guaranteed to be sourced from
different providers.

I know this is a complex subject, but I thought it was worthwhile trying
to explain the two proposals in as clear terms as I could muster.

I'm, quite frankly, a bit on the fence about the whole thing and would
just like to have a clear path forward so that we can start landing the
12+ patches that are queued up waiting for a decision on this.

Thoughts and opinions welcome.

Thanks,
-jay

[1] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
[2] https://review.openstack.org/#/c/560974/
[3] https://review.openstack.org/#/c/561717/

From dms at danplanet.com  Wed Apr 18 14:10:32 2018
From: dms at danplanet.com (Dan Smith)
Date: Wed, 18 Apr 2018 07:10:32 -0700
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To: (Chen CH Ji's message of "Wed, 18 Apr 2018 17:44:30 +0800")
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

> Thanks for the concern, and we fully understand it. The major reason is
> that cloud-init doesn't have a hook or plugin that runs before it starts
> to read the config drive (ISO disk). z/VM is an old hypervisor, and
> there is no way to do something like libvirt does and define an
> ISO-format disk in an XML definition; instead, disks are defined in the
> definition of the virtual machine, and it is left to the VM to decide
> their format.
>
> So we need a way to tell cloud-init where to find the ISO file before
> cloud-init starts, but without the AE we can't handle that. Some updates
> to the spec with further information are here:
> https://review.openstack.org/#/c/562154/

The ISO format does not come from telling libvirt something about it. The
host creates and formats the image, adds the data, and then attaches it to
the instance. The latter part is the only step that involves configuring
libvirt to attach the image to the instance. The rest is just stuff done
by nova-compute (and the virt driver) on the linux system it's running on.
That's the same arrangement as your driver, AFAICT.

You're asking the hypervisor (or something running on it) to grab the
image from glance, pre-filled with data. This is no different, except that
the configdrive image comes from the system running the compute service. I
don't see how it's any different in actual hypervisor mechanics, and thus
feel like there _has_ to be a way to do this without the AE magic agent.
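To make the host-side step concrete, here is a rough sketch of building an
iso9660 config drive image with no guest agent involved (a sketch only: it
assumes a genisoimage binary on the compute host and shows just the
metadata file; nova's real helper for this lives in
nova/virt/configdrive.py):

    import json
    import os
    import subprocess
    import tempfile

    def build_config_drive_iso(metadata, output_path):
        """Pack metadata into the standard config-drive layout as iso9660."""
        with tempfile.TemporaryDirectory() as staging:
            md_dir = os.path.join(staging, "openstack", "latest")
            os.makedirs(md_dir)
            with open(os.path.join(md_dir, "meta_data.json"), "w") as f:
                json.dump(metadata, f)
            subprocess.check_call([
                "genisoimage", "-o", output_path, "-quiet", "-J", "-r",
                "-V", "config-2",  # the volume label cloud-init looks for
                staging,
            ])
        return output_path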
Ironic is the weirdest one we have (IMHO and no offense to the ironic folks) and it can support configdrive properly. --Dan From openstack at fried.cc Wed Apr 18 14:30:42 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 09:30:42 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: Message-ID: <3f93ef68-c776-4bac-1d06-d4cfc1f3f4a6@fried.cc> Thanks for describing the proposals clearly and concisely, Jay. My preamble would have been that we need to support two use cases: - "explicit anti-affinity": make sure certain parts of my request land on *different* providers; - "any fit": make sure my instance lands *somewhere*. Both proposals address both use cases, but in different ways. > "By default, should resources/traits submitted in different numbered > request groups be supplied by separate resource providers?" I agree this question needs to be answered, but that won't necessarily inform which path we choose. Viewpoint B [3] is set up to go either way: either we're unrestricted by default and use a queryparam to force separation; or we're split by default and use a queryparam to allow the unrestricted behavior. Otherwise I agree with everything Jay said. -efried On 04/18/2018 09:06 AM, Jay Pipes wrote: > Stackers, > > Eric Fried and I are currently at an impasse regarding a decision that > will have far-reaching (and end-user facing) impacts to the placement > API and how nova interacts with the placement service from the nova > scheduler. > > We need to make a decision regarding the following question: > > > There are two competing proposals right now (both being amendments to > the original granular request groups spec [1]) which outline two > different viewpoints. > > Viewpoint A [2], from me, is that like resources listed in different > granular request groups should mean that those resources will be sourced > from *different* resource providers. > > In other words, if I issue the following request: > > GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1 > > Then I am assured of getting allocation candidates that contain 2 > distinct resource providers consuming 1 VCPU from each provider. > > Viewpoint B [3], from Eric, is that like resources listed in different > granular request groups should not necessarily mean that those resources > will be sourced from different resource providers. They *could* be > sourced from different providers, or they could be sourced from the same > provider. > > Both proposals include ways to specify whether certain resources or > whole request groups can be forced to be sources from either a single > provider or from different providers. > > In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2 > query parameter that would indicate which resource classes in the > unnumbered request group that may be split across multiple providers > (remember that viewpoint A considers different request groups to > explicitly mean different providers, so it doesn't make sense to have a > can_split query parameter for numbered request groups). > > In Viewpoint B, the proposal is to have a separate_providers=1,2 query > parameter that would indicate that the identified request groups should > be sourced from separate providers. Request groups that are not listed > in the separate_providers query parameter are not guaranteed to be > sourced from different providers. 
> I know this is a complex subject, but I thought it was worthwhile trying
> to explain the two proposals in as clear terms as I could muster.
>
> I'm, quite frankly, a bit on the fence about the whole thing and would
> just like to have a clear path forward so that we can start landing the
> 12+ patches that are queued up waiting for a decision on this.
>
> Thoughts and opinions welcome.
>
> Thanks,
> -jay
>
> [1] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
> [2] https://review.openstack.org/#/c/560974/
> [3] https://review.openstack.org/#/c/561717/
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From jaypipes at gmail.com  Wed Apr 18 14:38:54 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Wed, 18 Apr 2018 10:38:54 -0400
Subject: Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources
In-Reply-To: <3f93ef68-c776-4bac-1d06-d4cfc1f3f4a6@fried.cc>
References: <3f93ef68-c776-4bac-1d06-d4cfc1f3f4a6@fried.cc>
Message-ID: <8848a478-3614-e7d7-e2e0-5a046fddc136@gmail.com>

On 04/18/2018 10:30 AM, Eric Fried wrote:
> Thanks for describing the proposals clearly and concisely, Jay.
>
> My preamble would have been that we need to support two use cases:
>
> - "explicit anti-affinity": make sure certain parts of my request land
> on *different* providers;
> - "any fit": make sure my instance lands *somewhere*.
>
> Both proposals address both use cases, but in different ways.

Right.

It's important to point out that when we say "different providers" in this
ML post, we are specifically referring to different providers *within a
tree of providers*. We are not referring to completely separate compute
hosts. We are referring to things like multiple NUMA cells that expose CPU
resources on a single compute host, or multiple SR-IOV-enabled physical
functions that expose SR-IOV VFs for use by guests.

Best.
-jay

>> "By default, should resources/traits submitted in different numbered
>> request groups be supplied by separate resource providers?"
>
> I agree this question needs to be answered, but that won't necessarily
> inform which path we choose. Viewpoint B [3] is set up to go either
> way: either we're unrestricted by default and use a queryparam to force
> separation; or we're split by default and use a queryparam to allow the
> unrestricted behavior.
>
> Otherwise I agree with everything Jay said.
>
> -efried
>
> On 04/18/2018 09:06 AM, Jay Pipes wrote:
>> Stackers,
>>
>> Eric Fried and I are currently at an impasse regarding a decision that
>> will have far-reaching (and end-user facing) impacts to the placement
>> API and how nova interacts with the placement service from the nova
>> scheduler.
>>
>> We need to make a decision regarding the following question:
>>
>> "By default, should resources/traits submitted in different numbered
>> request groups be supplied by separate resource providers?"
>>
>> There are two competing proposals right now (both being amendments to
>> the original granular request groups spec [1]) which outline two
>> different viewpoints.
>>
>> Viewpoint A [2], from me, is that like resources listed in different
>> granular request groups should mean that those resources will be sourced
>> from *different* resource providers.
>> In other words, if I issue the following request:
>>
>> GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1
>>
>> Then I am assured of getting allocation candidates that contain 2
>> distinct resource providers consuming 1 VCPU from each provider.
>>
>> Viewpoint B [3], from Eric, is that like resources listed in different
>> granular request groups should not necessarily mean that those resources
>> will be sourced from different resource providers. They *could* be
>> sourced from different providers, or they could be sourced from the same
>> provider.
>>
>> Both proposals include ways to specify whether certain resources or
>> whole request groups can be forced to be sourced from either a single
>> provider or from different providers.
>>
>> In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2
>> query parameter that would indicate which resource classes in the
>> unnumbered request group may be split across multiple providers
>> (remember that viewpoint A considers different request groups to
>> explicitly mean different providers, so it doesn't make sense to have a
>> can_split query parameter for numbered request groups).
>>
>> In Viewpoint B, the proposal is to have a separate_providers=1,2 query
>> parameter that would indicate that the identified request groups should
>> be sourced from separate providers. Request groups that are not listed
>> in the separate_providers query parameter are not guaranteed to be
>> sourced from different providers.
>>
>> I know this is a complex subject, but I thought it was worthwhile trying
>> to explain the two proposals in as clear terms as I could muster.
>>
>> I'm, quite frankly, a bit on the fence about the whole thing and would
>> just like to have a clear path forward so that we can start landing the
>> 12+ patches that are queued up waiting for a decision on this.
>>
>> Thoughts and opinions welcome.
>>
>> Thanks,
>> -jay
>>
>> [1] http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
>> [2] https://review.openstack.org/#/c/560974/
>> [3] https://review.openstack.org/#/c/561717/
>>
>> __________________________________________________________________________
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mbooth at redhat.com  Wed Apr 18 14:56:09 2018
From: mbooth at redhat.com (Matthew Booth)
Date: Wed, 18 Apr 2018 15:56:09 +0100
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To:
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

On 18 April 2018 at 15:10, Dan Smith wrote:
>> Thanks for the concern, and we fully understand it. The major reason is
>> that cloud-init doesn't have a hook or plugin that runs before it
>> starts to read the config drive (ISO disk). z/VM is an old hypervisor,
>> and there is no way to do something like libvirt does and define an
>> ISO-format disk in an XML definition; instead, disks are defined in the
>> definition of the virtual machine, and it is left to the VM to decide
>> their format.
>>
>> So we need a way to tell cloud-init where to find the ISO file before
>> cloud-init starts, but without the AE we can't handle that. Some
>> updates to the spec with further information are here:
>> https://review.openstack.org/#/c/562154/
>
> The ISO format does not come from telling libvirt something about
> it. The host creates and formats the image, adds the data, and then
> attaches it to the instance. The latter part is the only step that
> involves configuring libvirt to attach the image to the instance. The
> rest is just stuff done by nova-compute (and the virt driver) on the
> linux system it's running on. That's the same arrangement as your
> driver, AFAICT.
>
> You're asking the hypervisor (or something running on it) to grab the
> image from glance, pre-filled with data. This is no different, except
> that the configdrive image comes from the system running the compute
> service. I don't see how it's any different in actual hypervisor
> mechanics, and thus feel like there _has_ to be a way to do this without
> the AE magic agent.

Having briefly read the cloud-init snippet which was linked earlier in
this thread, the requirement seems to be that the guest exposes the device
as /dev/srX or /dev/cdX. So I guess in order to make this work:

* You need to tell z/VM to expose the virtual disk as an optical disk
* The z/VM kernel needs to call optical disks /dev/srX or /dev/cdX

> I agree with Mikal that needing more agent behavior than cloud-init does
> a disservice to the users.
>
> I feel like we get a lot of "but no, my hypervisor is special!"
> reasoning when people go to add a driver to nova. So far, I think
> they're a lot more similar than people think. Ironic is the weirdest one
> we have (IMHO and no offense to the ironic folks) and it can support
> configdrive properly.

I was going to ask this. Even if the contents of the disk can't be
transferred in advance... how does ironic do this? There must be a way.

Matt

--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

From alifshit at redhat.com  Wed Apr 18 15:17:23 2018
From: alifshit at redhat.com (Artom Lifshitz)
Date: Wed, 18 Apr 2018 11:17:23 -0400
Subject: [openstack-dev] [nova] Default scheduler filters survey
Message-ID:

Hi all,

A CI issue [1] caused by tempest thinking some filters are enabled when
they're really not, and a proposed patch [2] to add
(Same|Different)HostFilter to the default filters as a workaround, has led
to a discussion about what filters should be enabled by default in nova.

The default filters should make sense for a majority of real world
deployments. Adding some filters to the defaults because CI needs them is
faulty logic, because the needs of CI are different to the needs of
operators/users, and the latter takes priority (though it's my
understanding that a good chunk of operators run tempest on their clouds
post-deployment as a way to validate that the cloud is working properly,
so maybe CI's and users' needs aren't that different after all).

To that end, we'd like to know what filters operators are enabling in
their deployments. If you can, please reply to this email with your
[filter_scheduler]/enabled_filters (or [DEFAULT]/scheduler_default_filters
if you're using an older version) option from nova.conf. Any other
comments are welcome as well :)

Cheers!

[1] https://bugs.launchpad.net/tempest/+bug/1628443
[2] https://review.openstack.org/#/c/561651/
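For reference, the option Artom is asking about takes a list of filter
class names; nova defines the default in Python, and the default set in
this era looks roughly like the following (the exact list varies by
release, so treat this as indicative rather than authoritative):

    enabled_filters = [
        "RetryFilter",
        "AvailabilityZoneFilter",
        "RamFilter",
        "DiskFilter",
        "ComputeFilter",
        "ComputeCapabilitiesFilter",
        "ImagePropertiesFilter",
        "ServerGroupAntiAffinityFilter",
        "ServerGroupAffinityFilter",
    ]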
From ianyrchoi at gmail.com  Wed Apr 18 15:19:57 2018
From: ianyrchoi at gmail.com (Ian Y. Choi)
Date: Thu, 19 Apr 2018 00:19:57 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To:
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com>
Message-ID: <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com>

Hello Sangho,

When I see the
https://review.openstack.org/#/admin/projects/openstack/networking-onos,access
page, it seems that networking-onos-release group members can create
stable branches for the repository.

By the way, since the networking-onos-release group does not include the
neutron release team group, I think the infra team can help to add the
neutron release team, and the neutron release team can then help to create
branches for the repo if there is no response from the current
networking-onos-release group members.

Might this help you?

With many thanks,

/Ian

Sangho Shin wrote on 4/18/2018 2:48 PM:
> Hello, Ian
>
> I am trying to add a new stable branch in networking-onos, following the
> page you suggested.
>
> Create stable/* Branch
>
> For OpenStack projects this should be performed by the OpenStack Release
> Management Team at the Release Branch Point. If you are managing
> branches for your project you may have permission to do this yourself.
>
> * Go to https://review.openstack.org/ and sign in
> * Select 'Admin', 'Projects', then the project
> * Select 'Branches'
> * Enter stable/<series> in the 'Branch Name' field, and HEAD as the
>   'Initial Revision', then press 'Create Branch'. Alternatively, you may
>   run git branch stable/<series> && git push gerrit stable/<series>
>
> However, after I log in, I cannot see 'Admin' and also I cannot create a
> new branch. Do I need additional authority for it?
> BTW, I am a member of the networking-onos-core team, as you know.
>
> Thank you,
>
> Sangho
>
>> On 18 Apr 2018, at 9:00 AM, Sangho Shin wrote:
>>
>> Ian and Gary,
>>
>> Thank you so much for your answer.
>> I will try what you suggested.
>>
>> Thank you,
>>
>> Sangho
>>
>>> On 17 Apr 2018, at 7:47 PM, Gary Kotton wrote:
>>>
>>> Hi,
>>> You either need one of the onos core team or the neutron release team
>>> to add you. FYI -
>>> https://review.openstack.org/#/admin/groups/1001,members
>>> Thanks
>>> Gary
>>>
>>> *From:* Sangho Shin
>>> *Reply-To:* OpenStack List
>>> *Date:* Tuesday, April 17, 2018 at 5:01 AM
>>> *To:* OpenStack List
>>> *Subject:* [openstack-dev] [openstack-infra] How to take over a project?
>>>
>>> Dear OpenStack Infra team,
>>>
>>> I would like to know how to take over an OpenStack project.
>>> I am a committer of the networking-onos project
>>> (https://github.com/openstack/networking-onos), and I would like to
>>> take over the project. The current maintainer (cc'd) has already
>>> agreed with that. Please let me know the process to take over (or
>>> change the maintainer of) the project.
>>> BTW, it looks like even the current maintainer cannot create a new
>>> branch of the codes. How can we get the authority to create a new
>>> branch?
>>>
>>> Thank you,
>>>
>>> Sangho
>>>
>>> __________________________________________________________________________
>>> OpenStack Development Mailing List (not for usage questions)
>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From sean.k.mooney at intel.com  Wed Apr 18 15:46:42 2018
From: sean.k.mooney at intel.com (Mooney, Sean K)
Date: Wed, 18 Apr 2018 15:46:42 +0000
Subject: Re: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources
In-Reply-To: <8848a478-3614-e7d7-e2e0-5a046fddc136@gmail.com>
References: <3f93ef68-c776-4bac-1d06-d4cfc1f3f4a6@fried.cc> <8848a478-3614-e7d7-e2e0-5a046fddc136@gmail.com>
Message-ID: <4B1BB321037C0849AAE171801564DFA688A54B2C@IRSMSX107.ger.corp.intel.com>
Viewpoint B [3] is set up to go either > > way: either we're unrestricted by default and use a queryparam to > > force separation; or we're split by default and use a queryparam to > > allow the unrestricted behavior. > > > > Otherwise I agree with everything Jay said. > > > > -efried > > > > On 04/18/2018 09:06 AM, Jay Pipes wrote: > >> Stackers, > >> > >> Eric Fried and I are currently at an impasse regarding a decision > >> that will have far-reaching (and end-user facing) impacts to the > >> placement API and how nova interacts with the placement service from > >> the nova scheduler. > >> > >> We need to make a decision regarding the following question: > >> > >> > >> There are two competing proposals right now (both being amendments > to > >> the original granular request groups spec [1]) which outline two > >> different viewpoints. > >> > >> Viewpoint A [2], from me, is that like resources listed in different > >> granular request groups should mean that those resources will be > >> sourced from *different* resource providers. > >> > >> In other words, if I issue the following request: > >> > >> GET /allocation_candidates?resources1=VCPU:1&resources2=VCPU:1 > >> > >> Then I am assured of getting allocation candidates that contain 2 > >> distinct resource providers consuming 1 VCPU from each provider. > >> > >> Viewpoint B [3], from Eric, is that like resources listed in > >> different granular request groups should not necessarily mean that > >> those resources will be sourced from different resource providers. > >> They *could* be sourced from different providers, or they could be > >> sourced from the same provider. > >> > >> Both proposals include ways to specify whether certain resources or > >> whole request groups can be forced to be sources from either a > single > >> provider or from different providers. > >> > >> In Viewpoint A, the proposal is to have a > >> can_split=RESOURCE1,RESOURCE2 query parameter that would indicate > >> which resource classes in the unnumbered request group that may be > >> split across multiple providers (remember that viewpoint A considers > >> different request groups to explicitly mean different providers, so > >> it doesn't make sense to have a can_split query parameter for > numbered request groups). > >> > >> In Viewpoint B, the proposal is to have a separate_providers=1,2 > >> query parameter that would indicate that the identified request > >> groups should be sourced from separate providers. Request groups > that > >> are not listed in the separate_providers query parameter are not > >> guaranteed to be sourced from different providers. > >> > >> I know this is a complex subject, but I thought it was worthwhile > >> trying to explain the two proposals in as clear terms as I could > muster. > >> > >> I'm, quite frankly, a bit on the fence about the whole thing and > >> would just like to have a clear path forward so that we can start > >> landing the > >> 12+ patches that are queued up waiting for a decision on this. > >> > >> Thoughts and opinions welcome. 
> >> > >> Thanks, > >> -jay > >> > >> > >> [1] > >> http://specs.openstack.org/openstack/nova- > specs/specs/rocky/approved/ > >> granular-resource-requests.html > >> > >> > >> [2] https://review.openstack.org/#/c/560974/ > >> > >> [3] https://review.openstack.org/#/c/561717/ > >> > >> > _____________________________________________________________________ > >> _____ OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: > >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > ______________________________________________________________________ > > ____ OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > _______________________________________________________________________ > ___ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev- > request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mriedemos at gmail.com Wed Apr 18 15:58:08 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 10:58:08 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: Message-ID: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> On 4/18/2018 9:06 AM, Jay Pipes wrote: > "By default, should resources/traits submitted in different numbered > request groups be supplied by separate resource providers?" Without knowing all of the hairy use cases, I'm trying to channel my inner sdague and some of the similar types of discussions we've had to changes in the compute API, and a lot of the time we've agreed that we shouldn't assume a default in certain cases. So for this case, if I'm requesting numbered request groups, why doesn't the API just require that I pass a query parameter telling it how I'd like those requests to be handled, either via affinity or anti-affinity. I'm specifically thinking about the changes to the compute API in microversion 2.37 for get-me-a-network where my initial design was to allow the 'networks' entry in the POST /servers request to remain optional and default to auto-allocate, but without going into details, that could be a problem. So ultimately we just decided that with >=2.37 you have to specify "networks" in POST /servers and we provided specific values for what the networks should be (specific network ID, port ID, auto or none). That way the user knows exactly what they are opting into rather than rely on default behavior in the server, which might bite you (or us) later if we ever want to change that default behavior. -- Thanks, Matt From jim at jimrollenhagen.com Wed Apr 18 16:01:05 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 18 Apr 2018 12:01:05 -0400 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: On Wed, Apr 18, 2018 at 10:56 AM, Matthew Booth wrote: > > > I agree with Mikal that needing more agent behavior than cloud-init does > > a disservice to the users. > > > > I feel like we get a lot of "but no, my hypervisor is special!" > > reasoning when people go to add a driver to nova. 
So far, I think > > they're a lot more similar than people think. Ironic is the weirdest one > > we have (IMHO and no offense to the ironic folks) and it can support > > configdrive properly. > > I was going to ask this. Even if the contents of the disk can't be > transferred in advance... how does ironic do this? There must be a > way. > I'm not sure if this is a rhetorical question, so I'll just answer it. :) We basically build the configdrive in nova-compute, then gzip and base64 it, and send it to ironic with the deploy request. On the ironic side, we unpack it and write it to the end of the boot disk. https://github.com/openstack/nova/blob/324899c621ee02d877122ba3412712ebb92831f2/nova/virt/ironic/driver.py#L952-L985 // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Apr 18 16:04:08 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Apr 2018 10:04:08 -0600 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: Message-ID: <5AD76C78.9030601@windriver.com> On 04/18/2018 08:06 AM, Jay Pipes wrote: > Stackers, > > Eric Fried and I are currently at an impasse regarding a decision that will have > far-reaching (and end-user facing) impacts to the placement API and how nova > interacts with the placement service from the nova scheduler. > > We need to make a decision regarding the following question: > > "By default, should resources/traits submitted in different numbered request > groups be supplied by separate resource providers?" I'm a bit conflicted. On the one hand if we're talking about virtual resources like "vCPUs" then there's really no reason why they couldn't be sourced from the same resource provider. On the other hand, once we're talking about *physical* resources it seems like it might be more common to want them to be coming from different resource providers. We may want memory spread across multiple NUMA nodes for higher aggregate bandwidth, we may want VFs from separate PFs for high availability. I'm half tempted to side with mriedem and say that there is no default and it must be explicit, but I'm concerned that this would make the requests a lot larger if you have to specify it for every resource. (Will follow up in a reply to mriedem's post.) > Both proposals include ways to specify whether certain resources or whole > request groups can be forced to be sources from either a single provider or from > different providers. > > In Viewpoint A, the proposal is to have a can_split=RESOURCE1,RESOURCE2 query > parameter that would indicate which resource classes in the unnumbered request > group that may be split across multiple providers (remember that viewpoint A > considers different request groups to explicitly mean different providers, so it > doesn't make sense to have a can_split query parameter for numbered request > groups). > In Viewpoint B, the proposal is to have a separate_providers=1,2 query parameter > that would indicate that the identified request groups should be sourced from > separate providers. Request groups that are not listed in the separate_providers > query parameter are not guaranteed to be sourced from different providers. In either viewpoint, is there a way to represent "I want two resource groups, with resource X in each group coming from different resource providers (anti-affinity) and resource Y from the same resource provider (affinity)? 
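For concreteness, I'm imagining something along these lines (the separate_providers param is from viewpoint B; the same_provider param is invented here purely for illustration):

GET /allocation_candidates?
 resources1=SRIOV_NET_VF:1
&resources2=SRIOV_NET_VF:1
&resources3=VCPU:2
&resources4=VCPU:2
&separate_providers=1,2
&same_provider=3,4

i.e. groups 1 and 2 (resource X) must be satisfied by different providers, while groups 3 and 4 (resource Y) must come from a single provider. As far as I can tell, only the separate_providers half of that exists in either proposal today.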
Chris From melwittt at gmail.com Wed Apr 18 16:04:26 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 18 Apr 2018 09:04:26 -0700 Subject: [openstack-dev] [nova] Rocky forum topics brainstorming In-Reply-To: <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com> References: <0037fa0a-aa31-1744-b050-783e8be81138@gmail.com> <4de18eaf-0b28-62aa-2935-a18d5ada160c@gmail.com> Message-ID: <97076e49-80fb-9889-8123-b141413f73b7@gmail.com> On Fri, 13 Apr 2018 08:00:31 -0700, Melanie Witt wrote: > +openstack-operators (apologies that I forgot to add originally) > > On Mon, 9 Apr 2018 10:09:12 -0700, Melanie Witt wrote: >> Hey everyone, >> >> Let's collect forum topic brainstorming ideas for the Forum sessions in >> Vancouver in this etherpad [0]. Once we've brainstormed, we'll select >> and submit our topic proposals for consideration at the end of this >> week. The deadline for submissions is Sunday April 15. >> >> Thanks, >> -melanie >> >> [0] https://etherpad.openstack.org/p/YVR-nova-brainstorming > > Just a reminder that we're collecting forum topic ideas to propose for > Vancouver and input from operators is especially important. Please add > your topics and/or comments to the etherpad [0] and we'll submit > proposals before the Sunday deadline. Here's a list of nova-related sessions that have been proposed: * CellsV2 migration process sync with operators: http://forumtopics.openstack.org/cfp/details/125 * nova/neutron + ops cross-project session: http://forumtopics.openstack.org/cfp/details/124 * Planning to use Placement in Cinder: http://forumtopics.openstack.org/cfp/details/89 * Building the path to extracting Placement from Nova: http://forumtopics.openstack.org/cfp/details/88 * Multi-attach introduction and future direction: http://forumtopics.openstack.org/cfp/details/101 * Making NFV features easier to use: http://forumtopics.openstack.org/cfp/details/146 A list of all proposed forum topics can be seen here: http://forumtopics.openstack.org Cheers, -melanie From chris.friesen at windriver.com Wed Apr 18 16:07:29 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Apr 2018 10:07:29 -0600 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: <5AD76D41.1040204@windriver.com> On 04/18/2018 09:58 AM, Matt Riedemann wrote: > On 4/18/2018 9:06 AM, Jay Pipes wrote: >> "By default, should resources/traits submitted in different numbered request >> groups be supplied by separate resource providers?" > > Without knowing all of the hairy use cases, I'm trying to channel my inner > sdague and some of the similar types of discussions we've had to changes in the > compute API, and a lot of the time we've agreed that we shouldn't assume a > default in certain cases. > > So for this case, if I'm requesting numbered request groups, why doesn't the API > just require that I pass a query parameter telling it how I'd like those > requests to be handled, either via affinity or anti-affinity. The request might get unwieldy if we have to specify affinity/anti-affinity for each resource. Maybe you could specify the default for the request and then optionally override it for each resource? I'm not current on the placement implementation details, but would this level of flexibility cause complexity problems in the code? 
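To sketch what I mean by a request-wide default with per-group overrides (syntax invented for illustration only):

GET /allocation_candidates?
 resources1=VCPU:2
&resources2=SRIOV_NET_VF:1
&resources3=SRIOV_NET_VF:1
&group_policy=isolate
&group_policy2=any
&group_policy3=any

where the bare group_policy sets the default for the whole request (isolate) and the numbered variants relax it for groups 2 and 3.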
Chris From ed at leafe.com Wed Apr 18 16:11:06 2018 From: ed at leafe.com (Ed Leafe) Date: Wed, 18 Apr 2018 11:11:06 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: <9EA2D9D0-D563-4E4F-A8BA-45AD49E20CEB@leafe.com> On Apr 18, 2018, at 10:58 AM, Matt Riedemann wrote: > > So for this case, if I'm requesting numbered request groups, why doesn't the API just require that I pass a query parameter telling it how I'd like those requests to be handled, either via affinity or anti-affinity. That makes a lot of sense. Since we are already suffixing the query param “resources” to indicate granular, why not add a clarifying term to that suffix? E.g., “resources1=“ -> “resources1d” (for ‘different’). The exact string we use can be bike shedded, but requiring it be specified sounds pretty sane to me. -- Ed Leafe From derekh at redhat.com Wed Apr 18 16:12:07 2018 From: derekh at redhat.com (Derek Higgins) Date: Wed, 18 Apr 2018 17:12:07 +0100 Subject: [openstack-dev] [tripleo] Ironic Inspector in the overcloud In-Reply-To: <4b7e509e-3c1c-6ba1-be1c-59708d22919a@redhat.com> References: <4b7e509e-3c1c-6ba1-be1c-59708d22919a@redhat.com> Message-ID: On 18 April 2018 at 14:22, Bogdan Dobrelya wrote: > On 4/18/18 12:07 PM, Derek Higgins wrote: > >> Hi All, >> >> I've been testing the ironic inspector containerised service in the >> overcloud, the service essentially works but there is a couple of hurdles >> to tackle to set it up, the first of these is how to get the IPA kernel >> and ramdisk where they need to be. >> >> These need to be be present in the ironic_pxe_http container to be served >> out over http, whats the best way to get them there? >> >> On the undercloud this is done by copying the files across the >> filesystem[1][2] to /httpboot when we run "openstack overcloud image >> upload", but on the overcloud an alternative is required, could the files >> be pulled into the container during setup? >> > > I'd prefer keep bind-mounting IPA kernel and ramdisk into a container via > the /var/lib/ironic/httpboot host-path. So the question then becomes how to > deliver those by that path for overcloud nodes? > Yup it does, I'm currently looking into using DeployArtifactURLs to download the files to the controller nodes > > >> thanks, >> Derek >> >> 1 - https://github.com/openstack/python-tripleoclient/blob/3cf44 >> eb/tripleoclient/v1/overcloud_image.py#L421-L433 >> 2 - https://github.com/openstack/python-tripleoclient/blob/3cf44 >> eb/tripleoclient/v1/overcloud_image.py#L181 >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.friesen at windriver.com Wed Apr 18 16:32:19 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Apr 2018 10:32:19 -0600 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: <5AD77313.30102@windriver.com> On 04/18/2018 09:17 AM, Artom Lifshitz wrote: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) RetryFilter ComputeFilter AvailabilityZoneFilter AggregateInstanceExtraSpecsFilter ComputeCapabilitiesFilter ImagePropertiesFilter NUMATopologyFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter PciPassthroughFilter From cdent+os at anticdent.org Wed Apr 18 16:38:21 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Wed, 18 Apr 2018 17:38:21 +0100 (BST) Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: Message-ID: On Wed, 18 Apr 2018, Jay Pipes wrote: > Stackers, Thanks for doing this. Seeing it gathered in one place like this, rather than scattered across IRC and gerrit, is way easier on my brain (for whatever reason, I don't know why). > Eric Fried and I are currently at an impasse regarding a decision that will > have far-reaching (and end-user facing) impacts to the placement API and how > nova interacts with the placement service from the nova scheduler. One thing has felt like it is missing (or at least not explicitly present) in this discussion. We talk about this as if it will have far reaching consequences, but it is not clear (to me at least) what those consequences are, other than needing to diddle yet more syntax further down the line. Are there deeper consequences than that? > In Viewpoint B, the proposal is to have a separate_providers=1,2 query > parameter that would indicate that the identified request groups should be > sourced from separate providers. Request groups that are not listed in the > separate_providers query parameter are not guaranteed to be sourced from > different providers. Do I recall correctly that part of the motivation here (in viewpoint B) is to be able to express: I'd like two disparate chunks of the same class of inventory, and while having them come from different providers is okay, it is also okay if they came from the same? If that's correct, then that, to me, is fairly compelling if we are thinking about placement over the long term, outside the context of solely satisfying nova workload placement. > I'm, quite frankly, a bit on the fence about the whole thing and would just > like to have a clear path forward so that we can start landing the 12+ > patches that are queued up waiting for a decision on this. yes -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Wed Apr 18 16:41:03 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 11:41:03 -0500 Subject: [openstack-dev] [nova] Concern about trusted certificates API change Message-ID: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> There is a compute REST API change proposed [1] which will allow users to pass trusted certificate IDs to be used with validation of images when creating or rebuilding a server. The trusted cert IDs are based on certificates stored in some key manager, e.g. Barbican.
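Usage would look roughly like this (the trusted_image_certificates field name is taken from the proposed change; the certificate IDs are made up):

POST /servers
{
    "server": {
        "name": "trusted-vm",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "networks": "auto",
        "trusted_image_certificates": [
            "674736e3-f25c-405c-8362-bbf991e0ce0a",
            "9f7c718b-1f11-4f55-8a27-7c7d87dbd2ca"
        ]
    }
}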
The full nova spec is here [2]. The main concern I have is that trusted certs will not be supported for volume-backed instances, and some clouds only support volume-backed instances. The way the patch is written is that if the user attempts to boot from volume with trusted certs, it will fail. In thinking about a semi-discoverable/configurable solution, I'm thinking we should add a policy rule around trusted certs to indicate if they can be used or not. Beyond the boot from volume issue, the only virt driver that supports trusted cert image validation is the libvirt driver, so any cloud that's not using the libvirt driver simply cannot support this feature, regardless of boot from volume. We have added similar policy rules in the past for backend-dependent features like volume extend and volume multi-attach, so I don't think this is a new issue. Alternatively we can block the change in nova until it supports boot from volume, but that would mean needing to add trusted cert image validation support into cinder along with API changes, effectively killing the chance of this getting done in nova in Rocky, and this blueprint has been around since at least Ocata so it would be good to make progress if possible. [1] https://review.openstack.org/#/c/486204/ [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html -- Thanks, Matt From myoung at redhat.com Wed Apr 18 16:51:33 2018 From: myoung at redhat.com (Matt Young) Date: Wed, 18 Apr 2018 12:51:33 -0400 Subject: [openstack-dev] [tripleo] CI & Tempest squad planning summary: Sprint 12 Message-ID: Greetings, The TripleO CI & Tempest squads have begun work on Sprint 12. This is a 3 week sprint. The Ruck & Rover for this sprint are quiquell and panda. ## CI Squad Goals: "As a developer, I want reproduce a multinode CI job on a bare metal host using libvirt" "Enable the same workflows used in upstream CI / reproducer using libvirt instead of OVB" Epic: https://trello.com/c/JEGLSVh6/323-reproduce-ci-jobs-with-libvirt Tasks: https://tinyurl.com/yd93nz8p ## Tempest Squad Goals: "Run tempest on undercloud by using containerized and packaged tempest as well as against Heat, Mistral, Ironic, Tempest and python-tempestconf upstream" "Finish work items carried from sprint 11 or other side work going on." Epic: https://trello.com/c/ifIYQsxs/680-sprint-12-undercloud-tempest Tasks: https://tinyurl.com/y8k6yvbm For any questions please find us in #tripleo Thanks, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Wed Apr 18 16:57:43 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 18 Apr 2018 12:57:43 -0400 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> Message-ID: On 04/18/2018 12:41 PM, Matt Riedemann wrote: > There is a compute REST API change proposed [1] which will allow users > to pass trusted certificate IDs to be used with validation of images > when creating or rebuilding a server. The trusted cert IDs are based on > certificates stored in some key manager, e.g. Barbican. > > The full nova spec is here [2]. > > The main concern I have is that trusted certs will not be supported for > volume-backed instances, and some clouds only support volume-backed > instances. Yes. And some clouds only support VMWare vCenter virt driver. And some only support Hyper-V. 
I don't believe we should delay adding good functionality to (a large percentage of) clouds because it doesn't yet work with one virt driver or one piece of (badly-designed) functionality. > The way the patch is written is that if the user attempts to >> boot from volume with trusted certs, it will fail. And... I think that's perfectly fine. > In thinking about a semi-discoverable/configurable solution, I'm > thinking we should add a policy rule around trusted certs to indicate if > they can be used or not. Beyond the boot from volume issue, the only > virt driver that supports trusted cert image validation is the libvirt > driver, so any cloud that's not using the libvirt driver simply cannot > support this feature, regardless of boot from volume. We have added > similar policy rules in the past for backend-dependent features like > volume extend and volume multi-attach, so I don't think this is a new > issue. > > Alternatively we can block the change in nova until it supports boot > from volume, but that would mean needing to add trusted cert image > validation support into cinder along with API changes, effectively > killing the chance of this getting done in nova in Rocky, and this > blueprint has been around since at least Ocata so it would be good to > make progress if possible. As mentioned above, I don't want to derail progress until (if ever?) trusted certs achieves this magical works-for-every-driver-and-functionality state. It's not realistic to expect this to be done, IMHO, and just keeps good functionality out of the hands of many cloud users. Just my 2 cents. -jay > [1] https://review.openstack.org/#/c/486204/ > [2] > https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/nova-validate-certificates.html > > From chris.friesen at windriver.com Wed Apr 18 17:09:54 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 18 Apr 2018 11:09:54 -0600 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> Message-ID: <5AD77BE2.5040000@windriver.com> On 04/18/2018 10:57 AM, Jay Pipes wrote: > On 04/18/2018 12:41 PM, Matt Riedemann wrote: >> There is a compute REST API change proposed [1] which will allow users to pass >> trusted certificate IDs to be used with validation of images when creating or >> rebuilding a server. The trusted cert IDs are based on certificates stored in >> some key manager, e.g. Barbican. >> >> The full nova spec is here [2]. >> >> The main concern I have is that trusted certs will not be supported for >> volume-backed instances, and some clouds only support volume-backed instances. > > Yes. And some clouds only support VMWare vCenter virt driver. And some only > support Hyper-V. I don't believe we should delay adding good functionality to > (a large percentage of) clouds because it doesn't yet work with one virt driver or > one piece of (badly-designed) functionality. > > > The way the patch is written is that if the user attempts to >> boot from volume with trusted certs, it will fail. > > And... I think that's perfectly fine. If this happens, is it clear to the end-user that the reason the boot failed is that the cloud doesn't support trusted cert IDs for boot-from-vol?  If so, then I think that's totally fine. If the error message is unclear, then maybe we should just improve it.
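Something explicit in the error would do it, e.g. (wording hypothetical, but following the usual nova fault format):

HTTP/1.1 400 Bad Request
{
    "badRequest": {
        "message": "Image certificate validation is not supported for volume-backed servers.",
        "code": 400
    }
}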
Chris From mriedemos at gmail.com Wed Apr 18 17:11:58 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 12:11:58 -0500 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> Message-ID: <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> On 4/18/2018 11:57 AM, Jay Pipes wrote: >> There is a compute REST API change proposed [1] which will allow users >> to pass trusted certificate IDs to be used with validation of images >> when creating or rebuilding a server. The trusted cert IDs are based >> on certificates stored in some key manager, e.g. Barbican. >> >> The full nova spec is here [2]. >> >> The main concern I have is that trusted certs will not be supported >> for volume-backed instances, and some clouds only support >> volume-backed instances. > > Yes. And some clouds only support VMWare vCenter virt driver. And some > only support Hyper-V. I don't believe we should delay adding good > functionality to (large percentage of) clouds because it doesn't yet > work with one virt driver or one piece of (badly-designed) functionality. Maybe it wasn't clear but I'm not advocating that we block the change until volume-backed instances are supported with trusted certs. I'm suggesting we add a policy rule which allows deployers to at least disable it via policy if it's not supported for their cloud. > > The way the patch is written is that if the user attempts to >> boot from volume with trusted certs, it will fail. > > And... I think that's perfectly fine. I agree. I'm the one that noticed the issue and pointed out in the code review that we should explicitly fail the request if we can't honor it. > >> In thinking about a semi-discoverable/configurable solution, I'm >> thinking we should add a policy rule around trusted certs to indicate >> if they can be used or not. Beyond the boot from volume issue, the >> only virt driver that supports trusted cert image validation is the >> libvirt driver, so any cloud that's not using the libvirt driver >> simply cannot support this feature, regardless of boot from volume. We >> have added similar policy rules in the past for backend-dependent >> features like volume extend and volume multi-attach, so I don't think >> this is a new issue. >> >> Alternatively we can block the change in nova until it supports boot >> from volume, but that would mean needing to add trusted cert image >> validation support into cinder along with API changes, effectively >> killing the chance of this getting done in nova in Rocky, and this >> blueprint has been around since at least Ocata so it would be good to >> make progress if possible. > > As mentioned above, I don't want to derail progress until (if ever?) > trusted certs achieves this magical > works-for-every-driver-and-functionality state. It's not realistic to > expect this to be done, IMHO, and just keeps good functionality out of > the hands of many cloud users. Again, I'm not advocating that we block until boot from volume is supported. However, we have a lot of technical debt for "good functionality" added over the years that failed to consider volume-backed instances, like rebuild, rescue, backup, etc and it's painful to deal with that after the fact, as can be seen from the various specs proposed for adding that support to those APIs. 
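Concretely, I'm thinking of a rule along these lines (the rule name and default are just a sketch; DocumentedRuleDefault is the standard oslo.policy construct nova uses):

from oslo_policy import policy
from nova.policies import base

trusted_certs = policy.DocumentedRuleDefault(
    'os_compute_api:servers:create:trusted_certs',
    base.RULE_ADMIN_OR_OWNER,
    "Allow users to pass trusted image certificate IDs on server create and rebuild",
    [
        {'method': 'POST', 'path': '/servers'},
        {'method': 'POST', 'path': '/servers/{server_id}/action (rebuild)'},
    ])

A deployment that can't honor trusted certs could then set the rule to '!' and users would get a clear 403 up front instead of a late failure.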
-- Thanks, Matt From mriedemos at gmail.com Wed Apr 18 17:14:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 12:14:56 -0500 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: <5AD77BE2.5040000@windriver.com> References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> <5AD77BE2.5040000@windriver.com> Message-ID: <5806cc4f-18a3-4465-ad65-ddaa8c881a07@gmail.com> On 4/18/2018 12:09 PM, Chris Friesen wrote: > If this happens, is it clear to the end-user that the reason the boot > failed is that the cloud doesn't support trusted cert IDs for > boot-from-vol?  If so, then I think that's totally fine. If you're creating an image-backed server and requesting specific trusted certs, you'll get by the API but could land on a compute host that doesn't support image validation, like any non-libvirt driver, and at that point the trusted certs request is ignored. We could fix that the same way I've proposed we fix it for boot from volume with multiattach volumes in that the compute node resource provider would have a trait on it for the capability, and we'd add a placement request filter that detects, from the RequestSpec, that you're trying to do this specific thing that requires a compute that supports that capability, otherwise you get NoValidHost. -- Thanks, Matt From jaypipes at gmail.com Wed Apr 18 17:16:44 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 18 Apr 2018 13:16:44 -0400 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: <5806cc4f-18a3-4465-ad65-ddaa8c881a07@gmail.com> References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> <5AD77BE2.5040000@windriver.com> <5806cc4f-18a3-4465-ad65-ddaa8c881a07@gmail.com> Message-ID: <575b0d4f-a661-e0ed-7a7e-55772c76019f@gmail.com> On 04/18/2018 01:14 PM, Matt Riedemann wrote: > On 4/18/2018 12:09 PM, Chris Friesen wrote: >> If this happens, is it clear to the end-user that the reason the boot >> failed is that the cloud doesn't support trusted cert IDs for >> boot-from-vol?  If so, then I think that's totally fine. > > If you're creating an image-backed server and requesting specific > trusted certs, you'll get by the API but could land on a compute host > that doesn't support image validation, like any non-libvirt driver, and > at that point the trusted certs request is ignored. > > We could fix that the same way I've proposed we fix it for boot from > volume with multiattach volumes in that the compute node resource > provider would have a trait on it for the capability, and we'd add a > placement request filter that detects, from the RequestSpec, that you're > trying to do this specific thing that requires a compute that supports > that capability, otherwise you get NoValidHost. +1 Still looking for reviews on https://review.openstack.org/#/c/546713/. Thanks, -jay From Tim.Bell at cern.ch Wed Apr 18 17:20:13 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Wed, 18 Apr 2018 17:20:13 +0000 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: <5AD77313.30102@windriver.com> References: <5AD77313.30102@windriver.com> Message-ID: <2F4FE602-6AED-448C-A61C-619463F2520E@cern.ch> I'd suggest asking on the openstack-operators list since there is only a subset of operators who follow openstack-dev. 
Tim -----Original Message----- From: Chris Friesen Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Wednesday, 18 April 2018 at 18:34 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [nova] Default scheduler filters survey On 04/18/2018 09:17 AM, Artom Lifshitz wrote: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) RetryFilter ComputeFilter AvailabilityZoneFilter AggregateInstanceExtraSpecsFilter ComputeCapabilitiesFilter ImagePropertiesFilter NUMATopologyFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter PciPassthroughFilter __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jon at csail.mit.edu Wed Apr 18 17:25:42 2018 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Wed, 18 Apr 2018 13:25:42 -0400 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: <2F4FE602-6AED-448C-A61C-619463F2520E@cern.ch> References: <5AD77313.30102@windriver.com> <2F4FE602-6AED-448C-A61C-619463F2520E@cern.ch> Message-ID: <20180418172542.d3rpv7f3snnvknli@csail.mit.edu> On Wed, Apr 18, 2018 at 05:20:13PM +0000, Tim Bell wrote: :I'd suggest asking on the openstack-operators list since there is only a subset of operators who follow openstack-dev. I'd second that, which I'm (obviously) subscribed to both I do pay more attention to operators, and almost missed this ask. but here's mine: scheduler_default_filters=ComputeFilter,AggregateInstanceExtraSpecsFilter,AggregateCoreFilter,AggregateRamFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,ImagePropertiesFilter,PciPassthroughFilter :Tim : :-----Original Message----- :From: Chris Friesen :Reply-To: "OpenStack Development Mailing List (not for usage questions)" :Date: Wednesday, 18 April 2018 at 18:34 :To: "openstack-dev at lists.openstack.org" :Subject: Re: [openstack-dev] [nova] Default scheduler filters survey : : On 04/18/2018 09:17 AM, Artom Lifshitz wrote: : : > To that end, we'd like to know what filters operators are enabling in : > their deployment. If you can, please reply to this email with your : > [filter_scheduler]/enabled_filters (or : > [DEFAULT]/scheduler_default_filters if you're using an older version) : > option from nova.conf. 
Any other comments are welcome as well :) : : RetryFilter : ComputeFilter : AvailabilityZoneFilter : AggregateInstanceExtraSpecsFilter : ComputeCapabilitiesFilter : ImagePropertiesFilter : NUMATopologyFilter : ServerGroupAffinityFilter : ServerGroupAntiAffinityFilter : PciPassthroughFilter : : : __________________________________________________________________________ : OpenStack Development Mailing List (not for usage questions) : Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev : : :__________________________________________________________________________ :OpenStack Development Mailing List (not for usage questions) :Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe :http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Wed Apr 18 17:40:54 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 18 Apr 2018 13:40:54 -0400 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: On 04/18/2018 11:58 AM, Matt Riedemann wrote: > On 4/18/2018 9:06 AM, Jay Pipes wrote: >> "By default, should resources/traits submitted in different numbered >> request groups be supplied by separate resource providers?" > > Without knowing all of the hairy use cases, I'm trying to channel my > inner sdague and some of the similar types of discussions we've had to > changes in the compute API, and a lot of the time we've agreed that we > shouldn't assume a default in certain cases. > > So for this case, if I'm requesting numbered request groups, why doesn't > the API just require that I pass a query parameter telling it how I'd > like those requests to be handled, either via affinity or anti-affinity So, you're thinking maybe something like this? 1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2 capabilities. They must be on different child providers (different NUMA cells that are providing those dedicated CPUs). GET /allocation_candidates? resources1=PCPU:1&required1=HW_CPU_X86_AVX2 &resources2=PCPU:1 &proximity=isolate:1,2 2) Get me four dedicated CPUs. Two of those dedicated CPUs must have AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2 capability. They may come from the same provider (NUMA cell) or different providers. GET /allocation_candidates? resources1=PCPU:2&required1=HW_CPU_X86_AVX2 &resources2=PCPU:2&required2=HW_CPU_X86_SSE42 &proximity=any:1,2 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by separate physical function providers which have different traits marking separate physical networks. The dedicated CPUs must come from the same provider tree in which the physical function providers reside. GET /allocation_candidates? resources1=PCPU:2 &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B &proximity=isolate:2,3 &proximity=same_tree:1,2,3 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by separate physical function providers which have different traits marking separate physical networks. The dedicated CPUs must come from the same provider *subtree* in which the second group of VF resources are sourced. GET /allocation_candidates? 
resources1=PCPU:2 &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B &proximity=isolate:2,3 &proximity=same_subtree:1,3 4) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA purposes, none of the VFs should be sourced from the same provider. However, the VFs for each physical network should be within the same subtree (NUMA cell) as each other. GET /allocation_candidates? resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B &resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B &proximity=isolate:1,2,3,4 &proximity=same_subtree:1,2 &proximity=same_subtree:3,4 We can go even deeper if you'd like, since NFV means "never-ending feature velocity". Just let me know. -jay From jim at jimrollenhagen.com Wed Apr 18 17:44:08 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Wed, 18 Apr 2018 13:44:08 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found Message-ID: Hi all, We have a number of stable branch jobs failing[0] with an error about pep8 not being importable[1], when it's clearly installed[2]. We first saw this when installing networking-generic-switch on queens in our multinode grenade job. We hacked a fix there[3], as we couldn't figure it out and thought it was a fluke. Now it's showing up elsewhere. I suspected a new pycodestyle was the culprit (maybe it kills off the pep8 package somehow?) but pinning pycodestyle back a version didn't seem to help. Any ideas what might be going on here? I'm completely lost. P.S. if anyone has the side question of why pep8 is being imported at install time, it seems that pbr iterates over any entry points under 'distutils.commands' for any installed package. flake8 has one of these which must import pep8 to be resolved. I'm not sure *why* pbr needs to do this, but I'll assume it's necessary. [0] https://review.openstack.org/#/c/557441/ [1] http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508 [2] http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822 [3] https://review.openstack.org/#/c/561358/ // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Apr 18 18:01:29 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 13:01:29 -0500 Subject: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date In-Reply-To: References: <149d6e3a-98f1-1415-a087-99e6d1ec2cb6@gmail.com> Message-ID: On 4/17/2018 8:49 PM, 赵超 wrote: > Thanks for approving the stable branch patches of trove and > python-trove, we also have some in the trove-dashboard. I also went through the trove-dashboard ones, just need another stable-maint-core to approve those. 
https://review.openstack.org/#/q/project:openstack/trove-dashboard+status:open+NOT+branch:master -- Thanks, Matt From mriedemos at gmail.com Wed Apr 18 18:02:47 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 18 Apr 2018 13:02:47 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: <07d807ae-ff7c-0a9b-18ce-eb43ad5e5d09@gmail.com> On 4/18/2018 12:40 PM, Jay Pipes wrote: > We can go even deeper if you'd like, since NFV means "never-ending > feature velocity". Just let me know. Cool. So let's not use a GET for this and instead change it to a POST with a request body that can more cleanly describe what the user is requesting, which is something we talked about a long time ago. -- Thanks, Matt From dms at danplanet.com Wed Apr 18 18:17:00 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 18 Apr 2018 11:17:00 -0700 Subject: [openstack-dev] [nova] Concern about trusted certificates API change In-Reply-To: <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> (Matt Riedemann's message of "Wed, 18 Apr 2018 12:11:58 -0500") References: <617ae5d9-25c3-d4d0-1d1a-4e8d7602ccea@gmail.com> <1f88c15c-b75a-cc34-0299-3b47f78ba794@gmail.com> Message-ID: > Maybe it wasn't clear but I'm not advocating that we block the change > until volume-backed instances are supported with trusted certs. I'm > suggesting we add a policy rule which allows deployers to at least > disable it via policy if it's not supported for their cloud. That's fine with me, and provides an out for another issue I pointed out on the code review. Basically, the operator has no way to disable this feature. If they haven't set this up properly and have no desire to, a user reading the API spec and passing trusted certs will not be able to boot an instance and not really understand why. > I agree. I'm the one that noticed the issue and pointed out in the > code review that we should explicitly fail the request if we can't > honor it. I agree for the moment for sure, but it would obviously be nice not to open another gap we're not going to close. There's no reason this can't be supported for volume-backed instances, it just requires some help from cinder. I would think that it'd be nice if we could declare the "can't do this for reasons" response as a valid one regardless of the cause so we don't need another microversion for the future where volume-backed instances can do this. > Again, I'm not advocating that we block until boot from volume is > supported. However, we have a lot of technical debt for "good > functionality" added over the years that failed to consider > volume-backed instances, like rebuild, rescue, backup, etc and it's > painful to deal with that after the fact, as can be seen from the > various specs proposed for adding that support to those APIs. Totes agree. --Dan From dms at danplanet.com Wed Apr 18 18:20:09 2018 From: dms at danplanet.com (Dan Smith) Date: Wed, 18 Apr 2018 11:20:09 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: (Matthew Booth's message of "Wed, 18 Apr 2018 15:56:09 +0100") References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: > Having briefly read the cloud-init snippet which was linked earlier in > this thread, the requirement seems to be that the guest exposes the > device as /dev/srX or /dev/cdX. 
So I guess in order to make this work: > > * You need to tell z/VM to expose the virtual disk as an optical disk > * The z/VM kernel needs to call optical disks /dev/srX or /dev/cdX According to the docs, it doesn't need to be. You can indicate the configdrive via filesystem label which makes sense given we support vfat for it as well. http://cloudinit.readthedocs.io/en/latest/topics/datasources/configdrive.html#version-2 --Dan From doug at doughellmann.com Wed Apr 18 20:13:28 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 18 Apr 2018 16:13:28 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: References: Message-ID: <1524082352-sup-6326@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-04-18 13:44:08 -0400: > Hi all, > > We have a number of stable branch jobs failing[0] with an error about pep8 > not being importable[1], when it's clearly installed[2]. We first saw this > when installing networking-generic-switch on queens in our multinode > grenade job. We hacked a fix there[3], as we couldn't figure it out and > thought it was a fluke. Now it's showing up elsewhere. > > I suspected a new pycodestyle was the culprit (maybe it kills off the pep8 > package somehow?) but pinning pycodestyle back a version didn't seem to > help. > > Any ideas what might be going on here? I'm completely lost. > > P.S. if anyone has the side question of why pep8 is being imported at > install time, it seems that pbr iterates over any entry points under > 'distutils.commands' for any installed package. flake8 has one of these > which must import pep8 to be resolved. I'm not sure *why* pbr needs to do > this, but I'll assume it's necessary. > > [0] https://review.openstack.org/#/c/557441/ > [1] > http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508 > [2] > http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822 > [3] https://review.openstack.org/#/c/561358/ > > // jim This looks like some behavior that has been pulled out as part of pbr 4 (version 3 is being used in the stable branch). Perhaps we want to update the pbr constraint there to use the newer version? Doug From simon.leinen at switch.ch Wed Apr 18 20:20:45 2018 From: simon.leinen at switch.ch (Simon Leinen) Date: Wed, 18 Apr 2018 22:20:45 +0200 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: (Artom Lifshitz's message of "Wed, 18 Apr 2018 11:17:23 -0400") References: Message-ID: Artom Lifshitz writes: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) We have the following enabled on our semi-public (academic community) cloud, which runs on Newton: AggregateInstanceExtraSpecsFilter AvailabilityZoneFilter ComputeCapabilitiesFilter ComputeFilter ImagePropertiesFilter PciPassthroughFilter RamFilter RetryFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter (sorted alphabetically) Recently we've also been trying AggregateImagePropertiesIsolation ...but it looks like we'll replace it with our own because it's a bit awkward to use for our purpose (scheduling Windows instance to licensed compute nodes). -- Simon. 
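P.S. For anyone wondering what "our own" will look like, the skeleton of a custom filter is tiny. Roughly this (the class name, image property and aggregate metadata key are invented for the example; host_passes() is the standard BaseHostFilter interface):

from nova.scheduler import filters
from nova.scheduler.filters import utils

class WindowsLicenseFilter(filters.BaseHostFilter):
    """Only let Windows images land on license-tagged hosts."""

    def host_passes(self, host_state, spec_obj):
        image = spec_obj.image
        os_distro = image.properties.get('os_distro') if image else None
        if os_distro != 'windows':
            # Anything that isn't Windows can go anywhere.
            return True
        # Gather the 'windows_licensed' metadata values from all of the
        # aggregates this host belongs to.
        licensed = utils.aggregate_values_from_key(host_state, 'windows_licensed')
        return 'true' in licensed

Register it via scheduler_available_filters and add it to enabled_filters and you're done.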
From openstack at fried.cc Wed Apr 18 20:52:36 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 15:52:36 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: I can't tell if you're being facetious, but this seems sane, albeit complex. It's also extensible as we come up with new and wacky affinity semantics we want to support. I can't say I'm sold on requiring `proximity` qparams that cover every granular group - that seems like a pretty onerous burden to put on the user right out of the gate. That said, the idea of not having a default is quite appealing. Perhaps as a first pass we can require a single ?proximity={isolate|any} and build on it to support group numbers (etc.) in the future. One other thing inline below, not related to the immediate subject. On 04/18/2018 12:40 PM, Jay Pipes wrote: > On 04/18/2018 11:58 AM, Matt Riedemann wrote: >> On 4/18/2018 9:06 AM, Jay Pipes wrote: >>> "By default, should resources/traits submitted in different numbered >>> request groups be supplied by separate resource providers?" >> >> Without knowing all of the hairy use cases, I'm trying to channel my >> inner sdague and some of the similar types of discussions we've had to >> changes in the compute API, and a lot of the time we've agreed that we >> shouldn't assume a default in certain cases. >> >> So for this case, if I'm requesting numbered request groups, why >> doesn't the API just require that I pass a query parameter telling it >> how I'd like those requests to be handled, either via affinity or >> anti-affinity > So, you're thinking maybe something like this? > > 1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2 > capabilities. They must be on different child providers (different NUMA > cells that are providing those dedicated CPUs). > > GET /allocation_candidates? > >  resources1=PCPU:1&required1=HW_CPU_X86_AVX2 > &resources2=PCPU:1 > &proximity=isolate:1,2 > > 2) Get me four dedicated CPUs. Two of those dedicated CPUs must have > AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2 > capability. They may come from the same provider (NUMA cell) or > different providers. > > GET /allocation_candidates? > >  resources1=PCPU:2&required1=HW_CPU_X86_AVX2 > &resources2=PCPU:2&required2=HW_CPU_X86_SSE42 > &proximity=any:1,2 > > 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by > separate physical function providers which have different traits marking > separate physical networks. The dedicated CPUs must come from the same > provider tree in which the physical function providers reside. > > GET /allocation_candidates? > >  resources1=PCPU:2 > &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A > &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B > &proximity=isolate:2,3 > &proximity=same_tree:1,2,3 > > 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by > separate physical function providers which have different traits marking > separate physical networks. The dedicated CPUs must come from the same > provider *subtree* in which the second group of VF resources are sourced. > > GET /allocation_candidates? 
> >  resources1=PCPU:2 > &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A > &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B > &proximity=isolate:2,3 > &proximity=same_subtree:1,3 The 'same_subtree' concept requires a way to identify how far up the common ancestor can be. Otherwise, *everything* is in the same subtree. You could arbitrarily say "one step down from the root", but that's not very flexible. Allowing the user to specify a *number* of steps down from the root is getting closer, but it requires the user to have an understanding of the provider tree's exact structure, which is not ideal. The idea I've been toying with here is "common ancestor by trait". For example, you would tag your NUMA node providers with trait NUMA_ROOT, and then your request would include: ... &proximity=common_ancestor_by_trait:NUMA_ROOT:1,3 > > 4) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is > decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from > a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA > purposes, none of the VFs should be sourced from the same provider. > However, the VFs for each physical network should be within the same > subtree (NUMA cell) as each other. > > GET /allocation_candidates? > >  resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A > &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A > &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B > &resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B > &proximity=isolate:1,2,3,4 > &proximity=same_subtree:1,2 > &proximity=same_subtree:3,4 > > We can go even deeper if you'd like, since NFV means "never-ending > feature velocity". Just let me know. > > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at fried.cc Wed Apr 18 21:09:04 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 16:09:04 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <5AD76C78.9030601@windriver.com> References: <5AD76C78.9030601@windriver.com> Message-ID: Chris- Going to accumulate a couple of your emails and answer them. I could have answered them separately (anti-affinity). But in this case I felt it appropriate to provide responses in a single note (best fit). > I'm a bit conflicted.  On the one hand... > On the other hand, Right; we're in agreement that we need to handle both. > I'm half tempted to side with mriedem and say that there is no default > and it must be explicit, but I'm concerned that this would make the > requests a lot larger if you have to specify it for every resource. and > The request might get unwieldy if we have to specify affinity/anti- > affinity for each resource. Maybe you could specify the default for > the request and then optionally override it for each resource? Yes, good call. I'm favoring this as a first pass. See my other response. > In either viewpoint, is there a way to represent "I want two resource > groups, with resource X in each group coming from different resource > providers (anti-affinity) and resource Y from the same resource provider > (affinity)? As proposed, yes. 
Though if we go with the above (one flag to specify request-wide behavior) then there wouldn't be that ability beyond putting things in the un-numbered vs. numbered groups. So I guess my question is: do we have a use case *right now* that requires supporting "isolate for some, unrestricted for others"? > I'm not current on the placement implementation details, but would > this level of flexibility cause complexity problems in the code? Oh, implementing this is complex af. Here's what it takes *just* to satisfy the "any fit" version: https://review.openstack.org/#/c/517757/10/nova/api/openstack/placement/objects/resource_provider.py at 3599 I've made some progress implementing "proximity=isolate:X,Y,..." in my sandbox, and that's even hairier. Doing "proximity=isolate" (request-wide policy) would be a little easier. -efried From openstack at fried.cc Wed Apr 18 21:38:14 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 16:38:14 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <07d807ae-ff7c-0a9b-18ce-eb43ad5e5d09@gmail.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <07d807ae-ff7c-0a9b-18ce-eb43ad5e5d09@gmail.com> Message-ID: <22f04e35-8f1b-ac8b-1380-0a4fae935b1c@fried.cc> > Cool. So let's not use a GET for this and instead change it to a POST > with a request body that can more cleanly describe what the user is > requesting, which is something we talked about a long time ago. I kinda doubt we could agree on a format for this in the Rocky timeframe. But for the sake of curiosity, I'd like to see some strawman proposals for what that request body would look like. Here's a couple off the top: { "anti-affinity": [ { "resources": { $RESOURCE_CLASS: amount, ... }, "required": [ $TRAIT, ... ], "forbidden": [ $TRAIT, ... ], }, ... ], "affinity": [ ... ], "any fit": [ ... ], } Or maybe: { $ARBITRARY_USER_SPECIFIED_KEY_DESCRIBING_THE_GROUP: { "resources": { $RESOURCE_CLASS: amount, ... }, "required": [ $TRAIT, ... ], "forbidden": [ $TRAIT, ... ], }, ... "affinity_spec": { "isolate": [ $ARBITRARY_KEY, ... ], "any": [ $ARBITRARY_KEY, ... ], "common_subtree_by_trait": { "groups": [ $ARBITRARY_KEY, ... ], "traits": [ $TRAIT, ... ], }, } } (I think we also now need to fold multiple `member_of` in there somehow. And `limit` - does that stay in the querystring? Etc.) -efried From openstack at fried.cc Wed Apr 18 21:43:29 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 16:43:29 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <833750AB-5AE1-4533-BF33-F9BE7DD7C9D8@leafe.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <9EA2D9D0-D563-4E4F-A8BA-45AD49E20CEB@leafe.com> <833750AB-5AE1-4533-BF33-F9BE7DD7C9D8@leafe.com> Message-ID: <4b116d32-9385-6ddd-361a-6f272ff42d60@fried.cc> Sorry, addressing gaffe, bringing this back on-list... On 04/18/2018 04:36 PM, Ed Leafe wrote: > On Apr 18, 2018, at 4:11 PM, Eric Fried wrote: >>> That makes a lot of sense. Since we are already suffixing the query param “resources” to indicate granular, why not add a clarifying term to that suffix? E.g., “resources1=“ -> “resources1d” (for ‘different’). The exact string we use can be bike shedded, but requiring it be specified sounds pretty sane to me. >> I'm not understanding what you mean here. The issue at hand is how >> numbered groups interact with *each other*. 
If I said >> resources1s=...&resources2d=..., what am I saying about whether the >> resources in group 1 can or can't land in the same RP as those of group 2? > OK, sorry. What I meant by the ‘d’ was that that group’s resources must be from a different provider than any other group’s resources (anti-affinity). So in your example, you don’t care if group1 is from the same provider, but you do with group2, so that’s kind of a contradictory set-up (unless you had other groups). > > Instead, if the example were changed to resources1s=...&resources2d=..&resources3s=…, then groups 1 and 3 could be allocated from the same provider. > > -- Ed Leafe This is a cool idea.  It doesn't allow the same level of granularity as being able to list explicit group numbers to be [anti-]affinitized with specific other groups - but I'm not sure we need that.  I would have to think through the use cases with this in mind. -efried From jaypipes at gmail.com Wed Apr 18 22:20:14 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Wed, 18 Apr 2018 18:20:14 -0400 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: On 04/18/2018 04:52 PM, Eric Fried wrote: > I can't tell if you're being facetious, but this seems sane, albeit > complex. It's also extensible as we come up with new and wacky affinity > semantics we want to support. I was not being facetious. > I can't say I'm sold on requiring `proximity` qparams that cover every > granular group - that seems like a pretty onerous burden to put on the > user right out of the gate. I did that because Matt said he wanted no default/implicit behaviour -- everything should be explicit. > That said, the idea of not having a default > is quite appealing. Perhaps as a first pass we can require a single > ?proximity={isolate|any} and build on it to support group numbers (etc.) > in the future. Here's my problem. I have a feeling we're just going to go back and forth on this, as we have for weeks now, and not reach any conclusion that is satisfactory to everyone. And we'll delay, yet again, getting functionality into this release that serves 90% of use cases because we are obsessing over the 0.01% of use cases that may pop up later. Best, -jay > One other thing inline below, not related to the immediate subject. > > On 04/18/2018 12:40 PM, Jay Pipes wrote: >> On 04/18/2018 11:58 AM, Matt Riedemann wrote: >>> On 4/18/2018 9:06 AM, Jay Pipes wrote: >>>> "By default, should resources/traits submitted in different numbered >>>> request groups be supplied by separate resource providers?" >>> >>> Without knowing all of the hairy use cases, I'm trying to channel my >>> inner sdague and some of the similar types of discussions we've had to >>> changes in the compute API, and a lot of the time we've agreed that we >>> shouldn't assume a default in certain cases. >>> >>> So for this case, if I'm requesting numbered request groups, why >>> doesn't the API just require that I pass a query parameter telling it >>> how I'd like those requests to be handled, either via affinity or >>> anti-affinity >> So, you're thinking maybe something like this? >> >> 1) Get me two dedicated CPUs. One of those dedicated CPUs must have AVX2 >> capabilities. They must be on different child providers (different NUMA >> cells that are providing those dedicated CPUs). >> >> GET /allocation_candidates? 
>>
>>  resources1=PCPU:1&required1=HW_CPU_X86_AVX2
>> &resources2=PCPU:1
>> &proximity=isolate:1,2
>>
>> 2) Get me four dedicated CPUs. Two of those dedicated CPUs must have
>> AVX2 capabilities. Two of the dedicated CPUs must have the SSE 4.2
>> capability. They may come from the same provider (NUMA cell) or
>> different providers.
>>
>> GET /allocation_candidates?
>>
>>  resources1=PCPU:2&required1=HW_CPU_X86_AVX2
>> &resources2=PCPU:2&required2=HW_CPU_X86_SSE42
>> &proximity=any:1,2
>>
>> 3) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
>> separate physical function providers which have different traits marking
>> separate physical networks. The dedicated CPUs must come from the same
>> provider tree in which the physical function providers reside.
>>
>> GET /allocation_candidates?
>>
>>  resources1=PCPU:2
>> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
>> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
>> &proximity=isolate:2,3
>> &proximity=same_tree:1,2,3
>>
>> 4) Get me 2 dedicated CPUs and 2 SR-IOV VFs. The VFs must be provided by
>> separate physical function providers which have different traits marking
>> separate physical networks. The dedicated CPUs must come from the same
>> provider *subtree* in which the second group of VF resources are sourced.
>>
>> GET /allocation_candidates?
>>
>>  resources1=PCPU:2
>> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
>> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
>> &proximity=isolate:2,3
>> &proximity=same_subtree:1,3
>
> The 'same_subtree' concept requires a way to identify how far up the
> common ancestor can be. Otherwise, *everything* is in the same subtree.
> You could arbitrarily say "one step down from the root", but that's not
> very flexible. Allowing the user to specify a *number* of steps down
> from the root is getting closer, but it requires the user to have an
> understanding of the provider tree's exact structure, which is not ideal.
>
> The idea I've been toying with here is "common ancestor by trait". For
> example, you would tag your NUMA node providers with trait NUMA_ROOT,
> and then your request would include:
>
> ...
> &proximity=common_ancestor_by_trait:NUMA_ROOT:1,3
>
>>
>> 5) Get me 4 SR-IOV VFs. 2 VFs should be sourced from a provider that is
>> decorated with the CUSTOM_PHYSNET_A trait. 2 VFs should be sourced from
>> a provider that is decorated with the CUSTOM_PHYSNET_B trait. For HA
>> purposes, none of the VFs should be sourced from the same provider.
>> However, the VFs for each physical network should be within the same
>> subtree (NUMA cell) as each other.
>>
>> GET /allocation_candidates?
>>
>>  resources1=SRIOV_NET_VF:1&required1=CUSTOM_PHYSNET_A
>> &resources2=SRIOV_NET_VF:1&required2=CUSTOM_PHYSNET_A
>> &resources3=SRIOV_NET_VF:1&required3=CUSTOM_PHYSNET_B
>> &resources4=SRIOV_NET_VF:1&required4=CUSTOM_PHYSNET_B
>> &proximity=isolate:1,2,3,4
>> &proximity=same_subtree:1,2
>> &proximity=same_subtree:3,4
>>
>> We can go even deeper if you'd like, since NFV means "never-ending
>> feature velocity". Just let me know.
>> >> -jay >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Wed Apr 18 22:45:34 2018 From: openstack at fried.cc (Eric Fried) Date: Wed, 18 Apr 2018 17:45:34 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> Message-ID: <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> > I have a feeling we're just going to go back and forth on this, as we > have for weeks now, and not reach any conclusion that is satisfactory to > everyone. And we'll delay, yet again, getting functionality into this > release that serves 90% of use cases because we are obsessing over the > 0.01% of use cases that may pop up later. So I vote that, for the Rocky iteration of the granular spec, we add a single `proximity={isolate|any}` qparam, required when any numbered request groups are specified. I believe this allows us to satisfy the two NUMA use cases we care most about: "forced sharding" and "any fit". And as you demonstrated, it leaves the way open for finer-grained and more powerful semantics to be added in the future. -efried From juliaashleykreger at gmail.com Wed Apr 18 23:32:02 2018 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Wed, 18 Apr 2018 19:32:02 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: <1524082352-sup-6326@lrrr.local> References: <1524082352-sup-6326@lrrr.local> Message-ID: > > This looks like some behavior that has been pulled out as part of pbr 4 > (version 3 is being used in the stable branch). Perhaps we want to > update the pbr constraint there to use the newer version? > And it looks like doing that for stable/queens, at least from an ironic-inspector point of view[1], fixes the issue for the branch. The funny thing is, our ironic-inspector stable/pike -> stable/queens test job fails on stable/pike as well now, with the same failure [2]. That being said, we did observe during troubleshooting this issue last week that the pep8 dist-info was present, however the actual module contents were not present, which is why we worked around the issue forcing the module to be re-installed. We also had this occur today on an ironic stable/queens backport triggered grenade job when keystone was being upgraded [3]. If the answer is update the upper constraint, from my point of view, I suspect we're going to want to consider doing it across the board. Of course, the real question is what changed, that is causing test machines to think pep8 is present... 
:( [1]: https://review.openstack.org/#/c/562384/ [2]: http://logs.openstack.org/84/562384/2/check/ironic-inspector-grenade-dsvm/59f0605/logs/grenade.sh.txt.gz#_2018-04-18_21_53_20_527 [3]: http://logs.openstack.org/14/562314/1/check/ironic-grenade-dsvm/2227c41/logs/grenade.sh.txt.gz#_2018-04-18_16_55_00_456 From iwienand at redhat.com Wed Apr 18 23:36:50 2018 From: iwienand at redhat.com (Ian Wienand) Date: Thu, 19 Apr 2018 09:36:50 +1000 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> Message-ID: <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> On 04/19/2018 01:19 AM, Ian Y. Choi wrote: > By the way, since the networking-onos-release group has no neutron > release team group, I think infra team can help to include neutron > release team and neutron release team can help to create branches > for the repo if there is no reponse from current > networking-onos-release group member. This seems sane and I've added neutron-release to networking-onos-release. I'm hesitant to give advice on branching within a project like neutron as I'm sure there's stuff I'm not aware of; but members of the neutron-release team should be able to get you going. Thanks, -i From sangho at opennetworking.org Thu Apr 19 00:15:50 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Thu, 19 Apr 2018 09:15:50 +0900 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> Message-ID: Vikram, According to https://review.openstack.org/#/admin/groups/1002,members , you are the member of networking-onos release team. Can you please add me to the group so that I can create a new branch? Thank you, Sangho > On 19 Apr 2018, at 12:19 AM, Ian Y. Choi wrote: > > Hello Sangho, > > When I see https://review.openstack.org/#/admin/projects/openstack/networking-onos,access page, > it seems that networking-onos-release group members can create stable branches for the repository. > > By the way, since the networking-onos-release group has no neutron release team group, > I think infra team can help to include neutron release team and neutron release team can help to create branches > for the repo if there is no reponse from current networking-onos-release group member. > > > Might this help you? > > > With many thanks, > > /Ian > > Sangho Shin wrote on 4/18/2018 2:48 PM: >> Hello, Ian >> >> I am trying to add a new stable branch in the networking-onos, following the page you suggested. >> >> >> Create stable/* Branch¶ >> > >> >> For OpenStack projects this should be performed by the OpenStack Release Management Team at the Release Branch Point. If you are managing branches for your project you may have permission to do this yourself. >> >> * Go to https://review.openstack.org/ and sign in >> * Select ‘Admin’, ‘Projects’, then the project >> * Select ‘Branches’ >> * Enter |stable/| in the ‘Branch Name’ field, and |HEAD| as >> the ‘Initial Revision’, then press ‘Create Branch’. Alternatively, >> you may run |git branch stable/ && git push gerrit >> stable/| >> >> >> However, after I login, I cannot see the ‘Admin’ and also I cannot create a new branch. Do I need an additional authority for it? >> BTW, I am a member of networking-onos-core team, as you know. 
>> >> Thank you, >> >> Sangho >> >> >> >>> On 18 Apr 2018, at 9:00 AM, Sangho Shin >> wrote: >>> >>> Ian and Gary, >>> >>> Thank you so much for your answer. >>> I will try what you suggested. >>> >>> Thank you, >>> >>> Sangho >>> >>>> On 17 Apr 2018, at 7:47 PM, Gary Kotton >> wrote: >>>> >>>> Hi, >>>> You either need one of the ono core team or the neutron release team to add you. FYI -https://review.openstack.org/#/admin/groups/1001,members >>>> Thanks >>>> Gary >>>> *From:*Sangho Shin >> >>>> *Reply-To:*OpenStack List >> >>>> *Date:*Tuesday, April 17, 2018 at 5:01 AM >>>> *To:*OpenStack List >> >>>> *Subject:*[openstack-dev] [openstack-infra] How to take over a project? >>>> Dear OpenStack Infra team, >>>> I would like to know how to take over an OpenStack project. >>>> I am a committer of the networking-onos project (https://github.com/openstack/networking-onos ), and I would like to take over the project. >>>> The current maintainer (cc’d) has already agreed with that. >>>> Please let me know the process to take over (or change the maintainer of) the project. >>>> BTW, it looks like even the current maintainer cannot create a new branch of the codes. How can we get the authority to create a new branch? >>>> Thank you, >>>> Sangho >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe:OpenStack-dev-request at lists.openstack.org >?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org ?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From sangho at opennetworking.org Thu Apr 19 00:18:06 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Thu, 19 Apr 2018 09:18:06 +0900 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> Message-ID: <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> Ian, Thank you so much for your help. I have requested Vikram to add me to the release team. He should be able to help me. :-) Sangho > On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote: > > On 04/19/2018 01:19 AM, Ian Y. Choi wrote: >> By the way, since the networking-onos-release group has no neutron >> release team group, I think infra team can help to include neutron >> release team and neutron release team can help to create branches >> for the repo if there is no reponse from current >> networking-onos-release group member. > > This seems sane and I've added neutron-release to > networking-onos-release. > > I'm hesitant to give advice on branching within a project like neutron > as I'm sure there's stuff I'm not aware of; but members of the > neutron-release team should be able to get you going. 
> > Thanks, > > -i From zhaochao1984 at gmail.com Thu Apr 19 01:16:03 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Thu, 19 Apr 2018 09:16:03 +0800 Subject: [openstack-dev] [stable][trove] keep trove-stable-maint members up-to-date In-Reply-To: References: <149d6e3a-98f1-1415-a087-99e6d1ec2cb6@gmail.com> Message-ID: Matt, Thanks a lot! On Thu, Apr 19, 2018 at 2:01 AM, Matt Riedemann wrote: > On 4/17/2018 8:49 PM, 赵超 wrote: > >> Thanks for approving the stable branch patches of trove and python-trove, >> we also have some in the trove-dashboard. >> > > I also went through the trove-dashboard ones, just need another > stable-maint-core to approve those. > > https://review.openstack.org/#/q/project:openstack/trove-das > hboard+status:open+NOT+branch:master > > > -- > > Thanks, > > Matt > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark.kirkwood at catalyst.net.nz Thu Apr 19 04:47:58 2018 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Thu, 19 Apr 2018 16:47:58 +1200 Subject: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client? Message-ID: Swift has had storage policies for a while now. These are enabled by setting the 'X-Storage-Policy' header on a container. It looks to me like this is not possible using openstack-client (even in master branch) - while there is a 'set' operation for containers this will *only* set  'Meta-*' type headers. It seems to me that adding this would be highly desirable. Is it in the pipeline? If not I might see how much interest there is at my end for adding such - as (famous last words) it looks pretty straightforward to do. regards Mark From zhaochao1984 at gmail.com Thu Apr 19 05:49:39 2018 From: zhaochao1984 at gmail.com (=?UTF-8?B?6LW16LaF?=) Date: Thu, 19 Apr 2018 13:49:39 +0800 Subject: [openstack-dev] [tripleo] Recap of Python 3 testing session at PTG In-Reply-To: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> References: <7a57cbba-b09d-b528-8fe9-4ecbd0499d10@debian.org> Message-ID: On Fri, Apr 13, 2018 at 8:48 PM, Thomas Goirand wrote: > On 03/17/2018 09:34 AM, Emilien Macchi wrote: > > The other one that isn't Py3 ready *in stable* is trove-dashboard. I > have sent backport patches, but they were not approved because of the > stable gate having issues: > https://review.openstack.org/#/c/554680/ > https://review.openstack.org/#/c/554681/ > https://review.openstack.org/#/c/554682/ > https://review.openstack.org/#/c/554683/ > > The team had plans to make this pass (by temporarily fixing the gate) > but so far, it hasn't happened. > ​Just FYI, these patches have been merged already today​. Thanks for reporting this and pushing them to the Queens branch. -- To be free as in freedom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From masayuki.igawa at gmail.com Thu Apr 19 05:58:09 2018 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Thu, 19 Apr 2018 14:58:09 +0900 Subject: [openstack-dev] [qa] QA Office Hours 9:00 UTC is cancelled Message-ID: <20180419055808.ixrvasc5taaoai36@fastmail.com> Hi All, Today, QA Office Hours @9:00 UTC is canceled due to unavailability of members. Happy Hacking!! 
-- Masayuki Igawa -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From berendt at betacloud-solutions.de Thu Apr 19 06:21:39 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Thu, 19 Apr 2018 08:21:39 +0200 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: Message-ID: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de> +1 > On 18. Apr 2018, at 03:51, Jeffrey Zhang wrote: > > Since many of the contributors in the kolla-kubernetes project are moved to other things. And there is no active contributor for months. On the other hand, there is another comparable project, openstack-helm, in the community. For less confusion and disruptive community resource, I propose to retire the kolla-kubernetes project. > > More discussion about this you can check the mail[0] and patch[1] > > please vote +1 to retire the repo, or -1 not to retire the repo. The vote will be open until everyone has voted, or for 1 week until April 25th, 2018. > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html > [1] https://review.openstack.org/552531 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From dabarren at gmail.com Thu Apr 19 06:24:13 2018 From: dabarren at gmail.com (Eduardo Gonzalez) Date: Thu, 19 Apr 2018 08:24:13 +0200 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de> References: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de> Message-ID: +1 2018-04-19 8:21 GMT+02:00 Christian Berendt : > +1 > > > On 18. Apr 2018, at 03:51, Jeffrey Zhang > wrote: > > > > Since many of the contributors in the kolla-kubernetes project are moved > to other things. And there is no active contributor for months. On the > other hand, there is another comparable project, openstack-helm, in the > community. For less confusion and disruptive community resource, I propose > to retire the kolla-kubernetes project. > > > > More discussion about this you can check the mail[0] and patch[1] > > > > please vote +1 to retire the repo, or -1 not to retire the repo. The > vote will be open until everyone has voted, or for 1 week until April 25th, > 2018. 
> > > > [0] http://lists.openstack.org/pipermail/openstack-dev/2018- > March/128822.html > > [1] https://review.openstack.org/552531 > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- > Christian Berendt > Chief Executive Officer (CEO) > > Mail: berendt at betacloud-solutions.de > Web: https://www.betacloud-solutions.de > > Betacloud Solutions GmbH > Teckstrasse 62 / 70190 Stuttgart / Deutschland > > Geschäftsführer: Christian Berendt > Unternehmenssitz: Stuttgart > Amtsgericht: Stuttgart, HRB 756139 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From agarwalvishakha18 at gmail.com Thu Apr 19 06:50:58 2018 From: agarwalvishakha18 at gmail.com (vishakha agarwal) Date: Thu, 19 Apr 2018 12:20:58 +0530 Subject: [openstack-dev] Request for Freezer patch review Message-ID: Hi szaher, I have updated the patch. Kindly provide the feedback as I am altogether taking the backup of instances with same name https://review.openstack.org/# /c/559665 Thanks and Regards, Vishakha -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Thu Apr 19 07:28:03 2018 From: soulxu at gmail.com (Alex Xu) Date: Thu, 19 Apr 2018 15:28:03 +0800 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> Message-ID: I'm trying so hard to catch up the discussion since I lost few.., it is really hard... In my mind , I'm always thinking the request group is only about binding the trait and the resource class together. Also thinking about whether we need a explicit tree structure to describe the request. So sounds like proximity parameter right to me. 2018-04-19 6:45 GMT+08:00 Eric Fried : > > I have a feeling we're just going to go back and forth on this, as we > > have for weeks now, and not reach any conclusion that is satisfactory to > > everyone. And we'll delay, yet again, getting functionality into this > > release that serves 90% of use cases because we are obsessing over the > > 0.01% of use cases that may pop up later. > > So I vote that, for the Rocky iteration of the granular spec, we add a > single `proximity={isolate|any}` qparam, required when any numbered > request groups are specified. I believe this allows us to satisfy the > two NUMA use cases we care most about: "forced sharding" and "any fit". > And as you demonstrated, it leaves the way open for finer-grained and > more powerful semantics to be added in the future. 
> > -efried > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maciej.szwed at intel.com Thu Apr 19 07:40:50 2018 From: maciej.szwed at intel.com (Szwed, Maciej) Date: Thu, 19 Apr 2018 07:40:50 +0000 Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string In-Reply-To: References: <122B872DCF83AB4DB816E25A2C1AD08D8B9242BF@IRSMSX102.ger.corp.intel.com> Message-ID: <122B872DCF83AB4DB816E25A2C1AD08D8B92507E@IRSMSX102.ger.corp.intel.com> Hi Hamdy, Thanks for quick action. Regards Maciej From: Hamdy Khader [mailto:hamdyk at mellanox.com] Sent: Tuesday, April 17, 2018 12:51 PM To: OpenStack-dev at lists.openstack.org Subject: Re: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string Hi, I think you're right, will drop the split and push change soon. Regards, Hamdy ________________________________ From: Szwed, Maciej > Sent: Monday, April 16, 2018 4:51 PM To: OpenStack-dev at lists.openstack.org Subject: [openstack-dev] [Os-brick][Cinder] NVMe-oF NQN string Hi, I'm wondering why in Os-brick implementation of NVMe-oF in os_brick/initiator/connectors/nvme.py, line 97 we do split on 'nqn'. Connection properties, including 'nqn', are provided by Cinder driver and when user want to implement new driver that will use NVMe-of he/she needs to create NQN string with additional string and dot proceeding the desired NQN string. This additional string is unused across whole NVMe-oF implementation. This creates confusion for people when creating new Cinder driver. What was its purpose? Can we drop that split? Regards, Maciej -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at ericsson.com Thu Apr 19 08:38:03 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 19 Apr 2018 10:38:03 +0200 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> Message-ID: <1524127083.30697.0@smtp.office365.com> On Thu, Apr 19, 2018 at 12:45 AM, Eric Fried wrote: >> I have a feeling we're just going to go back and forth on this, as >> we >> have for weeks now, and not reach any conclusion that is >> satisfactory to >> everyone. And we'll delay, yet again, getting functionality into >> this >> release that serves 90% of use cases because we are obsessing over >> the >> 0.01% of use cases that may pop up later. > > So I vote that, for the Rocky iteration of the granular spec, we add a > single `proximity={isolate|any}` qparam, required when any numbered > request groups are specified. I believe this allows us to satisfy the > two NUMA use cases we care most about: "forced sharding" and "any > fit". > And as you demonstrated, it leaves the way open for finer-grained and > more powerful semantics to be added in the future. Can the proximity param specify relationship between the un-numbered and the numbered groups as well or only between numbered groups? 
Besides that I'm +1 about proximity={isolate|any}

Cheers,
gibi

>
> -efried
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From cdent+os at anticdent.org Thu Apr 19 09:28:14 2018
From: cdent+os at anticdent.org (Chris Dent)
Date: Thu, 19 Apr 2018 10:28:14 +0100 (BST)
Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources
In-Reply-To: <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc>
References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc>
Message-ID:

On Wed, 18 Apr 2018, Eric Fried wrote:

>> I have a feeling we're just going to go back and forth on this, as we
>> have for weeks now, and not reach any conclusion that is satisfactory to
>> everyone. And we'll delay, yet again, getting functionality into this
>> release that serves 90% of use cases because we are obsessing over the
>> 0.01% of use cases that may pop up later.
>
> So I vote that, for the Rocky iteration of the granular spec, we add a
> single `proximity={isolate|any}` qparam, required when any numbered
> request groups are specified. I believe this allows us to satisfy the
> two NUMA use cases we care most about: "forced sharding" and "any fit".
> And as you demonstrated, it leaves the way open for finer-grained and
> more powerful semantics to be added in the future.

The three most important priorities for me (highest last) are:

* being able to move forward quickly so we can learn from our
  mistakes sooner than later and not cause backlogs in our progress

* the common behavior should require the least syntax. Since (I
  hope) the common behavior has nothing to do with nested, and the
  syntax under discussion only comes into play on granular requests,
  it's not really germane here. But it bears repeating that we are
  outside the domain of useful stuff for most cloudy people, here.

* the API needs to have an easy mental process for translating from
  human utterances to a set of query parameters and vice versa. This
  is why I tend to prefer a single query parameter (like either of the
  two original proposals in this thread, or 'proximity') to encoded
  parameters (like 'resources1{s,d}') or taking the leap into a
  complex JSON query structure in a POST.

One of the advantages of microversions is that we can easily change
it later if we want. It can mean that the underlying data query code
may need to branch more, but that's the breaks, and isn't really that
big of a deal if we're maintaining our tests well.

It is more than likely that we will eventually have to move to POST
at some point (and at that point it wouldn't be completely wrong to
investigate graphql). But we should put that off and let ourselves
progress there in a stepwise fashion.

Let's take one or two use cases, solve for them in what we hope is a
flexible fashion, and move on. If we get it wrong we can fix it. And
it'll be okay. Let's not maintain this painful illusion that we're
writing stone tablets.
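For concreteness, here is the same two-group request written in both
candidate shapes. This is a sketch only, reusing the strawman names from
this thread ('resourcesN'/'requiredN', 'proximity', 'affinity_spec'),
none of which is settled API:

from urllib.parse import urlencode
import json

# Sketch only: both shapes reuse strawman names from this thread;
# neither is a settled placement API.

# GET shape: flat query parameters, one numeric suffix per group.
params = [
    ('resources1', 'SRIOV_NET_VF:1'),
    ('required1', 'CUSTOM_PHYSNET_A'),
    ('resources2', 'SRIOV_NET_VF:1'),
    ('required2', 'CUSTOM_PHYSNET_B'),
    ('proximity', 'isolate:1,2'),
]
print('GET /allocation_candidates?' + urlencode(params, safe=':,'))

# POST shape: nested JSON body with arbitrary, user-chosen group keys.
post_body = {
    'vf_net_a': {'resources': {'SRIOV_NET_VF': 1},
                 'required': ['CUSTOM_PHYSNET_A']},
    'vf_net_b': {'resources': {'SRIOV_NET_VF': 1},
                 'required': ['CUSTOM_PHYSNET_B']},
    'affinity_spec': {'isolate': ['vf_net_a', 'vf_net_b']},
}
print(json.dumps(post_body, indent=2))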
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sylvain.bauza at gmail.com Thu Apr 19 10:50:42 2018 From: sylvain.bauza at gmail.com (Sylvain Bauza) Date: Thu, 19 Apr 2018 12:50:42 +0200 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <1524127083.30697.0@smtp.office365.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> <1524127083.30697.0@smtp.office365.com> Message-ID: 2018-04-19 10:38 GMT+02:00 Balázs Gibizer : > > > On Thu, Apr 19, 2018 at 12:45 AM, Eric Fried wrote: > >> I have a feeling we're just going to go back and forth on this, as we >>> have for weeks now, and not reach any conclusion that is satisfactory to >>> everyone. And we'll delay, yet again, getting functionality into this >>> release that serves 90% of use cases because we are obsessing over the >>> 0.01% of use cases that may pop up later. >>> >> >> So I vote that, for the Rocky iteration of the granular spec, we add a >> single `proximity={isolate|any}` qparam, required when any numbered >> request groups are specified. I believe this allows us to satisfy the >> two NUMA use cases we care most about: "forced sharding" and "any fit". >> And as you demonstrated, it leaves the way open for finer-grained and >> more powerful semantics to be added in the future. >> > > Can the proximity param specify relationship between the un-numbered and > the numbered groups as well or only between numbered groups? > Besides that I'm +1 about proxyimity={isolate|any} > > What's the default behaviour if we aren't providing the proximity qparam ? Isolate or any ? > Cheers, > gibi > > > >> -efried >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrea.frittoli at gmail.com Thu Apr 19 10:58:16 2018 From: andrea.frittoli at gmail.com (Andrea Frittoli) Date: Thu, 19 Apr 2018 10:58:16 +0000 Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native In-Reply-To: References: Message-ID: Dear all, a quick update on the current status. Zuul has been fixed to use the correct branch for roles coming from different repositories [1]. The backport of the devstack patches to support multinode jobs is almost complete. All stable/queens patches are merged, stable/pike patches are almost all approved and going through the gate [2]. The two facts above mean that now the "devstack-tempest" base job defined in Tempest can be switched to use the "orchestrate-devstack" role and thus function as a base for multinode jobs [3]. It also means that work on writing grenade jobs in zuulv3 native format can now be resumed [4]. 
Kind regards Andrea Frittoli [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129217.html [2] https://review.openstack.org/#/q/topic:multinode_zuulv3+(status:open+OR+status:merged ) [3] https://review.openstack.org/#/c/545724/ [4] https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3 On Mon, Mar 12, 2018 at 2:08 PM Andrea Frittoli wrote: > Dear all, > > post-PTG updates: > > - the devstack patches for multinode support are now merged on master. You > can now build your multinode zuulv3 native devstack/tempest test jobs using > the same base jobs as for single node, and setting a multinode nodeset. > Documentation landed as well, so you can now find docs on roles [0], jobs > [1] and a migration guide [2] which will show you which base jobs to start > with and how to migrate those devstack-gate flags from legacy jobs to the > zuul v3 jobs. > > - the multinode patches including switching of test-matrix (on master) and > start including the list of devstack services in the base jobs. In doing so > I used the new neutron service names. That may be causing issues to > devstack-plugins looking for old service names, so if you encounter an > issue please reach out in the openstack-qa / openstack-infra rooms. We > could still roll back to the old names, however the beginning of the cycle > is probably the best time to sort out issues related to the new names and > new logic in the neutron - devstack code. > > Coming up next: > > - backport of devstack patches to stable (queens and pike), so we can > switch the Tempest job devstack multinode mode and develop grenade zuulv3 > native jobs. I do not plan on backporting the new neutron names to any > stable branch, let me know if there is any reason to do otherwise. > - work on grenade is at very early stages [3], so far I got devstack > running successfully on stable/queens from the /opt/stack/old folder using > the zuulv3 roles. Next up is actually doing the migration and running all > relevant checks. > > Andrea Frittoli (andreaf) > > [0] https://docs.openstack.org/devstack/latest/zuul_roles.html > [1] https://docs.openstack.org/devstack/latest/zuul_jobs.html > [2] https://docs.openstack.org/devstack/latest/zuul_ci_jobs_migration.html > > [3] > https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3 > > > > On Tue, Feb 20, 2018 at 9:22 PM Andrea Frittoli > wrote: > >> Dear all, >> >> updates: >> >> - host/group vars: zuul now supports declaring host and group vars in the >> job definition [0][1] - thanks corvus and infra team! >> This is a great help towards writing the devstack and tempest base >> multinode jobs [2][3] >> * NOTE: zuul merges dict variables through job inheritance. Variables >> in host/group_vars override global ones. I will write some examples further >> clarify this. >> >> - stable/pike: devstack ansible changes have been backported to >> stable/pike, so we can now run zuulv3 jobs against stable/pike too - thank >> you tosky! >> next change in progress related to pike is to provide tempest-full-pike >> for branchless repositories [4] >> >> - documentation: devstack now publishes documentation on its ansible >> roles [5]. >> More devstack documentation patches are in progress to provide jobs >> reference, examples and a job migration how-to [6]. 
>> >> >> Andrea Frittoli (andreaf) >> >> [0] >> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host_vars >> >> [1] >> https://docs.openstack.org/infra/zuul/user/config.html#attr-job.group_vars >> >> [2] https://review.openstack.org/#/c/545696/ >> [3] https://review.openstack.org/#/c/545724/ >> [4] https://review.openstack.org/#/c/546196/ >> [5] https://docs.openstack.org/devstack/latest/roles.html >> [6] https://review.openstack.org/#/c/545992/ >> >> >> On Mon, Feb 19, 2018 at 2:46 PM Andrea Frittoli < >> andrea.frittoli at gmail.com> wrote: >> >>> Dear all, >>> >>> updates: >>> - tempest-full-queens and tempest-full-py3-queens are now available for >>> testing of branchless repositories [0]. They are used for tempest and >>> devstack-gate. If you own a tempest plugin in a branchless repo, you may >>> consider adding similar jobs to your plugin if you use it for tests on >>> stable/queen as well. >>> - if you have migrated jobs based on devstack-tempest please let me >>> know, I'm building reference docs and I'd like to include as many examples >>> as possible >>> - work on multi-node is in progress, but not ready still - you can >>> follow the patches in the multinode branch [1] >>> - updates on some of the points from my previous email are inline below >>> >>> Andrea Frittoli (andreaf) >>> >>> [0] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n73 >>> [1] >>> https://review.openstack.org/#/q/status:open++branch:master+topic:multinode >>> >>> >>> >>> On Thu, Feb 15, 2018 at 11:31 PM Andrea Frittoli < >>> andrea.frittoli at gmail.com> wrote: >>> >>>> Dear all, >>>> >>>> this is the first or a series of ~regular updates on the migration of >>>> Tempest / Grenade jobs to Zuul v3 native. >>>> >>>> The QA team together with the infra team are working on providing the >>>> OpenStack community with a set of base Tempest / Grenade jobs that can be >>>> used as a basis to write new CI jobs / migrate existing legacy ones with a >>>> minimal effort and very little or no Ansible knowledge as a precondition. >>>> >>>> The effort is tracked in an etherpad [0]; I'm trying to keep the >>>> etherpad up to date but it may not always be a source of truth. >>>> >>>> Useful jobs available so far: >>>> - devstack-tempest [0] is a simple tempest/devstack job that runs >>>> keystone glance nova cinder neutron swift and tempest *smoke* filter >>>> - tempest-full [1] is similar but runs a full test run - it replaces >>>> the legacy tempest-dsvm-neutron-full from the integrated gate >>>> - tempest-full-py3 [2] runs a full test run on python3 - it replaces >>>> the legacy tempest-dsvm-py35 >>>> >>> >>> Some more details on this topic: what I did not mention in my previous >>> email is that the autogenerated Tempest / Grenade CI jobs (legacy-* >>> playbooks) are not meant to be used as a basis for Zuul V3 native jobs. To >>> create Zuul V3 Tempest / Grenade native jobs for your projects you need to >>> through away the legacy playbooks and defined new jobs in .zuul.yaml, as >>> documented in the zuul v3 docs [2]. >>> The parent job for a single node Tempest job will usually be >>> devstack-tempest. Example migrated jobs are avilable, for instance: [3] [4]. 
>>> >>> [2] >>> https://docs.openstack.org/infra/manual/zuulv3.html#howto-update-legacy-jobs >>> >>> [3] >>> http://git.openstack.org/cgit/openstack/sahara-tests/tree/.zuul.yaml#n21 >>> >>> [4] https://review.openstack.org/#/c/543048/5 >>> >>> >>>> >>>> Both tempest-full and tempest-full-py3 are part of integrated-gate >>>> templates, starting from stable/queens on. >>>> The other stable branches still run the legacy jobs, since >>>> devstack ansible changes have not been backported (yet). If we do backport >>>> it will be up to pike maximum. >>>> >>>> Those jobs work in single node mode only at the moment. Enabling >>>> multinode via job configuration only require a new Zuul feature [4][5] that >>>> should be available soon; the new feature allows defining host/group >>>> variables in the job definition, which means setting variables which are >>>> specific to one host or a group of hosts. >>>> Multinode DVR and Ironic jobs will require migration of the ovs-* roles >>>> form devstack-gate to devstack as well. >>>> >>>> Grenade jobs (single and multinode) are still legacy, even if the >>>> *legacy* word has been removed from the name. >>>> They are currently temporarily hosted in the neutron repository. They >>>> are going to be implemented as Zuul v3 native in the grenade repository. >>>> >>>> Roles are documented, and a couple of migration tips for DEVSTACK_GATE >>>> flags is available in the etherpad [0]; more comprehensive examples / >>>> docs will be available as soon as possible. >>>> >>>> Please let me know if you find this update useful and / or if you would >>>> like to see different information in it. >>>> I will send further updates as soon as significant changes / new >>>> features become available. >>>> >>>> Andrea Frittoli (andreaf) >>>> >>>> [0] >>>> https://etherpad.openstack.org/p/zuulv3-native-devstack-tempest-jobs >>>> [1] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n1 >>>> [2] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n29 >>>> >>>> [3] http://git.openstack.org/cgit/openstack/tempest/tree/.zuul.yaml#n47 >>>> >>>> [4] https://etherpad.openstack.org/p/zuulv3-group-variables >>>> [5] https://review.openstack.org/#/c/544562/ >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Apr 19 12:27:34 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 19 Apr 2018 07:27:34 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <1524127083.30697.0@smtp.office365.com> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> <1524127083.30697.0@smtp.office365.com> Message-ID: <37cb5f6d-ca6f-6856-d13d-210b40422f53@fried.cc> gibi- > Can the proximity param specify relationship between the un-numbered and > the numbered groups as well or only between numbered groups? > Besides that I'm +1 about proxyimity={isolate|any} Remembering that the resources in the un-numbered group can be spread around the tree and sharing providers... If applying "isolate" to the un-numbered group means that each resource you specify therein must be satisfied by a different provider, then you should have just put those resources into numbered groups. If "isolate" means that *none* of the numbered groups will land on *any* of the providers satisfying the un-numbered group... that could be hard to reason about, and I don't know if it's useful. 
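(For the numbered groups, by contrast, "isolate" reduces to a simple
uniqueness check over each candidate's group-to-provider assignment. A
minimal sketch; the mapping shape here is assumed purely for
illustration, not taken from the placement code:)

# Sketch: 'isolate' across numbered groups means no two numbered groups
# may be satisfied by the same resource provider. The candidate shape
# (group number -> chosen provider) is assumed for illustration.
def satisfies_isolate(group_to_provider):
    providers = list(group_to_provider.values())
    return len(providers) == len(set(providers))

assert satisfies_isolate({1: 'rp_numa0', 2: 'rp_numa1'})
assert not satisfies_isolate({1: 'rp_numa0', 2: 'rp_numa0'})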
So thus far I've been thinking about all of these semantics only in terms of the numbered groups (although Jay's `can_split` was specifically aimed at the un-numbered group). That being the case (is that a bikeshed on the horizon?) perhaps `granular_policy={isolate|any}` is a more appropriate name than `proximity`. -efried From openstack at fried.cc Thu Apr 19 12:33:31 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 19 Apr 2018 07:33:31 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> Message-ID: Chris- Thanks for this perspective. I totally agree. > * the common behavior should require the least syntax. To that point, I had been assuming "any fit" was going to be more common than "explicit anti-affinity". But I think this is where we are having trouble agreeing. So since, as you point out, we're in the weeds to begin with when talking about nested, IMO mriedem's suggestion (no default, require behavior to be specified) is a reasonable compromise. > it'll be okay. Let's not maintain this painful illusion that we're > writing stone tablets. This. I, for one, was being totally guilty of that. -efried From openstack at fried.cc Thu Apr 19 12:36:06 2018 From: openstack at fried.cc (Eric Fried) Date: Thu, 19 Apr 2018 07:36:06 -0500 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> <1524127083.30697.0@smtp.office365.com> Message-ID: Sylvain- > What's the default behaviour if we aren't providing the proximity qparam > ? Isolate or any ? What we've been talking about, per mriedem's suggestion, is that the qparam is required when you specify any numbered request groups. There is no default. If you don't provide the qparam, 400. (Edge case: the qparam is meaningless if you only provide *one* numbered request group - assuming it has no bearing on the un-numbered group. In that case omitting it might be acceptable... or 400 for consistency.) -efried From balazs.gibizer at ericsson.com Thu Apr 19 12:38:54 2018 From: balazs.gibizer at ericsson.com (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Thu, 19 Apr 2018 14:38:54 +0200 Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources In-Reply-To: <37cb5f6d-ca6f-6856-d13d-210b40422f53@fried.cc> References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com> <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc> <1524127083.30697.0@smtp.office365.com> <37cb5f6d-ca6f-6856-d13d-210b40422f53@fried.cc> Message-ID: <1524141534.30697.1@smtp.office365.com> On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried wrote: > gibi- > >> Can the proximity param specify relationship between the >> un-numbered and >> the numbered groups as well or only between numbered groups? >> Besides that I'm +1 about proxyimity={isolate|any} > > Remembering that the resources in the un-numbered group can be spread > around the tree and sharing providers... > > If applying "isolate" to the un-numbered group means that each > resource > you specify therein must be satisfied by a different provider, then > you > should have just put those resources into numbered groups. > > If "isolate" means that *none* of the numbered groups will land on > *any* > of the providers satisfying the un-numbered group... 
that could be > hard > to reason about, and I don't know if it's useful. > > So thus far I've been thinking about all of these semantics only in > terms of the numbered groups (although Jay's `can_split` was > specifically aimed at the un-numbered group). Thanks for the explanation. Now it make sense to me to limit the proximity param to the numbered groups. > > That being the case (is that a bikeshed on the horizon?) perhaps > `granular_policy={isolate|any}` is a more appropriate name than > `proximity`. The policy term is more general than proximity therefore the granular_policy=any query fragment isn't descriptive enough any more. gibi > > -efried > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pkovar at redhat.com Thu Apr 19 12:51:38 2018 From: pkovar at redhat.com (Petr Kovar) Date: Thu, 19 Apr 2018 14:51:38 +0200 Subject: [openstack-dev] [docs] Documentation meeting minutes for 2018-04-18 In-Reply-To: <20180418154144.a1ed381823db95102c3ef8aa@redhat.com> References: <20180418154144.a1ed381823db95102c3ef8aa@redhat.com> Message-ID: <20180419145138.c78b980a2a82cc1c22d5122f@redhat.com> ======================= #openstack-doc: docteam ======================= Meeting started by pkovar at 16:02:48 UTC. The full logs are available at http://eavesdrop.openstack.org/meetings/docteam/2018/docteam.2018-04-18-16.02.log.html . Meeting summary --------------- * Open discussion (pkovar, 16:04:23) * docs PTL availability in April/May (pkovar, 16:05:32) * pkovar to have limited online presence for the next 3 weeks (pkovar, 16:06:10) * back in mid-May (pkovar, 16:06:23) * will check email (pkovar, 16:06:43) * Vancouver Summit (pkovar, 16:08:27) * Will have a shared 10+10 mins project update slot with i18n, see the published schedule (pkovar, 16:08:31) * LINK: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21627/docsi18n-project-onboarding (pkovar, 16:09:09) * Frank (eumel8) started to fill out the content for project updates on I18n part. (ianychoi, 16:09:29) * stephenfin to talk about docs tooling updates (pkovar, 16:10:16) * Bug Triage Team (pkovar, 16:19:01) * LINK: https://wiki.openstack.org/wiki/Documentation/SpecialityTeams (pkovar, 16:19:06) * if foks want to help, sign up (pkovar, 16:19:55) * if folks want to help, sign up at https://wiki.openstack.org/wiki/Documentation/SpecialityTeams for the next slot (pkovar, 16:20:36) * for the next cycle, we need to decide if we want to retire ha guide which is pretty much unmaintained with more and more bugs being filed (pkovar, 16:25:39) * Replacing pbr's autodoc feature with sphinxcontrib-apidoc (pkovar, 16:25:43) * LINK: http://lists.openstack.org/pipermail/openstack-dev/2018-April/128986.html (pkovar, 16:25:50) * kudos to stephenfin for spearheading this (pkovar, 16:26:00) * LINK: https://review.openstack.org/#/c/509297/ (ianychoi, 16:31:09) Meeting ended at 16:34:04 UTC. People present (lines said) --------------------------- * pkovar (61) * ianychoi (22) * stephenfin (5) * openstack (4) * openstackgerrit (1) Generated by `MeetBot`_ 0.1.4 From doug at doughellmann.com Thu Apr 19 12:51:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 19 Apr 2018 08:51:46 -0400 Subject: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client? 
In-Reply-To: References: Message-ID: <1524142259-sup-5177@lrrr.local> Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200: > Swift has had storage policies for a while now. These are enabled by > setting the 'X-Storage-Policy' header on a container. > > It looks to me like this is not possible using openstack-client (even in > master branch) - while there is a 'set' operation for containers this > will *only* set  'Meta-*' type headers. > > It seems to me that adding this would be highly desirable. Is it in the > pipeline? If not I might see how much interest there is at my end for > adding such - as (famous last words) it looks pretty straightforward to do. > > regards > > Mark > I can't imagine why we wouldn't want to implement that and I'm not aware of anyone working on it. If you're interested and have time, please do work on the patch(es). Doug From hjensas at redhat.com Thu Apr 19 12:59:24 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Thu, 19 Apr 2018 14:59:24 +0200 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. Message-ID: <1524142764.4383.83.camel@redhat.com> Hi, When configuring TripleO deployments with nodes on routed ctlplane networks we need to pass some per-network properties to the NetworkConfig resource[1] in THT. We get the ``ControlPlaneIp`` property using get_attr, but the NIC configs need a couple of more parameters[2], for example: ``ControlPlaneSubnetCidr``, ``ControlPlaneDefaultRoute`` and ``DnsServers``. Since queens these templates are jinja templated, to generate things from from network_data.yaml. When using routed ctlplane networks, the parameters ``ControlPlaneSubnetCidr`` and ``ControlPlaneDefaultRoute`` will be different. So we need to use static per-role Net::SoftwareConfig templates, and add parameters such as ``ControlPlaneDefaultRouteLeafX``. The values the use need to pass in for these are already available in the neutron ctlplane network configuration on the undercloud. So ideally we should not need to ask the user to provide them in parameter_defaults, we should resolve the correct values automatically. : We can get the port ID using get_attr: {get_attr: [, addresses, , 0, port]} : From there outside of heat we can get the subnet_id: openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \ -f yaml -c fixed_ips fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f-4a73- 97ad-4e9822d201e5' : And finally we can get the gateway_ip and cidr of the subnet: openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \ -f yaml -c gateway_ip -c cidr cidr: 172.20.0.0/26 gateway_ip: 172.20.0.62 The problem is getting there using heat ... a couple of ideas: a) Use heat's ``external_resource`` to create a port resource, and then a external subnet resource. Then get the data from the external resources. We probably would have to make it possible for a ``external_resource`` depend on the server resource, and verify that these resource have the required attributes. b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as well probably) to include the data. If we do this we should probably aim to be in parity with what is made available to clients getting the configuration from dhcp. (mtu, dns_domain, dns_servers, prefixlen, gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode etc.) c) Create a new heat function to read properties of any openstack resource, without having to make use of the external_resource in heat. 
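For reference, the port -> subnet lookup chain above can also be scripted
outside of Heat with openstacksdk along these lines. This is a sketch
only: the cloud name and server name are placeholders, and attribute
names follow openstacksdk's network proxy:

# Sketch of the same port -> subnet lookup using openstacksdk.
# 'undercloud' and the server name are placeholders.
import openstack

conn = openstack.connect(cloud='undercloud')
server = conn.compute.find_server('overcloud-controller-0')

for port in conn.network.ports(device_id=server.id):
    for fixed_ip in port.fixed_ips:
        subnet = conn.network.get_subnet(fixed_ip['subnet_id'])
        # These are the per-network values the NIC configs need.
        print(subnet.cidr, subnet.gateway_ip,
              subnet.dns_nameservers, subnet.host_routes)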
[1] https://github.com/openstack/tripleo-heat-templates/blob/9727a0d813f5078d19b605e445d1c0603c9e777c/puppet/role.role.j2.yaml#L383-L389
[2] https://github.com/openstack/tripleo-heat-templates/blob/9727a0d813f5078d19b605e445d1c0603c9e777c/network/config/single-nic-vlans/role.role.j2.yaml#L21-L27

From mbooth at redhat.com Thu Apr 19 13:15:03 2018
From: mbooth at redhat.com (Matthew Booth)
Date: Thu, 19 Apr 2018 14:15:03 +0100
Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate
Message-ID:

We've had inconsistent naming of recreate/evacuate in Nova for a long
time, and it will persist in a couple of places for a while more.
However, I've proposed the following to rename 'recreate' to
'evacuate' everywhere with no rpc/api impact here:

https://review.openstack.org/560900

One of the things which is renamed is the driver 'supports_recreate'
capability, which I've renamed to 'supports_evacuate'. The above
change updates this for in-tree drivers, but as noted in review this
would impact out-of-tree drivers. If this might affect you, please
follow the above in case it merges.

Matt

--
Matthew Booth
Red Hat OpenStack Engineer, Compute DFG

Phone: +442070094448 (UK)

From doug at doughellmann.com Thu Apr 19 13:15:49 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 19 Apr 2018 09:15:49 -0400
Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline
Message-ID: <1524143700-sup-9515@lrrr.local>

Today is the deadline for proposing a release for the Rocky-1 milestone.
Please don't forget to include your libraries (client or otherwise) as
well.

Doug

From tobias.urdin at crystone.com Thu Apr 19 13:32:30 2018
From: tobias.urdin at crystone.com (Tobias Urdin)
Date: Thu, 19 Apr 2018 13:32:30 +0000
Subject: [openstack-dev] [nova] Default scheduler filters survey
References: <5AD77313.30102@windriver.com>
Message-ID:
Any other comments are welcome as well :) > RetryFilter > ComputeFilter > AvailabilityZoneFilter > AggregateInstanceExtraSpecsFilter > ComputeCapabilitiesFilter > ImagePropertiesFilter > NUMATopologyFilter > ServerGroupAffinityFilter > ServerGroupAntiAffinityFilter > PciPassthroughFilter > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jpena at redhat.com Thu Apr 19 14:17:33 2018 From: jpena at redhat.com (Javier Pena) Date: Thu, 19 Apr 2018 10:17:33 -0400 (EDT) Subject: [openstack-dev] [packaging-rpm][meeting] Proposal for new meeting time In-Reply-To: <370166038.18306291.1524147310070.JavaMail.zimbra@redhat.com> Message-ID: <1548725753.18307087.1524147453478.JavaMail.zimbra@redhat.com> Hello fellow packagers, During today's meeting [1], we discussed the schedule conflicts some of us have with the current meeting slot. As a result, I would like to propose a new meeting time: - Wednesdays, 1 PM UTC (3 PM CEST) So far, dirk and jruzicka agreed with the change. If you have an issue, please reply now. Regards, Javier Peña From avolkov at mirantis.com Thu Apr 19 14:27:57 2018 From: avolkov at mirantis.com (Andrey Volkov) Date: Thu, 19 Apr 2018 17:27:57 +0300 Subject: [openstack-dev] [nova][placement] Scheduler VM distribution Message-ID: Hello, >From my understanding, we have a race between the scheduling process and host weight update. I made a simple experiment. On the 50 fake host environment it was asked to boot 40 VMs those should be placed 1 on each host. The hosts are equal to each other in terms of inventory. img=6fedf6a1-5a55-4149-b774-b0b4dccd2ed1 flavor=1 for i in {1..40}; do nova boot --flavor $flavor --image $img --nic none vm-$i; sleep 1; done The following distribution was gotten: mysql> select resource_provider_id, count(*) from allocations where resource_class_id = 0 group by 1; +----------------------+----------+ | resource_provider_id | count(*) | +----------------------+----------+ | 1 | 2 | | 18 | 2 | | 19 | 3 | | 20 | 3 | | 26 | 2 | | 29 | 2 | | 33 | 3 | | 36 | 2 | | 41 | 1 | | 49 | 3 | | 51 | 2 | | 52 | 3 | | 55 | 2 | | 60 | 3 | | 61 | 2 | | 63 | 2 | | 67 | 3 | +----------------------+----------+ 17 rows in set (0.00 sec) And the question is: If we have an atomic resource allocation what is the reason to use compute_nodes.* for weight calculation? There is a custom log of behavior I described: http://ix.io/18cw -- Thanks, Andrey Volkov, Software Engineer, Mirantis, Inc. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Thu Apr 19 14:33:05 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 19 Apr 2018 10:33:05 -0400 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: References: Message-ID: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> On 04/19/2018 09:15 AM, Matthew Booth wrote: > We've had inconsistent naming of recreate/evacuate in Nova for a long > time, and it will persist in a couple of places for a while more. > However, I've proposed the following to rename 'recreate' to > 'evacuate' everywhere with no rpc/api impact here: > > https://review.openstack.org/560900 > > One of the things which is renamed is the driver 'supports_recreate' > capability, which I've renamed to 'supports_evacuate'. 
> The above change updates this for in-tree drivers, but as noted in
> review this would impact out-of-tree drivers. If this might affect you,
> please follow the above in case it merges.

I have to admit, Matt, I'm a bit confused by this. I was under the
impression that we were trying to *remove* uses of the term "evacuate" as
much as possible because that term is not adequately descriptive of the
operation and terms like "recreate" were more descriptive?

Best,
-jay

From doug at doughellmann.com  Thu Apr 19 14:40:34 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 19 Apr 2018 10:40:34 -0400
Subject: [openstack-dev] [Release-job-failures][freezer][release] Pre-release of openstack/freezer-dr failed
In-Reply-To: <1524145028-sup-5105@lrrr.local>
References: <1524145028-sup-5105@lrrr.local>
Message-ID: <1524148708-sup-7314@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-04-19 09:38:31 -0400:
> Excerpts from zuul's message of 2018-04-19 13:22:40 +0000:
> > Build failed.
> >
> > - release-openstack-python http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/ : FAILURE in 3m 48s
> > - announce-release announce-release : SKIPPED
> > - propose-update-constraints propose-update-constraints : SKIPPED
> >
> This failure seems to be caused by a failure to install libvirt when
> trying to build the sdist under tox.
>
> Doug

It looks like the problem is that freezer-dr is not using the constraints
list, so it is getting libvirt 4.2.0. Thanks to Matt Thode
(prometheanfire) for helping debug that!

Freezer team, I suggest you add constraints to the freezer-dr repository
before the next milestone so the next release job run passes.

Doug

From doug at doughellmann.com  Thu Apr 19 13:38:31 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 19 Apr 2018 09:38:31 -0400
Subject: [openstack-dev] [Release-job-failures] Pre-release of openstack/freezer-dr failed
In-Reply-To:
References:
Message-ID: <1524145028-sup-5105@lrrr.local>

Excerpts from zuul's message of 2018-04-19 13:22:40 +0000:
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/c9/c9263cde360d37654c4298c496cd9af251f23ce7/pre-release/release-openstack-python/541ad7d/ : FAILURE in 3m 48s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>

This failure seems to be caused by a failure to install libvirt when
trying to build the sdist under tox.

Doug

From lhinds at redhat.com  Thu Apr 19 14:49:14 2018
From: lhinds at redhat.com (Luke Hinds)
Date: Thu, 19 Apr 2018 15:49:14 +0100
Subject: [openstack-dev] Migration of Bandit
Message-ID:

All,

Please note that Bandit's code and issues / docs will be migrated from
OpenStack to PyCQA. This is expected to happen next week.

No changes are required in any project or CI, as Bandit will still be
available via pypi and projects / CI are set up to use Bandit in that
way via tox.

READMEs and Key Wiki pages will be updated to inform any visitors of the
new home and how to contribute / raise issues.

Cheers,

Luke
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From corvus at inaugust.com  Thu Apr 19 15:18:38 2018
From: corvus at inaugust.com (James E. Blair)
Date: Thu, 19 Apr 2018 08:18:38 -0700
Subject: [openstack-dev] [QA][all] Migration of Tempest / Grenade jobs to Zuul v3 native
In-Reply-To: (Andrea Frittoli's message of "Thu, 19 Apr 2018 10:58:16 +0000")
References:
Message-ID: <874lk7dy29.fsf@meyer.lemoncheese.net>

Andrea Frittoli writes:

> Dear all,
>
> a quick update on the current status.
>
> Zuul has been fixed to use the correct branch for roles coming from
> different repositories [1].
> The backport of the devstack patches to support multinode jobs is almost
> complete. All stable/queens patches are merged, stable/pike patches are
> almost all approved and going through the gate [2].
>
> The two facts above mean that now the "devstack-tempest" base job defined
> in Tempest can be switched to use the "orchestrate-devstack" role and thus
> function as a base for multinode jobs [3].
> It also means that work on writing grenade jobs in zuulv3 native format can
> now be resumed [4].
>
> Kind regards
>
> Andrea Frittoli
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2018-April/129217.html
> [2]
> https://review.openstack.org/#/q/topic:multinode_zuulv3+(status:open+OR+status:merged
> )
> [3] https://review.openstack.org/#/c/545724/
> [4]
> https://review.openstack.org/#/q/status:open+branch:master+topic:grenade_zuulv3

Also, shortly after this update, we made a change to make it slightly
easier for folks with devstack plugin jobs. You should no longer need
to set the LIBS_FROM_GIT variable manually; instead, just specify the
project in `required-projects`, and the devstack job will set it
automatically. See https://review.openstack.org/548331 for an example.

-Jim

From openstack at fried.cc  Thu Apr 19 15:37:09 2018
From: openstack at fried.cc (Eric Fried)
Date: Thu, 19 Apr 2018 10:37:09 -0500
Subject: [openstack-dev] [placement][nova] Decision time on granular request groups for like resources
In-Reply-To: <1524141534.30697.1@smtp.office365.com>
References: <125f7217-2a83-316f-63fd-e1cee0ee3a5f@gmail.com>
 <3b3de257-e95c-6cc8-15a0-0453c6b529b7@fried.cc>
 <1524127083.30697.0@smtp.office365.com>
 <37cb5f6d-ca6f-6856-d13d-210b40422f53@fried.cc>
 <1524141534.30697.1@smtp.office365.com>
Message-ID: <729c486b-3a63-9ac3-4d76-0634cfb9a8a0@fried.cc>

Thanks to everyone who contributed to this discussion. With just a teeny
bit more bikeshedding on the exact syntax [1], we landed on:

group_policy={none|isolate}

I have proposed this delta to the granular spec [2].

-efried

[1] http://p.anticdent.org/logs/openstack-placement?dated=2018-04-19%2013:48:39.213790#a1c
[2] https://review.openstack.org/#/c/562687/

On 04/19/2018 07:38 AM, Balázs Gibizer wrote:
>
>
> On Thu, Apr 19, 2018 at 2:27 PM, Eric Fried wrote:
>> gibi-
>>
>>>  Can the proximity param specify relationship between the un-numbered
>>> and
>>>  the numbered groups as well or only between numbered groups?
>>>  Besides that I'm +1 about proximity={isolate|any}
>>
>> Remembering that the resources in the un-numbered group can be spread
>> around the tree and sharing providers...
>>
>> If applying "isolate" to the un-numbered group means that each resource
>> you specify therein must be satisfied by a different provider, then you
>> should have just put those resources into numbered groups.
>>
>> If "isolate" means that *none* of the numbered groups will land on *any*
>> of the providers satisfying the un-numbered group... that could be hard
>> to reason about, and I don't know if it's useful.
>>
>> So thus far I've been thinking about all of these semantics only in
>> terms of the numbered groups (although Jay's `can_split` was
>> specifically aimed at the un-numbered group).
>
> Thanks for the explanation. Now it makes sense to me to limit the
> proximity param to the numbered groups.
>
>>
>> That being the case (is that a bikeshed on the horizon?) perhaps
>> `granular_policy={isolate|any}` is a more appropriate name than
>> `proximity`.
>
> The policy term is more general than proximity, therefore the
> granular_policy=any query fragment isn't descriptive enough any more.
>
>
> gibi
>
>>
>> -efried
>>
>> __________________________________________________________________________
>>
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe:
>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From chris.friesen at windriver.com  Thu Apr 19 15:46:52 2018
From: chris.friesen at windriver.com (Chris Friesen)
Date: Thu, 19 Apr 2018 09:46:52 -0600
Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate
In-Reply-To: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com>
References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com>
Message-ID: <5AD8B9EC.9020706@windriver.com>

On 04/19/2018 08:33 AM, Jay Pipes wrote:
> On 04/19/2018 09:15 AM, Matthew Booth wrote:
>> We've had inconsistent naming of recreate/evacuate in Nova for a long
>> time, and it will persist in a couple of places for a while more.
>> However, I've proposed the following to rename 'recreate' to
>> 'evacuate' everywhere with no rpc/api impact here:
>>
>> https://review.openstack.org/560900
>>
>> One of the things which is renamed is the driver 'supports_recreate'
>> capability, which I've renamed to 'supports_evacuate'. The above
>> change updates this for in-tree drivers, but as noted in review this
>> would impact out-of-tree drivers. If this might affect you, please
>> follow the above in case it merges.
>
> I have to admit, Matt, I'm a bit confused by this. I was under the impression
> that we were trying to *remove* uses of the term "evacuate" as much as possible
> because that term is not adequately descriptive of the operation and terms like
> "recreate" were more descriptive?

This is a good point.

Personally I'd prefer to see it go the other way and convert everything to
the "recreate" terminology, including the external API.

From the CLI perspective, it makes no sense that "nova evacuate" operates
after a host is already down, but "nova evacuate-live" operates on a running
host.

Chris

From jaypipes at gmail.com  Thu Apr 19 15:48:31 2018
From: jaypipes at gmail.com (Jay Pipes)
Date: Thu, 19 Apr 2018 11:48:31 -0400
Subject: [openstack-dev] [nova][placement] Scheduler VM distribution
In-Reply-To:
References:
Message-ID: <0861cd9b-83e8-dd82-5a93-de9fa9396782@gmail.com>

Hi, Andrey!

Comments inline...

On 04/19/2018 10:27 AM, Andrey Volkov wrote:
> Hello,
>
> From my understanding, we have a race between the scheduling
> process and host weight update.
>
> I made a simple experiment. On the 50 fake host environment
> it was asked to boot 40 VMs that should be placed 1 on each host.
> The hosts are equal to each other in terms of inventory. > > img=6fedf6a1-5a55-4149-b774-b0b4dccd2ed1 > flavor=1 > for i in {1..40}; do > nova boot --flavor $flavor --image $img --nic none vm-$i; > sleep 1; > done > > The following distribution was gotten: > > mysql> select resource_provider_id, count(*) from allocations where > resource_class_id = 0 group by 1; > > +----------------------+----------+ > | resource_provider_id | count(*) | > +----------------------+----------+ > |                    1 |        2 | > |                   18 |        2 | > |                   19 |        3 | > |                   20 |        3 | > |                   26 |        2 | > |                   29 |        2 | > |                   33 |        3 | > |                   36 |        2 | > |                   41 |        1 | > |                   49 |        3 | > |                   51 |        2 | > |                   52 |        3 | > |                   55 |        2 | > |                   60 |        3 | > |                   61 |        2 | > |                   63 |        2 | > |                   67 |        3 | > +----------------------+----------+ > 17 rows in set (0.00 sec) > > And the question is: > If we have an atomic resource allocation what is the reason > to use compute_nodes.* for weight calculation? The resource allocation is only atomic in the placement service, since the placement service prevents clients from modifying records that have changed since the client read information about the record (it uses a "generation" field in the resource_providers table records to provide this protection). What seems to be happening is that a scheduler thread's view of the set of HostState objects used in weighing is stale at some point in the weighing process. I'm going to guess and say you have 3 scheduler processes, right? In other words, what is happening is something like this: (Tx indicates a period in sequential time) T0: thread A gets a list of filtered hosts and weighs them. T1: thread B gets a list of filtered hosts and weighs them. T2: thread A picks the first host in its weighed list T3: thread B picks the first host in its weighed list (this is the same host as thread A picked) T4: thread B increments the num_instances attribute of its HostState object for the chosen host (done in the HostState._consume_from_request() method) T5: thread A increments the num_instances attribute of its HostState object for the same chosen host. So, both thread A and B choose the same host because at the time they read the HostState objects, the num_instances attribute was 0 and the weight for that host was the same (2.0 in the logs). I'm not aware of any effort to fix this behaviour in the scheduler. Best, -jay > There is a custom log of behavior I described: http://ix.io/18cw > > -- > Thanks, > > Andrey Volkov, > Software Engineer, Mirantis, Inc. 
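
(To make the interleaving Jay describes above concrete, here is a
deliberately simplified Python sketch of the pattern. The HostState class
and everything else here is invented for illustration -- it is a toy
model, not the real scheduler code, and not a reliable reproducer, since
the outcome depends on thread timing.)

    import threading

    class HostState(object):
        """Toy stand-in for a scheduler thread's per-host view."""
        def __init__(self, name):
            self.name = name
            self.num_instances = 0

    HOSTS = [HostState('host-%d' % i) for i in range(50)]

    def schedule(request_id):
        # T0/T1: weigh hosts using a view that may already be stale.
        weighed = sorted(HOSTS, key=lambda h: h.num_instances)
        chosen = weighed[0]        # T2/T3: both threads can pick the same host
        chosen.num_instances += 1  # T4/T5: consumption happens after the pick
        print('%s placed on %s' % (request_id, chosen.name))

    threads = [threading.Thread(target=schedule, args=('req-%d' % i,))
               for i in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

Both calls to schedule() sort the same HostState objects; if neither has
incremented num_instances by the time the other sorts, both pick the same
host -- the same shape of collision as the 2-3 allocations per provider
in the table above.
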
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From pabelanger at redhat.com Thu Apr 19 15:49:13 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 19 Apr 2018 11:49:13 -0400 Subject: [openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018 In-Reply-To: <20180410184829.GA16085@localhost.localdomain> References: <20180410184829.GA16085@localhost.localdomain> Message-ID: <20180419154912.GA13701@localhost.localdomain> Hello from Infra. This is our weekly reminder of the upcoming gerrit replacement. We'll continue to send these announcements out up until the day of the migration. We are now 2 weeks away from replacement date. If you have any questions, please contact us in #openstack-infra. --- It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. The new IP addresses will be as follows: IPv4 -> 104.130.246.32 IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229 They will replace these current production IP addresses: IPv4 -> 104.130.246.91 IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525 We understand that some users may be running from egress-filtered networks with port 29418/tcp explicitly allowed to the current review.openstack.org IP addresses, and so are providing this information as far in advance as we can to allow them time to update their firewalls accordingly. Note that some users dealing with egress filtering may find it easier to switch their local configuration to use Gerrit's REST API via HTTPS instead, and the current release of git-review has support for that workflow as well. http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html We will follow up with final confirmation in subsequent announcements. Thanks, Paul From mbooth at redhat.com Thu Apr 19 16:06:37 2018 From: mbooth at redhat.com (Matthew Booth) Date: Thu, 19 Apr 2018 17:06:37 +0100 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> Message-ID: On 19 April 2018 at 15:33, Jay Pipes wrote: > On 04/19/2018 09:15 AM, Matthew Booth wrote: >> >> We've had inconsistent naming of recreate/evacuate in Nova for a long >> time, and it will persist in a couple of places for a while more. >> However, I've proposed the following to rename 'recreate' to >> 'evacuate' everywhere with no rpc/api impact here: >> >> https://review.openstack.org/560900 >> >> One of the things which is renamed is the driver 'supports_recreate' >> capability, which I've renamed to 'supports_evacuate'. The above >> change updates this for in-tree drivers, but as noted in review this >> would impact out-of-tree drivers. If this might affect you, please >> follow the above in case it merges. > > > I have to admit, Matt, I'm a bit confused by this. I was under the > impression that we were trying to *remove* uses of the term "evacuate" as > much as possible because that term is not adequately descriptive of the > operation and terms like "recreate" were more descriptive? 
I'm ambivalent, tbh, but I think it's better to pick one. I thought we'd picked 'evacuate' based on the TODOs from Matt R: http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985 http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093 Incidentally, this isn't at all core to what I'm working on, but I'm about to start poking it and thought I'd tidy up as I go (as is my wont). If there's discussion to be had I don't mind dropping this and moving on. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From mbooth at redhat.com Thu Apr 19 16:10:42 2018 From: mbooth at redhat.com (Matthew Booth) Date: Thu, 19 Apr 2018 17:10:42 +0100 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: <5AD8B9EC.9020706@windriver.com> References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> <5AD8B9EC.9020706@windriver.com> Message-ID: On 19 April 2018 at 16:46, Chris Friesen wrote: > On 04/19/2018 08:33 AM, Jay Pipes wrote: >> >> On 04/19/2018 09:15 AM, Matthew Booth wrote: >>> >>> We've had inconsistent naming of recreate/evacuate in Nova for a long >>> time, and it will persist in a couple of places for a while more. >>> However, I've proposed the following to rename 'recreate' to >>> 'evacuate' everywhere with no rpc/api impact here: >>> >>> https://review.openstack.org/560900 >>> >>> One of the things which is renamed is the driver 'supports_recreate' >>> capability, which I've renamed to 'supports_evacuate'. The above >>> change updates this for in-tree drivers, but as noted in review this >>> would impact out-of-tree drivers. If this might affect you, please >>> follow the above in case it merges. >> >> >> I have to admit, Matt, I'm a bit confused by this. I was under the >> impression >> that we were trying to *remove* uses of the term "evacuate" as much as >> possible >> because that term is not adequately descriptive of the operation and terms >> like >> "recreate" were more descriptive? > > > This is a good point. > > Personally I'd prefer to see it go the other way and convert everything to > the "recreate" terminology, including the external API. > > From the CLI perspective, it makes no sense that "nova evacuate" operates > after a host is already down, but "nova evacuate-live" operates on a running > host. A bit OT, but evacuate-live probably shouldn't exist at all for a variety of reasons. The implementation is shonky, it's doing orchestration in the CLI, and the name is misleading, as you say. Matt -- Matthew Booth Red Hat OpenStack Engineer, Compute DFG Phone: +442070094448 (UK) From cdent+os at anticdent.org Thu Apr 19 16:24:34 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Thu, 19 Apr 2018 17:24:34 +0100 (BST) Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, As it was just edleafe and I today, we had a quick meeting and went back to other things. The main actions were to select one guideline to publish and one guideline to freeze. These are listed below. We also briefly discussed that though we have not planned any official time and space in Vancouver, we hope to engage with anyone interested in APIs in whatever space we can find in the lovely hallways of the Vancouver Convention Centre. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. 
* The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Update the errors guidance to use service-type for code https://review.openstack.org/#/c/554921/ # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. * Add guidance on needing cache-control headers https://review.openstack.org/550468 # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From mriedemos at gmail.com Thu Apr 19 16:27:48 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 19 Apr 2018 11:27:48 -0500 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> Message-ID: <66aba789-525a-f4ac-48c0-c46541d05343@gmail.com> On 4/19/2018 11:06 AM, Matthew Booth wrote: > I'm ambivalent, tbh, but I think it's better to pick one. I thought > we'd picked 'evacuate' based on the TODOs from Matt R: > > http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985 > http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093 > > Incidentally, this isn't at all core to what I'm working on, but I'm > about to start poking it and thought I'd tidy up as I go (as is my > wont). If there's discussion to be had I don't mind dropping this and > moving on. For reference, I started this rolling ball: https://review.openstack.org/#/c/508190/ The internal 'recreate' argument to rebuild was always a thorn in my side so I renamed it to evacuate because that's what the operation is called in the API, how it shows up in bug reports, and how we talk about it in IRC. 
We don't talk about the "recreate" operation, we talk about evacuate. Completely re-doing the end-user API experience with evacuate and rebuild including internal plumbing changes is orthogonal to this cleanup IMO because we can do the cleanup now to avoid existing maintainer confusion rather than hold it up for something that no one is working on. -- Thanks, Matt From mriedemos at gmail.com Thu Apr 19 16:30:56 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 19 Apr 2018 11:30:56 -0500 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: <5AD8B9EC.9020706@windriver.com> References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> <5AD8B9EC.9020706@windriver.com> Message-ID: On 4/19/2018 10:46 AM, Chris Friesen wrote: > From the CLI perspective, it makes no sense that "nova evacuate" > operates after a host is already down, but "nova evacuate-live" operates > on a running host. http://www.danplanet.com/blog/2016/03/03/evacuate-in-nova-one-command-to-confuse-us-all/ If people feel this strongly about the name of the "nova host-evacuate-live" CLI, they should propose changes to rename it (or deprecate it if it's dangerous and shouldn't exist). How about deprecating "nova host-evacuate-live" and just add a --batch option to the existing "nova live-migration" CLI if people want to retain the functionality but hate the other name. -- Thanks, Matt From jaypipes at gmail.com Thu Apr 19 16:40:37 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Thu, 19 Apr 2018 12:40:37 -0400 Subject: [openstack-dev] [nova] Heads up for out-of-tree drivers: supports_recreate -> supports_evacuate In-Reply-To: <66aba789-525a-f4ac-48c0-c46541d05343@gmail.com> References: <8beb9706-2db7-395b-cea2-b1bc4e5a24e0@gmail.com> <66aba789-525a-f4ac-48c0-c46541d05343@gmail.com> Message-ID: On 04/19/2018 12:27 PM, Matt Riedemann wrote: > On 4/19/2018 11:06 AM, Matthew Booth wrote: >> I'm ambivalent, tbh, but I think it's better to pick one. I thought >> we'd picked 'evacuate' based on the TODOs from Matt R: >> >> http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n2985 >> >> http://git.openstack.org/cgit/openstack/nova/tree/nova/compute/manager.py#n3093 >> >> >> Incidentally, this isn't at all core to what I'm working on, but I'm >> about to start poking it and thought I'd tidy up as I go (as is my >> wont). If there's discussion to be had I don't mind dropping this and >> moving on. > > For reference, I started this rolling ball: > > https://review.openstack.org/#/c/508190/ > > The internal 'recreate' argument to rebuild was always a thorn in my > side so I renamed it to evacuate because that's what the operation is > called in the API, how it shows up in bug reports, and how we talk about > it in IRC. We don't talk about the "recreate" operation, we talk about > evacuate. > > Completely re-doing the end-user API experience with evacuate and > rebuild including internal plumbing changes is orthogonal to this > cleanup IMO because we can do the cleanup now to avoid existing > maintainer confusion rather than hold it up for something that no one is > working on. I was only asking a question. I wasn't trying to hold anything up. -jay From dtroyer at gmail.com Thu Apr 19 16:54:14 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 19 Apr 2018 11:54:14 -0500 Subject: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client? 
In-Reply-To: <1524142259-sup-5177@lrrr.local> References: <1524142259-sup-5177@lrrr.local> Message-ID: On Thu, Apr 19, 2018 at 7:51 AM, Doug Hellmann wrote: > Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200: >> Swift has had storage policies for a while now. These are enabled by >> setting the 'X-Storage-Policy' header on a container. >> >> It looks to me like this is not possible using openstack-client (even in >> master branch) - while there is a 'set' operation for containers this >> will *only* set 'Meta-*' type headers. >> >> It seems to me that adding this would be highly desirable. Is it in the >> pipeline? If not I might see how much interest there is at my end for >> adding such - as (famous last words) it looks pretty straightforward to do. > > I can't imagine why we wouldn't want to implement that and I'm not > aware of anyone working on it. If you're interested and have time, > please do work on the patch(es). The primary thing that hinders Swift work like this is OSC does not use swiftclient as it wasn't a standalone thing yet when I wrote that bit (lifting much of the actual API code from swiftclient) . We decided a while ago to not add that dependency and drop the OSC-specific object code and use the SDK when we start using SDK for everything else, after there is an SDK 1.0 release. Moving forward on this today using either OSC's api.object code or the SDK would be fine, with the same SDK caveat we have with Neutron, since SDK isn't 1.0 we may have to play catch-up and maintain multiple SDK release compatibilities (which has happened at least twice). dt -- Dean Troyer dtroyer at gmail.com From emilien at redhat.com Thu Apr 19 17:01:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 19 Apr 2018 10:01:50 -0700 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits Message-ID: Greetings, As you probably know mcornea on IRC, Marius Cornea has been contributing on TripleO for a while, specially on the upgrade bits. Part of the quality team, he's always testing real customer scenarios and brings a lot of good feedback in his reviews, and quite often takes care of fixing complex bugs when it comes to advanced upgrades scenarios. He's very involved in tripleo-upgrade repository where he's already core, but I think it's time to let him +2 on other tripleo repos for the patches related to upgrades (we trust people's judgement for reviews). As usual, we'll vote! Thanks everyone for your feedback and thanks Marius for your hard work and involvement in the project. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Thu Apr 19 17:05:21 2018 From: johfulto at redhat.com (John Fulton) Date: Thu, 19 Apr 2018 13:05:21 -0400 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: +1 On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi wrote: > Greetings, > > As you probably know mcornea on IRC, Marius Cornea has been contributing on > TripleO for a while, specially on the upgrade bits. > Part of the quality team, he's always testing real customer scenarios and > brings a lot of good feedback in his reviews, and quite often takes care of > fixing complex bugs when it comes to advanced upgrades scenarios. 
> He's very involved in tripleo-upgrade repository where he's already core, > but I think it's time to let him +2 on other tripleo repos for the patches > related to upgrades (we trust people's judgement for reviews). > > As usual, we'll vote! > > Thanks everyone for your feedback and thanks Marius for your hard work and > involvement in the project. > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kendall at openstack.org Thu Apr 19 17:13:57 2018 From: kendall at openstack.org (Kendall Waters) Date: Thu, 19 Apr 2018 12:13:57 -0500 Subject: [openstack-dev] Project Teams Gathering- Denver September 10-14th Message-ID: All aboard! Next stop Denver! The fourth Project Teams Gathering [1] will be held September 10-14th back at the Renaissance Stapleton Hotel [2] in Denver, Colorado (3801 Quebec Street, Denver, Colorado 80207). The Project Teams Gathering (PTG) is an event organized by the OpenStack Foundation. It provides meeting facilities allowing the various technical community groups working with OpenStack (operators, development teams, user workgroups, SIGs) to meet in-person, exchange and get work done in a productive setting. As you may have heard, this time around the Ops Meetup will be co-located with the Denver PTG. We're excited to have these two communities under one roof. Registration, travel support program, and the discounted hotel block are now live! REGISTRATION AND HOTEL Registration is now available here: https://denver2018ptg.eventbrite.com Ticket prices for this PTG will be tiered, and are significantly subsidized to help cover part of the overall event cost: Early Bird: USD $199 (Deadline May 11 at 6:59 UTC) Regular: USD $399 (Deadline August 23 at 6:59 UTC) Late/Onsite: USD $599 We've reserved a very limited block of discounted hotel rooms at $149/night USD (does not include breakfast) with the Renaissance Denver Stapleton Hotel where the event will be held. Please move quickly to reserve a room with 2 queen beds[3] or 1 king bed[4] by August 20th or until they sell out! TRAIN NEAR HOTEL You may be curious about the train noise situation around the hotel. This was due to an unsafe crossing requiring human flaggers and trains signalling using horns. After a meeting held in February of 2018, the Director for the RTD project stated that “The gate crossings are complete, operational and safe, and we feel that it’s appropriate at this time to remove the requirements to have grade crossing attendants at those crossings,” Regulatory approvals for the A, B and G commuter rail lines have a contracted deadline of June 2nd, 2018 to be approved by Federal Railroad Administration Commissioners. Also worth noting, right after we left the PTG last September, the hotel installed sound reduction windows throughout the property which should help with an overall quality of stay for guests. USA VISA APPLICATIONS Please note: Due to recent delays in the visa system, please allow as much time as possible for the application process if a visa is required in order to travel to the United States. We normally recommend applying no later than 60 days prior to the event. If you are unsure whether you require a visa or not, please visit this page [5] to see if your country is a part of the Visa Waiver Program. 
If it is not one of the countries listed, you will need to obtain a Visa to enter the U.S. To supplement your Visa application, we can also provide you with a Visa Invitation Letter on official OpenStack Foundation letterhead. Requests for invitation letters may be submitted here [6] and must be received by Friday, August 24, 2018. TRAVEL SUPPORT PROGRAM The OpenStack Travel Support Program's aim is to facilitate participation of key contributors to the OpenStack Project Teams Gathering (PTG) covering costs for travel, accommodation, and event pass. Please fill out this form [7] to apply; the application deadline for the first round of sponsorships is July 1st. If you are interested in donating to the Travel Support Program, you can do so on the Eventbrite page [8]. SPONSORSHIP The PTGs are critical to the OpenStack release cycle and community, and sponsorship of these events is a public demonstration of your commitment to the continued growth and success of OpenStack. Since this is a working event and we strive to maintain a distraction-free environment so teams, we have created sponsorship packages that are community focused so that all sponsors receive prominent recognition for their ongoing support of OpenStack without impacting productivity. If your organization is interested in sponsoring the Stein PTG in Denver, please review the sponsorship prospectus and contract here , and send any questions to ptg at openstack.org . Feel free to reach out to me directly with any questions, looking forward to seeing everyone in Denver! Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org [1] www.openstack.org/ptg [2] http://www.marriott.com/hotels/travel/densa-renaissance-denver-stapleton-hotel/ [3] http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Team%20Gathering%20Two%20Queen%20Beds%5Edensa%60opnopnb%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes [4] http://www.marriott.com/meeting-event-hotels/group-corporate-travel/groupCorp.mi?resLinkData=Project%20Teams%20Gathering%20King%20Bed%5Edensa%60opnopna%60149.00%60USD%60false%604%609/5/18%609/18/18%608/20/18&app=resvlink&stop_mobi=yes [5] https://www.dhs.gov/visa-waiver-program-requirements [6] https://openstackfoundation.formstack.com/forms/visa_form_denver_2018_ptg [7] https://openstackfoundation.formstack.com/forms/travelsupportptg_denver_2018 [8] https://denver2018ptg.eventbrite.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekcs.openstack at gmail.com Thu Apr 19 17:52:02 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 19 Apr 2018 10:52:02 -0700 Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline In-Reply-To: <1524143700-sup-9515@lrrr.local> References: <1524143700-sup-9515@lrrr.local> Message-ID: Thank you, Doug. Question: do we need to do a client library release prior to R-3? The practice seems to change from cycle to cycle. On 4/19/18, 6:15 AM, "Doug Hellmann" wrote: >Today is the deadline for proposing a release for the Rocky-1 milestone. >Please don't forget to include your libraries (client or otherwise) as >well. 
> >Doug > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ekcs.openstack at gmail.com Thu Apr 19 17:53:53 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 19 Apr 2018 10:53:53 -0700 Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline In-Reply-To: References: <1524143700-sup-9515@lrrr.local> Message-ID: Specifically, for client library using the cycle-with-intermediary release model. On 4/19/18, 10:52 AM, "Eric K" wrote: >Thank you, Doug. Question: do we need to do a client library release prior >to R-3? The practice seems to change from cycle to cycle. > >On 4/19/18, 6:15 AM, "Doug Hellmann" wrote: > >>Today is the deadline for proposing a release for the Rocky-1 milestone. >>Please don't forget to include your libraries (client or otherwise) as >>well. >> >>Doug >> >>_________________________________________________________________________ >>_ >>OpenStack Development Mailing List (not for usage questions) >>Unsubscribe: >>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From jaosorior at gmail.com Thu Apr 19 18:05:31 2018 From: jaosorior at gmail.com (Juan Antonio Osorio) Date: Thu, 19 Apr 2018 18:05:31 +0000 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: +1 :D hell yeah! On Thu, 19 Apr 2018, 20:05 John Fulton, wrote: > +1 > > On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi > wrote: > > Greetings, > > > > As you probably know mcornea on IRC, Marius Cornea has been contributing > on > > TripleO for a while, specially on the upgrade bits. > > Part of the quality team, he's always testing real customer scenarios and > > brings a lot of good feedback in his reviews, and quite often takes care > of > > fixing complex bugs when it comes to advanced upgrades scenarios. > > He's very involved in tripleo-upgrade repository where he's already core, > > but I think it's time to let him +2 on other tripleo repos for the > patches > > related to upgrades (we trust people's judgement for reviews). > > > > As usual, we'll vote! > > > > Thanks everyone for your feedback and thanks Marius for your hard work > and > > involvement in the project. > > -- > > Emilien Macchi > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Thu Apr 19 18:15:13 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 19 Apr 2018 14:15:13 -0400 Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline In-Reply-To: References: <1524143700-sup-9515@lrrr.local> Message-ID: <1524161506-sup-3770@lrrr.local> Libraries do need to release by R-3 so that we have something to use as a branch point for the stable branches. We encourage releases earlier than that, for a couple of reasons. First, because of the way the CI system works, libraries are generally not used in test jobs unless they are released. (We should not be testing services with unreleased versions of clients except on patches to the client library itself.) This means nothing can use the client modifications until they are actually released. Second, releasing early and often gives us more time to fix issues, so we aren't rushing around at deadline trying to solve a problem while the gate is full of other last minute patches for other projects. So, you don't *have* to release a client library this week, but it is strongly encouraged. And really, is there any reason to wait, if you have patches that haven't been released? Excerpts from Eric K's message of 2018-04-19 10:53:53 -0700: > Specifically, for client library using the cycle-with-intermediary release > model. > > On 4/19/18, 10:52 AM, "Eric K" wrote: > > >Thank you, Doug. Question: do we need to do a client library release prior > >to R-3? The practice seems to change from cycle to cycle. > > > >On 4/19/18, 6:15 AM, "Doug Hellmann" wrote: > > > >>Today is the deadline for proposing a release for the Rocky-1 milestone. > >>Please don't forget to include your libraries (client or otherwise) as > >>well. > >> > >>Doug > >> > >>_________________________________________________________________________ > >>_ > >>OpenStack Development Mailing List (not for usage questions) > >>Unsubscribe: > >>OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > From mriedemos at gmail.com Thu Apr 19 18:34:44 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Thu, 19 Apr 2018 13:34:44 -0500 Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline In-Reply-To: <1524161506-sup-3770@lrrr.local> References: <1524143700-sup-9515@lrrr.local> <1524161506-sup-3770@lrrr.local> Message-ID: <8a2f30e8-24b9-f903-db81-e20db40f6292@gmail.com> On 4/19/2018 1:15 PM, Doug Hellmann wrote: > Second, releasing early and often gives us more time to fix issues, > so we aren't rushing around at deadline trying to solve a problem > while the gate is full of other last minute patches for other > projects. Yup, case in point: I waited too long to release python-novaclient 10.x in Queens and it prevented us from being able to include it in upper-constraints for Queens because it negatively impacted some other projects due to backward incompatible changes in the 10.x series of novaclient. So learn from my mistakes. 
-- Thanks, Matt From ekcs.openstack at gmail.com Thu Apr 19 18:37:48 2018 From: ekcs.openstack at gmail.com (Eric K) Date: Thu, 19 Apr 2018 11:37:48 -0700 Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline In-Reply-To: <8a2f30e8-24b9-f903-db81-e20db40f6292@gmail.com> References: <1524143700-sup-9515@lrrr.local> <1524161506-sup-3770@lrrr.local> <8a2f30e8-24b9-f903-db81-e20db40f6292@gmail.com> Message-ID: Got it thanks a lot, Doug and Matt! On 4/19/18, 11:34 AM, "Matt Riedemann" wrote: >On 4/19/2018 1:15 PM, Doug Hellmann wrote: >> Second, releasing early and often gives us more time to fix issues, >> so we aren't rushing around at deadline trying to solve a problem >> while the gate is full of other last minute patches for other >> projects. > >Yup, case in point: I waited too long to release python-novaclient 10.x >in Queens and it prevented us from being able to include it in >upper-constraints for Queens because it negatively impacted some other >projects due to backward incompatible changes in the 10.x series of >novaclient. So learn from my mistakes. > >-- > >Thanks, > >Matt > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From doug at doughellmann.com Thu Apr 19 19:21:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 19 Apr 2018 15:21:17 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: References: Message-ID: <1524165416-sup-7286@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-04-18 13:44:08 -0400: > Hi all, > > We have a number of stable branch jobs failing[0] with an error about pep8 > not being importable[1], when it's clearly installed[2]. We first saw this > when installing networking-generic-switch on queens in our multinode > grenade job. We hacked a fix there[3], as we couldn't figure it out and > thought it was a fluke. Now it's showing up elsewhere. > > I suspected a new pycodestyle was the culprit (maybe it kills off the pep8 > package somehow?) but pinning pycodestyle back a version didn't seem to > help. > > Any ideas what might be going on here? I'm completely lost. > > P.S. if anyone has the side question of why pep8 is being imported at > install time, it seems that pbr iterates over any entry points under > 'distutils.commands' for any installed package. flake8 has one of these > which must import pep8 to be resolved. I'm not sure *why* pbr needs to do > this, but I'll assume it's necessary. > > [0] https://review.openstack.org/#/c/557441/ > [1] > http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_48_01_508 > [2] > http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_822 > [3] https://review.openstack.org/#/c/561358/ > > // jim Reading through that log more carefully, I see an early attempt to pin pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being pulled in as a dependency of flake8-import-order==0.12 when neutron's test-requirements.txt is installed [2]. Then later when ironic's test-requirements.txt is installed pycodestyle is downgraded to 2.3.1 [3]. 
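
(The install-and-downgrade sequence from those logs can be replayed in a
throwaway virtualenv along the following lines. This is a sketch, not the
exact gate commands -- the initial pep8 install is an assumption about how
the module got onto the node in the first place, and the full package set
is in the paste [4].)

    import subprocess
    import sys

    def pip(*args):
        """Run pip in the current (throwaway) environment."""
        subprocess.check_call([sys.executable, '-m', 'pip'] + list(args))

    pip('install', 'pep8')                # provides the pep8 module
    pip('install', 'pycodestyle==2.4.0')  # its file list claims pep8.py
    pip('install', 'pycodestyle==2.3.1')  # downgrade removes 2.4.0's files

    try:
        import pep8  # noqa: F401
        print('pep8 still importable')
    except ImportError as err:
        print('pep8 gone after the downgrade: %s' % err)
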
Reproducing those install & downgrade steps, I see that pycodestyle
2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that
explains why pep8 is not re-installed when pycodestyle is downgraded.

I think the real problem here is that we have linter dependencies listed
in the test-requirements.txt files for our projects, and they are somehow
being installed without the constraints. I don't think they need to be
installed for devstack at all, so one way to fix it would be to move
those dependencies to the tox.ini section for running pep8, or to have
devstack look at the blacklisted packages and skip installing them.

Doug

[1] http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_39_00_392
[2] http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_44_56_527
[3] http://logs.openstack.org/41/557441/1/gate/ironic-tempest-dsvm-ironic-inspector-queens/5a4a6c9/logs/devstacklog.txt.gz#_2018-04-16_15_47_40_120
[4] http://paste.openstack.org/show/719580/

From doug at doughellmann.com  Thu Apr 19 19:45:45 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Thu, 19 Apr 2018 15:45:45 -0400
Subject: [openstack-dev] [all][ptl][release] reminder for rocky-1 milestone deadline
In-Reply-To: <8a2f30e8-24b9-f903-db81-e20db40f6292@gmail.com>
References: <1524143700-sup-9515@lrrr.local> <1524161506-sup-3770@lrrr.local>
 <8a2f30e8-24b9-f903-db81-e20db40f6292@gmail.com>
Message-ID: <1524167120-sup-991@lrrr.local>

Excerpts from Matt Riedemann's message of 2018-04-19 13:34:44 -0500:
> On 4/19/2018 1:15 PM, Doug Hellmann wrote:
> > Second, releasing early and often gives us more time to fix issues,
> > so we aren't rushing around at deadline trying to solve a problem
> > while the gate is full of other last minute patches for other
> > projects.
>
> Yup, case in point: I waited too long to release python-novaclient 10.x
> in Queens and it prevented us from being able to include it in
> upper-constraints for Queens because it negatively impacted some other
> projects due to backward incompatible changes in the 10.x series of
> novaclient. So learn from my mistakes.
>

Thanks, Matt, that's a perfect example of what we're trying to avoid.

Doug

From pabelanger at redhat.com  Thu Apr 19 22:10:39 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 19 Apr 2018 18:10:39 -0400
Subject: [openstack-dev] [infra] Removal of debian-jessie, replaced by debian-stable (stretch)
Message-ID: <20180419221039.GA19857@localhost.localdomain>

Greetings,

I'd like to propose, now that we have debian-stable (stretch) nodesets
online for nodepool, that we start the process to remove debian-jessie.
As far as I can see, there are really only 2 projects using debian-jessie:

* ARA
* ansible-hardening

I've already proposed patches to update their jobs to debian-stable,
replacing debian-jessie:

https://review.openstack.org/#/q/topic:debian-stable

You'll also notice we are not using debian-stretch directly for the
nodeset; this is on purpose, so that when the next release of Debian
(buster) happens, we don't need to make a bunch of in-repo changes to
projects, but can simply update the label of the nodeset from
debian-stretch to debian-buster.

Of course, we'd need to give a fair amount of notice when we plan to make
that change, but given this nodeset isn't part of our LTS platform
(ubuntu / centos) I believe this will help us in openstack-infra migrate
projects to the latest distro as fast as possible.

Thoughts?
Paul

From mriedemos at gmail.com  Thu Apr 19 22:11:58 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 19 Apr 2018 17:11:58 -0500
Subject: [openstack-dev] [docs] When should we say 'run as root' in the docs?
Message-ID: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com>

How loose are we with saying things like, "you should run this as root"
in the docs?

I was triaging this nova bug [1] which is saying that the docs should
tell you to run nova-status (which implies also nova-manage) as root,
but isn't running admin-level CLIs implied that you need root, or
something with access to those commands (sudo)?

I'm not sure how prescriptive we should be with stuff like this in the
docs because if we did start saying this, I feel like we'd have to say
it everywhere.

[1] https://bugs.launchpad.net/nova/+bug/1764530

--

Thanks,

Matt

From edmondsw at us.ibm.com  Thu Apr 19 22:32:36 2018
From: edmondsw at us.ibm.com (William M Edmonds)
Date: Thu, 19 Apr 2018 18:32:36 -0400
Subject: [openstack-dev] [docs] When should we say 'run as root' in the docs?
In-Reply-To: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com>
References: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com>
Message-ID:

Matt Riedemann wrote on 04/19/2018 06:11:58 PM:
> How loose are we with saying things like, "you should run this as root"
> in the docs?
>
> I was triaging this nova bug [1] which is saying that the docs should
> tell you to run nova-status (which implies also nova-manage) as root,
> but isn't running admin-level CLIs implied that you need root, or
> something with access to those commands (sudo)?
>
> I'm not sure how prescriptive we should be with stuff like this in the
> docs because if we did start saying this, I feel like we'd have to say
> it everywhere.
>
> [1] https://bugs.launchpad.net/nova/+bug/1764530

Maybe instead of treating this as a docs bug, we should fix the command
to return a nicer error when run as non-root. Presumably the caller has
root access, but forgot they were logged in as something else or forgot
sudo. Dumping that stack trace on them is more likely to confuse than
anything.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pabelanger at redhat.com  Thu Apr 19 23:37:36 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Thu, 19 Apr 2018 19:37:36 -0400
Subject: [openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets
Message-ID: <20180419233736.GA4807@localhost.localdomain>

Greetings,

With the ubuntu-bionic release around the corner, we'll be starting
discussions about migrating jobs from ubuntu-xenial to ubuntu-bionic.

One topic I'd like to raise is around job migrations from legacy to
native zuulv3. Specifically, I'd like to propose we do not add
legacy-ubuntu-bionic nodesets into openstack-zuul-jobs. Projects should
be working towards moving away from the legacy format, as they were just
copypasta from our previous JJB templates.

Projects would still be free to move them in-tree, but I would highly
encourage projects not to do this, as it only delays the issue.

The good news is the majority of jobs have already been moved to native
zuulv3 jobs, but there are still some projects depending on the legacy
nodesets. For example, tox-based jobs would not be affected.
It would mostly be dsvm-based jobs that haven't been switched to use the
new devstack jobs for zuulv3.

-Paul

From sangho at opennetworking.org  Fri Apr 20 01:01:17 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Fri, 20 Apr 2018 10:01:17 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To: <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org>
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com>
 <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com>
 <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com>
 <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org>
Message-ID: <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org>

Dear Neutron-Release team,

I wonder if any of you can add me to the networking-onos-release group.
It seems that Vikram is busy. :-)

Thank you,

Sangho


> On 19 Apr 2018, at 9:18 AM, Sangho Shin wrote:
>
> Ian,
>
> Thank you so much for your help.
> I have requested Vikram to add me to the release team.
> He should be able to help me. :-)
>
> Sangho
>
>
>> On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote:
>>
>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote:
>>> By the way, since the networking-onos-release group has no neutron
>>> release team group, I think infra team can help to include neutron
>>> release team and neutron release team can help to create branches
>>> for the repo if there is no response from current
>>> networking-onos-release group member.
>>
>> This seems sane and I've added neutron-release to
>> networking-onos-release.
>>
>> I'm hesitant to give advice on branching within a project like neutron
>> as I'm sure there's stuff I'm not aware of; but members of the
>> neutron-release team should be able to get you going.
>>
>> Thanks,
>>
>> -i
>

From Tushar.Patil at nttdata.com  Fri Apr 20 04:38:35 2018
From: Tushar.Patil at nttdata.com (Patil, Tushar)
Date: Fri, 20 Apr 2018 04:38:35 +0000
Subject: [openstack-dev] [sdk][masakari] Need newer version of openstacksdk
Message-ID:

Hi SDK team,

A few weeks back we moved the "instance_ha" service code from
python-masakariclient into the openstacksdk project in patch [1] and it
got merged.

Currently, masakari-monitors is totally broken and it requires a newer
version of openstacksdk. We have proposed a patch [2] in masakari-monitors
to fix the issue, but we cannot merge it until a newer version of
openstacksdk is available.

Could you please release a newer version of openstacksdk? Thank you.

[1] : https://review.openstack.org/#/c/555710
[2] : https://review.openstack.org/#/c/546492

Best Regards,
Tushar Patil

______________________________________________________________________
Disclaimer: This email and any attachments are sent in strictest
confidence for the sole use of the addressee and may contain legally
privileged, confidential, and proprietary data. If you are not the
intended recipient, please advise the sender by replying promptly to this
email and then delete and destroy this email and any attachments without
any further use, copying or forwarding.

From Cory at Hawkless.id.au  Fri Apr 20 06:00:01 2018
From: Cory at Hawkless.id.au (Cory Hawkless)
Date: Fri, 20 Apr 2018 06:00:01 +0000
Subject: [openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP
In-Reply-To:
References:
Message-ID: <18C7C076CE65A443BC1DEC057949DEFE6F1950E9@CorysCloudVPS.Oblivion.local>

I'm also seeing this issue, but with routers and networks as well.
The Apache server running Horizon logs the following:

ERROR horizon.tables.base Error while checking action permissions.
Traceback (most recent call last):
  File "/usr/share/openstack-dashboard/horizon/tables/base.py", line 1389, in _filter_action
    return action._allowed(request, datum) and row_matched
  File "/usr/share/openstack-dashboard/horizon/tables/actions.py", line 139, in _allowed
    self.allowed(request, datum))
  File "/usr/share/openstack-dashboard/openstack_dashboard/dashboards/project/networks/tables.py", line 85, in allowed
    usages = quotas.tenant_quota_usages(request, targets=('network', ))
  File "/usr/share/openstack-dashboard/horizon/utils/memoized.py", line 95, in wrapped
    value = cache[key] = func(*args, **kwargs)
  File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 419, in tenant_quota_usages
    _get_tenant_network_usages(request, usages, disabled_quotas, tenant_id)
  File "/usr/share/openstack-dashboard/openstack_dashboard/usage/quotas.py", line 320, in _get_tenant_network_usages
    details = neutron.tenant_quota_detail_get(request, tenant_id)
  File "/usr/share/openstack-dashboard/openstack_dashboard/api/neutron.py", line 1457, in tenant_quota_detail_get
    response = neutronclient(request).get('/quotas/%s/details' % tenant_id)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 354, in get
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 331, in retry_request
    headers=headers, params=params)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 294, in do_request
    self._handle_fault_response(status_code, replybody, resp)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 269, in _handle_fault_response
    exception_handler_v20(status_code, error_body)
  File "/usr/lib/python2.7/dist-packages/neutronclient/v2_0/client.py", line 93, in exception_handler_v20
    request_ids=request_ids)
Forbidden: User does not have admin privileges: Cannot GET resource for non admin tenant. Neutron server returns request_ids: ['req-3db6924c-1937-4c34-b5fa-bd3ae52f0c10']

From: Gary Kotton [mailto:gkotton at vmware.com]
Sent: Monday, 9 April 2018 10:03 PM
To: OpenStack List
Subject: [openstack-dev] [neutron][horizon][l2gw] Unable to create a floating IP

Hi,
From Queens onwards we have an issue with horizon and L2GW. We are unable to create a floating IP. This does not occur when using the CLI, only via horizon.
The error received is 'Error: User does not have admin privileges: Cannot GET resource for non admin tenant. Neutron server returns request_ids: ['req-f07a3aac-0994-4d3a-8409-1e55b374af9d']'
This is due to:
https://github.com/openstack/networking-l2gw/blob/master/networking_l2gw/db/l2gateway/l2gateway_db.py#L316
This worked in Ocata and I am not sure what has changed since then ☹. Maybe in the past the Ocata quotas were not checking L2gw.
Any ideas?
Thanks
Gary

From aj at suse.com Fri Apr 20 07:04:12 2018
From: aj at suse.com (Andreas Jaeger)
Date: Fri, 20 Apr 2018 09:04:12 +0200
Subject: [openstack-dev] [docs] When should we say 'run as root' in the docs?
In-Reply-To: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com>
References: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com>
Message-ID: <0c4cb174-44a8-6a25-b1cb-e26d3aa9a670@suse.com>

On 2018-04-20 00:11, Matt Riedemann wrote:
> How loose are we with saying things like, "you should run this as root"
> in the docs?
>
> I was triaging this nova bug [1] which is saying that the docs should
> tell you to run nova-status (which implies also nova-manage) as root,
> but isn't it implied that running admin-level CLIs requires root, or
> something with access to those commands (sudo)?
>
> I'm not sure how prescriptive we should be with stuff like this in the
> docs, because if we did start saying this, I feel like we'd have to say
> it everywhere.
>
> [1] https://bugs.launchpad.net/nova/+bug/1764530

We use in openstack-manuals "# root-command" and "$ non-root command", see:

https://docs.openstack.org/install-guide/common/conventions.html

so, just add the "#" for these.

But looking at https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/verify.rst#n103, it is there - so, closed invalid IMHO,

Andreas
-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
    GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From jean-philippe at evrard.me Fri Apr 20 07:16:07 2018
From: jean-philippe at evrard.me (Jean-Philippe Evrard)
Date: Fri, 20 Apr 2018 08:16:07 +0100
Subject: [openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets
In-Reply-To: <20180419233736.GA4807@localhost.localdomain>
References: <20180419233736.GA4807@localhost.localdomain>
Message-ID:

That's very cool.
Any idea of the distribution of nodes between xenial and bionic? Is that a very restricted amount of nodes?

On 20 April 2018 at 00:37, Paul Belanger wrote:
> Greetings,
>
> With the ubuntu-bionic release around the corner, we'll be starting discussions about
> migrating jobs from ubuntu-xenial to ubuntu-bionic.
>
> One topic I'd like to raise is around job migrations from legacy to native
> zuulv3. Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
> nodesets into openstack-zuul-jobs. Projects should be working towards moving
> away from the legacy format, as these were just copypasta from our previous JJB
> templates.
>
> Projects would still be free to move them in-tree, but I would highly encourage
> projects not to do this, as it only delays the issue.
>
> The good news is the majority of jobs have already been moved to native zuulv3
> jobs, but there are some projects still depending on the legacy nodesets.
> For example, tox based jobs would not be affected. It mostly would be dsvm
> based jobs that haven't been switched to use the new devstack jobs for zuulv3.
>
> -Paul

From jichenjc at cn.ibm.com Fri Apr 20 08:02:37 2018
From: jichenjc at cn.ibm.com (Chen CH Ji)
Date: Fri, 20 Apr 2018 16:02:37 +0800
Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To:
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

Thanks a lot for sharing, that's good info. Just curious why [1] needs the gzip and base64 encoding, if my understanding is correct. I was told the nova format should be pure vfat or iso9660; I assume it's because the config drive itself is actually built as an iso by default, which is then wrapped in a gzip/base64 format ...
thanks

[1] https://github.com/openstack/nova/blob/324899c621ee02d877122ba3412712ebb92831f2/nova/virt/ironic/driver.py#L977

Best Regards!

Kevin (Chen) Ji 纪 晨
Engineer, zVM Development, CSTL
Notes: Chen CH Ji/China/IBM at IBMCN
Internet: jichenjc at cn.ibm.com
Phone: +86-10-82451493
Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC

From: Jim Rollenhagen
To: "OpenStack Development Mailing List (not for usage questions)"
Date: 04/19/2018 12:02 AM
Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config drive format

On Wed, Apr 18, 2018 at 10:56 AM, Matthew Booth wrote:

> I agree with Mikal that needing more agent behavior than cloud-init does
> a disservice to the users.
>
> I feel like we get a lot of "but no, my hypervisor is special!"
> reasoning when people go to add a driver to nova. So far, I think
> they're a lot more similar than people think. Ironic is the weirdest one
> we have (IMHO and no offense to the ironic folks) and it can support
> configdrive properly.

I was going to ask this. Even if the contents of the disk can't be transferred in advance... how does ironic do this? There must be a way.

I'm not sure if this is a rhetorical question, so I'll just answer it. :)

We basically build the configdrive in nova-compute, then gzip and base64 it, and send it to ironic with the deploy request. On the ironic side, we unpack it and write it to the end of the boot disk.

https://github.com/openstack/nova/blob/324899c621ee02d877122ba3412712ebb92831f2/nova/virt/ironic/driver.py#L952-L985

// jim

From yprokule at redhat.com Fri Apr 20 08:21:18 2018
From: yprokule at redhat.com (Yurii Prokulevych)
Date: Fri, 20 Apr 2018 10:21:18 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID: <1524212478.27935.0.camel@redhat.com>

+1 Well deserved!

On Thu, 2018-04-19 at 18:05 +0000, Juan Antonio Osorio wrote:
> +1 :D hell yeah!
>
> On Thu, 19 Apr 2018, 20:05 John Fulton, wrote:
> > +1
> >
> > > > > > Thanks everyone for your feedback and thanks Marius for your hard > > work and > > > involvement in the project. > > > -- > > > Emilien Macchi > > > > > > > > ___________________________________________________________________ > > _______ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:un > > subscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > ___________________________________________________________________ > > _______ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsu > > bscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From haibin.huang at intel.com Fri Apr 20 08:24:49 2018 From: haibin.huang at intel.com (Huang, Haibin) Date: Fri, 20 Apr 2018 08:24:49 +0000 Subject: [openstack-dev] about cloud-init Message-ID: <26F9979367EE7A488DB1ED79369D372039C00A2B@SHSMSX104.ccr.corp.intel.com> Hi All, I have a problem about cloud-init. I want to both transfer files and execute script. So I give below script to user-data when I create instance. #cloud-config write_files: - encoding: b64 content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBwAAAA== owner: root:root path: /root/hhb.gz permissions: '0644' #!/bin/bash mkdir -p /home/ubuntu/config but, I can't get /root/hhb.gz and /home/Ubuntu/config. If I separate transfer files and execute script. It is ok. Any idea? Below is my debug info ubuntu at onap-hhb7:~$ sudo cloud-init --version sudo: unable to resolve host onap-hhb7 cloud-init 0.7.5 security-groupsubuntu at onap-hhb7:~$ curl http://169.254.169.254/2009-04-04/user-data #cloud-config write_files: - encoding: b64 content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBwAAAA== owner: root:root path: /root/hhb.gz permissions: '0644' #!/bin/bash mkdir -p /home/ubuntu/config ubuntu at onap-hhb7:~$ sudo ls /root/ -a . .. .bashrc .profile .ssh ubuntu at onap-hhb7:/var/lib/cloud/instance$ ls boot-finished datasource obj.pkl sem user-data.txt.i vendor-data.txt.i cloud-config.txt handlers scripts user-data.txt vendor-data.txt ubuntu at onap-hhb7:/var/lib/cloud/instance$ sudo cat user-data.txt sudo: unable to resolve host onap-hhb7 #cloud-config write_files: - encoding: b64 content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBwAAAA== owner: root:root path: /root/hhb.gz permissions: '0644' #!/bin/bash mkdir -p /home/ubuntu/config ------------------------------------------------------------------------------------------------------------------------------- Huang.haibin 11628530 86+18106533356 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Fri Apr 20 09:23:13 2018 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Fri, 20 Apr 2018 11:23:13 +0200 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: <5c5d6111-1879-1626-0463-7ed8155524c0@redhat.com> +1! On 19.4.2018 19:01, Emilien Macchi wrote: > Greetings, > > As you probably know mcornea on IRC, Marius Cornea has been contributing on > TripleO for a while, specially on the upgrade bits. 
Below is my debug info:

ubuntu at onap-hhb7:~$ sudo cloud-init --version
sudo: unable to resolve host onap-hhb7
cloud-init 0.7.5

ubuntu at onap-hhb7:~$ curl http://169.254.169.254/2009-04-04/user-data
#cloud-config
write_files:
  - encoding: b64
    content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBwAAAA==
    owner: root:root
    path: /root/hhb.gz
    permissions: '0644'

#!/bin/bash
mkdir -p /home/ubuntu/config

ubuntu at onap-hhb7:~$ sudo ls /root/ -a
. .. .bashrc .profile .ssh

ubuntu at onap-hhb7:/var/lib/cloud/instance$ ls
boot-finished datasource obj.pkl sem user-data.txt.i vendor-data.txt.i cloud-config.txt handlers scripts user-data.txt vendor-data.txt

ubuntu at onap-hhb7:/var/lib/cloud/instance$ sudo cat user-data.txt
sudo: unable to resolve host onap-hhb7
#cloud-config
write_files:
  - encoding: b64
    content: H4sICMxh2VoAA2hoYgCzKE5JK07hAgDCo1pOBwAAAA==
    owner: root:root
    path: /root/hhb.gz
    permissions: '0644'

#!/bin/bash
mkdir -p /home/ubuntu/config

--
Huang.haibin
11628530
86+18106533356

From jistr at redhat.com Fri Apr 20 09:23:13 2018
From: jistr at redhat.com (Jiří Stránský)
Date: Fri, 20 Apr 2018 11:23:13 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID: <5c5d6111-1879-1626-0463-7ed8155524c0@redhat.com>

+1!

On 19.4.2018 19:01, Emilien Macchi wrote:
> Greetings,
>
> As you probably know mcornea on IRC, Marius Cornea has been contributing on
> TripleO for a while, especially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios and
> brings a lot of good feedback in his reviews, and quite often takes care of
> fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already core,
> but I think it's time to let him +2 on other tripleo repos for the patches
> related to upgrades (we trust people's judgement for reviews).
>
> As usual, we'll vote!
>
> Thanks everyone for your feedback and thanks Marius for your hard work and
> involvement in the project.
>

From thierry at openstack.org Fri Apr 20 09:28:30 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Fri, 20 Apr 2018 11:28:30 +0200
Subject: [openstack-dev] [tc] Technical Committee Status update, April 20th
Message-ID: <375c6f16-bf19-c449-111b-8d7c7b530cf3@openstack.org>

Hi!

This is the weekly summary of Technical Committee initiatives. You can find the full list of currently-considered changes at:
https://wiki.openstack.org/wiki/Technical_Committee_Tracker

We also track TC objectives for the cycle using StoryBoard at:
https://storyboard.openstack.org/#!/project/923

== Recently-approved changes ==

* Adjust TC election target to 6 weeks before summit [1]
* Update docs job info to match current PTI [2]
* Goal updates: barbican
* New repos: tripleo-ha-utils, puppet-senlin

[1] https://review.openstack.org/#/c/560002/
[2] https://review.openstack.org/#/c/556576/

The main change this week is a TC charter change moving the dates for future TC elections, six weeks away from the summit rather than just three weeks away, giving more time for newly-elected members to plan their summit presence. You can read the full TC charter here:
https://governance.openstack.org/tc/reference/charter.html

== Election season ==

We are renewing 7 of the Technical Committee's 13 seats. We have 10 great candidates with some geographic diversity. Voting will start early next week. If you have questions for the candidates, please ask them on the mailing-list ASAP!

You can find details on the process at:
https://governance.openstack.org/election/

== Under discussion ==

We have two open patches which will probably wait until the end of the election season to be finally approved.

The first one is a review proposing the split of the kolla-kubernetes deliverable out of the Kolla team. The various teams involved are coming to an agreement that Kolla-k8s should be abandoned in favor of OpenStack-Helm. A (new) change should be proposed soon to do that. If you have an opinion on that, please chime in on the (currently-proposed) review or the Kolla team ML thread:
https://review.openstack.org/#/c/552531/
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129452.html

The other discussion is around the proposed Adjutant project team addition. At this point the discussion is expected to restart after the election, and culminate in a Forum session in Vancouver where we hope the various involved parties will be able to discuss more directly. You can jump in the discussion here:
https://review.openstack.org/#/c/553643/

== TC member actions/focus/discussions for the coming week(s) ==

Voting will be open to renew part of the TC next week.
I also started a thread around potential topics for the joint Board+TC+UC+Staff meeting in Vancouver, please join in if you have suggestions:
http://lists.openstack.org/pipermail/openstack-dev/2018-April/129428.html

== Office hours ==

To be more inclusive of all timezones and more mindful of people for which English is not the primary language, the Technical Committee dropped its dependency on weekly meetings. So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc:

* 09:00 UTC on Tuesdays
* 01:00 UTC on Wednesdays
* 15:00 UTC on Thursdays

Feel free to add your own office hour conversation starter at:
https://etherpad.openstack.org/p/tc-office-hour-conversation-starters

Cheers,

-- 
Thierry Carrez (ttx)

From sgolovat at redhat.com Fri Apr 20 09:45:51 2018
From: sgolovat at redhat.com (Sergii Golovatiuk)
Date: Fri, 20 Apr 2018 11:45:51 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To: <5c5d6111-1879-1626-0463-7ed8155524c0@redhat.com>
References: <5c5d6111-1879-1626-0463-7ed8155524c0@redhat.com>
Message-ID:

+1. Well done.

On Fri, Apr 20, 2018 at 11:23 AM, Jiří Stránský wrote:
> +1!
>
> On 19.4.2018 19:01, Emilien Macchi wrote:
>>
>> Greetings,
>>
>> As you probably know mcornea on IRC, Marius Cornea has been contributing on
>> TripleO for a while, especially on the upgrade bits.
>> Part of the quality team, he's always testing real customer scenarios and
>> brings a lot of good feedback in his reviews, and quite often takes care of
>> fixing complex bugs when it comes to advanced upgrades scenarios.
>> He's very involved in tripleo-upgrade repository where he's already core,
>> but I think it's time to let him +2 on other tripleo repos for the patches
>> related to upgrades (we trust people's judgement for reviews).
>>
>> As usual, we'll vote!
>>
>> Thanks everyone for your feedback and thanks Marius for your hard work and
>> involvement in the project.
>>

-- 
Best Regards,
Sergii Golovatiuk

From derekh at redhat.com Fri Apr 20 10:42:19 2018
From: derekh at redhat.com (Derek Higgins)
Date: Fri, 20 Apr 2018 11:42:19 +0100
Subject: [openstack-dev] [tripleo] Ironic Inspector in the overcloud
In-Reply-To:
References: <4b7e509e-3c1c-6ba1-be1c-59708d22919a@redhat.com>
Message-ID:

On 18 April 2018 at 17:12, Derek Higgins wrote:
>
> On 18 April 2018 at 14:22, Bogdan Dobrelya wrote:
>
>> On 4/18/18 12:07 PM, Derek Higgins wrote:
>>
>>> Hi All,
>>>
>>> I've been testing the ironic inspector containerised service in the
>>> overcloud. The service essentially works, but there are a couple of hurdles
>>> to tackle to set it up; the first of these is how to get the IPA kernel
>>> and ramdisk where they need to be.
>>>
>>> These need to be present in the ironic_pxe_http container to be
>>> served out over http. What's the best way to get them there?
>>>
>>> On the undercloud this is done by copying the files across the
>>> filesystem[1][2] to /httpboot when we run "openstack overcloud image
>>> upload", but on the overcloud an alternative is required. Could the files
>>> be pulled into the container during setup?
>>>
>>
>> I'd prefer to keep bind-mounting IPA kernel and ramdisk into a container via
>> the /var/lib/ironic/httpboot host-path. So the question then becomes how to
>> deliver those by that path for overcloud nodes?
>>
> Yup it does, I'm currently looking into using DeployArtifactURLs to
> download the files to the controller nodes
>
>>> >>> On the undercloud this is done by copying the files across the >>> filesystem[1][2] to /httpboot when we run "openstack overcloud image >>> upload", but on the overcloud an alternative is required, could the files >>> be pulled into the container during setup? >>> >> >> I'd prefer keep bind-mounting IPA kernel and ramdisk into a container via >> the /var/lib/ironic/httpboot host-path. So the question then becomes how to >> deliver those by that path for overcloud nodes? >> > Yup it does, I'm currently looking into using DeployArtifactURLs to > download the files to the controller nodes > It turns out this wont work as Deploy artifacts downloads to all hosts which we don't want, I'm instead going to propose we add a docker config to download the files over http, by default it will use the same images that were used by the undercloud https://review.openstack.org/#/c/563072/1 > > >> >> >>> thanks, >>> Derek >>> >>> 1 - https://github.com/openstack/python-tripleoclient/blob/3cf44 >>> eb/tripleoclient/v1/overcloud_image.py#L421-L433 >>> 2 - https://github.com/openstack/python-tripleoclient/blob/3cf44 >>> eb/tripleoclient/v1/overcloud_image.py#L181 >>> >>> ____________________________________________________________ >>> ______________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.op >>> enstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at est.fujitsu.com Fri Apr 20 10:51:53 2018 From: witold.bedyk at est.fujitsu.com (Bedyk, Witold) Date: Fri, 20 Apr 2018 10:51:53 +0000 Subject: [openstack-dev] [monasca] Monasca PTG participation survey Message-ID: <1a122f4b08b843c8b579b5004076979e@R01UKEXCASM126.r01.fujitsu.local> Hello everyone, The next PTG will take place September 10-14, 2018 in Denver, Colorado [1]. Again we have to decide if Monasca will participate and gather together with other projects. The last PTG was a great success which is measurable in new code already submitted and number of reviews. But I also understand that it's not always easy to travel. Please take a minute, consider all the pros and cons, and fill out this form [2] until Wednesday, May 2nd. Cheers Witek [1] https://www.openstack.org/ptg [2] https://goo.gl/forms/z2Bu5RlXin30wTpA3 From marios at redhat.com Fri Apr 20 11:15:03 2018 From: marios at redhat.com (Marios Andreou) Date: Fri, 20 Apr 2018 14:15:03 +0300 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: +10000 ! On Thu, Apr 19, 2018 at 8:01 PM, Emilien Macchi wrote: > Greetings, > > As you probably know mcornea on IRC, Marius Cornea has been contributing > on TripleO for a while, specially on the upgrade bits. > Part of the quality team, he's always testing real customer scenarios and > brings a lot of good feedback in his reviews, and quite often takes care of > fixing complex bugs when it comes to advanced upgrades scenarios. 
> >>> thanks,
> >>> Derek
> >>>
> >>> 1 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L421-L433
> >>> 2 - https://github.com/openstack/python-tripleoclient/blob/3cf44eb/tripleoclient/v1/overcloud_image.py#L181
>>
>> --
>> Best regards,
>> Bogdan Dobrelya,
>> Irc #bogdando

From witold.bedyk at est.fujitsu.com Fri Apr 20 10:51:53 2018
From: witold.bedyk at est.fujitsu.com (Bedyk, Witold)
Date: Fri, 20 Apr 2018 10:51:53 +0000
Subject: [openstack-dev] [monasca] Monasca PTG participation survey
Message-ID: <1a122f4b08b843c8b579b5004076979e@R01UKEXCASM126.r01.fujitsu.local>

Hello everyone,

The next PTG will take place September 10-14, 2018 in Denver, Colorado [1]. Again we have to decide if Monasca will participate and gather together with other projects. The last PTG was a great success, which is measurable in new code already submitted and the number of reviews. But I also understand that it's not always easy to travel.

Please take a minute, consider all the pros and cons, and fill out this form [2] by Wednesday, May 2nd.

Cheers
Witek

[1] https://www.openstack.org/ptg
[2] https://goo.gl/forms/z2Bu5RlXin30wTpA3

From marios at redhat.com Fri Apr 20 11:15:03 2018
From: marios at redhat.com (Marios Andreou)
Date: Fri, 20 Apr 2018 14:15:03 +0300
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID:

+10000 !

On Thu, Apr 19, 2018 at 8:01 PM, Emilien Macchi wrote:

> Greetings,
>
> As you probably know mcornea on IRC, Marius Cornea has been contributing
> on TripleO for a while, especially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios and
> brings a lot of good feedback in his reviews, and quite often takes care of
> fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already core,
> but I think it's time to let him +2 on other tripleo repos for the patches
> related to upgrades (we trust people's judgement for reviews).
>
> As usual, we'll vote!
>
> Thanks everyone for your feedback and thanks Marius for your hard work and
> involvement in the project.
> --
> Emilien Macchi

From jim at jimrollenhagen.com Fri Apr 20 11:33:51 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 20 Apr 2018 07:33:51 -0400
Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found
In-Reply-To: <1524165416-sup-7286@lrrr.local>
References: <1524165416-sup-7286@lrrr.local>
Message-ID:

On Thu, Apr 19, 2018 at 3:21 PM, Doug Hellmann wrote:
>
> Reading through that log more carefully, I see an early attempt to pin
> pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being
> pulled in as a dependency of flake8-import-order==0.12 when neutron's
> test-requirements.txt is installed [2]. Then later when ironic's
> test-requirements.txt is installed pycodestyle is downgraded to 2.3.1
> [3].
>
> Reproducing those install & downgrade steps, I see that pycodestyle
> 2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that
> explains why pep8 is not re-installed when pycodestyle is downgraded.

Aha, interesting! That's a fun one. :)

> I think the real problem here is that we have linter dependencies listed
> in the test-requirements.txt files for our projects, and they are
> somehow being installed without the constraints.

This is because they're in the blacklist, right?

> I don't think they need
> to be installed for devstack at all, so one way to fix it would be to
> move those dependencies to the tox.ini section for running pep8, or to
> have devstack look at the blacklisted packages and skip installing them.

Yeah, seems like either would work. With the latter, would devstack edit these out of test-requirements.txt before installing, I presume? The former seems less hacky, I'll proceed with that unless folks have objections.

Thanks for the help, Doug! :)

// jim
> > Thanks everyone for your feedback and thanks Marius for your hard work and > involvement in the project. > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Yolanda Robla Mota Principal Software Engineer, RHCE Red Hat C/Avellana 213 Urb Portugal yroblamo at redhat.com M: +34605641639 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Apr 20 12:00:18 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 20 Apr 2018 13:00:18 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-16 Message-ID: This is a "contract" update, the lists of specs and other has not had new things added to it, only stuff that is done removed. There are of course new things out there, but they will be added next week. # Most Important Nested providers in allocation candidates remains the important stuff. There was a big email thread about some aspects of this http://lists.openstack.org/pipermail/openstack-dev/2018-April/129477.html which eventually led to a modification of the spec on granular resource requests for a new query parameter: https://review.openstack.org/#/c/562687/ The other main thing in progress is consumer generations. There's nothing currently in a runway or the runways queue related to placement. Anything ready to go? # What's Changed Besides the new query parameter above we've also got: forbidden traits support merged on both the placement and extra specs processing sides, and some performance improvements in the SQL for checking capacity when doing allocations. The framework that allows error responses to include an error code has merged. Future errors should provide codes, see: https://docs.openstack.org/nova/latest/contributor/placement.html#adding-a-new-handler for information on how to do that. This is especially important for the many different types of 409 responses that we can produce (even more coming with consumer generations). With forbidden being some definition of "done" it's no longer a main theme and "Granular" will take its place as a new theme. This is closely tied to nested providers but is enough of an undertaking to get its own theme. Update provider tree is also effectively done, so it is gone from themes as well. There's ongoing work to use ProviderTree in the virt drivers but that's not captured by the theme. # Bugs * Placement related bugs not yet in progress: https://goo.gl/TgiPXb 16, +2 on last week * In progress placement bugs: https://goo.gl/vzGGDQ 12, -1 on last week # Specs There have been some spec additions or modifications this week, but those are not present here. This is last week's list, with abandoned or merged stuff trimmed. Move these along before moving the others along, if possible. Total last week: 14. Now: 12 (just from this list). Merge or abandon more specs! 
* https://review.openstack.org/#/c/549067/
  VMware: place instances on resource pool (using update_provider_tree)
* https://review.openstack.org/#/c/552924/
  Proposes NUMA topology with RPs
* https://review.openstack.org/#/c/544683/
  Account for host agg allocation ratio in placement
* https://review.openstack.org/#/c/552105/
  Support default allocation ratios
* https://review.openstack.org/#/c/438640/
  Spec on preemptible servers
* https://review.openstack.org/#/c/557065/
  Proposes Multiple GPU types
* https://review.openstack.org/#/c/555081/
  Standardize CPU resource tracking
* https://review.openstack.org/#/c/502306/
  Network bandwidth resource provider
* https://review.openstack.org/#/c/509042/
  Propose counting quota usage from placement
* https://review.openstack.org/#/c/560174/
  Add history behind nullable project_id and user_id
* https://review.openstack.org/#/c/559466/
  Return resources of entire trees in Placement
* https://review.openstack.org/#/c/560974/
  Numbered request groups use different providers

# Main Themes

## Nested providers in allocation candidates

Representing nested providers in the response to GET /allocation_candidates is required to actually make use of all the topology that update provider tree will report. That work is in progress at:

https://review.openstack.org/#/q/topic:bp/nested-resource-providers
https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates

(Someone want to clue me in as to whether that first topic is still legit?)

## Mirror nova host aggregates to placement

This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates.

https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates

This is still in progress but took a little attention break while nested provider discussions took up (and destroyed) brains.

## Consumer Generations

This allows multiple agents to "safely" update allocations for a single consumer. The code is in progress:

https://review.openstack.org/#/q/topic:bp/add-consumer-generation

This is moving along.

## Granular

Ways and means of addressing granular requests when dealing with nested resource providers. Granular in this sense is grouping resource classes and traits together in their own lumps as required. The email debate mentioned above is about how those lumps would like to associate with one another. Topic is:

https://review.openstack.org/#/q/topic:bp/granular-resource-requests

# Extraction

The spec for optional database handling, which helps provide options for migrating to an independent placement service as well as drive experiments in extraction, has merged. Which means the stack of code beginning at:

https://review.openstack.org/#/c/362766/

is legit for some review. The first commit on that stack was in 2016.

Two other main issues in extraction:

The creation of an os-resource-classes library. This will encapsulate the standard resource classes in its own thing. Jay has plans to work on this but the aforementioned nested stuff...

The placement unit and functional tests have a lot of dependence on the fixtures and base classes used in the nova unit and functional tests. For the time being that is okay, but it would be useful to start unwinding that, soon. The same will be true for config.

# Other

As a contract, this will hopefully be shorter than last week and not have anything new. There were 18 entries last week; 14 now. There is plenty of other work in progress that is not listed here.
* https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement * https://review.openstack.org/#/c/546713/ Add compute capabilities traits (to os-traits) * https://review.openstack.org/#/c/524425/ General policy sample file for placement * https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement) * https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent * https://review.openstack.org/#/c/556669/ Handle agg generation conflict in report client * https://review.openstack.org/#/c/557086/ Remove usage of [placement]os_region_name * https://review.openstack.org/#/c/537614/ Add unit test for non-placement resize * https://review.openstack.org/#/c/554357/ Address issues raised in adding member_of to GET /a-c * https://review.openstack.org/#/c/493865/ cover migration cases with functional tests * https://review.openstack.org/#/q/topic:bug/1732731 Bug fixes for sharing resource providers * https://review.openstack.org/#/c/517757/ WIP at granular in allocation candidates * https://review.openstack.org/#/q/topic:bug/1760322 Fix a bug with syncing traits. It can fail, ruining the whole service. # End Oh hi. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From brad at redhat.com Fri Apr 20 12:12:21 2018 From: brad at redhat.com (Brad P. Crochet) Date: Fri, 20 Apr 2018 12:12:21 +0000 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: +1 from me! On Fri, Apr 20, 2018 at 8:04 AM Yolanda Robla Mota wrote: > +1, Marius has been a great help > > On Thu, Apr 19, 2018 at 7:01 PM, Emilien Macchi > wrote: > >> Greetings, >> >> As you probably know mcornea on IRC, Marius Cornea has been contributing >> on TripleO for a while, specially on the upgrade bits. >> Part of the quality team, he's always testing real customer scenarios and >> brings a lot of good feedback in his reviews, and quite often takes care of >> fixing complex bugs when it comes to advanced upgrades scenarios. >> He's very involved in tripleo-upgrade repository where he's already core, >> but I think it's time to let him +2 on other tripleo repos for the patches >> related to upgrades (we trust people's judgement for reviews). >> >> As usual, we'll vote! >> >> Thanks everyone for your feedback and thanks Marius for your hard work >> and involvement in the project. >> -- >> Emilien Macchi >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > > Yolanda Robla Mota > > Principal Software Engineer, RHCE > > Red Hat > > > > C/Avellana 213 > > Urb Portugal > > yroblamo at redhat.com M: +34605641639 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS Principal Software Engineer (c) 704.236.9385 -------------- next part -------------- An HTML attachment was scrubbed... 
From beagles at redhat.com Fri Apr 20 12:29:10 2018
From: beagles at redhat.com (Brent Eagles)
Date: Fri, 20 Apr 2018 09:59:10 -0230
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID:

+1 !!!

On Fri, Apr 20, 2018 at 9:42 AM, Brad P. Crochet wrote:

> +1 from me!
>
> On Fri, Apr 20, 2018 at 8:04 AM Yolanda Robla Mota wrote:
>
>> +1, Marius has been a great help
>>
>> On Thu, Apr 19, 2018 at 7:01 PM, Emilien Macchi wrote:
>>
>>> Greetings,
>>>
>>> As you probably know mcornea on IRC, Marius Cornea has been contributing
>>> on TripleO for a while, especially on the upgrade bits.
>>> Part of the quality team, he's always testing real customer scenarios
>>> and brings a lot of good feedback in his reviews, and quite often takes
>>> care of fixing complex bugs when it comes to advanced upgrades scenarios.
>>> He's very involved in tripleo-upgrade repository where he's already
>>> core, but I think it's time to let him +2 on other tripleo repos for the
>>> patches related to upgrades (we trust people's judgement for reviews).
>>>
>>> As usual, we'll vote!
>>>
>>> Thanks everyone for your feedback and thanks Marius for your hard work
>>> and involvement in the project.
>>> --
>>> Emilien Macchi
>>
>> --
>> Yolanda Robla Mota
>> Principal Software Engineer, RHCE
>> Red Hat
>> C/Avellana 213
>> Urb Portugal
>> yroblamo at redhat.com M: +34605641639
>
> --
> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
> Principal Software Engineer
> (c) 704.236.9385

From arxcruz at redhat.com Fri Apr 20 12:32:17 2018
From: arxcruz at redhat.com (Arx Cruz)
Date: Fri, 20 Apr 2018 14:32:17 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID:

+1 Congrats man!!!

On Fri, Apr 20, 2018 at 2:29 PM, Brent Eagles wrote:

> +1 !!!
>
> On Fri, Apr 20, 2018 at 9:42 AM, Brad P. Crochet wrote:
>
>> +1 from me!
>>
>> On Fri, Apr 20, 2018 at 8:04 AM Yolanda Robla Mota wrote:
>>
>>> +1, Marius has been a great help
>>>
>>> On Thu, Apr 19, 2018 at 7:01 PM, Emilien Macchi wrote:
>>>
>>>> Greetings,
>>>>
>>>> As you probably know mcornea on IRC, Marius Cornea has been
>>>> contributing on TripleO for a while, especially on the upgrade bits.
>>>> Part of the quality team, he's always testing real customer scenarios
>>>> and brings a lot of good feedback in his reviews, and quite often takes
>>>> care of fixing complex bugs when it comes to advanced upgrades scenarios.
>>>> He's very involved in tripleo-upgrade repository where he's already
>>>> core, but I think it's time to let him +2 on other tripleo repos for the
>>>> patches related to upgrades (we trust people's judgement for reviews).
>>>>
>>>> As usual, we'll vote!
>>>>
>>>> Thanks everyone for your feedback and thanks Marius for your hard work
>>>> and involvement in the project.
>>>> --
>>>> Emilien Macchi
>>>
>>> --
>>> Yolanda Robla Mota
>>> Principal Software Engineer, RHCE
>>> Red Hat
>>> C/Avellana 213
>>> Urb Portugal
>>> yroblamo at redhat.com M: +34605641639
>>
>> --
>> Brad P. Crochet, RHCA, RHCE, RHCVA, RHCDS
>> Principal Software Engineer
>> (c) 704.236.9385

From therve at redhat.com Fri Apr 20 12:44:12 2018
From: therve at redhat.com (Thomas Herve)
Date: Fri, 20 Apr 2018 14:44:12 +0200
Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig.
In-Reply-To: <1524142764.4383.83.camel@redhat.com>
References: <1524142764.4383.83.camel@redhat.com>
Message-ID:

On Thu, Apr 19, 2018 at 2:59 PM, Harald Jensås wrote:
> Hi,

Hi, thanks for sending this. Responses inline.

> When configuring TripleO deployments with nodes on routed ctlplane
> networks we need to pass some per-network properties to the
> NetworkConfig resource[1] in THT. We get the ``ControlPlaneIp``
> property using get_attr, but the NIC configs need a couple more
> parameters[2], for example: ``ControlPlaneSubnetCidr``,
> ``ControlPlaneDefaultRoute`` and ``DnsServers``.
>
> Since queens these templates are jinja templated, to generate things
> from network_data.yaml. When using routed ctlplane networks, the
> parameters ``ControlPlaneSubnetCidr`` and ``ControlPlaneDefaultRoute``
> will be different. So we need to use static per-role
> Net::SoftwareConfig templates, and add parameters such as
> ``ControlPlaneDefaultRouteLeafX``.
>
> The values the user needs to pass in for these are already available in
> the neutron ctlplane network configuration on the undercloud. So
> ideally we should not need to ask the user to provide them in
> parameter_defaults, we should resolve the correct values automatically.

To make it clear, what you want to prevent is the need to add more keys in network_data.yaml? As those had to be provided at some point, I wonder if tripleo can't find a way to pass them again on the overcloud deploy. Inspecting neutron is an elegant solution, though.

> : We can get the port ID using get_attr:
>
>   {get_attr: [<server>, addresses, <network>, 0, port]}
>
> : From there, outside of heat, we can get the subnet_id:
>
>   openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \
>     -f yaml -c fixed_ips
>
>   fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f-4a73-97ad-4e9822d201e5'
>
> : And finally we can get the gateway_ip and cidr of the subnet:
>
>   openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \
>     -f yaml -c gateway_ip -c cidr
>
>   cidr: 172.20.0.0/26
>   gateway_ip: 172.20.0.62
>
> The problem is getting there using heat ...
> a couple of ideas:
>
> a) Use heat's ``external_resource`` to create a port resource,
>    and then an external subnet resource. Then get the data
>    from the external resources. We probably would have to make
>    it possible for an ``external_resource`` to depend on the server
>    resource, and verify that these resources have the required
>    attributes.

I believe that's a relatively easy fix. It's unclear why we didn't allow that in the first place, probably because we were missing a use case, but it seems valuable here.

> b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as
>    well probably) to include the data.
>
>    If we do this we should probably aim to be in parity with
>    what is made available to clients getting the configuration
>    from dhcp. (mtu, dns_domain, dns_servers, prefixlen,
>    gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode
>    etc.)

I'm with you on exposing more neutron data to the Port resource. It can be complicated because some of them are implementation specific, but we can look into those. I don't think adding them directly to the Server resource makes a ton of sense though.
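To make (a) concrete, once an external resource can depend on another resource, a template fragment along these lines should do the trick (just a sketch, assuming a ctlplane network; untested):

  server_port:
    type: OS::Neutron::Port
    external_id: {get_attr: [server, addresses, ctlplane, 0, port]}

  ctlplane_subnet:
    type: OS::Neutron::Subnet
    external_id: {get_attr: [server_port, fixed_ips, 0, subnet_id]}

Then {get_attr: [ctlplane_subnet, gateway_ip]} and {get_attr: [ctlplane_subnet, cidr]} would give the values the NIC configs need, without asking the user for new parameters.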
To make it clear, what you want to prevent is the need to add more keys in network_data.yaml? As those had to be provided at some point, I wonder if tripleo can't find a way to pass them again on the overcloud deploy. Inspecting neutron is an elegant solution, though. > : We can get the port ID using get_attr: > > {get_attr: [, addresses, , 0, port]} > > : From there outside of heat we can get the subnet_id: > > openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \ > -f yaml -c fixed_ips > > fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f-4a73- > 97ad-4e9822d201e5' > > : And finally we can get the gateway_ip and cidr of the subnet: > > openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \ > -f yaml -c gateway_ip -c cidr > > cidr: 172.20.0.0/26 > gateway_ip: 172.20.0.62 > > > The problem is getting there using heat ... > a couple of ideas: > > a) Use heat's ``external_resource`` to create a port resource, > and then a external subnet resource. Then get the data > from the external resources. We probably would have to make > it possible for a ``external_resource`` depend on the server > resource, and verify that these resource have the required > attributes. I believe that's a relatively easy fix. It's unclear why we didn't allow that in the first place, probably because we were missing a use case, but it seems valuable here. > b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as > well probably) to include the data. > > If we do this we should probably aim to be in parity with > what is made available to clients getting the configuration > from dhcp. (mtu, dns_domain, dns_servers, prefixlen, > gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode > etc.) I'm with you on exposing more neutron data to the Port resource. It can be complicated because some of them are implementation specific, but we can look into those. I don't think adding them directly to the Server resource makes a ton of sense though. > c) Create a new heat function to read properties of any > openstack resource, without having to make use of the > external_resource in heat. It's an interesting idea, but I think it would look a lot like what external resources are supposed to be. I see a few changes: * Allow external resource to depend on other resources * Expose more port attributes * Expose more subnet attributes If you can list the attributes you care about that'd be great. Thanks, -- Thomas From jpichon at redhat.com Fri Apr 20 12:47:15 2018 From: jpichon at redhat.com (Julie Pichon) Date: Fri, 20 Apr 2018 13:47:15 +0100 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: On 19 April 2018 at 18:01, Emilien Macchi wrote: > Greetings, > > As you probably know mcornea on IRC, Marius Cornea has been contributing on > TripleO for a while, specially on the upgrade bits. > Part of the quality team, he's always testing real customer scenarios and > brings a lot of good feedback in his reviews, and quite often takes care of > fixing complex bugs when it comes to advanced upgrades scenarios. > He's very involved in tripleo-upgrade repository where he's already core, > but I think it's time to let him +2 on other tripleo repos for the patches > related to upgrades (we trust people's judgement for reviews). > > As usual, we'll vote! > > Thanks everyone for your feedback and thanks Marius for your hard work and > involvement in the project. 
+1

From michele at acksyn.org Fri Apr 20 13:01:52 2018
From: michele at acksyn.org (Michele Baldessari)
Date: Fri, 20 Apr 2018 15:01:52 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID: <20180420130152.GA4247@palahniuk.int.rhx>

+1

On Thu, Apr 19, 2018 at 10:01:50AM -0700, Emilien Macchi wrote:
> Greetings,
>
> As you probably know mcornea on IRC, Marius Cornea has been contributing on
> TripleO for a while, especially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios and
> brings a lot of good feedback in his reviews, and quite often takes care of
> fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already core,
> but I think it's time to let him +2 on other tripleo repos for the patches
> related to upgrades (we trust people's judgement for reviews).
>
> As usual, we'll vote!
>
> Thanks everyone for your feedback and thanks Marius for your hard work and
> involvement in the project.

-- 
Michele Baldessari C2A5 9DA3 9961 4FFB E01B D0BC DDD4 DCCB 7515 5C6D

From jim at jimrollenhagen.com Fri Apr 20 13:05:23 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 20 Apr 2018 09:05:23 -0400
Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found
In-Reply-To:
References: <1524165416-sup-7286@lrrr.local>
Message-ID:

On Fri, Apr 20, 2018 at 7:33 AM, Jim Rollenhagen wrote:

> On Thu, Apr 19, 2018 at 3:21 PM, Doug Hellmann wrote:
>>
>> Reading through that log more carefully, I see an early attempt to pin
>> pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being
>> pulled in as a dependency of flake8-import-order==0.12 when neutron's
>> test-requirements.txt is installed [2]. Then later when ironic's
>> test-requirements.txt is installed pycodestyle is downgraded to 2.3.1
>> [3].
>>
>> Reproducing those install & downgrade steps, I see that pycodestyle
>> 2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that
>> explains why pep8 is not re-installed when pycodestyle is downgraded.
>
> Aha, interesting! That's a fun one. :)
>
>> I think the real problem here is that we have linter dependencies listed
>> in the test-requirements.txt files for our projects, and they are
>> somehow being installed without the constraints.
>
> This is because they're in the blacklist, right?
>
>> I don't think they need
>> to be installed for devstack at all, so one way to fix it would be to
>> move those dependencies to the tox.ini section for running pep8, or to
>> have devstack look at the blacklisted packages and skip installing them.
>
> Yeah, seems like either would work. With the latter, would devstack edit
> these out of test-requirements.txt before installing, I presume? The former
> seems less hacky, I'll proceed with that unless folks have objections.
>
URL: From doug at doughellmann.com Fri Apr 20 13:26:33 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 20 Apr 2018 09:26:33 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: References: <1524165416-sup-7286@lrrr.local> Message-ID: <1524230699-sup-7372@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-04-20 07:33:51 -0400: > On Thu, Apr 19, 2018 at 3:21 PM, Doug Hellmann > wrote: > > > > > > Reading through that log more carefully, I see an early attempt to pin > > pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being > > pulled in as a dependency of flake8-import-order==0.12 when neutron's > > test-requirements.txt is installed [2]. Then later when ironic's > > test-requirements.txt is installed pycodestyle is downgraded to 2.3.1 > > [3]. > > > > Reproducing those install & downgrade steps, I see that pycodestyle > > 2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that > > explains why pep8 is not re-installed when pycodestyle is downgraded. > > > > Aha, interesting! That's a fun one. :) > > I think the real problem here is that we have linter dependencies listed > > in the test-requirements.txt files for our projects, and they are > > somehow being installed without the constraints. > > > This is because they're in the blacklist, right? Yes, that's probably it. > > I don't think they need > > to be installed for devstack at all, so one way to fix it would be to > > move those dependencies to the tox.ini section for running pep8, or to > > have devstack look at the blacklisted packages and skip installing them. > > > > Yeah, seems like either would work. With the latter, would devstack edit > these out of test-requirements.txt before installing, I presume? The former > seems less hacky, I'll proceed with that unless folks have objections. I like updating the tox.ini, too, since it has the added benefit of putting the linter (and other blacklisted) dependencies in a file the requirements check job ignores. > > Thanks for the help, Doug! :) > > // jim From doug at doughellmann.com Fri Apr 20 13:27:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 20 Apr 2018 09:27:37 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: References: <1524165416-sup-7286@lrrr.local> Message-ID: <1524230798-sup-6092@lrrr.local> Excerpts from Jim Rollenhagen's message of 2018-04-20 09:05:23 -0400: > On Fri, Apr 20, 2018 at 7:33 AM, Jim Rollenhagen > wrote: > > > On Thu, Apr 19, 2018 at 3:21 PM, Doug Hellmann > > wrote: > >> > >> > >> Reading through that log more carefully, I see an early attempt to pin > >> pycodestyle <= 2.3.1 [1], followed later by pycodestyle == 2.4.0 being > >> pulled in as a dependency of flake8-import-order==0.12 when neutron's > >> test-requirements.txt is installed [2]. Then later when ironic's > >> test-requirements.txt is installed pycodestyle is downgraded to 2.3.1 > >> [3]. > >> > >> Reproducing those install & downgrade steps, I see that pycodestyle > >> 2.4.0 claims to own pep8.py but pycodestyle 2.3.1 does not [4]. So that > >> explains why pep8 is not re-installed when pycodestyle is downgraded. > >> > > > > Aha, interesting! That's a fun one. :) > > > > I think the real problem here is that we have linter dependencies listed > >> in the test-requirements.txt files for our projects, and they are > >> somehow being installed without the constraints. > > > > > > This is because they're in the blacklist, right? 
> >> I don't think they need
> >> to be installed for devstack at all, so one way to fix it would be to
> >> move those dependencies to the tox.ini section for running pep8, or to
> >> have devstack look at the blacklisted packages and skip installing them.
> >
> > Yeah, seems like either would work. With the latter, would devstack edit
> > these out of test-requirements.txt before installing, I presume? The former
> > seems less hacky, I'll proceed with that unless folks have objections.
>
> Although... this would need to be done in every project installed from
> source during the devstack run. I'm going to look into doing this in
> devstack instead to avoid spending all day moving patches.

In the short term we only need to fix the few projects with conflicting requirements. In the longer term we could have a concerted effort to move those dependencies. Someone creative might even be able to script it, since we do have a list of the blacklisted items.

> // jim

From jim at jimrollenhagen.com Fri Apr 20 14:12:46 2018
From: jim at jimrollenhagen.com (Jim Rollenhagen)
Date: Fri, 20 Apr 2018 10:12:46 -0400
Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format
In-Reply-To:
References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com>
Message-ID:

On Fri, Apr 20, 2018 at 4:02 AM, Chen CH Ji wrote:

> Thanks a lot for sharing, that's good info. Just curious why [1] needs the
> gzip and base64 encoding, if my understanding is correct. I was told the
> nova format should be pure vfat or iso9660; I assume it's because the
> config drive itself is actually built as an iso by default, which is then
> wrapped in a gzip/base64 format ...

We only gzip and base64 to send it to the ironic API. It is decoded and unzipped before writing it to disk, so it is a pure iso9660 on the disk.

// jim

From gfidente at redhat.com Fri Apr 20 14:13:14 2018
From: gfidente at redhat.com (Giulio Fidente)
Date: Fri, 20 Apr 2018 16:13:14 +0200
Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits
In-Reply-To:
References:
Message-ID: <0b0e56b9-096b-66e6-dece-2d2a357ea939@redhat.com>

On 04/19/2018 07:01 PM, Emilien Macchi wrote:
> Greetings,
>
> As you probably know mcornea on IRC, Marius Cornea has been contributing
> on TripleO for a while, especially on the upgrade bits.
> Part of the quality team, he's always testing real customer scenarios
> and brings a lot of good feedback in his reviews, and quite often takes
> care of fixing complex bugs when it comes to advanced upgrades scenarios.
> He's very involved in tripleo-upgrade repository where he's already
> core, but I think it's time to let him +2 on other tripleo repos for the
> patches related to upgrades (we trust people's judgement for reviews).
>
> As usual, we'll vote!
>
> Thanks everyone for your feedback and thanks Marius for your hard work
> and involvement in the project.

+1 thanks Marius for your hard and very important work

-- 
Giulio Fidente
GPG KEY: 08D733BA

From pabelanger at redhat.com Fri Apr 20 14:13:41 2018
From: pabelanger at redhat.com (Paul Belanger)
Date: Fri, 20 Apr 2018 10:13:41 -0400
Subject: [openstack-dev] [all][infra] ubuntu-bionic and legacy nodesets
In-Reply-To:
References: <20180419233736.GA4807@localhost.localdomain>
Message-ID: <20180420141341.GA19454@localhost.localdomain>

On Fri, Apr 20, 2018 at 08:16:07AM +0100, Jean-Philippe Evrard wrote:
> That's very cool.
> Any idea of the split of nodes, xenial vs bionic? Is that a very
> restricted number of nodes?

According to upstream, ubuntu-bionic releases next week. In openstack-infra
we are in really good shape to have projects start using it once we rebuild
using the released version. Projects are able to use ubuntu-bionic today, we
just ask that they don't gate on it until the official release.

As for switching the PTI job to use ubuntu-bionic, that is a different
discussion. It would bump python to 3.6 and is likely too late in the cycle
to do. I guess that is something we can hash out with infra / requirements /
tc / EALLTHEPROJECTS.

-Paul

> On 20 April 2018 at 00:37, Paul Belanger wrote:
> > Greetings,
> >
> > With the ubuntu-bionic release around the corner we'll be starting
> > discussions about migrating jobs from ubuntu-xenial to ubuntu-bionic.
> >
> > One topic I'd like to raise is around job migrations from legacy to native
> > zuulv3. Specifically, I'd like to propose we do not add legacy-ubuntu-bionic
> > nodesets into openstack-zuul-jobs. Projects should be working towards moving
> > away from the legacy format, as those nodesets were just copy-pasted from
> > our previous JJB templates.
> >
> > Projects would still be free to move them in-tree, but I would highly
> > encourage projects not to do this, as it only delays the issue.
> >
> > The good news is the majority of jobs have already been moved to native
> > zuulv3 jobs, but there are some projects still depending on the legacy
> > nodesets. For example, tox-based jobs would not be affected. It would mostly
> > be dsvm based jobs that haven't been switched to use the new devstack jobs
> > for zuulv3.
> >
> > -Paul
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From james.slagle at gmail.com  Fri Apr 20 14:27:53 2018
From: james.slagle at gmail.com (James Slagle)
Date: Fri, 20 Apr 2018 10:27:53 -0400
Subject: [openstack-dev] [TripleO][ci][ceph] switching to config-download by default
In-Reply-To: 
References: 
Message-ID: 

On Thu, Apr 5, 2018 at 10:38 AM, James Slagle wrote:
> I've pushed up for review a set of patches to switch us over to using
> config-download by default:
>
> https://review.openstack.org/#/q/topic:bp/config-download-default
>
> I believe I've come up with the proper series of steps to switch
> things over. Let me know if you have any feedback or foresee any
> issues:
>
> First, we update remaining multinode jobs
> (https://review.openstack.org/558965) and ovb jobs
> (https://review.openstack.org/559067) that run against master to
> opt in to config-download. This will expose any issues with these jobs
> and config-download and let us fix those issues.
>
> We can then switch tripleoclient (https://review.openstack.org/558925)
> over to use config-download by default. Since this also requires a
> Heat environment, we must forcibly inject that environment via
> tripleoclient.

FYI, the above work is complete and config-download is now the default
with tripleoclient.
> > Once the tripleoclient patch lands, we can update > tripleo-heat-templates to use the mappings from config-download in the > default resource registry (https://review.openstack.org/558927). > > We can then remove the forcibly injected environment from > tripleoclient (https://review.openstack.org/558931) We're now moving forward with the above 2 patches. jtomasek is making good progress with the UI and support for config-download should be landing there soon. > > Finally, we can go back and update the multinode/ovb jobs on master to > not be opt-in for config-download since it would now be the default > (no patch yet). > > Now...for Ceph it will be slightly different: It took some CI wrangling, but Ceph is now switched over to use external_deploy_tasks. There are patches in progress to clean up the old workflow_tasks: https://review.openstack.org/563040 https://review.openstack.org/563113 There will be some further patches for CI to remove other explicit opt-in's for config-download since it's now the default. Feel free to ping me directly if you think you've found any issues related to any of the config-download work, or file bugs in launchpad using the official "config-download" tag. -- -- James Slagle -- From dougal at redhat.com Fri Apr 20 15:01:38 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 20 Apr 2018 16:01:38 +0100 Subject: [openstack-dev] [mistral] Rocky-1 Release and Rocky-2 Plans Message-ID: Hey all, Mistral Rocky-1 [1] has been released and mistral-lib [2] and mistral client [3] are on their way. I have moved all of the open bugs and blueprints assigned to Rocky-1 to the Rocky-2 cycle. Can you please check the following: - All the bugs and blueprints important to you are correctly assigned to Rocky 2. - That you still plan on working on bugs and blueprints that are assigned to you. In the coming weeks I plan on going through the bugs in Rocky-2 and trying to determine what is realistic. At the moment I believe we have more than we can finish. [4] Thanks all, Dougal [1]: https://review.openstack.org/#/c/562734/ [2]: https://review.openstack.org/#/c/562742/ [3]: https://review.openstack.org/#/c/562743/ [4]: https://launchpad.net/mistral/+milestone/rocky-2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Apr 20 15:05:46 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 20 Apr 2018 11:05:46 -0400 Subject: [openstack-dev] [ironic][infra][qa] Jobs failing; pep8 not found In-Reply-To: <1524230798-sup-6092@lrrr.local> References: <1524165416-sup-7286@lrrr.local> <1524230798-sup-6092@lrrr.local> Message-ID: > > In the short term we only need to fix the few projects with conflicting > requirements. In the longer term we could have a concerted effort to > move those dependencies. Someone creative might even be able to script > it, since we do have a list of the blacklisted items. > Agree this is something we should do eventually. For now, rloo came up with a better solution - remove ironic's pycodestyle pin. We had backported the pin, then fixed and unpinned in master, but didn't backport the unpin. After backporting the patch to unpin it, we're all green again. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From s at cassiba.com Fri Apr 20 15:10:38 2018 From: s at cassiba.com (Samuel Cassiba) Date: Fri, 20 Apr 2018 08:10:38 -0700 Subject: [openstack-dev] [chef] State of the Kitchen - 3rd Edition Message-ID: This is the third installment of what is going on in Chef OpenStack. The goal is to give a quick overview to see our progress and what is on the menu. Feedback is always welcome on the usefulness of the content. Appetizers ========== => Chef 14 support has arrived in the cookbooks. Test Kitchen will be updated to 14 Soon(tm). The gate is still testing against 13. The 12 release is considered EOL as of May 1, 2018, so we will not be able to support releases older than 13 at that time. https://blog.chef.io/2018/04/19/whats-new-in-chef-14-and-chefdk-3/ => Numerous community cookbooks received updates, the highest visibility being Poise itself. This resolves issues with installing pip 10 on both platforms, and system Python on RHEL. Entrees ======= => Installing Python has been centralized to the common cookbook, as opposed to multiple attempts to install the same Python instance. This produces a more consistent, repeatable outcome. => The dokken yaml has been fixed up to allow for testing in containers once more. => Work has begun on overhauling the aging documentation, in an attempt to align things closer to community standards. Parts are shamelessly inspired from other projects (Puppet OpenStack, OpenStack-Ansible), so it will look a bit familiar in some places. Desserts ======== => Rakefiles are going away! As tooling has matured, and the emergence of the ChefDK, the functionality of what the reliable Rakefiles provide are being replaced with tools such as Test Kitchen and Delivery. On The Menu =========== => Creamy Jalapeno Sauce -- 1 cup (170g) sour cream / creme fraiche -- 1 cup (170g) mayonnaise -- 5 tbsp (75g) dry Ranch dressing powder -- 2 tbsp (28g) dry Jalapeno powder -- 4-5 pickled jalapeno chiles, with the stem removed (use some of the pickling juice to thin things out if the consistency is too thick) -- 1/2 cup (64g) fresh picked cilantro (dry works here, but... dry) -- 1/2 cup (64g) salsa verde -- 2 tbsp (28g) lime juice -- (Optional) Heavy cream / double cream if the consistency is too thin Add ingredients to a blender or food processor. Blend until desired consistency, or until you do not see pieces of jalapeno. Your humble line cook, Samuel Cassiba (scas) -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmellado at redhat.com Fri Apr 20 15:12:00 2018 From: dmellado at redhat.com (Daniel Mellado) Date: Fri, 20 Apr 2018 17:12:00 +0200 Subject: [openstack-dev] [kuryr] Kuryr PTG survey Message-ID: <93c51c1c-a4da-c090-3f0f-9296fa3d7293@redhat.com> Hi Kuryrs, As you might've already been informed, next PTG [1] will be held again in Denver, Colorado[1]. Where the pretty Rocky Mountains are and the trains like to blow. We'd like you to have a minute and consider whether we should be participating in this one. I personally consider that we made great progress on last one at Dublin but would like you to fill this form [2] before May 2nd so we can provide feedback to the foundation. As usual, another options would be a VTG or a mid-cycle somewhere else, depending on the planned participation. Also take note if you haven't already that prices have changed quite a lot, so the sooner we decide, the better. Thanks! 
Daniel [1] https://www.openstack.org/ptg [2]https://goo.gl/forms/HfiNnEF2CwuMva6n1 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From dougal at redhat.com Fri Apr 20 15:17:02 2018 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 20 Apr 2018 16:17:02 +0100 Subject: [openstack-dev] [mistral] September PTG in Denver Message-ID: Hey all, You may have seen the news already, but yesterday the next PTG location was announced [1]. It will be in Denver again. Can you let me know if you are planning to attend and go to Mistral sessions? I have been asked about numbers and need to reply by May 5th. Thanks, Dougal [1]: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Fri Apr 20 15:57:39 2018 From: alee at redhat.com (Ade Lee) Date: Fri, 20 Apr 2018 11:57:39 -0400 Subject: [openstack-dev] [barbican] NEW weekly meeting time In-Reply-To: <1520280969.25743.54.camel@redhat.com> References: <005101d3a55a$e6329270$b297b750$@gohighsec.com> <1518792130.19501.1.camel@redhat.com> <1520280969.25743.54.camel@redhat.com> Message-ID: <1524239859.2972.74.camel@redhat.com> Due to the DST change in the States, by popular agreement, we're going to move the Barbican meeting time back an hour. So the new meeting time will be: 2 am UTC Tuesday == 10 pm EST Monday == 10 am CST (China) Tuesday Thanks! Ade On Mon, 2018-03-05 at 15:16 -0500, Ade Lee wrote: > Based on a few replies, we'll try moving the Barbican weekly meeting > to > > > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) Tuesday > > starting Tuesday March 12 2018 (next week). > > See you then! > > Ade > > > On Fri, 2018-02-16 at 09:42 -0500, Ade Lee wrote: > > Thanks Jiong, > > > > Preference noted. Anyone else want to make the meeting time > > switch? > > (Or prefer not to). > > > > Ade > > > > On Wed, 2018-02-14 at 14:13 +0800, Jiong Liu wrote: > > > Hi Ade, > > > > > > Thank you for proposing this change! > > > I'm in China, and the second time slot works better for me. > > > > > > Regards, > > > Jiong > > > > > > > Message: 35 > > > > Date: Tue, 13 Feb 2018 10:17:59 -0500 > > > > From: Ade Lee > > > > To: "OpenStack Development Mailing List (not for usage > > > > questions)" > > > > > > > > Subject: [openstack-dev] [barbican] weekly meeting time > > > > Message-ID: <1518535079.22990.9.camel at redhat.com> > > > > Content-Type: text/plain; charset="UTF-8" > > > > Hi all, > > > > The Barbican weekly meeting has been fairly sparsely attended > > > > for > > > > a > > > > little while now, and the most active contributors these days > > > > appear to > > > > be in Asia. > > > > Its time to consider moving the weekly meeting to a time when > > > > more > > > > contributors can attend. I'm going to propose a couple times > > > > below > > > > to > > > > start out. > > > > 2 am UTC Tuesday == 9 pm EST Monday == 10 am CST (China) > > > > Tuesday > > > > 3 am UTC Tuesday == 10 pm EST Monday == 11 am CST (China) > > > > Tuesday > > > > Feel free to propose other days/times. > > > > Thanks, > > > > Ade > > > > P.S. 
Until decided otherwise, the Barbican meeting remains on
> > > > Mondays
> > > > at 2000 UTC
> > > >
> > >
> > > __________________________________________________________________________
> > > OpenStack Development Mailing List (not for usage questions)
> > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From doug at doughellmann.com  Fri Apr 20 16:02:16 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 20 Apr 2018 12:02:16 -0400
Subject: [openstack-dev] [cyborg][release][Release-job-failures] Pre-release of openstack/cyborg failed
In-Reply-To: 
References: 
Message-ID: <1524240091-sup-280@lrrr.local>

Excerpts from zuul's message of 2018-04-20 13:59:14 +0000:
> Build failed.
>
> - release-openstack-python http://logs.openstack.org/fa/fabeaffa6efe8b1ef3d828f5b8c2cdc896e4afe9/pre-release/release-openstack-python/c624655/ : FAILURE in 6m 07s
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED

The cyborg milestone release failed to build because the packaging step
could not find some expected rootwrap files:

http://logs.openstack.org/fa/fabeaffa6efe8b1ef3d828f5b8c2cdc896e4afe9/pre-release/release-openstack-python/c624655/job-output.txt.gz#_2018-04-20_13_58_20_454319

Doug

From amy at demarco.com  Fri Apr 20 17:31:20 2018
From: amy at demarco.com (Amy Marrich)
Date: Fri, 20 Apr 2018 12:31:20 -0500
Subject: [openstack-dev] OpenStack Summit Vancouver Speed Mentoring Workshop—Call for Mentors
Message-ID: 

*Calling All OpenStack Mentors!*

We're quickly nearing the Vancouver Summit, and gearing up for another
successful Speed Mentoring workshop! This workshop, now a mainstay at
OpenStack Summits, is designed to provide guidance to newcomers so that
they can dive in and actively engage, participate and contribute to our
community. And we couldn't do this without you—our fearless mentors!

*Speed Mentoring Workshop & Lunch*
Monday, May 21, 12:15 – 1:30 pm
Vancouver Convention Centre West, Level 2, Room 215-216
https://bit.ly/2HCGjMo

*Who should sign up?*
Are you excited about OpenStack and interested in sharing your career,
community or technical advice and expertise with others? Have you
contributed (code and non-code contributions welcome) to the OpenStack
community for at least one year? Any mentor of any gender with a technical
or non-technical background is encouraged to join us. Share your insights,
inspire those new to our community, grab lunch, and pick up special mentor
gifts!

*How does it work?*
Simply sign up here, and fill in a short survey about your areas of
interest and expertise. Your answers will be used to produce fun,
customized baseball cards that you can use to introduce yourself to the
mentees. You will be provided with mentees' areas of interest and questions
in advance to help you prepare, and we'll meet as a team ahead of time to
go over logistics and answer any questions you may have. On the day of the
event, plan to arrive ~15 minutes before the session. During the session,
you will meet with small groups of mentees in 15-minute intervals and
answer their questions about how to grow in the community. It's a
fast-paced event and a great way to meet new people, introduce them to the
Summit and welcome them to the OpenStack community.

Be sure to sign up today!

*Thanks,*
*Amy (spotz)*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From s.s.filatov94 at gmail.com  Fri Apr 20 17:57:10 2018
From: s.s.filatov94 at gmail.com (Sergey Filatov)
Date: Fri, 20 Apr 2018 20:57:10 +0300
Subject: [openstack-dev] [magnum] K8S apiserver key sync
Message-ID: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com>

Hello,

I looked into the k8s drivers for magnum, and I see that each api-server on
a master node generates its own service-account-key-file. This causes
issues with service accounts authenticating against the api-server (in case
the api-server endpoint moves).

As far as I understand, we should either have all the api-server keys
synced across the api-servers, or pre-generate a single api-server key.
What is the way for magnum to get around this issue?

From mriedemos at gmail.com  Fri Apr 20 18:00:43 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Fri, 20 Apr 2018 13:00:43 -0500
Subject: [openstack-dev] [docs] When should we say 'run as root' in the docs?
In-Reply-To: <0c4cb174-44a8-6a25-b1cb-e26d3aa9a670@suse.com>
References: <7e90efac-3e21-acc5-bd06-a5b963ec10e4@gmail.com> <0c4cb174-44a8-6a25-b1cb-e26d3aa9a670@suse.com>
Message-ID: <4c9d2111-eaa3-536c-f6ea-5bb2fe801825@gmail.com>

On 4/20/2018 2:04 AM, Andreas Jaeger wrote:
> We use in openstack-manuals "# root-command" and "$ non-root command", see:
> https://docs.openstack.org/install-guide/common/conventions.html
>

I learned something new today.

> But looking at
> https://git.openstack.org/cgit/openstack/nova/tree/doc/source/install/verify.rst#n103,
> it is there - so, closed invalid IMHO,

Done, thanks for the feedback.

--

Thanks,

Matt

From doug at doughellmann.com  Fri Apr 20 18:04:08 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Fri, 20 Apr 2018 14:04:08 -0400
Subject: [openstack-dev] [all][ptl][release][masakari][murano][qinling][searchlight][zaqar] reminder for rocky-1 milestone deadline
In-Reply-To: <1524143700-sup-9515@lrrr.local>
References: <1524143700-sup-9515@lrrr.local>
Message-ID: <1524246767-sup-1191@lrrr.local>

Excerpts from Doug Hellmann's message of 2018-04-19 09:15:49 -0400:
> Today is the deadline for proposing a release for the Rocky-1 milestone.
> Please don't forget to include your libraries (client or otherwise) as
> well.
>
> Doug

A few projects have missed the first milestone tagging deadline:

masakari-monitors
masakari
murano-dashboard
qinling
searchlight-ui
searchlight
zaqar-ui
zaqar

The policy on missing deadlines this cycle is changing [1]:

    Projects using milestones are expected to tag at least 2 out of the 3
    for each cycle, or risk being dropped as an official project. The
    release team will remind projects that miss the first milestone, and
    force tags on any later milestones by tagging HEAD at the time of the
    deadline.

The masakari, murano, qinling, searchlight, and zaqar teams should
consider this your reminder.
We really don't want to be making decisions for you about what
constitutes a good release, but we also do not want to have projects
that are not preparing releases. Please keep up with the deadlines.

Doug

[1] https://review.openstack.org/#/c/561258

From emilien at redhat.com  Fri Apr 20 18:56:55 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 20 Apr 2018 11:56:55 -0700
Subject: [openstack-dev] [tripleo] roadmap on containers workflow
In-Reply-To: 
References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com>
Message-ID: 

So the role has proven to be useful, and we're now sure that we need it to
deploy a container registry before any container in the deployment, which
means we can't use the puppet code anymore for this service.

I propose that we move the role to OpenStack:
https://review.openstack.org/#/c/563197/
https://review.openstack.org/#/c/563198/
https://review.openstack.org/#/c/563200/

so we can properly peer-review and gate the new role. In the meantime, we
continue to work on the new workflow.

Thanks,

On Sun, Apr 15, 2018 at 7:24 PM, Emilien Macchi wrote:
> On Fri, Apr 13, 2018 at 5:58 PM, Emilien Macchi wrote:
>>
>> A bit of progress today: I prototyped an Ansible role for that purpose:
>> https://github.com/EmilienM/ansible-role-container-registry
>>
>> Next, I'm going to investigate whether we can deploy Docker and
>> Docker Distribution (to deploy the registry v2) via the existing composable
>> services in THT, maybe by using external_deploy_tasks (or another interface).
>> The idea is really to have the registry ready before actually deploying
>> the undercloud containers, so we can modify them in the middle (run
>> container-check in our case).
>
> This patch: https://review.openstack.org/#/c/561377 deploys Docker
> and Docker Registry v2 *before* the containers deployment in the docker_steps.
> It's using the external_deploy_tasks interface that runs right after the
> host_prep_tasks, so still before starting containers.
>
> It's using the Ansible role that was prototyped on Friday; please take a
> look and raise any concerns.
> Now I would like to investigate how we can run container workflows between
> the docker deployment and the containers deployment.
> --
> Emilien Macchi

--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From emilien at redhat.com  Fri Apr 20 19:30:47 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Fri, 20 Apr 2018 12:30:47 -0700
Subject: [openstack-dev] [tripleo] Reminder about openstack/instack-undercloud contributions
Message-ID: 

In case you missed it, the TripleO team is working on making the
containerized undercloud the default during Rocky.

It means that any patch in instack-undercloud probably won't be useful for
Rocky, unless you need to backport something to a stable branch; that is
still fine. Anything new has to be implemented in tripleoclient and
tripleo-heat-templates.

Feel free to reach out on #tripleo if you have any questions!

Thanks,
--
Emilien Macchi
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From doug at doughellmann.com Fri Apr 20 20:06:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 20 Apr 2018 16:06:17 -0400 Subject: [openstack-dev] [all][requirements] uncapping eventlet In-Reply-To: <1523463552-sup-1950@lrrr.local> References: <20180405154737.lhxhlpsj3uttjevu@gentoo.org> <1523463552-sup-1950@lrrr.local> Message-ID: <1524254623-sup-3036@lrrr.local> Excerpts from Doug Hellmann's message of 2018-04-11 12:20:46 -0400: > Excerpts from Matthew Thode's message of 2018-04-05 10:47:37 -0500: > > eventlet-0.22.1 has been out for a while now, we should try and use it. > > Going to be fun times. > > > > I have a review projects can depend upon if they wish to test. > > https://review.openstack.org/533021 > > I have proposed a bunch of patches to projects to remove the cap > for eventlet [1]. If they don't pass tests, please take them over > and fix them up as needed (I anticipate some trouble with the new > check-requirements rules, for example). > > Doug > > [1] https://review.openstack.org/#/q/topic:uncap-eventlet+(status:open+OR+status:merged) We have made great progress on this but we do still have quite a few of these patches that have not been approved. Many are failing test jobs and will need a little attention ( the failing requirements jobs are real problems and rechecking will not fix them). If you have time to help, please jump in and take over a patch and get it working. https://review.openstack.org/#/q/status:open+topic:uncap-eventlet Thanks, Doug From colleen at gazlene.net Fri Apr 20 21:03:02 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 20 Apr 2018 23:03:02 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 16 April 2018 Message-ID: <1524258182.244228.1345385336.5EFE0A0E@webmail.messagingengine.com> # Keystone Team Update - Week of 16 April 2018 ## News ### Retrospective scheduled We're planning on having our milestonely team retrospective next week immediately after the weekly meeting[1]. We usually do this as a video conference. Come with your feedback! [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129444.html ### Milestone releases We released our main libraries and keystone for Milestone 1 of the Rocky cycle[2][3][4][5]. [2] https://review.openstack.org/562735 [3] https://review.openstack.org/562730 [4] https://review.openstack.org/562723 [5] https://review.openstack.org/562732 ### Forum topics submitted We submitted topic proposals for the Vancouver Forum[6]. We're proposing to discuss the next stage of Unified Limits[7], the default roles cross-project initiative[8], and have a standard feedback session[9]. We opted not to submit anything on application credentials since we think there is not much controversy over the projected direction (mainly adding fine-grained access control). [6] https://etherpad.openstack.org/p/YVR-keystone-forum-sessions [7] http://forumtopics.openstack.org/cfp/details/130 [8] http://forumtopics.openstack.org/cfp/details/131 [9] http://forumtopics.openstack.org/cfp/details/132 ### JWT still under discussion There are still a number of open questions[10] on the design of the proposed JWT token provider[11]. We're not sure if the token ought to be encrypted (fernet tokens are) and we're not sure whether we want symmetric or asymmetric signing (and encryption). Part of the issue is that we don't have a specific ask from stakeholders, so this is mostly all in "it would be nice" territory. 
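To make the trade-off concrete: fernet tokens are encrypted and
authenticated with a *symmetric* key, so every keystone server that
validates tokens needs a copy of the secret key repository. With
asymmetric signing, only the token issuer holds the private key, and
validators need just the public half. A rough sketch of the general shape
of the asymmetric option, using PyJWT and cryptography (this is only an
illustration of the technique, not the proposed keystone implementation;
the claims and curve are made up for the example):

    import datetime

    import jwt  # PyJWT, with its cryptography backend installed
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import ec

    # Issuing side: only the node that signs tokens holds this key.
    private_key = ec.generate_private_key(ec.SECP256R1(), default_backend())

    # Validating side: the public half can be distributed freely.
    public_pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo)

    token = jwt.encode(
        {'sub': 'some-user-id',
         'exp': datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
        private_key,
        algorithm='ES256')

    # Any service holding only public_pem can validate the token,
    # but cannot mint new ones.
    claims = jwt.decode(token, public_pem, algorithms=['ES256'])

Note that, unlike fernet, a plain signed JWT like this is not encrypted,
so anything placed in the payload is readable by whoever holds the token.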
The latest revision of the spec has been updated to include potential use cases. If you have a vested interest in this work, please engage with us on the spec. [10] http://eavesdrop.openstack.org/irclogs/%23openstack-keystone/%23openstack-keystone.2018-04-17.log.html#t2018-04-17T17:37:43 [11] https://review.openstack.org/541903 ### Default roles cross-project spec The keystone team is satisfied with the current state of the cross-project spec to agree upon a set of default roles across projects[12] but we need more feedback and eventual approval from cross-project liasons[13]. If you have input or questions, please reach out to us. [12] https://review.openstack.org/523973 [13] https://review.openstack.org/#/admin/groups/1372,members ## Open Specs Search query: https://goo.gl/eyTktx No new specs have been proposed for Rocky this week, and today is the deadline so we'll only expect to continue refinement of the current proposals. ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 14 changes this week. ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 55 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs These week we opened 7 new bugs and fixed 4 bugs across keystone, keystoneauth, keystonemiddleware, python-keystoneclient, and oslo.policy. ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html The time for submitting spec ideas is over. We'll continue to refine the current proposals until the Rocky-2 deadline in about six weeks. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From doug at doughellmann.com Fri Apr 20 21:26:15 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 20 Apr 2018 17:26:15 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects Message-ID: <1524259233-sup-3003@lrrr.local> [This is meant to be one of (I hope) several conversation-provoking questions directed at prospective TC members to help the community understand their positions before considering how to vote in the ongoing election.] We are discussing adding at least one new project this cycle, and the specific case of Adjutant has brought up questions about the criteria we use for evaluating new projects when they apply to become official. Although the current system does include some well-defined requirements [1], it was also designed to rely on TC members to use their judgement in some other areas, to account for changing circumstances over the life of the project and to reflect the position that governance is not something we can automate away. Without letting the conversation devolve too much into a discussion of Adjutant's case, please talk a little about how you would evaluate a project's application in general. What sorts of things do you consider when deciding whether a project "aligns with the OpenStack Mission," for example? 
Doug

[1] https://governance.openstack.org/tc/reference/new-projects-requirements.html

From zhipengh512 at gmail.com  Fri Apr 20 22:52:54 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Sat, 21 Apr 2018 06:52:54 +0800
Subject: [openstack-dev] [cyborg][release][Release-job-failures] Pre-release of openstack/cyborg failed
In-Reply-To: <1524240091-sup-280@lrrr.local>
References: <1524240091-sup-280@lrrr.local>
Message-ID: 

Thanks Doug, we will take a look into it.

On Sat, Apr 21, 2018 at 12:02 AM, Doug Hellmann wrote:
> Excerpts from zuul's message of 2018-04-20 13:59:14 +0000:
> > Build failed.
> >
> > - release-openstack-python http://logs.openstack.org/fa/fabeaffa6efe8b1ef3d828f5b8c2cdc896e4afe9/pre-release/release-openstack-python/c624655/ : FAILURE in 6m 07s
> > - announce-release announce-release : SKIPPED
> > - propose-update-constraints propose-update-constraints : SKIPPED
>
> The cyborg milestone release failed to build because the packaging step
> could not find some expected rootwrap files:
>
> http://logs.openstack.org/fa/fabeaffa6efe8b1ef3d828f5b8c2cdc896e4afe9/pre-release/release-openstack-python/c624655/job-output.txt.gz#_2018-04-20_13_58_20_454319
>
> Doug
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhipengh512 at gmail.com  Fri Apr 20 23:06:30 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Sat, 21 Apr 2018 07:06:30 +0800
Subject: [openstack-dev] [tc] campaign question related to new projects
In-Reply-To: <1524259233-sup-3003@lrrr.local>
References: <1524259233-sup-3003@lrrr.local>
Message-ID: 

As the one who just led a new project into governance last year, I think I
could take a first stab at it.

For me the current requirements in general work fine. As I emphasized in my
recent blog [0], the four opens are extremely important. Open Design is, I
guess, one of the most important of the four, because it leads directly to
the diversity question. A single-vendor team, although it could easily
satisfy the other three, could hardly do open design well.

Another criterion (more related to the mission statement specifically) I
would consider important is the ability to demonstrate (1) that the
project's scope does not overlap with existing official projects and (2)
that it can actively work with related projects. Cross-project
collaboration does not have to wait until after the project is anointed;
rather, it can start while the project is still being conceived.

Well I guess that is my two cents :)

[0] https://hannibalhuang.github.io/

On Sat, Apr 21, 2018 at 5:26 AM, Doug Hellmann wrote:
> [This is meant to be one of (I hope) several conversation-provoking
> questions directed at prospective TC members to help the community
> understand their positions before considering how to vote in the
> ongoing election.]
>
> We are discussing adding at least one new project this cycle, and
> the specific case of Adjutant has brought up questions about the
> criteria we use for evaluating new projects when they apply to
> become official. Although the current system does include some
> well-defined requirements [1], it was also designed to rely on TC
> members to use their judgement in some other areas, to account for
> changing circumstances over the life of the project and to reflect
> the position that governance is not something we can automate away.
>
> Without letting the conversation devolve too much into a discussion
> of Adjutant's case, please talk a little about how you would evaluate
> a project's application in general. What sorts of things do you
> consider when deciding whether a project "aligns with the OpenStack
> Mission," for example?
>
> Doug
>
> [1] https://governance.openstack.org/tc/reference/new-projects-requirements.html
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co,. Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From skaplons at redhat.com  Sat Apr 21 12:16:17 2018
From: skaplons at redhat.com (Slawomir Kaplonski)
Date: Sat, 21 Apr 2018 14:16:17 +0200
Subject: [openstack-dev] [neutron] Gate-failure bugs spring cleaning
Message-ID: <40CBAEA8-B68B-482D-933E-76A97112A6E0@redhat.com>

Hi Neutrinos,

It's time for some spring cleaning, so I went through the list of Neutron
bugs with the "gate-failure" tag: https://tinyurl.com/y826rccx

I marked some of them as incomplete if there were no hits for the same
errors in the last 30 days. Please reopen them with a proper comment if you
think a bug is still valid, or if you spot a similar error in some recent
test runs. About some of them I'm not sure whether they are still valid, so
please check, and maybe update the comment, or close the bug if it's
already fixed somehow :)

Below is a detailed summary of the bugs I checked:

I removed Neutron from the affected projects:
* https://bugs.launchpad.net/tempest/+bug/1660612

I marked as incomplete:
* https://bugs.launchpad.net/neutron/+bug/1687027
* https://bugs.launchpad.net/neutron/+bug/1693931
* https://bugs.launchpad.net/neutron/+bug/1676966

Bugs which need a check from their owner:
* https://bugs.launchpad.net/neutron/+bug/1711463 - @Miguel, is it still valid? Can we close it?
* https://bugs.launchpad.net/neutron/+bug/1717302 - @Brian, no action since 2017-12-12, is it still failing?
A bug which IMO should be reported against Cinder instead of Neutron; can
someone check and confirm?
* https://bugs.launchpad.net/neutron/+bug/1726462 - Is it really related to
Neutron? IMO it looks like a Cinder error, and it also happens in jobs other
than the neutron ones, like "devstack-platform-opensuse-tumbleweed" and
"nova-multiattach" for example.

Probably still valid bugs:
* https://bugs.launchpad.net/neutron/+bug/1693950 - not exactly the same
error, but I found the same test failures recently, so I think it is still
worth checking
* https://bugs.launchpad.net/neutron/+bug/1756301 - @Miguel: can you check
and confirm that this is still valid?
* https://bugs.launchpad.net/neutron/+bug/1569621 - should be fixed by
https://review.openstack.org/#/c/562220/ - @Jakub, can you confirm that?

—
Best regards
Slawek Kaplonski
skaplons at redhat.com

From erakli00 at gmail.com  Sat Apr 21 16:58:18 2018
From: erakli00 at gmail.com (Egor Panfilov)
Date: Sat, 21 Apr 2018 19:58:18 +0300
Subject: [openstack-dev] [watcher] Catch bad filters on DB API side
Message-ID: <1524329898.5352.3.camel@gmail.com>

Hi all,

I'm currently working on tests for https://review.openstack.org/#/c/559481/
and found that the test
watcher.tests.db.test_audit_template.DbAuditTemplateTestCase.test_get_audit_template_list_with_filters()
does not work as intended. When it calls get_audit_template_list() with a
filter on 'goal', the filter is handled incorrectly. A filter named 'goal'
is accepted by the API layer, which converts it to a 'goal_name' or
'goal_uuid' filter; that is what is actually sent to the DB. So the DB
layer expects only the fields that are explicitly passed to the
Connection._add_filters() method.

The most interesting part is that Connection._add_audit_templates_filters()
doesn't define a 'goal' field in its plain_fields variable. And here is the
key point: Connection._add_filters() iterates over plain_fields and
join_fieldmap, but does nothing with a filter that is not part of either
argument. I wonder why we don't raise an exception, or at least log an
entry, on a bad filter?

This is the reason bug https://bugs.launchpad.net/watcher/+bug/1761956
wasn't caught by the tests: the test that should have caught it checks
nothing. I have fixed the test, but would it be a good idea to add checks
for bad filters as well?

Thanks.

From pete at port.direct  Sat Apr 21 21:56:41 2018
From: pete at port.direct (Pete Birley)
Date: Sat, 21 Apr 2018 21:56:41 +0000
Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project
In-Reply-To: 
References: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de>
Message-ID: 

+1

On Thu, Apr 19, 2018, 1:24 AM Eduardo Gonzalez wrote:
> +1
>
> 2018-04-19 8:21 GMT+02:00 Christian Berendt <berendt at betacloud-solutions.de>:
>> +1
>>
>> > On 18. Apr 2018, at 03:51, Jeffrey Zhang wrote:
>> >
>> > Since many of the contributors in the kolla-kubernetes project are
>> moved to other things. And there is no active contributor for months. On
>> the other hand, there is another comparable project, openstack-helm, in the
>> community. For less confusion and disruptive community resource, I propose
>> to retire the kolla-kubernetes project.
>> >
>> > More discussion about this you can check the mail[0] and patch[1]
>> >
>> > please vote +1 to retire the repo, or -1 not to retire the repo. The
>> vote will be open until everyone has voted, or for 1 week until April 25th,
>> 2018.
>> > >> > [0] >> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html >> > [1] https://review.openstack.org/552531 >> > >> > -- >> > Regards, >> > Jeffrey Zhang >> > Blog: http://xcodest.me >> > >> __________________________________________________________________________ >> > OpenStack Development Mailing List (not for usage questions) >> > Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> -- >> Christian Berendt >> Chief Executive Officer (CEO) >> >> Mail: berendt at betacloud-solutions.de >> Web: https://www.betacloud-solutions.de >> >> Betacloud Solutions GmbH >> Teckstrasse 62 / 70190 Stuttgart / Deutschland >> >> Geschäftsführer: Christian Berendt >> Unternehmenssitz: Stuttgart >> Amtsgericht: Stuttgart, HRB 756139 >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Sun Apr 22 00:50:05 2018 From: aschultz at redhat.com (Alex Schultz) Date: Sat, 21 Apr 2018 18:50:05 -0600 Subject: [openstack-dev] [tripleo] Rocky Milestone 1 has past Message-ID: Hey everyone, We released Rocky Milestone 1 this week[0]. I have gone through and updated the blueprints that were still targeted to rocky-1 to move them to rocky-2. Please take some time to review the outstanding blueprints to make sure that we still still be able to deliver them during the Rocky release. If any need to get pushed, please let me know. We would like to continue doing a soft feature freeze at Milestone 2, so make sure you are paying attention to the schedule. Thanks, -Alex [0] https://launchpad.net/tripleo/+milestone/rocky-1 From davanum at gmail.com Sun Apr 22 00:55:13 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Sat, 21 Apr 2018 20:55:13 -0400 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de> Message-ID: +1 On Sat, Apr 21, 2018 at 5:56 PM, Pete Birley wrote: > +1 > > On Thu, Apr 19, 2018, 1:24 AM Eduardo Gonzalez wrote: >> >> +1 >> >> 2018-04-19 8:21 GMT+02:00 Christian Berendt >> : >>> >>> +1 >>> >>> > On 18. Apr 2018, at 03:51, Jeffrey Zhang >>> > wrote: >>> > >>> > Since many of the contributors in the kolla-kubernetes project are >>> > moved to other things. And there is no active contributor for months. On >>> > the other hand, there is another comparable project, openstack-helm, in the >>> > community. For less confusion and disruptive community resource, I propose >>> > to retire the kolla-kubernetes project. >>> > >>> > More discussion about this you can check the mail[0] and patch[1] >>> > >>> > please vote +1 to retire the repo, or -1 not to retire the repo. The >>> > vote will be open until everyone has voted, or for 1 week until April 25th, >>> > 2018. 
>>> > >>> > [0] >>> > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html >>> > [1] https://review.openstack.org/552531 >>> > >>> > -- >>> > Regards, >>> > Jeffrey Zhang >>> > Blog: http://xcodest.me >>> > >>> > __________________________________________________________________________ >>> > OpenStack Development Mailing List (not for usage questions) >>> > Unsubscribe: >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> -- >>> Christian Berendt >>> Chief Executive Officer (CEO) >>> >>> Mail: berendt at betacloud-solutions.de >>> Web: https://www.betacloud-solutions.de >>> >>> Betacloud Solutions GmbH >>> Teckstrasse 62 / 70190 Stuttgart / Deutschland >>> >>> Geschäftsführer: Christian Berendt >>> Unternehmenssitz: Stuttgart >>> Amtsgericht: Stuttgart, HRB 756139 >>> >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Davanum Srinivas :: https://twitter.com/dims From zhipengh512 at gmail.com Sun Apr 22 01:08:44 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 22 Apr 2018 09:08:44 +0800 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: <3146848E-8DFB-4DBE-ABA2-1485AC590502@betacloud-solutions.de> Message-ID: +1 On Sun, Apr 22, 2018 at 8:55 AM, Davanum Srinivas wrote: > +1 > > On Sat, Apr 21, 2018 at 5:56 PM, Pete Birley wrote: > > +1 > > > > On Thu, Apr 19, 2018, 1:24 AM Eduardo Gonzalez > wrote: > >> > >> +1 > >> > >> 2018-04-19 8:21 GMT+02:00 Christian Berendt > >> : > >>> > >>> +1 > >>> > >>> > On 18. Apr 2018, at 03:51, Jeffrey Zhang > >>> > wrote: > >>> > > >>> > Since many of the contributors in the kolla-kubernetes project are > >>> > moved to other things. And there is no active contributor for > months. On > >>> > the other hand, there is another comparable project, openstack-helm, > in the > >>> > community. For less confusion and disruptive community resource, I > propose > >>> > to retire the kolla-kubernetes project. > >>> > > >>> > More discussion about this you can check the mail[0] and patch[1] > >>> > > >>> > please vote +1 to retire the repo, or -1 not to retire the repo. The > >>> > vote will be open until everyone has voted, or for 1 week until > April 25th, > >>> > 2018. 
> >>> > > >>> > [0] > >>> > http://lists.openstack.org/pipermail/openstack-dev/2018- > March/128822.html > >>> > [1] https://review.openstack.org/552531 > >>> > > >>> > -- > >>> > Regards, > >>> > Jeffrey Zhang > >>> > Blog: http://xcodest.me > >>> > > >>> > ____________________________________________________________ > ______________ > >>> > OpenStack Development Mailing List (not for usage questions) > >>> > Unsubscribe: > >>> > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >>> > >>> -- > >>> Christian Berendt > >>> Chief Executive Officer (CEO) > >>> > >>> Mail: berendt at betacloud-solutions.de > >>> Web: https://www.betacloud-solutions.de > >>> > >>> Betacloud Solutions GmbH > >>> Teckstrasse 62 / 70190 Stuttgart / Deutschland > >>> > >>> Geschäftsführer: Christian Berendt > >>> Unternehmenssitz: Stuttgart > >>> Amtsgericht: Stuttgart, HRB 756139 > >>> > >>> > >>> > >>> ____________________________________________________________ > ______________ > >>> OpenStack Development Mailing List (not for usage questions) > >>> Unsubscribe: > >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > >> > >> > >> ____________________________________________________________ > ______________ > >> OpenStack Development Mailing List (not for usage questions) > >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Davanum Srinivas :: https://twitter.com/dims > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Sun Apr 22 08:50:51 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Sun, 22 Apr 2018 16:50:51 +0800 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524259233-sup-3003@lrrr.local> References: <1524259233-sup-3003@lrrr.local> Message-ID: Thanks, Doug, for raising this campaign question Here are my answers: ***How you would evaluate a project's application in general First I would work through the requirements ([1]) to evaluate projects. Since most of the requirements are specific enough. And here's more important part, to leave evaluate logs or comments for projects which we considered but didn't reach some requirements. 
It's very important to guide projects to cross over requirements (and remember, a `-1` only means we trying to help). Then, I work on questions, like: `How many user are interesting to/needs the functionality that service provided?` `How active is this project and how's the diversity of contributors?` `Is this project required cross communities/projects cooperation? If yes, how's the development workflows are working between communities/projects?` And last but is one of the most important questions, `Is this service aligns with the OpenStack Mission`? (and let's jump to next question to answer this part) **What sorts of things do you consider when deciding whether a project "aligns with the OpenStack Mission," for example?* I would consider things like: `Is the project's functionality complete the OpenStack infrastructure map?` Asking from user requirement and functionality point of view, `how's the project(services) will make OpenStack better infrastructure for user/operators?` and `how's this functionality provide a better life for OpenStack developers?` `Is the project provides better integration point between communities` To build a better infrastructure, IMO it's also important to ask if a project (service) really help on integration with other communities like Kubernetes, OPNFV, CEPH, etc. I think to keep us as an active infrastructure to solutions is part of our mission too. `Is it providing functionality which we can integrate with current projects or SIG instead?` In short, we should be gathering our development energy, to really achieve the jobs which is exactly why we spend times on trying to find official projects and said this is part of our mission to work on. So when new projects jump out, it's really important to discuss cross-project `is it suitable for projects integrated and join force on specific functionality?` (to do this while evaluating a project instead of when it's creating might not be the best time to said `please integrate or join forces with other teams together`(not even with a smiling face), but it's never too late for a non-official/incubating project to consider about this). I really don't like to to see any project get higher chances to die just because developers chance their developing focus. It's happening when projects are all willing to do the functionality, but no communication between(some cases, not even now other projects exists), and new/old projects dead, then TC needs to spend the time to pick those projects out. So IMO, it's worth to spend times to investigate on whether projects can be joined. Or ideally to put a resolution said, it's project's obligation to help on this, and help other join force to be part of the team. `Can projects provide cross-project gating?` Do think if it's possible, we should consider this when asking if a service aligns with our mission because not breaking rest of infrastructure is part of the definition of `to build`. And providing cross-project gate jobs seems like a way to go. To stable the integration between projects and prevent released a failed feature when other services trying to work on new ways and provide no guideline, ML, or solution, just only leave words like `this is not part of our function to fix`. And finally, If we can answer all above questions, try to put in with the more accurate number (like from user survey), and provides communications it needs, will definitely help in finding next official projects. 
Also, when an evaluation is done, we should evaluate the evaluation process
itself: how are these guidelines working for us, and which of the questions
above no longer make sense?

[1] https://governance.openstack.org/tc/reference/new-projects-requirements.html

May The Force of OpenStack Be With You,

*Rico Lin*
irc: ricolin
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From thierry at openstack.org  Sun Apr 22 13:10:40 2018
From: thierry at openstack.org (Thierry Carrez)
Date: Sun, 22 Apr 2018 15:10:40 +0200
Subject: [openstack-dev] [tc] campaign question related to new projects
In-Reply-To: <1524259233-sup-3003@lrrr.local>
References: <1524259233-sup-3003@lrrr.local>
Message-ID: 

Doug Hellmann wrote:
> [This is meant to be one of (I hope) several conversation-provoking
> questions directed at prospective TC members to help the community
> understand their positions before considering how to vote in the
> ongoing election.]
>
> We are discussing adding at least one new project this cycle, and
> the specific case of Adjutant has brought up questions about the
> criteria we use for evaluating new projects when they apply to
> become official. Although the current system does include some
> well-defined requirements [1], it was also designed to rely on TC
> members to use their judgement in some other areas, to account for
> changing circumstances over the life of the project and to reflect
> the position that governance is not something we can automate away.
>
> Without letting the conversation devolve too much into a discussion
> of Adjutant's case, please talk a little about how you would evaluate
> a project's application in general. What sorts of things do you
> consider when deciding whether a project "aligns with the OpenStack
> Mission," for example?

Thanks for getting the discussion started, Doug.

We have two main criteria in the requirements. The "follows the OpenStack
way" one, which I call the culture fit, and the "aligns with the OpenStack
mission" one, which I call the product fit. In both cases there is room
for interpretation and for personal differences in appreciation.

For the culture fit, while in most cases it's straightforward (as the
project is born out of our existing community members), it is sometimes
much more blurry. When the group behind the new project is sufficiently
disjoint from our existing team members, you are judging a future promise
to behave in "the OpenStack way". In those cases it's really an
opportunity to reach out and explain how and why we do things the way we
do them, the principles behind our community norms. In the end it's a
leap of faith. The line I personally stand on is the willingness to
openly collaborate. If the new group is a closed group that has no
interest in including new people and wants to retain "control" over the
project, and is only interested in the marketing boost of being a part of
"OpenStack", then it should really be denied. We provide a space for open
collaboration, not for openwashing projects.

For the product fit, there is also a lot of room for interpretation. For
me it boils down to whether "OpenStack" (the product) is better with that
project "in" rather than with that project "out". Sometimes it's an easy
sell: if a group wants to collaborate on packaging OpenStack for a
certain format/distro/deployment tool, it's clearly a win. In that case
more is always better. But in most cases it's not as straightforward.
There is always tension between added functionality on one side, and complexity / dilution / confusion on the other. Every "service" project we add makes OpenStack more complex to explain, cross-project work more difficult and interoperability incrementally harder. Whatever is added has to be damn interesting to counterbalance that. The same goes for competitive / alternative projects: in some cases the net result is a win (different approaches to monitoring), while in some cases the net result would be a loss (a Keystone alternative that would make everyone else's life more miserable). In summary while the rules are precise, the way we interpret them can still be varied. That is why this discussion is useful: comparing notes on how we answer that difficult question, understanding where everyone stands, helps us converge to a general consensus on the goals we are trying to achieve when defining "OpenStack" scope, even if we disagree on the particulars. -- Thierry Carrez (ttx) From zhang.lei.fly at gmail.com Mon Apr 23 01:33:03 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 23 Apr 2018 09:33:03 +0800 Subject: [openstack-dev] [kolla] kolla September ptg survey Message-ID: Hi kollars The next PTG will be held in Denver, Colorado on September 10-14, 2018[0]. We have to decide whether Kolla will participate. I personally think the PTG is a great time for the team to gather to resolve issues and plan the next roadmap. But it is not that easy for everyone to travel. So please take a few minutes to fill in this form[1] before May 2nd. Then we could decide whether we should book a room at the PTG. [0] https://www.openstack.org/ptg [1] https://goo.gl/forms/9ZHUw4GBUvggNl643 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From singh.surya64mnnit at gmail.com Mon Apr 23 01:52:25 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Mon, 23 Apr 2018 10:52:25 +0900 Subject: [openstack-dev] [kolla] kolla September ptg survey In-Reply-To: References: Message-ID: Hello Jeffrey, Filled the form. Yes, it's always a good time to gather and discuss the next roadmap at the PTG, as it was great at the Dublin PTG for Kolla despite the snow storm. I hope I will be able to travel. ---spsurya On Mon, Apr 23, 2018 at 10:33 AM, Jeffrey Zhang wrote: > Hi kollars > > The next PTG will be held in Denver, Colorado on September 10-14, 2018[0]. We > have to decide whether Kolla will participate. I personally think the PTG is a > great time for the team to gather to resolve issues and plan the next > roadmap. But it is not that easy for everyone to travel. > > So please take a few minutes to fill in this form[1] before May 2nd. Then we could > decide whether we should book a room at the PTG. 
> > [0] https://www.openstack.org/ptg > [1] https://goo.gl/forms/9ZHUw4GBUvggNl643 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From sean.mcginnis at gmx.com Mon Apr 23 02:01:46 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 22 Apr 2018 21:01:46 -0500 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524259233-sup-3003@lrrr.local> References: <1524259233-sup-3003@lrrr.local> Message-ID: <20180423020145.GB6379@sm-xps> > > We are discussing adding at least one new project this cycle, and > the specific case of Adjutant has brought up questions about the > criteria we use for evaluating new projects when they apply to > become official. Although the current system does include some > well-defined requirements [1], it was also designed to rely on TC > members to use their judgement in some other areas, to account for > changing circumstances over the life of the project and to reflect > the position that governance is not something we can automate away. > Good question to get the conversation going Doug. This is an interesting point that I think will require some longer term discussions. It would be nice if we could narrow this down to a more defined decision tree, but I also think it may be too difficult to get to the point where it is something that can be that black and white. For better or worse, I do think there is some subjective evaluation that is required for each of these so far. I think following our four opens is the basis for most decisions. They need to be developing projects in an open way, and open to community involvement with the implementation and direction of the project, as a basic starting point. If they are not willing to follow these basic principles then I think it is an easy decision to not go any further from there. We do care about diversity in contributors. I think it is very important for the long term health of a project to have multiple interests involved. But I do not consider this a bar to entry. I think it is perfectly OK for a new (but open) project to come in with the majority of the work coming from one vendor. As long as they are open and willing to get others involved in the development of the project, then it is good. And starting off is sometimes better with one perspective driving things toward a focused solution. I think one of the important things is whether it fits into furthering what is "OpenStack", as far as whether it is a service or functionality that is needed and useful for those running an OpenStack cloud. This is one of the parts that may be more on the subjective side. We need to see that adding the new project in question will enhance the use or operation of an OpenStack environment. There is the question about overlap with existing projects. While I think it's true that a new project can come along that meets a need in a better way than an existing solution, I think that bar needs to be raised a lot higher. I personally would much rather see resources joining together on an existing solution than a bunch of resources used to come up with a competing solution. 
Even with a less than ideal solution, a lot is learned from the process, and that learning can be combined with new ideas to create a better solution than a straight replacement. There's probably a lot more that can be said about all of this, but that's my initial take. Looking forward to seeing what everyone else has to say and learning from how we are the same and how we are different on this topic. Sean From jichenjc at cn.ibm.com Mon Apr 23 05:02:49 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 23 Apr 2018 13:02:49 +0800 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> Message-ID: Yes, fully understood, thanks for sharing! Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: Jim Rollenhagen To: "OpenStack Development Mailing List (not for usage questions)" Date: 04/20/2018 10:13 PM Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat On Fri, Apr 20, 2018 at 4:02 AM, Chen CH Ji wrote: Thanks a lot for sharing, that's good info. Just curious why [1] needs the gzip and base64 encoding; if my understanding is correct, I was told the Nova config drive format should be pure vfat or iso9660. I assume the config drive itself is built as an ISO by default and then wrapped in a gzip/base64 format ... thanks We only gzip and base64 to send it to the ironic API. It is decoded and unzipped before writing it to disk, so it is a pure iso9660 on the disk. // jim __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=4L-KwemnBkUdTMyGA_BviipEqJ7MKNGlKFMKH6J6iaM&s=S52V2lLNK1Mh7rprSl-edF3Q2M4m3qEXcWd3jTW8Y9g&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From kyle.oh95 at gmail.com Mon Apr 23 06:38:26 2018 From: kyle.oh95 at gmail.com (Jaewook Oh) Date: Mon, 23 Apr 2018 15:38:26 +0900 Subject: [openstack-dev] [mistral] [vitrage] Propose adding Vitrage's actions to Mistral Actions Message-ID: Hello Mistral and Vitrage team, I've been testing vitrage with mistral workflow, but it seems that there are no Vitrage actions in Mistral yet. I think Vitrage actions should be added to Mistral. We can use the actions in mistral workflow to automate lots of repeated tasks as it was originally intended. So, I'd like to add them to the Mistral Actions. Can I do this work? Best Regards, Jaewook. -- ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: 
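For anyone curious what this work involves, a custom Mistral action is a small Python class. Below is a minimal sketch only: the class name, parameter, and return value are invented for illustration, and a real implementation would replace the placeholder body with python-vitrageclient calls; custom actions are typically exposed to Mistral through a 'mistral.actions' entry point in the plugin's setup.cfg.

    from mistral_lib import actions


    class VitrageCheckAction(actions.Action):
        """Hypothetical sketch of a Vitrage action for Mistral."""

        def __init__(self, vitrage_id='all'):
            super(VitrageCheckAction, self).__init__()
            self.vitrage_id = vitrage_id

        def run(self, context):
            # A real implementation would call python-vitrageclient here,
            # e.g. to list alarms for self.vitrage_id, using credentials
            # taken from the workflow security context.
            return {'vitrage_id': self.vitrage_id}

Once registered, such an action can be called from a workflow task like any built-in action.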
From renat.akhmerov at gmail.com Mon Apr 23 06:45:55 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Mon, 23 Apr 2018 13:45:55 +0700 Subject: [openstack-dev] [mistral] [vitrage] Propose adding Vitrage's actions to Mistral Actions In-Reply-To: References: Message-ID: On 23 Apr 2018, 13:38 +0700, Jaewook Oh , wrote: > Hello Mistral and Vitrage team, > > I've been testing vitrage with mistral workflow, > but it seems that there are no Vitrage actions in Mistral yet. > > I think Vitrage actions should be added to Mistral. > We can use the actions in mistral workflow to automate lots of repeated tasks as it was originally intended. > > So, I'd like to add them to the Mistral Actions. > Can I do this work? Hi, I see no reason why not. We’ll assist, if needed. I’d recommend to join us at #openstack-mistral IRC channel for better communication. Thanks Renat Akhmerov @Nokia -------------- next part -------------- An HTML attachment was scrubbed... URL: From agarwalvishakha18 at gmail.com Mon Apr 23 06:53:54 2018 From: agarwalvishakha18 at gmail.com (vishakha agarwal) Date: Mon, 23 Apr 2018 12:23:54 +0530 Subject: [openstack-dev] [Freezer] New feature to support taking backup by nova and cinder instance name Message-ID: Hi Szaher, Sorry for asking for review of the patch for this new feature without explaining the background. Let me explain the background of this requirement. This mail is in reference to https://bugs.launchpad.net/freezer/+bug/1603099 Currently Freezer can take a backup of a server or volume by UUID. The new feature requirement is that Freezer should allow taking server and volume backups by name. While implementing this feature [1], I take a backup of all the nova and cinder instances that share the same name. Is that the right approach, or is there anything that can be improved? Kindly help with your feedback and improvements, which I can then work on. [1] https://review.openstack.org/#/c/559665/ Thanks and Regards, Vishakha -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyle.oh95 at gmail.com Mon Apr 23 06:57:30 2018 From: kyle.oh95 at gmail.com (Jaewook Oh) Date: Mon, 23 Apr 2018 15:57:30 +0900 Subject: [openstack-dev] [mistral] [vitrage] Propose adding Vitrage's actions to Mistral Actions In-Reply-To: References: Message-ID: Hello Renat, I'll join the IRC channel :) Thanks, Jaewook. 2018-04-23 15:45 GMT+09:00 Renat Akhmerov : > On 23 Apr 2018, 13:38 +0700, Jaewook Oh , wrote: > > Hello Mistral and Vitrage team, > > I've been testing vitrage with mistral workflow, > but it seems that there are no Vitrage actions in Mistral yet. > > I think Vitrage actions should be added to Mistral. > We can use the actions in mistral workflow to automate lots of repeated > tasks as it was originally intended. > > So, I'd like to add them to the Mistral Actions. > Can I do this work? > > > Hi, I see no reason why not. We’ll assist, if needed. I’d recommend to > join us at #openstack-mistral IRC channel for better communication. 
> > Thanks > > Renat Akhmerov > @Nokia > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Mon Apr 23 06:58:42 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Mon, 23 Apr 2018 06:58:42 +0000 Subject: [openstack-dev] [mistral] [vitrage] Propose adding Vitrage's actions to Mistral Actions In-Reply-To: References: Message-ID: <285DF879-62C8-473B-BDFA-E5984B511ED6@nokia.com> From: Renat Akhmerov Date: Monday, 23 April 2018 at 9:45 On 23 Apr 2018, 13:38 +0700, Jaewook Oh , wrote: Hello Mistral and Vitrage team, I've been testing vitrage with mistral workflow, but it seems that there are no Vitrage actions in Mistral yet. I think Vitrage actions should be added to Mistral. We can use the actions in mistral workflow to automate lots of repeated tasks as it was originally intended. So, I'd like to add them to the Mistral Actions. Can I do this work? Hi, I see no reason why not. We’ll assist, if needed. I’d recommend to join us at #openstack-mistral IRC channel for better communication. Thanks Renat Akhmerov @Nokia Hi, Sounds like a good idea, let us know if you need any help from Vitrage team. Thanks, Ifat -------------- next part -------------- An HTML attachment was scrubbed... URL: From tbechtold at suse.com Mon Apr 23 07:41:16 2018 From: tbechtold at suse.com (Thomas Bechtold) Date: Mon, 23 Apr 2018 09:41:16 +0200 Subject: [openstack-dev] [packaging-rpm][meeting] Proposal for new meeting time In-Reply-To: <1548725753.18307087.1524147453478.JavaMail.zimbra@redhat.com> References: <1548725753.18307087.1524147453478.JavaMail.zimbra@redhat.com> Message-ID: Works for me. Tom On 19.04.2018 16:17, Javier Pena wrote: > Hello fellow packagers, > > During today's meeting [1], we discussed the schedule conflicts some of us have with the current meeting slot. As a result, I would like to propose a new meeting time: > > - Wednesdays, 1 PM UTC (3 PM CEST) > > So far, dirk and jruzicka agreed with the change. If you have an issue, please reply now. > > Regards, > Javier Peña > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From bdobreli at redhat.com Mon Apr 23 08:08:36 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 23 Apr 2018 10:08:36 +0200 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> Message-ID: On 4/20/18 8:56 PM, Emilien Macchi wrote: > So the role has proven to be useful and we're now sure that we need it > to deploy a container registry before any container in the deployment, > which means we can't use the puppet code anymore for this service. 
> > I propose that we move the role to OpenStack: > https://review.openstack.org/#/c/563197/ > https://review.openstack.org/#/c/563198/ > https://review.openstack.org/#/c/563200/ > > So we can properly peer review and gate the new role. > > In the meantime, we continue to work on the new workflow. > Thanks, > > On Sun, Apr 15, 2018 at 7:24 PM, Emilien Macchi > wrote: > > On Fri, Apr 13, 2018 at 5:58 PM, Emilien Macchi > wrote: > > > A bit of progress today, I prototyped an Ansible role for that > purpose: > https://github.com/EmilienM/ansible-role-container-registry > > > The next step is, I'm going to investigate if we can deploy > Docker and Docker Distribution (to deploy the registry v2) via > the existing composable services in THT by > using external_deploy_tasks maybe (or another interface). > The idea is really to have the registry ready before actually > deploying the undercloud containers, so we can modify them in > the middle (run container-check in our case). > > > This patch: https://review.openstack.org/#/c/561377 > is deploying Docker and > Docker Registry v2 *before* containers deployment in the docker_steps. > It's using the external_deploy_tasks interface that runs right after > the host_prep_tasks, so still before starting containers. > > It's using the Ansible role that was prototyped on Friday, please > take a look and raise any concern. I have only one question: could we reuse something that has already solved this in projects like Kolla, instead? Otherwise it's LGTM. > Now I would like to investigate how we can run container workflows > between the deployment and docker and containers deployments. > -- > Emilien Macchi > > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From bdobreli at redhat.com Mon Apr 23 08:10:55 2018 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Mon, 23 Apr 2018 10:10:55 +0200 Subject: [openstack-dev] Fwd: Re: [tripleo][kolla] roadmap on containers workflow In-Reply-To: References: Message-ID: Added the kolla tag in the hope of getting some feedback wrt the question -------- Forwarded Message -------- Subject: Re: [openstack-dev] [tripleo] roadmap on containers workflow Date: Mon, 23 Apr 2018 10:08:36 +0200 From: Bogdan Dobrelya Organization: Red Hat To: openstack-dev at lists.openstack.org On 4/20/18 8:56 PM, Emilien Macchi wrote: > So the role has proven to be useful and we're now sure that we need it > to deploy a container registry before any container in the deployment, > which means we can't use the puppet code anymore for this service. > > I propose that we move the role to OpenStack: > https://review.openstack.org/#/c/563197/ > https://review.openstack.org/#/c/563198/ > https://review.openstack.org/#/c/563200/ > > So we can properly peer review and gate the new role. > > In the meantime, we continue to work on the new workflow. 
> Thanks, > > On Sun, Apr 15, 2018 at 7:24 PM, Emilien Macchi > wrote: > > On Fri, Apr 13, 2018 at 5:58 PM, Emilien Macchi > wrote: > > > A bit of progress today, I prototyped an Ansible role for that > purpose: > https://github.com/EmilienM/ansible-role-container-registry > > > The next step is, I'm going to investigate if we can deploy > Docker and Docker Distribution (to deploy the registry v2) via > the existing composable services in THT by > using external_deploy_tasks maybe (or another interface). > The idea is really to have the registry ready before actually > deploying the undercloud containers, so we can modify them in > the middle (run container-check in our case). > > > This patch: https://review.openstack.org/#/c/561377 > is deploying Docker and > Docker Registry v2 *before* containers deployment in the docker_steps. > It's using the external_deploy_tasks interface that runs right after > the host_prep_tasks, so still before starting containers. > > It's using the Ansible role that was prototyped on Friday, please > take a look and raise any concern. I have only one question: could we reuse something that has already solved this in projects like Kolla, instead? Otherwise it's LGTM. > Now I would like to investigate how we can run container workflows > between the deployment and docker and containers deployments. > -- > Emilien Macchi > > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Best regards, Bogdan Dobrelya, Irc #bogdando From strigazi at gmail.com Mon Apr 23 08:18:35 2018 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 23 Apr 2018 10:18:35 +0200 Subject: [openstack-dev] [magnum] K8S apiserver key sync In-Reply-To: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> References: <0A797CB1-E1C4-4E13-AA3A-9A9000D07A07@gmail.com> Message-ID: Hi Sergey, In Magnum Queens we can use the private CA key as the service account key: here [1] we set the ca.key file when the label cert_manager_api is set to true. Cheers, Spyros [1] https://github.com/openstack/magnum/blob/master/magnum/drivers/common/templates/kubernetes/fragments/configure-kubernetes-master.sh#L32 On 20 April 2018 at 19:57, Sergey Filatov wrote: > Hello, > > I looked into the k8s drivers for magnum and I see that each api-server on a master > node generates its own service-account-key-file. This causes issues with > service accounts authenticating against the api-server. (In case the api-server endpoint > moves.) > As far as I understand we should either have all api-server keys synced on the > api-servers or pre-generate a single api-server key. > > What is the way for magnum to get over this issue? > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: 
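To illustrate the client side, the label is set at cluster template creation time. This is only a sketch with placeholder values -- the cert_manager_api label is the point here, while the other arguments are an arbitrary example and depend on your cloud:

    openstack coe cluster template create k8s-template \
      --coe kubernetes \
      --image fedora-atomic-27 \
      --external-network public \
      --keypair mykey \
      --flavor m1.small \
      --labels cert_manager_api=true

Clusters created from such a template should then use the cluster CA key as the service account key on all masters, per the note above.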
From eumel at arcor.de Mon Apr 23 09:52:42 2018 From: eumel at arcor.de (Frank Kloeker) Date: Mon, 23 Apr 2018 11:52:42 +0200 Subject: [openstack-dev] [I18n] Office Hours, Thursday, 2018/04/26 13:00-14:00 UTC & 2018/05/03 07:00-08:00 UTC Message-ID: <2fbf8d44661c0af21ca59ac358abe3e5@arcor.de> Hello, the I18n team wants to change how we collaborate and communicate with other teams and users. Instead of team meetings, we are offering open communication around the Summit on the Freenode IRC #openstack-i18n channel. Feel free to add your topics to the wiki page at [1], or better, join one of our Office Hours to discuss topics around I18n. We are especially interested in: * Feedback about the quality of translation in different languages * New projects or documents with interest in translation * New ideas, like AI for I18n, or new feature requests for Zanata, our translation platform You can meet us in person, together with the Docs team, at the Project Onboarding Session during the Vancouver Summit [2]. kind regards Frank PTL I18n [1] https://wiki.openstack.org/wiki/Meetings/I18nTeamMeeting [2] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21627/docsi18n-project-onboarding From dougal at redhat.com Mon Apr 23 10:20:07 2018 From: dougal at redhat.com (Dougal Matthews) Date: Mon, 23 Apr 2018 11:20:07 +0100 Subject: [openstack-dev] [mistral] [vitrage] Propose adding Vitrage's actions to Mistral Actions In-Reply-To: References: Message-ID: I spoke with Jaewook briefly in #openstack-mistral; for anyone interested in following this work there is now a blueprint to track it. https://blueprints.launchpad.net/mistral/+spec/mistral-vitrage-actions Thanks all On 23 April 2018 at 07:57, Jaewook Oh wrote: > Hello Renat, > > I'll join the IRC channel :) > > Thanks, > Jaewook. > > 2018-04-23 15:45 GMT+09:00 Renat Akhmerov : > >> On 23 Apr 2018, 13:38 +0700, Jaewook Oh , wrote: >> >> Hello Mistral and Vitrage team, >> >> I've been testing vitrage with mistral workflow, >> but it seems that there are no Vitrage actions in Mistral yet. >> >> I think Vitrage actions should be added to Mistral. >> We can use the actions in mistral workflow to automate lots of repeated >> tasks as it was originally intended. >> >> So, I'd like to add them to the Mistral Actions. >> Can I do this work? >> >> >> Hi, I see no reason why not. We’ll assist, if needed. I’d recommend to >> join us at #openstack-mistral IRC channel for better communication. >> >> Thanks >> >> Renat Akhmerov >> @Nokia >> >> ____________________________________________________________ ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From amotoki at gmail.com Mon Apr 23 11:06:51 2018 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 23 Apr 2018 20:06:51 +0900 Subject: [openstack-dev] [horizon][plugins] mox -> mock migration In-Reply-To: References: Message-ID: Hi horizon plugin developers, As I announced in the quoted mail, Rocky-1 was released and mox is no longer enabled in the horizon test helpers by default [1]. If your horizon plugin still depends on mox, please make sure to set use_mox = True in your test classes. > 2) After Rocky-1, use_mox of openstack_dashboard.test.helpers.TestCase will be changed from True to False. > This means your plugin needs to set use_mox to True explicitly if your unit tests still depend on mox. > Our suggestion is to set use_mox=True until the Rocky-1 milestone if your tests depend on mox, so as not to break your gate. [1] https://review.openstack.org/558048 Thanks, Akihiro Motoki (amotoki) 2018-03-18 17:54 GMT+09:00 Akihiro Motoki : > Hi horizon plugin developers, > > As you know, mox-removal is one of the community goals in Rocky and > the horizon team is working on removing the usage of mox [1]. > > This mail announces the plan of dropping mox dependencies in the horizon > test helpers (horizon.test.helpers.TestCase and/or > openstack_dashboard.test.helpers.TestCase). > > 1) The first step is to introduce a "use_mox" flag in > horizon.test.helpers.TestCase. The flag is available now. > If you set the flag to False, you can run your plugin tests without mox. > The default value of use_mox is False for > horizon.test.helpers.TestCase [2] and True for > openstack_dashboard.test.helpers.TestCase [3]. > > 2) After Rocky-1, use_mox of openstack_dashboard.test.helpers.TestCase > will be changed from True to False. > This means your plugin needs to set use_mox to True explicitly if > your unit tests still depend on mox. > Our suggestion is to set use_mox=True until the Rocky-1 milestone if > your tests depend on mox, so as not to break your gate. > > 3) After Rocky RC1 is released, the "use_mox" flag in the horizon repo > will be dropped. > This means the use_mox flag will no longer be in effect. > If your plugin tests still depend on mox at this stage, your > plugin tests need to set up mox explicitly. > > Thanks, > Akihiro Motoki (amotoki) > > [1] https://blueprints.launchpad.net/horizon/+spec/mock-framework-in-unit-tests > [2] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/horizon/test/helpers.py#L138 > [3] https://github.com/openstack/horizon/blob/6e29fdde1edc67a6797eba2c3f9c557f840d4ea7/openstack_dashboard/test/helpers.py#L257 -------------- next part -------------- An HTML attachment was scrubbed... URL: 
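To make the required change concrete, here is a minimal sketch of a plugin test class (the plugin and test names are invented for illustration; the only point is the class-level use_mox attribute described in the mail above):

    from openstack_dashboard.test import helpers


    class MyPluginViewTests(helpers.TestCase):
        # Keep the mox-based tests working until they are migrated to mock.
        use_mox = True

        def test_index(self):
            pass

Removing the attribute (or setting it to False) is then the signal that the plugin's tests have been fully migrated to mock.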
From cdent+os at anticdent.org Mon Apr 23 11:09:42 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 12:09:42 +0100 (BST) Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524259233-sup-3003@lrrr.local> References: <1524259233-sup-3003@lrrr.local> Message-ID: On Fri, 20 Apr 2018, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] Thanks for getting the ball rolling on some discussion, Doug. > Without letting the conversation devolve too much into a discussion > of Adjutant's case, please talk a little about how you would evaluate > a project's application in general. What sorts of things do you > consider when deciding whether a project "aligns with the OpenStack > Mission," for example? This is an important question because project applications are one of the few ways in which the TC exercises any direct influence over the shape and direction of OpenStack. Much of the rest of the time the TC's influence is either indirect or limited. That's something I think we should change, in part because I feel the role of the TC should be at least as, if not more, focused on the day-to-day experiences and capabilities of existing contributors as it is on new ones. I prefer that we keep a large human factor involved in the application process. I do not want us to be purely objective because such a process can never take into account the wider and ever changing world. The members of the TC can be human info sponges that do that accounting. The current process was created in part to overcome the far too heavy and nitpicking (and human) previous process but it has resulted in what amounts to a dilution in direction. For me, each application tends to result in a lot of questions such as the list I produced on patchset 34 of the Adjutant review[1]. I worry that we are predisposed to accept applicants out of a general sense of being "nice" and a belief that growth is a sign of health. I'm unsure how these behaviors help to drive OpenStack in its mission, but while the rules [2] say something as broad as It should help further the OpenStack mission, by providing a cloud infrastructure service, or directly building on an existing OpenStack infrastructure service. I feel we're painted into something of a corner where acceptance must be the default unless there are egregious interoperability or "four opens" violations. I'd like to see us work harder to refine the long term goals we are trying to satisfy with the projects that make up OpenStack. This will require us to continue the never-ending discussion about whether OpenStack is a "Software Defined Infrastructure Framework" or a "Cloud Solution" (plenty of people talk the latter, but plenty of other people are spending energy on the former). And then actually follow through: using the outcome of those discussions to impact not just projects that we accept but also where existing projects focus their attention. We need to be as capable of saying an informed "no" as we are of saying "yes". In the modern OpenSource world there are so many different ecosystems that are cloud friendly: We don't need to provide a home for everyone. There are plenty of places for people to go, including the many different (and growing) facets of the OpenStack community. I would prefer that we be assertive in how we evaluate for alignment with the OpenStack mission. Doing that requires fairly constant re-evaluation of the mission and a willingness to accept that it does (and must) change. [1] https://review.openstack.org/#/c/553643/ [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From gr at ham.ie Mon Apr 23 11:11:12 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 12:11:12 +0100 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? Message-ID: Hi All, We moved our meeting time to 14:00 UTC on Wednesdays, but attendance has been low, and it is also the middle of the night for one of our cores. 
I would like to suggest we have an office hours style meeting, with one in the UTC evening and one in the UTC morning. If this seems reasonable - when and at what frequency should we do them? What times suit the current set of contributors? Thanks, Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Mon Apr 23 11:15:24 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 12:15:24 +0100 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524259233-sup-3003@lrrr.local> References: <1524259233-sup-3003@lrrr.local> Message-ID: <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> On 20/04/18 22:26, Doug Hellmann wrote: > Without letting the conversation devolve too much into a discussion > of Adjutant's case, please talk a little about how you would evaluate > a project's application in general. What sorts of things do you > consider when deciding whether a project "aligns with the OpenStack > Mission," for example? > > Doug > For me, the most important thing for a project that wants to join is that they act like "one of us" - what I think ttx referred to as "culture fit". This is fairly wide ranging, but includes things like: * Do they use the PTIs[0] * Do they use gerrit, or if they use something else, do they follow the same review styles and mechanisms? * Are they on IRC? * Do they use the mailing list for long running discussion? ** If a project doesn't have long running discussions and as a result does not have ML activity, I would see that as OK - my problem would be with a team that ran their own list. * Do they use standard devstack / -infra jobs for testing? * Do they use the standard common libraries (where appropriate)? If a project fails this test (and would have been accepted as something that drives the mission), I see no issue with the TC trying to bring them into the fold by helping them work like one of us, and accepting them when they have shown that they are willing to change how they do things. For the "product" fit, it is a lot more subjective. We used to have a system (pre Big Tent) where the TC picked "winners" in a space and blessed one project as the way to do $thing. Then, in big tent we started to not pick winners, and allow anyone who was one of us, and had a "cloud" application. Recently, we have moved back to seeing if a project overlaps with another. The real test for this (from my viewpoint) is whether the perceived overlap is in an area that the team currently in OpenStack is interested in pursuing - if not, we should default to adding the project. Personally, if the project adds something that we currently lack, and have lacked for a long time (not to get too close to the current discussion), or tries to reduce the amount of extra tooling that deployers currently write in house, we should welcome them. The acid test for me is "How would I use this?" or "Have I written tooling or worked somewhere that wrote tooling to do this?" If the answer is yes, it is a good indication that they fit with the mission. - Graham 0 - https://governance.openstack.org/tc/reference/project-testing-interface.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From james.slagle at gmail.com Mon Apr 23 11:55:47 2018 From: james.slagle at gmail.com (James Slagle) Date: Mon, 23 Apr 2018 07:55:47 -0400 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi wrote: > Greetings, > > As you probably know mcornea on IRC, Marius Cornea has been contributing on > TripleO for a while, especially on the upgrade bits. > Part of the quality team, he's always testing real customer scenarios and > brings a lot of good feedback in his reviews, and quite often takes care of > fixing complex bugs when it comes to advanced upgrade scenarios. > He's very involved in the tripleo-upgrade repository where he's already core, > but I think it's time to let him +2 on other tripleo repos for the patches > related to upgrades (we trust people's judgement for reviews). > > As usual, we'll vote! > > Thanks everyone for your feedback and thanks Marius for your hard work and > involvement in the project. +1 -- -- James Slagle -- From dtantsur at redhat.com Mon Apr 23 12:03:11 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 23 Apr 2018 14:03:11 +0200 Subject: [openstack-dev] [all] [api] Re-Reminder on the state of WSME In-Reply-To: References: <4bb99da6-1071-3f7b-2c87-979e0d48876d@nemebean.com> Message-ID: <142c7d43-de6a-367f-147b-e2b0097ff5f0@redhat.com> ironic-inspector is using Flask, and it has been quite nice so far. On 04/11/2018 12:56 AM, Michael Johnson wrote: > I echo Ben's question about what is the recommended replacement. > > Not long ago we were advised to use WSME over the alternatives which > is why Octavia is using the WSME types and pecan extension. > > Thanks, > Michael > > On Mon, Apr 9, 2018 at 10:16 AM, Ben Nemec wrote: >> >> >> On 04/09/2018 07:22 AM, Chris Dent wrote: >>> >>> >>> A little over two years ago I sent a reminder that WSME is not being >>> actively maintained: >>> >>> >>> http://lists.openstack.org/pipermail/openstack-dev/2016-March/088658.html >>> >>> Today I was reminded of this because a random (typo-related) >>> patchset demonstrated that the tests were no longer passing and >>> fixing them is enough of a chore that I (at least temporarily) >>> marked one test as an expected failure. >>> >>> https://review.openstack.org/#/c/559717/ >>> >>> The following projects appear to still use WSME: >>> >>> aodh >>> blazar >>> cloudkitty >>> cloudpulse >>> cyborg >>> glance >>> gluon >>> iotronic >>> ironic >>> magnum >>> mistral >>> mogan >>> octavia >>> panko >>> qinling >>> radar >>> ranger >>> searchlight >>> solum >>> storyboard >>> surveil >>> terracotta >>> watcher >>> >>> Most of these are using the 'types' handling in WSME and sometimes >>> the pecan extension, and not the (potentially broken) Flask >>> extension, so things should be stable. >>> >>> However: nobody is working on keeping WSME up to date. It is not a >>> good long term investment. >> >> What would be the recommended alternative, either for new work or as a >> migration path for existing projects? 
>> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mjturek at linux.vnet.ibm.com Mon Apr 23 12:04:05 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 23 Apr 2018 08:04:05 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? Message-ID: Hey everyone! We had a bug day about two weeks ago and it went pretty well! At last week's IRC meeting the idea of having one every month was thrown around. What does everyone think about having Bug Day the first Thursday of every month? Thanks, Mike Turek From delightwook at ssu.ac.kr Mon Apr 23 13:02:54 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Mon, 23 Apr 2018 22:02:54 +0900 Subject: [openstack-dev] [Vitrage] Vitrage graph error Message-ID: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> Hello Vitrage team, A few days ago I used Devstack to install the OpenStack master version, which included Vitrage. However, I found that the Vitrage graph does not work on the Vitrage-dashboard. The state of all Vitrage components is active. Could you check it? Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 3358 bytes Desc: not available URL: From balazs.gibizer at ericsson.com Mon Apr 23 13:07:48 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Mon, 23 Apr 2018 15:07:48 +0200 Subject: [openstack-dev] [nova] Notification update week 17 Message-ID: <1524488868.25291.1@smtp.office365.com> Hi, New week, new status mail. Bugs ---- New bugs ~~~~~~~~ [Undecided] https://bugs.launchpad.net/nova/+bug/1764927 Should send out notification when instance metadata get updated Nova already sends an instance.update notification when instance.metadata is changed, so I marked the bug invalid. Still open bugs ~~~~~~~~~~~~~~~ [Low] https://bugs.launchpad.net/nova/+bug/1757407 Notification sending sometimes hits the keystone API to get glance endpoints As the versioned notifications do not use the glance endpoint info, we can avoid hitting the keystone API if notification_format is set to 'versioned' [Medium] https://bugs.launchpad.net/nova/+bug/1763051 Need to audit when notifications are sent during live migration We need to go through the live migration codepath and make sure that the different live migration notifications are sent at the proper time. [Low] https://bugs.launchpad.net/nova/+bug/1764390 Replace passing system_metadata to notification functions with instance.system_metadata usage Fix has been proposed in https://review.openstack.org/#/c/561724 and needs a final +2 [Low] https://bugs.launchpad.net/nova/+bug/1764392 Avoid bandwidth usage db query in notifications when the virt driver does not support collecting such data [High] https://bugs.launchpad.net/nova/+bug/1739325 Server operations fail to complete with versioned notifications if payload contains unset non-nullable fields No progress. 
We still need to understand how this problem happens to find the proper solution. [Low] https://bugs.launchpad.net/nova/+bug/1487038 nova.exception._cleanse_dict should use oslo_utils.strutils._SANITIZE_KEYS Old abandoned patches exist but need somebody to pick them up: * https://review.openstack.org/#/c/215308/ * https://review.openstack.org/#/c/388345/ Versioned notification transformation ------------------------------------- https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-rocky+status:open * https://review.openstack.org/#/c/403660 Transform instance.exists notification - needs a rebase and a final +2 Introduce instance.lock and instance.unlock notifications --------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/trigger-notifications-when-lock-unlock-instances Implementation proposed but needs some work: https://review.openstack.org/#/c/526251/ - No progress. I've pinged the author. Add the user id and project id of the user who initiated the instance action to the notification ----------------------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/add-action-initiator-to-instance-action-notifications Implementation patch exists but still needs work https://review.openstack.org/#/c/536243/ - No progress. I've pinged the author. Add request_id to the InstanceAction versioned notifications ------------------------------------------------------------ https://blueprints.launchpad.net/nova/+spec/add-request-id-to-instance-action-notifications The main implementation patch has been merged. The follow up patch https://review.openstack.org/#/c/562757 needs the final +2. Then the bp can be marked as implemented. Sending full traceback in versioned notifications ------------------------------------------------- https://blueprints.launchpad.net/nova/+spec/add-full-traceback-to-error-notifications I have to propose the implementation. Add versioned notifications for removing a member from a server group --------------------------------------------------------------------- The specless bp https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications is pending approval as we would like to see the POC code first. Takashi has proposed the POC code https://review.openstack.org/#/c/559076/ so we have to look at it. Factor out duplicated notification sample ----------------------------------------- https://review.openstack.org/#/q/topic:refactor-notification-samples+status:open It seems we are done with this. Every notification sample is either small on its own (e.g. flavor.create) or already based on common sample fragments. Thanks to everybody who contributed time to this effort. \o/ Weekly meeting -------------- The next meeting will be held on 24th of April on #openstack-meeting-4 https://www.timeanddate.com/worldclock/fixedtime.html?iso=20180424T170000 Cheers, From dmsimard at redhat.com Mon Apr 23 13:23:38 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Mon, 23 Apr 2018 09:23:38 -0400 Subject: [openstack-dev] [all][kolla][rdo] Collaboration with Kolla for the RDO test days In-Reply-To: References: Message-ID: Hi, For a mix of good and bad reasons, we haven't been able to do this until now. The upcoming RDO test days will be May 3rd and 4th, and we'll be testing the latest content from Rocky now that M1 has been released. We can re-use the pad we had started last time around [1]. I'll be in touch. 
[1]: https://etherpad.openstack.org/p/kolla-rdo-m3 David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter] On Mon, Jan 29, 2018 at 8:29 AM, David Moreau Simard wrote: > Hi ! > > For those who might be unfamiliar with the RDO [1] community project: > we hang out in #rdo, we don't bite and we build vanilla OpenStack > packages. > > These packages are what allows you to leverage one of the deployment > projects such as TripleO, PackStack or Kolla to deploy on CentOS or > RHEL. > The RDO community collaborates with these deployment projects by > providing trunk and stable packages in order to let them develop and > test against the latest and the greatest of OpenStack. > > RDO test days typically happen around a week after an upstream > milestone has been reached [2]. > The purpose is to get everyone together in #rdo: developers, users, > operators, maintainers -- and test not just RDO but OpenStack itself > as installed by the different deployment projects. > > We tried something new at our last test day [3] and it worked out great. > Instead of encouraging participants to install their own cloud for > testing things, we supplied a cloud of our own... a bit like a limited > duration TryStack [4]. > This lets users without the operational knowledge, time or hardware to > install an OpenStack environment to see what's coming in the upcoming > release of OpenStack and get the feedback loop going ahead of the > release. > > We used Packstack for the last deployment and invited Packstack cores > to deploy, operate and troubleshoot the installation for the duration > of the test days. > The idea is to rotate between the different deployment projects to > give every interested project a chance to participate. > > Last week, we reached out to Kolla to see if they would be interested > in participating in our next RDO test days [5] around February 8th. > We supply the bare metal hardware and their core contributors get to > deploy and operate a cloud with real users and developers poking > around. > All around, this is a great opportunity to get feedback for RDO, Kolla > and OpenStack. > > We'll be advertising the event a bit more as the test days draw closer > but until then, I thought it was worthwhile to share some context for > this new thing we're doing. > > Let me know if you have any questions ! > > Thanks, > > [1]: https://www.rdoproject.org/ > [2]: https://www.rdoproject.org/testday/ > [3]: https://dmsimard.com/2017/11/29/come-try-a-real-openstack-queens-deployment/ > [4]: http://trystack.org/ > [5]: http://eavesdrop.openstack.org/meetings/kolla/2018/kolla.2018-01-24-16.00.log.html > > David Moreau Simard > Senior Software Engineer | OpenStack RDO > > dmsimard = [irc, github, twitter] From doug at doughellmann.com Mon Apr 23 13:27:09 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 09:27:09 -0400 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? Message-ID: <1524489055-sup-8435@lrrr.local> [This is meant to be one of (I hope) several conversation-provoking questions directed at prospective TC members to help the community understand their positions before considering how to vote in the ongoing election.] We frequently have discussions about whether the TC is active enough, in terms of driving new policies, technology choices, and other issues that affect the entire community. Please describe one case where we were either active or reactive and how that was shown to be the right choice over time. 
Please describe another case where the choice to be active or reactive ended up being the wrong choice. If you think the TC should tend to be more active in driving change than it is today, please describe the changes (policy, culture, etc.) you think would need to be made to do that effectively (not which policies you want us to be more active on, but *how* to organize the TC to be more active and have that work within the community culture). If you think the TC should tend to be less active in driving change overall, please describe what policies you think the TC should be taking an active role in implementing. Doug From doug at doughellmann.com Mon Apr 23 13:35:11 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 09:35:11 -0400 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: <1524490343-sup-7680@lrrr.local> Excerpts from Doug Hellmann's message of 2018-04-23 09:27:09 -0400: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > We frequently have discussions about whether the TC is active enough, > in terms of driving new policies, technology choices, and other > issues that affect the entire community. > > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. > > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. > > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). > > If you think the TC should tend to be less active in driving change > overall, please describe what policies you think the TC should be > taking an active role in implementing. > > Doug There was a question from ttx on IRC [1] about my use of the terms "active" and "reactive" here. I mean active as "going out there and doing things and anticipating issues" and reactive as "dealing with things as they come up and aren't resolved in another way". Doug [1] http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-23.log.html From pabelanger at redhat.com Mon Apr 23 13:41:58 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 23 Apr 2018 09:41:58 -0400 Subject: [openstack-dev] [bifrost][bandit][magnum][ironic][kolla][pyeclib][mistral] Please merge bindep changes Message-ID: <20180423134158.GC17029@localhost.localdomain> Greetings, Could you please review the following bindep.txt[1] changes to your projects and approve them, it would be helpful to the openstack-infra team. We are looking to remove some legacy jenkins scripts from openstack-infra/project-config and your projects are still using them. The following patches will update your jobs to the new functionality of using our bindep role. If you have any questions, please reach out to us in #openstack-infra. 
Thanks, Paul [1] https://review.openstack.org/#/q/topic:bindep.txt+status:open From doug at doughellmann.com Mon Apr 23 13:50:29 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 09:50:29 -0400 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? Message-ID: <1524490775-sup-9488@lrrr.local> [This is meant to be one of (I hope) several conversation-provoking questions directed at prospective TC members to help the community understand their positions before considering how to vote in the ongoing election.] In the course of evaluating new projects that have asked to join as official members of the OpenStack community, we often discuss whether the feature set of the project overlaps too much with other existing projects. This came up within the last year during Glare's application, and more recently as part of the Adjutant application. Our current policy regarding Open Development is that a project should cooperate with existing projects "rather than gratuitously competing or reinventing the wheel." [1] The flexibility provided by the use of the term "gratuitously" has allowed us to support multiple solutions in the deployment and telemetry problem spaces. At the same time it has left us with questions about how (and whether) the community would be able to replace the implementation of any given component with a new set of technologies by "starting from scratch". Where do you draw the line at "gratuitous"? What benefits and drawbacks do you see in supporting multiple tools with similar features? How would our community be different, in positive and negative ways, if we were more strict about avoiding such overlap? Doug [1] https://governance.openstack.org/tc/reference/new-projects-requirements.html From zhipengh512 at gmail.com Mon Apr 23 13:50:15 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 21:50:15 +0800 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524490343-sup-7680@lrrr.local> References: <1524489055-sup-8435@lrrr.local> <1524490343-sup-7680@lrrr.local> Message-ID: In general I would prefer TC take an active role regarding exploring new use cases and technology directions leverage the existing OpenStack infrastructure. I would against TC being too active on project level governance. For example we have been discussing about edge computing recently and we don't have any idea on how a lightweight OpenStack should look like: maybe no scheduling since edge is more about provisioning ? maybe a Rust implementation of this lightweight version of OpenStack ? There are so many interesting new things that yet to be explored and should be championed by the TC. However regarding issues like how a project should govern itself, it is better for TC to reactive and let project team driven its own structure. I can't think of there is any concrete example on this matter now since TC has been doing rather well on this matter , but I guess this could be a precautious action :) On Mon, Apr 23, 2018 at 9:35 PM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-04-23 09:27:09 -0400: > > [This is meant to be one of (I hope) several conversation-provoking > > questions directed at prospective TC members to help the community > > understand their positions before considering how to vote in the > > ongoing election.] 
> > > > We frequently have discussions about whether the TC is active enough, > > in terms of driving new policies, technology choices, and other > > issues that affect the entire community. > > > > Please describe one case where we were either active or reactive > > and how that was shown to be the right choice over time. > > > > Please describe another case where the choice to be active or > > reactive ended up being the wrong choice. > > > > If you think the TC should tend to be more active in driving change > > than it is today, please describe the changes (policy, culture, > > etc.) you think would need to be made to do that effectively (not > > which policies you want us to be more active on, but *how* to > > organize the TC to be more active and have that work within the > > community culture). > > > > If you think the TC should tend to be less active in driving change > > overall, please describe what policies you think the TC should be > > taking an active role in implementing. > > > > Doug > > There was a question from ttx on IRC [1] about my use of the terms > "active" and "reactive" here. I mean active as "going out there and > doing things and anticipating issues" and reactive as "dealing with > things as they come up and aren't resolved in another way". > > Doug > > [1] > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/% > 23openstack-tc.2018-04-23.log.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Mon Apr 23 14:05:52 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 22:05:52 +0800 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: I think this depends on the nature of the project. For deployment tools, as we also have witnessed in OPNFV, it tends to have multiple solutions. So it is normal to have multiple such projects although they are solving the same problem generally speaking. For projects that has a clear definition on a specific set of features of functionalities which are critical to any cloud infrastructure, then overlapping should be strictly avoided. I don't think for a team that proposes a new project that got a significant overlap with existing project has seriously studies the community or a good intention to collaborate within the community. Of course there will be exceptions for implementations in different langs but generally I would prefer to take a strong stance on strictly avoiding the overlap. The benefit we would got as a community is that we will have developers working on projects that is clearly defined both individually and collaboratively without any confusion. 
On Mon, Apr 23, 2018 at 9:50 PM, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > In the course of evaluating new projects that have asked to join > as official members of the OpenStack community, we often discuss > whether the feature set of the project overlaps too much with other > existing projects. This came up within the last year during Glare's > application, and more recently as part of the Adjutant application. > > Our current policy regarding Open Development is that a project > should cooperate with existing projects "rather than gratuitously > competing or reinventing the wheel." [1] The flexibility provided > by the use of the term "gratuitously" has allowed us to support > multiple solutions in the deployment and telemetry problem spaces. > At the same time it has left us with questions about how (and > whether) the community would be able to replace the implementation > of any given component with a new set of technologies by "starting > from scratch". > > Where do you draw the line at "gratuitous"? > > What benefits and drawbacks do you see in supporting multiple tools > with similar features? > > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? > > Doug > > [1] https://governance.openstack.org/tc/reference/new-projects- > requirements.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Apr 23 14:06:36 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:06:36 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? Message-ID: <1524491647-sup-1779@lrrr.local> [This is meant to be one of (I hope) several conversation-provoking questions directed at prospective TC members to help the community understand their positions before considering how to vote in the ongoing election.] Over the last year we have seen some contraction in the number of companies and individuals contributing to OpenStack. At the same time we have started seeing contributions from other companies and individuals. To some degree this contraction and shift in contributor base is a natural outcome of changes in OpenStack itself along with the rest of the technology industry, but as with any change it raises questions about how and whether we can ensure a smooth transition to a new steady state. What aspects of our policies or culture make contributing to OpenStack more difficult than contributing to other open source projects? Which of those would you change, and how? 
Where else should we be looking for contributors? Doug From sean.mcginnis at gmx.com Mon Apr 23 14:16:03 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 09:16:03 -0500 Subject: [openstack-dev] [searchlight][release] Searchlight deliverable type Message-ID: <20180423141603.GA4948@sm-xps> Hello searchlighters, The Rocky 1 milestone was last Thursday, and no release request has been submitted for the searchlight deliverables [1]. I remember some discussion at the last Denver PTG about searchlight and that it is basically considered "code complete" at this point until any new requirements come up for it. Is this still (or was it ever) an accurate assessment of the current project state? If so, I am wondering if this project's deliverables should be switched from being a cycle-based deliverable to being considered an independent deliverable. This allows the project to release at any point as needed, and does not require adherence to the milestone-within-cycle model that it is currently set up to follow. I would like to hear from the team to get a better understanding of where this project is and how to best support its release needs. Thanks, Sean From zhipengh512 at gmail.com Mon Apr 23 14:34:05 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 22:34:05 +0800 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: Culture-wise, being too IRC-centric is definitely not helping; that has been my own experience getting new Cyborg developers from China to join our weekly meeting. We could always argue that it is part of the open source/hacker culture and preferable to commercial solutions that carry the constant risk of suddenly being shut down someday. But as OpenStack becomes more commercialized and widely adopted, we should be aware that more and more (potential) contributors will come from groups used to less strictly open source environments, such as product development teams that rely on a lot of closed-source but easy-to-use software. The change? Use more video conferences, and more of the commercial tools preferred in certain regions. Stop being allergic to non-open-source software, and bring in more contributors who are capable but not inclined toward hacker culture. I know this is not a super welcome stance in the open source hacker culture. But if we want OpenStack to be able to sustain more developers, and not have a mid-life crisis and then get pushed to the fringes, we need to start changing the hacker mindset. Another important thing, as I stated in the previous email, is that OpenStack should keep exploring new technology directions, and the TC should take the lead on that. No matter how well we facilitate contributors, a stale community cannot win more contributors. I'm against hype as much as anyone, but being reluctant or lazy about innovation is another matter, and it will cost the community more and more existing and potential contributors. On Mon, Apr 23, 2018 at 10:06 PM, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. 
At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? > > Where else should we be looking for contributors? > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Apr 23 14:35:32 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:35:32 -0400 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: References: <1524489055-sup-8435@lrrr.local> <1524490343-sup-7680@lrrr.local> Message-ID: <1524494065-sup-8223@lrrr.local> Excerpts from Zhipeng Huang's message of 2018-04-23 21:50:15 +0800: > In general I would prefer TC take an active role regarding exploring new > use cases and technology directions leverage the existing OpenStack > infrastructure. I would against TC being too active on project level > governance. This would be a new area for the TC to consider. Can you elaborate a bit on what you think we would need to change in order to support that, and why the TC is the best place to do it (rather than one of our other team-based structures like a project team or SIG)? > > For example we have been discussing about edge computing recently and we > don't have any idea on how a lightweight OpenStack should look like: maybe > no scheduling since edge is more about provisioning ? maybe a Rust > implementation of this lightweight version of OpenStack ? There are so many > interesting new things that yet to be explored and should be championed by > the TC. > > However regarding issues like how a project should govern itself, it is > better for TC to reactive and let project team driven its own structure. I > can't think of there is any concrete example on this matter now since TC > has been doing rather well on this matter , but I guess this could be a > precautious action :) > > On Mon, Apr 23, 2018 at 9:35 PM, Doug Hellmann > wrote: > > > Excerpts from Doug Hellmann's message of 2018-04-23 09:27:09 -0400: > > > [This is meant to be one of (I hope) several conversation-provoking > > > questions directed at prospective TC members to help the community > > > understand their positions before considering how to vote in the > > > ongoing election.] 
> > > > > > We frequently have discussions about whether the TC is active enough, > > > in terms of driving new policies, technology choices, and other > > > issues that affect the entire community. > > > > > > Please describe one case where we were either active or reactive > > > and how that was shown to be the right choice over time. > > > > > > Please describe another case where the choice to be active or > > > reactive ended up being the wrong choice. > > > > > > If you think the TC should tend to be more active in driving change > > > than it is today, please describe the changes (policy, culture, > > > etc.) you think would need to be made to do that effectively (not > > > which policies you want us to be more active on, but *how* to > > > organize the TC to be more active and have that work within the > > > community culture). > > > > > > If you think the TC should tend to be less active in driving change > > > overall, please describe what policies you think the TC should be > > > taking an active role in implementing. > > > > > > Doug > > > > There was a question from ttx on IRC [1] about my use of the terms > > "active" and "reactive" here. I mean active as "going out there and > > doing things and anticipating issues" and reactive as "dealing with > > things as they come up and aren't resolved in another way". > > > > Doug > > > > [1] > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/% > > 23openstack-tc.2018-04-23.log.html > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > -- > Zhipeng (Howard) Huang > > Standard Engineer > IT Standard & Patent/IT Product Line > Huawei Technologies Co,. Ltd > Email: huangzhipeng at huawei.com > Office: Huawei Industrial Base, Longgang, Shenzhen > > (Previous) > Research Assistant > Mobile Ad-Hoc Network Lab, Calit2 > University of California, Irvine > Email: zhipengh at uci.edu > Office: Calit2 Building Room 2402 > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado From sean.mcginnis at gmx.com Mon Apr 23 14:35:57 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 09:35:57 -0500 Subject: [openstack-dev] [searchlight][release] Searchlight deliverable type In-Reply-To: <20180423141603.GA4948@sm-xps> References: <20180423141603.GA4948@sm-xps> Message-ID: <20180423143556.GA7185@sm-xps> On Mon, Apr 23, 2018 at 09:16:03AM -0500, Sean McGinnis wrote: > Hello searchlighters, > > The Rocky 1 milestone was last Thursday, and there has been no release request > was submitted for the searchlight deliverables [1]. > > I remember some discussion at the last Denver PTG about searchlight and that it > is basically considered "code complete" at this point until any new > requirements come up for it. Is this still (or ever) an accurate assessment of > the current project state? > > If so, I am wondering if this project's deliverables should be switched from > being a cycle-based deliverable to being considered an independent deliverable. > This allows the project to release at any point as needed, and does not require > adherance to the milestone within cycle model that it is currently set up to > follow. > > I would like to hear from the team to get a better understanding of where this > project is and how to best support its release needs. 
> In a time-honored tradition, I missed actually adding the link referred to above. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129617.html From gr at ham.ie Mon Apr 23 14:36:32 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 15:36:32 +0100 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> Message-ID: <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> On 18/04/18 11:38, Chris Dent wrote: > On Tue, 17 Apr 2018, Thierry Carrez wrote: > >> So... Is there any specific topic you think we should cover in that >> meeting ? > > The topics: > > 1. What are we to do, as a community, when external pressures for > results are not matched by contribution of resources to produce > those results? There are probably several examples of this, but one > that I'm particularly familiar with is the drive to be able to > satisfy complex hardware topologies demanded by virtual network > functions and related NFV use cases. Within nova, and I suspect other > projects, there is intense pressure to make progress and intense > effort that is removing resources from other areas. But the amount > of daily, visible contribution from the interest companies [1] is > _sometimes_ limited. There are many factors in this, and obviously > "throw more people at it" is not a silver bullet, but there are > things to talk about here that need the input from all the segments. > > 2. We've made progress of late with acknowledging the concepts > and importance of casual contribution and "drive-by bug fixing" in > our changing environment. But we've not yet made enough progress in > changing the way we do work. Corporate foundation members need to be > more aware and more accepting that the people they provide to work > "mostly upstream" need to be focused on making other people capable > of contribution. Not on getting features done. And those of us who > do have the privilege of being "mostly upstream" need to adjust our > priorities. > > Somewhere in that screed are, I think, some things worth talking > about, but they need to be distilled out. > > [1] http://superuser.openstack.org/articles/5g-open-source-att/ As an add-on to this, I think we should ask the board to talk to members and see what contributions they have made to the technical side of OpenStack. This should not just be the number of commits / reviews / bugs etc., but also the motivation for the work, e.g. - feature for a product, bug fix found in a product, cross-project work, or upstream project maintenance. I don't necessarily want to shame corporate members of the foundation, but I think it is important to understand where our contributor base comes from, and what each member brings to the community table. We should also ask the board to try and formulate a plan for growing new cross-project leaders (not just TC / PTLs). We need to grow more technical contributors in the horizontal teams, which requires more than assigning a contributor to the QA / Infra / Oslo / Docs teams for a year or so - the people should be allowed a certain amount of stability in a role, while not necessarily driving business goals. 
> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From doug at doughellmann.com Mon Apr 23 14:39:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:39:59 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: References: <1524259233-sup-3003@lrrr.local> Message-ID: <1524494303-sup-2328@lrrr.local> Excerpts from Zhipeng Huang's message of 2018-04-21 07:06:30 +0800: > As the one who just lead a new project into governance last year, I think I > could take a first stab at it. > > For me the current requirements in general works fine, as I emphasized in > my recent blog [0], the four opens are extremely important. Open Design is > one of the most important out the four I guess, because it actually will > lead to the diversity question. A team with a single vendor, although it > could satisfy all the other three easily, could not have a good open design > rather well. > > Another criteria (more related to the mission statement specifically) I > would consider important is the ability to demonstrate (1)its scope does > not overlap with existing official projects and (2) its ability to actively > work with related projects. The cross project collaboration does not have > to be waited after the project got anointed, rather started when the > project is in conception. In the past we have had challenges with existing teams having time, energy, or interest in working with new teams. These issues are often, but not always, outside of the control of the new teams. What role can, or should, the TC play in mediating these situations? Doug > > Well I guess that is my two cents :) > > [0] https://hannibalhuang.github.io/ > > > > On Sat, Apr 21, 2018 at 5:26 AM, Doug Hellmann > wrote: > > > [This is meant to be one of (I hope) several conversation-provoking > > questions directed at prospective TC members to help the community > > understand their positions before considering how to vote in the > > ongoing election.] > > > > We are discussing adding at least one new project this cycle, and > > the specific case of Adjutant has brought up questions about the > > criteria we use for evaluating new projects when they apply to > > become official. Although the current system does include some > > well-defined requirements [1], it was also designed to rely on TC > > members to use their judgement in some other areas, to account for > > changing circumstances over the life of the project and to reflect > > the position that governance is not something we can automate away. > > > > Without letting the conversation devolve too much into a discussion > > of Adjutant's case, please talk a little about how you would evaluate > > a project's application in general. What sorts of things do you > > consider when deciding whether a project "aligns with the OpenStack > > Mission," for example? 
> > > Doug > > > > > > [1] https://governance.openstack.org/tc/reference/new-projects- > > > requirements.html > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > -- > > Zhipeng (Howard) Huang > > > > Standard Engineer > > IT Standard & Patent/IT Product Line > > Huawei Technologies Co,. Ltd > > Email: huangzhipeng at huawei.com > > Office: Huawei Industrial Base, Longgang, Shenzhen > > > > (Previous) > > Research Assistant > > Mobile Ad-Hoc Network Lab, Calit2 > > University of California, Irvine > > Email: zhipengh at uci.edu > > Office: Calit2 Building Room 2402 > > > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado From zhipengh512 at gmail.com Mon Apr 23 14:40:52 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 22:40:52 +0800 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524494065-sup-8223@lrrr.local> References: <1524489055-sup-8435@lrrr.local> <1524490343-sup-7680@lrrr.local> <1524494065-sup-8223@lrrr.local> Message-ID: I don't have specific ideas right now, but it would be great to have the TC publish something like a new-directions outlook per cycle or per year, summarizing that these x, y, z new areas are what the OpenStack Technical Committee considers worth exploring, and that we will sponsor projects doing development in those areas. Of course I think it would be great for TC members to personally lead projects in these new directions, but finding a way to sponsor or encourage other people to lead is also a great choice :) Hope this clarifies a bit :) On Mon, Apr 23, 2018 at 10:35 PM, Doug Hellmann wrote: > Excerpts from Zhipeng Huang's message of 2018-04-23 21:50:15 +0800: > > In general I would prefer TC take an active role regarding exploring new > > use cases and technology directions leverage the existing OpenStack > > infrastructure. I would against TC being too active on project level > > governance. > > This would be a new area for the TC to consider. Can you elaborate a bit > on what you think we would need to change in order to support that, and > why the TC is the best place to do it (rather than one of our other > team-based structures like a project team or SIG)? > > > > > For example we have been discussing about edge computing recently and we > > don't have any idea on how a lightweight OpenStack should look like: > maybe > > no scheduling since edge is more about provisioning ? maybe a Rust > > implementation of this lightweight version of OpenStack ? There are so > many > > interesting new things that yet to be explored and should be championed > by > > the TC. > > > > However regarding issues like how a project should govern itself, it is > > better for TC to reactive and let project team driven its own structure. 
> I > > can't think of there is any concrete example on this matter now since TC > > has been doing rather well on this matter , but I guess this could be a > > precautious action :) > > > > On Mon, Apr 23, 2018 at 9:35 PM, Doug Hellmann > > wrote: > > > > > Excerpts from Doug Hellmann's message of 2018-04-23 09:27:09 -0400: > > > > [This is meant to be one of (I hope) several conversation-provoking > > > > questions directed at prospective TC members to help the community > > > > understand their positions before considering how to vote in the > > > > ongoing election.] > > > > > > > > We frequently have discussions about whether the TC is active enough, > > > > in terms of driving new policies, technology choices, and other > > > > issues that affect the entire community. > > > > > > > > Please describe one case where we were either active or reactive > > > > and how that was shown to be the right choice over time. > > > > > > > > Please describe another case where the choice to be active or > > > > reactive ended up being the wrong choice. > > > > > > > > If you think the TC should tend to be more active in driving change > > > > than it is today, please describe the changes (policy, culture, > > > > etc.) you think would need to be made to do that effectively (not > > > > which policies you want us to be more active on, but *how* to > > > > organize the TC to be more active and have that work within the > > > > community culture). > > > > > > > > If you think the TC should tend to be less active in driving change > > > > overall, please describe what policies you think the TC should be > > > > taking an active role in implementing. > > > > > > > > Doug > > > > > > There was a question from ttx on IRC [1] about my use of the terms > > > "active" and "reactive" here. I mean active as "going out there and > > > doing things and anticipating issues" and reactive as "dealing with > > > things as they come up and aren't resolved in another way". > > > > > > Doug > > > > > > [1] > > > http://eavesdrop.openstack.org/irclogs/%23openstack-tc/% > > > 23openstack-tc.2018-04-23.log.html > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > -- > > Zhipeng (Howard) Huang > > > > Standard Engineer > > IT Standard & Patent/IT Product Line > > Huawei Technologies Co,. Ltd > > Email: huangzhipeng at huawei.com > > Office: Huawei Industrial Base, Longgang, Shenzhen > > > > (Previous) > > Research Assistant > > Mobile Ad-Hoc Network Lab, Calit2 > > University of California, Irvine > > Email: zhipengh at uci.edu > > Office: Calit2 Building Room 2402 > > > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. 
Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Apr 23 14:41:21 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 23 Apr 2018 16:41:21 +0200 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: <8afb133f-6ead-3115-65e6-74d867adc9ed@openstack.org> Doug Hellmann wrote: > [...] > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. I think that the work on documenting our key principles was proactive, and it really helped to set expectations for new people in our community. > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. The definition of "base services" was also a proactive step, but it failed (so far) to trigger the desired effect (solve the catch-22 around etcd3). > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). Even if the proactive decisions were not all successful, I still think the TC needs to be proactive rather than reactive. We are in a unique position to be able to take a step back and look at the whole picture, rather than look for the dead fish only once you start noticing the smell. We have a few issues that bubbled up and are still unsolved (like the decision on driver teams) that would likely have been easier to resolve if we had addressed them proactively. I don't think we need dramatic changes to be able to drive proactive change effectively. The TC members generally have enough influence to drive that. Some of them are a little shy about using that influence in this way, though, so it ends up falling on the same small set of people to burn their influence credit to drive governance change, and that only lasts for so long. So I'd like to see the TC members (and more generally the people interested in governance problems) more active in discovering issues, proactively addressing them, and owning the changes. -- Thierry Carrez (ttx) From jim at jimrollenhagen.com Mon Apr 23 14:43:27 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Mon, 23 Apr 2018 10:43:27 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: References: Message-ID: On Mon, Apr 23, 2018 at 8:04 AM, Michael Turek wrote: > Hey everyone! > > We had a bug day about two weeks ago and it went pretty well! At last > week's IRC meeting the idea of having one every month was thrown around. > > What does everyone think about having Bug Day the first Thursday of every > month? > I'd totally support a monthly bug day! I'm not sure Thursday is the best day for me but I may be able to make it work. // jim -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doug at doughellmann.com Mon Apr 23 14:43:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:43:43 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: References: <1524259233-sup-3003@lrrr.local> Message-ID: <1524494405-sup-2731@lrrr.local> Excerpts from Rico Lin's message of 2018-04-22 16:50:51 +0800: > Thanks, Doug, for raising this campaign question > > > Here are my answers: > > > ***How you would evaluate a project's application in general > > First I would work through the requirements ([1]) to evaluate projects. > Since most of the requirements are specific enough. And here's more > important part, to leave evaluate logs or comments for projects which we > considered but didn't reach some requirements. It's very important to guide > projects to cross over requirements (and remember, a `-1` only means we > trying to help). > > Then, I work on questions, like: > > `How many user are interesting to/needs the functionality that service > provided?` > > `How active is this project and how's the diversity of contributors?` Our current policy is to allow projects with contributors from a small number of affiliations (even a single employer), under the theory that bringing a team into the community officially will help them grow by showing them the benefits of being more diverse and by making it easier for other community members who have employer restrictions on their open source work to justify contributing. Would you change that policy in any way? > > `Is this project required cross communities/projects cooperation? If yes, > how's the development workflows are working between communities/projects?` > > And last but is one of the most important questions, > > `Is this service aligns with the OpenStack Mission`? (and let's jump to > next question to answer this part) > > > > **What sorts of things do you consider when deciding whether a project > "aligns with the OpenStack Mission," for example?* > > I would consider things like: > > `Is the project's functionality complete the OpenStack infrastructure map?` > > Asking from user requirement and functionality point of view, `how's the > project(services) will make OpenStack better infrastructure for > user/operators?` and `how's this functionality provide a better life for > OpenStack developers?` > > `Is the project provides better integration point between communities` > > To build a better infrastructure, IMO it's also important to ask if a > project (service) really help on integration with other communities like > Kubernetes, OPNFV, CEPH, etc. I think to keep us as an active > infrastructure to solutions is part of our mission too. > > `Is it providing functionality which we can integrate with current projects > or SIG instead?` > > In short, we should be gathering our development energy, to really achieve > the jobs which is exactly why we spend times on trying to find official > projects and said this is part of our mission to work on. So when new > projects jump out, it's really important to discuss cross-project `is it > suitable for projects integrated and join force on specific functionality?` > (to do this while evaluating a project instead of when it's creating might > not be the best time to said `please integrate or join forces with other > teams together`(not even with a smiling face), but it's never too late for > a non-official/incubating project to consider about this). 
I really don't > like to to see any project get higher chances to die just because > developers chance their developing focus. It's happening when projects are > all willing to do the functionality, but no communication between(some > cases, not even now other projects exists), and new/old projects dead, then > TC needs to spend the time to pick those projects out. So IMO, it's worth > to spend times to investigate on whether projects can be joined. Or ideally > to put a resolution said, it's project's obligation to help on this, and > help other join force to be part of the team. Please see my other question about projects with overlapping feature sets [1]. Doug [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > > `Can projects provide cross-project gating?` > > Do think if it's possible, we should consider this when asking if a service > aligns with our mission because not breaking rest of infrastructure is part > of the definition of `to build`. And providing cross-project gate jobs > seems like a way to go. To stable the integration between projects and > prevent released a failed feature when other services trying to work on new > ways and provide no guideline, ML, or solution, just only leave words like > `this is not part of our function to fix`. > > > > And finally, > > If we can answer all above questions, try to put in with the more accurate > number (like from user survey), and provides communications it needs, will > definitely help in finding next official projects. > > Also, when the evaluation is done, we should also evaluate the how's these > evaluation processes, how's guideline working for us? and which questions > above doesn't make any sense?. > > > [1] > https://governance.openstack.org/tc/reference/new-projects-requirements.html > > > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin From zhipengh512 at gmail.com Mon Apr 23 14:45:54 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 22:45:54 +0800 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524494303-sup-2328@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <1524494303-sup-2328@lrrr.local> Message-ID: I think it actually relies on the new team actively reaching out to the existing team. The new team cannot be lazy and wait for something to happen for them; they have to keep reaching out, and believe me, the core developers from the existing official project will lend a hand in the end :) For Cyborg, I didn't even rush the application for official project status. We spent more than a year just discussing the necessity and usefulness of the project, because at the time I was not sure myself whether we were overlapping with something Nova or another project was already doing, or whether we were just dreaming up some use cases. It turns out that by conducting long and active discussions, we got more than enough help from the Nova team :) This is the team famous for its busy schedule, but the core developers helped tremendously because we were the ones actively reaching out. 
So, back to the topic: the TC's role should be to help the new team find a productive way to actively get in contact with any related existing teams, not to put pressure on the existing projects :) On Mon, Apr 23, 2018 at 10:39 PM, Doug Hellmann wrote: > Excerpts from Zhipeng Huang's message of 2018-04-21 07:06:30 +0800: > > As the one who just lead a new project into governance last year, I > think I > > could take a first stab at it. > > > > For me the current requirements in general works fine, as I emphasized in > > my recent blog [0], the four opens are extremely important. Open Design > is > > one of the most important out the four I guess, because it actually will > > lead to the diversity question. A team with a single vendor, although it > > could satisfy all the other three easily, could not have a good open > design > > rather well. > > > > Another criteria (more related to the mission statement specifically) I > > would consider important is the ability to demonstrate (1)its scope does > > not overlap with existing official projects and (2) its ability to > actively > > work with related projects. The cross project collaboration does not have > > to be waited after the project got anointed, rather started when the > > project is in conception. > > In the past we have had challenges with existing teams having time, > energy, or interest in working with new teams. These issues are > often, but not always, outside of the control of the new teams. > What role can, or should, the TC play in mediating these situations? > > Doug > > > > > Well I guess that is my two cents :) > > > > [0] https://hannibalhuang.github.io/ > > > > > > > > On Sat, Apr 21, 2018 at 5:26 AM, Doug Hellmann > > wrote: > > > > > [This is meant to be one of (I hope) several conversation-provoking > > > questions directed at prospective TC members to help the community > > > understand their positions before considering how to vote in the > > > ongoing election.] > > > > > > We are discussing adding at least one new project this cycle, and > > > the specific case of Adjutant has brought up questions about the > > > criteria we use for evaluating new projects when they apply to > > > become official. Although the current system does include some > > > well-defined requirements [1], it was also designed to rely on TC > > > members to use their judgement in some other areas, to account for > > > changing circumstances over the life of the project and to reflect > > > the position that governance is not something we can automate away. > > > > > > Without letting the conversation devolve too much into a discussion > > > of Adjutant's case, please talk a little about how you would evaluate > > > a project's application in general. What sorts of things do you > > > consider when deciding whether a project "aligns with the OpenStack > > > Mission," for example? > > > > > > Doug > > > > > > [1] https://governance.openstack.org/tc/reference/new-projects- > > > requirements.html > > > > > > ____________________________________________________________ > ______________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > > > > > -- > > Zhipeng (Howard) Huang > > > > Standard Engineer > > IT Standard & Patent/IT Product Line > > Huawei Technologies Co,. 
Ltd > > Email: huangzhipeng at huawei.com > > Office: Huawei Industrial Base, Longgang, Shenzhen > > > > (Previous) > > Research Assistant > > Mobile Ad-Hoc Network Lab, Calit2 > > University of California, Irvine > > Email: zhipengh at uci.edu > > Office: Calit2 Building Room 2402 > > > > OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Mon Apr 23 14:47:18 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Mon, 23 Apr 2018 22:47:18 +0800 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> Message-ID: big +1 to Graham's suggestion On Mon, Apr 23, 2018 at 10:36 PM, Graham Hayes wrote: > On 18/04/18 11:38, Chris Dent wrote: > > On Tue, 17 Apr 2018, Thierry Carrez wrote: > > > >> So... Is there any specific topic you think we should cover in that > >> meeting ? > > > > The topics: > > > > 1. What are we to do, as a community, when external pressures for > > results are not matched by contribution of resources to produce > > those results? There are probably several examples of this, but one > > that I'm particularly familiar with is the drive to be able to > > satisfy complex hardware topologies demanded by virtual network > > functions and related NFV use cases. Within nova, and I suspect other > > projects, there is intense pressure to make progress and intense > > effort that is removing resources from other areas. But the amount > > of daily, visible contribution from the interest companies [1] is > > _sometimes_ limited. There are many factors in this, and obviously > > "throw more people at it" is not a silver bullet, but there are > > things to talk about here that need the input from all the segments. > > > > 2. We've made progress of late with acknowledging the concepts > > and importance of casual contribution and "drive-by bug fixing" in > > our changing environment. But we've not yet made enough progress in > > changing the way we do work. Corporate foundation members need to be > > more aware and more accepting that the people they provide to work > > "mostly upstream" need to be focused on making other people capable > > of contribution. Not on getting features done. And those of us who > > do have the privilege of being "mostly upstream" need to adjust our > > priorities. > > > > Somewhere in that screed are, I think, some things worth talking > > about, but they need to be distilled out. 
> > > > [1] http://superuser.openstack.org/articles/5g-open-source-att/ > > > I think as an add on to this, would to ask the board to talk to members > and see what contributions they have made to the technical side of > OpenStack. > > This should not just be Number of commits / reviews / bugs etc but > also the motivation for the work, e.g. - Feature for a product, bug fix > found in a product, cross project work or upstream project maintenance. > > I don't necessarily want to shame corporate members of the foundation, > but I think it is important to understand where our contributor base > comes from, and what each member brings to the community table. > > We should also ask the board to try and formulate a plan for growing > new cross project leaders (not just TC / PTLs). We need to grow more > technical contributors in the horizontal teams, which requires more > than assigning a contributor to the QA / Infra / Olso / Docs teams > for a year or so - the people should be allowed a certain amount > of stability in a role, while not necessarily driving business goals. > > > > > > > > > ____________________________________________________________ > ______________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject: > unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co,. Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Apr 23 14:48:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:48:14 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: References: <1524259233-sup-3003@lrrr.local> Message-ID: <1524494676-sup-5440@lrrr.local> Excerpts from Thierry Carrez's message of 2018-04-22 15:10:40 +0200: > Doug Hellmann wrote: > > [This is meant to be one of (I hope) several conversation-provoking > > questions directed at prospective TC members to help the community > > understand their positions before considering how to vote in the > > ongoing election.] > > > > We are discussing adding at least one new project this cycle, and > > the specific case of Adjutant has brought up questions about the > > criteria we use for evaluating new projects when they apply to > > become official. Although the current system does include some > > well-defined requirements [1], it was also designed to rely on TC > > members to use their judgement in some other areas, to account for > > changing circumstances over the life of the project and to reflect > > the position that governance is not something we can automate away. 
> > > > Without letting the conversation devolve too much into a discussion > > of Adjutant's case, please talk a little about how you would evaluate > > a project's application in general. What sorts of things do you > > consider when deciding whether a project "aligns with the OpenStack > > Mission," for example? > > Thanks for getting the discussion started, Doug. > > We have two main criteria in the requirements. The "follows the > OpenStack way" one, which I call the culture fit, and the "aligns with > the OpenStack mission" one, which I call the product fit. In both cases > there is room for interpretation and for personal differences in > appreciation. > > For the culture fit, while in most cases its straightforward (as the > project is born out of our existing community members), it is sometimes > much more blurry. When the group behind the new project is sufficiently > disjoint from our existing team members, you are judging a future > promise to behave in "the OpenStack way". In those cases it's really an > opportunity to reach out and explain how and why we do things the way we > do them, the principles behind our community norms. In the end it's a > leap of faith. The line I personally stand on is the willingness to > openly collaborate. If the new group is a closed group that has no > interest in including new people and wants to retain "control" over the > project, and is only interested in the marketing boost of being a part > of "OpenStack", then it should really be denied. We provide a space for > open collaboration, not for openwashing projects. > > For the product fit, there is also a lot of room for interpretation. For > me it boils down to whether "OpenStack" (the product) is better with > that project "in" rather than with that project "out". Sometimes it's an > easy sell: if a group wants to collaborate on packaging OpenStack for a > certain format/distro/deployment tool, it's clearly a win. In that case Given the number of complaints we have had over the lifetime of the project about the difficulty of upgrading, I am starting to wonder if we wouldn't have been better off sticking to a single deployment tool. > more is always better. But in most cases it's not as straightforward. > There is always tension between added functionality on one side, and > complexity / dilution / confusion on the other. Every "service" project > we add makes OpenStack more complex to explain, cross-project work more > difficult and interoperability incrementally harder. Whatever is added > has to be damn interesting to counterbalance that. The same goes for Why do you think OpenStack has more trouble explaining our feature set than other cloud systems that have a similarly diverse array of features? > competitive / alternative projects: in some cases the net result is a > win (different approaches to monitoring), while in some cases the net > result would be a loss (a Keystone alternative that would make everyone > else's life more miserable). > > In summary while the rules are precise, the way we interpret them can > still be varied. That is why this discussion is useful: comparing notes > on how we answer that difficult question, understanding where everyone > stands, helps us converge to a general consensus of the goals we are > trying to achieve when defining "OpenStack" scope, even if we disagree > on the particulars. 
> From doug at doughellmann.com Mon Apr 23 14:51:05 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:51:05 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <20180423020145.GB6379@sm-xps> References: <1524259233-sup-3003@lrrr.local> <20180423020145.GB6379@sm-xps> Message-ID: <1524494933-sup-6781@lrrr.local> Excerpts from Sean McGinnis's message of 2018-04-22 21:01:46 -0500: > > > > We are discussing adding at least one new project this cycle, and > > the specific case of Adjutant has brought up questions about the > > criteria we use for evaluating new projects when they apply to > > become official. Although the current system does include some > > well-defined requirements [1], it was also designed to rely on TC > > members to use their judgement in some other areas, to account for > > changing circumstances over the life of the project and to reflect > > the position that governance is not something we can automate away. > > > > Good question to get the conversation going Doug. This is an interesting point > that I think will require some longer term discussions. > > It would be nice if we can narrow this down to a more defined decision tree, > but I also think it may be too difficult to get to the point where it is > something that can be that black and white. For better or worse, I do think > there is some subjective evaluation that is required for each of these so far. > > I think following our four opens is the basis for most decisions. They need to > be developing projects in an open way, and open to community involvement with > the implementation and direction of the project, as a basic starting point. If > they are not willing to follow these basic principles then I think it is an > easy decision to not go any further from there. > > We do care about diversity in contributors. I think it is very important for > the long term health of a project to have multiple interests involved. But I do > not consider this a bar to entry. I think it is perfectly OK for a new (but > open) project to come in with the majority of the work coming from one vendor. > As long as they are open and willing to get others involved in the development > of the project, then it is good. And at least sometimes, starting off is > sometimes better with one perspective driving things toward a focused solution. > > I think one of the important things is if it fits in to furthering what is > "OpenStack", as far as whether it is a service or functionality that is needed > and useful for those running an OpenStack cloud. This is one of the parts that > may be more on the subjective side. We need to see that adding the new project > in question will enhance the use or operation of an OpenStack environment. What do you think we can do to be better informed about whether something is actually useful, or just appears useful? > > There is the question about overlap with existing projects. While I think it's > true that a new project can come along that meets a need in a better way than > an existing solution, I think that bar needs so be raised a lot higher. I > personally would much rather see resources joining together on an existing > solution than a bunch of resources used to come up with a competing solution. > Even with a less than ideal solution, there is a lot that is learned from the > process that can be fed into and combined with new ideas to create a better > solution than just having a new replacement. 
Where should we draw the line with building something new and using tools available from other communities? > > There's probably a lot more that can be said about all of this, but that's my > initial take. Looking forward to seeing what everyone else has to say and > learning from how we are the same and how we are different on this topic. > > Sean > From doug at doughellmann.com Mon Apr 23 14:57:14 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 10:57:14 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: References: <1524259233-sup-3003@lrrr.local> Message-ID: <1524495076-sup-1855@lrrr.local> Excerpts from Chris Dent's message of 2018-04-23 12:09:42 +0100: > On Fri, 20 Apr 2018, Doug Hellmann wrote: > > > [This is meant to be one of (I hope) several conversation-provoking > > questions directed at prospective TC members to help the community > > understand their positions before considering how to vote in the > > ongoing election.] > > Thanks for getting the ball rolling on some discussion, Doug. > > > Without letting the conversation devolve too much into a discussion > > of Adjutant's case, please talk a little about how you would evaluate > > a project's application in general. What sorts of things do you > > consider when deciding whether a project "aligns with the OpenStack > > Mission," for example? > > This is an important question because project applications are one > of the few ways in which the TC exercises any direct influence over > the shape and direction of OpenStack. Much of the rest of the time > the TC's influence is either indirect or limited. That's something I > think we should change, in part because I feel the role of the TC > should be at least as, if not more, focused on the day-to-day > experiences and capabilities of existing contributors as it is on > new ones. Please see my other question about the role of the TC, and being active or reactive. [1] [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129658.html > I prefer that we keep a large human factor involved in the > application process. I do not want us to be purely objective because > such a process can never take into account the wider and ever > changing world. The members of the TC can be human info sponges that > do that accounting. The current process was created in part to > overcome the far too heavy and nitpicking (and human) previous > process but it has resulted in what amounts to a dilution in > direction. > > For me, each application tends to result in a lot of questions such > as the list I produced on patchset 34 of the Adjutant review[1]. I > worry that we are predisposed to accept applicants out of a general > sense of being "nice" and a belief that growth is a sign of health. > I'm unsure how these behaviors help to drive OpenStack in its > mission, but while the rules [2] say something as broad as > > It should help further the OpenStack mission, by providing a > cloud infrastructure service, or directly building on an > existing OpenStack infrastructure service. > > I feel we're painted into something of a corner where acceptance > must be the default unless there are egregious interoperability or > "four opens" violations. > > I'd like to see us work harder to refine the long term goals we are > trying to satisfy with the projects that make up OpenStack. 
This > will require us to continue the never-ending discussion about > whether OpenStack is a "Software Defined Infrastructure Framework" > or a "Cloud Solution" (plenty of people talk the latter, but plenty > of other people are spending energy on the former). And then Do you consider those two approaches to be mutually exclusive? In the past our community has had trouble defining "infrastructure" in a way that satisfies everyone. Some people stop at "allocating what you need to run a VM" while others consider it closer to "everything you need to run an application". How do you define "infrastructure"? > actually follow through: using the outcome of those discussions to > impact not just projects that we accept but also where existing > projects focus their attention. We need to be as capable of saying > an informed "no" as we are of saying "yes". > > In the modern OpenSource world there are so many different > ecosystems that are cloud friendly: We don't need to provide a home > for everyone. There are plenty of places for people to go, including > the many different (and growing) facets of the OpenStack community. > I would prefer that we be assertive in how we evaluate for alignment > with the OpenStack mission. Doing that requires fairly constant > re-evaluation of the mission and a willingness to accept that it > does (and must) change. > > [1] https://review.openstack.org/#/c/553643/ > [2] https://governance.openstack.org/tc/reference/new-projects-requirements.html > From thierry at openstack.org Mon Apr 23 14:59:20 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 23 Apr 2018 16:59:20 +0200 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: <87faae42-58b1-1c93-dd88-fc7d53fa74ec@openstack.org> Doug Hellmann wrote: > Where do you draw the line at "gratuitous"? The way I interpret "gratuitous" here is: is the new project using a technically-different approach to the same problem, or is it just another group working at the same problem in the same way? Is the new project just a way to avoid openly collaborating with the existing group? Is the new project just a way for a specific organization (or group of organizations) to create something they have more control over? That would be gratuitous duplication, not motivated by a technical reason. We don't really want copies or forks of projects that are just running around the current group in charge. That should be solved at the governance level (and it's the TC's role to address that). > What benefits and drawbacks do you see in supporting multiple tools > with similar features? I touched on that point a bit in my answer on considering new projects. Allowing competition gives you options and lets a thousand flowers bloom, but at the cost of adding complexity / dilution / confusion to the "product" and making interoperability generally more difficult. Generally, the closer to the "core" you are, the less competition you should allow. It's OK to have multiple options for operational tooling or deployment. It's less OK to have two Keystones that every component now needs to be compatible with. Of course the area between those two extremes is all shades of grey. > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? I feel like we have been pretty strict with competitive projects. 
I fear that being stricter would completely close the door to potential evolution. -- Thierry Carrez (ttx) From doug at doughellmann.com Mon Apr 23 15:04:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 11:04:57 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> Message-ID: <1524495441-sup-7739@lrrr.local> Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: > On 20/04/18 22:26, Doug Hellmann wrote: > > > Without letting the conversation devolve too much into a discussion > > of Adjutant's case, please talk a little about how you would evaluate > > a project's application in general. What sorts of things do you > > consider when deciding whether a project "aligns with the OpenStack > > Mission," for example? > > > > Doug > > > > For me, the most important thing for a project that wants to join is > that they act like "one of us" - what I think ttx referred to as "culture > fit". > > This is fairly wide ranging, but includes things like: > > * Do they use the PTIs[0] > * Do they use gerrit, or if they use something else, do they follow > the same review styles and mechanisms? > * Are they on IRC? > * Do they use the mailing list for long running discussion? > ** If a project doesn't have long running discussions and as a result > does not have ML activity, I would see that as OK - my problem > would be with a team that ran their own list. > * Do they use standard devstack / -infra jobs for testing? > * Do they use the standard common libraries (where appropriate)? > > If a project fails this test (and would have been accepted as something > that drives the mission), I see no issue with the TC trying to bring > them into the fold by helping them work like one of us, and accepting > them when they have shown that they are willing to change how they > do things. > > For the "product" fit, it is a lot more subjective. We used to have a > system (pre Big Tent) where the TC picked "winners" in a space and > blessed one project as the way to do $thing. Then, in big tent we > started to not pick winners, and allow anyone who was one of us, and > had a "cloud" application. > > Recently, we have moved back to seeing if a project overlaps with > another. The real test for this (from my viewpoint) is if the > perceived overlap is an area that the team that is currently in > OpenStack is interested in pursuing - if not we should default to > adding the project. We've always considered overlap to some degree, but it has come up more explicitly in a few recent discussions because of the nature of the projects. Please see the other thread on this topic [1]. [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > Personally, if the project adds something that we currently lack, > and have lacked for a long time (not to get too close to the current > discussion), or tries to reduce the amount of extra tooling that > deployers currently write in house, we should welcome them. > > The acid test for me is "How would I use this?" or "Have I written > tooling or worked somewhere that wrote tooling to do this?" > > If the answer is yes, it is a good indication that they fit with the > mission. This feels like the ideal open source approach, in which contributors are "scratching their own itch." 
How can we encourage more deployers and users of OpenStack to consider contributing their customization and integration projects? Should we? Doug > > - Graham > > 0 - > https://governance.openstack.org/tc/reference/project-testing-interface.html From doug at doughellmann.com Mon Apr 23 15:14:57 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 11:14:57 -0400 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> Message-ID: <1524496161-sup-6113@lrrr.local> Excerpts from Graham Hayes's message of 2018-04-23 15:36:32 +0100: > On 18/04/18 11:38, Chris Dent wrote: > > On Tue, 17 Apr 2018, Thierry Carrez wrote: > > > >> So... Is there any specific topic you think we should cover in that > >> meeting? > > > > The topics: > > > > 1. What are we to do, as a community, when external pressures for > > results are not matched by contribution of resources to produce > > those results? There are probably several examples of this, but one > > that I'm particularly familiar with is the drive to be able to > > satisfy complex hardware topologies demanded by virtual network > > functions and related NFV use cases. Within nova, and I suspect other > > projects, there is intense pressure to make progress and intense > > effort that is removing resources from other areas. But the amount > > of daily, visible contribution from the interested companies [1] is > > _sometimes_ limited. There are many factors in this, and obviously > > "throw more people at it" is not a silver bullet, but there are > > things to talk about here that need the input from all the segments. > > > > 2. We've made progress of late with acknowledging the concepts > > and importance of casual contribution and "drive-by bug fixing" in > > our changing environment. But we've not yet made enough progress in > > changing the way we do work. Corporate foundation members need to be > > more aware and more accepting that the people they provide to work > > "mostly upstream" need to be focused on making other people capable > > of contribution. Not on getting features done. And those of us who > > do have the privilege of being "mostly upstream" need to adjust our > > priorities. > > > > Somewhere in that screed are, I think, some things worth talking > > about, but they need to be distilled out. > > > > [1] http://superuser.openstack.org/articles/5g-open-source-att/ > > > I think, as an add-on to this, we should ask the board to talk to members > and see what contributions they have made to the technical side of > OpenStack. > > This should not just be the number of commits / reviews / bugs etc but > also the motivation for the work, e.g. - Feature for a product, bug fix > found in a product, cross project work or upstream project maintenance. A while back Jay Pipes suggested that we ask contributing companies to summarize their work. I think that was in the context of understanding what platinum members are doing, but it could apply to everyone. By leaving the definition of "contribution" open-ended and asking as a way to celebrate those contributions, we could avoid any sense of shaming as well as see what the companies consider to be important. 
> > I don't necessarily want to shame corporate members of the foundation, > but I think it is important to understand where our contributor base > comes from, and what each member brings to the community table. > > We should also ask the board to try and formulate a plan for growing > new cross project leaders (not just TC / PTLs). We need to grow more > technical contributors in the horizontal teams, which requires more > than assigning a contributor to the QA / Infra / Oslo / Docs teams > for a year or so - the people should be allowed a certain amount > of stability in a role, while not necessarily driving business goals. This topic has come up a few times. I wonder if we could get more traction here if we had details about how attempts in the past have failed ("person X was given 6 months to train on the team before being moved to a different project", etc.)? Pulling together that sort of information might take longer than we have between now and the Vancouver meeting. I also anticipate the board's response being, "Tell us what you need done," so we should have an answer to that, even if it's just "we need help with ideas, let's form a working group". > > > > > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From fungi at yuggoth.org Mon Apr 23 15:15:40 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 23 Apr 2018 15:15:40 +0000 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> Message-ID: <20180423151540.blcv4wbiaucxv2al@yuggoth.org> On 2018-04-23 15:36:32 +0100 (+0100), Graham Hayes wrote: > I think, as an add-on to this, we should ask the board to talk to members > and see what contributions they have made to the technical side of > OpenStack. > > This should not just be the number of commits / reviews / bugs etc but > also the motivation for the work, e.g. - Feature for a product, bug fix > found in a product, cross project work or upstream project maintenance. > > I don't necessarily want to shame corporate members of the foundation, > but I think it is important to understand where our contributor base > comes from, and what each member brings to the community table. > > We should also ask the board to try and formulate a plan for growing > new cross project leaders (not just TC / PTLs). We need to grow more > technical contributors in the horizontal teams, which requires more > than assigning a contributor to the QA / Infra / Oslo / Docs teams > for a year or so - the people should be allowed a certain amount > of stability in a role, while not necessarily driving business goals. [...] Taking this further, I really think that the spirit of our requirement that certain member organizations dedicate staff to contributing is that they be applied to under-served commons in the project (whether that's helping in horizontal teams and on cross-project goals, or triaging bugs and answering random usage questions). If they get to count the staff they put on some feature they needed for their new product launch, that's rather self-serving and doesn't really help us much. 
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gr at ham.ie Mon Apr 23 15:20:46 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 16:20:46 +0100 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: <3ccc173b-954a-3439-800c-d8a3092a22c3@ham.ie> On 23/04/18 15:06, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? Our scale. To get a large feature merged can require getting code prioritised by 2 or 3 different teams, and merged into any number of repositories. To get a small feature merged on some projects can take some time as well, purely from the amount of code that is submitted for review to these projects. > Which of those would you change, and how? Well, I definitely wouldn't change our scale. What I think we need to do is start breaking down some of the gigantic mono repos we have, so that reviewing a small feature does not need large amounts of contextual knowledge. I think this is happening organically in some teams already with a few teams completely plugin based and distributed (like the docs team). When code can be reviewed in isolation without the fear of breaking something 2 projects away, it helps both review time and new contributor experience. > Where else should we be looking for contributors? Honestly, I don't know. The kind of work that our contributors do does require a certain level of equipment, and "upstream time" that makes any serious feature development hard for a casual weekend contributor. > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Mon Apr 23 15:27:04 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 16:27:04 +0100 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524495441-sup-7739@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> <1524495441-sup-7739@lrrr.local> Message-ID: <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> On 23/04/18 16:04, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: >> 7On 20/04/18 22:26, Doug Hellmann wrote: >> >>> Without letting the conversation devolve too much into a discussion >>> of Adjutant's case, please talk a little about how you would evaluate >>> a project's application in general. What sorts of things do you >>> consider when deciding whether a project "aligns with the OpenStack >>> Mission," for example? >>> >>> Doug >>> >> >> For me, the most important thing for a project that wants to join is >> that they act like "one of us" - what I think ttx refered to as "culture >> fit". >> >> This is fairly wide ranging, but includes things like: >> >> * Do they use the PTIs[0] >> * Do they use gerrit, or if they use something else, do they follow >> the same review styles and mechanisms? >> * Are they on IRC? >> * Do they use the mailing list for long running discussion? >> ** If a project doesn't have long running discussions and as a result >> does not have ML activity, I would see that as OK - my problem >> would be with a team that ran their own list. >> * Do they use standard devstack / -infra jobs for testing? >> * Do they use the standard common libraries (where appropriate)? >> >> If a project fails this test (and would have been accepted as something >> that drives the mission), I see no issue with the TC trying to bring >> them into the fold by helping them work like one of us, and accepting >> them when they have shown that they are willing to change how they >> do things. >> >> For the "product" fit, it is a lot more subjective. We used to have a >> system (pre Big Tent) where the TC picked "winners" in a space and >> blessed one project as the way to do $thing. Then, in big tent we >> started to not pick winners, and allow anyone who was one of us, and >> had a "cloud" application. >> >> Recently, we have moved back to seeing if a project overlaps with >> another. The real test for this (from my viewpoint) is if the >> perceived overlap is an area that the team that is currently in >> OpenStack is interested in pursuing - if not we should default to >> adding the project. > > We've always considered overlap to some degree, but it has come up > more explicitly in a few recent discussions because of the nature > of the projects. Please see the other thread on this topic [1]. > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > >> Personally, if the project adds something that we currently lack, >> and have lacked for a long time (not to get too close to the current >> discussion), or tries to reduce the amount of extra tooling that >> deployers currently write in house, we should welcome them. >> >> The acid test for me is "How would I use this?" or "Have I written >> tooling or worked somewhere that wrote tooling to do this?" >> >> If the answer is yes, it is a good indication that they fit with the >> mission. 
> > This feels like the ideal open source approach, in which contributors > are "scratching their own itch." How can we encourage more deployers > and users of OpenStack to consider contributing their customization > and integration projects? Should we? I think a lot of our major users are good citizens and are doing some or all of this work in the open - we just have a discoverability issue. A lot of the benefit of joining the foundation as a project is the increased visibility gained from it, so that others who are deploying OpenStack in a similar layout can find a project and use it. I think at the very least we should find a way to promote them (this is where constellations could really help, as we could add non-member projects to constellations where they are appropriate). > Doug > >> >> - Graham >> >> 0 - >> https://governance.openstack.org/tc/reference/project-testing-interface.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From cdent+os at anticdent.org Mon Apr 23 15:28:11 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 16:28:11 +0100 (BST) Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: On Mon, 23 Apr 2018, Doug Hellmann wrote: > We frequently have discussions about whether the TC is active enough, > in terms of driving new policies, technology choices, and other > issues that affect the entire community. Another good question. Like all the others I wish they had come a bit earlier so that we had more time to deliberate and converse before the elections start tonight. These deserve considerable thought. I hope it's no secret that I think the TC should be more active in its leadership, both technically and culturally. Often the TC operates as a kind of supreme court, leading from behind. Since I joined the community four years ago I've often wished for a more unified leadership from the front, and I think the representative model provided by the TC (a model which transcends the individual projects and concentrates on the bigger picture) could provide that if we want it to. > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. > > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. I think the recent process which eventually led to clarification on interop testing at https://review.openstack.org/#/c/550571/ is a relatively good example of what might be described as active reaction. Through consultation with many involved parties we changed the rules to better reflect reality and support projects more effectively. 
At the same time, we failed to act quickly enough on the same topic with https://review.openstack.org/#/c/521602/ , where though some parties had identified some clear problems, the TC (as a group) failed to act in a timely fashion (there's a nearly two month gap with no comments) to resolve them, in part because there wasn't agreement that it was a domain that the TC should legislate. My feeling is that if technical contributors to OpenStack are involved, then that's a place where the TC can and should engage. > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). Despite my use of the term "legislate" above I think Howard's idea of "a new direction outlook per cycle or per year" is a critical aspect of what the TC should be doing. Setting tone and overarching themes to help distinguish between what matters and what does not matter. The vision statement was somewhat useful in this regard, but we also need something that is more immediate term: thematic goals for this cycle. OpenStack-wide goals are also helpful, but they tend to be very specific and don't do much to help answer "no" to the question: "is this thing I'm considering aligned with the current themes?" We've talked in the past about using time at the PTG to express these themes but I think we need to do more than that. As you (Doug) have said before: We need to habituate people to where they can reliably find and discover information about what matters. This will often mean what feels like a lot of repetition. It will take effort to make these kinds of changes. We are large enough now, and vest so much power and self-determination in the individual projects, that it will take a lot of convincing and orchestrating to make a significant culture change that aligns us on common goals. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From sean.mcginnis at gmx.com Mon Apr 23 15:29:43 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 10:29:43 -0500 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524494933-sup-6781@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <20180423020145.GB6379@sm-xps> <1524494933-sup-6781@lrrr.local> Message-ID: <20180423152942.GA8979@sm-xps> > > > > I think one of the important things is if it fits in to furthering what is > > "OpenStack", as far as whether it is a service or functionality that is needed > > and useful for those running an OpenStack cloud. This is one of the parts that > > may be more on the subjective side. We need to see that adding the new project > > in question will enhance the use or operation of an OpenStack environment. > > What do you think we can do to be better informed about whether > something is actually useful, or just appears useful? > This is definitely a tricky part. We need to be willing to get out and make connections outside of our small group and learn what we can about how things are used in the real world. This is one of the main reasons I've been involved in the ops meetups. I want to be able to hear directly from the folks running OpenStack clouds what their challenges are and how they are addressing those challenges today. 
That helps inform later decisions about whether some new service fits in with what they need, or if it would be something that doesn't actually fit with what is commonly done. > > > > There is the question about overlap with existing projects. While I think it's > > true that a new project can come along that meets a need in a better way than > > an existing solution, I think that bar needs to be raised a lot higher. I > > personally would much rather see resources joining together on an existing > > solution than a bunch of resources used to come up with a competing solution. > > Even with a less than ideal solution, there is a lot that is learned from the > > process that can be fed into and combined with new ideas to create a better > > solution than just having a new replacement. > > Where should we draw the line with building something new and using > tools available from other communities? > Fighting "not invented here" tendencies is always a challenge. There's usually no clear line with these things from my experience. I think we need to be willing to take a look at what something is trying to solve, and able to take a look around and see if there is already something solving it, or doing something close enough to be easily adapted to fit our specific usage. Even if there is a potential existing tool available, we also need to evaluate whether that tool's technology (programming language, platform, etc) fits and whether its community is compatible enough with ours. For example, are they willing to work with outside consumers like us that may have some different needs than their current user base? Are they an open community and not a vendor of a proprietary tool? From sambetts at cisco.com Mon Apr 23 15:41:39 2018 From: sambetts at cisco.com (Sam Betts (sambetts)) Date: Mon, 23 Apr 2018 15:41:39 +0000 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: References: Message-ID: <1C3ECD04-8551-46E3-8696-AAC1AC12E103@cisco.com> 100% on board with this, I think it was really productive! Sam On 23/04/2018, 15:44, "Jim Rollenhagen" > wrote: On Mon, Apr 23, 2018 at 8:04 AM, Michael Turek > wrote: Hey everyone! We had a bug day about two weeks ago and it went pretty well! At last week's IRC meeting the idea of having one every month was thrown around. What does everyone think about having Bug Day the first Thursday of every month? I'd totally support a monthly bug day! I'm not sure Thursday is the best day for me but I may be able to make it work. // jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Apr 23 16:02:14 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 23 Apr 2018 12:02:14 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524259233-sup-3003@lrrr.local> References: <1524259233-sup-3003@lrrr.local> Message-ID: <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> On 20/04/18 17:26, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] Thanks Doug, I think this is a really helpful question. > We are discussing adding at least one new project this cycle, and > the specific case of Adjutant has brought up questions about the > criteria we use for evaluating new projects when they apply to > become official. 
Although the current system does include some > well-defined requirements [1], it was also designed to rely on TC > members to use their judgement in some other areas, to account for > changing circumstances over the life of the project and to reflect > the position that governance is not something we can automate away. > > Without letting the conversation devolve too much into a discussion > of Adjutant's case, please talk a little about how you would evaluate > a project's application in general. What sorts of things do you > consider when deciding whether a project "aligns with the OpenStack > Mission," for example? > > Doug > > [1] https://governance.openstack.org/tc/reference/new-projects-requirements.html The first thing to mention is that I take a fairly expansive view of what IaaS comprises. (For example, I think a message queue like Zaqar is a critical component of an IaaS, which often surprises people, but how can an application use its infrastructure as a server without notifications about what's going on in the infrastructure? I guess we could try to optimise for polling in a thousand different places, but it's much simpler to expose events in one place and optimise polling of that.) A while back[1] I threw together a non-exhaustive list of the kinds of goals I think OpenStack should be working towards: * Infinite scaling - the ability in principle to scale from zero to an arbitrarily large number of users without rewriting your application (e.g. if your application can store one file in Swift then there's no theoretical limit to how many it can store. c.f. Cinder where at some point you'd have to start juggling multiple volumes.) * Granularity of allocation - pay only for the resources you actually use, rather than to allocate a chunk that you may or may not be using (so Nova->containers->FaaS, Cinder->Swift, Trove->??? [RIP MagnetoDB], &c.) * Full control of infrastructure - notwithstanding the above, maintain Nova/Cinder/Neutron/Trove/&c. so that legacy applications, highly specialised applications, and higher-level services like PaaS can make fully customised use of the virtual infrastructure. * Hardware virtualisation - make anything that might typically be done in hardware available in a multi-tenant software-defined environment: servers, routers, load balancers, firewalls, video codecs, GPGPUs, FPGAs... * Built-in reliability - don't require even the smallest apps to have 3 VMs + a cluster manager to enforce any reliability guarantees; provide those guarantees using multi-tenant services that efficiently share resources between applications (see also: Infinite scaling, Granularity of allocation). * Application control - (securely) give applications control over their own infrastructure, so that no part of the application needs to reside outside of the cloud. * Integration - cloud services that effectively form part of the user's application can communicate amongst themselves, where appropriate, without the need for client-side glue (see also: Built-in reliability). * Interoperability - the same applications can be deployed on a variety of private and public OpenStack clouds. I'm definitely not claiming to have captured the full range of possibilities there (notably it doesn't attempt to cover e.g. deployment-related projects), but at a minimum any project contributing to one or more of those points is something I would consider to be aligned with OpenStack's mission. (That, of course, is only one of several criteria that are considered.) 
I would love to see us have a conversation as a community to figure out what we all, collectively, think that list should look like and document it. Ideally new projects shouldn't have to wait until they've applied to join OpenStack to get a sense of whether we believe they're furthering our mission or not. To be clear, although I don't expect it to come up, something like a PaaS (think CloudFoundry, or OpenShift) would *not* be in-scope for OpenStack in my view. (We should definitely encourage them to integrate with Keystone anyway though!) One thing I think the TC needs to be wary of is that at this stage of maturity there may be a temptation to engage in a sort of 'regulatory arbitrage': to try to land your pet feature wherever it's easiest to get it accepted, rather than where it makes the most technical sense. Indulging that temptation works against interoperability, which is a critical part of our mission. It should be easy enough for the TC to spot and reject projects that should just be a feature somewhere else. But another danger is projects whose implementations provide a very flexible framework (Adjutant is an example, it has a plug-in model that can basically make it an API for anything) - they run the risk of turning into a catch-all bucket for random features. The solution there is clear communication. From the TC perspective I think that should mean insisting on an explicitly-defined scope, erring on the side of too narrow if necessary - a team can always come back to the TC and negotiate an increased scope, but it's almost impossible to narrow the scope of an established project. The Four Opens obviously remain critical, and I think the TC has been doing a good job of policing new projects on those. It will be necessary to accept single-vendor projects, because in part the purpose of bringing projects under TC governance is to reassure potential contributors that the project is committed to the Four Opens. (An example from my own experience: Heat was accepted into incubation as a single-vendor project, and only later blossomed into an extremely diverse project. So it can work.) The main thing I will be looking out for in those cases is that the project followed the Four Opens *from the beginning*. Projects that start from a code dump are much less likely to attract other contributors in my view. Open Source is not a verb. cheers, Zane. [1] You can read it in-context here: https://review.openstack.org/#/c/401226/2/reference/openstack-vision.rst at 34 From cdent+os at anticdent.org Mon Apr 23 16:02:41 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 17:02:41 +0100 (BST) Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: On Mon, 23 Apr 2018, Doug Hellmann wrote: > Where do you draw the line at "gratuitous"? > > What benefits and drawbacks do you see in supporting multiple tools > with similar features? > > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? This is a tough question. We've held API stability and interoperability as such important factors that any discussion of overlapping or competing projects has to consider whether we are willing to bend on that. It's also never been entirely clear the extent to which new projects eat into the resource pool that is available to existing projects. 
But if we set aside those issues for a moment: I would say that "gratuitous" overlap would be when a project wants to provide a service similar to one that already exists and has failed utterly to engage with the existing service. It would not, however, be gratuitous if a potential project, after presenting their alternate proposal to the similar project and getting a "not interested" or "we can't attend to that any time in the next $TIME_PERIOD", chose to go ahead. For example, one can imagine a world where someone thinks up a project to create a different service for managing VMs. One that intends to "innovate" in the compute API space (breaking compatibility with nova's API) and manage compute nodes using etcd in a way somewhat like Kubernetes. Nova is approached and says "yeah, interesting, but not going to happen, we are booked up solid for the next two years". If the people involved in the potential project are numerous and diverse enough to have a chance of getting something done, then I think they should be encouraged, for the sake of innovation, diversity, attracting new contributors and leapfrogging ourselves into the future. It's quite likely that during discussions a "compute api v2.x compatibility layer" would be negotiated. A real world example where things could have gone better is with Mogan: https://review.openstack.org/#/c/508400/ There are some fairly obvious costs from overlapping projects: * potential drains on the resource pool * confusion and churn for people downstream (packagers, client makers, deployers, everyday users) These are potentially countered by: * new or rejuvenated contributors, inspired by new stuff * advancements in capability provided by new technologies * a potential for positive and collaborative competition between the two related projects People's needs evolve and change. OpenStack needs to as well. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From thierry at openstack.org Mon Apr 23 16:09:39 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 23 Apr 2018 18:09:39 +0200 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> Doug Hellmann wrote: > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? Our focus for the past 7 years was on handling the enormous growth of the OpenStack project. If you asked me in 2010 how many total code contributors we'd have by 2018, my answer would probably have been closer to 700 than to 7000. We've built systems and processes to sustain that growth, and we were very successful at it. The issue is that systems and processes designed to sustain times of inflation do not work so well in a deflation period, or even a stagnation period. 
It's urgent now to have a critical look at them, see what is useful and what is a scale optimization we could do away with. Our largest reserve of potential contributors lies in the vast number of users we have. In my opinion, one of the mistakes we made was to create an "operators" community separate from the "developers" community, almost in reaction to it. That makes it more difficult to smoothly transition users into contributors and ultimately into code contributions. Melvin and I have been busy over the past two cycles fixing that in various ways, but there is still a lot of work to do. > Where else should we be looking for contributors? Like other large open source projects, OpenStack has a lot of visibility in the academic sector. I feel like we are less successful than others in attracting contributions from there, and we could do a lot better by engaging with them more directly. -- Thierry Carrez (ttx) From doug at doughellmann.com Mon Apr 23 16:14:43 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 12:14:43 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> <1524495441-sup-7739@lrrr.local> <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> Message-ID: <1524500051-sup-3248@lrrr.local> Excerpts from Graham Hayes's message of 2018-04-23 16:27:04 +0100: > On 23/04/18 16:04, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: > >> 7On 20/04/18 22:26, Doug Hellmann wrote: > >> > >>> Without letting the conversation devolve too much into a discussion > >>> of Adjutant's case, please talk a little about how you would evaluate > >>> a project's application in general. What sorts of things do you > >>> consider when deciding whether a project "aligns with the OpenStack > >>> Mission," for example? > >>> > >>> Doug > >>> > >> > >> For me, the most important thing for a project that wants to join is > >> that they act like "one of us" - what I think ttx refered to as "culture > >> fit". > >> > >> This is fairly wide ranging, but includes things like: > >> > >> * Do they use the PTIs[0] > >> * Do they use gerrit, or if they use something else, do they follow > >> the same review styles and mechanisms? > >> * Are they on IRC? > >> * Do they use the mailing list for long running discussion? > >> ** If a project doesn't have long running discussions and as a result > >> does not have ML activity, I would see that as OK - my problem > >> would be with a team that ran their own list. > >> * Do they use standard devstack / -infra jobs for testing? > >> * Do they use the standard common libraries (where appropriate)? > >> > >> If a project fails this test (and would have been accepted as something > >> that drives the mission), I see no issue with the TC trying to bring > >> them into the fold by helping them work like one of us, and accepting > >> them when they have shown that they are willing to change how they > >> do things. > >> > >> For the "product" fit, it is a lot more subjective. We used to have a > >> system (pre Big Tent) where the TC picked "winners" in a space and > >> blessed one project as the way to do $thing. Then, in big tent we > >> started to not pick winners, and allow anyone who was one of us, and > >> had a "cloud" application. > >> > >> Recently, we have moved back to seeing if a project overlaps with > >> another. 
The real test for this (from my viewpoint) is if the > >> perceived overlap is an area that the team that is currently in > >> OpenStack is interested in pursuing - if not we should default to > >> adding the project. > > > > We've always considered overlap to some degree, but it has come up > > more explicitly in a few recent discussions because of the nature > > of the projects. Please see the other thread on this topic [1]. > > > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > > > >> Personally, if the project adds something that we currently lack, > >> and have lacked for a long time (not to get too close to the current > >> discussion), or tries to reduce the amount of extra tooling that > >> deployers currently write in house, we should welcome them. > >> > >> The acid test for me is "How would I use this?" or "Have I written > >> tooling or worked somewhere that wrote tooling to do this?" > >> > >> If the answer is yes, it is a good indication that they fit with the > >> mission. > > > > This feels like the ideal open source approach, in which contributors > > are "scratching their own itch." How can we encourage more deployers > > and users of OpenStack to consider contributing their customization > > and integration projects? Should we? > > I think a lot of our major users are good citizens and are doing some or > all of this work in the open - we just have a discoverability issue. > > A lot of the benefit of joining the foundation as a project is the > increased visibility gained from it, so that others who are deploying > OpenStack in a similar layout can find a project and use it. > > I think at the very least we should find a way to promote them (this > is where constellations could really help, as we could add non-member > projects to constellations where they are appropriate). Do you foresee any issues with adding unofficial projects to the constellations? Doug > > > Doug > > >> > >> - Graham > >> > >> 0 - > >> https://governance.openstack.org/tc/reference/project-testing-interface.html > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > From thierry at openstack.org Mon Apr 23 16:18:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 23 Apr 2018 18:18:03 +0200 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524494676-sup-5440@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <1524494676-sup-5440@lrrr.local> Message-ID: Doug Hellmann wrote: > Excerpts from Thierry Carrez's message of 2018-04-22 15:10:40 +0200: >> For the product fit, there is also a lot of room for interpretation. For >> me it boils down to whether "OpenStack" (the product) is better with >> that project "in" rather than with that project "out". Sometimes it's an >> easy sell: if a group wants to collaborate on packaging OpenStack for a >> certain format/distro/deployment tool, it's clearly a win. In that case >> more is always better. But in most cases it's not as straightforward. >> There is always tension between added functionality on one side, and >> complexity / dilution / confusion on the other. Every "service" project >> we add makes OpenStack more complex to explain, cross-project work more >> difficult and interoperability incrementally harder. 
Whatever is added >> has to be damn interesting to counterbalance that. The same goes for > > Why do you think OpenStack has more trouble explaining our feature set > than other cloud systems that have a similarly diverse array of > features? You mean compared to AWS? It's not the same thing to explain a set of APIs to end users of the cloud and to describe available components to the deployers of the cloud, especially newcomers. For example, Zun API users don't have to know if it relies on Heat, Magnum or Nova to actually do its magic behind the scenes. A Zun deployer absolutely needs to know that. I hope that the Constellation concept will help the latter traverse our product map more efficiently. -- Thierry Carrez (ttx) From gr at ham.ie Mon Apr 23 16:19:04 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 17:19:04 +0100 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: On 23/04/18 14:50, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > In the course of evaluating new projects that have asked to join > as official members of the OpenStack community, we often discuss > whether the feature set of the project overlaps too much with other > existing projects. This came up within the last year during Glare's > application, and more recently as part of the Adjutant application. > > Our current policy regarding Open Development is that a project > should cooperate with existing projects "rather than gratuitously > competing or reinventing the wheel." [1] The flexibility provided > by the use of the term "gratuitously" has allowed us to support > multiple solutions in the deployment and telemetry problem spaces. > At the same time it has left us with questions about how (and > whether) the community would be able to replace the implementation > of any given component with a new set of technologies by "starting > from scratch". > > Where do you draw the line at "gratuitous"? Does the project basically promise "$OTHER_PROJECT but better"? For example, for me, if a project re-created another project's API - I would call that gratuitous. > What benefits and drawbacks do you see in supporting multiple tools > with similar features? It depends on the context - for example with deployment tooling, companies may have pre-existing DC orchestration tools, and having an OpenStack deployment tool in $CONFIGMGMT can help people run quicker. Having 2 image stores, not so much, as there is then confusion about what tool to deploy, or deploy both, and any issues may need to have 2 different solutions, or at least 2 patches. There may be circumstances where 2 tools make sense (e.g. Messaging as a Service did have 2 projects, but they served 2 different use cases, so it made sense) > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? For deployment tooling - having the one true way to deploy OpenStack would have made a lot of the work I have done in the last 4 or 5 years redundant :) - We would probably be using bash scripts, but not having people re-create the flow of installing OpenStack in $CFGMGMT_TOOL du jour in OpenStack may have focused resources. 
Or just forced deployment teams out of OpenStack to somewhere else. OS packaging is definitely a good thing for duplication. I don't think we have many service project areas with duplication that would not have failed some of the stricter "culture fit" discussions we have now had in the post Big Tent OpenStack. We would have probably blocked things like Octavia (as Neutron LBaaS existed), Designate (as Nova DNS was a thing back then), Monasca, Neutron itself (as Nova Network was a thing). > Doug > > [1] https://governance.openstack.org/tc/reference/new-projects-requirements.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From gr at ham.ie Mon Apr 23 16:23:20 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 17:23:20 +0100 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524500051-sup-3248@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> <1524495441-sup-7739@lrrr.local> <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> <1524500051-sup-3248@lrrr.local> Message-ID: <565b9dd2-19be-f099-d9af-4d1cc4658fb1@ham.ie> On 23/04/18 17:14, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-04-23 16:27:04 +0100: >> On 23/04/18 16:04, Doug Hellmann wrote: >>> Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: >>>> On 20/04/18 22:26, Doug Hellmann wrote: >>>> >>>>> Without letting the conversation devolve too much into a discussion >>>>> of Adjutant's case, please talk a little about how you would evaluate >>>>> a project's application in general. What sorts of things do you >>>>> consider when deciding whether a project "aligns with the OpenStack >>>>> Mission," for example? >>>>> >>>>> Doug >>>>> >>>> >>>> For me, the most important thing for a project that wants to join is >>>> that they act like "one of us" - what I think ttx referred to as "culture >>>> fit". >>>> >>>> This is fairly wide ranging, but includes things like: >>>> >>>> * Do they use the PTIs[0] >>>> * Do they use gerrit, or if they use something else, do they follow >>>> the same review styles and mechanisms? >>>> * Are they on IRC? >>>> * Do they use the mailing list for long running discussion? >>>> ** If a project doesn't have long running discussions and as a result >>>> does not have ML activity, I would see that as OK - my problem >>>> would be with a team that ran their own list. >>>> * Do they use standard devstack / -infra jobs for testing? >>>> * Do they use the standard common libraries (where appropriate)? >>>> >>>> If a project fails this test (and would have been accepted as something >>>> that drives the mission), I see no issue with the TC trying to bring >>>> them into the fold by helping them work like one of us, and accepting >>>> them when they have shown that they are willing to change how they >>>> do things. >>>> >>>> For the "product" fit, it is a lot more subjective. We used to have a >>>> system (pre Big Tent) where the TC picked "winners" in a space and >>>> blessed one project as the way to do $thing. 
Then, in big tent we >>>> started to not pick winners, and allow anyone who was one of us, and >>>> had a "cloud" application. >>>> >>>> Recently, we have moved back to seeing if a project overlaps with >>>> another. The real test for this (from my viewpoint) is if the >>>> perceived overlap is an area that the team that is currently in >>>> OpenStack is interested in pursuing - if not we should default to >>>> adding the project. >>> >>> We've always considered overlap to some degree, but it has come up >>> more explicitly in a few recent discussions because of the nature >>> of the projects. Please see the other thread on this topic [1]. >>> >>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html >>> >>>> Personally, if the project adds something that we currently lack, >>>> and have lacked for a long time (not to get too close to the current >>>> discussion), or tries to reduce the amount of extra tooling that >>>> deployers currently write in house, we should welcome them. >>>> >>>> The acid test for me is "How would I use this?" or "Have I written >>>> tooling or worked somewhere that wrote tooling to do this?" >>>> >>>> If the answer is yes, it is a good indication that they fit with the >>>> mission. >>> >>> This feels like the ideal open source approach, in which contributors >>> are "scratching their own itch." How can we encourage more deployers >>> and users of OpenStack to consider contributing their customization >>> and integration projects? Should we? >> >> I think a lot of our major users are good citizens and are doing some or >> all of this work in the open - we just have a discoverability issue. >> >> A lot of the benefit of joining the foundation as a project, is the >> increased visibility gained from it, so that others who are deploying >> OpenStack in a similar layout can find a project and use it. >> >> I think at the very least we should find a way to promote them (this >> is where constellations could really help, as we could add non member >> projects to constellations where they are appropriate. > > Do you foresee any issues with adding unofficial projects to the > constellations? > > Doug No (from my viewpoint anyway) - I think they will be important to include in any true collection of use cases - for example we definitely will want to have a "PaaS" Constellation that includes things like Kubernetes, Cloud Foundry and / or OpenShift. We need to show how OpenStack works in the entire open source infrastructure community and not just how it works internally - and showing how you can use other open source software components *with* OpenStack is vital for that. - Graham >> >>> Doug >>> >>>> >>>> - Graham >>>> >>>> 0 - >>>> https://governance.openstack.org/tc/reference/project-testing-interface.html >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL:
From Tim.Bell at cern.ch Mon Apr 23 16:28:13 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Mon, 23 Apr 2018 16:28:13 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> Message-ID: One of the challenges in the academic sector is the time from lightbulb moment to code commit. Many of the academic resource opportunities are short term (e.g. PhDs, student projects, government funded projects) and there is a latency in the current system to onboard, get the appropriate recognition in the community (such as by reviewing other changes) and then get the code committed. This is a particular problem for the larger projects where the patch is not in one of the project goal areas for that release. Not sure what the solution is but I would agree that there is a significant opportunity. Tim -----Original Message----- From: Thierry Carrez Organization: OpenStack Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 23 April 2018 at 18:11 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? > Where else should we be looking for contributors? Like other large open source projects, OpenStack has a lot of visibility in the academic sector. I feel like we are less successful than others in attracting contributions from there, and we could do a lot better by engaging with them more directly. -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
From kgiusti at gmail.com Mon Apr 23 16:38:57 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 23 Apr 2018 12:38:57 -0400 Subject: [openstack-dev] Catching missing or stale package requirements in requirements.txt Message-ID: Hi Folks, Some of the Oslo libraries have a tox test that does the above [0]. This ensures that our requirements.txt file is kept current with the code. This test uses a tool called pip_check_reqs [1]. Unfortunately this tool is not compatible with pip version 10, and it appears as if the github project hasn't seen any development activity in the last 2 years. Seems unlikely that pip 10 support will be added anytime soon. Can anyone recommend a suitable alternative to the pip_check_reqs tool? Thanks in advance, [0] https://git.openstack.org/cgit/openstack/oslo.messaging/tree/tox.ini#n116 [1] https://github.com/r1chardj0n3s/pip-check-reqs -- Ken Giusti (kgiusti at gmail.com)
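On the pip-check-reqs question above: until that tool grows pip 10 support, a rough stand-in for its "requirement listed but never imported" check can be scripted against the ast module and pkg_resources directly. This is a minimal sketch only -- the helper names are invented here, and it assumes a plain requirements.txt (no -r/-e lines) whose entries are installed locally and ship a top_level.txt, which most but not all distributions do:

    import ast
    import os
    import sys

    import pkg_resources


    def imported_top_level_names(source_dir):
        # Walk the tree, collect the top-level module name of every import.
        found = set()
        for root, _dirs, files in os.walk(source_dir):
            for filename in files:
                if not filename.endswith('.py'):
                    continue
                path = os.path.join(root, filename)
                with open(path) as handle:
                    try:
                        tree = ast.parse(handle.read(), filename=path)
                    except SyntaxError:
                        continue
                for node in ast.walk(tree):
                    if isinstance(node, ast.Import):
                        found.update(a.name.split('.')[0] for a in node.names)
                    elif isinstance(node, ast.ImportFrom):
                        if node.module and not node.level:  # skip relative
                            found.add(node.module.split('.')[0])
        return found


    def requirement_top_level_names(requirements_path):
        # Map each requirement to the top-level modules its dist provides.
        mapping = {}
        with open(requirements_path) as handle:
            for line in handle:
                line = line.split('#')[0].strip()
                if not line:
                    continue
                name = pkg_resources.Requirement.parse(line).project_name
                try:
                    dist = pkg_resources.get_distribution(name)
                except pkg_resources.DistributionNotFound:
                    continue
                if dist.has_metadata('top_level.txt'):
                    tops = set(dist.get_metadata('top_level.txt').split())
                else:
                    tops = {name.replace('-', '_')}
                mapping[name] = tops
        return mapping


    if __name__ == '__main__':
        used = imported_top_level_names(sys.argv[1])
        reqs = requirement_top_level_names(sys.argv[2])
        for name, tops in sorted(reqs.items()):
            if tops and not tops & used:
                print('possibly unused requirement: %s' % name)

This only flags unused requirements; it does none of the missing-requirement or version checking pip-check-reqs performs, so treat it as a stop-gap rather than a replacement.

From cdent+os at anticdent.org Mon Apr 23 16:41:07 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 17:41:07 +0100 (BST) Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: On Mon, 23 Apr 2018, Doug Hellmann wrote: > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? Size, isolation, and perfectionism.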
Size in at least three dimensions:

* the entire community
* individual projects in terms of humans and scope
* individual projects in terms of lines of code (per repo and per file)

Isolation in at least two dimensions:

* For someone who is not "of OpenStack", OpenStack is kind of "over there, doing its own thing". Non-OpenStack colleagues wonder about the tempestuous teapot I'm in.
* Individual members of project teams sometimes self-identify as members of that team, not of OpenStack.

Perfectionism:

* In at least some project teams (see, look at me identifying and isolating project teams) proposed specs and code can be nitpicked to death and forward progress delayed while every edge case is considered. We should strive to iterate more.
* At the same time there is too strong an attachment to master needing to be perfect. A bug on master is an invitation addressed to a potential new contributor.

> Which of those would you change, and how? I think we've started making a more conscious effort on all three of these areas. We talk more often about incomplete bug fixes being adopted by experienced contributors. Decomposing repositories to harden contractual and social boundaries is increasingly common. Actively working with other communities (notably Kubernetes) is on the rise. But there is plenty more to do in each of these areas. > Where else should we be looking for contributors? I agree with Thierry that academia is a good place to look and that we made a mistake when we highlighted and enforced an artificial boundary between developers and operators. Ideally many features and bug fixes would come from people who _use_ OpenStack as their day job. The people who think of _developing_ OpenStack as their day job should be most focused on enabling those other people and cleaning up and refining what already exists. I also think that we need to figure out, if possible, some way to make OpenStack relevant and interesting to individuals who are technically curious enough to want to try playing with their own mini cloud at home. If we can make OpenStack accessible to amateurs (not amateurish!) there's a big world of good input to come. Something as one stop, integrated in the documentation and official seeming as minikube is for Kubernetes. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
From zbitter at redhat.com Mon Apr 23 16:49:01 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 23 Apr 2018 12:49:01 -0400 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: <64475f3e-a8c7-98ca-056e-da7878b640fb@redhat.com> On 23/04/18 09:50, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > In the course of evaluating new projects that have asked to join > as official members of the OpenStack community, we often discuss > whether the feature set of the project overlaps too much with other > existing projects. This came up within the last year during Glare's > application, and more recently as part of the Adjutant application. > > Our current policy regarding Open Development is that a project > should cooperate with existing projects "rather than gratuitously > competing or reinventing the wheel."
[1] The flexibility provided > by the use of the term "gratuitously" has allowed us to support > multiple solutions in the deployment and telemetry problem spaces. > At the same time it has left us with questions about how (and > whether) the community would be able to replace the implementation > of any given component with a new set of technologies by "starting > from scratch". > > Where do you draw the line at "gratuitous"? I'd want to see sound technical reasons for taking a different approach that addresses, or partially addresses, the same problem. If people are starting their own projects to avoid having to work with the existing team then I'd label that gratuitous. Evidence of co-operation with the existing project, and the provision of migration paths for existing operators and users, would be points in favour of a project wanting to go down this route. > What benefits and drawbacks do you see in supporting multiple tools > with similar features? > > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? We used to have that rule, of course, and the primary result was that some folks who were for all intents and purposes part of our community got left out in the cold, officially speaking, only because some other folks got there first. I don't think it even contributed much to interoperability - Monasca is the project that comes to mind, and people who wanted to run that instead of Ceilometer did so regardless of the official status. On the other hand, the Telemetry projects have completely transformed themselves since the days when people used to complain about the scalability of Ceilometer, and they did so while maintaining an orderly deprecation and migration of the APIs. Perhaps if we'd doubled down on that path we'd have ended up with less fragmentation for the same benefit? It's really hard to say, and I think that is perhaps the point. None of us have all that much confidence in our ability to predict the future, so we have chosen to err on the side of not picking winners. cheers, Zane.
From cdent+os at anticdent.org Mon Apr 23 16:50:31 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 17:50:31 +0100 (BST) Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> Message-ID: On Mon, 23 Apr 2018, Tim Bell wrote: > One of the challenges in the academic sector is the time from > lightbulb moment to code commit. Many of the academic resource > opportunities are short term (e.g. PhDs, student projects, > government funded projects) and there is a latency in current > system to onboard, get the appropriate recognition in the > community (such as by reviewing other changes) and then get the > code committed. This is a particular problem for the larger > projects where the patch is not in one of the project goal areas > for that release. This. Many times over this. The latency that a casual contributor may experience when interacting with one of the larger OpenStack projects is discouraging and a significant impedance mismatch for the contributor. One thing that might help is what I implied in one of my responses elsewhere in Doug's collection of questions: Professional OpenStack developers could be oriented towards enabling and attending to casual contributors more than addressing feature development.
This is a large shift in how OpenStack is done, but makes sense in a world where we are trying to maintain an existing and fairly mature thing: We need maintainers. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
From rico.lin.guanyu at gmail.com Mon Apr 23 16:54:14 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 24 Apr 2018 00:54:14 +0800 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID:

* What aspects of our policies or culture make contributing to OpenStack more difficult than contributing to other open source projects?

To fully understand the map of OpenStack services is a huge challenge, especially for newly joined developers. And project teams might not provide new contributors guidelines that help them become part of the team more quickly. Finally, the format of WGs/SIGs/teams might confuse contributors.

* Which of those would you change, and how?

IMO providing a clear landscape will help give people a better view of the whole map, and a better idea of how to fit it into their plans without spending too much time finding where to contribute. Also, we need to provide better ways to communicate with new contributors, to at least make them feel welcome. Maybe we can try to add this to the PTL's/TC's (or other possible positions') duties, and provide better guidelines to newly joined contributors who seem to have no clue what the project has been working on or where the project needs help. Only people who really understand that project can provide such judgment, and it seems like a duty to provide guidelines to others (aka help the people working with you). Finally, I personally think it's a good idea to have SIGs in OpenStack, but I think we need to provide technical guidelines to SIGs, so they can make clear decisions on what their mission is, what resources they can use, and how they might be able to use them. A clear vision makes clear actions.

* Where else should we be looking for contributors?

IMO we actually have a bunch of new contributors around OpenStack (mostly for nova and neutron of course) trying to figure out what they can/should do. Also possibly from other projects which might be doing overlapping jobs. Also, forming SIGs might be a more productive way to collect contributors.

May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin

2018-04-23 22:06 GMT+08:00 Doug Hellmann : > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? > > Where else should we be looking for contributors?
> > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL:
From rico.lin.guanyu at gmail.com Mon Apr 23 17:00:10 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 24 Apr 2018 01:00:10 +0800 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID:

Thanks, Doug, for bringing out this campaign question.

I think we have a start now with providing a decent map to show services in OpenStack and fill it in with projects. What would be nice is to ask projects to search through the map (with a brief introduction of services) when they're registering. Preventing overlap from the very beginning seems the most ideal, which might also mean it's actually our responsibility to look into what exactly a project aims to do and what kind of features it will provide when we allow people to register a project. We can also let projects know that we encourage new ideas, but we do not encourage providing overlapping features just because you think an existing service is bad and you don't want to fix it (IMO encouraging people to point out problems and even try to fix them is very important when talking about continuing efforts). And giving credit instead of warnings might work better.

* How (and whether) the community would be able to replace the implementation of any given component with a new set of technologies by "starting from scratch"

If we try to make such an action a long-term community goal, it might be possible to say we're able to do it (if the new technology really matters, like containerizing), and that might be even better than waiting for people to pick up the job and keep asking them `are we there yet?`. We have to be really careful not to change the behavior of services or cause a huge burden for developers.

* Where do you draw the line at "gratuitous"?

When a project has a more than 80% chance of being dead or without a maintainer, and is a purely overlapping effort.

* What benefits and drawbacks do you see in supporting multiple tools with similar features?

It's good, and it allows people from multiple tools to help construct the bridge to us together. What concerns me is that we should try to decide on a pattern and make it a success, instead of letting parallel jobs work on similar features, so we can have a preview version of all the other paths. And once we have made our own path a success, we can even look back and provide features plus bug fixes to other tools. That brings a question back: `what are users using the most?`

* How would our community be different, in positive and negative ways, if we were more strict about avoiding such overlap?

It would let us concentrate our development energy on features, and also prevent a lack of diversity/ideas/activity in those projects we promise to provide guidance to when we allow them to stay under TC governance. What we should also try to prevent is the case where there is overlap but the existing project didn't provide fair communication, or closed its mind to new features/fixes. For that we should strengthen/change part of our TC resolutions, because otherwise it might just lead to the negative outcome of people quitting on providing new innovation.

May The Force of OpenStack Be With You, *Rico Lin* irc: ricolin

2018-04-23 21:50 GMT+08:00 Doug Hellmann : > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > In the course of evaluating new projects that have asked to join > as official members of the OpenStack community, we often discuss > whether the feature set of the project overlaps too much with other > existing projects. This came up within the last year during Glare's > application, and more recently as part of the Adjutant application. > > Our current policy regarding Open Development is that a project > should cooperate with existing projects "rather than gratuitously > competing or reinventing the wheel." [1] The flexibility provided > by the use of the term "gratuitously" has allowed us to support > multiple solutions in the deployment and telemetry problem spaces. > At the same time it has left us with questions about how (and > whether) the community would be able to replace the implementation > of any given component with a new set of technologies by "starting > from scratch". > > Where do you draw the line at "gratuitous"? > > What benefits and drawbacks do you see in supporting multiple tools > with similar features? > > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? > > Doug > > [1] https://governance.openstack.org/tc/reference/new-projects- > requirements.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- -------------- next part -------------- An HTML attachment was scrubbed... URL:
From fungi at yuggoth.org Mon Apr 23 17:02:07 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 23 Apr 2018 17:02:07 +0000 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> Message-ID: <20180423170207.sdiap5m6dtnb6v5p@yuggoth.org> On 2018-04-23 12:02:14 -0400 (-0400), Zane Bitter wrote: [...] > The main thing I will be looking out for in those cases is that > the project followed the Four Opens *from the beginning*. Projects > that start from a code dump are much less likely to attract other > contributors in my view. Open Source is not a verb. [...] Not to add more noise, but I wanted to mention that I _really_ like this point in particular. We've definitely seen plenty of applications for inclusion which started out as an internally-developed tool behind closed doors somewhere. When they get "flung over the wall" to the community they tend to flounder in their attempts to gain traction as properly open projects in their own right. I don't think we do a good enough job at highlighting this risk (yet), and will remember to point it out more often when I spot it in the future. Thanks! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
From cdent+os at anticdent.org Mon Apr 23 17:11:16 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Mon, 23 Apr 2018 18:11:16 +0100 (BST) Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524495076-sup-1855@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <1524495076-sup-1855@lrrr.local> Message-ID: On Mon, 23 Apr 2018, Doug Hellmann wrote: > Excerpts from Chris Dent's message of 2018-04-23 12:09:42 +0100: >> I'd like to see us work harder to refine the long term goals we are >> trying to satisfy with the projects that make up OpenStack. This >> will require us to continue the never-ending discussion about >> whether OpenStack is a "Software Defined Infrastructure Framework" >> or a "Cloud Solution" (plenty of people talk the latter, but plenty >> of other people are spending energy on the former). And then > > Do you consider those two approaches to be mutually exclusive? No, but I do think how we balance and think about them helps us understand how to make progress. > In the past our community has had trouble defining "infrastructure" > in a way that satisfies everyone. Some people stop at "allocating > what you need to run a VM" while others consider it closer to > "everything you need to run an application". > > How do you define "infrastructure"? In this context I'm thinking of infrastructure in terms of plumbing, using the plumbing and porcelain metaphor sometimes associated with git: https://git-scm.com/book/en/v2/Git-Internals-Plumbing-and-Porcelain So in your terms "allocating what you need to run a VM" but with some tweaks. In Zane's response on this topic, he talks a lot about the applications that are using the "cloud" and some of the additional tooling that is needed to satisfy that (some of which might be considered porcelain). Since my introduction to OpenStack that's what I've hoped we are trying to build. An Open Source "Cloud Solution" that enables a healthy application environment that is a clear and complete alternative to the big three clouds. However, over the years it has become increasingly evident that a great deal of our energy is spent working at a different angle to enable software defined data centers (including data centers that are decomposed to the edge) that are hyper-aware of hardware and networks and making that hardware available in the most cost effective way possible. That's a useful thing to do but our attention to it is not well aligned with building elastic web services. (In this particular case I'm speaking from experience perhaps overly informed by Nova, where so much work is devoted to NFV-related use cases. To such an extent that people joke about hardware defined software.) While I don't think we need to say that we are doing one thing or the other, we may make some decisions easier by being more willing to identify which domain or perspective we are thinking about in any given decision.
-- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From doug at doughellmann.com Mon Apr 23 17:14:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 13:14:08 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <565b9dd2-19be-f099-d9af-4d1cc4658fb1@ham.ie> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> <1524495441-sup-7739@lrrr.local> <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> <1524500051-sup-3248@lrrr.local> <565b9dd2-19be-f099-d9af-4d1cc4658fb1@ham.ie> Message-ID: <1524503587-sup-2922@lrrr.local> Excerpts from Graham Hayes's message of 2018-04-23 17:23:20 +0100: > On 23/04/18 17:14, Doug Hellmann wrote: > > Excerpts from Graham Hayes's message of 2018-04-23 16:27:04 +0100: > >> On 23/04/18 16:04, Doug Hellmann wrote: > >>> Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: > >>>> 7On 20/04/18 22:26, Doug Hellmann wrote: > >>>> > >>>>> Without letting the conversation devolve too much into a discussion > >>>>> of Adjutant's case, please talk a little about how you would evaluate > >>>>> a project's application in general. What sorts of things do you > >>>>> consider when deciding whether a project "aligns with the OpenStack > >>>>> Mission," for example? > >>>>> > >>>>> Doug > >>>>> > >>>> > >>>> For me, the most important thing for a project that wants to join is > >>>> that they act like "one of us" - what I think ttx refered to as "culture > >>>> fit". > >>>> > >>>> This is fairly wide ranging, but includes things like: > >>>> > >>>> * Do they use the PTIs[0] > >>>> * Do they use gerrit, or if they use something else, do they follow > >>>> the same review styles and mechanisms? > >>>> * Are they on IRC? > >>>> * Do they use the mailing list for long running discussion? > >>>> ** If a project doesn't have long running discussions and as a result > >>>> does not have ML activity, I would see that as OK - my problem > >>>> would be with a team that ran their own list. > >>>> * Do they use standard devstack / -infra jobs for testing? > >>>> * Do they use the standard common libraries (where appropriate)? > >>>> > >>>> If a project fails this test (and would have been accepted as something > >>>> that drives the mission), I see no issue with the TC trying to bring > >>>> them into the fold by helping them work like one of us, and accepting > >>>> them when they have shown that they are willing to change how they > >>>> do things. > >>>> > >>>> For the "product" fit, it is a lot more subjective. We used to have a > >>>> system (pre Big Tent) where the TC picked "winners" in a space and > >>>> blessed one project as the way to do $thing. Then, in big tent we > >>>> started to not pick winners, and allow anyone who was one of us, and > >>>> had a "cloud" application. > >>>> > >>>> Recently, we have moved back to seeing if a project overlaps with > >>>> another. The real test for this (from my viewpoint) is if the > >>>> perceived overlap is an area that the team that is currently in > >>>> OpenStack is interested in pursuing - if not we should default to > >>>> adding the project. > >>> > >>> We've always considered overlap to some degree, but it has come up > >>> more explicitly in a few recent discussions because of the nature > >>> of the projects. Please see the other thread on this topic [1]. 
> >>> > >>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > >>> > >>>> Personally, if the project adds something that we currently lack, > >>>> and have lacked for a long time (not to get too close to the current > >>>> discussion), or tries to reduce the amount of extra tooling that > >>>> deployers currently write in house, we should welcome them. > >>>> > >>>> The acid test for me is "How would I use this?" or "Have I written > >>>> tooling or worked somewhere that wrote tooling to do this?" > >>>> > >>>> If the answer is yes, it is a good indication that they fit with the > >>>> mission. > >>> > >>> This feels like the ideal open source approach, in which contributors > >>> are "scratching their own itch." How can we encourage more deployers > >>> and users of OpenStack to consider contributing their customization > >>> and integration projects? Should we? > >> > >> I think a lot of our major users are good citizens and are doing some or > >> all of this work in the open - we just have a discoverability issue. > >> > >> A lot of the benefit of joining the foundation as a project, is the > >> increased visibility gained from it, so that others who are deploying > >> OpenStack in a similar layout can find a project and use it. > >> > >> I think at the very least we should find a way to promote them (this > >> is where constellations could really help, as we could add non member > >> projects to constellations where they are appropriate. > > > > Do you foresee any issues with adding unofficial projects to the > > constellations? > > > > Doug > > No (from my viewpoint anyway) - I think they will be important to > include in any true collection of use cases - for example we definitely > will want to have a "PaaS" Constellation that includes things like > Kubernetes, Cloud Foundry and / or OpenShift. We need to show how > OpenStack works in the entire open source infrastructure community > and not just how it works internally - and showing how you can use other > open source software components *with* OpenStack is vital for that. Would you make a distinction between things that have their own community like kubernetes, and things that might consider themselves on track to be part of the OpenStack community one day? Doug From doug at doughellmann.com Mon Apr 23 17:14:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 13:14:59 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <20180423170207.sdiap5m6dtnb6v5p@yuggoth.org> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> <20180423170207.sdiap5m6dtnb6v5p@yuggoth.org> Message-ID: <1524503653-sup-1242@lrrr.local> Excerpts from Jeremy Stanley's message of 2018-04-23 17:02:07 +0000: > On 2018-04-23 12:02:14 -0400 (-0400), Zane Bitter wrote: > [...] > > The main thing I will be looking out for in those cases is that > > the project followed the Four Opens *from the beginning*. Projects > > that start from a code dump are much less likely to attract other > > contributors in my view. Open Source is not a verb. > [...] > > Not to add more noise, but I wanted to mention that I _really_ like > this point in particular. We've definitely seen plenty of > applications for inclusion which started out as an > internally-developed tool behind closed doors somewhere. 
When they > get "flung over the wall" to the community they tend to flounder in > their attempts to gain traction as properly open projects in their > own right. I don't think we do a good enough job at highlighting > this risk (yet), and will remember to point it out more often when I > spot it in the future. Thanks! I hope that no one considers any of this "noise," so thank you for highlighting that point. Doug
From hjensas at redhat.com Mon Apr 23 17:16:42 2018 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Mon, 23 Apr 2018 19:16:42 +0200 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. In-Reply-To: References: <1524142764.4383.83.camel@redhat.com> Message-ID: <1524503802.4383.149.camel@redhat.com> On Fri, 2018-04-20 at 14:44 +0200, Thomas Herve wrote: > On Thu, Apr 19, 2018 at 2:59 PM, Harald Jensås > wrote: > > Hi, > > Hi, thanks for sending this. Responses inline. > > > When configuring TripleO deployments with nodes on routed ctlplane > > networks we need to pass some per-network properties to the > > NetworkConfig resource[1] in THT. We get the ``ControlPlaneIp`` > > property using get_attr, but the NIC configs need a couple of more > > parameters[2], for example: ``ControlPlaneSubnetCidr``, > > ``ControlPlaneDefaultRoute`` and ``DnsServers``. > > > > Since queens these templates are jinja templated, to generate > > things > > from network_data.yaml. When using routed ctlplane networks, > > the > > parameters ``ControlPlaneSubnetCidr`` and > > ``ControlPlaneDefaultRoute`` > > will be different. So we need to use static per-role > > Net::SoftwareConfig templates, and add parameters such as > > ``ControlPlaneDefaultRouteLeafX``. > > > > The values the user needs to pass in for these are already available > > in > > the neutron ctlplane network configuration on the undercloud. So > > ideally we should not need to ask the user to provide them in > > parameter_defaults, we should resolve the correct values > > automatically. > > To make it clear, what you want to prevent is the need to add more > keys in network_data.yaml? > > As those had to be provided at some point, I wonder if tripleo can't > find a way to pass them again on the overcloud deploy. > No, the networks defined in network_data.yaml is fine, that is the data used to create the neutron stuff so passing the data from there is already in place to some extent. But, the ctlplane network is not defined in network_data.yaml. > Inspecting neutron is an elegant solution, though. > > > : We can get the port ID using get_attr: > > > > {get_attr: [<server resource>, addresses, <network name>, 0, port]} > > > > : From there outside of heat we can get the subnet_id: > > > > openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \ > > -f yaml -c fixed_ips > > > > fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f- > > 4a73- > > 97ad-4e9822d201e5' > > > > : And finally we can get the gateway_ip and cidr of the subnet: > > > > openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \ > > -f yaml -c gateway_ip -c cidr > > > > cidr: 172.20.0.0/26 > > gateway_ip: 172.20.0.62 > > > > > > The problem is getting there using heat ... > > a couple of ideas: > > > > a) Use heat's ``external_resource`` to create a port resource, > > and then an external subnet resource. Then get the data > > from the external resources.
We probably would have to make > > it possible for a ``external_resource`` depend on the server > > resource, and verify that these resource have the required > > attributes. > I believe that's a relatively easy fix. It's unclear why we didn't > allow that in the first place, probably because we were missing a use > case, but it seems valuable here. > > b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as > > well probably) to include the data. > > > > If we do this we should probably aim to be in parity with > > what is made available to clients getting the configuration > > from dhcp. (mtu, dns_domain, dns_servers, prefixlen, > > gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode > > etc.) > I'm with you on exposing more neutron data to the Port resource. It > can be complicated because some of them are implementation specific, > but we can look into those. > > I don't think adding them directly to the Server resource makes a ton > of sense though. > In tripleo, the ctlplane interface is an implicit port created by the server resource. :( (Attempts were made to change this, but upgrades wouldn't work.) So the server resource is where I would find it most useful. (Adding attributes to the port resource, and then using external resource for the implicit server ports may be a compromise. Nested dependencies for external_resources might be hard?) > > c) Create a new heat function to read properties of any > > openstack resource, without having to make use of the > > external_resource in heat. > It's an interesting idea, but I think it would look a lot like what > external resources are supposed to be. > > I see a few changes: > * Allow external resource to depend on other resources > * Expose more port attributes > * Expose more subnet attributes > > If you can list the attributes you care about that'd be great. > Guess what I envision is a client_config attribute, a map with data useful to configure a network interface on the client. (I put * on the ones I believe could be useful for TripleO)

* /v2.0/networks/{network_id}/mtu
/v2.0/networks/{network_id}/dns_domain
* /v2.0/subnets/{subnet_id}/dns_nameservers
* /v2.0/subnets/{subnet_id}/host_routes
/v2.0/subnets/{subnet_id}/ip_version
* /v2.0/subnets/{subnet_id}/gateway_ip
* /v2.0/subnets/{subnet_id}/cidr
* /v2.0/subnets/{subnet_id}/ipv6_address_mode
* /v2.0/subnets/{subnet_id}/ipv6_ra_mode
/v2.0/ports/{port_id}/description - Why not?
/v2.0/ports/{port_id}/dns_assignment
/v2.0/ports/{port_id}/dns_domain
/v2.0/ports/{port_id}/dns_name
* /v2.0/ports/{port_id}/fixed_ips - We have this already
/v2.0/ports/{port_id}/name - Why not?

I've added Dan Sneddon on CC as well. Guess there is the question of whether TripleO will want to continue using heat, neutron, nova etc. // Harald
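To make option a) concrete: a minimal sketch of what the adopted-resource approach could look like in a HOT template, assuming the "external resources may depend on other resources" fix discussed above lands. Both parameter names here are invented for illustration, and until that fix exists the IDs would have to be looked up out of band rather than resolved with get_attr:

    heat_template_version: queens

    parameters:
      ctlplane_port_id:
        type: string
        description: ID of the pre-existing ctlplane port on the node
      ctlplane_subnet_id:
        type: string
        description: ID of the subnet that port has its fixed IP on

    resources:
      # external_id adopts existing objects without managing their
      # lifecycle, so deleting the stack leaves them untouched.
      ctlplane_port:
        type: OS::Neutron::Port
        external_id: {get_param: ctlplane_port_id}
      ctlplane_subnet:
        type: OS::Neutron::Subnet
        external_id: {get_param: ctlplane_subnet_id}

    outputs:
      control_plane_default_route:
        value: {get_attr: [ctlplane_subnet, gateway_ip]}
      control_plane_subnet_cidr:
        value: {get_attr: [ctlplane_subnet, cidr]}

The subnet resource already exposes gateway_ip and cidr as attributes, which map straight onto ``ControlPlaneDefaultRoute`` and ``ControlPlaneSubnetCidr``; that is what makes the external resource route attractive here.

From doug at doughellmann.com Mon Apr 23 17:18:22 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 13:18:22 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> Message-ID: <1524503764-sup-3631@lrrr.local> Excerpts from Chris Dent's message of 2018-04-23 17:50:31 +0100: > On Mon, 23 Apr 2018, Tim Bell wrote: > > > One of the challenges in the academic sector is the time from > > lightbulb moment to code commit. Many of the academic resource > > opportunities are short term (e.g.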
PhDs, student projects, > > government funded projects) and there is a latency in current > > system to onboard, get the appropriate recognition in the > > community (such as by reviewing other changes) and then get the > > code committed. This is a particular problem for the larger > > projects where the patch is not in one of the project goal areas > > for that release. > > This. Many times over this. > > The latency that a casual contributor may experience when > interacting with one of the larger OpenStack projects is > discouraging and a significant impedance mismatch for the > contributor. > > One thing that might help is what I implied in one of my responses > elsewhere in Doug's collection of questions: Professional OpenStack > developers could be oriented towards enabling and attending to > casual contributors more than addressing feature development. This > is a large shift in how OpenStack is done, but makes sense in a > world where we are trying to maintain an existing and fairly mature > thing: We need maintainers. I would like for us to collect some more data about what efforts teams are making with encouraging new contributors, and what seems to be working or not. In the past we've done pretty well at finding new techniques by experimenting within one team and then adapting the results to scale them out to other teams. Does anyone have any examples of things that we ought to be trying more of? Doug
From fungi at yuggoth.org Mon Apr 23 17:22:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 23 Apr 2018 17:22:05 +0000 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524503653-sup-1242@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> <20180423170207.sdiap5m6dtnb6v5p@yuggoth.org> <1524503653-sup-1242@lrrr.local> Message-ID: <20180423172205.l4562vgurpdv753a@yuggoth.org> On 2018-04-23 13:14:59 -0400 (-0400), Doug Hellmann wrote: [...] > I hope that no one considers any of this "noise," so thank you for > highlighting that point. Oh, yes I didn't mean to imply that any of the responses so far have been noise, but I was walking a thin line on it being a hollow sort of "me too" reply. I have added this as one of my suggested bullet points for proposed forum discussion http://forumtopics.openstack.org/cfp/details/122 as well. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:
From doug at doughellmann.com Mon Apr 23 17:26:00 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 13:26:00 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: References: <1524491647-sup-1779@lrrr.local> Message-ID: <1524503909-sup-9816@lrrr.local> Excerpts from Rico Lin's message of 2018-04-24 00:54:14 +0800: > * What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > To fully understand the map of OpenStack services is a huge challenge, > especially for newly joined developers. And project teams might not This is an interesting point that I haven't heard raised before. Typically the number of projects is used as an example of something that is confusing to users or deployers, but can you elaborate on how it is confusing to contributors?
> provide new contributors guidelines that help them become part of the > team more quickly. Finally, the format of WGs/SIGs/teams might confuse > contributors. > > * Which Do you mean because it isn't clear what sort of group to start in order to accomplish something? > of those would you change, and how? > > IMO providing a clear landscape will > help give people a better view of the whole map, and a better idea > of how to fit it into their plans without spending too much time finding > where to contribute. Also, we need to provide better ways to communicate with > new contributors, to at least make them feel welcome. Maybe we can try > to add this to the PTL's/TC's (or other possible positions') duties, and provide better > guidelines to newly joined contributors who seem to have no clue what the > project has been working on or where the project needs help. Only people who What role do you think the First Contact SIG might play in that? > really understand that project can provide such judgment, and it seems like > a duty to provide guidelines to others (aka help the people working with you). > Finally, I personally think it's a good idea to have SIGs in OpenStack, but > I think we need to provide technical guidelines to SIGs, so they can make > clear decisions on what their mission is, what resources they can > use, and how they might be able to use them. A clear vision makes clear > actions. > > * Where else should we be looking for contributors? > > IMO we actually > have a bunch of new contributors around OpenStack (mostly for nova and neutron > of course) trying to figure out what they can/should do. Also possibly > from other projects which might be doing overlapping jobs. Also, forming SIGs > might be a more productive way to collect contributors. > > > > May The Force of OpenStack Be With You, > > *Rico Lin* irc: ricolin > > 2018-04-23 22:06 GMT+08:00 Doug Hellmann : > > > [This is meant to be one of (I hope) several conversation-provoking > > questions directed at prospective TC members to help the community > > understand their positions before considering how to vote in the > > ongoing election.] > > > > Over the last year we have seen some contraction in the number of > > companies and individuals contributing to OpenStack. At the same > > time we have started seeing contributions from other companies and > > individuals. To some degree this contraction and shift in contributor > > base is a natural outcome of changes in OpenStack itself along with > > the rest of the technology industry, but as with any change it > > raises questions about how and whether we can ensure a smooth > > transition to a new steady state. > > > > What aspects of our policies or culture make contributing to OpenStack > > more difficult than contributing to other open source projects? > > > > Which of those would you change, and how? > > > > Where else should we be looking for contributors?
> > > > Doug > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > From gr at ham.ie Mon Apr 23 17:27:21 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 18:27:21 +0100 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524503587-sup-2922@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <941e3fdc-cfe6-5752-3245-c3a138d5bd4b@ham.ie> <1524495441-sup-7739@lrrr.local> <3a9397e9-65ce-10f4-c3b9-897879e346ac@ham.ie> <1524500051-sup-3248@lrrr.local> <565b9dd2-19be-f099-d9af-4d1cc4658fb1@ham.ie> <1524503587-sup-2922@lrrr.local> Message-ID: <024aeb51-6eba-1a5a-2d23-843fa7c86362@ham.ie> On 23/04/18 18:14, Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-04-23 17:23:20 +0100: >> On 23/04/18 17:14, Doug Hellmann wrote: >>> Excerpts from Graham Hayes's message of 2018-04-23 16:27:04 +0100: >>>> On 23/04/18 16:04, Doug Hellmann wrote: >>>>> Excerpts from Graham Hayes's message of 2018-04-23 12:15:24 +0100: >>>>>> 7On 20/04/18 22:26, Doug Hellmann wrote: >>>>>> >>>>>>> Without letting the conversation devolve too much into a discussion >>>>>>> of Adjutant's case, please talk a little about how you would evaluate >>>>>>> a project's application in general. What sorts of things do you >>>>>>> consider when deciding whether a project "aligns with the OpenStack >>>>>>> Mission," for example? >>>>>>> >>>>>>> Doug >>>>>>> >>>>>> >>>>>> For me, the most important thing for a project that wants to join is >>>>>> that they act like "one of us" - what I think ttx refered to as "culture >>>>>> fit". >>>>>> >>>>>> This is fairly wide ranging, but includes things like: >>>>>> >>>>>> * Do they use the PTIs[0] >>>>>> * Do they use gerrit, or if they use something else, do they follow >>>>>> the same review styles and mechanisms? >>>>>> * Are they on IRC? >>>>>> * Do they use the mailing list for long running discussion? >>>>>> ** If a project doesn't have long running discussions and as a result >>>>>> does not have ML activity, I would see that as OK - my problem >>>>>> would be with a team that ran their own list. >>>>>> * Do they use standard devstack / -infra jobs for testing? >>>>>> * Do they use the standard common libraries (where appropriate)? >>>>>> >>>>>> If a project fails this test (and would have been accepted as something >>>>>> that drives the mission), I see no issue with the TC trying to bring >>>>>> them into the fold by helping them work like one of us, and accepting >>>>>> them when they have shown that they are willing to change how they >>>>>> do things. >>>>>> >>>>>> For the "product" fit, it is a lot more subjective. We used to have a >>>>>> system (pre Big Tent) where the TC picked "winners" in a space and >>>>>> blessed one project as the way to do $thing. Then, in big tent we >>>>>> started to not pick winners, and allow anyone who was one of us, and >>>>>> had a "cloud" application. >>>>>> >>>>>> Recently, we have moved back to seeing if a project overlaps with >>>>>> another. The real test for this (from my viewpoint) is if the >>>>>> perceived overlap is an area that the team that is currently in >>>>>> OpenStack is interested in pursuing - if not we should default to >>>>>> adding the project. 
>>>>> >>>>> We've always considered overlap to some degree, but it has come up >>>>> more explicitly in a few recent discussions because of the nature >>>>> of the projects. Please see the other thread on this topic [1]. >>>>> >>>>> [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html >>>>> >>>>>> Personally, if the project adds something that we currently lack, >>>>>> and have lacked for a long time (not to get too close to the current >>>>>> discussion), or tries to reduce the amount of extra tooling that >>>>>> deployers currently write in house, we should welcome them. >>>>>> >>>>>> The acid test for me is "How would I use this?" or "Have I written >>>>>> tooling or worked somewhere that wrote tooling to do this?" >>>>>> >>>>>> If the answer is yes, it is a good indication that they fit with the >>>>>> mission. >>>>> >>>>> This feels like the ideal open source approach, in which contributors >>>>> are "scratching their own itch." How can we encourage more deployers >>>>> and users of OpenStack to consider contributing their customization >>>>> and integration projects? Should we? >>>> >>>> I think a lot of our major users are good citizens and are doing some or >>>> all of this work in the open - we just have a discoverability issue. >>>> >>>> A lot of the benefit of joining the foundation as a project, is the >>>> increased visibility gained from it, so that others who are deploying >>>> OpenStack in a similar layout can find a project and use it. >>>> >>>> I think at the very least we should find a way to promote them (this >>>> is where constellations could really help, as we could add non member >>>> projects to constellations where they are appropriate. >>> >>> Do you foresee any issues with adding unofficial projects to the >>> constellations? >>> >>> Doug >> >> No (from my viewpoint anyway) - I think they will be important to >> include in any true collection of use cases - for example we definitely >> will want to have a "PaaS" Constellation that includes things like >> Kubernetes, Cloud Foundry and / or OpenShift. We need to show how >> OpenStack works in the entire open source infrastructure community >> and not just how it works internally - and showing how you can use other >> open source software components *with* OpenStack is vital for that. > > Would you make a distinction between things that have their own > community like kubernetes, and things that might consider themselves > on track to be part of the OpenStack community one day? > > Doug No - I would hope that one day they will just get a nice mascot image on the constellation when they join the foundation, or a Strategic Focus Area in the future. Obviously, we should be conservative with the software we place on the maps - we shouldn't add $THING days after it is first released / open sourced, but if software is stable and solves a problem (or is actually deployed in an environment) we should consider it, be it part of CNCF, hosted on openstack infra, or somewhere else, as long as it aligns with our principles, and our software. At the risk of derailing this, a more interesting question would be "Should we add something like Ceph to a constellation?" -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL:
From mriedemos at gmail.com Mon Apr 23 17:35:07 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Apr 2018 12:35:07 -0500 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524503764-sup-3631@lrrr.local> References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> <1524503764-sup-3631@lrrr.local> Message-ID: <7832fe8b-606c-fc44-c13e-a1ba2b4181e8@gmail.com> On 4/23/2018 12:18 PM, Doug Hellmann wrote: > I would like for us to collect some more data about what efforts > teams are making with encouraging new contributors, and what seems > to be working or not. In the past we've done pretty well at finding > new techniques by experimenting within one team and then adapting > the results to scale them out to other teams. > > Does anyone have any examples of things that we ought to be trying more > of? The nova team is now trying runways [1] to focus reviews on blueprints which are ready but otherwise don't get the focus from the core team. The certificate validation stuff in there for the Johns Hopkins team is a prime example of how this is putting focus on something that has otherwise been getting deferred since at least the Ocata summit. [1] https://etherpad.openstack.org/p/nova-runways-rocky -- Thanks, Matt
From rico.lin.guanyu at gmail.com Mon Apr 23 17:39:42 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 24 Apr 2018 01:39:42 +0800 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID:

IMO the TC should be as active as possible. Since we use this position to make policies, we should also think hard about how we can broadcast those policies to each developer, to provide guidelines and to get feedback. To reach out to current/potential technical contributors, to sell this technical community, and to communicate with other parts (UC/Board/other communities/ops/users) and bring ideas/actions back to renew our policies or the entire technical community - that will need TCs to jump into local/global events, meetings, and MLs. I believe it's not just about how the TC defines its own duty; most developers believe in the TC's governance. So I think we should definitely be more active and keep trying to renew our goals.

Here's an example: I'm pretty sure a lot of developers from our community don't know exactly what policies we have made, which raises the risk of gaps between how TCs think of OpenStack and what developers present in their local communities. I'm pretty sure such gaps exist in most local communities (where developers learn what current OpenStack looks like) in Asia.

As for the discussion on how to organize TCs to be more active: making a policy for that actually makes sense to me, since all TCs should read through and follow the policies they made. Second, trying to reach out to project teams, the rest of the community, and other communities should be a good start.

2018-04-23 21:27 GMT+08:00 Doug Hellmann : > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.]
> > We frequently have discussions about whether the TC is active enough, > in terms of driving new policies, technology choices, and other > issues that affect the entire community. > > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. > > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. > > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). > > If you think the TC should tend to be less active in driving change > overall, please describe what policies you think the TC should be > taking an active role in implementing. > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Mon Apr 23 17:57:27 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 13:57:27 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <7832fe8b-606c-fc44-c13e-a1ba2b4181e8@gmail.com> References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> <1524503764-sup-3631@lrrr.local> <7832fe8b-606c-fc44-c13e-a1ba2b4181e8@gmail.com> Message-ID: <1524506194-sup-2019@lrrr.local> Excerpts from Matt Riedemann's message of 2018-04-23 12:35:07 -0500: > On 4/23/2018 12:18 PM, Doug Hellmann wrote: > > I would like for us to collect some more data about what efforts > > teams are making with encouraging new contributors, and what seems > > to be working or not. In the past we've done pretty well at finding > > new techniques by experimenting within one team and then adapting > > the results to scale them out to other teams. > > > > Does anyone have any examples of things that we ought to be trying more > > of? > > The nova team is now trying runways [1] for trying to focus reviews on > blueprints which are ready but otherwise don't get the focus from the > core team. > > The certificate validation stuff in there for the John Hopkins team is a > prime example of how this is putting focus on something that has > otherwise been getting deferred since at least the Ocata summit. > > [1] https://etherpad.openstack.org/p/nova-runways-rocky > Great example. It sounds like it's helping, and I look forward to hearing the retrospective at the end of the cycle. 
Doug From rico.lin.guanyu at gmail.com Mon Apr 23 18:00:53 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 24 Apr 2018 02:00:53 +0800 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <1524494405-sup-2731@lrrr.local> References: <1524259233-sup-3003@lrrr.local> <1524494405-sup-2731@lrrr.local> Message-ID: 2018-04-23 22:43 GMT+08:00 Doug Hellmann : > > Excerpts from Rico Lin's message of 2018-04-22 16:50:51 +0800: > > Thanks, Doug, for raising this campaign question > > > > > > Here are my answers: > > > > > > ***How you would evaluate a project's application in general > > > > First I would work through the requirements ([1]) to evaluate projects. > > Since most of the requirements are specific enough. And here's more > > important part, to leave evaluate logs or comments for projects which we > > considered but didn't reach some requirements. It's very important to guide > > projects to cross over requirements (and remember, a `-1` only means we > > trying to help). > > > > Then, I work on questions, like: > > > > `How many user are interesting to/needs the functionality that service > > provided?` > > > > `How active is this project and how's the diversity of contributors?` > > Our current policy is to allow projects with contributors from a small > number of affiliations (even a single employer), under the theory that > bringing a team into the community officially will help them grow by > showing them the benefits of being more diverse and by making it easier > for other community members who have employer restrictions on their open > source work to justify contributing. > > Would you change that policy in any way? I'm fine with the number of developers involved in the project. we should encourage people working on any crazy ideas. But the point is `is that developer active? and will he/she helps others to join that projects or just waiting for others?`. If we can try to put such requirement in policy will be better IMO. Otherwise, we can keep the policy but the diversity of developers might help to reduce chances of that risk. > > > > > `Is this project required cross communities/projects cooperation? If yes, > > how's the development workflows are working between communities/projects?` > > > > And last but is one of the most important questions, > > > > `Is this service aligns with the OpenStack Mission`? (and let's jump to > > next question to answer this part) > > > > > > > > **What sorts of things do you consider when deciding whether a project > > "aligns with the OpenStack Mission," for example?* > > > > I would consider things like: > > > > `Is the project's functionality complete the OpenStack infrastructure map?` > > > > Asking from user requirement and functionality point of view, `how's the > > project(services) will make OpenStack better infrastructure for > > user/operators?` and `how's this functionality provide a better life for > > OpenStack developers?` > > > > `Is the project provides better integration point between communities` > > > > To build a better infrastructure, IMO it's also important to ask if a > > project (service) really help on integration with other communities like > > Kubernetes, OPNFV, CEPH, etc. I think to keep us as an active > > infrastructure to solutions is part of our mission too. 
> > > > `Is it providing functionality which we can integrate with current projects > > or SIG instead?` > > > > In short, we should be gathering our development energy, to really achieve > > the jobs which is exactly why we spend times on trying to find official > > projects and said this is part of our mission to work on. So when new > > projects jump out, it's really important to discuss cross-project `is it > > suitable for projects integrated and join force on specific functionality?` > > (to do this while evaluating a project instead of when it's creating might > > not be the best time to said `please integrate or join forces with other > > teams together`(not even with a smiling face), but it's never too late for > > a non-official/incubating project to consider about this). I really don't > > like to to see any project get higher chances to die just because > > developers chance their developing focus. It's happening when projects are > > all willing to do the functionality, but no communication between(some > > cases, not even now other projects exists), and new/old projects dead, then > > TC needs to spend the time to pick those projects out. So IMO, it's worth > > to spend times to investigate on whether projects can be joined. Or ideally > > to put a resolution said, it's project's obligation to help on this, and > > help other join force to be part of the team. > > Please see my other question about projects with overlapping feature > sets [1]. Done:) > > Doug > > [1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html > > > > > `Can projects provide cross-project gating?` > > > > Do think if it's possible, we should consider this when asking if a service > > aligns with our mission because not breaking rest of infrastructure is part > > of the definition of `to build`. And providing cross-project gate jobs > > seems like a way to go. To stable the integration between projects and > > prevent released a failed feature when other services trying to work on new > > ways and provide no guideline, ML, or solution, just only leave words like > > `this is not part of our function to fix`. > > > > > > > > And finally, > > > > If we can answer all above questions, try to put in with the more accurate > > number (like from user survey), and provides communications it needs, will > > definitely help in finding next official projects. > > > > Also, when the evaluation is done, we should also evaluate the how's these > > evaluation processes, how's guideline working for us? and which questions > > above doesn't make any sense?. > > > > > > [1] > > https://governance.openstack.org/tc/reference/new-projects-requirements.html > > > > > > May The Force of OpenStack Be With You, > > > > *Rico Lin*irc: ricolin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From dsneddon at redhat.com Mon Apr 23 18:09:01 2018 From: dsneddon at redhat.com (Dan Sneddon) Date: Mon, 23 Apr 2018 11:09:01 -0700 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. 
In-Reply-To: <1524503802.4383.149.camel@redhat.com> References: <1524142764.4383.83.camel@redhat.com> <1524503802.4383.149.camel@redhat.com> Message-ID: On Mon, Apr 23, 2018 at 10:16 AM, Harald Jensås wrote: > On Fri, 2018-04-20 at 14:44 +0200, Thomas Herve wrote: > > On Thu, Apr 19, 2018 at 2:59 PM, Harald Jensås > > wrote: > > > Hi, > > > > Hi, thanks for sending this. Responses inline. > > > > > When configuring TripleO deployments with nodes on routed ctlplane > > > networks we need to pass some per-network properties to the > > > NetworkConfig resource[1] in THT. We get the ``ControlPlaneIp`` > > > property using get_attr, but the NIC configs need a couple of more > > > parameters[2], for example: ``ControlPlaneSubnetCidr``, > > > ``ControlPlaneDefaultRoute`` and ``DnsServers``. > > > > > > Since queens these templates are jinja templated, to generate > > > things > > > from from network_data.yaml. When using routed ctlplane networks, > > > the > > > parameters ``ControlPlaneSubnetCidr`` and > > > ``ControlPlaneDefaultRoute`` > > > will be different. So we need to use static per-role > > > Net::SoftwareConfig templates, and add parameters such as > > > ``ControlPlaneDefaultRouteLeafX``. > > > > > > The values the use need to pass in for these are already available > > > in > > > the neutron ctlplane network configuration on the undercloud. So > > > ideally we should not need to ask the user to provide them in > > > parameter_defaults, we should resolve the correct values > > > automatically. > > > > To make it clear, what you want to prevent is the need to add more > > keys in network_data.yaml? > > > > As those had to be provided at some point, I wonder if tripleo can't > > find a way to pass them again on the overcloud deploy. > > > No, the networks defined in network_data.yaml is fine, that is the data > used to create the neutron stuff so passing the data from there is > already in place to some extent. > > But, the ctlplane network is not defined in network_data.yaml. > We could add the ControlPlaneDefaultRoute and ControlPlaneSubnetCidr to network_data.yaml, but this would involve some duplication of configuration data, since those are currently defined in undercloud.conf. A more robust solution might be to generate network_data.yaml from that info in undercloud.conf, but currently we don't modify any files in the tripleo-heat-templates package after it gets installed. > > > Inspecting neutron is an elegant solution, though. > > > > > > > : We can get the port ID using get_attr: > > > > > > {get_attr: [, addresses, , 0, port]} > > > > > > : From there outside of heat we can get the subnet_id: > > > > > > openstack port show 2fb4baf9-45b0-48cb-8249-c09a535b9eda \ > > > -f yaml -c fixed_ips > > > > > > fixed_ips: ip_address='172.20.0.10', subnet_id='2b06ae2e-423f- > > > 4a73- > > > 97ad-4e9822d201e5' > > > > > > : And finally we can get the gateway_ip and cidr of the subnet: > > > > > > openstack subnet show 2b06ae2e-423f-4a73-97ad-4e9822d201e5 \ > > > -f yaml -c gateway_ip -c cidr > > > > > > cidr: 172.20.0.0/26 > > > gateway_ip: 172.20.0.62 > > > > > > > > > The problem is getting there using heat ... > > > a couple of ideas: > > > > > > a) Use heat's ``external_resource`` to create a port resource, > > > and then a external subnet resource. Then get the data > > > from the external resources. We probably would have to make > > > it possible for a ``external_resource`` depend on the server > > > resource, and verify that these resource have the required > > > attributes. 
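For what it's worth, here is roughly how I would expect TripleO to consume that if it lands. This is only a sketch, under the assumption that external resources gain attribute support and can depend on the server resource; the parameter names here are invented:

    heat_template_version: queens

    parameters:
      ctlplane_subnet_id:
        type: string

    resources:
      # adopt the existing undercloud subnet without managing it
      ctlplane_subnet:
        type: OS::Neutron::Subnet
        external_id: {get_param: ctlplane_subnet_id}

    outputs:
      control_plane_default_route:
        # assumes gateway_ip is exposed as an attribute on the
        # external resource
        value: {get_attr: [ctlplane_subnet, gateway_ip]}
      control_plane_subnet_cidr:
        value: {get_attr: [ctlplane_subnet, cidr]}

That would let the NIC configs resolve the per-network values without asking the user to repeat them in parameter_defaults.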
> > I believe that's a relatively easy fix. It's unclear why we didn't
> > allow that in the first place, probably because we were missing a use
> > case, but it seems valuable here.
>
> > > b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as
> > >    well probably) to include the data.
> > >
> > >    If we do this we should probably aim to be in parity with
> > >    what is made available to clients getting the configuration
> > >    from dhcp. (mtu, dns_domain, dns_servers, prefixlen,
> > >    gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode
> > >    etc.)
> >
> > I'm with you on exposing more neutron data to the Port resource. It
> > can be complicated because some of them are implementation specific,
> > but we can look into those.
> >
> > I don't think adding them directly to the Server resource makes a ton
> > of sense though.
>
> In tripleo, the ctlplane interface is an implicit port created by the
> server resource. :( (Attempts were made to change this, but upgrades
> wouldn't work.) So the server resource is where I would find it most
> useful. (Adding attributes to the port resource, and then using an
> external resource for the implicit server ports, may be a compromise.
> Nested dependencies for external_resources might be hard?)

Yes, the port is currently created as part of the Ironic server resource. We would have more flexibility if this were a separate Neutron port, but we need to be able to support upgrades. This would require the ability in Heat to detach the implicit port from the Ironic resource, and attach a Neutron port resource with the same IP to a node without rebuilding the entire node. This isn't currently possible.

> > > c) Create a new heat function to read properties of any
> > >    openstack resource, without having to make use of the
> > >    external_resource in heat.
> >
> > It's an interesting idea, but I think it would look a lot like what
> > external resources are supposed to be.
> >
> > I see a few changes:
> > * Allow external resource to depend on other resources
> > * Expose more port attributes
> > * Expose more subnet attributes
> >
> > If you can list the attributes you care about that'd be great.
>
> Guess what I envision is a client_config attribute, a map with data
> useful to configure a network interface on the client. (I put * on the
> ones I believe could be useful for TripleO)
>
> * /v2.0/networks/{network_id}/mtu
>   /v2.0/networks/{network_id}/dns_domain
> * /v2.0/subnets/{subnet_id}/dns_nameservers
> * /v2.0/subnets/{subnet_id}/host_routes
>   /v2.0/subnets/{subnet_id}/ip_version
> * /v2.0/subnets/{subnet_id}/gateway_ip
> * /v2.0/subnets/{subnet_id}/cidr
> * /v2.0/subnets/{subnet_id}/ipv6_address_mode
> * /v2.0/subnets/{subnet_id}/ipv6_ra_mode
>   /v2.0/ports/{port_id}/description - Why not?
>   /v2.0/ports/{port_id}/dns_assignment
>   /v2.0/ports/{port_id}/dns_domain
>   /v2.0/ports/{port_id}/dns_name
> * /v2.0/ports/{port_id}/fixed_ips - We have this already
>   /v2.0/ports/{port_id}/name - Why not?
>
> I've added Dan Sneddon on CC as well. Guess there is the question if
> TripleO will/want to continue using heat, neutron, nova etc.
>
> //
> Harald

I can't speak to the roadmap of Heat/Neutron/Nova on the undercloud; for the immediate future I don't see us moving away from Heat entirely due to upgrade requirements.

I can see another use case for this Heat functionality, which is that I would like to be able to generate a report using Heat that lists all the ports in use in the entire deployment.
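As a rough illustration only (the output name here is invented; nothing like it exists in the templates today), such a report could be exposed as a stack output and pulled with something like:

    # hypothetical: dump a template-defined map of server -> ports/IPs
    openstack stack output show overcloud port_report -f yaml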
This would be generated post-deployment, and could be used to populate an external DNS server, or simply to report on which IPs belong to which nodes. -- Dan Sneddon | Senior Principal OpenStack Engineer dsneddon at redhat.com | redhat.com/openstack dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From gr at ham.ie Mon Apr 23 18:14:50 2018 From: gr at ham.ie (Graham Hayes) Date: Mon, 23 Apr 2018 19:14:50 +0100 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: On 23/04/18 14:27, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > We frequently have discussions about whether the TC is active enough, > in terms of driving new policies, technology choices, and other > issues that affect the entire community. > > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. I think the best example of the TC being proactive and it being the right choice is the Visioning document and exercise. > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. The InterOp testing and Tempest situation is the most vivid in my mind (after being in the centre of it for months). Members of the TC were proactive, but the TC as a whole was passive on it. The TC reacted 3 or 4 days after the board had approved the program - when we should have had an answer months before. > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). I do think the TC should be more active in driving OpenStack forward. I think the TC has a role in listening to the developers who are driving the projects forward, and connecting them with other project developers where appropriate, while also co-ordinating with the User Committee, to see where commonalities are, and then using its voice to drive change in the foundation, and member companies (via the Board, foundation staff and other potentially more informal avenues). But for that, the TC will need to find a collective voice, that is pro-active, as trying to drive a project in the manner above cannot be reactive - by the time we develop a position that we are reacting with it, it will be too late. I think introducing more formal in-person blocks of time as a group is important, with a time blocked agenda, and enforced chairing could help us do that. I know it is not a popular opinion, but a 1/2 day every 6 months where all TC members can be available and attend the meeting can really help a group find a mutual voice. > If you think the TC should tend to be less active in driving change > overall, please describe what policies you think the TC should be > taking an active role in implementing. 
> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From fungi at yuggoth.org Mon Apr 23 18:24:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 23 Apr 2018 18:24:36 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524503764-sup-3631@lrrr.local> References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> <1524503764-sup-3631@lrrr.local> Message-ID: <20180423182436.muwrtixxgksuzlrt@yuggoth.org> On 2018-04-23 13:18:22 -0400 (-0400), Doug Hellmann wrote: [...] > I would like for us to collect some more data about what efforts > teams are making with encouraging new contributors, and what seems > to be working or not. In the past we've done pretty well at finding > new techniques by experimenting within one team and then adapting > the results to scale them out to other teams. > > Does anyone have any examples of things that we ought to be trying > more of? A while back (and I'm sorry I seem to be failing at finding the right keywords to locate any of it) it was pointed out that the Kolla team has a handbook for how to become a core reviewer for their deliverables with a process that contributors interested in getting more involved that way can follow. While perhaps not necessarily applicable everywhere, and certainly would be extremely team-specific, it sounded like an intriguing solution. I'd be curious to follow up and find out whether that model has continued to work out for them. Some of us also urged existing leaders in various projects to record videos encouraging contributors to get more involved by demystifying processes like code review or bug triage. This could be as simple as signing up for an available lightning talk slot at one of our conferences and then performing what you consider to be mundane but much-needed activities while narrating an explanation of what's going on in your head. What we've failed to do, as far as I'm aware, is aggregate links to these somewhere and promote that in ways that the intended audience will find them. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rico.lin.guanyu at gmail.com Mon Apr 23 18:29:05 2018 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 24 Apr 2018 02:29:05 +0800 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524503909-sup-9816@lrrr.local> References: <1524491647-sup-1779@lrrr.local> <1524503909-sup-9816@lrrr.local> Message-ID: 2018-04-24 1:26 GMT+08:00 Doug Hellmann : > > Excerpts from Rico Lin's message of 2018-04-24 00:54:14 +0800: > > ** What aspects of our policies or culture make contributing to > > OpenStackmore difficult than contributing to other open source projects?To > > fully understand the map of OpenStack services is a huge challenge, > > especially for new join developers. 
And for project teams, might not > > This is an interesting point that I haven't heard raised before. > Typically the number of projects is used as an example of something > that is confusing to users or deployers, but can you elaborate on > how it is confusing to contributors? Because in some cases, users provide contributors and when a user feature jump in, to clarify which projects to might be part of that feature will cause time when they weren't in OpenStack for long (as a contributor working on cross communities). And usually, when he/she send out an ML for such a cross-projects usecase won't get much replied (really depends on teams). For other cases, user rely on what developers' report to decides where they should put resource on, but developers just provides the first match (and seems usable) project he can find in repositories. > > > provide new contributors guidelines to be quicker to become part of the > > team. Finally, the format or WG/SIG/Team might confuse contributors.* Which > > Do you mean because it isn't clear what sort of group to start in order > to accomplish something? exactly > > > of those would you change, and how?IMO to provides clear landscape will > > help on give people better view on the whole map and might get the better > > idea on how to fit in their plan without spending too much time on finding > > where to contribute. Also, we need provides better ways to communicate to > > new contributors to at least make them feel welcome. Which maybe we can try > > to add in PTL/TC's (or other possible position) duty and to provide better > > guidelines to new join contributors who seems got no clue on what's the > > project been working on or where the project needs help. Only people we > > What role do you think the First Contact SIG might play in that? I think in this specific scenario, First Contact SIG can help define the scope and suggest the guideline. Because new developers always reach to SIG/project team directly, and if it's not working, they might just try to work around issues and skip the chances to join OpenStack community. > > > really understand that project can provide such judgment, and it seems like > > a duty to provide guidelines to others (Aka help people working with you). > > Finally, I personally think it's a good idea to have SIG in OpenStack, but > > I think we need to provide technical guidelines to SIGs, so they can make a > > clear decision on what's their mission, where are the resources they can > > use, and how they might be able to use it. A clear vision makes clear > > actions.* Where else should we be looking for contributors?IMO we actually > > got a bunch new contributors around OpenStack (mostly for nova and neutron > > of course) and trying to figure out what they can/should do. Also possibly > > from other projects which might be doing overlapping jobs. Also to form SIG > > might be a more productive way to collect contributors.* > > > > > > > > May The Force of OpenStack Be With You, > > > > *Rico Lin*irc: ricolin > > > > 2018-04-23 22:06 GMT+08:00 Doug Hellmann : > > > > > [This is meant to be one of (I hope) several conversation-provoking > > > questions directed at prospective TC members to help the community > > > understand their positions before considering how to vote in the > > > ongoing election.] > > > > > > Over the last year we have seen some contraction in the number of > > > companies and individuals contributing to OpenStack. 
At the same > > > time we have started seeing contributions from other companies and > > > individuals. To some degree this contraction and shift in contributor > > > base is a natural outcome of changes in OpenStack itself along with > > > the rest of the technology industry, but as with any change it > > > raises questions about how and whether we can ensure a smooth > > > transition to a new steady state. > > > > > > What aspects of our policies or culture make contributing to OpenStack > > > more difficult than contributing to other open source projects? > > > > > > Which of those would you change, and how? > > > > > > Where else should we be looking for contributors? > > > > > > Doug > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- May The Force of OpenStack Be With You, Rico Lin irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Mon Apr 23 18:31:08 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 23 Apr 2018 14:31:08 -0400 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: <37bb1679-a884-628d-08f3-4856a750ce31@redhat.com> On 23/04/18 09:27, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > We frequently have discussions about whether the TC is active enough, > in terms of driving new policies, technology choices, and other > issues that affect the entire community. I guess you can put me in the camp of wanting the TC to be proactive as well as reactive. I don't want to say it's not being active enough, but I do think it's valuable to proactively consider other ways in which we can be proactive. > Please describe one case where we were either active or reactive > and how that was shown to be the right choice over time. A couple of examples that come to mind of the TC being actively involved in driving changes would be the addition of etcd3 to the required set of base services (alongside RabbitMQ and MySQL/MariaDB), and the project-wide goals initiative. Those are both examples of decisions that need to be co-ordinated across the whole of OpenStack. Since the TC is the only elected body that represents the whole technical community, it needs to have a role in decisions such as those - either by making them directly or by delegating them to some group of experts. If it doesn't, we'll generally be stuck with the status quo by default. In my experience, major decisions getting made by default is a common failure mode in a lot of bad products. > Please describe another case where the choice to be active or > reactive ended up being the wrong choice. 
This is a difficult one to answer, in part because being purely reactive need not be a choice - it's the default. One example, that's closely related to the other thread, might be the way we've chosen to define the scope of OpenStack. That's largely been by reactively approving or rejecting projects as they requested to join, rather than by attempting to lay out a vision in more detail than our mission statement and correcting course when necessary in response to new project applications. The picture that has emerged from that process has essentially been one of a full-featured cloud (which, for the record, I fully agree with) - most projects were approved. But as Chris pointed out there are plenty of folks out there who disagree with that. By not having a proactive debate we've missed an opportunity to gain a deeper understanding of their concerns and address them as far as is possible. I believe there are a lot of folks still working at cross-purposes without a unified vision of what we're trying to build as a result. > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). One of my concerns is that the dropping of the weekly TC meeting with a published agenda in favour of the unstructured office hours has diminished the TC's ability to be proactive. For example, the constellations initiative was adopted by the TC as a goal to get underway by 2019 (barely more than 8 months away). Who is working on it? What is the status? What are the open questions requiring feedback? I don't know, and I follow #openstack-tc and the TC mailing list fairly closely compared to most people. I definitely don't want to get rid of office hours, and I think the reasons for dropping the meeting (encouraging geographically diverse participation) are still valid. I'd like to see the TC come up with a program of work for the term after each Summit, and actively track the progress of it using asynchronous tools - perhaps Storyboard supported by follow-ups on the mailing list. Perhaps we can also do more to, for example, empower SIGs to make recommendations on community-wide issues that the TC would then commit to either ratifying or rejecting within a fixed time frame. One reason that I think the TC is (correctly) wary of promulgating too many edicts is that they're perceived as difficult to change as circumstances demand. So reducing the cost of changes is key to allowing the TC to take a more active role without stifling the community. cheers, Zane. > If you think the TC should tend to be less active in driving change > overall, please describe what policies you think the TC should be > taking an active role in implementing. > > Doug From mriedemos at gmail.com Mon Apr 23 18:32:26 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Apr 2018 13:32:26 -0500 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? 
In-Reply-To: <20180423182436.muwrtixxgksuzlrt@yuggoth.org>
References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> <1524503764-sup-3631@lrrr.local> <20180423182436.muwrtixxgksuzlrt@yuggoth.org>
Message-ID: <0c2b4339-b805-a0e7-4ba1-3b8cfcb4233f@gmail.com>

On 4/23/2018 1:24 PM, Jeremy Stanley wrote:
> Some of us also urged existing leaders in various projects to record
> videos encouraging contributors to get more involved by demystifying
> processes like code review or bug triage. This could be as simple as
> signing up for an available lightning talk slot at one of our
> conferences and then performing what you consider to be mundane but
> much-needed activities while narrating an explanation of what's
> going on in your head. What we've failed to do, as far as I'm aware,
> is aggregate links to these somewhere and promote that in ways that
> the intended audience will find them.

This reminded me of something I linked into the nova contributor docs based on a presentation that stephenfin and bauzas gave in Sydney about bug triage:

https://docs.openstack.org/nova/latest/contributor/how-to-get-involved.html#how-to-do-great-bug-triage

Over time I've tried to link more and more relevant summit videos into the nova docs for things like Placement, Cells v2, and really anything that is specific to a domain of nova for new contributors. We spend so much time working on these presentations that it's a shame when we don't link them back into our docs for people to find later when they are trying to learn.

--
Thanks,
Matt

From andrey.mp at gmail.com Mon Apr 23 18:42:56 2018
From: andrey.mp at gmail.com (Andrey Pavlov)
Date: Mon, 23 Apr 2018 21:42:56 +0300
Subject: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights
Message-ID:

Hello Sean,

The ec2-api team has always used manual tagging because that is the only procedure I know. I thought it was more convenient for me because I can manage commits and branches, but in fact I don't mind switching to the automatic scheme. If something else is needed from me, please let me know.

Regards,
Andrey Pavlov.

> Hello teams, I am following up on some recently announced changes
> regarding governed projects and tagging rights. See [1] for background.
>
> It was mostly followed before that when a project came under official
> governance that all tagging and releases would then move to using the
> openstack/releases repo and associated automation. It was not officially
> stated until recently that this was one of the steps of coming under
> governance, so there were a few projects that became official but that
> continued to do their own releases.
>
> We've cleaned up most projects' rights to push tags, but for the ones
> listed here we waited:
>
> - rally
> - dragonflow
> - ec2-api
> - networking-powervm
> - nova-powervm
> - yaql
>
> We would like to finish cleaning up the ACLs for these, but I wanted to
> check with the teams to make sure there wasn't a reason why these repos
> had continued tagging separately.
>
> Please let me know, either here or in the #openstack-release channel, if
> there is something we are overlooking.
>
> Thanks for your attention.
>
> ---
> Sean (smcginnis)

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From fungi at yuggoth.org Mon Apr 23 18:52:42 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 23 Apr 2018 18:52:42 +0000
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> Message-ID: <20180423185241.jwmqmvzgtajdzfuf@yuggoth.org> On 2018-04-23 16:28:13 +0000 (+0000), Tim Bell wrote: > One of the challenges in the academic sector is the time from > lightbulb moment to code commit. Many of the academic resource > opportunities are short term (e.g. PhDs, student projects, > government funded projects) and there is a latency in current > system to onboard, get the appropriate recognition in the > community (such as by reviewing other changes) and then get the > code committed. This is a particular problem for the larger > projects where the patch is not in one of the project goal areas > for that release. [...] Not to seem pessimistic (I'm not!) but I have hopes that with a trend of decreasing full-time investment from companies "productizing" OpenStack we'll see a corresponding decrease in project velocity as well. I think that one of the primary scaling challenges we have which translates to a negative experience for casual contributors is the overall change volume in some of our larger projects. We've optimized our processes for people who are going to work on many things in parallel, so that the amount of time any one of those things takes to land is less of a problem for their effective personal throughput. As the pace of development slows and the hype continues to cool, this could at least partly self-correct. We'll be taking on changes from users and other casual contributors out of necessity when they're all we have. What we need to do is fill in the gaps in the meantime and carefully manage the transition so that we increase ease of contribution for them ahead of that curve rather than once it's too late. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kennelson11 at gmail.com Mon Apr 23 19:32:40 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 23 Apr 2018 19:32:40 +0000 Subject: [openstack-dev] [mistral] September PTG in Denver In-Reply-To: References: Message-ID: Hey Dougal, I think I had said May 2nd in my initial email asking about attendance. If you can get an answer out of your team by then I would greatly appreciate it! If you need more time please let me know by then (May 2nd) instead. -Kendall (diablo_rojo) On Fri, Apr 20, 2018 at 8:17 AM Dougal Matthews wrote: > Hey all, > > You may have seen the news already, but yesterday the next PTG location > was announced [1]. It will be in Denver again. > > Can you let me know if you are planning to attend and go to Mistral > sessions? I have been asked about numbers and need to reply by May 5th. > > Thanks, > Dougal > > > [1]: > http://lists.openstack.org/pipermail/openstack-dev/2018-April/129564.html > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sean.mcginnis at gmx.com Mon Apr 23 19:45:35 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 14:45:35 -0500 Subject: [openstack-dev] [horizon] Release of openstack/xstatic-angular-vis failed Message-ID: <20180423194534.GA17397@sm-xps> See below for logs from a failed xstatic release job. It appears something is not set up right with this job. "can't open file 'xstatic_check_version.py': [Errno 2] No such file or directory" I missed it initially, but this release did not actually contain any functional change, so I think it is fine that it failed. We can just hold off on doing anything with it until there are actual changes made that need to be delivered. But it did at least act as a good pipecleaner in that it found this job failure. I don't know enough about the release job itself, but please feel free to reach out in the #openstack-release channel if there is anything the release team can do to help get this sorted out and ready for when an actual release is needed. Thanks, Sean ----- Forwarded message from zuul at openstack.org ----- Date: Mon, 23 Apr 2018 17:03:18 +0000 From: zuul at openstack.org To: release-job-failures at lists.openstack.org Subject: [Release-job-failures] Release of openstack/xstatic-angular-vis failed Reply-To: openstack-dev at lists.openstack.org Build failed. - xstatic-check-version http://logs.openstack.org/59/591c61a6bf706434e19de85809f4c37adc612280/release/xstatic-check-version/613f7fc/ : FAILURE in 2m 23s - release-openstack-python release-openstack-python : SKIPPED - announce-release announce-release : SKIPPED - propose-update-constraints propose-update-constraints : SKIPPED _______________________________________________ Release-job-failures mailing list Release-job-failures at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures ----- End forwarded message ----- From mriedemos at gmail.com Mon Apr 23 19:48:38 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Apr 2018 14:48:38 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild Message-ID: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> We seem to be at a bit of an impasse in this spec amendment [1] so I want to try and summarize the alternative solutions as I see them. The overall goal of the blueprint is to allow defining traits via image properties, like flavor extra specs. Those image-defined traits are used to filter hosts during scheduling of the instance. During server create, that filtering happens during the normal "GET /allocation_candidates" call to placement. The problem is during rebuild with a new image that specifies new required traits. A rebuild is not a move operation, but we run through the scheduler filters to make sure the new image (if one is specified), is valid for the host on which the instance is currently running. We don't currently call "GET /allocation_candidates" during rebuild because that could inadvertently filter out the host we know we need [2]. Also, since flavors don't change for rebuild, we haven't had a need for getting allocation candidates during rebuild since we're not allocating new resources (pretend bug 1763766 [3] does not exist for now). Now that we know the problem, here are some of the solutions that have been discussed in the spec amendment, again, only for rebuild with a new image that has new traits: 1. Fail in the API saying you can't rebuild with a new image with new required traits. 
Pros: - Simple way to keep the new image off a host that doesn't support it. - Similar solution to volume-backed rebuild with a new image. Cons: - Confusing user experience since they might be able to rebuild with some new images but not others with no clear explanation about the difference. 2. Have the ImagePropertiesFilter call "GET /resource_providers/{rp_uuid}/traits" and compare the compute node root provider traits against the new image's required traits. Pros: - Avoids having to call "GET /allocation_candidates" during rebuild. - Simple way to compare the required image traits against the compute node provider traits. Cons: - Does not account for nested providers so the scheduler could reject the image due to its required traits which actually apply to a nested provider in the tree. This is somewhat related to bug 1763766. 3. Slight variation on #2 except build a set of all traits from all providers in the same tree. Pros: - Handles the nested provider traits issue from #2. Cons: - Duplicates filtering in ImagePropertiesFilter that could otherwise happen in "GET /allocation_candidates". 4. Add a microversion to change "GET /allocation_candidates" to make two changes: a) Add an "in_tree" filter like in "GET /resource_providers". This would be needed to limit the scope of what gets returned since we know we only want to check against one specific host (the current host for the instance). b) Make "resources" optional since on a rebuild we don't want to allocate new resources (again, notwithstanding bug 1763766). Pros: - We can call "GET /allocation_candidates?in_tree=&required=" and if nothing is returned, we know the new image's required traits don't work with the current node. - The filtering is baked into "GET /allocation_candidates" and not client-side in ImagePropertiesFilter. Cons: - Changes to the "GET /allocation_candidates" API which is going to be more complicated and more up-front work, but I don't have a good idea of how hard this would be to add since we already have the same "in_tree" logic in "GET /resource_providers". - Potentially slows down the completion of the overall blueprint. =========== My personal thoughts are, I don't like option 1 since it adds technical debt which we'll eventually just need to solve later (think about [4]). Similar feelings for #2. #3 might be a short-term solution until #4 is done, but I think the best long-term solution to this problem is #4. [1] https://review.openstack.org/#/c/560718/ [2] https://review.openstack.org/#/c/546357/ [3] https://bugs.launchpad.net/nova/+bug/1763766 [4] https://review.openstack.org/#/c/532407/ -- Thanks, Matt From sean.mcginnis at gmx.com Mon Apr 23 19:58:24 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 14:58:24 -0500 Subject: [openstack-dev] [mistral] September PTG in Denver In-Reply-To: References: Message-ID: <20180423195823.GC17397@sm-xps> On Mon, Apr 23, 2018 at 07:32:40PM +0000, Kendall Nelson wrote: > Hey Dougal, > > I think I had said May 2nd in my initial email asking about attendance. If > you can get an answer out of your team by then I would greatly appreciate > it! If you need more time please let me know by then (May 2nd) instead. > > -Kendall (diablo_rojo) > Do we need to collect this data for September already by the beginning of May? Granted, the sooner we know details and can start planning, the better. But as I started looking over the survey, it just seems really early to predict where things will be 5 months from now. 
Especially considering we will have a different set of PTLs for many projects by then, and it is too early for some of those hand off discussions to have started yet. Sean From therve at redhat.com Mon Apr 23 20:04:31 2018 From: therve at redhat.com (Thomas Herve) Date: Mon, 23 Apr 2018 22:04:31 +0200 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. In-Reply-To: <1524503802.4383.149.camel@redhat.com> References: <1524142764.4383.83.camel@redhat.com> <1524503802.4383.149.camel@redhat.com> Message-ID: On Mon, Apr 23, 2018 at 7:16 PM, Harald Jensås wrote: > On Fri, 2018-04-20 at 14:44 +0200, Thomas Herve wrote: >> To make it clear, what you want to prevent is the need to add more >> keys in network_data.yaml? >> >> As those had to be provided at some point, I wonder if tripleo can't >> find a way to pass them again on the overcloud deploy. >> > No, the networks defined in network_data.yaml is fine, that is the data > used to create the neutron stuff so passing the data from there is > already in place to some extent. > > But, the ctlplane network is not defined in network_data.yaml. OK. >> If you can list the attributes you care about that'd be great. >> > > Guess what I envision is a client_config attribute, a map with data > useful to configure a network interface on the client. (I put * on the > ones I believe could be useful for TripleO) > > * /v2.0/networks/{network_id}/mtu > /v2.0/networks/{network_id}/dns_domain > * /v2.0/subnets/{subnet_id}/dns_nameservers > * /v2.0/subnets/{subnet_id}/host_routes > /v2.0/subnets/{subnet_id}/ip_version > * /v2.0/subnets/{subnet_id}/gateway_ip > * /v2.0/subnets/{subnet_id}/cidr > * /v2.0/subnets/{subnet_id}/ipv6_address_mode > * /v2.0/subnets/{subnet_id}/ipv6_ra_mode > /v2.0/ports/{port_id}/description - Why not? > /v2.0/ports/{port_id}/dns_assignment > /v2.0/ports/{port_id}/dns_domain > /v2.0/ports/{port_id}/dns_name > * /v2.0/ports/{port_id}/fixed_ips - We have this already > /v2.0/ports/{port_id}/name - Why not? I think we have most of those on resources already. From the required ones, I think the only ones mising are ipv6_address_mode and ipv6_ra_mode on subnets. If we make external resources work, it'll be easy to provide what you need. -- Thomas From therve at redhat.com Mon Apr 23 20:07:43 2018 From: therve at redhat.com (Thomas Herve) Date: Mon, 23 Apr 2018 22:07:43 +0200 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. In-Reply-To: References: <1524142764.4383.83.camel@redhat.com> <1524503802.4383.149.camel@redhat.com> Message-ID: On Mon, Apr 23, 2018 at 8:09 PM, Dan Sneddon wrote: > We could add the ControlPlaneDefaultRoute and ControlPlaneSubnetCidr to > network_data.yaml, but this would involve some duplication of configuration > data, since those are currently defined in undercloud.conf. A more robust > solution might be to generate network_data.yaml from that info in > undercloud.conf, but currently we don't modify any files in the > tripleo-heat-templates package after it gets installed. Right, it seems getting those values from Neutron is better. > I can't speak to the roadmap of Heat/Neutron/Nova on the undercloud, for the > immediate future I don't see us moving away from Heat entirely due to > upgrade requirements. 
> > I can see another use case for this Heat functionality, which is that I > would like to be able to generate a report using Heat that lists all the > ports in use in the entire deployment. This would be generated > post-deployment, and could be used to populate an external DNS server, or > simply to report on which IPs belong to which nodes. Jiri wrote a small tool that does mostly that: https://gist.github.com/jistr/ad385d77db7600c18e8d52652358b616 We could make that more official, but we already have the info. -- Thomas From ildiko.vancsa at gmail.com Mon Apr 23 20:08:48 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 23 Apr 2018 22:08:48 +0200 Subject: [openstack-dev] [os-upstream-institute] Prep call for Vancouver today Message-ID: Hi Training Team, It is a friendly reminder that we will have a conference call on Zoom today at 2200 UTC as opposed to the weekly meeting to better sync up before the training in Vancouver. You can find the call details here: https://etherpad.openstack.org/p/openstack-upstream-institute-meetings Please let me know if you have any questions. Thanks, Ildikó (IRC: ildikov) From aj at suse.com Mon Apr 23 20:09:08 2018 From: aj at suse.com (Andreas Jaeger) Date: Mon, 23 Apr 2018 22:09:08 +0200 Subject: [openstack-dev] [horizon] Release of openstack/xstatic-angular-vis failed In-Reply-To: <20180423194534.GA17397@sm-xps> References: <20180423194534.GA17397@sm-xps> Message-ID: <3e18636f-376f-2924-35cf-b02b96c5fe87@suse.com> On 2018-04-23 21:45, Sean McGinnis wrote: > See below for logs from a failed xstatic release job. It appears something is > not set up right with this job. > > "can't open file 'xstatic_check_version.py': [Errno 2] No such file or > directory" > > I missed it initially, but this release did not actually contain any functional > change, so I think it is fine that it failed. We can just hold off on doing > anything with it until there are actual changes made that need to be delivered. > > But it did at least act as a good pipecleaner in that it found this job > failure. I don't know enough about the release job itself, but please feel free > to reach out in the #openstack-release channel if there is anything the release > team can do to help get this sorted out and ready for when an actual release is > needed. https://review.openstack.org/563752 should fix it, Andreas > Thanks, > Sean > > ----- Forwarded message from zuul at openstack.org ----- > > Date: Mon, 23 Apr 2018 17:03:18 +0000 > From: zuul at openstack.org > To: release-job-failures at lists.openstack.org > Subject: [Release-job-failures] Release of openstack/xstatic-angular-vis failed > Reply-To: openstack-dev at lists.openstack.org > > Build failed. 
> - xstatic-check-version http://logs.openstack.org/59/591c61a6bf706434e19de85809f4c37adc612280/release/xstatic-check-version/613f7fc/ : FAILURE in 2m 23s
> - release-openstack-python release-openstack-python : SKIPPED
> - announce-release announce-release : SKIPPED
> - propose-update-constraints propose-update-constraints : SKIPPED
>
> _______________________________________________
> Release-job-failures mailing list
> Release-job-failures at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures
>
> ----- End forwarded message -----

--
Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F FED1 389A 563C C272 A126

From Louie.Kwan at windriver.com Mon Apr 23 20:10:00 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Mon, 23 Apr 2018 20:10:00 +0000
Subject: [openstack-dev] [ceilometer] Ceilometer-file-publisher-compression-csv-format
Message-ID: <47EFB32CD8770A4D9590812EE28C977E962F2E48@ALA-MBD.corp.ad.wrs.com>

I submitted the following review on April 19:

https://review.openstack.org/#/c/562768/

I would like to know who else could be added to the reviewer list, and whether anything else is needed for the next step. Thanks.

Louie

From Louie.Kwan at windriver.com Mon Apr 23 20:15:38 2018
From: Louie.Kwan at windriver.com (Kwan, Louie)
Date: Mon, 23 Apr 2018 20:15:38 +0000
Subject: [openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent
Message-ID: <47EFB32CD8770A4D9590812EE28C977E962F2E66@ALA-MBD.corp.ad.wrs.com>

I submitted the following review on January 17, 2018:

https://review.openstack.org/#/c/534958/

I would like to know who else could be added to the reviewer list, or whether anything else is needed for the next step.

Also, I am planning to attend our upcoming Masakari weekly meeting, April 24, 0400 UTC in #openstack-meeting, and would like to add an agenda item to follow up on how to move the review forward. Thanks.

Louie

From openstack at fried.cc Mon Apr 23 20:26:25 2018
From: openstack at fried.cc (Eric Fried)
Date: Mon, 23 Apr 2018 15:26:25 -0500
Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild
In-Reply-To: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com>
References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com>
Message-ID: <98caed7b-8c7e-3f3c-f6b4-90c3b7ff72a9@fried.cc>

Semantically, GET /allocation_candidates where we don't actually want to allocate anything (i.e. we don't want to use the returned candidates) is goofy, and talking about what the result would look like when there's no `resources` is going to spider into some weird questions. Like what does the response payload look like? In the "good" scenario, you would be expecting an allocation_request like:

    "allocations": {
        $rp_uuid: {
            "resources": {
                # Nada
            }
        },
    }

...which is something we discussed recently [1] in relation to "anchor" providers, and killed.

No, the question you're really asking in this case is, "Do the resource providers in this tree contain (or not contain) these traits?"
Which to me, translates directly to:

    GET /resource_providers?in_tree=$rp_uuid&required={$TRAIT|!$TRAIT, ...}

...which we already support. The answer is a list of providers. Compare that to the providers from which resources are already allocated, and Bob's your uncle.

(I do find it messy/weird that the required/forbidden traits in the image meta are supposed to apply *anywhere* in the provider tree. But I get that that's probably going to make the most sense.)

[1] http://lists.openstack.org/pipermail/openstack-dev/2018-April/129408.html
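For illustration, that check might look something like this. (The trait and microversion here are placeholders from memory, and forbidden traits would need a newer microversion than required-only filtering, so double-check against the docs before relying on this.)

    # hedged sketch: ask placement which providers in the host's tree
    # have the required trait; $TOKEN, $PLACEMENT and $RP_UUID assumed set
    curl -s \
      -H "X-Auth-Token: $TOKEN" \
      -H "OpenStack-API-Version: placement 1.18" \
      "$PLACEMENT/resource_providers?in_tree=$RP_UUID&required=HW_CPU_X86_AVX2"

An empty "resource_providers" list in the response would mean the new image's required traits aren't satisfied anywhere in that tree.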
> > b) Make "resources" optional since on a rebuild we don't want to > allocate new resources (again, notwithstanding bug 1763766). > > Pros: > > - We can call "GET /allocation_candidates?in_tree=<current compute node provider UUID>&required=<new image required traits>" and if nothing is returned, > we know the new image's required traits don't work with the current node. > - The filtering is baked into "GET /allocation_candidates" and not > client-side in ImagePropertiesFilter. > > Cons: > > - Changes to the "GET /allocation_candidates" API which is going to be > more complicated and more up-front work, but I don't have a good idea of > how hard this would be to add since we already have the same "in_tree" > logic in "GET /resource_providers". > - Potentially slows down the completion of the overall blueprint. > > =========== > > My personal thoughts are, I don't like option 1 since it adds technical > debt which we'll eventually just need to solve later (think about [4]). > Similar feelings for #2. #3 might be a short-term solution until #4 is > done, but I think the best long-term solution to this problem is #4. > > [1] https://review.openstack.org/#/c/560718/ > [2] https://review.openstack.org/#/c/546357/ > [3] https://bugs.launchpad.net/nova/+bug/1763766 > [4] https://review.openstack.org/#/c/532407/ >

From mnaser at vexxhost.com Mon Apr 23 20:30:10 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 23 Apr 2018 16:30:10 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: On Mon, Apr 23, 2018 at 10:06 AM, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? > > Where else should we be looking for contributors?

I think that, for the most part, the most vocal audience, the one that contributes the most, is very comfortable with the tools and processes that we have in place today. However, I think we may have become 'blind' to the viewpoint of new contributors and forgotten that some of our habits might be very difficult pain points for other users.

## Communication

There is a significant amount of communication and work that happens over IRC. I'll admit, it's one of my preferred ways of communicating. However, it's not something that is common for newer contributors to use. Our developer manual lists it before anything: https://docs.openstack.org/infra/manual/developers.html#irc-account There are a few other communities which are growing quickly and they're using alternative ways of communication.
I personally prefer IRC, but maybe we should put our preferences aside and look at what's sustainable, because we have to be progressive and move quickly. Perhaps we should look into an OpenStack Slack community in combination with an IRC bridge?

## Tooling

The majority of long-time OpenStack contributors are very comfortable with the Gerrit workflow. They're also very comfortable with rebasing patches, pushing them, setting up dependencies, etc. Newer developers might have some Gerrit experience, but more than likely they have more GitHub workflow experience, and that's the easiest way for them to submit code. While my own preference is to use Gerrit, I think that opening up a way to accept contributions via GitHub could be an interesting option. Now, the technical side of me can imagine all the challenges, but again, we must keep things easy and approachable.

Submitting a patch to the OpenStack community involves setting up an account with the Canonical "Ubuntu One" OpenID, creating a username in Gerrit afterwards, signing the CLA (which could get complicated depending on your organization), uploading your keys, and setting up git-review before being able to push a single patch (and then there's Launchpad for bugs, and some projects are on Storyboard, etc.). That's a lot of extra work that we're putting on new potential contributors. I don't mind it, but I think we have to collectively think about new potential contributors rather than our preferences.

I'm giving a lot of ideas for which I might not have technical solutions in place, but I think putting them out might bring up some other ways that we can come to a compromise and make it work, to make contributing to OpenStack easy.

> > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From mriedemos at gmail.com Mon Apr 23 20:38:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Apr 2018 15:38:09 -0500 Subject: [openstack-dev] [nova] Do we need bp/list-show-all-server-migration-types? Message-ID: <173f8027-7cd4-2598-4d42-19387da4f4df@gmail.com> Looking over the things in the runways queue [1], excluding the zVM driver (because I'm not sure what the status is on that thread), the next in line is blueprint list-show-all-server-migration-types [2]. I know this has been approved since Pike, but I wanted to raise some questions again [3] about whether or not we actually need this.

Looking at the spec, the problem description is totally tied to the abort in-progress cold migration blueprint [4] which we haven't agreed to do. We talked about that blueprint at the PTG in Dublin and the action item [5] was for Takashi to follow up in the mailing list (dev and operators) to determine if that is functionality people actually need. I haven't seen that happen yet.

If we aren't going to add the ability to abort a cold migration, I'm not sure why we need list-show-all-server-migration-types. The use case in the spec is something an admin can do today with the GET /os-migrations API [6]. That should at least be an alternative in the spec.

So beyond being a dependency to abort an in-progress cold migration, what would be the other reasons for list-show-all-server-migration-types?
Because if that is the only thing, I think it likely should be held up and dependent on [4] being approved, otherwise I feel it's churn for little gain. If there are other reasons for this beyond a dependency for abort cold migration, the spec should likely be updated to clearly indicate what they are. [1] https://etherpad.openstack.org/p/nova-runways-rocky [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/list-show-all-server-migration-types.html [3] http://lists.openstack.org/pipermail/openstack-dev/2017-April/115494.html [4] https://blueprints.launchpad.net/nova/+spec/abort-cold-migration [5] https://etherpad.openstack.org/p/nova-ptg-rocky (L362) [6] https://developer.openstack.org/api-ref/compute/#list-migrations -- Thanks, Matt From zbitter at redhat.com Mon Apr 23 20:47:11 2018 From: zbitter at redhat.com (Zane Bitter) Date: Mon, 23 Apr 2018 16:47:11 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> On 23/04/18 10:06, Doug Hellmann wrote: > [This is meant to be one of (I hope) several conversation-provoking > questions directed at prospective TC members to help the community > understand their positions before considering how to vote in the > ongoing election.] > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? There's probably two separate groups we need to consider. The first is operators and users of OpenStack. We want those folks to contribute when they see a problem or an opportunity to improve, and their feedback is extremely valuable because they know the product best. We need to encourage new contributors in this group and retain existing ones by: * Reducing barriers to contributing, like having to register for multiple services, sign a CLA &c. We're mostly aware of the problems in this area and have been making incremental progress on them over a long period of time. * Encouraging people to get involved. Low-hanging-fruit bug lists are useful. Even something like a link on every docs page indicating where to edit the source would help encourage people to take that first step. (Technically we have this when you click the 'bug' link - but it's not obvious, and you need to sign up for a Launchpad account to use it... see above.) Once people have done the initial setup work for a first patch, they're more likely to contribute again. The First Contact SIG is doing great work in this area. * The most important one: provide prompt, actionable feedback on changes. Nothing kills contributor motivation like having your changes ignored for months. 
Unfortunately this is also the hardest one to deal with; the situation is different in every project, and much depends on the amount of time available from the existing contributors. Adding more core reviewers helps; finding ways to limit the proportion of the code base that a core reviewer is responsible for (either by splitting up repos or giving cores a specific area of responsibility in a repo) would be one way to train them quicker. Another way, which I already alluded to in my candidacy message, is to expand the pool of OpenStack users. One of my goals is to make OpenStack an attractive cloud platform to write applications against, and not merely somewhere to get a VM to run your application in. If we can achieve that we'll increase the market for OpenStack and hence the number of users and thus potential contributors. But those new users would be more motivated than anyone to find and fix bugs, and they're already developers so they'd be disproportionately more likely to contribute code in addition to documentation or bug reports (which are also important contributions). The second group is those who are paid specifically to spend a portion of their time on upstream contribution, which brings us to... > Where else should we be looking for contributors? Companies who are making money from OpenStack! It's their responsibility to maintain the commons and, collectively speaking at least, their problem if they don't. For a start, we need to convince anybody who is maintaining a fork of OpenStack to do something more useful with their money. Like, for example, building it into a big pile and setting fire to it to keep warm. Maybe education is something that can help here. For a lot of folks, OpenStack is their first direct contact with an open source community. If we could help them to learn why contributing is in their best interest, and how to do it effectively, then we could make some progress. It's pretty remarkable that there are Foundation board members still asking the TC to direct employees of other companies to work on the stuff they want them to for free. cheers, Zane. From mriedemos at gmail.com Mon Apr 23 20:47:33 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 23 Apr 2018 15:47:33 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <98caed7b-8c7e-3f3c-f6b4-90c3b7ff72a9@fried.cc> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <98caed7b-8c7e-3f3c-f6b4-90c3b7ff72a9@fried.cc> Message-ID: On 4/23/2018 3:26 PM, Eric Fried wrote: > No, the question you're really asking in this case is, "Do the resource > providers in this tree contain (or not contain) these traits?" Which to > me, translates directly to: > > GET /resource_providers?in_tree=$rp_uuid&required={$TRAIT|!$TRAIT, ...} > > ...which we already support. The answer is a list of providers. Compare > that to the providers from which resources are already allocated, and > Bob's your uncle. OK and that will include filtering the required traits on nested providers in that tree rather than just against the root provider? If so, then yeah that sounds like an improvement on option 2 or 3 in my original email and resolves the issue without having to call (or change) "GET /allocation_candidates". I still think it should happen from within ImagePropertiesFilter, but that's an implementation detail. 
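To make that concrete, here's a minimal sketch of the check being discussed (purely illustrative: the helper name is invented, and `placement` is assumed to be a pre-configured REST session with the right auth and microversion headers already set):

    def image_traits_supported(placement, host_rp_uuid, image_traits):
        # Ask placement whether a provider in the host's tree exposes
        # all of the image's required traits.
        if not image_traits:
            return True  # no required traits, nothing to validate
        resp = placement.get(
            '/resource_providers',
            params={'in_tree': host_rp_uuid,
                    'required': ','.join(sorted(image_traits))})
        # 'required' is matched per provider, so an empty result means
        # no single provider in the tree has all the required traits.
        return bool(resp.json().get('resource_providers'))

A filter (or a conductor pre-flight check) could call something like this with the instance's host provider UUID and reject the rebuild when it returns False.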
-- Thanks, Matt

From sean.mcginnis at gmx.com Mon Apr 23 21:08:12 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 16:08:12 -0500 Subject: Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524491647-sup-1779@lrrr.local> References: <1524491647-sup-1779@lrrr.local> Message-ID: <20180423210812.GD17397@sm-xps> > > Over the last year we have seen some contraction in the number of > companies and individuals contributing to OpenStack. At the same > time we have started seeing contributions from other companies and > individuals. To some degree this contraction and shift in contributor > base is a natural outcome of changes in OpenStack itself along with > the rest of the technology industry, but as with any change it > raises questions about how and whether we can ensure a smooth > transition to a new steady state. > > What aspects of our policies or culture make contributing to OpenStack > more difficult than contributing to other open source projects? > > Which of those would you change, and how? >

Comparing OpenStack contribution to other open source projects, the biggest and most obvious difference coming into it is our use of Gerrit vs GitHub pull requests. For those used to contributing to other current large-scale projects, this can be non-intuitive to learn. Luckily, I think we have a lot of guidance documented on how our workflow works. And having worked with both types of projects, I definitely would not propose or support moving away from Gerrit, even with its warts.

One of the bigger challenges I see for new or casual contributors is the tendency for a lot of projects to only accept perfection. I don't know if this is a side effect of the explosive growth years, something we have indirectly encouraged with the way we do code reviews, or some other factor, but I have seen plenty of patches proposed that are clear improvements, but get downvoted for comment spelling, preferred variable naming, or other minor things that either are not ultimately too important or would be easy to clean up with a later patch.

I do think it's important we have high standards for new code accepted into our projects. We need to make sure we are delivering high-quality services and tools. But for things that do not end up changing the end user or operator experience of using OpenStack, I feel we need to be more relaxed. This can easily discourage a new or casual contributor. They might get excited to find something they can quickly change in the code to improve things, but then get discouraged and leave and never come back if we make it look like we are more concerned about grammatically correct code comments than functioning code.

I would also love to see more of our existing members spend time helping new contributors. But I don't know how we can really change any policies to make this more likely to happen. Speaking from experience, even for full-time contributors (or maybe especially for full-time contributors?) we are usually already busy with several other things that make it hard to carve out the time to work with someone new. But I do feel it is an important way to welcome new contributors and make sure it is not always the same folks overloaded trying to address several issues at the same time. We do have some great work done with our onboarding documentation and our regular events with the Upstream Institute.
We just need to make some effort to help consumers of those resources move on past that point.

Which makes me think of some of the discussion we've had about getting people to core. I am actually not sure if this is the right focus. I do think it would be great to have a lot of core members or potential candidates, but I think there are plenty of contributors that would like to be involved and would be able to really help out projects without necessarily wanting or needing to be cores to do so. I would like to see more focus on helping people contribute without needing to commit to taking on more responsibilities.

> Where else should we be looking for contributors? >

Universities are a good one. And being an open source project in a relatively easy-to-learn programming language, I think we could do more to encourage formal programs with CS schools as something students could do. I've brought up the idea of "internships" in the past. It would be great if we could work with schools to set up some sort of program where we are able to help someone new through accomplishing a discrete set of tasks that can benefit all involved.

I do think the majority of our resources will come through commercial interests though, with vendors using or benefitting from OpenStack contributing development, infrastructure, or testing to help the project continue to meet their customers' needs. NFV is a big area now where I think there is some resistance to changes being driven to meet their use cases, but I think it's important that we are open to those types of changes in order for OpenStack to be able to meet their needs.

Sean

From openstack at fried.cc Mon Apr 23 21:27:11 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 23 Apr 2018 16:27:11 -0500 Subject: Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <98caed7b-8c7e-3f3c-f6b4-90c3b7ff72a9@fried.cc> Message-ID: <39aa5145-ba18-7158-cde4-89f12ffa5505@fried.cc>

Following the discussion on IRC, here's what I think you need to do:

- Assuming the set of traits from your new image is called image_traits...

- Use GET /allocations/{instance_uuid} and pull out the set of all RP UUIDs. Let's call this instance_rp_uuids.

- Use the SchedulerReportClient.get_provider_tree_and_ensure_root method [1] to populate and return the ProviderTree for the host. (If we're uncomfortable about the `ensure_root` bit, we can factor that away.) Call this ptree.

- Collect all the traits in the RPs you've got allocated to your instance:

    # union of the traits on every provider the instance consumes from
    traits_in_instance_rps = set()
    for rp_uuid in instance_rp_uuids:
        traits_in_instance_rps.update(ptree.data(rp_uuid).traits)

- See if any of your image traits are *not* in those RPs.

    missing_traits = image_traits - traits_in_instance_rps

- If there were any, it's a no go.

    if missing_traits:
        # FAIL stands in for raising the appropriate error
        FAIL(_("The following traits were in the image but not in the instance's RPs: %s") % ', '.join(missing_traits))

[1] https://github.com/openstack/nova/blob/master/nova/scheduler/client/report.py#L986

On 04/23/2018 03:47 PM, Matt Riedemann wrote: > On 4/23/2018 3:26 PM, Eric Fried wrote: >> No, the question you're really asking in this case is, "Do the resource >> providers in this tree contain (or not contain) these traits?" Which to >> me, translates directly to: >> >> GET /resource_providers?in_tree=$rp_uuid&required={$TRAIT|!$TRAIT, ...} >> >> ...which we already support. The answer is a list of providers.
Compare >> that to the providers from which resources are already allocated, and >> Bob's your uncle. > > OK and that will include filtering the required traits on nested > providers in that tree rather than just against the root provider? If > so, then yeah that sounds like an improvement on option 2 or 3 in my > original email and resolves the issue without having to call (or change) > "GET /allocation_candidates". I still think it should happen from within > ImagePropertiesFilter, but that's an implementation detail. > From aschultz at redhat.com Mon Apr 23 21:33:48 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 23 Apr 2018 15:33:48 -0600 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: +1 On Mon, Apr 23, 2018 at 5:55 AM, James Slagle wrote: > On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi wrote: >> Greetings, >> >> As you probably know mcornea on IRC, Marius Cornea has been contributing on >> TripleO for a while, specially on the upgrade bits. >> Part of the quality team, he's always testing real customer scenarios and >> brings a lot of good feedback in his reviews, and quite often takes care of >> fixing complex bugs when it comes to advanced upgrades scenarios. >> He's very involved in tripleo-upgrade repository where he's already core, >> but I think it's time to let him +2 on other tripleo repos for the patches >> related to upgrades (we trust people's judgement for reviews). >> >> As usual, we'll vote! >> >> Thanks everyone for your feedback and thanks Marius for your hard work and >> involvement in the project. > > +1 > > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From emilien at redhat.com Mon Apr 23 21:35:10 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 23 Apr 2018 14:35:10 -0700 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: Thanks everyone for your positive feedback. I've updated Gerrit! Welcome Marius and thanks again for your hard work! On Mon, Apr 23, 2018 at 4:55 AM, James Slagle wrote: > On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi > wrote: > > Greetings, > > > > As you probably know mcornea on IRC, Marius Cornea has been contributing > on > > TripleO for a while, specially on the upgrade bits. > > Part of the quality team, he's always testing real customer scenarios and > > brings a lot of good feedback in his reviews, and quite often takes care > of > > fixing complex bugs when it comes to advanced upgrades scenarios. > > He's very involved in tripleo-upgrade repository where he's already core, > > but I think it's time to let him +2 on other tripleo repos for the > patches > > related to upgrades (we trust people's judgement for reviews). > > > > As usual, we'll vote! > > > > Thanks everyone for your feedback and thanks Marius for your hard work > and > > involvement in the project. 
> > +1 > > > -- > -- James Slagle > -- > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jaypipes at gmail.com Mon Apr 23 21:43:51 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 23 Apr 2018 17:43:51 -0400 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> Message-ID: <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> On 04/23/2018 03:48 PM, Matt Riedemann wrote: > We seem to be at a bit of an impasse in this spec amendment [1] so I > want to try and summarize the alternative solutions as I see them. > > The overall goal of the blueprint is to allow defining traits via image > properties, like flavor extra specs. Those image-defined traits are used > to filter hosts during scheduling of the instance. During server create, > that filtering happens during the normal "GET /allocation_candidates" > call to placement. > > The problem is during rebuild with a new image that specifies new > required traits. A rebuild is not a move operation, but we run through > the scheduler filters to make sure the new image (if one is specified), > is valid for the host on which the instance is currently running.

What you are discussing above is simply a validation that the compute node performing the rebuild for an instance supports the capabilities required by the new image. How about just having the conductor call GET /resource_providers?in_tree=<compute node provider UUID>&required=<image required traits>, see if there is a result, and if not, don't even call the scheduler at all (because the conductor would already know there would be a NoValidHost returned)? If there are no image traits, or if there is a result from GET /resource_providers, continue to do the existing call-the-scheduler behaviour in order to fulfill the ComputeCapabilitiesFilter and ImageMetadataFilter requirements that exist today.

So, in short, just do a quick pre-flight check from the conductor if image traits are found before ever calling the scheduler. Otherwise, proceed as normal. Best, -jay

From arvindn05 at gmail.com Mon Apr 23 21:51:39 2018 From: arvindn05 at gmail.com (Arvind N) Date: Mon, 23 Apr 2018 14:51:39 -0700 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: Thanks for the detailed options, Matt/Eric/Jay. Just a few of my thoughts.

For #1, we can make the explanation very clear that we rejected the request because the traits required by the new image do not match those required by the original image, and hence the rebuild is not supported.

For #2, other cons: 1. None of the filters currently make other API requests, and my understanding is we want to avoid reintroducing such a pattern. But it is definitely a workable solution. 2.
If the user disables the image properties filter, then traits-based filtering will not be run in the rebuild case.

For #3, even though it handles nested providers, there is a potential issue. Let's say a host has two SR-IOV NICs: one is a normal SR-IOV NIC (VF1), and the other has some kind of offload feature (VF2), as described by Alex. The initial instance launch happens with VF1 allocated; the rebuild is launched with a modified request with traits=HW_NIC_OFFLOAD_X, so basically we want the instance to be allocated VF2. But the original allocation was made against VF1, and since the original allocations are not changed on rebuild, we end up with wrong allocations.

For #4, there is a good amount of pushback against modifying the allocation_candidates API to not have resources.

Jay: for the GET /resource_providers?in_tree=<compute node provider UUID>&required=<image required traits> query, nested resource providers and allocations pose a problem; see #3 above.

I will investigate Eric's option and update the spec. -- Arvind N -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sean.mcginnis at gmx.com Mon Apr 23 21:56:28 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 23 Apr 2018 16:56:28 -0500 Subject: Re: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <1524489055-sup-8435@lrrr.local> References: <1524489055-sup-8435@lrrr.local> Message-ID: <20180423215627.GA25667@sm-xps> > > If you think the TC should tend to be more active in driving change > than it is today, please describe the changes (policy, culture, > etc.) you think would need to be made to do that effectively (not > which policies you want us to be more active on, but *how* to > organize the TC to be more active and have that work within the > community culture). >

I'm going to skip over some of the other questions in this one for now, but I wanted to chime in on this one. I think Howard had an excellent idea of the TC coming up with themes for each cycle. I think that could be used to create a good cadence or focus to make sure we are making progress in key areas.

It struck me that we came up with the long-term vision, but there really isn't much attention paid to it. At least not in a regular way that keeps some of these goals in mind. We could use the idea of cycle themes to make sure we are targeting key areas of that long-term vision to help us move towards bringing that vision to reality.

From mcdkr at yandex.ru Mon Apr 23 22:00:17 2018 From: mcdkr at yandex.ru (Vitalii Solodilov) Date: Tue, 24 Apr 2018 01:00:17 +0300 Subject: [openstack-dev] [mistral] timeout and retry Message-ID: <3369991524520817@web43g.yandex.ru> Hi Renat, Can you explain to me and Dougal how the timeout policy should work with the retry policy? I believe there is a bug right now. The behaviour is something like this: https://ibb.co/hhm0eH Example: https://review.openstack.org/#/c/563759/ Logs: http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083 Even if we fix this bug, after a task timeout we will not retry the task, and I don't understand which problem is solved by this combination of timeout and retry.

The other problem is task re-run, I mean via the Mistral API. The problem there is that the timeout delayed calls are not created.

IMHO the combination of these policies should work like this: https://ibb.co/fe5tzH It is not a timeout per action, because when a task retries it moves to some completed state and then back to the RUNNING state. And it will work fine with the with-items policy.
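As a rough illustration of the combined behaviour proposed here (a sketch only: run_action is a hypothetical stand-in for executing one Mistral action, not a real API):

    import time

    def run_with_retry_and_timeout(run_action, timeout, retry_count):
        # Proposed semantics: at most retry_count action executions, and
        # each individual execution is bounded by timeout seconds.
        for attempt in range(retry_count):
            try:
                # The per-action deadline restarts on every retry.
                return run_action(deadline=time.time() + timeout)
            except TimeoutError:
                continue  # this action timed out; create a new action
        raise TimeoutError('all %d attempts timed out' % retry_count)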
The main advantage is executor and RabbitMQ HA. I can specify a small timeout, so that if an executor dies, the task is retried on timeout and a new action is created. The second is predictable behaviour: when I specify timeout: 10 and retry.count: 5, I know that a maximum of 5 actions will be created before the SUCCESS state, and every action will execute for no longer than 10 seconds. -- Best regards, Vitalii Solodilov

From openstack at fried.cc Mon Apr 23 22:02:02 2018 From: openstack at fried.cc (Eric Fried) Date: Mon, 23 Apr 2018 17:02:02 -0500 Subject: Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: <4bb0b79d-c11a-d8b9-4465-4043e494c73e@fried.cc> > for the GET > /resource_providers?in_tree=<compute node provider UUID>&required=<image required traits> query, nested > resource providers and allocations pose a problem; see #3 above.

This *would* work as a quick up-front check as Jay described (if you get no results from this, you know that at least one of your image traits doesn't exist anywhere in the tree) except that it doesn't take sharing providers into account :(

From fungi at yuggoth.org Mon Apr 23 22:12:49 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 23 Apr 2018 22:12:49 +0000 Subject: Re: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <20180423215627.GA25667@sm-xps> References: <1524489055-sup-8435@lrrr.local> <20180423215627.GA25667@sm-xps> Message-ID: <20180423221248.hvdb3yhyvdu5fwhv@yuggoth.org> On 2018-04-23 16:56:28 -0500 (-0500), Sean McGinnis wrote: [...] > I think Howard had an excellent idea of the TC coming up with > themes for each cycle. I think that could be used to create a good > cadence or focus to make sure we are making progress in key areas. > > It struck me that we came up with the long-term vision, but there > really isn't much attention paid to it. At least not in a > regular way that keeps some of these goals in mind. > > We could use the idea of cycle themes to make sure we are > targeting key areas of that long-term vision to help us move > towards bringing that vision to reality. So (straw man!) we can make Rocky "the constellations cycle"? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL:

From arvindn05 at gmail.com Mon Apr 23 22:23:03 2018 From: arvindn05 at gmail.com (Arvind N) Date: Mon, 23 Apr 2018 15:23:03 -0700 Subject: Re: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <4bb0b79d-c11a-d8b9-4465-4043e494c73e@fried.cc> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <4bb0b79d-c11a-d8b9-4465-4043e494c73e@fried.cc> Message-ID: > > 1. Fail in the API saying you can't rebuild with a new image with new > required traits. Pros: - Simple way to keep the new image off a host that doesn't support it. > - Similar solution to volume-backed rebuild with a new image. Cons: - Confusing user experience since they might be able to rebuild with > some new images but not others with no clear explanation about the > difference.

Still want to get thoughts on Option 1 from the community; the main con can be addressed by a better error message, for example along the lines of the sketch below.
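Something like this hypothetical helper (the function name and message wording are invented here; nova's API layer conventionally raises webob.exc errors):

    import webob.exc

    def reject_rebuild_new_traits(new_traits):
        # Option 1: refuse the rebuild up front, explaining exactly why
        # this image differs from ones that would work.
        raise webob.exc.HTTPBadRequest(explanation=(
            'Rebuild with the requested image is not supported because '
            'it requires traits that the original image did not: %s'
            % ', '.join(sorted(new_traits))))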
My main concern is the amount of complexity being introduced now, but also what we are setting ourselves up for in the future. When/if we decide to support forbidden traits, granular resource traits, preferred traits, etc. based on image properties, we would have to handle all those complexities for the rebuild case and possibly re-implement some of the logic already within placement to handle these cases. IMHO, I don't see a whole lot of benefit when weighing against the cost. Feedback is appreciated. :) Arvind

On Mon, Apr 23, 2018 at 3:02 PM, Eric Fried wrote: > > for the GET > > /resource_providers?in_tree=<compute node provider UUID>&required=<image required traits> query, nested > > resource providers and allocations pose a problem; see #3 above. > > This *would* work as a quick up-front check as Jay described (if you get > no results from this, you know that at least one of your image traits > doesn't exist anywhere in the tree) except that it doesn't take sharing > providers into account :( > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Arvind N -------------- next part -------------- An HTML attachment was scrubbed... URL:

From doug at doughellmann.com Mon Apr 23 22:31:17 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 23 Apr 2018 18:31:17 -0400 Subject: Re: [openstack-dev] [rally][dragonflow][ec2-api][PowerVMStackers][murano] Tagging rights In-Reply-To: References: Message-ID: <1524522634-sup-3270@lrrr.local> Excerpts from Andrey Pavlov's message of 2018-04-23 21:42:56 +0300: > Hello Sean, > > EC2-api team always used manual tagging because I know only this procedure. > I thought that it's more convenient for me cause I can manage > commits/branches. > But in fact I don't mind to switch to automatic scheme. > If somethig else is needed from please let me know. > > Regards, > Andrey Pavlov.

You will still need to trigger tags, you will just do it in a different way. See http://git.openstack.org/cgit/openstack/releases/tree/README.rst for details and drop in to #openstack-release or send email to this list with the subject tag "[release]" if you have any questions. Doug
> > --- > Sean (smcginnis) From ramamani.yeleswarapu at intel.com Mon Apr 23 22:34:22 2018 From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani) Date: Mon, 23 Apr 2018 22:34:22 +0000 Subject: [openstack-dev] [ironic] this week's priorities and subteam reports Message-ID: Hi, We are glad to present this week's priorities and subteam report for Ironic. As usual, this is pulled directly from the Ironic whiteboard[0] and formatted. This Week's Priorities (as of the weekly ironic meeting) ======================================================== Weekly priorities ----------------- - Bios interface support - RPC Interfaces https://review.openstack.org/#/c/511714/ - Hardware type cleanup - https://review.openstack.org/#/q/status:open+topic:hw-types - Python-ironicclient things - Accept a version on set_provision_state - https://review.openstack.org/#/c/557850/ - Wire in header microversion into client negotiion - https://review.openstack.org/#/c/558027/ - Remaining Rescue patches - https://review.openstack.org/#/c/528699/ - Tempest tests with nova (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.) (Rebased by TheJulia 20180416) - Management interface boot_mode change - https://review.openstack.org/#/c/526773/ - Bug Fixes - To be written: - "periodic tasks of non-classic driver Interfaces aren't run" https://storyboard.openstack.org/#!/story/2001884 - Bifrost pip10 failure - House Keeping: - https://review.openstack.org/#/c/557441/ 2x+2 and +A, CI failure, rechecked. Vendor priorities ----------------- cisco-ucs: Patches in works for SDK update, but not posted yet, currently rebuilding third party CI infra after a disaster... idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9 ilo: None irmc: None - a few works are work in progress oneview: None at this time - No subteam at present. xclarity: Fix XClarity parameters discrepancy: https://review.openstack.org/#/c/561405/ Subproject priorities --------------------- bifrost: ironic-inspector (or its client): networking-baremetal: networking-generic-switch: sushy and the redfish driver: Bugs (dtantsur, vdrok, TheJulia) -------------------------------- - (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats. - I still did not, may find some time this week - Stats (diff between 12 Mar 2018 and 19 Mar 2018) - Ironic: 225 bugs (+14) + 250 wishlist items (+2). 15 new (+10), 152 in progress, 1 critical, 36 high (+3) and 26 incomplete (+2) - Inspector: 15 bugs (+1) + 26 wishlist items. 1 new (+1), 14 in progress, 0 critical, 3 high and 4 incomplete - Nova bugs with Ironic tag: 14 (-1). 1 new, 0 critical, 0 high - HIGH bugs with patches to review: - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640: Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15 - Needs to be reproposed to the ironic tempest plugin repository. 
- prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916: - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ MERGED - Backport to stable/queens proposed Priorities ========== Deploy Steps (rloo, mgoddard) ----------------------------- - spec for deployment steps framework has merged: https://review.openstack.org/#/c/549493/ - waiting for code from rloo, no timeframe yet BIOS config framework(zshi, yolanda, mgoddard, hshiina) ------------------------------------------------------- - status as of 23 April 2018: - Spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/generic-bios-config.html - List of ordered patches: - BIOS Settings: Add DB model: https://review.openstack.org/511162 agreed that column type of bios setting value is string, blocked by the gate failure - Add bios_interface db field https://review.openstack.org/528609 many +2s, can be merged soon after the patch above is merged - BIOS Settings: Add DB API: https://review.openstack.org/511402 1x +1, actively reviewed and updated - BIOS Settings: Add RPC object https://review.openstack.org/511714 - Add BIOSInterface to base driver class https://review.openstack.org/507793 - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200 - Add Node BIOS support - REST API: https://review.openstack.org/512579 Conductor Location Awareness (jroll, dtantsur) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001795 - (april 23) spec has good feedback, one issue to resolve, should be able to land this week - https://review.openstack.org/#/c/559420/ Reference architecture guide (dtantsur, jroll) ---------------------------------------------- - story: https://storyboard.openstack.org/#!/story/2001745 - status as of 23 April 2018: - Dublin PTG consensus was to start with small architectural building blocks. - list of cases from the Denver PTG - see in the story - nothing new this week Graphical console interface (mkrai, anup-d-navare, TheJulia) ------------------------------------------------------------ - status as of 23 Apr 2018: - No update - Have not had a chance to get to this yet this cycle. Goal for the cycle was a plan, not necessarily implementation. - VNC Graphical console spec: https://review.openstack.org/#/c/306074/ - needs update, address comments - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console Neutron event processing (vdrok) -------------------------------- - status as of 23 April 2018: - spec at https://review.openstack.org/343684 - Needs update - WIP code at https://review.openstack.org/440778 - code is being rewritten to look a bit nicer (major rewrite), spec update coming afterwards Goals ===== Make nova flexible with ironic API versions (TheJulia) ------------------------------------------------------ Status as of 23 APR 2018: (TheJulia) No update this week. Alternatively existing functionality could be used. The rescue patch for nova might end up landing with a version list. I've checked with some nova folks and they are on board with that option as a short term compromise. (TheJulia) We need python-ironicclient reviews which would be required to do this https://review.openstack.org/#/c/557850/ https://review.openstack.org/#/c/558027/ Storyboard migration (TheJulia, dtantsur) ----------------------------------------- Status as of Apr 23th. - Done with moving data. 
- dtantsur to rewrite the bug dashboard Management interface refactoring (etingof, dtantsur) ---------------------------------------------------- - Status as of 23 Apr: - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ active review Getting clean steps (rloo, TheJulia) ------------------------------------ - Stat as of April 22nd 2018 - spec: https://review.openstack.org/#/c/507910/ - Updated Project vision (jroll, TheJulia) -------------------------------- - Status as of April 16: - jroll still trying to find time to collect enough thoughts for an email SIGHUP support (rloo) --------------------- - Status as of April 23 - ironic Done - ironic-inspector: - doesn't use oslo.service because not sure if can use flask with it - https://review.openstack.org/560243 custom signal handling - https://review.openstack.org/561823 oslo.service approach - networking-baremetal: https://review.openstack.org/561257 Need Reviews 2x+2 Stretch Goals ============= NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in-place Classic driver removal formerly Classic drivers deprecation (dtantsur) ---------------------------------------------------------------------- - spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html - status as of 26 Mar 2018: - switch documentation to hardware types: - api-ref examples: TODO - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO - or should we kill it with fire in favour of the docs? - ironic-inspector: - documentation: https://review.openstack.org/#/c/545285/ MERGED - backport: https://review.openstack.org/#/c/554586/ - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED - change the default discovery driver: https://review.openstack.org/#/c/550464/ - migration of CI to hardware types - IPA: https://review.openstack.org/553431 MERGED - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED - python-ironicclient: https://review.openstack.org/552543 MERGED - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html - bug needs to be fixed: "periodic tasks of non-classic driver Interfaces aren't run" https://storyboard.openstack.org/#!/story/2001884 Redfish OOB inspection (etingof, deray, stendulker) --------------------------------------------------- - sushy Storage API -- https://review.openstack.org/#/c/563051/1 Zuul v3 playbook refactoring (sambetts, pas-ha) ----------------------------------------------- Before Rocky ============ CI refactoring and missing test coverage ---------------------------------------- - not considered a priority, it's a 'do it always' thing - Standalone CI tests (vsaienk0) - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/ - localboot with partitioned image patches: - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/ Rebase/update required - when previous are merged TODO (vsaienko) - Upload tinycore partitioned image to tarbals.openstack.org - Switch ironic to use tinyipa partitioned image by default - Missing test coverage (all) - portgroups and attach/detach tempest tests: https://review.openstack.org/382476 - adoption: https://review.openstack.org/#/c/344975/ - 
should probably be changed to use standalone tests - root device hints: TODO - node take over - resource classes integration tests: https://review.openstack.org/#/c/443628/ - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957) Queens High Priorities ====================== Routed network support (sambetts, vsaienk0, bfournie, hjensas) -------------------------------------------------------------- - status as of 12 Feb 2018: - All code patches are merged. - One CI patch left, rework devstack baremetal simulation. To be done in Rocky? - This is to have actual 'flat' networks in CI. - Placement API work to be done in Rocky due to: Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment, adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates. I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue: http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38 There are patches in Nova to add support for ironic nodes in host-aggregates: - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg - https://review.openstack.org/#/c/529135/ (Spec) - Patches: - CI Patches: - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation - RFEs (Rocky) - https://bugs.launchpad.net/networking-baremetal/+bug/1749166 - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an rfe. - https://bugs.launchpad.net/networking-baremetal/+bug/1749162 - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is minor enhancement to improve operation. Rescue mode (rloo, stendulker) ------------------------------ - Status as on 12 Feb 2018 - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged - ironic side: - all code patches have merged except for - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews) - Tempest tests with nova: https://review.openstack.org/#/c/528699/ - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/ - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007 - nova side: - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode: - approved for Queens but didn't get the ironic code (client) done in time - (TheJulia) Nova has indicated that this is deferred until Rocky. 
- To get the nova patch merged, we need: - release new python-ironicclient - Done - update ironicclient version in upper-constraints (this patch will be posted automatically) - update ironicclient version in global-requirement (this patch needs to be posted manually) Posted https://review.openstack.org/554673 - code patch: https://review.openstack.org/#/c/416487/ Needs revision - CI is needed for nova part to land - tiendc is working for CI Clean up deploy interfaces (vdrok) ---------------------------------- - status as of 5 Feb 2017: - patch https://review.openstack.org/524433 needs update and rebase Zuul v3 jobs in-tree (sambetts, derekh, jlvillal) ------------------------------------------------- - Next TODO is to convert jobs on master, to proper ansible. NOT a high priority though. - (pas-ha) DNM experimental patch with "devstack-tempest" as base job https://review.openstack.org/#/c/520167/ OpenStack Priorities ==================== Mox --- - TheJulia declared this DONE. Python 3.5 compatibility (Nisha, Ankit) --------------------------------------- - Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases - this include all projects, not only ironic - please tag all reviews with topic "goal-python35" - TODO submit the python3 job for IPA - for ironic and ironic-inspector job enabled by disabling swift as swift is still lacking py3.5 support. - anupn to update the python3 job to build tinyipa with python3 - (anupn): Talked with swift folks and there is a bug upstream opened https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not on their priority - Right now patch pass all gate jobs except agent_- drivers. - (TheJulia) It seems we might not have py3 compatibility with swift until the T- cycle. - updating setup.cfg (part of requirements for the goal): - ironic: https://review.openstack.org/#/c/539500/ - MERGED - ironic-inspector: https://review.openstack.org/#/c/539502/ - MERGED Deploying with Apache and WSGI in CI (pas-ha, vsaienk0) ------------------------------------------------------- - ironic is mostly finished - (pas-ha) needs to be rewritten for uWSGI, patches on review: - https://review.openstack.org/#/c/507067 - inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218 - delayed as the HA work seems to take a different direction - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done? Subprojects =========== Inspector (dtantsur) -------------------- - trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :Dhttps://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202 - follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up) Bifrost (TheJulia) ------------------ - Also seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. `openstack` command does not work. - TheJulia will try to look at this this week. Drivers: -------- OneView (???) ~~~~~~~~~~~~~ - Oneview presently does not have a subteam. Cisco UCS (sambetts) Last updated 2018/02/05 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - Cisco CIMC driver CI back up and working on every patch - Cisco UCSM driver CI in development - Patches for updating the UCS python SDKs are in the works and should be posted soon ......... 
Until next week, --rama [0] https://etherpad.openstack.org/p/IronicWhiteBoard -------------- next part -------------- An HTML attachment was scrubbed... URL:

From Kevin.Fox at pnnl.gov Mon Apr 23 23:08:47 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Mon, 23 Apr 2018 23:08:47 +0000 Subject: Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> References: <1524491647-sup-1779@lrrr.local>, <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> Message-ID: <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov>

One more I'll add, which Zane's reply touched on a little. Contributors spawn from a healthy userbase/operatorbase. If their needs are not met, they go elsewhere and the contributor base shrinks.

OpenStack has created artificial walls between the various projects. It shows up, for example, as holes in usability at the user level, or as extra difficulty for operators juggling so many projects. Users and, for the most part, operators don't really care about project organization, or PTLs, or cores, or such. OpenStack has made some progress in this direction with things like the unified CLI, but OpenStack is not very unified.

I think OpenStack, as a whole, needs to look at ways to minimize how its architecture impacts users and operators, so they don't continue to migrate to platforms that do minimize the stuff they make the operator/user deal with. One goes to a cloud so as not to have to deal so much with the details.

Thanks, Kevin
Even something like a link on every docs page indicating where to edit the source would help encourage people to take that first step. (Technically we have this when you click the 'bug' link - but it's not obvious, and you need to sign up for a Launchpad account to use it... see above.) Once people have done the initial setup work for a first patch, they're more likely to contribute again. The First Contact SIG is doing great work in this area. * The most important one: provide prompt, actionable feedback on changes. Nothing kills contributor motivation like having your changes ignored for months. Unfortunately this is also the hardest one to deal with; the situation is different in every project, and much depends on the amount of time available from the existing contributors. Adding more core reviewers helps; finding ways to limit the proportion of the code base that a core reviewer is responsible for (either by splitting up repos or giving cores a specific area of responsibility in a repo) would be one way to train them quicker. Another way, which I already alluded to in my candidacy message, is to expand the pool of OpenStack users. One of my goals is to make OpenStack an attractive cloud platform to write applications against, and not merely somewhere to get a VM to run your application in. If we can achieve that we'll increase the market for OpenStack and hence the number of users and thus potential contributors. But those new users would be more motivated than anyone to find and fix bugs, and they're already developers so they'd be disproportionately more likely to contribute code in addition to documentation or bug reports (which are also important contributions). The second group is those who are paid specifically to spend a portion of their time on upstream contribution, which brings us to... > Where else should we be looking for contributors? Companies who are making money from OpenStack! It's their responsibility to maintain the commons and, collectively speaking at least, their problem if they don't. For a start, we need to convince anybody who is maintaining a fork of OpenStack to do something more useful with their money. Like, for example, building it into a big pile and setting fire to it to keep warm. Maybe education is something that can help here. For a lot of folks, OpenStack is their first direct contact with an open source community. If we could help them to learn why contributing is in their best interest, and how to do it effectively, then we could make some progress. It's pretty remarkable that there are Foundation board members still asking the TC to direct employees of other companies to work on the stuff they want them to for free. cheers, Zane. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From kennelson11 at gmail.com Tue Apr 24 00:03:55 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 24 Apr 2018 00:03:55 +0000 Subject: [openstack-dev] [All][Election] Rocky TC Election Voting Begins! Message-ID: Hello Everyone! The poll for the TC Election is now open and will remain open until Apr 30, 2018 23:45 UTC. We are selecting 7 TC members, please rank all candidates in your order of preference. 
You are eligible to vote if you are a Foundation individual member[1] who has also committed to one of the official programs projects[2] over the Pike-Queens timeframe (2017-02-21T23:59 to 2018-02-28T23:59) or if you are one of the extra-atcs.[3] What to do if you don't see the email and have a commit in at least one of the official programs projects[2]: * check the trash or spam folder of your gerrit Preferred Email address[4], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[2] and email the election officials[5]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Our democratic process is important to the health of OpenStack; please exercise your right to vote! Candidate statements/platforms can be found linked to Candidate names[6]. Happy voting! Thank you, -Kendall Nelson (diablo_rojo) [1] http://www.openstack.org/community/members/ [2] https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=apr-2018-elections [3] Look for the extra-atcs element in [2] [4] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [5] http://governance.openstack.org/election/#election-officials [6] http://governance.openstack.org/election/#rocky-tc-candidates -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Apr 24 00:44:36 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Apr 2018 00:44:36 +0000 Subject: [openstack-dev] [All][Election] Rocky TC Election Voting Begins! In-Reply-To: References: Message-ID: <20180424004436.nhysk63btssr4q2t@yuggoth.org> On 2018-04-24 00:03:55 +0000 (+0000), Kendall Nelson wrote: > The poll for the TC Election is now open and will remain open > until Apr 30, 2018 23:45 UTC. [...] In finest OpenStack scaling tradition, we seem to have overloaded CIVS with the volume of ballots we wanted to send and so ended up re-adding around 15% (the ones which looked like they generated errors according to the WebUI feedback). If you receive a second ballot E-mail, you can discard it. They're just duplicates with the same unique URL in them, so not good for a second vote. ;) -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rochelle.grober at huawei.com Tue Apr 24 01:30:57 2018 From: rochelle.grober at huawei.com (Rochelle Grober) Date: Tue, 24 Apr 2018 01:30:57 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1524503764-sup-3631@lrrr.local> References: <1524491647-sup-1779@lrrr.local> <8b5aa508-b859-1172-4ea1-3a40d70989b4@openstack.org> <1524503764-sup-3631@lrrr.local> Message-ID: Doug Hellmann wrote: > I would like for us to collect some more data about what efforts teams are > making with encouraging new contributors, and what seems to be working or > not. In the past we've done pretty well at finding new techniques by > experimenting within one team and then adapting the results to scale them > out to other teams. > > Does anyone have any examples of things that we ought to be trying more > of? > Okay, here I am sticking my foot in it after reading all the other excellent replies.
Lots of good suggestions. Matt, Zane, Chris, Rico, etc. Here is another one: I've noticed that as the projects mature, they have developed some new processes that are regular, but not daily. Some are baked into the schedule, others are scheduled on a semi-recurring basis but not "official". One that I've seen a few times is the "bug swat day". Some projects are scheduling triage and fix days throughout the cycle. One project just decided to make it monthly. This is great. Invite Ops and users to participate. Invite the folks who filed the bugs you might fix to participate. Use IRC, paste and etherpad to develop the fixes and show the symptoms. Maybe to develop the test to demonstrate the fix works, too. If an operator really wants to see a bug fixed, they let the project know and let them know when they will turn up in IRC to help. If they help enough, add them as co-owner of the patch. Don't make them get all the accounts (if that's possible with Gerrit), just put their name on it. They'll be overjoyed to both have the bug fixed *and* get some credit for stepping up. This gets devs, users and ops all on the same IRC page, focusing on end-user problems and collaborating on solutions in a regularly scheduled day(time) slot. And the "needs more info" problem for bugs gets solved. You can also invite everyone to Spec review days, or test writing days, or documentation days. And you can invite students, academics, etc. If people know that *if* they show up *and* they are willing to discuss symptoms, ask questions, provide logs, whatever, then some pain in their butt will be closer to getting fixed, some will show up. You give them credit and they'll feel even better showing up. Not quite drive-by contributors, but it means pain points get addressed based on participation and existing contributors partner with the people who know the pain points to solve them. On a regularly scheduled basis. Oh, and you can put these days on the OpenStack event schedule, too. --Rocky From tony at bakeyournoodle.com Tue Apr 24 01:49:20 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Tue, 24 Apr 2018 11:49:20 +1000 Subject: [openstack-dev] [tripleo] roadmap on containers workflow In-Reply-To: References: <1bf26224-6cb3-099b-f36a-88e0138eb502@redhat.com> Message-ID: <20180424014919.GE6516@thor.bakeyournoodle.com> On Sun, Apr 15, 2018 at 07:24:58PM -0700, Emilien Macchi wrote: > This patch: https://review.openstack.org/#/c/561377 is deploying Docker and > Docker Registry v2 *before* containers deployment in the docker_steps. > It's using the external_deploy_tasks interface that runs right after the > host_prep_tasks, so still before starting containers. > > It's using the Ansible role that was prototyped on Friday, please take a > look and raise any concern. > Now I would like to investigate how we can run container workflows between > the deployment and docker and containers deployments. This looks pretty good to me, and if I understand correctly, as it's creating a v2 registry we'll get manifest list images (for multi-arch) by default, which is a massive win for me. Thanks Emilien > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From zhipengh512 at gmail.com Tue Apr 24 02:22:48 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 24 Apr 2018 10:22:48 +0800 Subject: [openstack-dev] Public Cloud WG PTG Summary Message-ID: Hi team, Sorry for this long overdue summary. During the Dublin PTG as a WG we held two successful discussion sessions on Mon and Tues, and below are the conclusions for this year's planning as far as I can recall. Please feel free to provide further feedback :) - Passport Program v2 - We want to push forward the passport program into the v2 stage this year, including QR code promotion, more member clouds (APAC and North America) and possibly a blockchain experiment (cloud ledger proposal [0]) targeting the Berlin Summit, if the testnet proves to be successful. - We will also be looking into the possibility of having OpenLab as a special member of the Passport Program, to help ease some of the difficulties purely business-facing or academic clouds face in joining the initiative. - Public Cloud Feature List - We will aim to have a more formal draft of the feature list [1] ready for Vancouver and gather additional requirements at the Vancouver summit. It is also possible for us to do a white paper based upon the feature list content this year, to help users and operators alike better understand what OpenStack public clouds can offer. - Public Cloud SDK Certification - Chris Hoge, Dims and Melvin have been helping to put together a testing plan for public cloud SDK certification based upon the initial work the OpenLab team has achieved. The Public Cloud WG will provide an interop-like guideline based upon the testing mechanism. - Public Cloud Meetup - We look forward to having more :) [0] https://docs.google.com/presentation/d/1RYRq1YdYEoZ5KNKwlDDtnunMdoYRAHPjPslnng3VqcI/edit?usp=sharing [1] https://docs.google.com/spreadsheets/d/1Mf8OAyTzZxCKzYHMgBl-QK_2-XSycSkOjqCyMTIedkA/edit?usp=sharing -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From sam47priya at gmail.com Tue Apr 24 02:50:56 2018 From: sam47priya at gmail.com (Sam P) Date: Tue, 24 Apr 2018 11:50:56 +0900 Subject: [openstack-dev] [masakari] Introspective Instance Monitoring through QEMU Guest Agent In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E962F2E66@ALA-MBD.corp.ad.wrs.com> References: <47EFB32CD8770A4D9590812EE28C977E962F2E66@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Louie, Thanks for bringing this up. I have added this topic to today's meeting agenda. --- Regards, Sampath On Tue, Apr 24, 2018 at 5:15 AM, Kwan, Louie wrote: > Submitted the following review on January 17, 2018, > > https://review.openstack.org/#/c/534958/ > > Would like to know who else could be on the reviewer list ? or anything > else is needed for the next step? > > Also, I am planning to attend our coming Masakari Weekly meeting, April > 24, 0400 UTC in #openstack-meeting > and would like add an agenda item to follow up how to move the review > forward. > > Thanks.
> Louie > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From myoung at redhat.com Tue Apr 24 03:34:58 2018 From: myoung at redhat.com (Matt Young) Date: Mon, 23 Apr 2018 23:34:58 -0400 Subject: [openstack-dev] [tripleo] CI Community Meeting tomorrow (2018-04-24) Message-ID: Greetings, Tomorrow the CI team will be hosting its weekly Community Meeting. We welcome any/all to join. The meeting is a place to discuss any concerns / questions / issues from the community regarding CI. It will (as usual) be held immediately following the general #tripleo meeting on BlueJeans [1], typically ~14:30 UTC. Please feel free to add items to the agenda [2] or simply come and chat. Thanks, Matt [1] https://bluejeans.com/7050859455 [2] https://etherpad.openstack.org/p/tripleo-ci-squad-meeting -------------- next part -------------- An HTML attachment was scrubbed... URL: From j.harbott at x-ion.de Tue Apr 24 05:49:25 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Tue, 24 Apr 2018 07:49:25 +0200 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? In-Reply-To: References: Message-ID: 2018-04-23 13:11 GMT+02:00 Graham Hayes : > Hi All, > > We moved our meeting time to 14:00UTC on Wednesdays, but attendance > has been low, and it is also the middle of the night for one of our > cores. > > I would like to suggest we have an office hours style meeting, with > one in the UTC evening and one in the UTC morning. > > If this seems reasonable - when and what frequency should we do > them? What times suit the current set of contributors? My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though extending a couple of hours in either direction might be possible for me, too. If we do alternating times, with the current amount of work happening we could maybe make each of them monthly, so we end up with a roughly bi-weekly schedule. I also have a slight preference for continuing to use one of the meeting channels as opposed to meeting in the designate channel, if that is what "office hours style meeting" is meant to imply. From sam47priya at gmail.com Tue Apr 24 06:33:03 2018 From: sam47priya at gmail.com (Sam P) Date: Tue, 24 Apr 2018 15:33:03 +0900 Subject: [openstack-dev] [all][ptl][release][masakari][murano][qinling][searchlight][zaqar] reminder for rocky-1 milestone deadline In-Reply-To: <1524246767-sup-1191@lrrr.local> References: <1524143700-sup-9515@lrrr.local> <1524246767-sup-1191@lrrr.local> Message-ID: Hi Doug and ALL, Thank you for the reminder. This was my mistake and I am really sorry for any inconvenience this may have caused. This will not happen again. Patches are up for review for the following projects: > masakari-monitors > masakari --- Regards, Sampath On Sat, Apr 21, 2018 at 3:04 AM, Doug Hellmann wrote: > Excerpts from Doug Hellmann's message of 2018-04-19 09:15:49 -0400: > > Today is the deadline for proposing a release for the Rocky-1 milestone. > > Please don't forget to include your libraries (client or otherwise) as > well.
> > > > Doug > > A few projects have missed the first milestone tagging deadline: > > ​​ > masakari-monitors > masakari > > murano-dashboard > > qinling > > searchlight-ui > searchlight > > zaqar-ui > zaqar > > The policy on missing deadlines this cycle is changing [1]: > > Projects using milestones are expected to tag at least 2 out of > the 3 for each cycle, or risk being dropped as an official project. > The release team will remind projects that miss the first milestone, > and force tags on any later milestones by tagging HEAD at the > time of the deadline. > > The masakari, murano, qinling, searchlight, and zaqar teams should > consider this your reminder. > > We really don't want to be making decisions for you about what > constitutes a good release, but we also do not want to have projects > that are not preparing releases. Please keep up with the deadlines. > > Doug > > [1] https://review.openstack.org/#/c/561258 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eandersson at blizzard.com Tue Apr 24 06:37:25 2018 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Tue, 24 Apr 2018 06:37:25 +0000 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? In-Reply-To: References: , Message-ID: I can do anytime ranging from 16:00 UTC to 03:00 UTC, Mon-Fri, maybe up to 07:00 UTC assuming that it's once bi-weekly. ________________________________ From: Jens Harbott Sent: Monday, April 23, 2018 10:49:25 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [designate] Meeting Times - change to office hours? 2018-04-23 13:11 GMT+02:00 Graham Hayes : > Hi All, > > We moved our meeting time to 14:00UTC on Wednesdays, but attendance > has been low, and it is also the middle of the night for one of our > cores. > > I would like to suggest we have an office hours style meeting, with > one in the UTC evening and one in the UTC morning. > > If this seems reasonable - when and what frequency should we do > them? What times suit the current set of contributors? My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though extending a couple of hours in either direction might be possible for me, too. If we do alternating times, with the current amount of work happening we maybe could make each of them monthly, so we end up with a roughly bi-weekly schedule. I also have a slight preference for continuing to use one of the meeting channels as opposed to meeting in the designate channel, if that is what "office hours style meeting" is meant to imply. __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Tue Apr 24 07:08:22 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 24 Apr 2018 15:08:22 +0800 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: 2018-04-24 5:51 GMT+08:00 Arvind N : > Thanks for the detailed options Matt/eric/jay. 
> > Just few of my thoughts, > > For #1, we can make the explanation very clear that we rejected the > request because the original traits specified in the original image and the > new traits specified in the new image do not match and hence rebuild is not > supported. > > For #2, > > Other Cons: > > 1. None of the filters currently make other API requests and my > understanding is we want to avoid reintroducing such a pattern. But > definitely workable solution. > 2. If the user disables the image properties filter, then traits based > filtering will not be run in rebuild case > > For #3, > > Even though it handles the nested provider, there is a potential issue. > > Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), another > one with some kind of offload feature(VF2).(Described by alex) > > Initial instance launch happens with VF:1 allocated, rebuild launches with > modified request with traits=HW_NIC_OFFLOAD_X, so basically we want the > instance to be allocated VF2. > > But the original allocation happens against VF1 and since in rebuild the > original allocations are not changed, we have wrong allocations. > Yes, that is the case I described, and none of #1, #2, #3 or #4, nor the proposal in this thread, works for it. The problem isn't just checking the traits on the nested resource provider. We also need to ensure the trait is on exactly the same child resource provider. Or we need to adjust the allocations for the child resource provider. > for #4, there is good amount of pushback against modifying the > allocation_candiadates api to not have resources. > > Jay: > for the GET /resource_providers?in_tree=&required=, > nested resource providers and allocation pose a problem see #3 above. > > I will investigate erics option and update the spec. > -- > Arvind N > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Tue Apr 24 07:14:59 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Tue, 24 Apr 2018 07:14:59 +0000 Subject: [openstack-dev] [Vitrage] Vitrage graph error In-Reply-To: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> References: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> Message-ID: <720B7050-A658-4E3D-BB60-C9AECC5D4186@nokia.com> Hi Minwook, Is the problem only in the Entity Graph? Do the Alarms view and the Topology view work? And what about the CLI? I’ll check it and get back to you. Thanks, Ifat From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Monday, 23 April 2018 at 16:02 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: [openstack-dev] [Vitrage] Vitrage graph error Hello Vitrage team, A few days ago I used Devstack to install the Openstack master version, which included Vitrage. However, I found that the Vitrage graph does not work on the Vitrage-dashboard. The state of all Vitrage components is active. Could you check it once? Thanks. Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From zhipengh512 at gmail.com Tue Apr 24 07:22:51 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Tue, 24 Apr 2018 15:22:51 +0800 Subject: [openstack-dev] [publiccloud-wg]KubeCon EU Public Cloud Meetup ? Message-ID: Hi, I'm wondering, for people who will attend KubeCon EU, is there any interest in a public cloud meetup? We could discuss many of the items listed in the PTG summary I just sent out at the meetup :) -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed... URL: From delightwook at ssu.ac.kr Tue Apr 24 07:59:05 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Tue, 24 Apr 2018 16:59:05 +0900 Subject: [openstack-dev] [Vitrage] Vitrage graph error In-Reply-To: <720B7050-A658-4E3D-BB60-C9AECC5D4186@nokia.com> References: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> <720B7050-A658-4E3D-BB60-C9AECC5D4186@nokia.com> Message-ID: <03ef01d3dba2$1e1d06c0$5a571440$@ssu.ac.kr> Hello Ifat, I have not checked the alarm yet. (I think it does not work.) However, I confirmed that the entity graph and the topology do not work. Additionally, the CLI does not seem to work either. I'll check it out with you. : ) Thank you. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 24, 2018 4:15 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hi Minwook, Is the problem only in the Entity Graph? Do the Alarms view and the Topology view work? And what about the CLI? I’ll check it and get back to you. Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, 23 April 2018 at 16:02 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: [openstack-dev] [Vitrage] Vitrage graph error Hello Vitrage team, A few days ago I used Devstack to install the Openstack master version, which included Vitrage. However, I found that the Vitrage graph does not work on the Vitrage-dashboard. The state of all Vitrage components is active. Could you check it once? Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 11014 bytes Desc: not available URL: From sangho at opennetworking.org Tue Apr 24 08:22:33 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Tue, 24 Apr 2018 17:22:33 +0900 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> Message-ID: <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> Dear Neutron-Release team members, Can any of you handle the issue below? Thank you so much for your help in advance.
Sangho > On 20 Apr 2018, at 10:01 AM, Sangho Shin wrote: > > Dear Neutron-Release team, > > I wonder if any of you can add me to the networking-onos-release group. > It seems that Vikram is busy. :-) > > Thank you, > > Sangho > > > >> On 19 Apr 2018, at 9:18 AM, Sangho Shin wrote: >> >> Ian, >> >> Thank you so much for your help. >> I have requested Vikram to add me to the release team. >> He should be able to help me. :-) >> >> Sangho >> >> >>> On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote: >>> >>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote: >>>> By the way, since the networking-onos-release group has no neutron >>>> release team group, I think infra team can help to include neutron >>>> release team and neutron release team can help to create branches >>>> for the repo if there is no reponse from current >>>> networking-onos-release group member. >>> >>> This seems sane and I've added neutron-release to >>> networking-onos-release. >>> >>> I'm hesitant to give advice on branching within a project like neutron >>> as I'm sure there's stuff I'm not aware of; but members of the >>> neutron-release team should be able to get you going. >>> >>> Thanks, >>> >>> -i >> > From balazs.gibizer at ericsson.com Tue Apr 24 08:25:11 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Tue, 24 Apr 2018 10:25:11 +0200 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: <1524558311.25291.2@smtp.office365.com> On Tue, Apr 24, 2018 at 9:08 AM, Alex Xu wrote: > > > 2018-04-24 5:51 GMT+08:00 Arvind N : >> Thanks for the detailed options Matt/eric/jay. >> >> Just few of my thoughts, >> >> For #1, we can make the explanation very clear that we rejected the >> request because the original traits specified in the original image >> and the new traits specified in the new image do not match and hence >> rebuild is not supported. >> >> For #2, >> >> Other Cons: >> None of the filters currently make other API requests and my >> understanding is we want to avoid reintroducing such a pattern. But >> definitely workable solution. >> If the user disables the image properties filter, then traits based >> filtering will not be run in rebuild case >> For #3, >> >> Even though it handles the nested provider, there is a potential >> issue. >> >> Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), >> another one with some kind of offload feature(VF2).(Described by >> alex) >> >> Initial instance launch happens with VF:1 allocated, rebuild >> launches with modified request with traits=HW_NIC_OFFLOAD_X, so >> basically we want the instance to be allocated VF2. >> >> But the original allocation happens against VF1 and since in rebuild >> the original allocations are not changed, we have wrong allocations. > > > Yes, that is the case what I said, and none of #1,2,3,4 and the > proposal in this threads works also. > > The problem isn't just checking the traits in the nested resource > provider. We also need to ensure the trait in the exactly same child > resource provider. Or we need to adjust allocations for the child > resource provider. I agree that in_tree only ensures that the compute node tree has the required traits, but it does not take into account that only some of those RPs from the tree provide resources for the current allocation.
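To make that limitation concrete, here is a minimal sketch of the two placement calls involved. The endpoint, token and UUIDs are invented placeholders rather than anything from this thread, and the microversion comment reflects my understanding of when each query parameter appeared:

    import requests

    PLACEMENT = 'http://placement.example/placement'  # assumed endpoint
    HEADERS = {
        'X-Auth-Token': 'ADMIN_TOKEN',  # assumed credential
        # 'in_tree' needs microversion >= 1.14 and 'required' on this
        # endpoint >= 1.18, so request 1.18 for both.
        'OpenStack-API-Version': 'placement 1.18',
    }

    def rps_in_tree_with_trait(root_rp_uuid, trait):
        # Every provider in the compute node's tree that has the trait,
        # whether or not the instance allocates from it.
        resp = requests.get(PLACEMENT + '/resource_providers',
                            params={'in_tree': root_rp_uuid,
                                    'required': trait},
                            headers=HEADERS)
        resp.raise_for_status()
        return {rp['uuid'] for rp in resp.json()['resource_providers']}

    def rps_allocated_to(instance_uuid):
        # Only the providers the instance actually has allocations
        # against, keyed by provider UUID in the response body.
        resp = requests.get(PLACEMENT + '/allocations/' + instance_uuid,
                            headers=HEADERS)
        resp.raise_for_status()
        return set(resp.json()['allocations'])

Both VF providers from the example above could show up in the first set; only the intersection with the second set says whether the *allocated* VF has the trait.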
The algorithm Eric provided in a previous mail does the filtering for the RPs that are part of the instance allocation, so that sounds good to me. I think we should not try to adjust allocations during a rebuild. Changing the allocation would mean it is not a rebuild any more but a resize. Cheers, gibi > > >> >> for #4, there is good amount of pushback against modifying the >> allocation_candiadates api to not have resources. >> >> Jay: >> for the GET >> /resource_providers?in_tree=&required=, >> nested resource providers and allocation pose a problem see #3 above. >> >> I will investigate erics option and update the spec. >> -- >> Arvind N >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > From jlibosva at redhat.com Tue Apr 24 08:33:10 2018 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 24 Apr 2018 10:33:10 +0200 Subject: [openstack-dev] [neutron] Gate-failure bugs spring cleaning In-Reply-To: <40CBAEA8-B68B-482D-933E-76A97112A6E0@redhat.com> References: <40CBAEA8-B68B-482D-933E-76A97112A6E0@redhat.com> Message-ID: <9d5765e0-55f6-d04a-36e1-fa66abd4880e@redhat.com> On 21/04/2018 14:16, Slawomir Kaplonski wrote: > Hi Neutrinos, > > There is time for some spring cleaning now so I went through list of Neutron bugs with „gate-failure” tag https://tinyurl.com/y826rccx > I mark some of them as incomplete if there was not hits of same errors in last 30 days. Please reopen them with proper comment if You think that it is still valid bug or if You will spot similar error in some recent test runs. > > About some of them I’m not sure if are still valid so please check it and maybe update comment or close it if it’s already fixed somehow :) > > Below detailed summary of bugs which I checked: > > I removed Neutron from affected projects: > * https://bugs.launchpad.net/tempest/+bug/1660612 > > I marked as incomplete: > * https://bugs.launchpad.net/neutron/+bug/1687027 > * https://bugs.launchpad.net/neutron/+bug/1693931 > * https://bugs.launchpad.net/neutron/+bug/1676966 > > Bugs which needs check of owner: > * https://bugs.launchpad.net/neutron/+bug/1711463 - @Miguel, is it still valid? Can we close it? > * https://bugs.launchpad.net/neutron/+bug/1717302 - @Brian, no action since 2017-12-12, is it failing still? > > Bug which IMO should be reported against Cinder instead of Neutron, can someone check and confirm that: > * https://bugs.launchpad.net/neutron/+bug/1726462 - Is it related to Neutron really, IMO it look like error with Cinder and it happens also in other than neutron jobs, like „devstack-platform-opensuse-tumbleweed” and „nova-multiattach” for example > > Still valid bugs probably: > * https://bugs.launchpad.net/neutron/+bug/1693950 - not exactly same error but same tests failures I found recently so I think it is still valid to check > * https://bugs.launchpad.net/neutron/+bug/1756301 - @Miguel: Can You check and confirm that this is still valid > * https://bugs.launchpad.net/neutron/+bug/1569621 - should be fixed by https://review.openstack.org/#/c/562220/ - @Jakub can You confirm that?
That is correct - it's tracked as a fix for https://bugs.launchpad.net/neutron/+bug/1708731 > > — > Best regards > Slawek Kaplonski > skaplons at redhat.com > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Tue Apr 24 09:11:16 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Apr 2018 11:11:16 +0200 Subject: [openstack-dev] [all] Topics for the Board+TC+UC meeting in Vancouver In-Reply-To: <1524496161-sup-6113@lrrr.local> References: <1df6fb53-ed96-8eea-a290-ab0b889a1ae2@openstack.org> <7c47c11d-6731-09e7-50ad-76b22eab11c1@ham.ie> <1524496161-sup-6113@lrrr.local> Message-ID: <5f27828a-cde1-f5fa-182f-742fa2ef5064@openstack.org> Doug Hellmann wrote: > Excerpts from Graham Hayes's message of 2018-04-23 15:36:32 +0100: >> I think as an add on to this, would to ask the board to talk to members >> and see what contributions they have made to the technical side of >> OpenStack. >> >> This should not just be Number of commits / reviews / bugs etc but >> also the motivation for the work, e.g. - Feature for a product, bug fix >> found in a product, cross project work or upstream project maintenance. > > A while back Jay Pipes suggested that we ask contributing companies > to summarize their work. I think that was in the context of > understanding what platinum members are doing, but it could apply > to everyone. By leaving the definition of "contribution" open-ended > and asking as a way to celebrate those contributions, we could avoid > any sense of shaming as well as see what the companies consider to > be important. Yes, we discussed this in Sydney and I took the action to try to include it in the Foundation annual report. You can find the result in the Foundation annual report this year: https://www.openstack.org/assets/reports/OpenStack-AnnualReport2017.pdf See pages 7-9. Obviously not optimal (not everybody answered, and some of the responses are a bit off-topic), but we had limited time to pull it off and I think it's a good first step. We can take that as a basis for the next stage of discussion. -- Thierry Carrez (ttx) From thierry at openstack.org Tue Apr 24 09:21:22 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Apr 2018 11:21:22 +0200 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: References: <1524490775-sup-9488@lrrr.local> Message-ID: <5eb7ef27-c5f1-11e6-0d7e-81b911499fb8@openstack.org> Rico Lin wrote: > I think we have a start now with providing a decent map to show services > in OpenStack and fill in with projects. What we should have and will be > nice is to ask projects to search through the map (with a brief > introduction of services) when they're registering. To prevent > overlapping from the very beginning seems to be the most ideal, which > might also mean it's actually our responsibility to search through what > exactly a project aims to and what kind of feature it will provide when > we allow people to register a project. I like the idea of asking a new project to tell us where they expect to fit in the OpenStack Map[1]. Projects don't exist in a vacuum, and the more they fit in the existing layout / buckets the better the overall "product" (framework of cooperating components) will look.
Personally I find that every time I have trouble placing a new project on the map, it's symptomatic of a deeper issue (like unclear scope / usage definition) that needs to be discussed early rather than late. [1] https://www.openstack.org/openstack-map -- Thierry Carrez (ttx) From thierry at openstack.org Tue Apr 24 09:55:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Apr 2018 11:55:03 +0200 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> Message-ID: <9f3c34e0-076c-4a6e-f073-eee57d0daaae@openstack.org> Zane Bitter wrote: > [...] > I would love to see us have a conversation as a community to figure out > what we all, collectively, think that list should look like and document > it. Ideally new projects shouldn't have to wait until they've applied to > join OpenStack to get a sense of whether we believe they're furthering > our mission or not. I agree that we are not really (collectively) taking a step back and looking at the big picture. Forcing myself to work on a map over the past year really helped me reframe how I perceive OpenStack, and I think we should do that sort of exercise more often. What do you think should be the right forum for continuing that discussion? Is that something you think we should discuss at the Forum[tm] ? Or more as an asynchronous discussion at the TC level ? -- Thierry Carrez (ttx) From dougal at redhat.com Tue Apr 24 10:05:00 2018 From: dougal at redhat.com (Dougal Matthews) Date: Tue, 24 Apr 2018 11:05:00 +0100 Subject: [openstack-dev] [mistral] September PTG in Denver In-Reply-To: <20180423195823.GC17397@sm-xps> References: <20180423195823.GC17397@sm-xps> Message-ID: On 23 April 2018 at 20:58, Sean McGinnis wrote: > On Mon, Apr 23, 2018 at 07:32:40PM +0000, Kendall Nelson wrote: > > Hey Dougal, > > > > I think I had said May 2nd in my initial email asking about attendance. > If > > you can get an answer out of your team by then I would greatly appreciate > > it! If you need more time please let me know by then (May 2nd) instead. > Whoops - thanks for the correction. > > > > -Kendall (diablo_rojo) > > > > Do we need to collect this data for September already by the beginning of > May? > > Granted, the sooner we know details and can start planning, the better. > But as > I started looking over the survey, it just seems really early to predict > where > things will be 5 months from now. Especially considering we will have a > different set of PTLs for many projects by then, and it is too early for > some > of those hand off discussions to have started yet. > Good question! I don't mean to ask people to commit 100% or not, I just want to know their intentions so I have more information when filling out the survey. > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Apr 24 10:07:03 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Apr 2018 12:07:03 +0200 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? 
In-Reply-To: <37bb1679-a884-628d-08f3-4856a750ce31@redhat.com> References: <1524489055-sup-8435@lrrr.local> <37bb1679-a884-628d-08f3-4856a750ce31@redhat.com> Message-ID: <5cda2e45-3989-b59a-63fd-e29fe76b7c2d@openstack.org> Zane Bitter wrote: > [...]> I definitely don't want to get rid of office hours, and I think the > reasons for dropping the meeting (encouraging geographically diverse > participation) are still valid. I'd like to see the TC come up with a > program of work for the term after each Summit, and actively track the > progress of it using asynchronous tools - perhaps Storyboard supported > by follow-ups on the mailing list. FWIW we did translate the work items we discussed in Dublin into a set of StoryBoard stories at: https://storyboard.openstack.org/#!/project/923 But it's pretty recent :) -- Thierry Carrez (ttx) From thierry at openstack.org Tue Apr 24 10:24:45 2018 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 24 Apr 2018 12:24:45 +0200 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> Message-ID: <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> Fox, Kevin M wrote: > OpenStack has created artificial walls between the various Projects. It shows up, for example, as holes in usability at a user level or extra difficulty for operators juggling around so many projects. Users and for the most part, Operators don't really care about project organization, or ptls, or cores or such. OpenStack has made some progress this direction with stuff like the unified cli. But OpenStack is not very unified. I've been giving this some thought (in the context of a presentation I was giving on hard lessons learned from 8 years of OpenStack). I think that organizing development around project teams and components was the best way to cope with the growth of OpenStack in 2011-2015 and get to a working set of components. However it's not the best organization to improve on the overall "product experience", or for a maintenance phase. While it can be confusing, I like the two-dimensional approach that Kubernetes followed (code ownership in one dimension, SIGs in the other). The introduction of SIGs in OpenStack, beyond providing a way to build closer feedback loops around specific topics, can help us tackle this "unified experience" problem you raised. The formation of the upgrades SIG, or the self-healing SIG is a sign that times change. Maybe we need to push in that direction even more aggressively and start thinking about de-emphasizing project teams themselves. -- Thierry Carrez (ttx) From davanum at gmail.com Tue Apr 24 11:24:02 2018 From: davanum at gmail.com (Davanum Srinivas) Date: Tue, 24 Apr 2018 07:24:02 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> Message-ID: Thierry, please see below: On Tue, Apr 24, 2018 at 6:24 AM, Thierry Carrez wrote: > Fox, Kevin M wrote: >> OpenStack has created artificial walls between the various Projects.
It shows up, for example, as holes in usability at a user level or extra difficulty for operators juggling around so many projects. Users and for the most part, Operators don't really care about project organization, or ptls, or cores or such. OpenStack has made some progress this direction with stuff like the unified cli. But OpenStack is not very unified. > > I've been giving this some thought (in the context of a presentation I > was giving on hard lessons learned from 8 years of OpenStack). I think > that organizing development around project teams and components was the > best way to cope with the growth of OpenStack in 2011-2015 and get to a > working set of components. However it's not the best organization to > improve on the overall "product experience", or for a maintenance phase. > > While it can be confusing, I like the two-dimensional approach that > Kubernetes followed (code ownership in one dimension, SIGs in the > other). The introduction of SIGs in OpenStack, beyond providing a way to > build closer feedback loops around specific topics, can help us tackle > this "unified experience" problem you raised. The formation of the > upgrades SIG, or the self-healing SIG is a sign that times change. Maybe > we need to push in that direction even more aggressively and start > thinking about de-emphasizing project teams themselves. Big +1. Another thing to check into is how we can split some of the work the PTL does into multiple roles ... that are short-term and are rotated around. Hoping that will help with the problem where we need folks to be totally available full time to do meaningful work in a project. > -- > Thierry Carrez (ttx) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Davanum Srinivas :: https://twitter.com/dims From yamamoto at midokura.com Tue Apr 24 12:11:42 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 24 Apr 2018 21:11:42 +0900 Subject: [openstack-dev] [neutron] Bug deputy report Message-ID: hi, here's a summary of this week.
RFEs for drivers team: https://bugs.launchpad.net/neutron/+bug/1766380 [RFE] Create host-routes for routed networks (segments) https://bugs.launchpad.net/neutron/+bug/1764738 routed provider networks limit to one host Medium: https://bugs.launchpad.net/neutron/+bug/1764330 Cannot set --no-share on shared network covered also by "access_as_shared" RBAC policy https://bugs.launchpad.net/neutron/+bug/1763627 neutron service-provider-list return duplicated entries https://bugs.launchpad.net/neutron/+bug/1763604 neutron-ovs-cleanup failing when there are too many ports in bridge https://bugs.launchpad.net/neutron/+bug/1765452 Unable to use project_id as sort_key Low: https://bugs.launchpad.net/neutron/+bug/1765208 IPtables firewall code sometimes tries to remove non-existent rules Wishlist: https://bugs.launchpad.net/neutron/+bug/1765519 Add fullstack tests for shared networks API Incomplete / waiting for feedback from the submitter: https://bugs.launchpad.net/neutron/+bug/1762708 Unable to ssh/ping IP assigned to VM deployed on flat network https://bugs.launchpad.net/neutron/+bug/1762733 l3agentscheduler doesn't return a response body with POST /v2.0/agents/{agent_id}/l3-routers https://bugs.launchpad.net/neutron/+bug/1765691 OVN vlan networks use geneve tunneling for SNAT traffic https://bugs.launchpad.net/neutron/+bug/1765530 VM failed to reboot after compute host reboot in Queens From cdent+os at anticdent.org Tue Apr 24 12:35:23 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Tue, 24 Apr 2018 13:35:23 +0100 (BST) Subject: [openstack-dev] [tc] [all] TC Report 18-17 Message-ID: HTML: https://anticdent.org/tc-report-18-17.html The main TC-related activity over the past week has been the [elections](https://governance.openstack.org/election/) currently in progress. A quiet campaigning period burst into late activity with a series of four questions posted in email by Doug Hellmann: * [campaign question related to new projects](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html) * [How "active" should the TC be?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129658.html) * [How should we handle projects with overlapping feature sets?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html) * [How can we make contributing to OpenStack easier?](http://lists.openstack.org/pipermail/openstack-dev/2018-April/129664.html) I feel we should be working with these sorts of questions and conversations more frequently. There are many good ideas and observations in the threads. Voting continues until the end of the day on April 30th. If you're eligible to vote, please do. You should receive an email with the subject "Poll: Rocky TC Election" that includes a link to vote. If you feel you are eligible but did not receive a link, contact the [election officials](https://governance.openstack.org/election/#election-officials). Acknowledgement that OpenStack needs to make some changes, and willingness to consider new options, is evident in the above emails. These are interesting times and your vote will help drive change. Through the week there have been many different bits of conversation related to the election. [Scan the logs](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/index.html) to get all the details. There have been a few other topics throughout the week.
# Python 3 There's been [some discussion](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-17.log.html#t2018-04-17T11:56:56) on how to proceed with the migration to Python 3, including [a review](https://review.openstack.org/#/c/561922/) to allow projects to choose to only support Python 3. There are some differences of opinion on this, some of which are driven by positions on the extent to which the upstream OpenStack community should adapt its schedule to the velocity of downstreams (in this case, RHEL). # Kolla K8s The fate of the kolla-kubernetes repo remained a big topic of conversation. More people are getting involved, leading to some more informed decisions. See discussion on [Wednesday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-18.log.html#t2018-04-18T12:31:35) (including quite a bit about the use of containers in OpenStack, for OpenStack) and [Thursday](http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2018-04-19.log.html#t2018-04-19T15:01:03) # Next If I get elected to another term on the TC I plan to continue writing these weekly reports, and even if not I'll endeavor to keep track of things and report on the highlights, but perhaps less regularly. However, for the next two weeks there will be no reporting: my brother is visiting and I'm going to take some time off with him to walk around and think about something besides OpenStack. I'll pick things back up in mid-May. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From openstack at fried.cc Tue Apr 24 12:53:47 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 24 Apr 2018 07:53:47 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: > The problem isn't just checking the traits in the nested resource > provider. We also need to ensure the trait in the exactly same child > resource provider. No, we can't get "granular" with image traits. We accepted this as a limitation for the spawn aspect of this spec [1], for all the same reasons [2]. And by the time we've spawned the instance, we've lost the information about which granular request groups (from the flavor) were satisfied by which resources - retrofitting that information from a new image would be even harder. So we need to accept the same limitation for rebuild. [1] "Due to the difficulty of attempting to reconcile granular request groups between an image and a flavor, only the (un-numbered) trait group is supported. The traits listed there are merged with those of the un-numbered request group from the flavor." (http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html#proposed-change) [2] https://review.openstack.org/#/c/554305/2/specs/rocky/approved/glance-image-traits.rst at 86 From jaypipes at gmail.com Tue Apr 24 12:55:01 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 24 Apr 2018 08:55:01 -0400 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: <530903a4-701d-595e-acc3-05369697cf06@gmail.com> On 04/23/2018 05:51 PM, Arvind N wrote: > Thanks for the detailed options Matt/eric/jay. 
> > Just few of my thoughts, > > For #1, we can make the explanation very clear that we rejected the > request because the original traits specified in the original image and > the new traits specified in the new image do not match and hence rebuild > is not supported. I believe I had suggested that on the spec amendment patch. Matt had concerns about an error message being a poor user experience (I don't necessarily disagree with that) and I had suggested a clearer error message to try and make that user experience slightly less sucky. > For #3, > > Even though it handles the nested provider, there is a potential issue. > > Lets say a host with two SRIOV nic. One is normal SRIOV nic(VF1), > another one with some kind of offload feature(VF2).(Described by alex) > > Initial instance launch happens with VF:1 allocated, rebuild launches > with modified request with traits=HW_NIC_OFFLOAD_X, so basically we want > the instance to be allocated VF2. > > But the original allocation happens against VF1 and since in rebuild the > original allocations are not changed, we have wrong allocations. Yep, that is certainly an issue. The only solution to this that I can see would be to have the conductor ask the compute node to do the pre-flight check. The compute node already has the entire tree of providers, their inventories and traits, along with information about providers that share resources with the compute node. It has this information in the ProviderTree object in the reportclient that is contained in the compute node resource tracker. The pre-flight check, if run on the compute node, would be able to grab the allocation records for the instance and determine if the required traits for the new image are present on the actual resource providers allocated against for the instance (and not including any child providers not allocated against). Or... we chalk this up as a "too bad" situation and just either go with option #1 or simply don't care about it. Best, -jay From yamamoto at midokura.com Tue Apr 24 12:55:26 2018 From: yamamoto at midokura.com (Takashi Yamamoto) Date: Tue, 24 Apr 2018 21:55:26 +0900 Subject: [openstack-dev] [neutron] Bug deputy report In-Reply-To: References: Message-ID: oops, i forgot to add the critical one. Critical https://bugs.launchpad.net/neutron/+bug/1765008 Tempest API tests failing for stable/queens branch On Tue, Apr 24, 2018 at 9:11 PM, Takashi Yamamoto wrote: > hi, > > here's a summary of this week. 
> RFEs for drivers team: > https://bugs.launchpad.net/neutron/+bug/1766380 [RFE] Create > host-routes for routed networks (segments) > https://bugs.launchpad.net/neutron/+bug/1764738 routed provider > networks limit to one host > > Medium: > https://bugs.launchpad.net/neutron/+bug/1764330 Cannot set --no-share > on shared network covered also by "access_as_shared" RBAC policy > https://bugs.launchpad.net/neutron/+bug/1763627 neutron > service-provider-list return duplicated entries > https://bugs.launchpad.net/neutron/+bug/1763604 neutron-ovs-cleanup > failing when there are too many ports in bridge > https://bugs.launchpad.net/neutron/+bug/1765452 Unable to use > project_id as sort_key > > Low: > https://bugs.launchpad.net/neutron/+bug/1765208 IPtables firewall code > sometimes tries to remove non-existent rules > > Wishlist: > https://bugs.launchpad.net/neutron/+bug/1765519 Add fullstack tests > for shared networks API > > Incomplete / waiting a feedback from the submitter: > https://bugs.launchpad.net/neutron/+bug/1762708 Unable to ssh/ping IP > assigned to VM deployed on flat network > https://bugs.launchpad.net/neutron/+bug/1762733 l3agentscheduler > doesn't return a response body with POST > /v2.0/agents/{agent_id}/l3-routers > https://bugs.launchpad.net/neutron/+bug/1765691 OVN vlan networks use > geneve tunneling for SNAT traffic > https://bugs.launchpad.net/neutron/+bug/1765530 VM failed to reboot > after compute host reboot in Queens From ildiko.vancsa at gmail.com Tue Apr 24 12:55:26 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 24 Apr 2018 14:55:26 +0200 Subject: [openstack-dev] [os-upstream-institute] Prep call for Vancouver today - Minutes In-Reply-To: References: Message-ID: Hi All, I would like to thank all of you who were able to join our planning call yesterday for the Vancouver training. For meeting minutes please visit the following etherpad: https://etherpad.openstack.org/p/OUI-AIO-guide-planning We also recorded the call, so those of you who couldn't dial in can listen to the whole discussion here: https://zoom.us/recording/share/7ysh3YPRjhCh8cS7iJcT9EttcJ7cC3IBIpgshOCD9CuwIumekTziMw As we have only a couple of weeks left and a lot of tasks to complete, please look into the corresponding StoryBoard project and pick an item to complete: https://storyboard.openstack.org/#!/project/913 We are planning one more sync call the week of the training. We are aiming for May 14 at 2200 UTC, but it's still subject to change so stay tuned for further info. Also please note that __until the Vancouver training we will keep all our meetings on Mondays at 2000 UTC__. Please let me know if you have any questions. Thanks and Best Regards, Ildikó (IRC: ildikov) > On 2018. Apr 23., at 22:08, Ildiko Vancsa wrote: > > Hi Training Team, > > It is a friendly reminder that we will have a conference call on Zoom today at 2200 UTC as opposed to the weekly meeting to better sync up before the training in Vancouver. > > You can find the call details here: https://etherpad.openstack.org/p/openstack-upstream-institute-meetings > > Please let me know if you have any questions. > > Thanks, > Ildikó > (IRC: ildikov) > > From renat.akhmerov at gmail.com Tue Apr 24 12:46:36 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Tue, 24 Apr 2018 19:46:36 +0700 Subject: [openstack-dev] [mistral] September PTG in Denver In-Reply-To: References: <20180423195823.GC17397@sm-xps> Message-ID: <457cf2f4-84bd-44ab-bfb7-04b5c0b8d974@Spark> Dougal, Most likely, I’ll go.
Thanks Renat Akhmerov @Nokia On 24 Apr 2018 at 17:05 +0700, Dougal Matthews wrote: > > > > On 23 April 2018 at 20:58, Sean McGinnis wrote: > > > On Mon, Apr 23, 2018 at 07:32:40PM +0000, Kendall Nelson wrote: > > > > Hey Dougal, > > > > > > > > I think I had said May 2nd in my initial email asking about attendance. If > > > > you can get an answer out of your team by then I would greatly appreciate > > > > it! If you need more time please let me know by then (May 2nd) instead. > > > > Whoops - thanks for the correction. > > > > > > > > > > -Kendall (diablo_rojo) > > > > > > > > > > Do we need to collect this data for September already by the beginning of May? > > > > > > Granted, the sooner we know details and can start planning, the better. But as > > > I started looking over the survey, it just seems really early to predict where > > > things will be 5 months from now. Especially considering we will have a > > > different set of PTLs for many projects by then, and it is too early for some > > > of those hand off discussions to have started yet. > > > > Good question! I don't mean to ask people to commit 100% or not, I just want to know their intentions so I have more information when filling out the survey. > > > > > > > > Sean > > > > > > __________________________________________________________________________ > > > OpenStack Development Mailing List (not for usage questions) > > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From lhinds at redhat.com Tue Apr 24 13:10:01 2018 From: lhinds at redhat.com (Luke Hinds) Date: Tue, 24 Apr 2018 14:10:01 +0100 Subject: [openstack-dev] [OSSN-0083] Keystone policy rule "identity:get_identity_providers" was ignored Message-ID: Keystone policy rule "identity:get_identity_providers" was ignored --- ### Summary ### A policy rule in Keystone did not behave as intended, leading to a less secure configuration than would be expected. ### Affected Services / Software ### OpenStack Identity Service (Keystone) versions through Mitaka, as well as Newton (<= 10.0.3), and Ocata (<= 11.0.3). ### Discussion ### Deployments were unaffected by this problem if the default rule was changed or the "get_identity_providers" rule was manually changed to be "get_identity_provider" (singular) in keystone's `policy.json`. A spelling mistake in the default policy configuration caused these rules to be ignored. As a result, operators that attempted to restrict this API were unlikely to actually enforce it. ### Recommended Actions ### Update Keystone to a minimum version of 12.0.0.0b3. Additionally, this fix has been backported to Ocata (11.0.3) and Newton (10.0.3). Fix any lingering rules: `identity:get_identity_providers` should be changed to `identity:get_identity_provider`.
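As an illustration only, the corrected entry in keystone's `policy.json` would look something like the following (the `admin_required` rule shown here is just an example; keep whatever rule your deployment actually intends to enforce):

    {
        "identity:get_identity_provider": "rule:admin_required"
    }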
### Contacts / References ### Author: Nick Tait This OSSN : https://wiki.openstack.org/wiki/OSSN/OSSN-0083 Original LaunchPad Bug : https://bugs.launchpad.net/ossn/+bug/1703369 Mailing List : [Security] tag on openstack-dev at lists.openstack.org OpenStack Security Project : https://launchpad.net/~openstack-ossg -------------- next part -------------- A non-text attachment was scrubbed... Name: 0x3C202614.asc Type: application/pgp-keys Size: 1680 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: OpenPGP digital signature URL: From sbauza at redhat.com Tue Apr 24 13:26:09 2018 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 24 Apr 2018 15:26:09 +0200 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: <530903a4-701d-595e-acc3-05369697cf06@gmail.com> References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> <530903a4-701d-595e-acc3-05369697cf06@gmail.com> Message-ID: Sorry folks for the late reply, I'll try to also weigh in on the Gerrit change. On Tue, Apr 24, 2018 at 2:55 PM, Jay Pipes wrote: > On 04/23/2018 05:51 PM, Arvind N wrote: > >> Thanks for the detailed options Matt/Eric/Jay. >> >> Just a few of my thoughts, >> >> For #1, we can make the explanation very clear that we rejected the >> request because the original traits specified in the original image and the >> new traits specified in the new image do not match and hence rebuild is not >> supported. >> > > I believe I had suggested that on the spec amendment patch. Matt had > concerns about an error message being a poor user experience (I don't > necessarily disagree with that) and I had suggested a clearer error message > to try and make that user experience slightly less sucky. > > For #3, >> >> Even though it handles the nested provider, there is a potential issue. >> >> Let's say a host with two SR-IOV NICs: one is a normal SR-IOV NIC (VF1), another >> one with some kind of offload feature (VF2). (Described by Alex.) >> >> Initial instance launch happens with VF1 allocated; rebuild launches >> with a modified request with traits=HW_NIC_OFFLOAD_X, so basically we want >> the instance to be allocated VF2. >> >> But the original allocation happens against VF1 and since in rebuild the >> original allocations are not changed, we have wrong allocations. >> > > Yep, that is certainly an issue. The only solution to this that I can see > would be to have the conductor ask the compute node to do the pre-flight > check. The compute node already has the entire tree of providers, their > inventories and traits, along with information about providers that share > resources with the compute node. It has this information in the > ProviderTree object in the reportclient that is contained in the compute > node resource tracker. > > The pre-flight check, if run on the compute node, would be able to grab > the allocation records for the instance and determine if the required > traits for the new image are present on the actual resource providers > allocated against for the instance (and not including any child providers > not allocated against). > > Yup, that. We also have pre-flight checks for move operations like live and cold migrations, and I'd really like to keep all the conditionals in the conductor, because it knows better than the scheduler which operation is being asked for.
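As a rough illustration of that compute-side pre-flight check, a minimal sketch could look like the following (the helper and exception names are hypothetical, for illustration only; this is not actual Nova code):

    class RebuildPreflightFailed(Exception):
        """Hypothetical exception type, for illustration only."""

    def preflight_rebuild_check(provider_tree, allocations, image_traits):
        # 'allocations' is assumed to be keyed by resource provider UUID,
        # as in the allocation records placement returns for an instance.
        # Collect traits from only the providers the instance actually
        # allocates against, ignoring child providers with no allocations.
        found = set()
        for rp_uuid in allocations:
            found |= set(provider_tree.data(rp_uuid).traits)
        missing = set(image_traits) - found
        if missing:
            raise RebuildPreflightFailed(
                'New image requires traits %s which are not present on '
                'the providers allocated to the instance'
                % ', '.join(sorted(missing)))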
I'm not really happy with adding more in the scheduler about "yeah, it's a rebuild, so please do something exceptional", and I'm also not happy with having a filter (that can be disabled) calling the Placement API. > Or... we chalk this up as a "too bad" situation and just either go with > option #1 or simply don't care about it. Also, that too. Maybe just providing an error would be enough, no? Operators, what do you think? (cross-calling openstack-operators@) -Sylvain > > Best, > -jay > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Tue Apr 24 13:59:57 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 24 Apr 2018 09:59:57 -0400 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <9f3c34e0-076c-4a6e-f073-eee57d0daaae@openstack.org> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> <9f3c34e0-076c-4a6e-f073-eee57d0daaae@openstack.org> Message-ID: <46991161-337d-18f9-6a7a-a5f6c19e7588@redhat.com> On 24/04/18 05:55, Thierry Carrez wrote: > Zane Bitter wrote: >> [...] >> I would love to see us have a conversation as a community to figure out >> what we all, collectively, think that list should look like and document >> it. Ideally new projects shouldn't have to wait until they've applied to >> join OpenStack to get a sense of whether we believe they're furthering >> our mission or not. > > I agree that we are not really (collectively) taking a step back and > looking at the big picture. Forcing myself to work on a map over the > past year really helped me reframe how I perceive OpenStack, and I think > we should do that sort of exercise more often. > > What do you think should be the right forum for continuing that > discussion? Is that something you think we should discuss at the > Forum[tm] ? Or more as an asynchronous discussion at the TC level ? I think we need the widest audience possible, so if I had to pick one forum I would tend toward an asynchronous discussions e.g. on the mailing list. The Forum has limited attendance from developers, the PTG has limited attendance from operators, and both of those offer the opportunity for only a comparatively small number of people to speak, so I don't recommend that we try to have the discussion primarily in-person. It probably doesn't hurt to try to discuss it whenever we can with literally anybody who will listen though :) cheers, Zane. From soulxu at gmail.com Tue Apr 24 14:21:03 2018 From: soulxu at gmail.com (Alex Xu) Date: Tue, 24 Apr 2018 22:21:03 +0800 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: 2018-04-24 20:53 GMT+08:00 Eric Fried: > > The problem isn't just checking the traits in the nested resource > > provider. We also need to ensure the trait in the exactly same child > > resource provider. > > No, we can't get "granular" with image traits. We accepted this as a > limitation for the spawn aspect of this spec [1], for all the same > reasons [2].
And by the time we've spawned the > information about which granular request groups (from the flavor) were > satisfied by which resources - retrofitting that information from a new > image would be even harder. So we need to accept the same limitation > for rebuild. > > [1] "Due to the difficulty of attempting to reconcile granular request > groups between an image and a flavor, only the (un-numbered) trait group > is supported. The traits listed there are merged with those of the > un-numbered request group from the flavor." > (http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html#proposed-change) > [2] > https://review.openstack.org/#/c/554305/2/specs/rocky/approved/glance-image-traits.rst at 86 Why would we return an RP which has a specific trait but we won't consume any resources on it? If the case is that we request two VFs, and these two VFs have different required traits, then that should be a granular request. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mariusc at redhat.com Tue Apr 24 14:29:22 2018 From: mariusc at redhat.com (Marius Cornea) Date: Tue, 24 Apr 2018 10:29:22 -0400 Subject: [openstack-dev] [tripleo] Proposing Marius Cornea core on upgrade bits In-Reply-To: References: Message-ID: Thanks everyone for your trust and support. On Mon, Apr 23, 2018 at 5:35 PM, Emilien Macchi wrote: > Thanks everyone for your positive feedback. > I've updated Gerrit! > > Welcome Marius and thanks again for your hard work! > > On Mon, Apr 23, 2018 at 4:55 AM, James Slagle > wrote: >> >> On Thu, Apr 19, 2018 at 1:01 PM, Emilien Macchi >> wrote: >> > Greetings, >> > >> > As you probably know mcornea on IRC, Marius Cornea has been contributing >> > on >> > TripleO for a while, especially on the upgrade bits. >> > Part of the quality team, he's always testing real customer scenarios >> > and >> > brings a lot of good feedback in his reviews, and quite often takes care >> > of >> > fixing complex bugs when it comes to advanced upgrade scenarios. >> > He's very involved in the tripleo-upgrade repository where he's already >> > core, >> > but I think it's time to let him +2 on other tripleo repos for the >> > patches >> > related to upgrades (we trust people's judgement for reviews). >> > >> > As usual, we'll vote! >> > >> > Thanks everyone for your feedback and thanks Marius for your hard work >> > and >> > involvement in the project.
>> >> +1 >> >> >> -- >> -- James Slagle >> -- >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From fungi at yuggoth.org Tue Apr 24 14:39:27 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Apr 2018 14:39:27 +0000 Subject: [openstack-dev] [openstack-infra] [neutron] Change of control over network-onos (was: How to take over a project?) In-Reply-To: <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> Message-ID: <20180424143926.lqfgmurus6tmpcyo@yuggoth.org> On 2018-04-20 01:01:33 +0900 (+0900), Sangho Shin wrote: > Dear Neutron-Release team, > > I wonder if any of you can add me to the network-onos-release member. > It seems that Vikram is busy. :-) [...] I've adjusted the subject line here in hopes the thread might better catch their attention. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue Apr 24 14:45:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Apr 2018 14:45:05 +0000 Subject: [openstack-dev] [tc] campaign question related to new projects In-Reply-To: <46991161-337d-18f9-6a7a-a5f6c19e7588@redhat.com> References: <1524259233-sup-3003@lrrr.local> <92a3703e-428b-1793-b01f-5751ad0f4e33@redhat.com> <9f3c34e0-076c-4a6e-f073-eee57d0daaae@openstack.org> <46991161-337d-18f9-6a7a-a5f6c19e7588@redhat.com> Message-ID: <20180424144504.imkrtlwvy3xwh2eo@yuggoth.org> On 2018-04-24 09:59:57 -0400 (-0400), Zane Bitter wrote: [...] > the PTG has limited attendance from operators [...] I have high hopes that will not be the case for the next PTG. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From singh.surya64mnnit at gmail.com Tue Apr 24 14:48:07 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Tue, 24 Apr 2018 20:18:07 +0530 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: Message-ID: +1 As we don't have active core team in Kolla-kubernetes since months, unfortunately going for sunset is reasonable. Though happy to help in running OpenStack on kubernetes. --- Thanks Surya On Wed, Apr 18, 2018 at 7:21 AM, Jeffrey Zhang wrote: > Since many of the contributors in the kolla-kubernetes project are moved to > other things. And there is no active contributor for months. On the other > hand, there is another comparable project, openstack-helm, in the community. 
> For less confusion and less disruption of community resources, I propose to retire > the kolla-kubernetes project. > > For more discussion about this you can check the mail[0] and patch[1] > > Please vote +1 to retire the repo, or -1 not to retire the repo. The vote > will be open until everyone has voted, or for 1 week until April 25th, 2018. > > [0] > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html > [1] https://review.openstack.org/552531 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at fried.cc Tue Apr 24 15:01:10 2018 From: openstack at fried.cc (Eric Fried) Date: Tue, 24 Apr 2018 10:01:10 -0500 Subject: [openstack-dev] [nova][placement] Trying to summarize bp/glance-image-traits scheduling alternatives for rebuild In-Reply-To: References: <221636a9-4b8f-1098-10b8-2240a7cb0ff7@gmail.com> <8eec45ab-f9ed-cd96-51a1-9be78849fb9b@gmail.com> Message-ID: Alex- On 04/24/2018 09:21 AM, Alex Xu wrote: > > > 2018-04-24 20:53 GMT+08:00 Eric Fried: > > > The problem isn't just checking the traits in the nested resource > > provider. We also need to ensure the trait in the exactly same child > > resource provider. > > No, we can't get "granular" with image traits.  We accepted this as a > limitation for the spawn aspect of this spec [1], for all the same > reasons [2].  And by the time we've spawned the instance, we've lost the > information about which granular request groups (from the flavor) were > satisfied by which resources - retrofitting that information from a new > image would be even harder.  So we need to accept the same limitation > for rebuild. > > [1] "Due to the difficulty of attempting to reconcile granular request > groups between an image and a flavor, only the (un-numbered) trait group > is supported. The traits listed there are merged with those of the > un-numbered request group from the flavor." > (http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/glance-image-traits.html#proposed-change) > [2] > https://review.openstack.org/#/c/554305/2/specs/rocky/approved/glance-image-traits.rst at 86 > > > > Why would we return an RP which has a specific trait but we won't consume > any resources on it? > If the case is that we request two VFs, and these two VFs have different > required traits, then that should be a granular request. We don't care about RPs we're not consuming resources from. Forget rebuild - if the image used for the original spawn request has traits pertaining to VFs, we folded those traits into the un-numbered request group, which means the VF resources would have needed to be in the un-numbered request group in the flavor as well. That was the limitation discussed at [2]: trying to correlate granular groups from an image to granular groups in a flavor would require nontrivial invention beyond what we're willing to do at this point. So we're limited at spawn time to VFs (or whatever) where we can't tell which trait belongs to which. The best we can do is ensure that the end result of the un-numbered request group will collectively satisfy all the traits from the image. And this same limitation exists, for the same reasons, on rebuild.
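As a rough sketch of those semantics (made-up names, purely illustrative, not the actual implementation):

    def rebuild_trait_check(image_traits, unnumbered_group_provider_traits):
        # The providers that satisfied the un-numbered request group must
        # *collectively* expose every trait the new image requires; there
        # is no way to say which VF carries which trait.
        collective = set()
        for traits in unnumbered_group_provider_traits:
            collective |= set(traits)
        return set(image_traits) <= collective

    # e.g. two VFs that came from the un-numbered group:
    rebuild_trait_check({'HW_NIC_OFFLOAD_X'},
                        [{'CUSTOM_PLAIN_VF'}, {'HW_NIC_OFFLOAD_X'}])  # True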
It even goes a bit further, because if there are *other* VFs (or whatever) that came from numbered groups in the original request, we have no way to know that; so if *those* guys have traits required by the new image, we'll still pass. Which is almost certainly okay. -efried From jean-philippe at evrard.me Tue Apr 24 15:05:07 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 24 Apr 2018 16:05:07 +0100 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer Message-ID: Hi everyone, I’d like to propose Mohammed Naser [1] as a core reviewer for OpenStack-Ansible. He has been working actively on fixing the telemetry stack, and is now willing to step up to improve the CentOS platform, which is now in a very degraded state. I feel that it’s important that he’s able to safeguard the existing and future work on CentOS and help grow the maintenance community for it. [1] http://stackalytics.com/?module=openstackansible-group&user_id=mnaser&release=rocky&metric=person-day Best regards, Jean-Philippe Evrard IRC: evrardjp From amy at demarco.com Tue Apr 24 15:08:52 2018 From: amy at demarco.com (Amy Marrich) Date: Tue, 24 Apr 2018 10:08:52 -0500 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer In-Reply-To: References: Message-ID: +2 from me! Amy (spotz) On Tue, Apr 24, 2018 at 10:05 AM, Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hi everyone, > > I’d like to propose Mohammed Naser [1] as a core reviewer for > OpenStack-Ansible. > > He has been working actively on fixing the telemetry stack, and is now > willing to step up to improve the CentOS platform, which is now in a > very degraded state. > > I feel that it’s important that he’s able to safeguard the existing > and future work on CentOS > and help grow the maintenance community for it. > > [1] http://stackalytics.com/?module=openstackansible-group&user_id=mnaser&release=rocky&metric=person-day > > Best regards, > > Jean-Philippe Evrard > IRC: evrardjp > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dimitri.pertin at inria.fr Tue Apr 24 15:09:48 2018 From: dimitri.pertin at inria.fr (Dimitri Pertin) Date: Tue, 24 Apr 2018 17:09:48 +0200 Subject: [openstack-dev] [FEMDC] Wed. 25 Apr - FEMDC IRC Meeting 15:00 UTC Message-ID: <3f1ca83b-cad0-dd2b-789c-5dcd2ef42551@inria.fr> Dear all, Here is a gentle reminder for the FEMDC meeting tomorrow at 15:00 UTC. A draft of the agenda is available at line 466 and you are very welcome to add any item: https://etherpad.openstack.org/p/massively_distributed_ircmeetings_2018 Best regards, Dimitri From sean.mcginnis at gmx.com Tue Apr 24 15:13:55 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 24 Apr 2018 10:13:55 -0500 Subject: [openstack-dev] [tc] campaign question: How "active" should the TC be? In-Reply-To: <20180423221248.hvdb3yhyvdu5fwhv@yuggoth.org> References: <1524489055-sup-8435@lrrr.local> <20180423215627.GA25667@sm-xps> <20180423221248.hvdb3yhyvdu5fwhv@yuggoth.org> Message-ID: <20180424151354.GA30030@sm-xps> On Mon, Apr 23, 2018 at 10:12:49PM +0000, Jeremy Stanley wrote: > On 2018-04-23 16:56:28 -0500 (-0500), Sean McGinnis wrote: > [...]
> > I think Howard had an excellent idea of the TC coming up with > > themes for each cycle. I think that could be used to create a good > > cadence or focus to make sure we are making progress in key areas. > > > > It struck me that we came up with the long term vision, but there > > really isn't too much attention paid to it. At least not in a > > regular way that keeps some of these goals in mind. > > > > We could use the idea of cycle themes to make sure we are > > targeting key areas of that long term vision to help us move > > towards bringing that vision to reality. > > So (straw man!) we can make Rocky "the constellations cycle"? > -- > Jeremy Stanley That sounds good to me. The idea has kind of languished for a while now, but I think there are a couple of people getting more interested lately and trying to move forward with a couple more definitions. It might be good to take that start and try to get some more momentum behind it to get things going. From namnh at vn.fujitsu.com Tue Apr 24 15:26:29 2018 From: namnh at vn.fujitsu.com (namnh at vn.fujitsu.com) Date: Tue, 24 Apr 2018 15:26:29 +0000 Subject: [openstack-dev] [barbican] Hangout Barbican team Message-ID: <1524669644480.4791@vn.fujitsu.com> Hi Barbican team, To make it easier to review some patch sets in Barbican, we propose having a hangout meeting at 10pm EDT on Monday 30 April. So I would like to send this email to notify everyone: feel free to join us by leaving your email. Cheers, Nam -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgariepy at ccs.usherbrooke.ca Tue Apr 24 15:31:42 2018 From: mgariepy at ccs.usherbrooke.ca (Marc Gariepy) Date: Tue, 24 Apr 2018 11:31:42 -0400 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer In-Reply-To: References: Message-ID: +2 from me. Marc Gariépy On 2018-04-24 11:05 AM, Jean-Philippe Evrard wrote: > Hi everyone, > > I’d like to propose Mohammed Naser [1] as a core reviewer for OpenStack-Ansible. > > He has been working actively on fixing the telemetry stack, and is now > willing to step up to improve the CentOS platform, which is now in a > very degraded state. > > I feel that it’s important that he’s able to safeguard the existing > and future work on CentOS > and help grow the maintenance community for it. > > [1] http://stackalytics.com/?module=openstackansible-group&user_id=mnaser&release=rocky&metric=person-day > > Best regards, > > Jean-Philippe Evrard > IRC: evrardjp > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From openstack at nemebean.com Tue Apr 24 15:55:23 2018 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 24 Apr 2018 10:55:23 -0500 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? In-Reply-To: References: Message-ID: <90abe52b-d7d3-4d51-9b65-a21e499b4e85@nemebean.com> I prefer 14:00 to 22:00 UTC, although depending on the time of year I may have some flexibility on that. On 04/24/2018 01:37 AM, Erik Olof Gunnar Andersson wrote: > I can do any time ranging from 16:00 UTC to 03:00 UTC, Mon-Fri, maybe up > to 07:00 UTC assuming that it's once bi-weekly.
> > ------------------------------------------------------------------------ > *From:* Jens Harbott > *Sent:* Monday, April 23, 2018 10:49:25 PM > *To:* OpenStack Development Mailing List (not for usage questions) > *Subject:* Re: [openstack-dev] [designate] Meeting Times - change to > office hours? > 2018-04-23 13:11 GMT+02:00 Graham Hayes : >> Hi All, >> >> We moved our meeting time to 14:00UTC on Wednesdays, but attendance >> has been low, and it is also the middle of the night for one of our >> cores. >> >> I would like to suggest we have an office hours style meeting, with >> one in the UTC evening and one in the UTC morning. >> >> If this seems reasonable - when and what frequency should we do >> them? What times suit the current set of contributors? > > My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though > extending a couple of hours in either direction might be possible for > me, too. > > If we do alternating times, with the current amount of work happening > we could maybe make each of them monthly, so we end up with a roughly > bi-weekly schedule. > > I also have a slight preference for continuing to use one of the > meeting channels as opposed to meeting in the designate channel, if > that is what "office hours style meeting" is meant to imply. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Tue Apr 24 15:56:50 2018 From: emilien at redhat.com (Emilien Macchi) Date: Tue, 24 Apr 2018 08:56:50 -0700 Subject: [openstack-dev] [tripleo] The Weekly Owl - 18th Edition Message-ID: Note: this is the eighteenth edition of a weekly update of what happens in TripleO. The goal is to provide a short reading (less than 5 minutes) to learn where we are and what we're doing. Any contributions and feedback are welcome. Link to the previous version: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129450.html +---------------------------------+ | General announcements | +---------------------------------+ +--> Rocky milestone 1 was released last week! We're currently in milestone 2 cycle: https://releases.openstack.org/rocky/schedule.html +------------------------------+ | Continuous Integration | +------------------------------+ +--> Ruck is panda and Rover is quiquell. Please let them know about any new CI issues. +--> Master promotion is 4 days, Queens is 0 days, Pike is 7 days and Ocata is 4 days. +--> Still working on libvirt based multinode reproducer, see https://goo.gl/DYCnkx +--> More: https://etherpad.openstack.org/p/tripleo-ci-squad-meeting +-------------+ | Upgrades | +-------------+ +--> No updates this week, some reviews are still needed. +--> More: https://etherpad.openstack.org/p/tripleo-upgrade-squad-status +---------------+ | Containers | +---------------+ +--> Still working on UX and parity topics. +--> Upgrade job is waiting for a promotion on master to work. +--> Still prototyping container updates before undercloud deployment.
+--> More: https://etherpad.openstack.org/p/tripleo-containers-squad-status +----------------------+ | config-download | +----------------------+ +--> config-download now the default with tripleoclient! +--> ceph jobs migrated to external_deploy_tasks +--> CI jobs almost all converted (experimental in progress) +--> octavia and skydive still in progress +--> finalizing tripleo-ui integration, tripleo-common patches are now stable +--> need workflows to cancel deployment and undeploy +--> More: https://etherpad.openstack.org/p/tripleo-config-download-squad-status +--------------+ | Integration | +--------------+ +--> No updates. +--> More: https://etherpad.openstack.org/p/tripleo-integration-squad-status +---------+ | UI/CLI | +---------+ +--> Finishing config-download integration +--> More: https://etherpad.openstack.org/p/tripleo-ui-cli-squad-status +---------------+ | Validations | +---------------+ +--> Custom validations/swift storage work. +--> More: https://etherpad.openstack.org/p/tripleo-validations-squad-status +---------------+ | Networking | +---------------+ +--> Working on neutron sidecar container. +--> NFV deployments testing config-download. +--> More: https://etherpad.openstack.org/p/tripleo-networking-squad-status +--------------+ | Workflows | +--------------+ +--> No updates. +--> More: https://etherpad.openstack.org/p/tripleo-workflows-squad-status +-----------+ | Security | +-----------+ +--> No updates. +--> More: https://etherpad.openstack.org/p/tripleo-security-squad +------------+ | Owl fact | +------------+ Owls May Have Coexisted With Dinosaurs! "We do know that owl-like birds like Berruornis and Ogygoptynx lived 60 million years ago, during the Paleocene epoch, which means it's entirely possible that the ultimate ancestors of owls coexisted with dinosaurs toward the end of the Cretaceous period. Technically speaking, owls are one of the most ancient groups of terrestrial birds, rivaled only by the gamebirds (i.e., chickens, turkeys and pheasants) of the order Galliformes." Source: https://www.thoughtco.com/fascinating-facts-about-owls-4107228 Thanks all for reading and stay tuned! -- Your fellow reporter, Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kevin.Fox at pnnl.gov Tue Apr 24 16:04:57 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 24 Apr 2018 16:04:57 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov>, <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> Message-ID: <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> Yeah, I agree k8s seems to have hit on a good model where interests are separately grouped from the code bases. This has allowed architecture not to be too heavily influenced by the logical groups' interests. I'll go ahead and propose it again since it's been a little while. In order to start breaking down the barriers between Projects and start working more towards integration, should the TC come up with an architecture group? Get folks from all the major projects involved in it and sharing common infrastructure. One possible pie in the sky goal of that group could be the following: k8s has many controllers, but it compiles almost all of them into one service: the kube-controller-manager.
Architecturally they could break them out at any point, but so far they have been able to scale just fine without doing so. Having them combined has allowed much easier upgrade paths for users though. This has spurred adoption and contribution. Adding a new controller isn't a huge lift to an operator: they just upgrade to the newest version which has the new controller built in. Could the major components, nova-api, neutron-server, glance-apiserver, etc. be built in a way to have 1 process for all of them, and combine the upgrade steps such that there is also one db-sync for the entire constellation? The idea would be to take the Constellations idea one step further: the Projects would deliver Python libraries (and a binary for stand-alone operation). Constellations would actually provide a code deliverable, not just reference architecture, combining the libraries together into a single usable entity. Operators most likely would consume the Constellations version rather than the individual Project versions. What do you think? Thanks, Kevin ________________________________________ From: Thierry Carrez [thierry at openstack.org] Sent: Tuesday, April 24, 2018 3:24 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? Fox, Kevin M wrote: > OpenStack has created artificial walls between the various Projects. It shows up, for example, as holes in usability at a user level or extra difficulty for operators juggling around so many projects. Users and for the most part, Operators don't really care about project organization, or ptls, or cores or such. OpenStack has made some progress in this direction with stuff like the unified cli. But OpenStack is not very unified. I've been giving this some thought (in the context of a presentation I was giving on hard lessons learned from 8 years of OpenStack). I think that organizing development around project teams and components was the best way to cope with the growth of OpenStack in 2011-2015 and get to a working set of components. However it's not the best organization to improve on the overall "product experience", or for a maintenance phase. While it can be confusing, I like the two-dimensional approach that Kubernetes followed (code ownership in one dimension, SIGs in the other). The introduction of SIGs in OpenStack, beyond providing a way to build closer feedback loops around specific topics, can help us tackle this "unified experience" problem you raised. The formation of the upgrades SIG, or the self-healing SIG is a sign that times change. Maybe we need to push in that direction even more aggressively and start thinking about de-emphasizing project teams themselves. -- Thierry Carrez (ttx) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jaypipes at gmail.com Tue Apr 24 16:13:33 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 24 Apr 2018 12:13:33 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> Message-ID: On 04/24/2018 12:04 PM, Fox, Kevin M wrote: > Could the major components, nova-api, neutron-server, glance-apiserver, etc. be built in a way to have 1 process for all of them, and combine the upgrade steps such that there is also one db-sync for the entire constellation? So, basically the exact opposite of the 12-factor app design that "cloud-native" people espouse? -jay From Jesse.Pretorius at rackspace.co.uk Tue Apr 24 16:30:56 2018 From: Jesse.Pretorius at rackspace.co.uk (Jesse Pretorius) Date: Tue, 24 Apr 2018 16:30:56 +0000 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer In-Reply-To: References: Message-ID: <4CD5FD1C-938C-441B-8ECD-C7F3C30C1A18@rackspace.co.uk> On 4/24/18, 4:08 PM, "Jean-Philippe Evrard" wrote: > I’d like to propose Mohammed Naser [1] as a core reviewer for OpenStack-Ansible. A happy +2 from me. ________________________________ Rackspace Limited is a company registered in England & Wales (company registered number 03897010) whose registered office is at 5 Millington Road, Hyde Park Hayes, Middlesex UB3 4AZ. Rackspace Limited privacy policy can be viewed at www.rackspace.co.uk/legal/privacy-policy - This e-mail message may contain confidential or privileged information intended for the recipient. Any dissemination, distribution or copying of the enclosed material is prohibited. If you receive this transmission in error, please notify us immediately by e-mail at abuse at rackspace.com and delete the original message. Your cooperation is appreciated. From mriedemos at gmail.com Tue Apr 24 17:18:46 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Apr 2018 12:18:46 -0500 Subject: [openstack-dev] [nova] Notification update week 17 In-Reply-To: <1524488868.25291.1@smtp.office365.com> References: <1524488868.25291.1@smtp.office365.com> Message-ID: <57d83aaa-68b3-c899-ce8d-267168e17597@gmail.com> On 4/23/2018 8:07 AM, Balázs Gibizer wrote: > Add versioned notifications for removing a member from a server group > --------------------------------------------------------------------- > The specless bp > https://blueprints.launchpad.net/nova/+spec/add-server-group-remove-member-notifications > > is pending approval as we would like to see the POC code first. Takashi > has proposed the POC code https://review.openstack.org/#/c/559076/ > so we have to look at it. I took a look at the patch and am -1 for the new "up-call" introduced from the compute service when deleting a server. Overall I'm against this blueprint for three reasons: 1. I don't see what need we have for this blueprint since I'm not hearing a request from a user for it. Maybe it's just for parity with the server group member add notification? I don't think that is sufficient justification though. 2. The discussion in the blueprint whiteboard mentions that we don't need the remove member notification for rolling back on over-quota during server create. 3. The new up-call from nova-compute is a non-starter for me. We need to shrink the list of things that perform calls up from a child cell to the API database.
-- Thanks, Matt From fungi at yuggoth.org Tue Apr 24 17:34:30 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 24 Apr 2018 17:34:30 +0000 Subject: [openstack-dev] [All][Election] Rocky TC Election Voting Begins! In-Reply-To: References: Message-ID: <20180424173429.xd5qf7g2wct7fh5g@yuggoth.org> On 2018-04-24 00:03:55 +0000 (+0000), Kendall Nelson wrote: > The poll for the TC Election is now open and will remain open until Apr 30, > 2018 23:45 UTC. > > We are selecting 7 TC members, please rank all candidates in > your order of preference. [...] Please note that the poll was configured slightly incorrectly to indicate only one winner, but we will be using the resulting ranking to pick the top 7 candidates to fill the open seats. The poll description has been amended to mention "the top 7 candidates will win." We've already had nearly 300 votes cast prior to noticing this mistake (sorry!), and determined that restarting the poll and issuing new ballots to everyone would be more disruptive for a mostly aesthetic benefit. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From morgan.fainberg at gmail.com Tue Apr 24 17:58:06 2018 From: morgan.fainberg at gmail.com (Morgan Fainberg) Date: Tue, 24 Apr 2018 10:58:06 -0700 Subject: [openstack-dev] Changes to keystone-stable-maint members Message-ID: Hi, I am proposing making some changes to the Keystone Stable Maint team. A lot of this is cleanup for contributors that have moved on from OpenStack. For the most part, I've been the only one responsible for Keystone Stable Maint reviews, and I'm not comfortable being this bottleneck. Removals ======== Dolph Matthews Steve Martinelli Brant Knudson Each of these members has left/moved on from OpenStack, or, in the case of Brant, is less involved with Keystone (and I believe OpenStack as a whole). Additions ======= Lance Bragstad Lance is the PTL and also highly aware (and does reviews for stable keystone when I ask, so we have a second pair of eyes on them) of the differences/stable policy. This will bring us to a solid 2 contributors for Keystone that are looking at the stable-maint reviews and ensuring we're not letting too much sit in limbo (or dumping it all on the main stable-core team). Long term I'd like to see a 3rd keystone stable maint, but I am unsure who else should be nominated. Getting to a full 2 members actively engaged is a big win for ensuring stable branches get appropriate love within Keystone. Cheers, --Morgan From mriedemos at gmail.com Tue Apr 24 19:20:02 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 24 Apr 2018 14:20:02 -0500 Subject: [openstack-dev] Changes to keystone-stable-maint members In-Reply-To: References: Message-ID: <77104ad1-23b6-6d22-bc8c-0acc07ee3957@gmail.com> On 4/24/2018 12:58 PM, Morgan Fainberg wrote: > Hi, > > I am proposing making some changes to the Keystone Stable Maint team. > A lot of this is cleanup for contributors that have moved on from > OpenStack. For the most part, I've been the only one responsible for > Keystone Stable Maint reviews, and I'm not comfortable being this > bottleneck > > Removals > ======== > Dolph Matthews > Steve Martinelli > Brant Knudson > > Each of these members has left/moved on from > OpenStack, or, in the > case of Brant, is less involved with Keystone (and I believe OpenStack as > a whole).
> > Additions > ======= > Lance Bragstad > > Lance is the PTL and also highly aware (and does reviews for stable > keystone when I ask, so we have a second pair of eyes on them) of the > differences/stable policy. > > This will bring us to a solid 2 contributors for Keystone that are > looking at the stable-maint reviews and ensuring we're not letting too > much sit in limbo (or dumping it all on the main stable-core team). > > Long term I'd like to see a 3rd keystone stable maint, but I am unsure > who else should be nominated. Getting to a full 2 members actively > engaged is a big win for ensuring stable branches get appropriate love > within Keystone. > > Cheers, > --Morgan > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > This looks OK to me. I know Lance is active on stable reviews for Keystone and is aware of the guidelines. -- Thanks, Matt From zbitter at redhat.com Tue Apr 24 19:39:02 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 24 Apr 2018 15:39:02 -0400 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. In-Reply-To: References: <1524142764.4383.83.camel@redhat.com> <1524503802.4383.149.camel@redhat.com> Message-ID: On 23/04/18 14:09, Dan Sneddon wrote: > > Yes, the port is currently created as part of the Ironic server > resource. We would have more flexibility if this were a separate Neutron > port, but we need to be able to support upgrades. This would require the > ability in Heat to detach the implicit port from the Ironic resource, > and attach a Neutron port resource with the same IP to a node without > rebuilding the entire node. This isn't currently possible. I believe it's possible using a two-step migration. First create an OS::Neutron::Port resource with external_id as the current port ID. Then use get_resource to pass the ID of the Port explicitly to the Server in its network config. On update, Heat will recognise this as an unchanged config thanks to the fixes that Harald made in Queens. Second, do another update removing the external_id to allow Heat to manage this port (or don't, I guess, since Nova will clean up the port when the server is deleted regardless). This process is pretty horrible though, and more suited to a manual fix-up than something like TripleO. > I can see another use case for this Heat functionality, which is that I > would like to be able to generate a report using Heat that lists all the > ports in use in the entire deployment. This would be generated > post-deployment, and could be used to populate an external DNS server, > or simply to report on which IPs belong to which nodes. You can get IPs from the servers already. (Also, you should use Designate resources to populate your external DNS server ;) The issue here AIUI is that you can't get info from the subnet, like the CIDR, and in fact you may not even know the subnet because of the magical way Neutron will implicitly allocate stuff for you. cheers, Zane. From Kevin.Fox at pnnl.gov Tue Apr 24 19:51:51 2018 From: Kevin.Fox at pnnl.gov (Fox, Kevin M) Date: Tue, 24 Apr 2018 19:51:51 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? 
In-Reply-To: References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov>, Message-ID: <1A3C52DFCD06494D8528644858247BF01C0B4077@EX10MBOX03.pnnl.gov> I support 12 factor. But 12 factor only works if you can commit to always deploying on top of 12 factor tools. If OpenStack committed to only ever deploying API services on k8s then my answer might be different, but so far it has been unable to do that. Barring that, I think simplifying the operator's life so you get more users/contributors has priority over pure 12 factor ideals. It also is about getting Project folks working together to see how their parts fit (or not) in the greater constellation. Just writing a document on how you could fit things together doesn't show the kinds of suffering that actually integrating it into a finished whole could show. Either way though, I think a unified db-sync would go a long way to making OpenStack easier to maintain as an Operator. Thanks, Kevin ________________________________________ From: Jay Pipes [jaypipes at gmail.com] Sent: Tuesday, April 24, 2018 9:13 AM To: openstack-dev at lists.openstack.org Subject: Re: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? On 04/24/2018 12:04 PM, Fox, Kevin M wrote: > Could the major components, nova-api, neutron-server, glance-apiserver, etc. be built in a way to have 1 process for all of them, and combine the upgrade steps such that there is also one db-sync for the entire constellation? So, basically the exact opposite of the 12-factor app design that "cloud-native" people espouse? -jay __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mgagne at calavera.ca Tue Apr 24 19:54:00 2018 From: mgagne at calavera.ca (Mathieu Gagné) Date: Tue, 24 Apr 2018 15:54:00 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B4077@EX10MBOX03.pnnl.gov> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> <1A3C52DFCD06494D8528644858247BF01C0B4077@EX10MBOX03.pnnl.gov> Message-ID: On Tue, Apr 24, 2018 at 3:51 PM, Fox, Kevin M wrote: > I support 12 factor. But 12 factor only works if you can commit to always deploying on top of 12 factor tools. If OpenStack committed to only ever deploying API services on k8s then my answer might be different, but so far it has been unable to do that. Barring that, I think simplifying the operator's life so you get more users/contributors has priority over pure 12 factor ideals. > > It also is about getting Project folks working together to see how their parts fit (or not) in the greater constellation. Just writing a document on how you could fit things together doesn't show the kinds of suffering that actually integrating it into a finished whole could show.
> Either way though, I think a unified db-sync would go a long way to making OpenStack easier to maintain as an Operator. Yes please. Or any task that's remotely similar. -- Mathieu From zbitter at redhat.com Tue Apr 24 20:12:11 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 24 Apr 2018 16:12:11 -0400 Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig. In-Reply-To: <1524142764.4383.83.camel@redhat.com> References: <1524142764.4383.83.camel@redhat.com> Message-ID: <1defd9c4-e2ad-c2bb-0232-d1159ab0a2af@redhat.com> On 19/04/18 08:59, Harald Jensås wrote: > The problem is getting there using heat ... The real answer is to make everything explicit - create a Subnet resource and a Port resource and don't allow Neutron/Nova to make any decisions for you that would have the effect of hiding data that you need. However, since that's impractical in this particular case... > a couple of ideas: > > a) Use heat's ``external_resource`` to create a port resource, > and then an external subnet resource. Then get the data > from the external resources. We probably would have to make > it possible for an ``external_resource`` to depend on the server > resource, and verify that these resources have the required > attributes. Yeah, I don't know why we don't allow depends_on for resources with external_id. (There's also a bug where we don't recognise dependencies contributed by any functions used in the external_id field, like get_resource or get_attr, even though we allow those functions.) Apparently somebody had a brain explosion at a design summit session that nobody remembers attending, and here we are :D The difficulty is that the fix should be tied to a template version, but the offending check is in the template-independent part of the code base. Nevertheless, a workaround is trivial:

    ext_port:
      type: OS::Neutron::Port
      external_id: {get_attr: [<server>, addresses, <network>, 0, port]}
      metadata:
        do_something_to_add_a_dependency: {get_resource: <server>}

> b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as > well probably) to include the data. > > If we do this we should probably aim to be in parity with > what is made available to clients getting the configuration > from dhcp. (mtu, dns_domain, dns_servers, prefixlen, > gateway_ip, host_routes, ipv6_address_mode, ipv6_ra_mode > etc.) This makes sense to me. If we're allowing people to let Nova/Neutron make implicit choices for them then we also need to allow them to see the result. > c) Create a new heat function to read properties of any > openstack resource, without having to make use of the > external_resource in heat. I'm pretty -1 on this, because I think you want to have the same caching behaviour as a resource, not a function. At that point you're just implementing syntactic sugar that makes things _less_ consistent, not to mention the enormous implementation hacks required. cheers, Zane. From zbitter at redhat.com Tue Apr 24 21:48:56 2018 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 24 Apr 2018 17:48:56 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> Message-ID: <9157639f-9c4f-d13a-657a-a903028b29ce@redhat.com> On 24/04/18 12:04, Fox, Kevin M wrote: > Yeah, I agree k8s seems to have hit on a good model where interests are separately grouped from the code bases. This has allowed architecture not to be too heavily influenced by the logical groups' interests. > > I'll go ahead and propose it again since it's been a little while. In order to start breaking down the barriers between Projects and start working more towards integration, should the TC come up with an architecture group? Get folks from all the major projects involved in it and sharing common infrastructure. > > One possible pie in the sky goal of that group could be the following: > > k8s has many controllers, but it compiles almost all of them into one service: the kube-controller-manager. Architecturally they could break them out at any point, but so far they have been able to scale just fine without doing so. Having them combined has allowed much easier upgrade paths for users though. This has spurred adoption and contribution. Adding a new controller isn't a huge lift to an operator: they just upgrade to the newest version which has the new controller built in. > > Could the major components, nova-api, neutron-server, glance-apiserver, etc. be built in a way to have 1 process for all of them, and combine the upgrade steps such that there is also one db-sync for the entire constellation? In the pre-containers era one of the most common complaints I heard from operators was that they were forced to upgrade stuff in lock-step (because of library version dependencies) when they really wanted to upgrade each service independently. So this definitely wouldn't work for everyone. Another idea that has been floated on occasion is of combining all of the bits of services that run on a compute node (which include parts of Nova, Cinder, Neutron, Ceilometer, &c.) into a single... thing. I wonder if that wouldn't be a more interesting place to start. > The idea would be to take the Constellations idea one step further: the Projects would deliver Python libraries (and a binary for stand-alone operation). In the sense that we've switched most things with a REST API to running in Apache using wsgi, that's _technically_ what's happening already ;) > Constellations would actually provide a code deliverable, not just reference architecture, combining the libraries together into a single usable entity. Operators most likely would consume the Constellations version rather than the individual Project versions. If I'm reading right, you're suggesting that users who just want a quick way to install a small cloud would use a turn-key controller node package, while those who need something more sophisticated could continue to install the individual services separately? It's an interesting idea, but users of the first sort have a tendency to turn into users of the second sort, and they want a smooth upgrade path when that happens. I suspect that's why there aren't any deployment tools that use this model, even though there are probably no technical obstacles to it even today. > What do you think?
With respect to the db_sync specifically, I think the main problem is that it exists at all. You want to be able to do a simple rolling update where you start containers containing new versions of the code, and then shut down containers containing old versions of the code. Right now you have to somehow run db_sync with the new code but make sure it happens before starting the service with the new code - and in some cases you may have to shut down the old code first. (And as non-conducive as that is to orchestrated container deployments, it was 10 times worse pre-containers when it was virtually impossible to install the two versions of the code side-by-side.) But once your deployment tool has solved that horrible problem, it's not difficult for it to add a for-loop to do it for every service.

What would be a bigger win would be to get rid of db_sync altogether. It was born in an era when we did massive data migrations between versions. We've now adopted guidelines for rolling updates saying that we should only do fast operations like adding/dropping tables/columns during db_sync, and that deprecation periods must permit the DB to be upgraded while instances of the previous version of the service are still running. Once services comply with those guidelines, is there any reason we can't just always update the DB schema during service start-up and ditch the separate `<service>-manage db_sync` commands? Maybe that would be a good project-wide goal for an upcoming release.

cheers,
Zane.
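(A minimal sketch of migrating on start-up, assuming a service that keeps its schema migrations in Alembic - the config path and database URL are placeholders:)

    # Upgrade the schema to 'head' before serving requests, instead of
    # relying on a separate <service>-manage db_sync step. Additive-only
    # migrations, per the rolling-upgrade guidelines, are what make this
    # safe while older service instances are still running; real code
    # would also take a lock so racing workers don't upgrade concurrently.
    from alembic import command
    from alembic.config import Config

    def upgrade_schema(db_url, alembic_ini='/etc/myservice/alembic.ini'):
        cfg = Config(alembic_ini)
        cfg.set_main_option('sqlalchemy.url', db_url)
        command.upgrade(cfg, 'head')

    def main():
        upgrade_schema('mysql+pymysql://user:secret@dbhost/myservice')
        # ... then start handling requests ...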
From zhipengh512 at gmail.com  Tue Apr 24 23:20:29 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 25 Apr 2018 07:20:29 +0800
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To:
References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org>
Message-ID:

I think many projects are now beginning to develop a sub-team structure (e.g. Nova, Ironic and Cyborg), and that might be part of the answer here. Having a sub-team structure, with volunteers as sub-team leads, could also help people who are not strong at code review to contribute significantly and get recognized in another way.

On Tue, Apr 24, 2018 at 7:24 PM, Davanum Srinivas wrote:

> Thierry, please see below:
>
> On Tue, Apr 24, 2018 at 6:24 AM, Thierry Carrez wrote:
> > Fox, Kevin M wrote:
> >> OpenStack has created artificial walls between the various Projects. It shows up, for example, as holes in usability at a user level or extra difficulty for operators juggling around so many projects. Users and, for the most part, Operators don't really care about project organization, or PTLs, or cores, or such. OpenStack has made some progress in this direction with stuff like the unified CLI. But OpenStack is not very unified.
> >
> > I've been giving this some thought (in the context of a presentation I was giving on hard lessons learned from 8 years of OpenStack). I think that organizing development around project teams and components was the best way to cope with the growth of OpenStack in 2011-2015 and get to a working set of components. However it's not the best organization to improve on the overall "product experience", or for a maintenance phase.
> >
> > While it can be confusing, I like the two-dimensional approach that Kubernetes followed (code ownership in one dimension, SIGs in the other). The introduction of SIGs in OpenStack, beyond providing a way to build closer feedback loops around specific topics, can help us tackle this "unified experience" problem you raised. The formation of the upgrades SIG, or the self-healing SIG, is a sign that times change. Maybe we need to push in that direction even more aggressively and start thinking about de-emphasizing project teams themselves.
>
> Big +1. Another thing to look into is how we can split some of the work the PTL does into multiple roles ... that are short-term and rotate around. Hoping that will help with the problem where we need folks to be totally available full-time to do meaningful work in a project.
>
> > --
> > Thierry Carrez (ttx)
>
> --
> Davanum Srinivas :: https://twitter.com/dims

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From sangho at opennetworking.org  Wed Apr 25 00:18:28 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Wed, 25 Apr 2018 09:18:28 +0900
Subject: [openstack-dev] [openstack-infra] [neutron] Change of control over network-onos (was: How to take over a project?)
In-Reply-To: <20180424143926.lqfgmurus6tmpcyo@yuggoth.org>
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> <20180424143926.lqfgmurus6tmpcyo@yuggoth.org>
Message-ID: <19FA55CF-AB92-403D-9A61-ECE517067F89@opennetworking.org>

Thank you, Jeremy. I did not think that it would take so long. :-)

Sangho

> On 24 Apr 2018, at 11:39 PM, Jeremy Stanley wrote:
>
> On 2018-04-20 01:01:33 +0900 (+0900), Sangho Shin wrote:
>> Dear Neutron-Release team,
>>
>> I wonder if any of you can add me to the network-onos-release members.
>> It seems that Vikram is busy. :-)
> [...]
>
> I've adjusted the subject line here in hopes the thread might better catch their attention.
> --
> Jeremy Stanley

From zhipengh512 at gmail.com  Wed Apr 25 01:57:53 2018
From: zhipengh512 at gmail.com (Zhipeng Huang)
Date: Wed, 25 Apr 2018 09:57:53 +0800
Subject: [openstack-dev] [cyborg] Weekly Team Meeting 2018.04.25
Message-ID:

Hi Team,

The team meeting starts at UTC 1400 as usual in #openstack-cyborg; the initial agenda is as follows:

1. KubeCon preparation for the resource mgmt wg discussion
2. Subteam meeting arrangement for more agile meeting times/logistics
3. Rocky critical spec update
4. Open patches/bug review

--
Zhipeng (Howard) Huang
Standard Engineer
IT Standard & Patent/IT Product Line
Huawei Technologies Co., Ltd
Email: huangzhipeng at huawei.com
Office: Huawei Industrial Base, Longgang, Shenzhen

(Previous)
Research Assistant
Mobile Ad-Hoc Network Lab, Calit2
University of California, Irvine
Email: zhipengh at uci.edu
Office: Calit2 Building Room 2402

OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado

From sean.mcginnis at gmx.com  Wed Apr 25 02:29:16 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 25 Apr 2018 02:29:16 +0000
Subject: [openstack-dev] Changes to keystone-stable-maint members
In-Reply-To: <77104ad1-23b6-6d22-bc8c-0acc07ee3957@gmail.com>
References: <77104ad1-23b6-6d22-bc8c-0acc07ee3957@gmail.com>
Message-ID: <20180425022916.GA12705@devvm1>

> > Additions
> > =======
> > Lance Bragstad
> >
> > Lance is the PTL and also highly aware (and does reviews for stable
> > keystone when I ask, so we have a second pair of eyes on them) of the
> > differences/stable policy.
> >
> > This will bring us to a solid 2 contributors for Keystone that are
> > looking at the stable-maint reviews and ensuring we're not letting too
> > much sit in limbo (or dumping it all on the main stable-core team).
> >
> > Long term I'd like to see a 3rd keystone stable maint, but I am unsure
> > who else should be nominated. Getting to a full 2 members actively
> > engaged is a big win for ensuring stable branches get appropriate love
> > within Keystone.
> >
> > Cheers,
> > --Morgan
>
> This looks OK to me. I know Lance is active on stable reviews for Keystone
> and is aware of the guidelines.

I am very comfortable with this too. I know Lance and I have talked over a few stable backports and he has a good understanding of the policies around those.

From allprog at gmail.com  Wed Apr 25 03:40:19 2018
From: allprog at gmail.com (András Kövi)
Date: Wed, 25 Apr 2018 03:40:19 +0000
Subject: [openstack-dev] [mistral] September PTG in Denver
In-Reply-To: <457cf2f4-84bd-44ab-bfb7-04b5c0b8d974@Spark>
References: <20180423195823.GC17397@sm-xps> <457cf2f4-84bd-44ab-bfb7-04b5c0b8d974@Spark>
Message-ID:

Hi Dougal,

Very likely, I will join over the phone.

Thanks,
Andras

________________________________
From: Renat Akhmerov
Sent: Tuesday, April 24, 2018 2:46:36 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [mistral] September PTG in Denver

Dougal,

Most likely, I'll go.

Thanks

Renat Akhmerov
@Nokia
On 24 Apr 2018, at 17:05 +0700, Dougal Matthews wrote:

On 23 April 2018 at 20:58, Sean McGinnis wrote:

On Mon, Apr 23, 2018 at 07:32:40PM +0000, Kendall Nelson wrote:
> Hey Dougal,
>
> I think I had said May 2nd in my initial email asking about attendance. If
> you can get an answer out of your team by then I would greatly appreciate
> it! If you need more time please let me know by then (May 2nd) instead.

Whoops - thanks for the correction.

> -Kendall (diablo_rojo)

Do we need to collect this data for September already by the beginning of May? Granted, the sooner we know details and can start planning, the better. But as I started looking over the survey, it just seems really early to predict where things will be 5 months from now. Especially considering we will have a different set of PTLs for many projects by then, and it is too early for some of those hand-off discussions to have started yet.

Good question! I don't mean to ask people to commit 100% or not, I just want to know their intentions so I have more information when filling out the survey.

Sean

From sreeram at linux.vnet.ibm.com  Wed Apr 25 06:00:47 2018
From: sreeram at linux.vnet.ibm.com (Sreeram Vancheeswaran)
Date: Wed, 25 Apr 2018 11:30:47 +0530
Subject: [openstack-dev] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)
Message-ID: <738bc89f-5277-ef4a-79da-51d0cce7b8df@linux.vnet.ibm.com>

Hi team!

We are currently facing an issue in our out-of-tree driver nova-dpm [1] with nova and cinder on master, where instance launch in devstack is failing due to communication/time-out issues between nova and cinder. We are unable to get to the root cause of the issue, and we need your help on getting some hints/directions to debug this issue further.

--> From the nova-compute service: BuildAbortException: Build of instance aborted: Unable to update the attachment. (HTTP 500) from the nova-compute server (detailed logs here [2]).

--> From the cinder-volume service:
ERROR oslo_messaging.rpc.server VolumeAttachmentNotFound: Volume attachment could not be found with filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server
(detailed logs here [3])

Debugging steps done so far:

* Compared the package versions between the current devstack under test and the **last succeeding job in our CI system** (to be exact, it was for the patches https://review.openstack.org/#/c/458514/ and https://review.openstack.org/#/c/458820/); however, the package versions for packages such as sqlalchemy, os-brick and oslo* are exactly the same on both systems.

* We used git bisect to revert the nova and cinder projects to versions equal to or before the date of our last succeeding CI run, but we were still able to reproduce the same error.

* Our guess is that the db "Save" operation during the update of the volume attachment is failing.
But we are unable to trace/debug to that point in the RPC call; any suggestions on how to debug this scenario would be really helpful.

* We are running devstack master on Ubuntu 16.04.4.

References

[1] https://github.com/openstack/nova-dpm
[2]

Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.volume.cinder [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] Update attachment failed for attachment 266ef7e1-4735-40f1-b704-509472f565cb. Error: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) Code: 500: ClientException: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Instance failed block device setup: ClientException: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most recent call last):
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/compute/manager.py", line 1577, in _prep_block_device
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     wait_func=self._await_block_device_map_created)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 828, in attach_block_devices
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     _log_and_attach(device)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 825, in _log_and_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     bdm.attach(*attach_args, **attach_kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     ret_val = method(obj, context, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 618, in attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     virt_driver, do_driver_attach)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", line 274, in inner
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     return f(*args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 615, in _do_locked_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     self._do_attach(*args, **_kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 600, in _do_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     do_driver_attach)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/virt/block_device.py", line 514, in _volume_attach
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     self['mount_device'])['connection_info']
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/volume/cinder.py", line 291, in wrapper
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     res = method(self, ctx, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/volume/cinder.py", line 327, in wrapper
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     res = method(self, ctx, attachment_id, *args, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/volume/cinder.py", line 736, in attachment_update
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     'code': getattr(ex, 'code', None)})
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     self.force_reraise()
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     six.reraise(self.type_, self.value, self.tb)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/volume/cinder.py", line 726, in attachment_update
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     attachment_id, _connector)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/cinderclient/v3/attachments.py", line 67, in update
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     resp = self._update('/attachments/%s' % id, body)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/cinderclient/base.py", line 344, in _update
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     resp, body = self.api.client.put(url, body=body, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 206, in put
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     return self._cs_request(url, 'PUT', **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 191, in _cs_request
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     return self.request(url, method, **kwargs)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line 177, in request
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     raise exceptions.from_response(resp, body)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ClientException: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Build of instance d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce): BuildAbortException: Build of instance d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most recent call last):
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/compute/manager.py", line 1839, in _do_build_and_run_instance
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     filter_properties, request_spec)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/compute/manager.py", line 2052, in _build_and_run_instance
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     bdms=block_device_mapping)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     self.force_reraise()
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     six.reraise(self.type_, self.value, self.tb)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/compute/manager.py", line 2004, in _build_and_run_instance
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     block_device_mapping) as resources:
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     return self.gen.next()
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]   File "/opt/stack/nova/nova/compute/manager.py", line 2211, in _build_resources
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]     reason=e.format_message())
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] BuildAbortException: Build of instance d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: Unable to update the attachment. (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce)
Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2]

[3]

Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments [req-f9f3364b-4dd8-4195-a60a-2f0e44c1f2ea req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce admin admin] Unable to update the attachment.: MessagingTimeout: Timed out waiting for a reply to message ID fe836528e2ea43edabe8004845837f4f
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments Traceback (most recent call last):
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/opt/stack/cinder/cinder/api/v3/attachments.py", line 228, in update
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     connector))
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/opt/stack/cinder/cinder/volume/api.py", line 2158, in attachment_update
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     attachment_ref.id))
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/opt/stack/cinder/cinder/rpc.py", line 187, in _wrapper
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     return f(self, *args, **kwargs)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/opt/stack/cinder/cinder/volume/rpcapi.py", line 442, in attachment_update
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     attachment_id=attachment_id)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 174, in call
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     retry=self.retry)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", line 131, in _send
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     timeout=timeout, retry=retry)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 559, in send
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     retry=retry)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 548, in _send
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     result = self._waiter.wait(msg_id, timeout)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 440, in wait
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     message = self.waiters.get(msg_id, timeout=timeout)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", line 328, in get
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments     'to message ID %s' % msg_id)
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments MessagingTimeout: Timed out waiting for a reply to message ID fe836528e2ea43edabe8004845837f4f
Apr 25 06:41:57 zos057 devstack@c-api.service[11490]: ERROR cinder.api.v3.attachments
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server [req-f9f3364b-4dd8-4195-a60a-2f0e44c1f2ea req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce admin None] Exception during message handling: VolumeAttachmentNotFound: Volume attachment could not be found with filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 163, in _process_incoming
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 220, in dispatch
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", line 190, in _do_dispatch
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/volume/manager.py", line 4378, in attachment_update
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     connector)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/volume/manager.py", line 4349, in _connection_create
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     self.db.volume_attachment_update(ctxt, attachment.id, values)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/db/api.py", line 365, in volume_attachment_update
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     return IMPL.volume_attachment_update(context, attachment_id, values)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 182, in wrapper
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     return f(*args, **kwargs)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server   File "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 2674, in volume_attachment_update
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server     filter='attachment_id = ' + attachment_id)
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server VolumeAttachmentNotFound: Volume attachment could not be found with filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server

--
---------------------------------------------------------------------------------------------------
Sreeram Vancheeswaran
System z Firmware - Openstack Development
IBM Systems & Technology Lab, Bangalore, India
Phone: +91 80 40660826  Mob: +91-9341411511
Email : sreeram at linux.vnet.ibm.com

From rico.lin.guanyu at gmail.com  Wed Apr 25 06:12:00 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Wed, 25 Apr 2018 14:12:00 +0800
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov>
References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov>
Message-ID:

On 2018-04-25 0:04 GMT+08:00, Fox, Kevin M wrote:

> Yeah, I agree k8s seems to have hit on a good model where interests are separately grouped from the code bases. This has allowed the architecture not to be too heavily influenced by the logical groups' interests.
>
> I'll go ahead and propose it again since it's been a little while. In order to start breaking down the barriers between Projects and start working more towards integration, should the TC come up with an architecture group? Get folks from all the major projects involved in it and sharing common infrastructure.
>
> One possible pie-in-the-sky goal of that group could be the following:
>
> k8s has many controllers. But they compile almost all of them into one service, the kube-apiserver. Architecturally they could break them out at any point, but so far they have been able to scale just fine without doing so. Having them combined has allowed much easier upgrade paths for users though. This has spurred adoption and contribution. Adding a new controller isn't a huge lift to an operator: they just upgrade to the newest version, which has the new controller built in.

I believe combining the API services into one service would let us scale much more easily. As we already provide multiple services and bind them with Apache (also noting Zane's comment), we can start this goal by providing a unified API service architecture (or start with a new oslo API service). If we first reduce the differences between the API service implementations of each OpenStack service, it may become easier to manage and upgrade (since we unify the package requirements), and it may even be possible to accelerate the APIs.

> Could the major components, nova-api, neutron-server, glance-apiserver, etc. be built in a way to have one process for all of them, and combine the upgrade steps such that there is also one db-sync for the entire constellation?

I like Zane's idea of combining services on the Compute Node.

> The idea would be to take the Constellations idea one step farther. That the Projects would deliver python libraries (and a binary for stand-alone operation). Constellations would actually provide a code deliverable, not just a reference architecture, combining the libraries together into a single usable entity. Operators would most likely consume the Constellations version rather than the individual Project versions.
>
> What do you think?
It won't hurt to provide a unified OpenStack command (and it's actually great stuff), and it should not break anything for the API. Maybe there is just one more API service, call it the OpenStack API service, and it is up to the teams to decide whether to provide a plugin or not. I think we will eventually reach the goal this way.

> Thanks,
> Kevin

--
May The Force of OpenStack Be With You,
Rico Lin
irc: ricolin
+ attachment_id) Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server VolumeAttachmentNotFound: Volume attachment could not be found with filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb. Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR oslo_messaging.rpc.server -- --------------------------------------------------------------------------------------------------- Sreeram Vancheeswaran System z Firmware - Openstack Development IBM Systems & Technology Lab, Bangalore, India Phone: +91 80 40660826 Mob: +91-9341411511 Email : sreeram at linux.vnet.ibm.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mchandras at suse.de Wed Apr 25 09:06:23 2018 From: mchandras at suse.de (Markos Chandras) Date: Wed, 25 Apr 2018 10:06:23 +0100 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer In-Reply-To: References: Message-ID: On 24/04/18 16:05, Jean-Philippe Evrard wrote: > Hi everyone, > > I’d like to propose Mohammed Naser [1] as a core reviewer for OpenStack-Ansible. > +2 -- markos SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg From muroi.masahito at lab.ntt.co.jp Wed Apr 25 09:32:03 2018 From: muroi.masahito at lab.ntt.co.jp (Masahito MUROI) Date: Wed, 25 Apr 2018 18:32:03 +0900 Subject: [openstack-dev] [Blazar] Next IRC meeting is canceled Message-ID: Hi Blazar folks, As we discussed in the last meeting, the next weekly meeting is canceled because most of the members are out of town next week. The next regular meeting is on 8th May. best regards, Masahito From mihaela.balas at orange.com Wed Apr 25 11:07:31 2018 From: mihaela.balas at orange.com (mihaela.balas at orange.com) Date: Wed, 25 Apr 2018 11:07:31 +0000 Subject: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout Message-ID: <11302_1524654452_5AE06174_11302_207_1_2be855e5b8174bf397106775823399bf@orange.com> Hello, I am testing Octavia Queens and I see that the failover behavior is very different from the one in Ocata (the version we are currently running in production). One example of such behavior is: I create 4 load balancers and, after the creation is successful, I shut off all 8 amphoras. Sometimes, even though the health-manager agent cannot reach the amphoras, they are not deleted and re-created. The logs look like the ones shown below even when the heartbeat timeout has long passed. Sometimes the amphoras are deleted and re-created. Sometimes they are only partially re-created - some of them remain shut off. Heartbeat_timeout is set to 60 seconds. [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - - -] Could not connect to instance.
Retrying.: ConnectionError: HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with url: /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - - -] Could not connect to instance. Retrying.: ConnectionError: HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with url: /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 - - -] Could not connect to instance. Retrying.: ConnectionError: HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded with url: /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 - - -] Could not connect to instance. Retrying.: ConnectionError: HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded with url: /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 - - -] Could not connect to instance. Retrying.: ConnectionError: HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded with url: /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 113] No route to host',)) Thank you, Mihaela Balas _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Wed Apr 25 12:51:55 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Wed, 25 Apr 2018 14:51:55 +0200 Subject: [openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility Message-ID: Hi all, while working with the OpenStackSDK against my cloud I have found one inconsistency in Neutron LBaaS (yes, I know it is deprecated, but it is still used). The fix would be small and fast; unfortunately, the API descriptions contradict each other: - https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes that the LB pool has a *healthmonitor_id* attribute (which also matches the reality of my cloud) - https://developer.openstack.org/api-ref/network/v2/index.html#pools (which is referred to from the previous link in the deprecation note) describes that the LB pool has *healthmonitors* (and *healthmonitors_status*) as a list of IDs. In this regard it is basically the same as the https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description - unfortunately even https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc describes *Pool.healthmonitors* (however it also contains the https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json sample with *Pool.healthmonitor_id*) - OpenStackSDK contains *network.pool.health_monitors* (with an underscore) I want to bring all of this into order and enable managing load balancers through OSC for my OpenStack cloud, but I can't figure out what the correct behavior is here. Can anybody please help figure out the truth here? Thanks, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From opensrloo at gmail.com Wed Apr 25 13:09:20 2018 From: opensrloo at gmail.com (Ruby Loo) Date: Wed, 25 Apr 2018 09:09:20 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: References: Message-ID: Hi Mike, If we hold it, I'll (try to) be there :) Thanks for spearheading this! --ruby On Mon, Apr 23, 2018 at 8:04 AM, Michael Turek wrote: > Hey everyone! > > We had a bug day about two weeks ago and it went pretty well! At last > week's IRC meeting the idea of having one every month was thrown around. > > What does everyone think about having Bug Day the first Thursday of every > month? > > Thanks, > Mike Turek > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Apr 25 13:14:59 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 25 Apr 2018 15:14:59 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? Message-ID: Hi all, I'd like to restart the conversation on enabling node automated cleaning by default for the undercloud. This process wipes partitioning tables (optionally, all the data) from overcloud nodes each time they move to "available" state (i.e. on initial enrolling and after each tear down).
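For reference, on the ironic side this boils down to a couple of options in ironic.conf on the undercloud - roughly the following (a sketch only; the exact priorities and the undercloud plumbing are what the patch would need to sort out):

  [conductor]
  # wipe nodes on initial enrollment and after each tear down
  automated_clean = true

  [deploy]
  # skip the slow full-disk shredding step...
  erase_devices_priority = 0
  # ...but still wipe partition tables and metadata, which is fast
  erase_devices_metadata_priority = 10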
We have had it disabled for a few reasons: - it was not possible to skip the time-consuming wiping of data from disks - the way our workflows used to work required going between manageable and available steps several times However, having cleaning disabled has several issues: - a configdrive left from a previous deployment may confuse cloud-init - a bootable partition left from a previous deployment may take precedence in some BIOS - a UEFI boot partition left from a previous deployment is likely to confuse UEFI firmware - apparently ceph does not work correctly without cleaning (I'll defer to the storage team to comment) For these reasons we don't recommend having cleaning disabled, and I propose to re-enable it. It has the following drawbacks: - The default workflow will require another node boot, thus becoming several minutes longer (incl. the CI) - It will no longer be possible to easily restore a deleted overcloud node. What do you think? If I don't hear principal objections, I'll prepare a patch in the coming days. Dmitry From mriedemos at gmail.com Wed Apr 25 13:57:14 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 25 Apr 2018 08:57:14 -0500 Subject: [openstack-dev] [horizon][nova][cinder] Horizon support for multiattach volumes Message-ID: <6b8df777-e176-a028-b03a-a04319af3e40@gmail.com> I wanted to advertise the need for some help in adding multiattach volume support to Horizon. There is a blueprint tracking the changes [1]. I started the ball rolling with [2] but there is more work to do, listed in the work items section of the blueprint. [2] was I think my first real code contribution to Horizon and it wasn't terrible (thanks to Akihiro's patience), so I'm sure others could easily jump in here and slice this up if we have people looking for something to hack on. [1] https://blueprints.launchpad.net/horizon/+spec/multi-attach-volume [2] https://review.openstack.org/#/c/547856/ -- Thanks, Matt From mriedemos at gmail.com Wed Apr 25 14:10:52 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 25 Apr 2018 09:10:52 -0500 Subject: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500) In-Reply-To: <0df31812-aa57-324b-d21c-8576c0e21473@linux.vnet.ibm.com> References: <0df31812-aa57-324b-d21c-8576c0e21473@linux.vnet.ibm.com> Message-ID: <146764a3-421a-de37-f96c-6d22c1af0485@gmail.com> On 4/25/2018 3:32 AM, Sreeram Vancheeswaran wrote: > Hi team! > > We are currently facing an issue in our out-of-tree driver nova-dpm [1] > with nova and cinder on master, where instance launch in devstack is > failing due to communication/time-out issues in nova-cinder. We are > unable to get to the root cause of the issue and we need your help on > getting some hints/directions to debug this issue further. > > --> From nova-compute service: BuildAbortException: Build of instance > aborted: Unable to update the attachment. (HTTP 500) from the > nova-compute server (detailed logs here [2]). > > --> From cinder-volume service: ERROR oslo_messaging.rpc.server > VolumeAttachmentNotFound: Volume attachment could not be found with > filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb.
> [snip -- the remainder of the quoted message (the debugging steps, references, and the full nova-compute and cinder-volume tracebacks) is identical to Sreeram's original mail earlier in the thread]
> -- > --------------------------------------------------------------------------------------------------- > Sreeram Vancheeswaran > System z Firmware - Openstack Development > IBM Systems & Technology Lab, Bangalore, India > Phone: +91 80 40660826 Mob: +91-9341411511 > Email :sreeram at linux.vnet.ibm.com > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > You're doing boot from volume so nova-api should be creating the volume attachment record [1] and then nova-compute is updating the attachment with the compute host connector, which also creates the export in the backend storage via cinder. For whatever reason, the attachment_id nova-compute is passing to cinder is not found, but I wouldn't know why. You'll likely need to trace the request through the nova-api, nova-compute, cinder-api and cinder-volume logs, and trace 266ef7e1-4735-40f1-b704-509472f565cb which is the attachment ID. Like I said, nova-api creates it, stores it in the block_device_mappings table, and references it later in nova-compute when actually attaching the volume to the instance on the compute host. The fact you're getting down to cinder-volume does mean that when nova-compute called cinder-api to update the volume attachment, cinder-api found the attachment in the database; otherwise it would return a 404 response to nova-compute. Maybe you're hitting some weird race? It's also weird that cinder-api is hitting an RPC messaging timeout even though cinder-volume clearly failed; that should be raised back up to cinder-api and spewed back to the caller (nova-compute) as a 500 error. Also, I should probably confirm: are you booting from an existing volume, or booting from an image or volume snapshot where nova-compute then creates the volume in Cinder and then attaches it to the server? If so, that flow doesn't yet create volume attachment records, which is what patch [2] is for. [1] https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/compute/api.py#L3830 [2] https://review.openstack.org/#/c/541420/ -- Thanks, Matt From fungi at yuggoth.org Wed Apr 25 14:13:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 25 Apr 2018 14:13:42 +0000 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> Message-ID: <20180425141341.pttdwrfkijfdkj5q@yuggoth.org> On 2018-04-25 14:12:00 +0800 (+0800), Rico Lin wrote: [...] > I believe combining API services into one service will be able to > scale much easier. As we are already starting from providing multiple > services and binding with Apache (also concerning Zane's > comment), we can start this goal by providing a unified API > service architecture (or start with a new oslo api service).
If we > reduce the differences between the API service implementations in > each OpenStack service first, maybe it will make it easier to manage > or upgrade (since we unified the package requirements) and even > possible to accelerate the APIs. [...] How do you see this as being either similar to or different from the https://git.openstack.org/cgit/openstack/oaktree/tree/README.rst effort which is currently underway? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gr at ham.ie Wed Apr 25 14:14:06 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 25 Apr 2018 15:14:06 +0100 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? In-Reply-To: <90abe52b-d7d3-4d51-9b65-a21e499b4e85@nemebean.com> References: <90abe52b-d7d3-4d51-9b65-a21e499b4e85@nemebean.com> Message-ID: <5c9e0a69-546f-89f0-03d5-a9d36e9763e5@ham.ie> On 24/04/18 16:55, Ben Nemec wrote: > I prefer 14:00 to 22:00 UTC, although depending on the time of year I > may have some flexibility on that. > > On 04/24/2018 01:37 AM, Erik Olof Gunnar Andersson wrote: >> I can do anytime ranging from 16:00 UTC to 03:00 UTC, Mon-Fri, maybe >> up to 07:00 UTC assuming that it's once bi-weekly. >> >> ------------------------------------------------------------------------ >> *From:* Jens Harbott >> *Sent:* Monday, April 23, 2018 10:49:25 PM >> *To:* OpenStack Development Mailing List (not for usage questions) >> *Subject:* Re: [openstack-dev] [designate] Meeting Times - change to >> office hours? >> 2018-04-23 13:11 GMT+02:00 Graham Hayes : >>> Hi All, >>> >>> We moved our meeting time to 14:00UTC on Wednesdays, but attendance >>> has been low, and it is also the middle of the night for one of our >>> cores. >>> >>> I would like to suggest we have an office hours style meeting, with >>> one in the UTC evening and one in the UTC morning. >>> >>> If this seems reasonable - when and what frequency should we do >>> them? What times suit the current set of contributors? >> >> My preferred range would be 06:00UTC-14:00UTC, Mon-Thu, though >> extending a couple of hours in either direction might be possible for >> me, too. >> >> If we do alternating times, with the current amount of work happening >> we maybe could make each of them monthly, so we end up with a roughly >> bi-weekly schedule. >> >> I also have a slight preference for continuing to use one of the >> meeting channels as opposed to meeting in the designate channel, if >> that is what "office hours style meeting" is meant to imply. >> I think a bi-weekly meeting, alternating between UTC morning and evening, is a good idea. I do like the meeting channels, so I think we should keep them. Thanks, Graham -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From james.slagle at gmail.com Wed Apr 25 14:26:47 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 25 Apr 2018 10:26:47 -0400 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur wrote: > Hi all, > > I'd like to restart the conversation on enabling node automated cleaning by > default for the undercloud. This process wipes partitioning tables > (optionally, all the data) from overcloud nodes each time they move to > "available" state (i.e.
on initial enrolling and after each tear down). > > We have had it disabled for a few reasons: > - it was not possible to skip the time-consuming wiping of data from disks > - the way our workflows used to work required going between manageable and > available steps several times > > However, having cleaning disabled has several issues: > - a configdrive left from a previous deployment may confuse cloud-init > - a bootable partition left from a previous deployment may take precedence > in some BIOS > - a UEFI boot partition left from a previous deployment is likely to > confuse UEFI firmware > - apparently ceph does not work correctly without cleaning (I'll defer to > the storage team to comment) > > For these reasons we don't recommend having cleaning disabled, and I propose > to re-enable it. > > It has the following drawbacks: > - The default workflow will require another node boot, thus becoming several > minutes longer (incl. the CI) > - It will no longer be possible to easily restore a deleted overcloud node. I'm trending towards -1, for these exact reasons you list as drawbacks. There has been no shortage of occurrences of users who have ended up with accidentally deleted overclouds. These are usually caused by user error or unintended/unpredictable Heat operations. Until we have a way to guarantee that Heat will never delete a node, or Heat is entirely out of the picture for Ironic provisioning, I'd prefer that we didn't enable automated cleaning by default. I believe we had done something with policy.json at one time to prevent node delete, but I don't recall if that protected against both user-initiated actions and Heat actions. And even that was not enabled by default. IMO, we need to keep "safe" defaults, even if it means manually documenting that you should clean to prevent the issues you point out above. The alternative is to have no way to recover deleted nodes by default. -- -- James Slagle -- From pabelanger at redhat.com Wed Apr 25 14:26:58 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Wed, 25 Apr 2018 10:26:58 -0400 Subject: [openstack-dev] [all] Gerrit server replacement scheduled for May 2nd 2018 In-Reply-To: <20180419154912.GA13701@localhost.localdomain> References: <20180410184829.GA16085@localhost.localdomain> <20180419154912.GA13701@localhost.localdomain> Message-ID: <20180425142658.GA7028@localhost.localdomain> On Thu, Apr 19, 2018 at 11:49:12AM -0400, Paul Belanger wrote: Hello from Infra. This is our weekly reminder of the upcoming gerrit replacement. We'll continue to send these announcements out up until the day of the migration. We are now one week away from the replacement date. If you have any questions, please contact us in #openstack-infra. --- It's that time again... on Wednesday, May 02, 2018 20:00 UTC, the OpenStack Project Infrastructure team is upgrading the server which runs review.openstack.org to Ubuntu Xenial, and that means a new virtual machine instance with new IP addresses assigned by our service provider. The new IP addresses will be as follows: IPv4 -> 104.130.246.32 IPv6 -> 2001:4800:7819:103:be76:4eff:fe04:9229 They will replace these current production IP addresses: IPv4 -> 104.130.246.91 IPv6 -> 2001:4800:7819:103:be76:4eff:fe05:8525 We understand that some users may be running from egress-filtered networks with port 29418/tcp explicitly allowed to the current review.openstack.org IP addresses, and so are providing this information as far in advance as we can to allow them time to update their firewalls accordingly.
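For example, on a host that manages egress rules with iptables directly, the update might look something like this (a sketch only - adapt it to whatever firewall tooling and policy you actually use):

  # allow Gerrit SSH (29418/tcp) to the new review.openstack.org addresses
  iptables  -A OUTPUT -p tcp -d 104.130.246.32 --dport 29418 -j ACCEPT
  ip6tables -A OUTPUT -p tcp -d 2001:4800:7819:103:be76:4eff:fe04:9229 --dport 29418 -j ACCEPT
  # rules for the old addresses can be dropped once the migration completes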
Note that some users dealing with egress filtering may find it easier to switch their local configuration to use Gerrit's REST API via HTTPS instead, and the current release of git-review has support for that workflow as well. http://lists.openstack.org/pipermail/openstack-dev/2014-September/045385.html We will follow up with final confirmation in subsequent announcements. Thanks, Paul From mordred at inaugust.com Wed Apr 25 14:40:47 2018 From: mordred at inaugust.com (Monty Taylor) Date: Wed, 25 Apr 2018 16:40:47 +0200 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services Message-ID: Hi everybody, We've been working on navigating through an interesting situation over the past few months, but there isn't a top-level overview of what's going on with it. That's my bad - I've been telling AJaeger I was going to send an email out for a while. projects with test requirements on git repo urls of other projects ------------------------------------------------------------------ There are a bunch of projects that need, for testing purposes, to depend on other projects. The majority are either neutron or horizon plugins, but conceptually there is nothing neutron or horizon specific about the issue. The problem they're trying to deal with is that they are a plugin to a service and they need to be able to import code from the service they are a plugin to in their unit tests. To make things even more complicated, some of the plugins actually depend on each other for real, not just as a "we need this for testing" dependency. There is trouble in paradise though - which is that we don't allow git urls in requirements files. To work around this, the projects in question added additional pip install lines to a tox_install.sh script - essentially bypassing the global-requirements process and system completely. This went unnoticed in a general sense until we started working through removing the use of zuul-cloner, which is not needed any longer in Zuul v3.
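To make that concrete, the workaround looks roughly like this in an affected repo's tools/tox_install.sh (a simplified sketch from memory - the real scripts vary a bit from repo to repo):

  ZUUL_CLONER=/usr/zuul-env/bin/zuul-cloner
  if [ -x "$ZUUL_CLONER" ]; then
      # in CI: clone neutron from git so that Depends-On is honored
      $ZUUL_CLONER --cache-dir /opt/git git://git.openstack.org openstack/neutron
      pip install -e openstack/neutron
  else
      # locally: install straight from git, bypassing global-requirements
      pip install -U -e "git+https://git.openstack.org/openstack/neutron#egg=neutron"
  fi
  # then install the actual (test-)requirements
  pip install -U "$@"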
People have been adding jobs like cross-something-something or something-tips in an ad-hoc manner for a while - and in many cases the git parts of that were actually somewhat not correct - so this is an attempt to provide the thing people want in those scenarios in a consistent manner. But it always should be helper logic for more complex gate jobs, not as a de-facto part of a project's basic install. Current Approach is wrong ------------------------ Unfortunately, as part of trying to unwind the plugins situation, we've walked ourselves into a situation where the gate is the only thing that has the correct installation information for some projects, and that's not good. From a networking plugin approach the "depend on release and use tox-siblings" assumes that 'depend on release of neutron' is or can be the common case with the ability to add a second tox job to check master against master. If that's not a real thing, then depending on releases + tox_siblings in the gate is solving the wrong problem. Specific Suggestions -------------------- As there are a few different scenarios, I want to suggest we do a few different things. * Prefer interface libraries on PyPI that projects depend on Like python-openstackclient and osc-lib, this is the *best* approach for projects with plugins. Such interface libraries need to be able to do intermediate releases - and those intermediate releases need to not break the released version of the projects. This is the hardest and longest thing to do as well, so it's most likely to be a multi-cycle effort. * Treat inter-plugin depends as normal library depends If networking-bgpvpn depends on networking-bagpipe and networking-odl, then networking-bagpipe and networking-odl need to be released to PyPI just like any other library in OpenStack. These are real runtime dependencies. Yes, this is more coordination work, but it's work we do everywhere already and it's important. If we do that for inter-plugin depends, then the normal tox jobs should test against the most recent release of the other plugin, and people can make a -tips style job like the openstackclient-tox-py35-tips job to ALSO test that networking-bgpvpn works with tip of networking-odl. * Relax our rules about git repos in test-requirements.txt Introduce a whitelist of git repo urls, starting with: * https://git.openstack.org/openstack/neutron * https://git.openstack.org/openstack/horizon For the service projects that have plugins that need to test against the service they're intending to be used with in a real installation. For those plugin projects, actually put the git urls into test-requirements.txt. This will make the gate work AND local development work for the scenarios where the thing that is actually needed is always testing against tip of a corresponding service. * In the zuul jobs, add something similar to tox-siblings but before the initial install that will detect a git url that matches a locally checked out repo and will swap the local copy instead so that we don't have tox cloning directly in gate jobs. At this point, horizon and neutron plugin projects should be able to use normal tox jobs WITHOUT needing to list anything other than horizon and neutron themselves in required-projects, and they can also add project-specific -tips jobs that will add intra-plugin depends to their required-projects so that they can test both sides of the coin. 
Finally, and this is a thing we need broadly for OpenStack and not just neutron/horizon plugins: * Extract the tox-siblings logic into a standalone tool that can be installed and used from tox so that it's possible to replicate a -tips job locally. I've got this pretty much done and just need to get it finished up. As soon as it exists I'll update python-openstackclient's tox.ini file to use it - and people can cargo cult from there and/or we can work it up into a documented recipe for people. There is one more scenario or concern, which is that for the horizon plugins, without horizon in the requirements.txt file, we can be erroneously communicating to a deployer that they can be used standalone without horizon. For now I think we're going to have to solve that with a documentation note coupled with having the horizon repo link in the test requirements ... but it might be worth pondering what we could do to make this better. Perhaps for horizon because of that use-case we really should be modelling horizon as cycle-plus-intermediary and should make horizon plugins depend on horizon releases? I don't know that I know the full ramifications of making that choice - so for now I think the above approach (horizon git url in test-requirements) plus documentation is safer and gives us time to consider whether all the horizon plugin projects listing horizon in their requirements.txt is better or worse. Thoughts? Monty From johfulto at redhat.com Wed Apr 25 14:43:30 2018 From: johfulto at redhat.com (John Fulton) Date: Wed, 25 Apr 2018 10:43:30 -0400 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur wrote: > Hi all, > > I'd like to restart the conversation on enabling node automated cleaning by > default for the undercloud. This process wipes partitioning tables > (optionally, all the data) from overcloud nodes each time they move to > "available" state (i.e. on initial enrolling and after each tear down). >
> +1 John [1] http://docs.ceph.com/docs/hammer/man/8/ceph-disk/ > > Dmitry > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ihrachys at redhat.com Wed Apr 25 14:46:10 2018 From: ihrachys at redhat.com (Ihar Hrachyshka) Date: Wed, 25 Apr 2018 07:46:10 -0700 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> Message-ID: ONOS is not part of Neutron and hence Neutron Release team should not be involved in its matters. If gerrit ACLs say otherwise, you should fix the ACLs. Ihar On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin wrote: > Dear Neutron-Release team members, > > Can any of you handle the issue below? > > Thank you so much for your help in advance. > > Sangho > > >> On 20 Apr 2018, at 10:01 AM, Sangho Shin wrote: >> >> Dear Neutron-Release team, >> >> I wonder if any of you can add me to the network-onos-release member. >> It seems that Vikram is busy. :-) >> >> Thank you, >> >> Sangho >> >> >> >>> On 19 Apr 2018, at 9:18 AM, Sangho Shin wrote: >>> >>> Ian, >>> >>> Thank you so much for your help. >>> I have requested Vikram to add me to the release team. >>> He should be able to help me. :-) >>> >>> Sangho >>> >>> >>>> On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote: >>>> >>>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote: >>>>> By the way, since the networking-onos-release group has no neutron >>>>> release team group, I think infra team can help to include neutron >>>>> release team and neutron release team can help to create branches >>>>> for the repo if there is no reponse from current >>>>> networking-onos-release group member. >>>> >>>> This seems sane and I've added neutron-release to >>>> networking-onos-release. >>>> >>>> I'm hesitant to give advice on branching within a project like neutron >>>> as I'm sure there's stuff I'm not aware of; but members of the >>>> neutron-release team should be able to get you going. >>>> >>>> Thanks, >>>> >>>> -i >>> >> > From sean.mcginnis at gmx.com Wed Apr 25 14:48:49 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 25 Apr 2018 09:48:49 -0500 Subject: [openstack-dev] [tc] campaign question: How should we handle projects with overlapping feature sets? In-Reply-To: <1524490775-sup-9488@lrrr.local> References: <1524490775-sup-9488@lrrr.local> Message-ID: <20180425144848.GA22839@sm-xps> > > Our current policy regarding Open Development is that a project > should cooperate with existing projects "rather than gratuitously > competing or reinventing the wheel." [1] The flexibility provided > by the use of the term "gratuitously" has allowed us to support > multiple solutions in the deployment and telemetry problem spaces. 
> At the same time it has left us with questions about how (and > whether) the community would be able to replace the implementation > of any given component with a new set of technologies by "starting > from scratch". > > Where do you draw the line at "gratuitous"? I'm sure I can be swayed in a lot of cases, but I think if a new project can show that there is a need for the overlap, or at least offer a reasonable explanation for it, then I would not consider it gratuitous. For example, if they were addressing a slightly different problem space that has some additional needs, but as part of meeting those needs they need to have a foundation or component of it that overlaps with existing functionality, then there may be some justification for the overlap. Ideally, I would first like to see if the project can just use the services of the other project and build on top of their APIs to add their additional functionality. But I know that is not always as easy as it would first appear, so I think if they can state why that would be impossible, or at least prohibitively difficult, then I think an overlap would be OK. > > What benefits and drawbacks do you see in supporting multiple tools > with similar features? > It definitely can cause confusion for downstream consumers. Either for those looking at which services to select for new deployments, or for consumers of those clouds in knowing what functionality is available to them and how they access it. Hopefully more clearly defined constellations would help with that. A blocker for me would be if the newer project attempted to emulate the API of the older project but was not able to provide 100% parity with the existing functionality. If there is overlap, it needs to be very clearly separated into a different (although maybe very similar) API and endpoint so we are not putting this complexity and need for service awareness on the end consumers of the services. > How would our community be different, in positive and negative ways, > if we were more strict about avoiding such overlap? > I think a positive could be that it stimulates more activity in a given area so that ultimately better and more feature-rich services are offered as part of OpenStack clouds. And as long as it is not just gratuitous, it could enable new use cases that are not currently possible or are outside the scope of any existing projects. I really liked the point that Chris made about it possibly revitalizing developers by having something new and exciting to work on. Or for those existing projects, maybe getting them excited to work on slightly different use cases or collaborating with this new project to look at ways they can work together. As far as negatives, I think it is very similar to what I pointed out above for deployers and users. It has the potential to cause some confusion in the community as to where certain functionality should live and where people should go if they need to interact with or use that functionality in their projects. One of the negatives brought up in the Glare discussion would be for other projects if they had to add conditional code to determine whether they are interacting with Glance or with Glare for images. I think that falls under the earlier points: there needs to be a clear separation and a focus on specific use cases so we do not end up with two options doing very similar things through APIs that are close but not compatible.
I would hope that we do not allow something like that to happen - at least without a very good reason for needing to do so. From fungi at yuggoth.org Wed Apr 25 14:54:26 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 25 Apr 2018 14:54:26 +0000 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: References: Message-ID: <20180425145425.edqqeax4qplwbyrc@yuggoth.org> On 2018-04-25 16:40:47 +0200 (+0200), Monty Taylor wrote: [...] > * Relax our rules about git repos in test-requirements.txt > > Introduce a whitelist of git repo urls, starting with: > > * https://git.openstack.org/openstack/neutron > * https://git.openstack.org/openstack/horizon > > For the service projects that have plugins that need to test against the > service they're intending to be used with in a real installation. For those > plugin projects, actually put the git urls into test-requirements.txt. This > will make the gate work AND local development work for the scenarios where > the thing that is actually needed is always testing against tip of a > corresponding service. [...] If this is limited to test-requirements.txt and doesn't spill over into requirements.txt then it _might_ be okay, though it still seems like we'd need some sort of transition around stable branch creation at release time to indicate which URLs should be used to correspond to those branches. We've been doing basically that with release-time edits to per-repo tox-install.sh scripts already, so maybe the same workflow can be reused on test-requirements.txt as well? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From dtantsur at redhat.com Wed Apr 25 14:55:46 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 25 Apr 2018 16:55:46 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On 04/25/2018 04:26 PM, James Slagle wrote: > On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur wrote: >> Hi all, >> >> I'd like to restart the conversation on enabling node automated cleaning by >> default for the undercloud. This process wipes partitioning tables >> (optionally, all the data) from overcloud nodes each time they move to >> "available" state (i.e. on initial enrolling and after each tear down). >> >> We have had it disabled for a few reasons: >> - it was not possible to skip time-consuming wiping of data from disks >> - the way our workflows used to work required going between manageable and >> available steps several times >> >> However, having cleaning disabled has several issues: >> - a configdrive left from a previous deployment may confuse cloud-init >> - a bootable partition left from a previous deployment may take precedence >> in some BIOS >> - a UEFI boot partition left from a previous deployment is likely to >> confuse UEFI firmware >> - apparently ceph does not work correctly without cleaning (I'll defer to >> the storage team to comment) >> >> For these reasons we don't recommend having cleaning disabled, and I propose >> to re-enable it. >> >> It has the following drawbacks: >> - The default workflow will require another node boot, thus becoming several >> minutes longer (incl. the CI) >> - It will no longer be possible to easily restore a deleted overcloud node. > > I'm trending towards -1, for these exact reasons you list as > drawbacks.
There has been no shortage of occurrences of users who have > ended up with accidentally deleted overclouds. These are usually > caused by user error or unintended/unpredictable Heat operations. > Until we have a way to guarantee that Heat will never delete a node, > or Heat is entirely out of the picture for Ironic provisioning, then > I'd prefer that we didn't enable automated cleaning by default. > > I believe we had done something with policy.json at one time to > prevent node delete, but I don't recall if that protected from both > user initiated actions and Heat actions. And even that was not enabled > by default. > > IMO, we need to keep "safe" defaults. Even if it means manually > documenting that you should clean to prevent the issues you point out > above. The alternative is to have no way to recover deleted nodes by > default. Well, it's not clear what is "safe" here: protect people who explicitly delete their stacks or protect people who don't realize that a previous deployment may screw up their new one in a subtle way. > > > > From sean.mcginnis at gmx.com Wed Apr 25 14:59:13 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 25 Apr 2018 09:59:13 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> Message-ID: <20180425145913.GB22839@sm-xps> > > > > > > > > I'd be more in favour of changing the zuul job to build with the '-W' > > > > flag. To be honest, there is no good reason to not have this flag > > > > enabled. I'm not sure that will be a popular opinion though as it may > > > > break some projects' builds (correctly, but still). > > > > > > > > I'll propose a patch against zuul-jobs and see what happens :) > > > > > > > > Stephen > > > > > > > > > > I am in favor of this too. We will probably need to give some teams some time > > > to get warnings fixed though. I haven't done any kind of extensive audit of > > > projects, but from a few I looked through, there are definitely a few that are > > > not erroring on warnings and are likely to be blocked if we suddenly flipped > > > the switch and errored on those. > > > > > > This is a legitimate issue though. In Cinder we had -W in the tox docs job, but > > > since that is no longer being enforced in the gate, running "tox -e docs" from > > > a fresh clone of master was failing. We really do need some way to enforce this > > > so things like that do not happen. > > > > This. While forcing work on teams to do busywork is undeniably A Very > > Bad Thing (TM), I do think the longer we leave this, the worse it'll > > get. The zuul-jobs [1] patch will probably introduce some pain for > > projects but it seems like inevitable pain and we're in the right part > > of the cycle in which to do something like this. I'd be willing to help > > projects fix issues they encounter, which I expect will be minimal for > > most projects. > > I too think enforcing -W is the way to go, so count me in for the > broken docs build help. > > Thanks for pushing this forward! > > Cheers, > pk > In support of this I have proposed [1]. 
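For context, the enforcement in question is just sphinx-build's warning-is-error flag. A rough sketch of the kind of docs environment affected (illustrative only, not the exact job definition; the deps line in particular varies per project):

    # tox.ini (sketch)
    [testenv:docs]
    deps = -r{toxinidir}/test-requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html

With -W, any Sphinx warning fails the build instead of scrolling past unnoticed.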
To make it easier to transition (since I'm pretty sure this will involve a lot of work by some projects) and since we want to eventually have everything run under Python 3, I have just proposed setting this flag as the default for the publish-openstack-sphinx-docs-python3 job template. Then projects can opt in as they are ready for both the warnings-as-errors and Python 3 support. I would love to hear if there are any concerns about doing things this way or if anyone has any better suggestions. Thanks! Sean [1] https://review.openstack.org/#/c/564232/ From Louie.Kwan at windriver.com Wed Apr 25 15:06:00 2018 From: Louie.Kwan at windriver.com (Kwan, Louie) Date: Wed, 25 Apr 2018 15:06:00 +0000 Subject: [openstack-dev] [masakari] Masakari Project Meeting time Message-ID: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com> Sampath, Dinesh and others, It was a good meeting last week. As briefly discussed with Sampath, I would like to check whether we can adjust the meeting time. We are at EST time zone, the meeting is right on our midnight time, 12:00 am. It will be nice if the meeting can be started ~2 hours earlier e.g. Could it be started at 02:00: UTC instead? Thanks. Louie From james.slagle at gmail.com Wed Apr 25 15:28:25 2018 From: james.slagle at gmail.com (James Slagle) Date: Wed, 25 Apr 2018 11:28:25 -0400 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur wrote: > On 04/25/2018 04:26 PM, James Slagle wrote: >> >> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >> wrote: >>> >>> Hi all, >>> >>> I'd like to restart conversation on enabling node automated cleaning by >>> default for the undercloud. This process wipes partitioning tables >>> (optionally, all the data) from overcloud nodes each time they move to >>> "available" state (i.e. on initial enrolling and after each tear down). >>> >>> We have had it disabled for a few reasons: >>> - it was not possible to skip time-consuming wiping if data from disks >>> - the way our workflows used to work required going between manageable >>> and >>> available steps several times >>> >>> However, having cleaning disabled has several issues: >>> - a configdrive left from a previous deployment may confuse cloud-init >>> - a bootable partition left from a previous deployment may take >>> precedence >>> in some BIOS >>> - an UEFI boot partition left from a previous deployment is likely to >>> confuse UEFI firmware >>> - apparently ceph does not work correctly without cleaning (I'll defer to >>> the storage team to comment) >>> >>> For these reasons we don't recommend having cleaning disabled, and I >>> propose >>> to re-enable it. >>> >>> It has the following drawbacks: >>> - The default workflow will require another node boot, thus becoming >>> several >>> minutes longer (incl. the CI) >>> - It will no longer be possible to easily restore a deleted overcloud >>> node. >> >> >> I'm trending towards -1, for these exact reasons you list as >> drawbacks. There has been no shortage of occurrences of users who have >> ended up with accidentally deleted overclouds. These are usually >> caused by user error or unintended/unpredictable Heat operations. >> Until we have a way to guarantee that Heat will never delete a node, >> or Heat is entirely out of the picture for Ironic provisioning, then >> I'd prefer that we didn't enable automated cleaning by default. 
>> >> I believe we had done something with policy.json at one time to >> prevent node delete, but I don't recall if that protected from both >> user initiated actions and Heat actions. And even that was not enabled >> by default. >> >> IMO, we need to keep "safe" defaults. Even if it means manually >> documenting that you should clean to prevent the issues you point out >> above. The alternative is to have no way to recover deleted nodes by >> default. > > > Well, it's not clear what is "safe" here: protect people who explicitly > delete their stacks or protect people who don't realize that a previous > deployment may screw up their new one in a subtle way. The latter you can recover from, the former you can't if automated cleaning is true. It's not just about people who explicitly delete their stacks (whether intentional or not). There could be user error (non-explicit) or side-effects triggered by Heat that could cause nodes to get deleted. You couldn't recover from those scenarios if automated cleaning were true. Whereas you could always fix a deployment error by opting in to do an automated clean. Does Ironic keep track of whether a node has been previously cleaned? Could we add a validation to check whether any nodes might be used in the deployment that were not previously cleaned? -- -- James Slagle -- From sreeram at linux.vnet.ibm.com Wed Apr 25 15:34:08 2018 From: sreeram at linux.vnet.ibm.com (Sreeram Vancheeswaran) Date: Wed, 25 Apr 2018 21:04:08 +0530 Subject: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500) In-Reply-To: <146764a3-421a-de37-f96c-6d22c1af0485@gmail.com> References: <0df31812-aa57-324b-d21c-8576c0e21473@linux.vnet.ibm.com> <146764a3-421a-de37-f96c-6d22c1af0485@gmail.com> Message-ID: On 25/04/18 7:40 PM, Matt Riedemann wrote: > On 4/25/2018 3:32 AM, Sreeram Vancheeswaran wrote: >> Hi team! >> >> We are currently facing an issue in our out-of-tree driver nova-dpm >> [1] with nova and cinder on master, where instance launch in devstack >> is failing due to communication/time-out issues in nova-cinder. We >> are unable to get to the root cause of the issue and we need your >> help on getting some hints/directions to debug this issue further. >> >> --> From nova-compute service: BuildAbortException: Build of instance >> aborted: Unable to update the attachment. (HTTP 500) from the >> nova-compute server (detailed logs here [2]). >> >> --> From cinder-volume service: ERROR oslo_messaging.rpc.server >> VolumeAttachmentNotFound: Volume attachment could not be found with >> filter: attachment_id = 266ef7e1-4735-40f1-b704-509472f565cb. >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server (detailed logs here [3]) >> >> Debugging steps done so far:- >> >> * Compared the package versions between the current devstack under >> test with the **last succeeding job in our CI system** (to be exact, >> it was for the patches https://review.openstack.org/#/c/458514/ and >> https://review.openstack.org/#/c/458820/); However the package >> versions for packages such as sqlalchemy, os-brick, oslo* are >> exactly the same in both the systems. >> * We used git bisect to revert nova and cinder projects to versions >> equal to or before the date of our last succeeding CI run; but still >> we were able to reproduce the same error. >> * Our guess is that the db "Save" operation during the update of >> volume attachment is failing.
But we are unable to trace/debug to >> that point in the rpc call; Any suggestions on how to debug this >> sceario would be really helpful. >> * We are running devstack master on Ubuntu 16.04.04 >> >> >> References >> >> [1] https://github.com/openstack/nova-dpm >> >> >> [2] Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR >> nova.volume.cinder [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 >> service nova] Update attachment failed for attachment >> 266ef7e1-4735-40f1-b704-509472f565cb. Error: Unable to update the >> attachment. (HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) Code: 500: ClientException: >> Unable to update the attachment. (HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Instance failed >> block device setup: ClientException: Unable to update the attachment. >> (HTTP 500) (Request-ID: req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most >> recent call last): >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/compute/manager.py", line 1577, in >> _prep_block_device >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> wait_func=self._await_block_device_map_created) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 828, in >> attach_block_devices >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] _log_and_attach(device) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 825, in >> _log_and_attach >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> bdm.attach(*attach_args, **attach_kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 46, in wrapped >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ret_val = >> method(obj, context, *args, **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 618, in attach >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] virt_driver, >> do_driver_attach) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py", >> line 274, in inner >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return f(*args, >> **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR 
nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 615, in >> _do_locked_attach >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> self._do_attach(*args, **_kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 600, in _do_attach >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] do_driver_attach) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/virt/block_device.py", line 514, in _volume_attach >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> self['mount_device'])['connection_info'] >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/volume/cinder.py", line 291, in wrapper >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] res = >> method(self, ctx, *args, **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/volume/cinder.py", line 327, in wrapper >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] res = >> method(self, ctx, attachment_id, *args, **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/volume/cinder.py", line 736, in attachment_update >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] 'code': >> getattr(ex, 'code', None)}) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line >> 220, in __exit__ >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> self.force_reraise() >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line >> 196, in force_reraise >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> six.reraise(self.type_, self.value, self.tb) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/volume/cinder.py", line 726, in attachment_update >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] attachment_id, >> _connector) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/cinderclient/v3/attachments.py", >> line 67, in update >> Apr 25 06:41:57 zos057 
nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] resp = >> self._update('/attachments/%s' % id, body) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/cinderclient/base.py", line >> 344, in _update >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] resp, body = >> self.api.client.put(url, body=body, **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line >> 206, in put >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return >> self._cs_request(url, 'PUT', **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line >> 191, in _cs_request >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return >> self.request(url, method, **kwargs) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/cinderclient/client.py", line >> 177, in request >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] raise >> exceptions.from_response(resp, body) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] ClientException: >> Unable to update the attachment. (HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [None req-751d4586-cd97-4a8f-9423-f2bc4b1f1269 service nova] >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Build of instance >> d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: Unable to update the >> attachment. (HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce): BuildAbortException: Build >> of instance d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: Unable to >> update the attachment. 
(HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] Traceback (most >> recent call last): >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/compute/manager.py", line 1839, in >> _do_build_and_run_instance >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> filter_properties, request_spec) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/compute/manager.py", line 2052, in >> _build_and_run_instance >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> bdms=block_device_mapping) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line >> 220, in __exit__ >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> self.force_reraise() >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/local/lib/python2.7/dist-packages/oslo_utils/excutils.py", line >> 196, in force_reraise >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> six.reraise(self.type_, self.value, self.tb) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/compute/manager.py", line 2004, in >> _build_and_run_instance >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> block_device_mapping) as resources: >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/usr/lib/python2.7/contextlib.py", line 17, in __enter__ >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] return >> self.gen.next() >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] File >> "/opt/stack/nova/nova/compute/manager.py", line 2211, in >> _build_resources >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> reason=e.format_message()) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] BuildAbortException: >> Build of instance d761da60-7bb1-415e-b5b9-eaaed124d6d2 aborted: >> Unable to update the attachment. 
(HTTP 500) (Request-ID: >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce) >> Apr 25 06:41:57 zos057 nova-compute[6190]: ERROR nova.compute.manager >> [instance: d761da60-7bb1-415e-b5b9-eaaed124d6d2] >> >> >> [3] Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments [req-f9f3364b-4dd8-4195-a60a-2f0e44c1f2ea >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce admin admin] Unable to >> update the attachment.: MessagingTimeout: Timed out waiting for a >> reply to message ID fe836528e2ea43edabe8004845837f4f >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments Traceback (most recent call last): >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/opt/stack/cinder/cinder/api/v3/attachments.py", line 228, in update >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments connector)) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/opt/stack/cinder/cinder/volume/api.py", line 2158, in >> attachment_update >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments attachment_ref.id)) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File "/opt/stack/cinder/cinder/rpc.py", >> line 187, in _wrapper >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments return f(self, *args, **kwargs) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/opt/stack/cinder/cinder/volume/rpcapi.py", line 442, in >> attachment_update >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments attachment_id=attachment_id) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", >> line 174, in call >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments retry=self.retry) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/transport.py", >> line 131, in _send >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments timeout=timeout, retry=retry) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", >> line 559, in send >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments retry=retry) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", >> line 548, in _send >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments result = self._waiter.wait(msg_id, >> timeout) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", >> line 440, in wait >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments message = self.waiters.get(msg_id, >> timeout=timeout) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> 
cinder.api.v3.attachments File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py", >> line 328, in get >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments 'to message ID %s' % msg_id) >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments MessagingTimeout: Timed out waiting for a >> reply to message ID fe836528e2ea43edabe8004845837f4f >> Apr 25 06:41:57 zos057 devstack at c-api.service[11490]: ERROR >> cinder.api.v3.attachments >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server [req-f9f3364b-4dd8-4195-a60a-2f0e44c1f2ea >> req-550f6b9c-7f22-4dce-adbe-f6843e0aa3ce admin None] Exception during >> message handling: VolumeAttachmentNotFound: Volume attachment could >> not be found with filter: attachment_id = >> 266ef7e1-4735-40f1-b704-509472f565cb. >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server Traceback (most recent call last): >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", >> line 163, in _process_incoming >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", >> line 220, in dispatch >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server return self._do_dispatch(endpoint, >> method, ctxt, args) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/usr/local/lib/python2.7/dist-packages/oslo_messaging/rpc/dispatcher.py", >> line 190, in _do_dispatch >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server result = func(ctxt, **new_args) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/opt/stack/cinder/cinder/volume/manager.py", line 4378, in >> attachment_update >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server connector) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/opt/stack/cinder/cinder/volume/manager.py", line 4349, in >> _connection_create >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server self.db.volume_attachment_update(ctxt, >> attachment.id, values) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/opt/stack/cinder/cinder/db/api.py", line 365, in >> volume_attachment_update >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server return >> IMPL.volume_attachment_update(context, attachment_id, values) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 182, in wrapper >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server return f(*args, **kwargs) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server File >> "/opt/stack/cinder/cinder/db/sqlalchemy/api.py", line 2674, in >> volume_attachment_update >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server filter='attachment_id = ' + attachment_id) >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server 
VolumeAttachmentNotFound: Volume attachment >> could not be found with filter: attachment_id = >> 266ef7e1-4735-40f1-b704-509472f565cb. >> Apr 25 06:42:47 zos057 cinder-volume[11475]: ERROR >> oslo_messaging.rpc.server >> >> >> -- >> --------------------------------------------------------------------------------------------------- >> >> Sreeram Vancheeswaran >> System z Firmware - Openstack Development >> IBM Systems & Technology Lab, Bangalore, India >> Phone: +91 80 40660826 Mob: +91-9341411511 >> Email :sreeram at linux.vnet.ibm.com >> >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > You're doing boot from volume so nova-api should be creating the > volume attachment record [1] and then nova-compute is updating the > attachment with the compute host connector, which also creates the > export in the backend storage via cinder. For whatever reason, the > attachment_id nova-compute is passing to cinder is not found, but I > wouldn't know why. You'll likely need to trace the request through the > nova-api, nova-compute, cinder-api and cinder-volume logs, and trace > 266ef7e1-4735-40f1-b704-509472f565cb which is the attachment ID. Like > I said, nova-api creates it, will store it in the > block_device_mappings table, and reference it later in nova-compute > when actually attaching the volume to the instance on the compute > host. The fact you're getting down to cinder-volume does mean that > when nova-compute called cinder-api to update the volume attachment, > cinder-api found the attachment in the database, otherwise it would > return a 404 response to nova-compute. Maybe you're hitting some weird > race? > > It's also weird that cinder-api is hitting an RPC messaging timeout > even though cinder-volume clearly failed, that should be raised back > up to cinder-api and spewed back to the caller (nova-compute) as a 500 > error. > > Also, I should probably confirm, are you booting from an existing > volume, or booting from an image or volume snapshot where nova-compute > then creates the volume in Cinder and then attaches it to the server? > If so, that flow doesn't yet create volume attachment records, which > is what patch [2] is for. > > [1] > https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/compute/api.py#L3830 > [2] https://review.openstack.org/#/c/541420/ Thank you so much Matt for the detailed steps. We are doing boot from image and are probably running into the issue mentioned in [2] in your email. -- --------------------------------------------------------------------------------------------------- Sreeram Vancheeswaran System z Firmware - Openstack Development IBM Systems & Technology Lab, Bangalore, India Phone: +91 80 40660826 Mob: +91-9341411511 Email : sreeram at linux.vnet.ibm.com From mriedemos at gmail.com Wed Apr 25 15:39:23 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 25 Apr 2018 10:39:23 -0500 Subject: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. 
(HTTP 500) In-Reply-To: References: <0df31812-aa57-324b-d21c-8576c0e21473@linux.vnet.ibm.com> <146764a3-421a-de37-f96c-6d22c1af0485@gmail.com> Message-ID: <24c5f6f9-893e-af58-5f89-4d9e39670441@gmail.com> On 4/25/2018 10:34 AM, Sreeram Vancheeswaran wrote: > Thank you so much Matt for the detailed steps.  We are doing boot from > image and are probably running into the issue mentioned in [2] in your > email. Hmm, OK, but that doesn't really make sense how you're going down this path [1] in the code because the API doesn't create a volume attachment record when booting from a volume where the source_type='image', so it should be going down the "legacy" attach flow where attachment_update is not called. Do you have some proprietary code in place that might be causing some problems? Otherwise we need to figure out how this is failing because it could be an issue in Queens. [1] https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/virt/block_device.py#L597 -- Thanks, Matt From doug at doughellmann.com Wed Apr 25 15:41:10 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 11:41:10 -0400 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180425145913.GB22839@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> Message-ID: <1524670685-sup-2247@lrrr.local> Excerpts from Sean McGinnis's message of 2018-04-25 09:59:13 -0500: > > > > > > > > > > I'd be more in favour of changing the zuul job to build with the '-W' > > > > > flag. To be honest, there is no good reason to not have this flag > > > > > enabled. I'm not sure that will be a popular opinion though as it may > > > > > break some projects' builds (correctly, but still). > > > > > > > > > > I'll propose a patch against zuul-jobs and see what happens :) > > > > > > > > > > Stephen > > > > > > > > > > > > > I am in favor of this too. We will probably need to give some teams some time > > > > to get warnings fixed though. I haven't done any kind of extensive audit of > > > > projects, but from a few I looked through, there are definitely a few that are > > > > not erroring on warnings and are likely to be blocked if we suddenly flipped > > > > the switch and errored on those. > > > > > > > > This is a legitimate issue though. In Cinder we had -W in the tox docs job, but > > > > since that is no longer being enforced in the gate, running "tox -e docs" from > > > > a fresh clone of master was failing. We really do need some way to enforce this > > > > so things like that do not happen. > > > > > > This. While forcing work on teams to do busywork is undeniably A Very > > > Bad Thing (TM), I do think the longer we leave this, the worse it'll > > > get. The zuul-jobs [1] patch will probably introduce some pain for > > > projects but it seems like inevitable pain and we're in the right part > > > of the cycle in which to do something like this. I'd be willing to help > > > projects fix issues they encounter, which I expect will be minimal for > > > most projects. > > > > I too think enforcing -W is the way to go, so count me in for the > > broken docs build help. > > > > Thanks for pushing this forward! > > > > Cheers, > > pk > > > > In support of this I have proposed [1]. 
To make it easier to transition (since > I'm pretty sure this will involve a lot of work by some projects) and since we > want to eventually have everything run under Python 3, I have just proposed > setting this flag as the default for the publish-openstack-sphinx-docs-python3 > job template. Then projects can opt in as they are ready for both the > warnings-as-errors and Python 3 support. > > I would love to hear if there are any concerns about doing things this way or > if anyone has any better suggestions. > > Thanks! > Sean > > [1] https://review.openstack.org/#/c/564232/ > The only concern I have is that it may slow the transition to the python 3 version of the jobs, since someone would have to actually fix the warnings before they could add the new job. I'm not sure I want to couple the tasks of fixing doc build warnings with also making those docs build under python 3 (which is usually quite simple). Is there some other way to enable this flag independently of the move to the python3 job? Doug From openstack at nemebean.com Wed Apr 25 15:47:50 2018 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 25 Apr 2018 10:47:50 -0500 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On 04/25/2018 10:28 AM, James Slagle wrote: > On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur wrote: >> On 04/25/2018 04:26 PM, James Slagle wrote: >>> >>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >>> wrote: >>>> >>>> Hi all, >>>> >>>> I'd like to restart conversation on enabling node automated cleaning by >>>> default for the undercloud. This process wipes partitioning tables >>>> (optionally, all the data) from overcloud nodes each time they move to >>>> "available" state (i.e. on initial enrolling and after each tear down). >>>> >>>> We have had it disabled for a few reasons: >>>> - it was not possible to skip time-consuming wiping if data from disks >>>> - the way our workflows used to work required going between manageable >>>> and >>>> available steps several times >>>> >>>> However, having cleaning disabled has several issues: >>>> - a configdrive left from a previous deployment may confuse cloud-init >>>> - a bootable partition left from a previous deployment may take >>>> precedence >>>> in some BIOS >>>> - an UEFI boot partition left from a previous deployment is likely to >>>> confuse UEFI firmware >>>> - apparently ceph does not work correctly without cleaning (I'll defer to >>>> the storage team to comment) >>>> >>>> For these reasons we don't recommend having cleaning disabled, and I >>>> propose >>>> to re-enable it. >>>> >>>> It has the following drawbacks: >>>> - The default workflow will require another node boot, thus becoming >>>> several >>>> minutes longer (incl. the CI) >>>> - It will no longer be possible to easily restore a deleted overcloud >>>> node. >>> >>> >>> I'm trending towards -1, for these exact reasons you list as >>> drawbacks. There has been no shortage of occurrences of users who have >>> ended up with accidentally deleted overclouds. These are usually >>> caused by user error or unintended/unpredictable Heat operations. >>> Until we have a way to guarantee that Heat will never delete a node, >>> or Heat is entirely out of the picture for Ironic provisioning, then >>> I'd prefer that we didn't enable automated cleaning by default. 
>>> >>> I believe we had done something with policy.json at one time to >>> prevent node delete, but I don't recall if that protected from both >>> user initiated actions and Heat actions. And even that was not enabled >>> by default. >>> >>> IMO, we need to keep "safe" defaults. Even if it means manually >>> documenting that you should clean to prevent the issues you point out >>> above. The alternative is to have no way to recover deleted nodes by >>> default. >> >> >> Well, it's not clear what is "safe" here: protect people who explicitly >> delete their stacks or protect people who don't realize that a previous >> deployment may screw up their new one in a subtle way. > > The latter you can recover from, the former you can't if automated > cleaning is true. > > It's not just about people who explicitly delete their stacks (whether > intentional or not). There could be user error (non-explicit) or > side-effects triggered by Heat that could cause nodes to get deleted. > > You couldn't recover from those scenarios if automated cleaning were > true. Whereas you could always fix a deployment error by opting in to > do an automated clean. Does Ironic keep track of it a node has been > previously cleaned? Could we add a validation to check whether any > nodes might be used in the deployment that were not previously > cleaned? Is there a way to only do cleaning right before a node is deployed? If you're about to write a new image to the disk then any data there is forfeit anyway. Since the concern is old data on the disk messing up subsequent deploys, it doesn't really matter whether you clean it right after it's deleted or right before it's deployed, but the latter leaves the data intact for longer in case a mistake was made. If that's not possible then consider this an RFE. :-) -Ben From doug at doughellmann.com Wed Apr 25 16:03:54 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 12:03:54 -0400 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: References: Message-ID: <1524671093-sup-8304@lrrr.local> Excerpts from Monty Taylor's message of 2018-04-25 16:40:47 +0200: > Hi everybody, > > We've been working on navigating through from an interesting situation > over the past few months, but there isn't a top-level overview of what's > going on with it. That's my bad - I've been telling AJaeger I was going > to send an email out for a while. > > projects with test requirements on git repo urls of other projects > ------------------------------------------------------------------ > > There are a bunch of projects that need, for testing purposes, to depend > on other projects. The majority are either neutron or horizon plugins, > but conceptually there is nothing neutron or horizon specific about the > issue. The problem they're trying to deal with is that they are a plugin > to a service and they need to be able to import code from the service > they are a plugin to in their unit tests. > > To make things even more complicated, some of the plugins actually > duepend on each other for real, not just as a "we need this for testing" > > There is trouble in paradise though - which is that we don't allow git > urls in requirements files. To work around this, the projects in > question added additional pip install lines to a tox_install.sh script - > essentially bypassing the global-requirements process and system > completely. 
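The kind of side-channel install being described looks roughly like this hypothetical tox_install.sh fragment (a sketch, not any particular repo's actual script):

    # tools/tox_install.sh (hypothetical sketch)
    # install the service from git before the regular requirements,
    # bypassing the global-requirements process entirely
    pip install -U -e "git+https://git.openstack.org/openstack/neutron#egg=neutron"
    pip install -U $*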
> > This went unnoticed in a general sense until we started working through > removing the use of zuul-cloner which is not needed any longer in Zuul v3. > > unwinding things > ---------------- > > There are a few different options, but it's important to keep in mind > that we ultimately want all of the following: > > * The code works > * Tests can run properly in CI > * "Depends-On" works in CI so that you can test changes cross-repo > * Tests can run properly locally for developers > * Deployment requirements are accurately communicated to deployers > > The approach so far > ------------------- > > The approach so far has been releasing service projects to PyPI and > reworking the projects to depend on those releases. > > This approach takes advantage of the tox-siblings feature in the gate to > ensure we're cross-testing master of projects with each other. > > tox-siblings > ----------- > > There is a feature in the Zuul tox jobs we refer to as "tox-siblings" > (this is because historically - wow we have historical context for zuul > v3 now - it was implemented as a separate role) What it does is ensure > that if you are running a tox job and you add additional projects to > required-projects in the job config, that the git versions of those > projects will be installed into the tox virtualenv - but only for > projects that would have been installed by tox otherwise. This way > required-projects is both safe to use and has the effect you'd expect. > > tox-siblings is intended to enable ADDITIONALLY cross-testing projects > that otherwise have a normal dependency relationship in the gate. People > have been adding jobs like cross-something-something or something-tips > in an ad-hoc manner for a while - and in many cases the git parts of > that were actually somewhat not correct - so this is an attempt to > provide the thing people want in those scenarios in a consistent manner. > But it always should be helper logic for more complex gate jobs, not as > a de-facto part of a project's basic install. > > Current Approach is wrong > ------------------------ > > Unfortunately, as part of trying to unwind the plugins situation, we've > walked ourselves into a situation where the gate is the only thing that > has the correct installation information for some projects, and that's > not good. > > From a networking plugin approach the "depend on release and use > tox-siblings" assumes that 'depend on release of neutron' is or can be > the common case with the ability to add a second tox job to check master > against master. > > If that's not a real thing, then depending on releases + tox_siblings in > the gate is solving the wrong problem. > > Specific Suggestions > -------------------- > > As there are a few different scenarios, I want to suggest we do a few > different things. > > * Prefer interface libraries on PyPI that projects depend on > > Like python-openstackclient and osc-lib, this is the *best* approach > for projects with plugins. Such interface libraries need to be able to > do intermediate releases - and those intermediate releases need to not > break the released version of the projects. This is the hardest and > longest thing to do as well, so it's most likely to be a multi-cycle effort. > > * Treat inter-plugin depends as normal library depends > > If networking-bgpvpn depends on networking-bagpipe and networking-odl, > then networking-bagpipe and networking-odl need to be released to PyPI > just like any other library in OpenStack. These are real runtime > dependencies. 
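Concretely, that would turn the inter-plugin relationship into an ordinary requirements entry (project names from the proposal; the version floors below are hypothetical):

    # requirements.txt of networking-bgpvpn under this proposal (sketch)
    networking-bagpipe>=8.0.0  # hypothetical minimum, released to PyPI
    networking-odl>=12.0.0     # hypothetical minimum, released to PyPI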
> > Yes, this is more coordination work, but it's work we do everywhere > already and it's important. > > If we do that for inter-plugin depends, then the normal tox jobs should > test against the most recent release of the other plugin, and people can > make a -tips style job like the openstackclient-tox-py35-tips job to > ALSO test that networking-bgpvpn works with tip of networking-odl. > > * Relax our rules about git repos in test-requirements.txt > > Introduce a whitelist of git repo urls, starting with: > > * https://git.openstack.org/openstack/neutron > * https://git.openstack.org/openstack/horizon > > For the service projects that have plugins that need to test against the > service they're intending to be used with in a real installation. For > those plugin projects, actually put the git urls into > test-requirements.txt. This will make the gate work AND local > development work for the scenarios where the thing that is actually > needed is always testing against tip of a corresponding service. How will having git URLs in the test-requirements list work with the constraints system, if we also have server projects like neutron and horizon in the global-requirements and upper-constraints lists? I wonder if it would be simpler to put those requirements into a separate file, which the check-requirements job would ignore and which could be installed without having the constraints applied. > > * In the zuul jobs, add something similar to tox-siblings but before the > initial install that will detect a git url that matches a locally > checked out repo and will swap the local copy instead so that we don't > have tox cloning directly in gate jobs. > > At this point, horizon and neutron plugin projects should be able to use > normal tox jobs WITHOUT needing to list anything other than horizon and > neutron themselves in required-projects, and they can also add > project-specific -tips jobs that will add intra-plugin depends to their > required-projects so that they can test both sides of the coin. > > Finally, and this is a thing we need broadly for OpenStack and not just > neutron/horizon plugins: > > * Extract the tox-siblings logic into a standalone tool that can be > installed and used from tox so that it's possible to replicate a -tips > job locally. I've got this pretty much done and just need to get it > finished up. As soon as it exists I'll update python-openstackclient's > tox.ini file to use it - and people can cargo cult from there and/or we > can work it up into a documented recipe for people. It really feels like we're (again) doing a lot of work to get around limitations of tox itself. Are there any changes in tox itself that would make this simpler? Are we sure tox is still the right tool for us? > There is one more scenario or concern, which is that for the horizon > plugins, without horizon in the requirements.txt file, we can be > erroneously communicating to a deployer that they can be used standlone > without horizon. For now I think we're going to have to solve that with This seems to apply to neutron plugins and neutron, too, right? Especially now with lower-constraints jobs in place, having plugins rely on features only available in unreleased versions of service projects doesn't make a lot of sense. We test that way *between* services using integration tests that use the REST APIs, but we also have some pretty strong stability requirements in place for those APIs. > a documentation note coupled with having the horizon repo link in the > test requirements ... 
> but it might be worth pondering what we could do to make this better.
> Perhaps for horizon, because of that use-case, we really should be
> modelling horizon as cycle-with-intermediary and should make horizon
> plugins depend on horizon releases? I don't know that I know the full
> ramifications of making that choice - so for now I think the above
> approach (horizon git url in test-requirements) plus documentation is
> safer, and gives us time to consider whether all the horizon plugin
> projects listing horizon in their requirements.txt is better or worse.
>
> Thoughts?

Another alternative is to put the plugins for things that don't provide
a stable API or interface library back into the repo with the things
they depend on, so that all of the tests can just run together in one
job. Either way we do it, the groups of people working on the different
things are going to need to figure out how to work together to make
their projects compatible, and combining the repos means less work and
complexity in the CI system.

Doug

From sreeram at linux.vnet.ibm.com  Wed Apr 25 16:11:12 2018
From: sreeram at linux.vnet.ibm.com (Sreeram Vancheeswaran)
Date: Wed, 25 Apr 2018 21:41:12 +0530
Subject: [openstack-dev] [nova] Help needed in debugging issue - ClientException: Unable to update the attachment. (HTTP 500)
In-Reply-To: <24c5f6f9-893e-af58-5f89-4d9e39670441@gmail.com>
References: <0df31812-aa57-324b-d21c-8576c0e21473@linux.vnet.ibm.com> <146764a3-421a-de37-f96c-6d22c1af0485@gmail.com> <24c5f6f9-893e-af58-5f89-4d9e39670441@gmail.com>
Message-ID: <5e828411-fbbd-3e85-58ef-cfeac8d35948@linux.vnet.ibm.com>

On 25/04/18 9:09 PM, Matt Riedemann wrote:
> On 4/25/2018 10:34 AM, Sreeram Vancheeswaran wrote:
>> Thank you so much Matt for the detailed steps. We are doing boot
>> from image and are probably running into the issue mentioned in [2]
>> in your email.
>
> Hmm, OK, but then it doesn't really make sense that you're going down
> this path [1] in the code, because the API doesn't create a volume
> attachment record when booting from a volume where the
> source_type='image', so it should be going down the "legacy" attach
> flow where attachment_update is not called.
>
> Do you have some proprietary code in place that might be causing some
> problems? Otherwise we need to figure out how this is failing because
> it could be an issue in Queens.
>
> [1] https://github.com/openstack/nova/blob/0a642e2eee8d430ddcccf2947aedcca1a0a0b005/nova/virt/block_device.py#L597

Yes, we have some proprietary code to copy an image onto a volume (as a
prototype for now), and probably that is causing the issue here; I will
debug/trace using the info you previously provided and figure out the
root cause. I will get back to you if there is some issue in the
"in-tree" code. Again, thank you so much for providing directions on
where to continue debugging.

--
---------------------------------------------------------------------------------------------------
Sreeram Vancheeswaran
System z Firmware - Openstack Development
IBM Systems & Technology Lab, Bangalore, India
Phone: +91 80 40660826 Mob: +91-9341411511
Email : sreeram at linux.vnet.ibm.com

From juliaashleykreger at gmail.com  Wed Apr 25 16:11:35 2018
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Wed, 25 Apr 2018 16:11:35 +0000
Subject: [openstack-dev] [ironic] Monthly bug day?
In-Reply-To: 
References: 
Message-ID: 

On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek wrote:
> What does everyone think about having Bug Day the first Thursday of
> every month?

All for it!
From miguel at mlavalle.com Wed Apr 25 16:13:43 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 25 Apr 2018 11:13:43 -0500 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> Message-ID: Hi, Ihar raises a valid issue. In the spirit of preventing this request from falling through the cracks, I reached out to Clark Boylan in the infra IRC channel ( http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-04-25.log.html#t2018-04-25T15:46:41). We decided to contact Vikram directly in the email account he has registered in Gerrit. I just sent him a message, copying Clark and Sangho. If he responds, Sangho can coordinate with him. If he doesn't after a week or two, then we can switch to Sangho, who is a member of the ONOS core team. Regards Miguel On Wed, Apr 25, 2018 at 9:46 AM, Ihar Hrachyshka wrote: > ONOS is not part of Neutron and hence Neutron Release team should not > be involved in its matters. If gerrit ACLs say otherwise, you should > fix the ACLs. > > Ihar > > On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin > wrote: > > Dear Neutron-Release team members, > > > > Can any of you handle the issue below? > > > > Thank you so much for your help in advance. > > > > Sangho > > > > > >> On 20 Apr 2018, at 10:01 AM, Sangho Shin > wrote: > >> > >> Dear Neutron-Release team, > >> > >> I wonder if any of you can add me to the network-onos-release member. > >> It seems that Vikram is busy. :-) > >> > >> Thank you, > >> > >> Sangho > >> > >> > >> > >>> On 19 Apr 2018, at 9:18 AM, Sangho Shin > wrote: > >>> > >>> Ian, > >>> > >>> Thank you so much for your help. > >>> I have requested Vikram to add me to the release team. > >>> He should be able to help me. :-) > >>> > >>> Sangho > >>> > >>> > >>>> On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote: > >>>> > >>>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote: > >>>>> By the way, since the networking-onos-release group has no neutron > >>>>> release team group, I think infra team can help to include neutron > >>>>> release team and neutron release team can help to create branches > >>>>> for the repo if there is no reponse from current > >>>>> networking-onos-release group member. > >>>> > >>>> This seems sane and I've added neutron-release to > >>>> networking-onos-release. > >>>> > >>>> I'm hesitant to give advice on branching within a project like neutron > >>>> as I'm sure there's stuff I'm not aware of; but members of the > >>>> neutron-release team should be able to get you going. > >>>> > >>>> Thanks, > >>>> > >>>> -i > >>> > >> > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Wed Apr 25 16:17:12 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 25 Apr 2018 16:17:12 +0000 Subject: [openstack-dev] [mistral] September PTG in Denver In-Reply-To: <20180423195823.GC17397@sm-xps> References: <20180423195823.GC17397@sm-xps> Message-ID: Hey Sean :) The reason why we picked the May 2nd date was so that people would know if they needed to register before the early bird pricing closes. If groups feel like they need more time to decide that's fine. It would still be helpful if those needing more time could fill the survey with the 'Maybe, Still Deciding' answer so I can circle back later for a hard 'Yes, Absolutely' or a 'No, Certainly Not' :) -Kendall (diablo_rojo) On Mon, Apr 23, 2018 at 12:58 PM Sean McGinnis wrote: > On Mon, Apr 23, 2018 at 07:32:40PM +0000, Kendall Nelson wrote: > > Hey Dougal, > > > > I think I had said May 2nd in my initial email asking about attendance. > If > > you can get an answer out of your team by then I would greatly appreciate > > it! If you need more time please let me know by then (May 2nd) instead. > > > > -Kendall (diablo_rojo) > > > > Do we need to collect this data for September already by the beginning of > May? > > Granted, the sooner we know details and can start planning, the better. > But as > I started looking over the survey, it just seems really early to predict > where > things will be 5 months from now. Especially considering we will have a > different set of PTLs for many projects by then, and it is too early for > some > of those hand off discussions to have started yet. > > Sean > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Apr 25 16:40:21 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 25 Apr 2018 16:40:21 +0000 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: <1524671093-sup-8304@lrrr.local> References: <1524671093-sup-8304@lrrr.local> Message-ID: <20180425164020.jrmlqlmxhpgasuoc@yuggoth.org> On 2018-04-25 12:03:54 -0400 (-0400), Doug Hellmann wrote: [...] > Especially now with lower-constraints jobs in place, having plugins > rely on features only available in unreleased versions of service > projects doesn't make a lot of sense. We test that way *between* > services using integration tests that use the REST APIs, but we > also have some pretty strong stability requirements in place for > those APIs. [...] This came up again a few days ago for sahara-dashboard. We talked through some obvious alternatives to keep its master branch from depending on an unreleased state of horizon and the situation today is that plugin developers have been relying on developing their releases in parallel with the services. Not merging an entire development cycle's worth of work until release day (whether that's by way of a feature branch or by just continually rebasing and stacking in Gerrit) would be a very painful workflow for them, and having to wait a full release cycle before they could start integrating support for new features in the service would be equally unfortunate. 
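One way to picture the workflow Jeremy describes is the git-URL approach
suggested earlier in the thread: a plugin's test-requirements.txt pulls
the service straight from git master rather than from a release. A
sketch of what such an entry might look like (standard pip syntax; treat
the exact line as illustrative rather than settled policy):

    # install the service from git master rather than a PyPI release
    -e git+https://git.openstack.org/openstack/horizon#egg=horizon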
As for merging the plugin and service repositories, they tend to be
developed by completely disparate teams, so that could require a fair
amount of political work to solve. Extracting the plugin interface into
a separate library which releases more frequently than the service does
indeed sound like the sanest option, but will also probably take quite
a while for some teams to achieve (I gather neutron-lib is getting
there, but I haven't heard about any work toward that end in Horizon
yet).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From sean.mcginnis at gmx.com  Wed Apr 25 16:55:43 2018
From: sean.mcginnis at gmx.com (Sean McGinnis)
Date: Wed, 25 Apr 2018 11:55:43 -0500
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <1524670685-sup-2247@lrrr.local>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local>
Message-ID: <20180425165543.GA459@sm-xps>

> >
> > [1] https://review.openstack.org/#/c/564232/
> >
>
> The only concern I have is that it may slow the transition to the
> python 3 version of the jobs, since someone would have to actually
> fix the warnings before they could add the new job. I'm not sure I
> want to couple the tasks of fixing doc build warnings with also
> making those docs build under python 3 (which is usually quite
> simple).
>
> Is there some other way to enable this flag independently of the move to
> the python3 job?
>
> Doug
>

I did consider just creating a whole new job definition. I could
probably do that instead, but my hope was that those proactive enough
to be moving to python 3 to run their jobs would also be proactive
enough to have already addressed doc job warnings.

We could do two separate jobs, then when everyone is ready, collapse it
back to one job. I was hoping to jump ahead a little though.

From zbitter at redhat.com  Wed Apr 25 16:56:07 2018
From: zbitter at redhat.com (Zane Bitter)
Date: Wed, 25 Apr 2018 12:56:07 -0400
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <1524670685-sup-2247@lrrr.local>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local>
Message-ID: <1210453e-e9af-ff50-6d61-af47afd47857@redhat.com>

On 25/04/18 11:41, Doug Hellmann wrote:
> Excerpts from Sean McGinnis's message of 2018-04-25 09:59:13 -0500:
>>>>>> I'd be more in favour of changing the zuul job to build with the '-W'
>>>>>> flag. To be honest, there is no good reason to not have this flag
>>>>>> enabled. I'm not sure that will be a popular opinion though as it may
>>>>>> break some projects' builds (correctly, but still).
>>>>>>
>>>>>> I'll propose a patch against zuul-jobs and see what happens :)
>>>>>>
>>>>>> Stephen
>>>>>
>>>>> I am in favor of this too. We will probably need to give some teams
>>>>> some time to get warnings fixed though.
I haven't done any kind of extensive audit of >>>>> projects, but from a few I looked through, there are definitely a few that are >>>>> not erroring on warnings and are likely to be blocked if we suddenly flipped >>>>> the switch and errored on those. >>>>> >>>>> This is a legitimate issue though. In Cinder we had -W in the tox docs job, but >>>>> since that is no longer being enforced in the gate, running "tox -e docs" from >>>>> a fresh clone of master was failing. We really do need some way to enforce this >>>>> so things like that do not happen. >>>> >>>> This. While forcing work on teams to do busywork is undeniably A Very >>>> Bad Thing (TM), I do think the longer we leave this, the worse it'll >>>> get. The zuul-jobs [1] patch will probably introduce some pain for >>>> projects but it seems like inevitable pain and we're in the right part >>>> of the cycle in which to do something like this. I'd be willing to help >>>> projects fix issues they encounter, which I expect will be minimal for >>>> most projects. >>> >>> I too think enforcing -W is the way to go, so count me in for the >>> broken docs build help. >>> >>> Thanks for pushing this forward! >>> >>> Cheers, >>> pk >>> >> >> In support of this I have proposed [1]. To make it easier to transition (since >> I'm pretty sure this will involve a lot of work by some projects) and since we >> want to eventually have everything run under Python 3, I have just proposed >> setting this flag as the default for the publish-openstack-sphinx-docs-python3 >> job template. Then projects can opt in as they are ready for both the >> warnings-as-errors and Python 3 support. >> >> I would love to hear if there are any concerns about doing things this way or >> if anyone has any better suggestions. >> >> Thanks! >> Sean >> >> [1] https://review.openstack.org/#/c/564232/ >> > > The only concern I have is that it may slow the transition to the > python 3 version of the jobs, since someone would have to actually > fix the warnings before they could add the new job. I'm not sure I > want to couple the tasks of fixing doc build warnings with also > making those docs build under python 3 (which is usually quite > simple). > > Is there some other way to enable this flag independently of the move to > the python3 job? The existing proposal is: https://review.openstack.org/559348 TL;DR if you still have a build_sphinx section in setup.cfg then defaults will remain the same, but when removing it as part of the transition to the new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't hurt to leave that section in place as long as you need, and you can still do the rest of the PTI conversion.) The hold-up is that the job in question is also potentially used by other Zuul users outside of OpenStack - including those who aren't using pbr at all (i.e. there's no setup.cfg). So we need to warn those folks to prepare. cheers, Zane. 
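For readers following along, the setup.cfg section Zane refers to is the
pbr/sphinx build_sphinx integration, which has its own switch for
treating warnings as errors. A minimal sketch of such a section - the
directory values are the common convention, shown here only as an
illustration, not as any particular project's actual configuration:

    [build_sphinx]
    source-dir = doc/source
    build-dir = doc/build
    all-files = 1
    warning-is-error = 1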
From sean.mcginnis at gmx.com Wed Apr 25 17:06:44 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 25 Apr 2018 12:06:44 -0500 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <1210453e-e9af-ff50-6d61-af47afd47857@redhat.com> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local> <1210453e-e9af-ff50-6d61-af47afd47857@redhat.com> Message-ID: <20180425170644.GB459@sm-xps> > >> > >>[1] https://review.openstack.org/#/c/564232/ > >> > > > >The only concern I have is that it may slow the transition to the > >python 3 version of the jobs, since someone would have to actually > >fix the warnings before they could add the new job. I'm not sure I > >want to couple the tasks of fixing doc build warnings with also > >making those docs build under python 3 (which is usually quite > >simple). > > > >Is there some other way to enable this flag independently of the move to > >the python3 job? > > The existing proposal is: > > https://review.openstack.org/559348 > > TL;DR if you still have a build_sphinx section in setup.cfg then defaults > will remain the same, but when removing it as part of the transition to the > new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't > hurt to leave that section in place as long as you need, and you can still > do the rest of the PTI conversion.) > > The hold-up is that the job in question is also potentially used by other > Zuul users outside of OpenStack - including those who aren't using pbr at > all (i.e. there's no setup.cfg). So we need to warn those folks to prepare. > > cheers, > Zane. > Ah, I had looked but did not find an existing proposal. Looks like that would work too. I am good either way, but I will leave my approach out there just as another option to consider. I'll abandon that if folks prefer this way. From doug at doughellmann.com Wed Apr 25 17:10:42 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 13:10:42 -0400 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180425165543.GA459@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local> <20180425165543.GA459@sm-xps> Message-ID: <1524676149-sup-4361@lrrr.local> Excerpts from Sean McGinnis's message of 2018-04-25 11:55:43 -0500: > > > > > > [1] https://review.openstack.org/#/c/564232/ > > > > > > > The only concern I have is that it may slow the transition to the > > python 3 version of the jobs, since someone would have to actually > > fix the warnings before they could add the new job. I'm not sure I > > want to couple the tasks of fixing doc build warnings with also > > making those docs build under python 3 (which is usually quite > > simple). > > > > Is there some other way to enable this flag independently of the move to > > the python3 job? > > > > Doug > > > > I did consider just creating a whole new job definition. 
I could probably do
> that instead, but my hope was that those proactive enough to be moving
> to python 3 to run their jobs would also be proactive enough to have
> already addressed doc job warnings.
>
> We could do two separate jobs, then when everyone is ready, collapse
> it back to one job. I was hoping to jump ahead a little though.

Transitioning jobs is a bit painful because the job definitions and
job templates are defined in separate places. If we don't want
this setting to be controlled from a file within the git repo, I
guess the most expedient thing is to go ahead and make this part
of the python 3 job transition.

Doug

From aspiers at suse.com  Wed Apr 25 17:15:42 2018
From: aspiers at suse.com (Adam Spiers)
Date: Wed, 25 Apr 2018 18:15:42 +0100
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: 
References: <1524491647-sup-1779@lrrr.local>
Message-ID: <20180425171542.xr747eusgft6cjmh@pacific.linksys.moosehall>

[BTW I hope it's not considered off-bounds for those of us who aren't
TC election candidates to reply within these campaign question threads
to responses from the candidates - but if so, let me know and I'll
shut up ;-) ]

Zhipeng Huang wrote:
>Culture wise, being too IRC-centric is definitely not helping, from my
>own experience getting new Cyborg developers to join our weekly meeting
>from China. Well, we could always argue it is part of an open
>source/hacker culture and preferable to commercial solutions that have
>the constant risk of suddenly being shut down someday. But as OpenStack
>becomes more commercialized and widely adopted, we should be aware that
>more and more (potential) contributors will come from groups who are
>used to non-strictly open source environments, such as product
>development teams which rely on a lot of "closed source" but easy to
>use software.
>
>The change? Use more video conferences, and more commercial tools that
>are preferred in certain regions. Stop being allergic to non-open
>source software and bring more capable but not hacker-culture-inclined
>contributors to the community.

I respectfully disagree :-)

>I know this is not a super welcome stance in the open source hacker
>culture. But if we want OpenStack to be able to sustain more developers
>and not have a mid-life crisis and then get pushed to the fringes, we
>need to start changing the hacker mindset.

I think that "the hacker mindset" is too ambiguous and generalized a
concept to be useful in framing justification for change. From where
I'm standing, the hacker mindset is a wonderful and valuable thing
which should be preserved.

However, if that conflicts with other goals of our community, such as
reducing the barrier to entry, then yes, that is a valid concern. In
that case we should examine in more detail the specific aspects of
hacker culture which are discouraging potential new contributors, and
try to fix those, rather than jumping to the assumption that we should
instead switch to commercial tools. Given the community's "Four Opens"
philosophy and strong belief in the power of Open Source, it would be
inconsistent to abandon our preference for Open Source tools.

For example, proprietary tools such as Slack are not popular because
they are proprietary; they are popular because they have a very
intuitive interface and convenient features which people enjoy.
So when examining the specific question "What can we do to make it
easier for OpenStack newbies to communicate with the existing community
over a public instant messaging system?", the first question should not
be "Should we switch to a proprietary tool?", but rather "Is there an
open source tool which provides enough of the functionality we need?"

And in fact in the case of instant messaging, I believe the answer is
yes, as I previously pointed out:

http://lists.openstack.org/pipermail/openstack-sigs/2018-March/000332.html

Similarly, there are plenty of great Open Source solutions for voice
and video communications.

I'm all for changing with the times and adapting workflows to harness
the benefits of more modern tools, but I think it's wrong to
automatically assume that this can only be achieved via proprietary
solutions.

From hjensas at redhat.com  Wed Apr 25 17:19:13 2018
From: hjensas at redhat.com (Harald Jensås)
Date: Wed, 25 Apr 2018 19:19:13 +0200
Subject: [openstack-dev] [Heat][TripleO] - Getting attributes of openstack resources not created by the stack for TripleO NetworkConfig.
In-Reply-To: <1defd9c4-e2ad-c2bb-0232-d1159ab0a2af@redhat.com>
References: <1524142764.4383.83.camel@redhat.com> <1defd9c4-e2ad-c2bb-0232-d1159ab0a2af@redhat.com>
Message-ID: <1524676753.4383.203.camel@redhat.com>

On Tue, 2018-04-24 at 16:12 -0400, Zane Bitter wrote:
> On 19/04/18 08:59, Harald Jensås wrote:
> > The problem is getting there using heat ...
>
> The real answer is to make everything explicit - create a Subnet
> resource and a Port resource and don't allow Neutron/Nova to make
> any decisions for you that would have the effect of hiding data that
> you need. However, since that's impractical in this particular
> case...
>

Yeah, I wish the ctlplane network in tripleo was defined in THT. But
since it's created by the undercloud installer, we are where we are.
Moving it is impractical for the same reasons migrating from server
resources with implicit ports is ...

Another non-tripleo use case is connecting an instance to a provider
network; in that case the network and subnet resources are beyond the
user's control. (An external resource, probably, but that runs into
the issues Zane mentions below.)

> > a couple of ideas:
> >
> > a) Use heat's ``external_resource`` to create a port resource,
> >    and then an external subnet resource. Then get the data
> >    from the external resources. We probably would have to make
> >    it possible for an ``external_resource`` to depend on the
> >    server resource, and verify that these resources have the
> >    required attributes.
>
> Yeah, I don't know why we don't allow depends_on for resources with
> external_id. (There's also a bug where we don't recognise
> dependencies contributed by any functions used in the external_id
> field, like get_resource or get_attr, even though we allow those
> functions.) Apparently somebody had a brain explosion at a design
> summit session that nobody remembers attending, and here we are :D
>
> The difficulty is that the fix should be tied to a template version,
> but the offending check is in the template-independent part of the
> code base.
>
> Nevertheless, a workaround is trivial:
>
>   ext_port:
>     type: OS::Neutron::Port
>     external_id: {get_attr: [<server>, addresses, <network name>, 0, port]}
>     metadata:
>       do_something_to_add_a_dependency: {get_resource: <server>}
>
> > b) Extend attributes of OS::Nova::Server (OS::Neutron::Port as
> >    well probably) to include the data.
> > If we do this we should probably aim for parity with what is made
> > available to clients getting the configuration from dhcp. (mtu,
> > dns_domain, dns_servers, prefixlen, gateway_ip, host_routes,
> > ipv6_address_mode, ipv6_ra_mode etc.)
>
> This makes sense to me. If we're allowing people to let Nova/Neutron
> make implicit choices for them then we also need to allow them to see
> the result.
>

I like this idea best as well. I will open an rfe against Heat.

> > c) Create a new heat function to read properties of any
> >    openstack resource, without having to make use of the
> >    external_resource in heat.
>
> I'm pretty -1 on this, because I think you want to have the same
> caching behaviour as a resource, not a function. At that point you're
> just implementing syntactic sugar that makes things _less_
> consistent, not to mention the enormous implementation hacks
> required.
>
> cheers,
> Zane.
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From fungi at yuggoth.org  Wed Apr 25 17:22:02 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 25 Apr 2018 17:22:02 +0000
Subject: [openstack-dev] Following the new PTI for document build, broken local builds
In-Reply-To: <1524676149-sup-4361@lrrr.local>
References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local> <20180425165543.GA459@sm-xps> <1524676149-sup-4361@lrrr.local>
Message-ID: <20180425172202.rs5bstl2zwa3s4gp@yuggoth.org>

On 2018-04-25 13:10:42 -0400 (-0400), Doug Hellmann wrote:
[...]
> Transitioning jobs is a bit painful because the job definitions and
> job templates are defined in separate places. If we don't want
> this setting to be controlled from a file within the git repo, I
> guess the most expedient thing is to go ahead and make this part
> of the python 3 job transition.

We could provide an experimental job or some basic instructions in
an announcement and just schedule a flag day to start enforcing
everywhere.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From fungi at yuggoth.org  Wed Apr 25 17:26:22 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 25 Apr 2018 17:26:22 +0000
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: <20180425171542.xr747eusgft6cjmh@pacific.linksys.moosehall>
References: <1524491647-sup-1779@lrrr.local> <20180425171542.xr747eusgft6cjmh@pacific.linksys.moosehall>
Message-ID: <20180425172622.6cwrgmvo7tiwo2ul@yuggoth.org>

On 2018-04-25 18:15:42 +0100 (+0100), Adam Spiers wrote:
> [BTW I hope it's not considered off-bounds for those of us who aren't
> TC election candidates to reply within these campaign question threads
> to responses from the candidates - but if so, let me know and I'll
> shut up ;-) ]
[...]
Not only are responses from everyone in the community welcome (and,
like many, I think we should be asking questions like this often
outside the context of election campaigning), but I wholeheartedly
agree with your stance on this topic and also strongly encourage you
to consider running for a seat on the TC in the future if you can
swing it.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From rico.lin.guanyu at gmail.com  Wed Apr 25 18:01:53 2018
From: rico.lin.guanyu at gmail.com (Rico Lin)
Date: Thu, 26 Apr 2018 02:01:53 +0800
Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier?
In-Reply-To: <20180425141341.pttdwrfkijfdkj5q@yuggoth.org>
References: <1524491647-sup-1779@lrrr.local> <9555e900-24e7-9a9f-5a09-b957faa44fc2@redhat.com> <1A3C52DFCD06494D8528644858247BF01C0B3990@EX10MBOX03.pnnl.gov> <9baf6a58-4092-7417-de14-7be4269d6dbc@openstack.org> <1A3C52DFCD06494D8528644858247BF01C0B3EB2@EX10MBOX03.pnnl.gov> <20180425141341.pttdwrfkijfdkj5q@yuggoth.org>
Message-ID: 

2018-04-25 22:13 GMT+08:00 Jeremy Stanley :
>
> On 2018-04-25 14:12:00 +0800 (+0800), Rico Lin wrote:
> [...]
> > I believe combining the API services into one service will let us
> > scale much more easily. As we already start from providing multiple
> > services bound to Apache (also noting Zane's comment), we can start
> > this goal by providing a unified API service architecture (or start
> > with a new oslo api service). If we first reduce the differences
> > between the API service implementations in each OpenStack service,
> > maybe that will make them easier to manage or upgrade (since we
> > unified the package requirements) and even make it possible to
> > accelerate the APIs.
> [...]
>
> How do you see this as being either similar to or different from the
> https://git.openstack.org/cgit/openstack/oaktree/tree/README.rst
> effort which is currently underway?

I think it's different from oaktree, since oaktree is an upper layer
which depends on the API services (allowing shade to connect to them),
and what I'm saying is to unify all the API servers.

An example would be what Tempest does for tests: Tempest provides
commands and tools to help you generate and run test cases, and each
service is only required to provide a plugin. So if the first step
(unifying) is complete, we can even focus on enhancing the API service
for all, and the cool part is that we only need to do it in a single
place for all projects. Think about what happens when Tempest tries to
enhance test performance (just do it and check that the gate is green).
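To make the plugin analogy concrete, below is a toy sketch of how such
a unified API server could discover per-service plugins, in the same
spirit as Tempest's plugin mechanism. Nothing here exists today: the
stevedore entry-point namespace and the plugin shape are invented
purely for illustration.

    # hypothetical unified API server discovering service plugins
    from stevedore import extension

    def load_api_plugins():
        # each service would publish an entry point in this invented
        # namespace, exposing its REST API routes to the shared server
        mgr = extension.ExtensionManager(
            namespace='openstack.api.plugins',
            invoke_on_load=True,
        )
        return {ext.name: ext.obj for ext in mgr}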
URL: From miguel at mlavalle.com Wed Apr 25 18:58:45 2018 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 25 Apr 2018 13:58:45 -0500 Subject: [openstack-dev] [neutron] Bugs deputy duty calendar Message-ID: Dear Neutrinos, I just rolled over the bugs deputy duty calendar. Please take a look and take note of your next duty week: https://wiki.openstack.org/wiki/Network/Meetings#Bug_deputy Best regards Miguel -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Apr 25 19:06:37 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 15:06:37 -0400 Subject: [openstack-dev] [Release-job-failures][release][reno] Release of openstack/reno failed In-Reply-To: References: Message-ID: <1524683157-sup-7086@lrrr.local> Excerpts from zuul's message of 2018-04-25 18:04:07 +0000: > Build failed. > > - release-openstack-python http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/release-openstack-python/d9d8142/ : SUCCESS in 4m 00s > - announce-release http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/announce-release/cf78acd/ : FAILURE in 2m 52s > - propose-update-constraints http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/propose-update-constraints/5cb80ad/ : SUCCESS in 2m 35s > I believe https://review.openstack.org/564317 addresses the failure in the announce script from the log above. Doug From doug at doughellmann.com Wed Apr 25 19:07:39 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 15:07:39 -0400 Subject: [openstack-dev] [Release-job-failures][release][reno] Release of openstack/reno failed In-Reply-To: <1524683157-sup-7086@lrrr.local> References: <1524683157-sup-7086@lrrr.local> Message-ID: <1524683243-sup-8082@lrrr.local> Excerpts from Doug Hellmann's message of 2018-04-25 15:06:37 -0400: > Excerpts from zuul's message of 2018-04-25 18:04:07 +0000: > > Build failed. > > > > - release-openstack-python http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/release-openstack-python/d9d8142/ : SUCCESS in 4m 00s > > - announce-release http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/announce-release/cf78acd/ : FAILURE in 2m 52s > > - propose-update-constraints http://logs.openstack.org/25/25ef4b82d1d21f6e0ab442405eeb8b12e2024fb1/release/propose-update-constraints/5cb80ad/ : SUCCESS in 2m 35s > > > > I believe https://review.openstack.org/564317 addresses the failure in > the announce script from the log above. > > Doug See http://lists.openstack.org/pipermail/release-announce/2018-April/004980.html for the output using that version of the script. Doug From e0ne at e0ne.info Wed Apr 25 19:31:53 2018 From: e0ne at e0ne.info (Ivan Kolodyazhny) Date: Wed, 25 Apr 2018 22:31:53 +0300 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: <20180425164020.jrmlqlmxhpgasuoc@yuggoth.org> References: <1524671093-sup-8304@lrrr.local> <20180425164020.jrmlqlmxhpgasuoc@yuggoth.org> Message-ID: Hi team, I'm speaking mostly from Horizon side, but it should be pretty the same for others. We started a discussion today at the Horizon's meeting but we don't have any decision now. For the current release cycle, it looks reasonable to test plugins over the latest master on gates. We're thinking to introduce horizon-lib but we need further discussions on it. 
Horizon follows stable policy and we try to do our best to not break any existing plugin. Unfortunately, due to some cross-projects miscommunications, there were some issues with plugins this cycle. I'm ready to work with plugins team to fix these issues asap. To prevent such issues in the future, I think it would be good to have cross-project jobs on Horizon's gates too. We should run at least plugins unit-tests against Horizon's proposed patch to make sure that we don't break anything in plugins. E.g. if we drop some deprecated feature or make an incompatible change, such job will notify us that we need to update a plugin first before merging patch to Horizon. I don't know what is the best way to implement such jobs so it would be good to get some inputs and help from Infra team here. Regards, Ivan Kolodyazhny, http://blog.e0ne.info/ On Wed, Apr 25, 2018 at 7:40 PM, Jeremy Stanley wrote: > On 2018-04-25 12:03:54 -0400 (-0400), Doug Hellmann wrote: > [...] > > Especially now with lower-constraints jobs in place, having plugins > > rely on features only available in unreleased versions of service > > projects doesn't make a lot of sense. We test that way *between* > > services using integration tests that use the REST APIs, but we > > also have some pretty strong stability requirements in place for > > those APIs. > [...] > > This came up again a few days ago for sahara-dashboard. We talked > through some obvious alternatives to keep its master branch from > depending on an unreleased state of horizon and the situation today > is that plugin developers have been relying on developing their > releases in parallel with the services. Not merging an entire > development cycle's worth of work until release day (whether that's > by way of a feature branch or by just continually rebasing and > stacking in Gerrit) would be a very painful workflow for them, and > having to wait a full release cycle before they could start > integrating support for new features in the service would be equally > unfortunate. > > As for merging the plugin and service repositories, they tend to be > developed by completely disparate teams so that could require a fair > amount of political work to solve. Extracting the plugin interface > into a separate library which releases more frequently than the > service does indeed sound like the sanest option, but will also > probably take quite a while for some teams to achieve (I gather > neutron-lib is getting there, but I haven't heard about any work > toward that end in Horizon yet). > -- > Jeremy Stanley > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Wed Apr 25 19:46:30 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 15:46:30 -0400 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? 
In-Reply-To: <20180425171542.xr747eusgft6cjmh@pacific.linksys.moosehall> References: <1524491647-sup-1779@lrrr.local> <20180425171542.xr747eusgft6cjmh@pacific.linksys.moosehall> Message-ID: <1524685555-sup-7538@lrrr.local> Excerpts from Adam Spiers's message of 2018-04-25 18:15:42 +0100: > [BTW I hope it's not considered off-bounds for those of us who aren't > TC election candidates to reply within these campaign question threads > to responses from the candidates - but if so, let me know and I'll > shut up ;-) ] Everyone should feel free to participate! Doug From doug at doughellmann.com Wed Apr 25 20:54:46 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Wed, 25 Apr 2018 16:54:46 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition Message-ID: <1524689037-sup-783@lrrr.local> It's time to talk about the next steps in our migration from python 2 to python 3. Up to this point we have mostly focused on reaching a state where we support both versions of the language. We are not quite there with all projects, as you can see by reviewing the test coverage status information at https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects Still, we need to press on to the next phase of the migration, which I have been calling "Python 3 first". This is where we use python 3 as the default, for everything, and set up the exceptions we need for anything that still requires python 2. To reach that stage, we need to: 1. Change the documentation and release notes jobs to use python 3. (The Oslo team recently completed this, and found that we did need to make a few small code changes to get them to work.) 2. Change (or duplicate) all functional test jobs to run under python 3. 3. Change the packaging jobs to use python 3. 4. Update devstack to use 3 by default and require setting a flag to use 2. (This may trigger other job changes.) At that point, all of our deliverables will be produced using python 3, and we can be relatively confident that if we no longer had access to python 2 we could still continue operating. We could also start updating deployment tools to use either python 3 or 2, so that users could actually deploy using the python 3 versions of services. Somewhere in that time frame our third-party CI systems will need to ensure they have python 3 support as well. After the "Python 3 first" phase is completed we should release one series using the packages built with python 3. Perhaps Stein? Or is that too ambitious? Next, we will be ready to address the prerequisites for "Python 3 only," which will allow us to drop Python 2 support. We need to wait to drop python 2 support as a community, rather than going one project at a time, to avoid doubling the work of downstream consumers such as distros and independent deployers. We don't want them to have to package all (or even a large number) of the dependencies of OpenStack twice because they have to install some services running under python 2 and others under 3. Ideally they would be able to upgrade all of the services on a node together as part of their transition to the new version, without ending up with a python 2 version of a dependency along side a python 3 version of the same package. The remaining items could be fixed earlier, but this is the point at which they would block us: 1. Fix oslo.service functional tests -- the Oslo team needs help maintaining this library. Alternatively, we could move all services to use cotyledon (https://pypi.org/project/cotyledon/). 2. 
2. Finish the unit test and functional test ports so that all of our
   tests can run under python 3 (this implies that the services all
   run under python 3, so there is no more porting to do).

Finally, after we have *all* tests running on python 3, we can
safely drop python 2.

We have previously discussed the end of the T cycle as the point at
which we would have all of those tests running, and if that holds
true we could reasonably drop python 2 during the beginning of the
U cycle, in late 2019 and before the 2020 cut-off point when upstream
python 2 support will be dropped.

I need some info from the deployment tool teams to understand whether
they would be ready to take the plunge during T or U and start
deploying only the python 3 version. Are there other upgrade issues
that need to be addressed to support moving from 2 to 3? Something
that might be part of the platform(s), rather than OpenStack itself?

What else have I missed in these phases? Other jobs? Other blocking
conditions?

Doug

From jimmy at openstack.org  Wed Apr 25 21:07:24 2018
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 25 Apr 2018 16:07:24 -0500
Subject: [openstack-dev] Summit Forum Schedule
Message-ID: <5AE0EE0C.1070400@openstack.org>

Hi everyone -

Please have a look at the Vancouver Forum schedule:
https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing
(also attached as a CSV)

The proposed schedule was put together by two members from the UC, TC
and Foundation. We do our best to avoid moving scheduled items around
as it tends to create a domino effect, but we do realize we might have
missed something. The schedule should generally be set, but if you see
a major conflict in either content or speaker availability, please
email speakersupport at openstack.org.

Thanks all,
Jimmy
-------------- next part --------------
A non-text attachment was scrubbed...
Name: Vancouver forum topic proposals - Community Review - Schedule.csv
Type: text/csv
Size: 3300 bytes
Desc: not available
URL: 

From kendall at openstack.org  Wed Apr 25 21:23:57 2018
From: kendall at openstack.org (Kendall Waters)
Date: Wed, 25 Apr 2018 16:23:57 -0500
Subject: [openstack-dev] Only a Few Hours Left Until Prices Increase - OpenStack Summit Vancouver
Message-ID: 

Hi everyone,

Friendly reminder that prices for the OpenStack Summit Vancouver will
be increasing TONIGHT at 11:59pm PT (April 26, 6:59 UTC). Register NOW
before the price increases!

Also, if you haven't booked your hotel yet, we still have a limited
number of reduced rate hotel rooms available here.

If you have any Summit-related questions, please contact
summit at openstack.org.

Cheers,
Kendall

Kendall Waters
OpenStack Marketing
kendall at openstack.org
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Wed Apr 25 21:40:37 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 25 Apr 2018 21:40:37 +0000
Subject: [openstack-dev] [all][tc] final stages of python 3 transition
In-Reply-To: <1524689037-sup-783@lrrr.local>
References: <1524689037-sup-783@lrrr.local>
Message-ID: <20180425214037.z4ncpc227bbsl452@yuggoth.org>

On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
[...]
> Still, we need to press on to the next phase of the migration, which
> I have been calling "Python 3 first". This is where we use python
> 3 as the default, for everything, and set up the exceptions we need
> for anything that still requires python 2.
[...]
It may be worth considering how this interacts with the switch of
our default test platform from Ubuntu 16.04 (which provides Python
3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
3.6 before we change most remaining jobs over to Python 3.x versions,
then it gives us a chance to spot differences between 3.5 and 3.6 at
that point. Given that the 14.04 to 16.04 migration, where we
attempted to allow projects to switch at their own pace, didn't go
so well, we're hoping to do a "big bang" migration instead for 18.04
and expect teams who haven't set up experimental jobs ahead of time
to work out remaining blockers after the flag day before they can go
back to business as usual. Since the 18.04 release is happening so
far into the Rocky cycle, we're likely to want to do that at the
start of Stein instead, when it will be less disruptive.

So I guess that raises the question: switch to Python 3.5 by default
for most jobs in Rocky and then have a potentially more disruptive
default platform switch with Python 3.5->3.6 at the beginning of
Stein, or wait until the default platform switch to move from Python
2.7 to 3.6 as the job default? I can see some value in each option.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From doug at doughellmann.com  Wed Apr 25 22:25:24 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Wed, 25 Apr 2018 18:25:24 -0400
Subject: [openstack-dev] [all][tc] final stages of python 3 transition
In-Reply-To: <20180425214037.z4ncpc227bbsl452@yuggoth.org>
References: <1524689037-sup-783@lrrr.local> <20180425214037.z4ncpc227bbsl452@yuggoth.org>
Message-ID: <1524694604-sup-6375@lrrr.local>

Excerpts from Jeremy Stanley's message of 2018-04-25 21:40:37 +0000:
> On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
> [...]
> > Still, we need to press on to the next phase of the migration, which
> > I have been calling "Python 3 first". This is where we use python
> > 3 as the default, for everything, and set up the exceptions we need
> > for anything that still requires python 2.
> [...]
>
> It may be worth considering how this interacts with the switch of
> our default test platform from Ubuntu 16.04 (which provides Python
> 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> 3.6 before we change most remaining jobs over to Python 3.x versions,
> then it gives us a chance to spot differences between 3.5 and 3.6 at
> that point. Given that the 14.04 to 16.04 migration, where we
> attempted to allow projects to switch at their own pace, didn't go
> so well, we're hoping to do a "big bang" migration instead for 18.04
> and expect teams who haven't set up experimental jobs ahead of time
> to work out remaining blockers after the flag day before they can go
> back to business as usual. Since the 18.04 release is happening so
> far into the Rocky cycle, we're likely to want to do that at the
> start of Stein instead, when it will be less disruptive.
>
> So I guess that raises the question: switch to Python 3.5 by default
> for most jobs in Rocky and then have a potentially more disruptive
> default platform switch with Python 3.5->3.6 at the beginning of
> Stein, or wait until the default platform switch to move from Python
> 2.7 to 3.6 as the job default? I can see some value in each option.

Does 18.04 include a python 2 option?

What is the target for completing the changeover?
The first or second milestone for Stein, or the end of the cycle?

It would be useful to have some input from the project teams who
have no unit or functional test jobs running for 3.5, since they
will have the most work to do to cope with the upgrade overall.

Who is coordinating Ubuntu upgrade work and setting up the
experimental jobs?

Doug

From cboylan at sapwetik.org  Wed Apr 25 22:35:21 2018
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 25 Apr 2018 15:35:21 -0700
Subject: [openstack-dev] [all][tc] final stages of python 3 transition
In-Reply-To: <1524694604-sup-6375@lrrr.local>
References: <1524689037-sup-783@lrrr.local> <20180425214037.z4ncpc227bbsl452@yuggoth.org> <1524694604-sup-6375@lrrr.local>
Message-ID: <1524695721.1862621.1350917328.39C000D3@webmail.messagingengine.com>

On Wed, Apr 25, 2018, at 3:25 PM, Doug Hellmann wrote:
> Excerpts from Jeremy Stanley's message of 2018-04-25 21:40:37 +0000:
> > On 2018-04-25 16:54:46 -0400 (-0400), Doug Hellmann wrote:
> > [...]
> > > Still, we need to press on to the next phase of the migration, which
> > > I have been calling "Python 3 first". This is where we use python
> > > 3 as the default, for everything, and set up the exceptions we need
> > > for anything that still requires python 2.
> > [...]
> >
> > It may be worth considering how this interacts with the switch of
> > our default test platform from Ubuntu 16.04 (which provides Python
> > 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> > 3.6 before we change most remaining jobs over to Python 3.x versions,
> > then it gives us a chance to spot differences between 3.5 and 3.6 at
> > that point. Given that the 14.04 to 16.04 migration, where we
> > attempted to allow projects to switch at their own pace, didn't go
> > so well, we're hoping to do a "big bang" migration instead for 18.04
> > and expect teams who haven't set up experimental jobs ahead of time
> > to work out remaining blockers after the flag day before they can go
> > back to business as usual. Since the 18.04 release is happening so
> > far into the Rocky cycle, we're likely to want to do that at the
> > start of Stein instead, when it will be less disruptive.
> >
> > So I guess that raises the question: switch to Python 3.5 by default
> > for most jobs in Rocky and then have a potentially more disruptive
> > default platform switch with Python 3.5->3.6 at the beginning of
> > Stein, or wait until the default platform switch to move from Python
> > 2.7 to 3.6 as the job default? I can see some value in each option.
>
> Does 18.04 include a python 2 option?

It does, https://packages.ubuntu.com/bionic/python2.7.

> What is the target for completing the changeover?

Previously we've tried to do the transition in the OpenStack release
that is under development when the LTS releases. However, we've offset
things a bit now, so that may not be as feasible. I would expect that
if we waited for the next cycle we would do it very early in that
cycle.

For the transition from python 3.5 on Xenial to 3.6 on Bionic we may
want to keep the python 3.5 jobs on Xenial but add non-voting python
3.6 jobs to every project running Xenial python 3.5 jobs. Then those
projects can toggle them to voting 3.6 jobs if/when they start working.
Then we can decide at a later time if continuing to support python 3.5
(and testing it) is worthwhile.
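As a rough illustration of what Clark describes, a project's Zuul
config during the transition might look something like the sketch
below. The job names are placeholders - whether a Bionic-based py36
job exists yet, and what it is called, is an assumption here, not a
settled fact:

    - project:
        check:
          jobs:
            - openstack-tox-py35
            - openstack-tox-py36:
                voting: false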
> It would be useful to have some input from the project teams who
> have no unit or functional test jobs running for 3.5, since they
> will have the most work to do to cope with the upgrade overall.
>
> Who is coordinating Ubuntu upgrade work and setting up the
> experimental jobs?

Paul Belanger has been doing much of the work to get the images up and
running and helping some projects start to run early jobs on the beta
images. I expect Paul would want to continue to carry the transition
through to the end.

Clark

From fungi at yuggoth.org  Wed Apr 25 22:43:44 2018
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 25 Apr 2018 22:43:44 +0000
Subject: [openstack-dev] [all][tc] final stages of python 3 transition
In-Reply-To: <1524694604-sup-6375@lrrr.local>
References: <1524689037-sup-783@lrrr.local> <20180425214037.z4ncpc227bbsl452@yuggoth.org> <1524694604-sup-6375@lrrr.local>
Message-ID: <20180425224344.y7lptgycmbmo22sj@yuggoth.org>

On 2018-04-25 18:25:24 -0400 (-0400), Doug Hellmann wrote:
[...]
> Does 18.04 include a python 2 option?

Yes, 2.7.15 if packages.ubuntu.com is to be believed.

> What is the target for completing the changeover? The first or
> second milestone for Stein, or the end of the cycle?

I would expect us to want to pull the trigger after whatever grace
period the cycle-trailing projects are comfortable with (but certainly
before the first milestone, I would think?).

> It would be useful to have some input from the project teams who
> have no unit or functional test jobs running for 3.5, since they
> will have the most work to do to cope with the upgrade overall.

Yes, it would, in my opinion anyway.

> Who is coordinating Ubuntu upgrade work and setting up the
> experimental jobs?

We have preliminary ubuntu-bionic images available already (officially
it doesn't release until tomorrow), and some teams have started to use
them for experimental or non-voting jobs:

http://codesearch.openstack.org/?q=ubuntu-bionic
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From sangho at opennetworking.org  Thu Apr 26 00:41:55 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Thu, 26 Apr 2018 09:41:55 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To: 
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org>
Message-ID: 

Miguel and Ihar,

Thank you so much for your help. For now, I have just allowed the
onos-core team to create references, which should allow me to create a
new stable branch, I believe. I am waiting for the config change to be
merged. :-)

Thank you,

Sangho

> On 25 Apr 2018, at 11:46 PM, Ihar Hrachyshka wrote:
>
> ONOS is not part of Neutron and hence the Neutron Release team should
> not be involved in its matters. If gerrit ACLs say otherwise, you
> should fix the ACLs.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Tushar.Patil at nttdata.com  Thu Apr 26 01:03:12 2018
From: Tushar.Patil at nttdata.com (Patil, Tushar)
Date: Thu, 26 Apr 2018 01:03:12 +0000
Subject: [openstack-dev] [masakari] Masakari Project Meeting time
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

>> We are at the EST time zone, and the meeting is right at our
>> midnight, 12:00 am.
>> It would be nice if the meeting could be started ~2 hours earlier,
>> e.g. could it be started at 02:00 UTC instead?

+1

Regards,
Tushar Patil

________________________________________
From: Kwan, Louie
Sent: Thursday, April 26, 2018 12:06:00 AM
To: Sampath Priyankara (samP); openstack-dev at lists.openstack.org
Subject: [openstack-dev] [masakari] Masakari Project Meeting time

Sampath, Dinesh and others,

It was a good meeting last week.

As briefly discussed with Sampath, I would like to check whether we can
adjust the meeting time.

We are at the EST time zone, and the meeting is right at our midnight,
12:00 am.

It would be nice if the meeting could be started ~2 hours earlier, e.g.
could it be started at 02:00 UTC instead?

Thanks.
Louie

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Disclaimer: This email and any attachments are sent in strictest
confidence for the sole use of the addressee and may contain legally
privileged, confidential, and proprietary data. If you are not the
intended recipient, please advise the sender by replying promptly to
this email and then delete and destroy this email and any attachments
without any further use, copying or forwarding.

From Dinesh.Bhor at nttdata.com  Thu Apr 26 01:08:59 2018
From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh)
URL: 

From Tushar.Patil at nttdata.com  Thu Apr 26 01:03:12 2018
From: Tushar.Patil at nttdata.com (Patil, Tushar)
Date: Thu, 26 Apr 2018 01:03:12 +0000
Subject: [openstack-dev] [masakari] Masakari Project Meeting time
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

>> We are in the EST time zone, and the meeting is right at our midnight time, 12:00 am.
>> It would be nice if the meeting could be started ~2 hours earlier, e.g. could it be started at 02:00 UTC instead?

+1

Regards,
Tushar Patil

________________________________________
From: Kwan, Louie
Sent: Thursday, April 26, 2018 12:06:00 AM
To: Sampath Priyankara (samP); openstack-dev at lists.openstack.org
Subject: [openstack-dev] [masakari] Masakari Project Meeting time

Sampath, Dinesh and others,

It was a good meeting last week.

As briefly discussed with Sampath, I would like to check whether we can adjust the meeting time.

We are in the EST time zone, and the meeting is right at our midnight time, 12:00 am.

It would be nice if the meeting could be started ~2 hours earlier, e.g. could it be started at 02:00 UTC instead?

Thanks.
Louie
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From Dinesh.Bhor at nttdata.com  Thu Apr 26 01:08:59 2018
From: Dinesh.Bhor at nttdata.com (Bhor, Dinesh)
Date: Thu, 26 Apr 2018 01:08:59 +0000
Subject: [openstack-dev] [masakari] Masakari Project Meeting time
In-Reply-To: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
References: <47EFB32CD8770A4D9590812EE28C977E962F4E0B@ALA-MBD.corp.ad.wrs.com>
Message-ID: <134F3A2C-7FB4-41FD-BEA8-A529EDA42BC9@nttdata.com>

+1

This time may not fit attendees who work in the IST time zone, as it will be 07:30 AM in the morning for them.

Thanks,
Dinesh

> On Apr 26, 2018, at 12:06 AM, Kwan, Louie wrote:
>
> Sampath, Dinesh and others,
>
> It was a good meeting last week.
>
> As briefly discussed with Sampath, I would like to check whether we can adjust the meeting time.
>
> We are in the EST time zone, and the meeting is right at our midnight time, 12:00 am.
>
> It would be nice if the meeting could be started ~2 hours earlier, e.g. could it be started at 02:00 UTC instead?
>
> Thanks.
> Louie
>
Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding.

From shu.mutow at gmail.com  Thu Apr 26 04:13:31 2018
From: shu.mutow at gmail.com (Shu M.)
Date: Thu, 26 Apr 2018 13:13:31 +0900
Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services
Message-ID: 

Hi folks,

> unwinding things
> ----------------
>
> There are a few different options, but it's important to keep in mind
> that we ultimately want all of the following:
>
> * The code works
> * Tests can run properly in CI
> * "Depends-On" works in CI so that you can test changes cross-repo
> * Tests can run properly locally for developers
> * Deployment requirements are accurately communicated to deployers

One more thing:
* Test environments in CI and locally should be as similar as possible.

To run tests successfully both in CI and locally, I tried adding a new testenv for local use to tox.ini (https://review.openstack.org/#/c/564099/4/tox.ini) as an alternative solution last night; this would be equivalent to adding a new requirements.txt for local checks. It seems to run fine, but it might introduce differences between the CI and local environments, so I cannot conclude it is the best way for now.

From the view of a horizon plugin developer, one of the issues with horizon and its plugins is the feature gap due to insufficient communication. Merging the development repositories may help with this issue, if horizon can separate each panel into plugins.

Thanks,
Shu Muto
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From daidv at vn.fujitsu.com  Thu Apr 26 04:31:44 2018
From: daidv at vn.fujitsu.com (daidv at vn.fujitsu.com)
Date: Thu, 26 Apr 2018 04:31:44 +0000
Subject: [openstack-dev] [Designate] Plan for OSM
Message-ID: 

Hi folks,

We tested and completed our process with the OVO migration in the Queens cycle.
Now we can continue with the OSM implementation for Designate.
Actually, we have pushed some patches related to OSM[1] and they are ready for review.
Please take a look at these patches.

[1] https://review.openstack.org/#/q/project:openstack/designate+status:open++Trigger-less

Thanks and best regards,
Dang Van Dai (Mr.)
Fujitsu Vietnam Ltd.

From sangho at opennetworking.org  Thu Apr 26 06:55:30 2018
From: sangho at opennetworking.org (Sangho Shin)
Date: Thu, 26 Apr 2018 15:55:30 +0900
Subject: [openstack-dev] [openstack-infra] How to take over a project?
In-Reply-To: 
References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org>
Message-ID: <796C866B-D207-476F-A4AC-11692E05CC68@opennetworking.org>

Ihar,

I tried to add the networking-onos-core group to the "Create Reference" permission using the gerrit UI, and it is registered as a new gerrit review.
But it seems that this is not the right process, according to the gerrit history of similar issues.
Can you please let me know how to change the project ACL?

Thank you,

Sangho

> On 25 Apr 2018, at 11:46 PM, Ihar Hrachyshka wrote:
>
> ONOS is not part of Neutron and hence Neutron Release team should not
> be involved in its matters. If gerrit ACLs say otherwise, you should
> fix the ACLs.
>
> Ihar
>
> On Tue, Apr 24, 2018 at 1:22 AM, Sangho Shin wrote:
>> Dear Neutron-Release team members,
>>
>> Can any of you handle the issue below?
>>
>> Thank you so much for your help in advance.
>> >> Sangho >> >> >>> On 20 Apr 2018, at 10:01 AM, Sangho Shin wrote: >>> >>> Dear Neutron-Release team, >>> >>> I wonder if any of you can add me to the network-onos-release member. >>> It seems that Vikram is busy. :-) >>> >>> Thank you, >>> >>> Sangho >>> >>> >>> >>>> On 19 Apr 2018, at 9:18 AM, Sangho Shin wrote: >>>> >>>> Ian, >>>> >>>> Thank you so much for your help. >>>> I have requested Vikram to add me to the release team. >>>> He should be able to help me. :-) >>>> >>>> Sangho >>>> >>>> >>>>> On 19 Apr 2018, at 8:36 AM, Ian Wienand wrote: >>>>> >>>>> On 04/19/2018 01:19 AM, Ian Y. Choi wrote: >>>>>> By the way, since the networking-onos-release group has no neutron >>>>>> release team group, I think infra team can help to include neutron >>>>>> release team and neutron release team can help to create branches >>>>>> for the repo if there is no reponse from current >>>>>> networking-onos-release group member. >>>>> >>>>> This seems sane and I've added neutron-release to >>>>> networking-onos-release. >>>>> >>>>> I'm hesitant to give advice on branching within a project like neutron >>>>> as I'm sure there's stuff I'm not aware of; but members of the >>>>> neutron-release team should be able to get you going. >>>>> >>>>> Thanks, >>>>> >>>>> -i >>>> >>> >> From thierry at openstack.org Thu Apr 26 08:09:53 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 26 Apr 2018 10:09:53 +0200 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: <796C866B-D207-476F-A4AC-11692E05CC68@opennetworking.org> References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> <796C866B-D207-476F-A4AC-11692E05CC68@opennetworking.org> Message-ID: Sangho Shin wrote: > Ihar, > > I tried to add netwokring-onos-core group to "Create Reference” permission using the gerrit UI, and it is registered as a new gerrit review. > But, it seems that it is not a right process, according to the gerrit history of the similar issues. > Can you please let me know how to change the project ACL? The ACLs are maintained in the openstack-infra/project-config repository. You need to propose a change to the ACL file at: gerrit/acls/openstack/networking-onos.config For more information on how to create and maintain projects, you can read the Infra manual at: https://docs.openstack.org/infra/manual/creators.html While it's geared towards creating NEW projects, the guide can be helpful at pointing to the right files and processes. -- Thierry Carrez (ttx) From sangho at opennetworking.org Thu Apr 26 08:31:24 2018 From: sangho at opennetworking.org (Sangho Shin) Date: Thu, 26 Apr 2018 17:31:24 +0900 Subject: [openstack-dev] [openstack-infra] How to take over a project? In-Reply-To: References: <0630C19B-9EA1-457D-A74C-8A7A03D96DD6@vmware.com> <2bb037db-61aa-4fec-a694-00eee36bbdab@gmail.com> <7a4390b1-2c4e-6600-4d93-167697ea9f12@redhat.com> <81B28CCD-93B2-4BC8-B2C5-50B0C5D2A972@opennetworking.org> <3C5A1D78-828F-4C6D-B3A1-B6597403233F@opennetworking.org> <0202894D-3C05-434F-A7F4-93678C7613FE@opennetworking.org> <796C866B-D207-476F-A4AC-11692E05CC68@opennetworking.org> Message-ID: <878A9369-F08D-4B03-BB1A-E29C368BF317@opennetworking.org> Thank you, Thierry I will follow that link. 
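For reference, a minimal sketch of the kind of stanza such a change might add to gerrit/acls/openstack/networking-onos.config, assuming the file follows the pattern of other networking-* ACL files (the group name here is illustrative; which group should own branch creation is exactly what the review will decide):

    [access "refs/heads/*"]
    create = group networking-onos-release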
Sangho

> On 26 Apr 2018, at 5:09 PM, Thierry Carrez wrote:
>
> Sangho Shin wrote:
>> Ihar,
>>
>> I tried to add the networking-onos-core group to the "Create Reference" permission using the gerrit UI, and it is registered as a new gerrit review.
>> But it seems that this is not the right process, according to the gerrit history of similar issues.
>> Can you please let me know how to change the project ACL?
>
> The ACLs are maintained in the openstack-infra/project-config
> repository. You need to propose a change to the ACL file at:
>
> gerrit/acls/openstack/networking-onos.config
>
> For more information on how to create and maintain projects, you can
> read the Infra manual at:
>
> https://docs.openstack.org/infra/manual/creators.html
>
> While it's geared towards creating NEW projects, the guide can be
> helpful at pointing to the right files and processes.
>
> --
> Thierry Carrez (ttx)
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From tobias at citynetwork.se  Thu Apr 26 09:32:07 2018
From: tobias at citynetwork.se (Tobias Rydberg)
Date: Thu, 26 Apr 2018 11:32:07 +0200
Subject: [openstack-dev] [publiccloud-wg] Meeting this afternoon for Public Cloud WG
Message-ID: <81ac878c-386c-74c9-d295-100b60412842@citynetwork.se>

Hi folks,

Time for a new meeting for the Public Cloud WG. Vancouver is coming closer, and the agenda is very open this week, so please join and bring your topics to discuss.

The open agenda (please add topics) can be found at https://etherpad.openstack.org/p/publiccloud-wg

See you all at 1400 UTC in #openstack-publiccloud

Cheers,
Tobias

--
Tobias Rydberg
Senior Developer
Mobile: +46 733 312780
www.citynetwork.eu | www.citycloud.com

INNOVATION THROUGH OPEN IT INFRASTRUCTURE
ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3945 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From waleedm at mellanox.com  Thu Apr 26 12:08:39 2018
From: waleedm at mellanox.com (Waleed Musa)
Date: Thu, 26 Apr 2018 12:08:39 +0000
Subject: [openstack-dev] [tripleo] [heat-templates] Deprecated environment files
Message-ID: 

Hi guys,

I'm wondering what the plan is for the environments/*.yaml and environments/services-baremetal/*.yaml files. They seem to be deprecated. Please advise here.

Regards,
Waleed Mousa
SW Engineer at Mellanox
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mriedemos at gmail.com  Thu Apr 26 12:41:23 2018
From: mriedemos at gmail.com (Matt Riedemann)
Date: Thu, 26 Apr 2018 07:41:23 -0500
Subject: [openstack-dev] Summit Forum Schedule
In-Reply-To: <5AE0EE0C.1070400@openstack.org>
References: <5AE0EE0C.1070400@openstack.org>
Message-ID: 

On 4/25/2018 4:07 PM, Jimmy McArthur wrote:
> Please have a look at the Vancouver Forum schedule:
> https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing
> (also attached as a CSV) The proposed schedule was put together by two
> members from UC, TC and Foundation.
>
> We do our best to avoid moving scheduled items around as it tends to
> create a domino effect, but we do realize we might have missed
> something. 
The schedule should generally be set, but if you see a major > conflict in either content or speaker availability, please email > speakersupport at openstack.org. Two questions: 1. What does the yellow on the pre-emptible instances cell mean? 2. Just from looking at this, it looks like there were far fewer submissions for forum topics than actual slots available, so basically everything was approved (unless it was an obvious duplicate or not appropriate for a forum session), is that right? In the past when I've wondered if topic x should be a forum session or if I shouldn't bother, I was told to load up the proposals because chances were there would be more slots than proposals, and that seems to still be true. On the one hand, less content to choose from is refreshing so you don't have to worry about picking between as many sessions that you're interested in. But I also wonder how many people held back on proposing something for fear of rejection or that they'd be taking a slot for something with a higher priority. I'm not sure if there is a problem here, or a solution needed, or if it would be useful for the people that pick the sessions to give a heads up before the deadline that there are still a lot of slots open. -- Thanks, Matt From sean.mcginnis at gmx.com Thu Apr 26 12:46:21 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Apr 2018 07:46:21 -0500 Subject: [openstack-dev] [release] Release countdown for week R-17, April 30 - May 4 Message-ID: <20180426124620.GA11233@sm-xps> Time again for our regular release countdown email. Development Focus ----------------- Teams should now be focused on feature development and completion of release goals [0]. [0] https://governance.openstack.org/tc/goals/rocky/index.html General Information ------------------- If you have not requested a Rocky-1, please be aware that the next two milestones can not be missed or your deliverable may not be included as part of the official Rocky cycle project set. There were a few projects where the release team had to force a release in Queens. We would like to avoid that situation in Rocky. For some stable or code-complete projects, you may want to consider switching to be an independent release. This would also be a good time to check whether to do a release for any independent, library, or stable releases. As always, if you have any questions or concerns, feel free to swing by the #openstack-release channel. And just a reminder that the TC election is currently under way. Details of the election can be found here [1]. [1] https://governance.openstack.org/election/ Upcoming Deadlines & Dates -------------------------- TC election closes: Apr 30, 23:45 UTC Forum at OpenStack Summit in Vancouver: May 21-24 Rocky-2 Milestone: June 7 -- Sean McGinnis (smcginnis) From edmondsw at us.ibm.com Thu Apr 26 12:46:42 2018 From: edmondsw at us.ibm.com (William M Edmonds) Date: Thu, 26 Apr 2018 08:46:42 -0400 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: References: Message-ID: Monty Taylor wrote on 04/25/2018 09:40:47 AM: ... > Introduce a whitelist of git repo urls, starting with: > > * https://git.openstack.org/openstack/neutron > * https://git.openstack.org/openstack/horizon > We would also need to include at least nova (e.g. [1]) and ceilometer (e.g. [2]). [1] https://github.com/openstack/nova-powervm [2] https://github.com/openstack/ceilometer-powervm -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From zigo at debian.org  Thu Apr 26 13:58:31 2018
From: zigo at debian.org (Thomas Goirand)
Date: Thu, 26 Apr 2018 15:58:31 +0200
Subject: [openstack-dev] [all][tc] final stages of python 3 transition
In-Reply-To: <20180425214037.z4ncpc227bbsl452@yuggoth.org>
References: <1524689037-sup-783@lrrr.local> <20180425214037.z4ncpc227bbsl452@yuggoth.org>
Message-ID: 

On 04/25/2018 11:40 PM, Jeremy Stanley wrote:
> It may be worth considering how this interacts with the switch of
> our default test platform from Ubuntu 16.04 (which provides Python
> 3.5) to 18.04 (which provides Python 3.6). If we switch from 3.5 to
> 3.6 before we change most remaining jobs over to Python 3.x versions
> then it gives us a chance to spot differences between 3.5 and 3.6 at
> that point.

I don't think you'll find lots of issues, as all Debian and Gentoo packages were built against Python 3.6, and hopefully prometheanfire and I have reported the issues.

> So I guess that raises the question: switch to Python 3.5 by default
> for most jobs in Rocky and then have a potentially more disruptive
> default platform switch with Python 3.5->3.6 at the beginning of
> Stein, or wait until the default platform switch to move from Python
> 2.7 to 3.6 as the job default? I can see some value in each option.

I'd love to see gating on both Python 3.5 and 3.6 if possible.

Also, can we restart the attempts at (non-voting) gating jobs with Debian Sid? That's always where we get all updates first.

Cheers,

Thomas Goirand (zigo)

From dtantsur at redhat.com  Thu Apr 26 14:24:51 2018
From: dtantsur at redhat.com (Dmitry Tantsur)
Date: Thu, 26 Apr 2018 16:24:51 +0200
Subject: [openstack-dev] [tripleo] ironic automated cleaning by default?
In-Reply-To: 
References: 
Message-ID: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com>

Answering to both James and Ben inline.

On 04/25/2018 05:47 PM, Ben Nemec wrote:
>
>
> On 04/25/2018 10:28 AM, James Slagle wrote:
>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur wrote:
>>> On 04/25/2018 04:26 PM, James Slagle wrote:
>>>>
>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur
>>>> wrote:
>>>>>
>>>>> Hi all,
>>>>>
>>>>> I'd like to restart conversation on enabling node automated cleaning by
>>>>> default for the undercloud. This process wipes partitioning tables
>>>>> (optionally, all the data) from overcloud nodes each time they move to
>>>>> "available" state (i.e. on initial enrolling and after each tear down).
>>>>>
>>>>> We have had it disabled for a few reasons:
>>>>> - it was not possible to skip time-consuming wiping of data from disks
>>>>> - the way our workflows used to work required going between manageable
>>>>> and
>>>>> available steps several times
>>>>>
>>>>> However, having cleaning disabled has several issues:
>>>>> - a configdrive left from a previous deployment may confuse cloud-init
>>>>> - a bootable partition left from a previous deployment may take
>>>>> precedence
>>>>> in some BIOS
>>>>> - an UEFI boot partition left from a previous deployment is likely to
>>>>> confuse UEFI firmware
>>>>> - apparently ceph does not work correctly without cleaning (I'll defer to
>>>>> the storage team to comment)
>>>>>
>>>>> For these reasons we don't recommend having cleaning disabled, and I
>>>>> propose
>>>>> to re-enable it.
>>>>>
>>>>> It has the following drawbacks:
>>>>> - The default workflow will require another node boot, thus becoming
>>>>> several
>>>>> minutes longer (incl. the CI)
>>>>> - It will no longer be possible to easily restore a deleted overcloud
>>>>> node.
>>>>
>>>>
>>>> I'm trending towards -1, for these exact reasons you list as
>>>> drawbacks. There has been no shortage of occurrences of users who have
>>>> ended up with accidentally deleted overclouds. These are usually
>>>> caused by user error or unintended/unpredictable Heat operations.
>>>> Until we have a way to guarantee that Heat will never delete a node,
>>>> or Heat is entirely out of the picture for Ironic provisioning, then
>>>> I'd prefer that we didn't enable automated cleaning by default.
>>>>
>>>> I believe we had done something with policy.json at one time to
>>>> prevent node delete, but I don't recall if that protected from both
>>>> user initiated actions and Heat actions. And even that was not enabled
>>>> by default.
>>>>
>>>> IMO, we need to keep "safe" defaults. Even if it means manually
>>>> documenting that you should clean to prevent the issues you point out
>>>> above. The alternative is to have no way to recover deleted nodes by
>>>> default.
>>>
>>>
>>> Well, it's not clear what is "safe" here: protect people who explicitly
>>> delete their stacks or protect people who don't realize that a previous
>>> deployment may screw up their new one in a subtle way.
>>
>> The latter you can recover from, the former you can't if automated
>> cleaning is true.

Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason to disable the 'rm' command :)

>>
>> It's not just about people who explicitly delete their stacks (whether
>> intentional or not). There could be user error (non-explicit) or
>> side-effects triggered by Heat that could cause nodes to get deleted.

If we have problems with Heat, we should fix Heat or stop using it. What you're saying is essentially "we prevent ironic from doing the right thing because we're using a tool that can invoke 'rm -rf /' at a wrong moment."

>>
>> You couldn't recover from those scenarios if automated cleaning were
>> true. Whereas you could always fix a deployment error by opting in to
>> do an automated clean. Does Ironic keep track of whether a node has been
>> previously cleaned? Could we add a validation to check whether any
>> nodes might be used in the deployment that were not previously
>> cleaned?

It may be possible to figure out if a node was ever cleaned. But then we'll force operators to invoke cleaning manually, right? It will work, but that's another step on the default workflow. Are you okay with it?

>
> Is there a way to only do cleaning right before a node is deployed? If you're
> about to write a new image to the disk then any data there is forfeit anyway.
> Since the concern is old data on the disk messing up subsequent deploys, it
> doesn't really matter whether you clean it right after it's deleted or right
> before it's deployed, but the latter leaves the data intact for longer in case a
> mistake was made.
>
> If that's not possible then consider this an RFE. :-)

It's a good idea, but it may cause problems with rebuilding instances. Rebuild is essentially a re-deploy of the OS, so users may not expect the whole disk to be wiped.

Also it's unclear whether we want to write additional features to work around disabled cleaning.
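To be concrete about the knobs involved: on the undercloud this is toggled by clean_nodes in undercloud.conf, which ends up as ironic configuration roughly like the following sketch (a sketch only; the option names are to the best of my knowledge, and the priorities are illustrative):

    [conductor]
    automated_clean = true

    [deploy]
    # metadata-only cleaning: wipe partition tables instead of
    # shredding whole disks, to keep the extra boot short
    erase_devices_priority = 0
    erase_devices_metadata_priority = 10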
> > -Ben > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From sean.mcginnis at gmx.com Thu Apr 26 14:27:31 2018 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 26 Apr 2018 09:27:31 -0500 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <1524689037-sup-783@lrrr.local> References: <1524689037-sup-783@lrrr.local> Message-ID: <20180426142731.GA18842@sm-xps> On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote: > It's time to talk about the next steps in our migration from python > 2 to python 3. > > [...] > > 2. Change (or duplicate) all functional test jobs to run under > python 3. As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All went well. That made me realize something though - right now we have jobs that explicitly say py35, both for unit tests and functional tests. But I realized setting up these test jobs that it works to just specify "basepython = python3" or run unit tests with "tox -e py3". Then with that, it just depends on whether the job runs on xenial or bionic as to whether the job is run with py35 or py36. It is less explicit, so I see some downside to that, but would it make sense to change jobs to drop the minor version to make it more flexible and easy to make these transitions? From sfinucan at redhat.com Thu Apr 26 14:49:27 2018 From: sfinucan at redhat.com (Stephen Finucane) Date: Thu, 26 Apr 2018 15:49:27 +0100 Subject: [openstack-dev] Following the new PTI for document build, broken local builds In-Reply-To: <20180425170644.GB459@sm-xps> References: <1521629342.8587.20.camel@redhat.com> <8da6121b-a48a-5dd5-8865-75b9e0d38e15@redhat.com> <1523018692.22377.1.camel@redhat.com> <20180406130205.GA15660@smcginnis-mbp.local> <1523026366.22377.13.camel@redhat.com> <20180406172714.d8cdbd0a03d77f9de657a20e@redhat.com> <20180425145913.GB22839@sm-xps> <1524670685-sup-2247@lrrr.local> <1210453e-e9af-ff50-6d61-af47afd47857@redhat.com> <20180425170644.GB459@sm-xps> Message-ID: <1524754167.6216.30.camel@redhat.com> On Wed, 2018-04-25 at 12:06 -0500, Sean McGinnis wrote: > > > > > > > > [1] https://review.openstack.org/#/c/564232/ > > > > > > > > > > The only concern I have is that it may slow the transition to the > > > python 3 version of the jobs, since someone would have to actually > > > fix the warnings before they could add the new job. I'm not sure I > > > want to couple the tasks of fixing doc build warnings with also > > > making those docs build under python 3 (which is usually quite > > > simple). > > > > > > Is there some other way to enable this flag independently of the move to > > > the python3 job? > > > > The existing proposal is: > > > > https://review.openstack.org/559348 > > > > TL;DR if you still have a build_sphinx section in setup.cfg then defaults > > will remain the same, but when removing it as part of the transition to the > > new PTI you'll have to eliminate any warnings. (Although AFAICT it doesn't > > hurt to leave that section in place as long as you need, and you can still > > do the rest of the PTI conversion.) > > > > The hold-up is that the job in question is also potentially used by other > > Zuul users outside of OpenStack - including those who aren't using pbr at > > all (i.e. there's no setup.cfg). 
So we need to warn those folks to prepare. > > > > cheers, > > Zane. > > > > Ah, I had looked but did not find an existing proposal. Looks like that would > work too. I am good either way, but I will leave my approach out there just as > another option to consider. I'll abandon that if folks prefer this way. Yeah, I reviewed your patch but assumed you'd seen mine already and were looking for a quicker alternative. I've started the process of adding this to zuul-jobs by posting the warning to zuul-announce (though it's waiting moderation by corvus). We only need to wait two weeks after sending that message before we can merge the patch to zuul-jobs, so I guess we should go that way now? Stephen From luo.lujin at jp.fujitsu.com Thu Apr 26 15:03:08 2018 From: luo.lujin at jp.fujitsu.com (Luo, Lujin) Date: Thu, 26 Apr 2018 15:03:08 +0000 Subject: [openstack-dev] [Neutron] [Upgrades] Cancel next IRC meeting (May 3rd) Message-ID: Hi, We are canceling our next Neutron Upgrades subteam meeting on May 3rd. We will resume on May 10th. Thanks, Lujin From james.slagle at gmail.com Thu Apr 26 15:12:16 2018 From: james.slagle at gmail.com (James Slagle) Date: Thu, 26 Apr 2018 11:12:16 -0400 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> Message-ID: On Thu, Apr 26, 2018 at 10:24 AM, Dmitry Tantsur wrote: > Answering to both James and Ben inline. > > > On 04/25/2018 05:47 PM, Ben Nemec wrote: >> >> >> >> On 04/25/2018 10:28 AM, James Slagle wrote: >>> >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>> wrote: >>>> >>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>> delete their stacks or protect people who don't realize that a previous >>>> deployment may screw up their new one in a subtle way. >>> >>> >>> The latter you can recover from, the former you can't if automated >>> cleaning is true. > > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason > to disable the 'rm' command :) This is a really disingenuous comparison. If you really want to compare these things with what you're proposing, then it would be to make --no-preserve-root the default with rm. Which it is not. > >>> >>> It's not just about people who explicitly delete their stacks (whether >>> intentional or not). There could be user error (non-explicit) or >>> side-effects triggered by Heat that could cause nodes to get deleted. > > > If we have problems with Heat, we should fix Heat or stop using it. What > you're saying is essentially "we prevent ironic from doing the right thing > because we're using a tool that can invoke 'rm -rf /' at a wrong moment." Agreed on the Heat point, and once/if we're there, I'd probably not object to making automated clean the default. I disagree on how you characterized what I'm saying. I'm not proposing to prevent Ironic from doing the right thing. If people want to use automated cleaning, they can. Nothing will prevent that. It just shouldn't be the default. > >>> >>> You couldn't recover from those scenarios if automated cleaning were >>> true. Whereas you could always fix a deployment error by opting in to >>> do an automated clean. Does Ironic keep track of it a node has been >>> previously cleaned? Could we add a validation to check whether any >>> nodes might be used in the deployment that were not previously >>> cleaned? 
> > > It's may be possible possible to figure out if a node was ever cleaned. But > then we'll force operators to invoke cleaning manually, right? It will work, > but that's another step on the default workflow. Are you okay with it? I would be ok with it. But I don't even characterize it as a completely necessary step on the default workflow. It fixes some issues as you've pointed out, but also comes with a cost. What we're discussing is whether it's the default or not. If it is not true by default, then we wouldn't make it a required step in the default workflow to make sure it's done. It'd be documented as choice. -- -- James Slagle -- From kent.gordon at verizonwireless.com Thu Apr 26 15:17:21 2018 From: kent.gordon at verizonwireless.com (Gordon, Kent S) Date: Thu, 26 Apr 2018 10:17:21 -0500 Subject: [openstack-dev] [E] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: This change would need to be very clearly mentioned in Documentation/Release Notes. It could be a really nasty surprise for an operator expecting the current behavior. On Wed, Apr 25, 2018 at 8:14 AM, Dmitry Tantsur wrote: > Hi all, > > I'd like to restart conversation on enabling node automated cleaning by > default for the undercloud. This process wipes partitioning tables > (optionally, all the data) from overcloud nodes each time they move to > "available" state (i.e. on initial enrolling and after each tear down). > > We have had it disabled for a few reasons: > - it was not possible to skip time-consuming wiping if data from disks > - the way our workflows used to work required going between manageable and > available steps several times > > However, having cleaning disabled has several issues: > - a configdrive left from a previous deployment may confuse cloud-init > - a bootable partition left from a previous deployment may take precedence > in some BIOS > - an UEFI boot partition left from a previous deployment is likely to > confuse UEFI firmware > - apparently ceph does not work correctly without cleaning (I'll defer to > the storage team to comment) > > For these reasons we don't recommend having cleaning disabled, and I > propose to re-enable it. > > It has the following drawbacks: > - The default workflow will require another node boot, thus becoming > several minutes longer (incl. the CI) > - It will no longer be possible to easily restore a deleted overcloud node. > > What do you think? If I don't hear principal objections, I'll prepare a > patch in the coming days. > > Dmitry > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.op > enstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d= > DwIGaQ&c=udBTRvFvXC5Dhqg7UHpJlPps3mZ3LRxpb6__0PomBTQ&r=Xkn6r > 0Olgrmyl97VKakpX0o-JiB_old4u22bFbcLdRo&m=ymioAO-4rAyApEj0Gix > bEC4KhMk6z9HBlW_z5nqnuno&s=hg1h8si71iXioJ-TvA3F2ZVt1O7ipViyYI3MASclYpI&e= > -- Kent S. Gordon kent.gordon at verizonwireless.com Work:682-831-3601 Mobile: 817-905-6518 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bodenvmw at gmail.com Thu Apr 26 15:17:43 2018 From: bodenvmw at gmail.com (Boden Russell) Date: Thu, 26 Apr 2018 09:17:43 -0600 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: References: Message-ID: On 4/25/18 10:13 PM, Shu M. 
wrote:
> Hi folks,
>
>> unwinding things
>> ----------------
>>
>> There are a few different options, but it's important to keep in mind
>> that we ultimately want all of the following:
>>
>> * The code works
>> * Tests can run properly in CI
>> * "Depends-On" works in CI so that you can test changes cross-repo
>> * Tests can run properly locally for developers
>> * Deployment requirements are accurately communicated to deployers
>
> One more thing:
> * Test environments in CI and locally should be as similar as possible.
>
> To run tests successfully both in CI and locally, I tried adding a new
> testenv for local use to tox.ini
> (https://review.openstack.org/#/c/564099/4/tox.ini) as an alternative
> solution last night; this would be equivalent to adding a new
> requirements.txt for local checks. It seems to run fine, but it might
> introduce differences between the CI and local environments, so I
> cannot conclude it is the best way for now.

I'd like to echo this point from a neutron plugin (project) perspective. While we can move all our inter-project dependencies to requirements now [1], this does not address how we can run tox targets or devstack locally with master branches (to test changes locally before submitting to gate).

To mitigate this for running tox locally, we've introduced new targets that manually install the inter-project dependencies [2] in editable mode. While this works, IMHO it's not optimal and furthermore requires "special" steps if you want to add some changes to those editable projects and run with them. And we've done something similar for devstack [3].

Finally, we also have some periodic jobs used to pre-validate our shared neutron-lib project using master branches, as defined by the periodic-jobs-with-neutron-lib-master template. Certainly we want to keep these working.

Frankly it's been a bit of a cat-and-mouse game to keep up with the infra/zuul changes in the past 2 releases, so it's possible what we've done could be improved upon. If that's the case please do let me know so we can work towards an optimized approach.

Thanks

[1] https://review.openstack.org/#/c/554292/
[2] https://review.openstack.org/#/c/555005/5/tox.ini
[3] https://review.openstack.org/#/c/555005/5/devstack/lib/nsx_common

From kevin at cloudnull.com  Thu Apr 26 15:20:13 2018
From: kevin at cloudnull.com (Carter, Kevin)
Date: Thu, 26 Apr 2018 10:20:13 -0500
Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer
In-Reply-To: 
References: 
Message-ID: 

+2 from me!

--

Kevin Carter
IRC: Cloudnull

On Wed, Apr 25, 2018 at 4:06 AM, Markos Chandras wrote:
> On 24/04/18 16:05, Jean-Philippe Evrard wrote:
> > Hi everyone,
> >
> > I'd like to propose Mohammed Naser [1] as a core reviewer for OpenStack-Ansible.
> >
>
> +2
>
> --
> markos
>
> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg) Maxfeldstr. 5, D-90409, Nürnberg
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From thierry at openstack.org Thu Apr 26 15:21:01 2018 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 26 Apr 2018 17:21:01 +0200 Subject: [openstack-dev] Summit Forum Schedule In-Reply-To: References: <5AE0EE0C.1070400@openstack.org> Message-ID: Matt Riedemann wrote: > On 4/25/2018 4:07 PM, Jimmy McArthur wrote: >> Please have a look at the Vancouver Forum schedule: >> https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing >> (also attached as a CSV) The proposed schedule was put together by two >> members from UC, TC and Foundation. >> >> We do our best to avoid moving scheduled items around as it tends to >> create a domino affect, but we do realize we might have missed >> something.  The schedule should generally be set, but if you see a >> major conflict in either content or speaker availability, please email >> speakersupport at openstack.org. > > Two questions: > > 1. What does the yellow on the pre-emptible instances cell mean? It was two sessions submitted after the deadline that the selection committee decided to keep. Were highlighted yellow so that they could find them. > 2. Just from looking at this, it looks like there were far fewer > submissions for forum topics than actual slots available, so basically > everything was approved (unless it was an obvious duplicate or not > appropriate for a forum session), is that right? Yes, there were less submissions, but they were all actually quite good. Encouraging teams to go through a round of brainstorming before submitting yields a lot less duplicates and crazy sessions. We also had ample space (3 parallel rooms for 4 days, with only one day of keynotes), more than we used to have. We decided to use the 3rd room as a room available to schedule follow-up sessions, in case 40 min does not cut it. More details on that later once the schedule is approved. -- Thierry Carrez (ttx) From dtantsur at redhat.com Thu Apr 26 15:24:42 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 26 Apr 2018 17:24:42 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> Message-ID: On 04/26/2018 05:12 PM, James Slagle wrote: > On Thu, Apr 26, 2018 at 10:24 AM, Dmitry Tantsur wrote: >> Answering to both James and Ben inline. >> >> >> On 04/25/2018 05:47 PM, Ben Nemec wrote: >>> >>> >>> >>> On 04/25/2018 10:28 AM, James Slagle wrote: >>>> >>>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>>> wrote: >>>>> >>>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>>> delete their stacks or protect people who don't realize that a previous >>>>> deployment may screw up their new one in a subtle way. >>>> >>>> >>>> The latter you can recover from, the former you can't if automated >>>> cleaning is true. >> >> >> Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a reason >> to disable the 'rm' command :) > > This is a really disingenuous comparison. If you really want to > compare these things with what you're proposing, then it would be to > make --no-preserve-root the default with rm. Which it is not. If we really go down this path, what TripleO does right now is removing the 'rm' command by default and saying "well, you can install it back, if you realize you cannot work without it" :) > >> >>>> >>>> It's not just about people who explicitly delete their stacks (whether >>>> intentional or not). 
>>>> There could be user error (non-explicit) or
>>>> side-effects triggered by Heat that could cause nodes to get deleted.
>>
>> If we have problems with Heat, we should fix Heat or stop using it. What
>> you're saying is essentially "we prevent ironic from doing the right thing
>> because we're using a tool that can invoke 'rm -rf /' at a wrong moment."
>
> Agreed on the Heat point, and once/if we're there, I'd probably not
> object to making automated clean the default.
>
> I disagree on how you characterized what I'm saying. I'm not proposing
> to prevent Ironic from doing the right thing. If people want to use
> automated cleaning, they can. Nothing will prevent that. It just
> shouldn't be the default.

It's not about "want to use". It's about "we don't guarantee the correct behavior in presence of previous deployments on non-root disks" and "if you use ceph, you must use cleaning".

>
>>
>>>>
>>>> You couldn't recover from those scenarios if automated cleaning were
>>>> true. Whereas you could always fix a deployment error by opting in to
>>>> do an automated clean. Does Ironic keep track of whether a node has been
>>>> previously cleaned? Could we add a validation to check whether any
>>>> nodes might be used in the deployment that were not previously
>>>> cleaned?
>>
>> It may be possible to figure out if a node was ever cleaned. But
>> then we'll force operators to invoke cleaning manually, right? It will work,
>> but that's another step on the default workflow. Are you okay with it?
>
> I would be ok with it. But I don't even characterize it as a
> completely necessary step on the default workflow. It fixes some
> issues as you've pointed out, but also comes with a cost. What we're
> discussing is whether it's the default or not. If it is not true by
> default, then we wouldn't make it a required step in the default
> workflow to make sure it's done. It'd be documented as choice.
>

Sure, but how do people know if they want it? Okay, if they use Ceph, they have to. Then.. mm.. "if you have multiple disks and you're not sure what's on them, please clean"? It may work, I wonder how many people will care to follow it though.

From zhang.lei.fly at gmail.com  Thu Apr 26 15:31:01 2018
From: zhang.lei.fly at gmail.com (Jeffrey Zhang)
Date: Thu, 26 Apr 2018 23:31:01 +0800
Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member
Message-ID: 

Kolla core reviewer team,

It is my pleasure to nominate mgoddard for the kolla core team.

Mark has been working both upstream and downstream with kolla and
kolla-ansible for over two years, building bare metal compute clouds with
ironic for HPC. He's been involved with OpenStack since 2014. He started
the kayobe deployment project which complements kolla-ansible. He is
also the most active non-core contributor for the last 90 days[1].

Consider this nomination a +1 vote from me.

A +1 vote indicates you are in favor of mgoddard as a candidate, a -1
is a veto. Voting is open for 7 days until May 4th, or until a unanimous
response is reached or a veto vote occurs.

[1] http://stackalytics.com/report/contribution/kolla-group/90

--
Regards,
Jeffrey Zhang
Blog: http://xcodest.me
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From james.slagle at gmail.com  Thu Apr 26 15:34:10 2018
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 26 Apr 2018 11:34:10 -0400
Subject: [openstack-dev] [tripleo] ironic automated cleaning by default?
In-Reply-To: References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> Message-ID: On Thu, Apr 26, 2018 at 11:24 AM, Dmitry Tantsur wrote: > Sure, but how do people know if they want it? Okay, if they use Ceph, they > have to. Then.. mm.. "if you have multiple disks and you're not sure what's > on them, please clean"? It may work, I wonder how many people will care to > follow it though. Yes, this sounds pretty reasonable to me. -- -- James Slagle -- From openstack at nemebean.com Thu Apr 26 15:37:58 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 26 Apr 2018 10:37:58 -0500 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> Message-ID: <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> On 04/26/2018 09:24 AM, Dmitry Tantsur wrote: > Answering to both James and Ben inline. > > On 04/25/2018 05:47 PM, Ben Nemec wrote: >> >> >> On 04/25/2018 10:28 AM, James Slagle wrote: >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>> wrote: >>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>>> >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >>>>> wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> I'd like to restart conversation on enabling node automated >>>>>> cleaning by >>>>>> default for the undercloud. This process wipes partitioning tables >>>>>> (optionally, all the data) from overcloud nodes each time they >>>>>> move to >>>>>> "available" state (i.e. on initial enrolling and after each tear >>>>>> down). >>>>>> >>>>>> We have had it disabled for a few reasons: >>>>>> - it was not possible to skip time-consuming wiping if data from >>>>>> disks >>>>>> - the way our workflows used to work required going between >>>>>> manageable >>>>>> and >>>>>> available steps several times >>>>>> >>>>>> However, having cleaning disabled has several issues: >>>>>> - a configdrive left from a previous deployment may confuse >>>>>> cloud-init >>>>>> - a bootable partition left from a previous deployment may take >>>>>> precedence >>>>>> in some BIOS >>>>>> - an UEFI boot partition left from a previous deployment is likely to >>>>>> confuse UEFI firmware >>>>>> - apparently ceph does not work correctly without cleaning (I'll >>>>>> defer to >>>>>> the storage team to comment) >>>>>> >>>>>> For these reasons we don't recommend having cleaning disabled, and I >>>>>> propose >>>>>> to re-enable it. >>>>>> >>>>>> It has the following drawbacks: >>>>>> - The default workflow will require another node boot, thus becoming >>>>>> several >>>>>> minutes longer (incl. the CI) >>>>>> - It will no longer be possible to easily restore a deleted overcloud >>>>>> node. >>>>> >>>>> >>>>> I'm trending towards -1, for these exact reasons you list as >>>>> drawbacks. There has been no shortage of occurrences of users who have >>>>> ended up with accidentally deleted overclouds. These are usually >>>>> caused by user error or unintended/unpredictable Heat operations. >>>>> Until we have a way to guarantee that Heat will never delete a node, >>>>> or Heat is entirely out of the picture for Ironic provisioning, then >>>>> I'd prefer that we didn't enable automated cleaning by default. >>>>> >>>>> I believe we had done something with policy.json at one time to >>>>> prevent node delete, but I don't recall if that protected from both >>>>> user initiated actions and Heat actions. And even that was not enabled >>>>> by default. 
>>>>> >>>>> IMO, we need to keep "safe" defaults. Even if it means manually >>>>> documenting that you should clean to prevent the issues you point out >>>>> above. The alternative is to have no way to recover deleted nodes by >>>>> default. >>>> >>>> >>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>> delete their stacks or protect people who don't realize that a previous >>>> deployment may screw up their new one in a subtle way. >>> >>> The latter you can recover from, the former you can't if automated >>> cleaning is true. > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a > reason to disable the 'rm' command :) > >>> >>> It's not just about people who explicitly delete their stacks (whether >>> intentional or not). There could be user error (non-explicit) or >>> side-effects triggered by Heat that could cause nodes to get deleted. > > If we have problems with Heat, we should fix Heat or stop using it. What > you're saying is essentially "we prevent ironic from doing the right > thing because we're using a tool that can invoke 'rm -rf /' at a wrong > moment." > >>> >>> You couldn't recover from those scenarios if automated cleaning were >>> true. Whereas you could always fix a deployment error by opting in to >>> do an automated clean. Does Ironic keep track of it a node has been >>> previously cleaned? Could we add a validation to check whether any >>> nodes might be used in the deployment that were not previously >>> cleaned? > > It's may be possible possible to figure out if a node was ever cleaned. > But then we'll force operators to invoke cleaning manually, right? It > will work, but that's another step on the default workflow. Are you okay > with it? > >> >> Is there a way to only do cleaning right before a node is deployed? >> If you're about to write a new image to the disk then any data there >> is forfeit anyway. Since the concern is old data on the disk messing >> up subsequent deploys, it doesn't really matter whether you clean it >> right after it's deleted or right before it's deployed, but the latter >> leaves the data intact for longer in case a mistake was made. >> >> If that's not possible then consider this an RFE. :-) > > It's a good idea, but it may cause problems with rebuilding instances. > Rebuild is essentially a re-deploy of the OS, users may not expect the > whole disk to be wiped.. > > Also it's unclear whether we want to write additional features to work > around disabled cleaning. No matter how good the tooling gets, user error will always be a thing. Someone will scale down the wrong node or something similar. I think there's value to allowing recovery from mistakes. We all make them. :-) From berendt at betacloud-solutions.de Thu Apr 26 15:38:22 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Thu, 26 Apr 2018 17:38:22 +0200 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: <26488D36-7AC4-4C59-9693-0106B812B6F4@betacloud-solutions.de> +1 > On 26. Apr 2018, at 17:31, Jeffrey Zhang wrote: > > Kolla core reviewer team, > > It is my pleasure to nominate ​mgoddard for kolla core team. > ​ > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project which complements kolla-ansible. 
He is > also the most active non-core contributor for last 90 days[1] > ​​ > Consider this nomination a +1 vote from me > > A +1 vote indicates you are in favor of ​mgoddard as a candidate, a -1 > is a ​​veto. Voting is open for 7 days until ​May ​4​th, or a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From marcin.juszkiewicz at linaro.org Thu Apr 26 15:46:12 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 26 Apr 2018 17:46:12 +0200 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: <4e97c54d-781a-4f8a-6903-7a7525f236c1@linaro.org> W dniu 26.04.2018 o 17:31, Jeffrey Zhang pisze: > Kolla core reviewer team, > > It is my pleasure to nominate > ​ > mgoddard for kolla core team. +1 From openstack at nemebean.com Thu Apr 26 15:48:48 2018 From: openstack at nemebean.com (Ben Nemec) Date: Thu, 26 Apr 2018 10:48:48 -0500 Subject: [openstack-dev] [Designate] Plan for OSM In-Reply-To: References: Message-ID: On 04/25/2018 11:31 PM, daidv at vn.fujitsu.com wrote: > Hi forks, > > We tested and completed our process with OVO migration in Queens cycle. > Now, we can continue with OSM implementation for Designate. > Actually, we have pushed some patches related to OSM[1] and it's ready to review. Out of curiosity, what does OSM stand for? Based on the patches it seems related to rolling upgrades, but a quick glance at them doesn't make it obvious to me what's going on. Thanks. -Ben From logan at protiumit.com Thu Apr 26 15:56:43 2018 From: logan at protiumit.com (Logan V.) Date: Thu, 26 Apr 2018 10:56:43 -0500 Subject: [openstack-dev] [openstack-ansible] Proposing Mohammed Naser as core reviewer In-Reply-To: References: Message-ID: +2! On Thu, Apr 26, 2018 at 10:20 AM, Carter, Kevin wrote: > +2 from me! > > > -- > > Kevin Carter > IRC: Cloudnull > > On Wed, Apr 25, 2018 at 4:06 AM, Markos Chandras wrote: >> >> On 24/04/18 16:05, Jean-Philippe Evrard wrote: >> > Hi everyone, >> > >> > I’d like to propose Mohammed Naser [1] as a core reviewer for >> > OpenStack-Ansible. >> > >> >> +2 >> >> -- >> markos >> >> SUSE LINUX GmbH | GF: Felix Imendörffer, Jane Smithard, Graham Norton >> HRB 21284 (AG Nürnberg) Maxfeldstr. 
5, D-90409, Nürnberg >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From Tim.Bell at cern.ch Thu Apr 26 16:16:38 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 26 Apr 2018 16:16:38 +0000 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> Message-ID: How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think? Tim -----Original Message----- From: Ben Nemec Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 26 April 2018 at 17:39 To: "OpenStack Development Mailing List (not for usage questions)" , Dmitry Tantsur Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? On 04/26/2018 09:24 AM, Dmitry Tantsur wrote: > Answering to both James and Ben inline. > > On 04/25/2018 05:47 PM, Ben Nemec wrote: >> >> >> On 04/25/2018 10:28 AM, James Slagle wrote: >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>> wrote: >>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>>> >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >>>>> wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> I'd like to restart conversation on enabling node automated >>>>>> cleaning by >>>>>> default for the undercloud. This process wipes partitioning tables >>>>>> (optionally, all the data) from overcloud nodes each time they >>>>>> move to >>>>>> "available" state (i.e. on initial enrolling and after each tear >>>>>> down). >>>>>> >>>>>> We have had it disabled for a few reasons: >>>>>> - it was not possible to skip time-consuming wiping if data from >>>>>> disks >>>>>> - the way our workflows used to work required going between >>>>>> manageable >>>>>> and >>>>>> available steps several times >>>>>> >>>>>> However, having cleaning disabled has several issues: >>>>>> - a configdrive left from a previous deployment may confuse >>>>>> cloud-init >>>>>> - a bootable partition left from a previous deployment may take >>>>>> precedence >>>>>> in some BIOS >>>>>> - an UEFI boot partition left from a previous deployment is likely to >>>>>> confuse UEFI firmware >>>>>> - apparently ceph does not work correctly without cleaning (I'll >>>>>> defer to >>>>>> the storage team to comment) >>>>>> >>>>>> For these reasons we don't recommend having cleaning disabled, and I >>>>>> propose >>>>>> to re-enable it. >>>>>> >>>>>> It has the following drawbacks: >>>>>> - The default workflow will require another node boot, thus becoming >>>>>> several >>>>>> minutes longer (incl. the CI) >>>>>> - It will no longer be possible to easily restore a deleted overcloud >>>>>> node. >>>>> >>>>> >>>>> I'm trending towards -1, for these exact reasons you list as >>>>> drawbacks. There has been no shortage of occurrences of users who have >>>>> ended up with accidentally deleted overclouds. 
These are usually >>>>> caused by user error or unintended/unpredictable Heat operations. >>>>> Until we have a way to guarantee that Heat will never delete a node, >>>>> or Heat is entirely out of the picture for Ironic provisioning, then >>>>> I'd prefer that we didn't enable automated cleaning by default. >>>>> >>>>> I believe we had done something with policy.json at one time to >>>>> prevent node delete, but I don't recall if that protected from both >>>>> user initiated actions and Heat actions. And even that was not enabled >>>>> by default. >>>>> >>>>> IMO, we need to keep "safe" defaults. Even if it means manually >>>>> documenting that you should clean to prevent the issues you point out >>>>> above. The alternative is to have no way to recover deleted nodes by >>>>> default. >>>> >>>> >>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>> delete their stacks or protect people who don't realize that a previous >>>> deployment may screw up their new one in a subtle way. >>> >>> The latter you can recover from, the former you can't if automated >>> cleaning is true. > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a > reason to disable the 'rm' command :) > >>> >>> It's not just about people who explicitly delete their stacks (whether >>> intentional or not). There could be user error (non-explicit) or >>> side-effects triggered by Heat that could cause nodes to get deleted. > > If we have problems with Heat, we should fix Heat or stop using it. What > you're saying is essentially "we prevent ironic from doing the right > thing because we're using a tool that can invoke 'rm -rf /' at a wrong > moment." > >>> >>> You couldn't recover from those scenarios if automated cleaning were >>> true. Whereas you could always fix a deployment error by opting in to >>> do an automated clean. Does Ironic keep track of it a node has been >>> previously cleaned? Could we add a validation to check whether any >>> nodes might be used in the deployment that were not previously >>> cleaned? > > It's may be possible possible to figure out if a node was ever cleaned. > But then we'll force operators to invoke cleaning manually, right? It > will work, but that's another step on the default workflow. Are you okay > with it? > >> >> Is there a way to only do cleaning right before a node is deployed? >> If you're about to write a new image to the disk then any data there >> is forfeit anyway. Since the concern is old data on the disk messing >> up subsequent deploys, it doesn't really matter whether you clean it >> right after it's deleted or right before it's deployed, but the latter >> leaves the data intact for longer in case a mistake was made. >> >> If that's not possible then consider this an RFE. :-) > > It's a good idea, but it may cause problems with rebuilding instances. > Rebuild is essentially a re-deploy of the OS, users may not expect the > whole disk to be wiped.. > > Also it's unclear whether we want to write additional features to work > around disabled cleaning. No matter how good the tooling gets, user error will always be a thing. Someone will scale down the wrong node or something similar. I think there's value to allowing recovery from mistakes. We all make them. 
:-) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From marcin.juszkiewicz at linaro.org Thu Apr 26 16:28:31 2018 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Thu, 26 Apr 2018 18:28:31 +0200 Subject: [openstack-dev] [kolla][neutron][requirements][pbr]Use git+https line in requirements.txt break the pip install In-Reply-To: References: Message-ID: <88aad026-2bae-47ee-89e6-4e4283500883@linaro.org> W dniu 18.04.2018 o 04:48, Jeffrey Zhang pisze: > Is this expected? and how could we fix this? I posted a workaround: https://review.openstack.org/#/c/564552/ But this should be fixed in networking-odl (imho). From corvus at inaugust.com Thu Apr 26 16:40:10 2018 From: corvus at inaugust.com (James E. Blair) Date: Thu, 26 Apr 2018 09:40:10 -0700 Subject: [openstack-dev] [infra][qa][requirements] Pip 10 is on the way In-Reply-To: <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> (Clark Boylan's message of "Thu, 05 Apr 2018 13:27:13 -0700") References: <20180331150026.nqrwaxakxcn3vmqz@yuggoth.org> <20180402150635.5d4jbbnzry2biowu@gentoo.org> <1522685637.1678193.1323782608.022AAF87@webmail.messagingengine.com> <1522960033.3841849.1328067440.7342057C@webmail.messagingengine.com> Message-ID: <87zi1pkjkl.fsf@meyer.lemoncheese.net> Clark Boylan writes: ... > I've since worked out a change that passes tempest using a global > virtualenv installed devstack at > https://review.openstack.org/#/c/558930/. This needs to be cleaned up > so that we only check for and install the virtualenv(s) once and we > need to handle mixed python2 and python3 environments better (so that > you can run a python2 swift and python3 everything else). > > The other major issue we've run into is that nova file injection > (which is tested by tempest) seems to require either libguestfs or > nbd. libguestfs bindings for python aren't available on pypi and > instead we get them from system packaging. This means if we want > libguestfs support we have to enable system site packages when using > virtualenvs. The alternative is to use nbd which apparently isn't > preferred by nova and doesn't work under current devstack anyways. > > Why is this a problem? Well the new pip10 behavior that breaks > devstack is pip10's refusable to remove distutils installed > packages. Distro packages by and large are distutils packaged which > means if you mix system packages and pip installed packages there is a > good chance something will break (and it does break for current > devstack). I'm not sure that using a virtualenv with system site > packages enabled will sufficiently protect us from this case (but we > should test it further). Also it feels wrong to enable system packages > in a virtualenv if the entire point is avoiding system python > packages. > > I'm not sure what the best option is here but if we can show that > system site packages with virtualenvs is viable with pip10 and people > want to move forward with devstack using a global virtualenv we can > work to clean up this change and make it mergeable. Now that pip 10 is here and we've got things relatively stable, it's probably time to revisit this. I think we should continue to explore the route that Clark has opened up. 
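Concretely, that route boils down to something like this (the path and packages here are illustrative, not what the actual change uses):

    virtualenv --system-site-packages /opt/stack/venv
    /opt/stack/venv/bin/pip install -c upper-constraints.txt nova

with --system-site-packages left on only so that distro-only modules such as the libguestfs python bindings remain importable from the venv.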
This isn't an emergency because all of the current devstack/pip10 conflicts have been resolved; however, there's no guarantee that we won't add a new package with a conflict (which may be even more difficult to resolve) or even that a future pip won't take an even harder line. I believe that installing all in one virtualenv has the advantage of behaving more like what is expected of a project in the current python ecosystem, while still retaining the co-installability testing that we get with devstack. What I'm a bit fuzzy on is how this impacts devstack plugins or related applications. However, it seems to me that we ought to be able to essentially define the global venv as part of the API and then plugins can participate in it. Perhaps that won't be able to be automatic? Maybe we'll need to set this up and then all devstack plugins will need to change in order to use it? If so, hopefully we'll be able to export the functions needed to make that easy. -Jim From Arkady.Kanevsky at dell.com Thu Apr 26 16:47:01 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Thu, 26 Apr 2018 16:47:01 +0000 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> Message-ID: +1. It would be good to also identify the use cases. I am surprised that a node would be cleaned up automatically. I would expect this to be a deliberate request from the administrator, or maybe from the user when they "return" a node to the free pool after bare metal usage. Thanks, Arkady -----Original Message----- From: Tim Bell [mailto:Tim.Bell at cern.ch] Sent: Thursday, April 26, 2018 11:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think? Tim -----Original Message----- From: Ben Nemec Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 26 April 2018 at 17:39 To: "OpenStack Development Mailing List (not for usage questions)" , Dmitry Tantsur Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? On 04/26/2018 09:24 AM, Dmitry Tantsur wrote: > Answering to both James and Ben inline. > > On 04/25/2018 05:47 PM, Ben Nemec wrote: >> >> >> On 04/25/2018 10:28 AM, James Slagle wrote: >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>> wrote: >>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>>> >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >>>>> wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> I'd like to restart conversation on enabling node automated >>>>>> cleaning by >>>>>> default for the undercloud. This process wipes partitioning tables >>>>>> (optionally, all the data) from overcloud nodes each time they >>>>>> move to >>>>>> "available" state (i.e. on initial enrolling and after each tear >>>>>> down). 
>>>>>> >>>>>> We have had it disabled for a few reasons: >>>>>> - it was not possible to skip time-consuming wiping if data from >>>>>> disks >>>>>> - the way our workflows used to work required going between >>>>>> manageable >>>>>> and >>>>>> available steps several times >>>>>> >>>>>> However, having cleaning disabled has several issues: >>>>>> - a configdrive left from a previous deployment may confuse >>>>>> cloud-init >>>>>> - a bootable partition left from a previous deployment may take >>>>>> precedence >>>>>> in some BIOS >>>>>> - an UEFI boot partition left from a previous deployment is likely to >>>>>> confuse UEFI firmware >>>>>> - apparently ceph does not work correctly without cleaning (I'll >>>>>> defer to >>>>>> the storage team to comment) >>>>>> >>>>>> For these reasons we don't recommend having cleaning disabled, and I >>>>>> propose >>>>>> to re-enable it. >>>>>> >>>>>> It has the following drawbacks: >>>>>> - The default workflow will require another node boot, thus becoming >>>>>> several >>>>>> minutes longer (incl. the CI) >>>>>> - It will no longer be possible to easily restore a deleted overcloud >>>>>> node. >>>>> >>>>> >>>>> I'm trending towards -1, for these exact reasons you list as >>>>> drawbacks. There has been no shortage of occurrences of users who have >>>>> ended up with accidentally deleted overclouds. These are usually >>>>> caused by user error or unintended/unpredictable Heat operations. >>>>> Until we have a way to guarantee that Heat will never delete a node, >>>>> or Heat is entirely out of the picture for Ironic provisioning, then >>>>> I'd prefer that we didn't enable automated cleaning by default. >>>>> >>>>> I believe we had done something with policy.json at one time to >>>>> prevent node delete, but I don't recall if that protected from both >>>>> user initiated actions and Heat actions. And even that was not enabled >>>>> by default. >>>>> >>>>> IMO, we need to keep "safe" defaults. Even if it means manually >>>>> documenting that you should clean to prevent the issues you point out >>>>> above. The alternative is to have no way to recover deleted nodes by >>>>> default. >>>> >>>> >>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>> delete their stacks or protect people who don't realize that a previous >>>> deployment may screw up their new one in a subtle way. >>> >>> The latter you can recover from, the former you can't if automated >>> cleaning is true. > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a > reason to disable the 'rm' command :) > >>> >>> It's not just about people who explicitly delete their stacks (whether >>> intentional or not). There could be user error (non-explicit) or >>> side-effects triggered by Heat that could cause nodes to get deleted. > > If we have problems with Heat, we should fix Heat or stop using it. What > you're saying is essentially "we prevent ironic from doing the right > thing because we're using a tool that can invoke 'rm -rf /' at a wrong > moment." > >>> >>> You couldn't recover from those scenarios if automated cleaning were >>> true. Whereas you could always fix a deployment error by opting in to >>> do an automated clean. Does Ironic keep track of it a node has been >>> previously cleaned? Could we add a validation to check whether any >>> nodes might be used in the deployment that were not previously >>> cleaned? > > It's may be possible possible to figure out if a node was ever cleaned. 
> But then we'll force operators to invoke cleaning manually, right? It > will work, but that's another step on the default workflow. Are you okay > with it? > >> >> Is there a way to only do cleaning right before a node is deployed? >> If you're about to write a new image to the disk then any data there >> is forfeit anyway. Since the concern is old data on the disk messing >> up subsequent deploys, it doesn't really matter whether you clean it >> right after it's deleted or right before it's deployed, but the latter >> leaves the data intact for longer in case a mistake was made. >> >> If that's not possible then consider this an RFE. :-) > > It's a good idea, but it may cause problems with rebuilding instances. > Rebuild is essentially a re-deploy of the OS, users may not expect the > whole disk to be wiped.. > > Also it's unclear whether we want to write additional features to work > around disabled cleaning. No matter how good the tooling gets, user error will always be a thing. Someone will scale down the wrong node or something similar. I think there's value to allowing recovery from mistakes. We all make them. :-) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From msm at redhat.com Thu Apr 26 17:01:39 2018 From: msm at redhat.com (Michael McCune) Date: Thu, 26 Apr 2018 13:01:39 -0400 Subject: [openstack-dev] [all][api] POST /api-sig/news Message-ID: Greetings OpenStack community, Today's meeting was quite short and saw a review of everyone's status and the merging of one guideline. We began by sharing our current work and plans for the near future. Although everyone is on tight schedules currently, we discussed the next steps for the work on the OpenAPI proposal [7] and elmiko has mentioned that he will return to updating the microversion patch [8] in the near future. Next was our standard business of reviewing the frozen and open guidelines. The guideline on cache-control headers, which had been frozen last week, received no negative responses from the community, so it was merged. You can find the link to the merged guideline in the section below. As we reviewed our bug status, the group agreed that at some point in the near future we should take another pass at triaging our bugs. This work will take place after the upcoming Vancouver forum. As always if you're interested in helping out, in addition to coming to the meetings, there's also: * The list of bugs [5] indicates several missing or incomplete guidelines. * The existing guidelines [2] always need refreshing to account for changes over time. If you find something that's not quite right, submit a patch [6] to fix it. * Have you done something for which you think guidance would have made things easier but couldn't find any? Submit a patch and help others [6]. # Newly Published Guidelines * Add guidance on needing cache-control headers https://review.openstack.org/550468 # API Guidelines Proposed for Freeze Guidelines that are ready for wider review by the whole community. 
None # Guidelines Currently Under Review [3] * Update parameter names in microversion sdk spec https://review.openstack.org/#/c/557773/ * Add API-schema guide (still being defined) https://review.openstack.org/#/c/524467/ * A (shrinking) suite of several documents about doing version and service discovery Start at https://review.openstack.org/#/c/459405/ * WIP: microversion architecture archival doc (very early; not yet ready for review) https://review.openstack.org/444892 # Highlighting your API impacting issues If you seek further review and insight from the API SIG about APIs that you are developing or changing, please address your concerns in an email to the OpenStack developer mailing list[1] with the tag "[api]" in the subject. In your email, you should include any relevant reviews, links, and comments to help guide the discussion of the specific challenge you are facing. To learn more about the API SIG mission and the work we do, see our wiki page [4] and guidelines [2]. Thanks for reading and see you next week! # References [1] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev [2] http://specs.openstack.org/openstack/api-wg/ [3] https://review.openstack.org/#/q/status:open+project:openstack/api-wg,n,z [4] https://wiki.openstack.org/wiki/API_SIG [5] https://bugs.launchpad.net/openstack-api-wg [6] https://git.openstack.org/cgit/openstack/api-wg [7] https://gist.github.com/elmiko/7d97fef591887aa0c594c3dafad83442 [8] https://review.openstack.org/#/c/444892/ Meeting Agenda https://wiki.openstack.org/wiki/Meetings/API-SIG#Agenda Past Meeting Records http://eavesdrop.openstack.org/meetings/api_sig/ Open Bugs https://bugs.launchpad.net/openstack-api-wg From Tim.Bell at cern.ch Thu Apr 26 17:16:17 2018 From: Tim.Bell at cern.ch (Tim Bell) Date: Thu, 26 Apr 2018 17:16:17 +0000 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> Message-ID: <038C1AD1-9D57-4A31-9848-C0A4E7121DDF@cern.ch> My worry with changing the default is that it would be like adding the following in /etc/environment, alias ls=' rm -rf / --no-preserve-root' i.e. an operation which was previously read-only now becomes irreversible. We also have current use cases with Ironic where we are moving machines between projects by 'disowning' them to the spare pool and then reclaiming them (by UUID) into new projects with the same state. However, other operators may feel differently which is why I suggest asking what people feel about changing the default. In any case, changes in default behaviour need to be highly visible. Tim -----Original Message----- From: "Arkady.Kanevsky at dell.com" Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 26 April 2018 at 18:48 To: "openstack-dev at lists.openstack.org" Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? +1. It would be good to also identify the use cases. Surprised that node should be cleaned up automatically. I would expect that we want it to be a deliberate request from administrator to do. Maybe user when they "return" a node to free pool after baremetal usage. Thanks, Arkady -----Original Message----- From: Tim Bell [mailto:Tim.Bell at cern.ch] Sent: Thursday, April 26, 2018 11:17 AM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? 
How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think? Tim -----Original Message----- From: Ben Nemec Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, 26 April 2018 at 17:39 To: "OpenStack Development Mailing List (not for usage questions)" , Dmitry Tantsur Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? On 04/26/2018 09:24 AM, Dmitry Tantsur wrote: > Answering to both James and Ben inline. > > On 04/25/2018 05:47 PM, Ben Nemec wrote: >> >> >> On 04/25/2018 10:28 AM, James Slagle wrote: >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur >>> wrote: >>>> On 04/25/2018 04:26 PM, James Slagle wrote: >>>>> >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur >>>>> wrote: >>>>>> >>>>>> Hi all, >>>>>> >>>>>> I'd like to restart conversation on enabling node automated >>>>>> cleaning by >>>>>> default for the undercloud. This process wipes partitioning tables >>>>>> (optionally, all the data) from overcloud nodes each time they >>>>>> move to >>>>>> "available" state (i.e. on initial enrolling and after each tear >>>>>> down). >>>>>> >>>>>> We have had it disabled for a few reasons: >>>>>> - it was not possible to skip time-consuming wiping if data from >>>>>> disks >>>>>> - the way our workflows used to work required going between >>>>>> manageable >>>>>> and >>>>>> available steps several times >>>>>> >>>>>> However, having cleaning disabled has several issues: >>>>>> - a configdrive left from a previous deployment may confuse >>>>>> cloud-init >>>>>> - a bootable partition left from a previous deployment may take >>>>>> precedence >>>>>> in some BIOS >>>>>> - an UEFI boot partition left from a previous deployment is likely to >>>>>> confuse UEFI firmware >>>>>> - apparently ceph does not work correctly without cleaning (I'll >>>>>> defer to >>>>>> the storage team to comment) >>>>>> >>>>>> For these reasons we don't recommend having cleaning disabled, and I >>>>>> propose >>>>>> to re-enable it. >>>>>> >>>>>> It has the following drawbacks: >>>>>> - The default workflow will require another node boot, thus becoming >>>>>> several >>>>>> minutes longer (incl. the CI) >>>>>> - It will no longer be possible to easily restore a deleted overcloud >>>>>> node. >>>>> >>>>> >>>>> I'm trending towards -1, for these exact reasons you list as >>>>> drawbacks. There has been no shortage of occurrences of users who have >>>>> ended up with accidentally deleted overclouds. These are usually >>>>> caused by user error or unintended/unpredictable Heat operations. >>>>> Until we have a way to guarantee that Heat will never delete a node, >>>>> or Heat is entirely out of the picture for Ironic provisioning, then >>>>> I'd prefer that we didn't enable automated cleaning by default. >>>>> >>>>> I believe we had done something with policy.json at one time to >>>>> prevent node delete, but I don't recall if that protected from both >>>>> user initiated actions and Heat actions. And even that was not enabled >>>>> by default. >>>>> >>>>> IMO, we need to keep "safe" defaults. Even if it means manually >>>>> documenting that you should clean to prevent the issues you point out >>>>> above. The alternative is to have no way to recover deleted nodes by >>>>> default. >>>> >>>> >>>> Well, it's not clear what is "safe" here: protect people who explicitly >>>> delete their stacks or protect people who don't realize that a previous >>>> deployment may screw up their new one in a subtle way. 
>>> >>> The latter you can recover from, the former you can't if automated >>> cleaning is true. > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a > reason to disable the 'rm' command :) > >>> >>> It's not just about people who explicitly delete their stacks (whether >>> intentional or not). There could be user error (non-explicit) or >>> side-effects triggered by Heat that could cause nodes to get deleted. > > If we have problems with Heat, we should fix Heat or stop using it. What > you're saying is essentially "we prevent ironic from doing the right > thing because we're using a tool that can invoke 'rm -rf /' at a wrong > moment." > >>> >>> You couldn't recover from those scenarios if automated cleaning were >>> true. Whereas you could always fix a deployment error by opting in to >>> do an automated clean. Does Ironic keep track of it a node has been >>> previously cleaned? Could we add a validation to check whether any >>> nodes might be used in the deployment that were not previously >>> cleaned? > > It's may be possible possible to figure out if a node was ever cleaned. > But then we'll force operators to invoke cleaning manually, right? It > will work, but that's another step on the default workflow. Are you okay > with it? > >> >> Is there a way to only do cleaning right before a node is deployed? >> If you're about to write a new image to the disk then any data there >> is forfeit anyway. Since the concern is old data on the disk messing >> up subsequent deploys, it doesn't really matter whether you clean it >> right after it's deleted or right before it's deployed, but the latter >> leaves the data intact for longer in case a mistake was made. >> >> If that's not possible then consider this an RFE. :-) > > It's a good idea, but it may cause problems with rebuilding instances. > Rebuild is essentially a re-deploy of the OS, users may not expect the > whole disk to be wiped.. > > Also it's unclear whether we want to write additional features to work > around disabled cleaning. No matter how good the tooling gets, user error will always be a thing. Someone will scale down the wrong node or something similar. I think there's value to allowing recovery from mistakes. We all make them. 
:-) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From pabelanger at redhat.com Thu Apr 26 17:17:33 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Thu, 26 Apr 2018 13:17:33 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <20180426142731.GA18842@sm-xps> References: <1524689037-sup-783@lrrr.local> <20180426142731.GA18842@sm-xps> Message-ID: <20180426171733.GA16559@localhost.localdomain> On Thu, Apr 26, 2018 at 09:27:31AM -0500, Sean McGinnis wrote: > On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote: > > It's time to talk about the next steps in our migration from python > > 2 to python 3. > > > > [...] > > > > 2. Change (or duplicate) all functional test jobs to run under > > python 3. > > As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All > went well. > > That made me realize something though - right now we have jobs that explicitly > say py35, both for unit tests and functional tests. But I realized setting up > these test jobs that it works to just specify "basepython = python3" or run > unit tests with "tox -e py3". Then with that, it just depends on whether the > job runs on xenial or bionic as to whether the job is run with py35 or py36. > > It is less explicit, so I see some downside to that, but would it make sense to > change jobs to drop the minor version to make it more flexible and easy to make > these transitions? > I still think using tox-py35 / tox-py36 makes sense, as those jobs are already set up to use the specific nodeset of ubuntu-xenial or ubuntu-bionic. If we did move to just tox-py3, it would actually result in more projects needing to add to their .zuul.yaml files:

- project:
    check:
      jobs:
        - tox-py35

- project:
    check:
      jobs:
        - tox-py3:
            nodeset: ubuntu-xenial

This may be okay, and I will let others comment, but the main reason I am not a fan is that we can no longer infer the nodeset by looking at the job name. tox-py3 could be xenial or bionic. Paul From fungi at yuggoth.org Thu Apr 26 17:32:23 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 26 Apr 2018 17:32:23 +0000 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <20180426171733.GA16559@localhost.localdomain> References: <1524689037-sup-783@lrrr.local> <20180426142731.GA18842@sm-xps> <20180426171733.GA16559@localhost.localdomain> Message-ID: <20180426173223.4ksmdxomoetqqhjh@yuggoth.org> On 2018-04-26 13:17:33 -0400 (-0400), Paul Belanger wrote: [...] > This may be okay, and I will let others comment, but the main reason > I am not a fan is that we can no longer infer the nodeset by looking > at the job name. tox-py3 could be xenial or bionic. 
This brings back a question we've struggled with for years: are we testing against "Python X.Y" or are we testing against "Python as provided by distro Z"? Depending on how you think about that, one solution or the other is technically a more accurate reflection of our choice here. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From cboylan at sapwetik.org Thu Apr 26 18:22:15 2018 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 26 Apr 2018 11:22:15 -0700 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <20180426142731.GA18842@sm-xps> References: <1524689037-sup-783@lrrr.local> <20180426142731.GA18842@sm-xps> Message-ID: <1524766935.803604.1351983704.474E0252@webmail.messagingengine.com> On Thu, Apr 26, 2018, at 7:27 AM, Sean McGinnis wrote: > On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote: > > It's time to talk about the next steps in our migration from python > > 2 to python 3. > > > > [...] > > > > 2. Change (or duplicate) all functional test jobs to run under > > python 3. > > As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All > went well. > > That made me realize something though - right now we have jobs that explicitly > say py35, both for unit tests and functional tests. But I realized setting up > these test jobs that it works to just specify "basepython = python3" or run > unit tests with "tox -e py3". Then with that, it just depends on whether the > job runs on xenial or bionic as to whether the job is run with py35 or py36. > > It is less explicit, so I see some downside to that, but would it make sense to > change jobs to drop the minor version to make it more flexible and easy to make > these transitions? One reason to use it would be local user simplicity. Rather than need to explicitly add new python3 releases to the default env list so that it does what we want every year or two we can just list py3,py2,linters in the default list and get most of the way there for local users. Then we can continue to be more specific in the CI jobs if that is desirable. I do think we likely want to be explicit about the python versions we are using in CI testing. This makes it clear to developers who may need to reproduce or just understand why failures happen what platform is used. It also makes it explicit that "openstack runs on $pythonversion". Clark From colleen at gazlene.net Thu Apr 26 18:23:20 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Thu, 26 Apr 2018 20:23:20 +0200 Subject: [openstack-dev] Summit Forum Schedule In-Reply-To: <5AE0EE0C.1070400@openstack.org> References: <5AE0EE0C.1070400@openstack.org> Message-ID: <1524767000.1439227.1351981072.397A5C6D@webmail.messagingengine.com> Hi Jimmy, On Wed, Apr 25, 2018, at 11:07 PM, Jimmy McArthur wrote: > Hi everyone - > > Please have a look at the Vancouver Forum schedule: > https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing > (also attached as a CSV) The proposed schedule was put together by two > members from UC, TC and Foundation. > > We do our best to avoid moving scheduled items around as it tends to > create a domino affect, but we do realize we might have missed > something. The schedule should generally be set, but if you see a major > conflict in either content or speaker availability, please email > speakersupport at openstack.org. 
I have a conflict on Thursday afternoon. Could I propose swapping these two sessions: Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers to deployment Thursday 1:50-2:30 Default Roles I've gotten affirmation from Tom and Lance on the swap, though if this causes problems for anyone else I'm happy to retract this request. Colleen From aspiers at suse.com Thu Apr 26 18:47:07 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 26 Apr 2018 19:47:07 +0100 Subject: [openstack-dev] [tc] campaign question: How can we make contributing to OpenStack easier? In-Reply-To: <20180425172622.6cwrgmvo7tiwo2ul@yuggoth.org> <1524685555-sup-7538@lrrr.local> Message-ID: <20180426184707.cvdkbkhvnhj5noii@pacific.linksys.moosehall> Doug Hellmann wrote: >Excerpts from Adam Spiers's message of 2018-04-25 18:15:42 +0100: >> [BTW I hope it's not considered off-bounds for those of us who aren't >> TC election candidates to reply within these campaign question threads >> to responses from the candidates - but if so, let me know and I'll >> shut up ;-) ] > >Everyone should feel free to participate! Jeremy Stanley wrote: >Not only are responses from everyone in the community welcome (and >like many, I think we should be asking questions like this often >outside the context of election campaigning), but I wholeheartedly >agree with your stance on this topic and also strongly encourage you >to consider running for a seat on the TC in the future if you can >swing it. Thanks both for your support! From jimmy at openstack.org Thu Apr 26 20:27:11 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 26 Apr 2018 15:27:11 -0500 Subject: [openstack-dev] Summit Forum Schedule In-Reply-To: <1524767000.1439227.1351981072.397A5C6D@webmail.messagingengine.com> References: <5AE0EE0C.1070400@openstack.org> <1524767000.1439227.1351981072.397A5C6D@webmail.messagingengine.com> Message-ID: <5AE2361F.10908@openstack.org> No problem. Done :) > Colleen Murphy > April 26, 2018 at 1:23 PM > Hi Jimmy, > > I have a conflict on Thursday afternoon. Could I propose swapping > these two sessions: > > Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers to > deployment > Thursday 1:50-2:30 Default Roles > > I've gotten affirmation from Tom and Lance on the swap, though if this > causes problems for anyone else I'm happy to retract this request. > > Colleen > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 25, 2018 at 4:07 PM > Hi everyone - > > Please have a look at the Vancouver Forum schedule: > https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing > (also attached as a CSV) The proposed schedule was put together by two > members from UC, TC and Foundation. > > We do our best to avoid moving scheduled items around as it tends to > create a domino affect, but we do realize we might have missed > something. The schedule should generally be set, but if you see a > major conflict in either content or speaker availability, please email > speakersupport at openstack.org. 
> > Thanks all, > Jimmy > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Thu Apr 26 20:44:35 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Thu, 26 Apr 2018 16:44:35 -0400 Subject: [openstack-dev] new tool for quickly fixing nits on gerrit reviews Message-ID: <1524775251-sup-1780@lrrr.local> For a while now I've been encouraging folks to propose follow-up patches to fix nits on proposed changes, rather than waiting ages for someone to respond to a -1 for a little typo. Today I've release git-nit, a tool to make doing that easier. The idea is that you would run a command like: $ git nit https://review.openstack.org/#/c/564559/ to download that review into a new local sandbox, ready for your follow-up patch. There are more examples in the README.rst on github [1] (until the repo is imported into our gerrit server [2]). I released version 1.0.0 a few minutes ago, so you should be able to pip install it. Please try it out and let me know what your experience is like. Doug [1] https://github.com/dhellmann/git-nit [2] https://review.openstack.org/564622 From tony at bakeyournoodle.com Thu Apr 26 20:57:01 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 27 Apr 2018 06:57:01 +1000 Subject: [openstack-dev] Changes to keystone-stable-maint members In-Reply-To: References: Message-ID: <20180426205701.GA23129@thor.bakeyournoodle.com> On Tue, Apr 24, 2018 at 10:58:06AM -0700, Morgan Fainberg wrote: > Hi, > > I am proposing making some changes to the Keystone Stable Maint team. > A lot of this is cleanup for contributors that have moved on from > OpenStack. For the most part, I've been the only one responsible for > Keystone Stable Maint reviews, and I'm not comfortable being this > bottleneck > > Removals > ======== > Dolph Matthews > Steve Martinelli > Brant Knudson > > Each of these members have left/moved on from OpenStack, or in the > case of Brant, less involved with Keystone (and I believe OpenStack as > a whole). > > Additions > ======= > Lance Bragstad Done. Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tpb at dyncloud.net Thu Apr 26 21:01:23 2018 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 26 Apr 2018 17:01:23 -0400 Subject: [openstack-dev] Summit Forum Schedule In-Reply-To: <5AE2361F.10908@openstack.org> References: <5AE0EE0C.1070400@openstack.org> <1524767000.1439227.1351981072.397A5C6D@webmail.messagingengine.com> <5AE2361F.10908@openstack.org> Message-ID: <20180426210123.s4nvispkzjnqzw2a@barron.net> Jimmy, Also can we 's/barriers/overcoming barriers/' in the title of the manila session? Thanks! -- Tom On 26/04/18 15:27 -0500, Jimmy McArthur wrote: >No problem. Done :) > >>Colleen Murphy >>April 26, 2018 at 1:23 PM >>Hi Jimmy, >> >>I have a conflict on Thursday afternoon. Could I propose swapping >>these two sessions: >> >>Monday 11:35-12:15 Manila Ops feedback: running at scale, barriers >>to deployment >>Thursday 1:50-2:30 Default Roles >> >>I've gotten affirmation from Tom and Lance on the swap, though if >>this causes problems for anyone else I'm happy to retract this >>request. 
>> >>Colleen >> >>__________________________________________________________________________ >>OpenStack Development Mailing List (not for usage questions) >>Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>Jimmy McArthur >>April 25, 2018 at 4:07 PM >>Hi everyone - >> >>Please have a look at the Vancouver Forum schedule: https://docs.google.com/spreadsheets/d/15hkU0FLJ7yqougCiCQgprwTy867CGnmUvAvKaBjlDAo/edit?usp=sharing >>(also attached as a CSV) The proposed schedule was put together by >>two members from UC, TC and Foundation. >> >>We do our best to avoid moving scheduled items around as it tends to >>create a domino effect, but we do realize we might have missed >>something. The schedule should generally be set, but if you see a >>major conflict in either content or speaker availability, please >>email speakersupport at openstack.org. >> >>Thanks all, >>Jimmy >>_______________________________________________ >>OpenStack-operators mailing list >>OpenStack-operators at lists.openstack.org >>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >__________________________________________________________________________ >OpenStack Development Mailing List (not for usage questions) >Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From jimmy at openstack.org Thu Apr 26 21:41:32 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 26 Apr 2018 16:41:32 -0500 Subject: [openstack-dev] Summit Forum Schedule In-Reply-To: <20180426210123.s4nvispkzjnqzw2a@barron.net> References: <5AE0EE0C.1070400@openstack.org> <1524767000.1439227.1351981072.397A5C6D@webmail.messagingengine.com> <5AE2361F.10908@openstack.org> <20180426210123.s4nvispkzjnqzw2a@barron.net> Message-ID: <5AE2478C.8090407@openstack.org> No problem at all: https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21780/manila-ops-feedback-running-at-scale-overcoming-barriers-to-deployment Tom Barron wrote: > running at scale, barriers to deployment From singh.surya64mnnit at gmail.com Fri Apr 27 00:31:16 2018 From: singh.surya64mnnit at gmail.com (Surya Singh) Date: Fri, 27 Apr 2018 06:01:16 +0530 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: +1 Good contribution!! Mark On Thu, Apr 26, 2018 at 9:01 PM, Jeffrey Zhang wrote: > Kolla core reviewer team, > > It is my pleasure to nominate > mgoddard for kolla core team. > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project which complements kolla-ansible. He is > also the most active non-core contributor for the last 90 days[1] > Consider this nomination a +1 vote from me > > A +1 vote indicates you are in favor of mgoddard as a candidate, a -1 > is a veto. Voting is open for 7 days until May 4th, or a unanimous > response is reached or a veto vote occurs. 
> > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- Cheers Surya From mcdkr at yandex.ru Fri Apr 27 01:30:50 2018 From: mcdkr at yandex.ru (Vitalii Solodilov) Date: Fri, 27 Apr 2018 04:30:50 +0300 Subject: [openstack-dev] [mistral] A mechanism to close stuck running executions Message-ID: <1651901524792650@web8j.yandex.ru> Hi, Jozsef and Andras. Do you plan to finish this patch? https://review.openstack.org/#/c/527085 I think the stuck RUNNING executions are a very sensitive subject for Mistral. -- Best regards, Vitalii Solodilov From mark.kirkwood at catalyst.net.nz Fri Apr 27 03:58:57 2018 From: mark.kirkwood at catalyst.net.nz (Mark Kirkwood) Date: Fri, 27 Apr 2018 15:58:57 +1200 Subject: [openstack-dev] [osc][swift] Setting storage policy for a container possible via the client? In-Reply-To: References: <1524142259-sup-5177@lrrr.local> Message-ID: On 20/04/18 04:54, Dean Troyer wrote: > On Thu, Apr 19, 2018 at 7:51 AM, Doug Hellmann wrote: >> Excerpts from Mark Kirkwood's message of 2018-04-19 16:47:58 +1200: >>> Swift has had storage policies for a while now. These are enabled by >>> setting the 'X-Storage-Policy' header on a container. >>> >>> It looks to me like this is not possible using openstack-client (even in >>> master branch) - while there is a 'set' operation for containers this >>> will *only* set 'Meta-*' type headers. >>> >>> It seems to me that adding this would be highly desirable. Is it in the >>> pipeline? If not I might see how much interest there is at my end for >>> adding such - as (famous last words) it looks pretty straightforward to do. >> I can't imagine why we wouldn't want to implement that and I'm not >> aware of anyone working on it. If you're interested and have time, >> please do work on the patch(es). > The primary thing that hinders Swift work like this is OSC does not > use swiftclient as it wasn't a standalone thing yet when I wrote that > bit (lifting much of the actual API code from swiftclient). We > decided a while ago to not add that dependency and drop the > OSC-specific object code and use the SDK when we start using SDK for > everything else, after there is an SDK 1.0 release. > > Moving forward on this today using either OSC's api.object code or the > SDK would be fine, with the same SDK caveat we have with Neutron, > since SDK isn't 1.0 we may have to play catch-up and maintain multiple > SDK release compatibilities (which has happened at least twice). Ok, understood. I've uploaded a small patch that adds policy specification to 'container create' and adds some policy details display to 'container show' and 'object store account show' [1]. It uses the existing api design, but tries to get the display to look a little like what the swift cli provides (particularly for the account info). 
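For reference, the Swift API operation this maps to is just the header mentioned at the top of the thread, roughly (token, endpoint and policy name below are only examples):

    curl -i -X PUT -H "X-Auth-Token: $TOKEN" \
         -H "X-Storage-Policy: gold" \
         "$STORAGE_URL/my-container"
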
regards Mark [1] Gerrit topic is objectstorepolicies From emilien at redhat.com Fri Apr 27 04:03:08 2018 From: emilien at redhat.com (Emilien Macchi) Date: Thu, 26 Apr 2018 21:03:08 -0700 Subject: [openstack-dev] Fwd: [tripleo] PTG session about All-In-One installer: recap & roadmap In-Reply-To: References: Message-ID: We created a new board where we'll track the efforts for the all-in-one installer: https://trello.com/b/iAHhAgjV/tripleo-all-in-one-installer Note: please do not use the containerized undercloud dashboard for these tasks, it is a separated effort. Feel free to join the board and feed the backlog! Thanks, On Thu, Apr 5, 2018 at 10:02 AM, Dan Prince wrote: > On Thu, Apr 5, 2018 at 12:24 PM, Emilien Macchi > wrote: > > On Thu, Apr 5, 2018 at 4:37 AM, Dan Prince wrote: > > > >> Much of the work on this is already there. We've been using this stuff > >> for over a year to dev/test the containerization efforts for a long > >> time now (and thanks for your help with this effort). The problem I > >> think is how it is all packaged. While you can use it today it > >> involves some tricks (docker in docker), or requires you to use an > >> extra VM to minimize the installation footprint on your laptop. > >> > >> Much of the remaining work here is really just about packaging and > >> technical debt. If we put tripleoclient and heat-monolith into a > >> container that solves much of the requirements problems and > >> essentially gives you a container which can transform Heat templates > >> to Ansible. From the ansible side we need to do a bit more work to > >> mimimize our dependencies (i.e. heat hooks). Using a virtual-env would > >> be one option for developers if we could make that work. I lighter set > >> of RPM packages would be another way to do it. Perhaps both... > >> Then a smaller wrapper around these things (which I personally would > >> like to name) to make it all really tight. > > > > > > So if I summarize the discussion: > > > > - A lot of positive feedback about the idea and many use cases, which is > > great. > > > > - Support for non-containerized services is not required, as long as we > > provide a way to update containers with under-review patches for > developers. > > I think we still desire some (basic no upgrades) support for > non-containerized baremetal at this time. > > > > > - We'll probably want to breakdown the "openstack undercloud deploy" > process > > into pieces > > * start an ephemeral Heat container > > It already supports this if use don't use the --heat-native option, > also you can customize the container used for heat via > --heat-container-image. So we already have this! But rather than do > this I personally prefer the container to have python-tripleoclient > and heat-monolith in it. That way everything everything is in there to > generate Ansible templates. If you just use Heat you are missing some > of the pieces that you'd still have to install elsewhere on your host. > Having them all be in one scoped container which generates Ansible > playbooks from Heat templates is better IMO. > > > * create the Heat stack passing all requested -e's > > * run config-download and save the output > > > > And then remove undercloud specific logic, so we can provide a generic > way > > to create the config-download playbooks. > > Yes. Lets remove some of the undercloud logic. But do note that most > of the undercloud specific login is now in undercloud_config.py anyway > so this is mostly already on its way. 
> > > This generic way would be consumed by the undercloud deploy commands but > > also by the new all-in-one wrapper. > > > > - Speaking of the wrapper, we will probably have a new one. Several names > > were proposed: > > * openstack tripleo deploy > > * openstack talon deploy > > * openstack elf deploy > > The wrapper could be just another set of playbooks. That we give a > name too... and perhaps put a CLI in front of as well. > > > > > - The wrapper would work with deployed-server, so we would noop Neutron > > networks and use fixed IPs. > > This would be configurable I think depending on which templates were > used. Noop as a default for developer deployments but do note that > some services like Neutron aren't going to work unless you have some > basic network setup. Noop is useful if you prefer to do this manually, > but our os-net-config templates are quite useful to automate things. > > > > > - Investigate the packaging work: containerize tripleoclient and > > dependencies, see how we can containerized Ansible + dependencies (and > > eventually reduce them at strict minimum). > > > > Let me know if I missed something important, hopefully we can move things > > forward during this cycle. > > -- > > Emilien Macchi > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From renat.akhmerov at gmail.com Fri Apr 27 04:45:22 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 27 Apr 2018 11:45:22 +0700 Subject: [openstack-dev] =?utf-8?Q?=E2=80=8B=5Bopenstack-dev=5D_?=[mistral] timeout and retry In-Reply-To: <3369991524520817@web43g.yandex.ru> References: <3369991524520817@web43g.yandex.ru> Message-ID: <0c051cf2-f702-4bd2-8882-019ceefd2778@Spark> Hi, I don’t clearly understand the problem you’re trying to point to. IMO, when using these two policies at the same time ‘timeout’ policy should have higher priority. No matter at what stage the task is, but if it’s still in RUNNING state or FAILED but we know that retry policy still didn’t use all attempts then the ‘timeout’ policy should force the task to fail. If it’s the second case when it’s FAILED but the retry policy is still in play then we need to leave some ‘force’ marker in the task state to make sure that there’s no need to retry it further. Thanks Renat Akhmerov @Nokia On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov , wrote: > Hi Renat, Can you explain me and Dougal how timeout policy should work with retry policy? > > I guess there is bug right now. > The behaviour is something like this https://ibb.co/hhm0eH > Example: https://review.openstack.org/#/c/563759/ > Logs: http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083 > Even we will fix this bug and after task timeout we will not retry task. I don't understand which problem is decided by this timeout and retry. > Other problem. What about task retry? I mean using mistral api. The problem is that timeout delayed calls was not created. > > IMHO the combination of these policies should work like this https://ibb.co/fe5tzH > It is not a timeout per action because when task retry it move to some complete state and then back to RUNNING state. And it will work fine with with-items policy. > The main advantage is executor and rabbitmq HA. I can specify small timeout if executor will die the task retried by timeout and create new action. > The second is predictable behaviour. 
When I specify timeout: 10 and retry.count: 5 I know that will be create maximum 5 action before SUCCESS state and every action will be executes no longer than 10 seconds. > > -- > Best regards, > > Vitalii Solodilov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From allprog at gmail.com Fri Apr 27 05:17:56 2018 From: allprog at gmail.com (András Kövi) Date: Fri, 27 Apr 2018 05:17:56 +0000 Subject: [openstack-dev] [mistral] A mechanism to close stuck running executions In-Reply-To: <1651901524792650@web8j.yandex.ru> References: <1651901524792650@web8j.yandex.ru> Message-ID: Hi Vitalii, thanks for reminding me. I had almost forgotten about it. I've updated it with the stuff we have tested locally for months. Looking forward to your comments! Thanks, Andras Vitalii Solodilov wrote (at 3:31 on Fri, 27 Apr 2018): > Hi, Jozsef and Andras. > Do you plan to finish this patch? https://review.openstack.org/#/c/527085 > I think the stuck RUNNING executions are a very sensitive subject for Mistral. > -- > Best regards, > Vitalii Solodilov > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mcdkr at yandex.ru Fri Apr 27 05:53:05 2018 From: mcdkr at yandex.ru (Vitalii Solodilov) Date: Fri, 27 Apr 2018 08:53:05 +0300 Subject: [openstack-dev] [mistral] timeout and retry In-Reply-To: <0c051cf2-f702-4bd2-8882-019ceefd2778@Spark> References: <3369991524520817@web43g.yandex.ru> <0c051cf2-f702-4bd2-8882-019ceefd2778@Spark> Message-ID: <6330681524808385@web48g.yandex.ru> An HTML attachment was scrubbed... URL: From zigo at debian.org Fri Apr 27 08:18:28 2018 From: zigo at debian.org (Thomas Goirand) Date: Fri, 27 Apr 2018 10:18:28 +0200 Subject: [openstack-dev] sqlalchemy-migrate and networking-mlnx still depend on tempest-lib Message-ID: <53450a85-3219-a930-aa36-3431a7924c21@debian.org> Hi, Everyone has migrated away from tempest-lib to tempest, but there are still 2 packages remaining that use the old deprecated tempest-lib. Does anyone volunteer for the job? It'd be nice if that happened, so we could get rid of the tempest-lib packages completely in distros and everywhere. I can review patches in sqla-migrate, as I'm still a core reviewer there, though I'm not sure I know enough to do it myself. Cheers, Thomas Goirand (zigo) From renat.akhmerov at gmail.com Fri Apr 27 09:02:36 2018 From: renat.akhmerov at gmail.com (Renat Akhmerov) Date: Fri, 27 Apr 2018 16:02:36 +0700 Subject: [openstack-dev] [mistral] timeout and retry In-Reply-To: <6330681524808385@web48g.yandex.ru> References: <3369991524520817@web43g.yandex.ru> <0c051cf2-f702-4bd2-8882-019ceefd2778@Spark> <6330681524808385@web48g.yandex.ru> Message-ID: <2e68db48-f75f-43c4-a7d1-ee265e26c27b@Spark> Yep, agree that this is a bug. We need to fix that. Would you please create a ticket at LP? Thanks Renat Akhmerov @Nokia On 27 Apr 2018, 12:53 +0700, Vitalii Solodilov , wrote: > > No matter at what stage the task is, but if it's still in RUNNING state or FAILED but we know that retry policy still didn't use all attempts then the 'timeout' policy should force the task to fail. > Ok, then we have a bug because timeout policy doesn't force the task to fail. It retries the task. 
And after that, we have two actions running in parallel. > https://github.com/openstack/mistral/blob/master/mistral/engine/policies.py#L537 > > 27.04.2018, 07:50, "Renat Akhmerov" <renat.akhmerov at gmail.com>: > > Hi, > > > > I don't clearly understand the problem you're trying to point to. > > > > IMO, when using these two policies at the same time, the 'timeout' policy should have higher priority. No matter what stage the task is at, if it's still in the RUNNING state, or FAILED but we know that the retry policy still didn't use all attempts, then the 'timeout' policy should force the task to fail. If it's the second case, when it's FAILED but the retry policy is still in play, then we need to leave some 'force' marker in the task state to make sure that there's no need to retry it further. > > > > Thanks > > > > Renat Akhmerov > > @Nokia > > > > On 24 Apr 2018, 05:00 +0700, Vitalii Solodilov <mcdkr at yandex.ru>, wrote: > > > Hi Renat, Can you explain to me and Dougal how the timeout policy should work with the retry policy? > > > > > > I guess there is a bug right now. > > > The behaviour is something like this https://ibb.co/hhm0eH > > > Example: https://review.openstack.org/#/c/563759/ > > > Logs: http://logs.openstack.org/59/563759/1/check/openstack-tox-py27/6f38808/job-output.txt.gz#_2018-04-23_20_54_55_376083 > > > Even if we fix this bug, after a task timeout we will not retry the task. I don't understand which problem is solved by this timeout and retry. > > > Another problem: what about task retry? I mean using the Mistral API. The problem is that the timeout delayed calls were not created. > > > > > > IMHO the combination of these policies should work like this https://ibb.co/fe5tzH > > > It is not a timeout per action, because when a task retries it moves to some completed state and then back to the RUNNING state. And it will work fine with the with-items policy. > > > The main advantage is executor and RabbitMQ HA. I can specify a small timeout; if the executor dies, the task is retried by timeout and a new action is created. > > > The second is predictable behaviour. When I specify timeout: 10 and retry.count: 5, I know that a maximum of 5 actions will be created before the SUCCESS state, and every action will execute for no longer than 10 seconds. > > > > > > -- > > > Best regards, > > > > > > Vitalii Solodilov > > > > > > -- > Best regards, > > Vitalii Solodilov > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vondra at homeatcloud.cz Fri Apr 27 09:02:53 2018 From: vondra at homeatcloud.cz (Tomáš Vondra) Date: Fri, 27 Apr 2018 11:02:53 +0200 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: <CAOJFoEu1eWWGRsVxjnAhWSQQki8at31wYhDJW-W2sHRhFEdEsw@mail.gmail.com> References: <CABZ-WGmK1Wn-gvkAHbHRJ247=8HGCUoVHDyNZMnnSpGrA5D1zw@mail.gmail.com> Message-ID: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> Hi! What we’ve got in our small public cloud: scheduler_default_filters=AggregateInstanceExtraSpecsFilter, AggregateImagePropertiesIsolation, RetryFilter, AvailabilityZoneFilter, AggregateRamFilter, AggregateDiskFilter, AggregateCoreFilter, ComputeFilter, ImagePropertiesFilter, ServerGroupAntiAffinityFilter, ServerGroupAffinityFilter #ComputeCapabilitiesFilter off because of conflict with AggregateInstanceExtraSpecFilter https://bugs.launchpad.net/nova/+bug/1279719 I really like to set resource limits using Aggregate metadata. Also, Windows host isolation is done using image metadata. I have filed a bug somewhere that it does not work correctly with Boot from Volume. I believe it got pretty much ignored. That’s why we also use flavor metadata.
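For illustration, the kind of wiring this relies on looks roughly like the following (a sketch only; the aggregate, host and flavor names are made up, not our real ones):

  # Tag an aggregate and its hosts with the metadata the filters match on
  openstack aggregate create windows-hosts
  openstack aggregate set --property os_distro=windows windows-hosts
  openstack aggregate add host windows-hosts compute-03
  # AggregateInstanceExtraSpecsFilter then steers this flavor onto those hosts
  openstack flavor set --property aggregate_instance_extra_specs:os_distro=windows m1.win.large

With AggregateImagePropertiesIsolation enabled as above, an image carrying the os_distro=windows property should land on the same hosts without any flavor changes.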
Tomas from Homeatcloud From: Massimo Sgaravatto [mailto:massimo.sgaravatto at gmail.com] Sent: Saturday, April 21, 2018 7:49 AM To: Simon Leinen Cc: OpenStack Development Mailing List (not for usage questions); OpenStack Operators Subject: Re: [Openstack-operators] [openstack-dev] [nova] Default scheduler filters survey enabled_filters = AggregateInstanceExtraSpecsFilter,AggregateMultiTenancyIsolation,RetryFilter,AvailabilityZoneFilter,RamFilter,CoreFilter,AggregateRamFilter,AggregateCoreFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter Cheers, Massimo On Wed, Apr 18, 2018 at 10:20 PM, Simon Leinen wrote: Artom Lifshitz writes: > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) We have the following enabled on our semi-public (academic community) cloud, which runs on Newton: AggregateInstanceExtraSpecsFilter AvailabilityZoneFilter ComputeCapabilitiesFilter ComputeFilter ImagePropertiesFilter PciPassthroughFilter RamFilter RetryFilter ServerGroupAffinityFilter ServerGroupAntiAffinityFilter (sorted alphabetically) Recently we've also been trying AggregateImagePropertiesIsolation ...but it looks like we'll replace it with our own because it's a bit awkward to use for our purpose (scheduling Windows instances to licensed compute nodes). -- Simon. _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul.bourke at oracle.com Fri Apr 27 09:06:13 2018 From: paul.bourke at oracle.com (Paul Bourke) Date: Fri, 27 Apr 2018 10:06:13 +0100 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: +1, always great working with Mark :) On 26/04/18 16:31, Jeffrey Zhang wrote: > Kolla core reviewer team, > > It is my pleasure to nominate mgoddard for kolla core team. > > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project which complements kolla-ansible. He is > also the most active non-core contributor for the last 90 days [1] > > Consider this nomination a +1 vote from me > > A +1 vote indicates you are in favor of mgoddard as a candidate, a -1 is a veto. Voting is open for 7 days until May 4th, or a unanimous response is reached or a veto vote occurs.
> > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From thierry at openstack.org Fri Apr 27 09:19:56 2018 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 27 Apr 2018 11:19:56 +0200 Subject: [openstack-dev] [tc] Technical Committee Status update, April 27th Message-ID: Hi! This is the weekly summary of Technical Committee initiatives. You can find the full list of currently-considered changes at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker We also track TC objectives for the cycle using StoryBoard at: https://storyboard.openstack.org/#!/project/923 == Recently-approved changes == * New repos: ansible-role-container-registry == Election season == Voting is open to renew 7 seats from the Technical Committee's 13 seats. If you contributed changes recently to any of the official OpenStack repositories, you should have received a ballot. The deadline to vote is 23:59 UTC on Monday, so please vote now! You can find details on the election at: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129753.html A number of threads have been started to discuss TC-related questions, which may inform your vote: * http://lists.openstack.org/pipermail/openstack-dev/2018-April/129622.html * http://lists.openstack.org/pipermail/openstack-dev/2018-April/129658.html * http://lists.openstack.org/pipermail/openstack-dev/2018-April/129661.html * http://lists.openstack.org/pipermail/openstack-dev/2018-April/129664.html == Under discussion == The four changes requiring formal votes from the TC members will be held until the election concludes and new members join: * Splitting/abandoning kolla-kubernetes [1] * Adjutant project team addition [2] * Allow projects to drop py27 support in the PTI [3] * More detail about the expectations we place on goal champions [4] [1] https://review.openstack.org/#/c/552531/ [2] https://review.openstack.org/#/c/553643/ [3] https://review.openstack.org/561922 [4] https://review.openstack.org/564060 == TC member actions/focus/discussions for the coming week(s) == The election closes on Monday. The new members will be inducted, and they will select the Technical Committee chair for the upcoming 6-month session. Urgent topics include preparation of the agenda for the joint Board + TC + UC meeting in Vancouver. If you have an idea of a topic that should be discussed, it's still time to chime in on the thread at: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129428.html == Office hours == To be more inclusive of all timezones and more mindful of people for whom English is not the primary language, the Technical Committee dropped its dependency on weekly meetings.
So that you can still get hold of TC members on IRC, we instituted a series of office hours on #openstack-tc: * 09:00 UTC on Tuesdays * 01:00 UTC on Wednesdays * 15:00 UTC on Thursdays Feel free to add your own office hour conversation starter at: https://etherpad.openstack.org/p/tc-office-hour-conversation-starters Cheers, -- Thierry Carrez (ttx) From allprog at gmail.com Fri Apr 27 09:22:07 2018 From: allprog at gmail.com (András Kövi) Date: Fri, 27 Apr 2018 09:22:07 +0000 Subject: [openstack-dev] [mistral] Help with test run Message-ID: Hi, Can someone please help me with why this build ended with TIMED_OUT? http://logs.openstack.org/85/527085/8/check/mistral-tox-unit-mysql/3ffae9f/ Thanks, Andras From jichenjc at cn.ibm.com Fri Apr 27 09:40:20 2018 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 27 Apr 2018 17:40:20 +0800 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> Message-ID: According to the requirements and comments, we have now enabled the CI runs with run_validation = True. According to [1] below, [2] for example needs the SSH validation to pass. There were also a couple of comments asking for enhancements to the CI logs, such as the format and the legacy incorrect log links; the newest log samples can be found at [3] (take n-cpu as an example; those logs end with _white.html). Also, the blueprint [4] requested by the previous discussion is posted here again for reference. Please let us know whether the procedural -2 can be removed in order to proceed. Thanks for your help. [1] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/logs/tempest.log 2018-04-27 08:50:44.852 19582 DEBUG tempest [-] validation.run_validation = True http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/console.html {0} tempest.scenario.test_server_basic_ops.TestServerBasicOps.test_server_basic_ops [86.788179s] ... ok [2] https://github.com/openstack/tempest/blob/master/tempest/scenario/test_server_basic_ops.py [3] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17455/logs/n-cpu.log_white.html [4] https://review.openstack.org/#/c/562154/ Best Regards! Kevin (Chen) Ji 纪 晨 Engineer, zVM Development, CSTL Notes: Chen CH Ji/China/IBM at IBMCN Internet: jichenjc at cn.ibm.com Phone: +86-10-82451493 Address: 3/F Ring Building, ZhongGuanCun Software Park, Haidian District, Beijing 100193, PRC From: melanie witt To: openstack-dev at lists.openstack.org Date: 04/18/2018 01:41 AM Subject: Re: [openstack-dev] [Nova] z/VM introducing a new config driveformat On Tue, 17 Apr 2018 16:58:22 +0800, Chen Ch Ji wrote: > For the question on AE documentation, it's open source in [1] and the > documentation for how to build and use it is [2]; > once our code is upstream, there is a set of documentation changes which > will cover this image build process by > adding some links there [3] Thanks, that is good info.
> You are right, we need image to have our Active Engine, I think > different arch and platform might have their unique > requirements and our solution our Active Engine is very like to > cloud-init so no harm to add it from user's perspective > I think later we can upload image to some place so anyone is able to > consume it as test image if they like > because different arch's image (e.g x86 and s390x) can't be shared anyway. > > For the config drive format you mentioned, actually, as previous > explanation and discussion witho Michael and Dan, > We found the iso9660 can be used (previously we made a bad assumption) > and we already changed the patch in [4], > so it's exactly same to other virt drivers you mentioned , we don't need > special format and iso9660 works perfect for our driver That's good news, I'm glad that got resolved. > It make sense to me we are temply moved out from runway, I suppose we > can adjust the CI to enable the run_ssh = true > with config drive functionalities very soon and we will apply for review > after that with the test result requested in our CI log. Okay, sounds good. Since you expect to be up and running with [validation]run_validation = True soon, I'm going to move the z/VM driver blueprint back to the front of the queue and put the next blueprint in line into the runway. Then, when the next blueprint end date arrives (currently 2018-04-30), if the z/VM CI is ready with cleaned up, human readable log files and is running with run_ssh = True with the test_server_basic_ops test to verify config drive operation, we will add the z/VM driver blueprint back to a runway for dedicated review. Let us know when the z/VM CI is ready, in case other runway reviews are completed early. If other runway reviews complete early, a runway space might be available earlier than 2018-04-30. 
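For reference, the switch in question is a one-line change in tempest.conf (a minimal sketch; the CI's image, network and credential setup also has to support SSH for the validation to actually pass):

  [validation]
  run_validation = True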
Thanks, -melanie > Thanks > > [1] > https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_mfcloud_python-2Dzvm-2Dsdk_blob_master_tools_share_zvmguestconfigure&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=CRsbKwMgE5rCqLl8MTo6ZnKA4QkxK3NRmDont5BYcqw&s=RpjRNK6wiUJDNTYKBkou6nSDpaUkNOXdmBJ-SyjkPaw&e= > [2] > https://urldefense.proofpoint.com/v2/url?u=http-3A__cloudlib4zvm.readthedocs.io_en_latest_makeimage.html-23configuration-2Dof-2Dactivation-2Dengine-2Dae-2Din-2Dzlinux&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=CRsbKwMgE5rCqLl8MTo6ZnKA4QkxK3NRmDont5BYcqw&s=CVvkU6HtWW7GArGIpFT4fichM0fuTXXrmWD9zyRo9h0&e= > [3] > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_q_status-3Aopen-2Bproject-3Aopenstack_nova-2Bbranch-3Amaster-2Btopic-3Abp_add-2Dzvm-2Ddriver-2Drocky&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=CRsbKwMgE5rCqLl8MTo6ZnKA4QkxK3NRmDont5BYcqw&s=P_DwKtfQWsNNWz9SmTW2xvArTWIzCh2EKPHRqLDkGeg&e= > [4] > https://urldefense.proofpoint.com/v2/url?u=https-3A__review.openstack.org_-23_c_527658_33_nova_virt_zvm_utils.pyline&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=CRsbKwMgE5rCqLl8MTo6ZnKA4QkxK3NRmDont5BYcqw&s=l9eTwoZcQ84k6S2EwQCw3gG8n8g5kLkcFplIMzI1G0I&e= 104 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe https://urldefense.proofpoint.com/v2/url?u=http-3A__lists.openstack.org_cgi-2Dbin_mailman_listinfo_openstack-2Ddev&d=DwIGaQ&c=jf_iaSHvJObTbx-siA1ZOg&r=8sI5aZT88Uetyy_XsOddbPjIiLSGM-sFnua3lLy2Xr0&m=CRsbKwMgE5rCqLl8MTo6ZnKA4QkxK3NRmDont5BYcqw&s=eXxXnVzbsK42dW14x1C23QaY4E-TKbCPBLyX05K_bag&e= -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: graycol.gif Type: image/gif Size: 105 bytes Desc: not available URL: From thomas.morin at orange.com Fri Apr 27 10:44:10 2018 From: thomas.morin at orange.com (thomas.morin at orange.com) Date: Fri, 27 Apr 2018 12:44:10 +0200 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: <20180425164020.jrmlqlmxhpgasuoc@yuggoth.org> References: <1524671093-sup-8304@lrrr.local> <20180425164020.jrmlqlmxhpgasuoc@yuggoth.org> Message-ID: <30566_1524825850_5AE2FEFA_30566_7_1_0b1ec2ae-2ede-5eb8-29f4-79622c5c20d7@orange.com> On 25/04/2018 18:40, Jeremy Stanley wrote: > This came up again a few days ago for sahara-dashboard. We talked > through some obvious alternatives to keep its master branch from > depending on an unreleased state of horizon and the situation today > is that plugin developers have been relying on developing their > releases in parallel with the services. Not merging an entire > development cycle's worth of work until release day (whether that's > by way of a feature branch or by just continually rebasing and > stacking in Gerrit) would be a very painful workflow for them, and > having to wait a full release cycle before they could start > integrating support for new features in the service would be equally > unfortunate. +1 > As for merging the plugin and service repositories, they tend to be > developed by completely disparate teams so that could require a fair > amount of political work to solve. 
Extracting the plugin interface > into a separate library which releases more frequently than the > service does indeed sound like the sanest option, but will also > probably take quite a while for some teams to achieve (I gather > neutron-lib is getting there, but I haven't heard about any work > toward that end in Horizon yet). +1 _________________________________________________________________________________________________________________________ Ce message et ses pieces jointes peuvent contenir des informations confidentielles ou privilegiees et ne doivent donc pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu ce message par erreur, veuillez le signaler a l'expediteur et le detruire ainsi que les pieces jointes. Les messages electroniques etant susceptibles d'alteration, Orange decline toute responsabilite si ce message a ete altere, deforme ou falsifie. Merci. This message and its attachments may contain confidential or privileged information that may be protected by law; they should not be distributed, used or copied without authorisation. If you have received this email in error, please notify the sender and delete this message and its attachments. As emails may be altered, Orange is not liable for messages that have been modified, changed or falsified. Thank you. From thomas.morin at orange.com Fri Apr 27 10:45:20 2018 From: thomas.morin at orange.com (thomas.morin at orange.com) Date: Fri, 27 Apr 2018 12:45:20 +0200 Subject: [openstack-dev] [requirements][horizon][neutron] plugins depending on services In-Reply-To: References: Message-ID: <28189_1524825921_5AE2FF41_28189_157_1_8c18d129-f618-6911-77bc-b54a5a19a73a@orange.com> Hi Monty, Thanks for bringing this up. Having run into the topic for a few combination of deps, I'll certainly agree that we need something better than what we currently have. I don't feel that I've enough perspective on the whole system and practices to give a strong opinion on what we should do though. A few comments... (below) On 25/04/2018 16:40, Monty Taylor wrote: > projects with test requirements on git repo urls of other projects > ------------------------------------------------------------------ > > There are a bunch of projects that need, for testing purposes, to > depend on other projects. The majority are either neutron or horizon > plugins, but conceptually there is nothing neutron or horizon specific > about the issue. The problem they're trying to deal with is that they > are a plugin to a service and they need to be able to import code from > the service they are a plugin to in their unit tests. (using neutron to avoid being too abstract, but this generalizes to other components with plugins) True, but sometimes a change to a neutron plugin may (with or without a need to actually import neutron), need to run against a neutron version from git (because the change has a Depends-On a Neutron change, or because the change depends on something that is in neutron master but not in a release).  We have this when the plugin depends on a new or fixed behavior. While this case can in theory be fixed by moving the code introducing the fixed or new behavior into neutron-lib,  it doesn't mean that this is always feasible (because the work required to move this part of the code to neutron-lib hasn't happened). 
> > > > unwinding things > ---------------- > > There are a few different options, but it's important to keep in mind > that we ultimately want all of the following: > > * The code works > * Tests can run properly in CI > * "Depends-On" works in CI so that you can test changes cross-repo Note that this was true with tools/tox_install.sh, but broke when it was removed for a job such as legacy-networking-bgpvpn-dsvm-functional (see [1]) which does not inherit from zuul tox jobs, but still relies ultimately on zuul to run the test. [1] http://logs.openstack.org/41/558741/11/check/legacy-networking-bgpvpn-dsvm-functional/86a743c/ > * Tests can run properly locally for developers (Broke when tools/tox_install.sh was abandoned, currently causing minor pain to lots of people working on neutron-plugins unless py27-dev hacks are in place in their project) > * Deployment requirements are accurately communicated to deployers Was definitely improved by removing tools/tox_install.sh! > > Specific Suggestions > -------------------- > > As there are a few different scenarios, I want to suggest we do a few > different things. > > * Prefer interface libraries on PyPI that projects depend on > > Like python-openstackclient and osc-lib, this is the *best* approach > for projects with plugins. Such interface libraries need to be able to > do intermediate releases - and those intermediate releases need to not > break the released version of the projects. This is the hardest and > longest thing to do as well, so it's most likely to be a multi-cycle > effort. I would object to "best", for the following reasons: - because this is not the starting point, the effort to librarize code is significant, and it seems a fact that people don't rush to do it - there is a practical drawback of doing that: even for projects that have compatible release cycles, we have the overhead of having to release e.g. neutron-lib with the change before we can consume it in neutron or a neutron plugin (and we have overhead to test the changes as well, with extra jobs to test against master or local .zuul.yaml hacks to force Depends-On to test what we want, e.g. [x]); a situation that would avoid this overhead would, I think, be superior [x] https://review.openstack.org/#/c/557660/ > > * Treat inter-plugin depends as normal library depends > > If networking-bgpvpn depends on networking-bagpipe and networking-odl, > then networking-bagpipe and networking-odl need to be released to PyPI > just like any other library in OpenStack. These are real runtime > dependencies. Just a side note here: networking-bagpipe and networking-odl provide component drivers for their corresponding drivers in networking-bgpvpn; they aren't strict runtime dependencies, but only dependencies for a scenario where their driver is used. Given that, they were moved to test-requirements dependencies recently (only required to run unit tests).
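(As a concrete sketch of what such a test-only git dependency looks like in pip requirement syntax -- illustrative only, not necessarily the exact line any of these repos uses:)

  -e git+https://git.openstack.org/openstack/networking-bagpipe#egg=networking-bagpipe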
The situation for these drivers is a bit in flux though: - ODL: the bgpvpn driver for ODL is a v1 driver that became obsolete; there is a v2 driver sitting entirely in networking-odl - bagpipe: the bgpvpn driver for bagpipe could be moved to networking-bagpipe entirely -- the one reason why it hasn't happened (apart from inertia) is that it is the reference driver for the networking-bgpvpn project, and removing it from networking-bgpvpn would give us a project without any usable driver in tree That said, I'm not sure I agree with: > need to be released to PyPI just like any other library in OpenStack As said above, there is a price to pay in having to do a release before it can be consumed by other projects. And additionally in this case, it would also mean that networking-bagpipe would have to move to a different release model with intermediate releases. I have the feeling that it is legitimate to be able to do joint work on coupled components without having to pay such a price. Perhaps some of the improvements belong in how projects deal with their deps in a saner way, and perhaps some others belong to the realm of improvements needed in the CI tooling. > > * Relax our rules about git repos in test-requirements.txt > > Introduce a whitelist of git repo urls, starting with: > > * https://git.openstack.org/openstack/neutron > * https://git.openstack.org/openstack/horizon > > For the service projects that have plugins that need to test against the > service they're intending to be used with in a real installation. For > those plugin projects, actually put the git urls into > test-requirements.txt. This will make the gate work AND local > development work for the scenarios where the thing that is actually > needed is always testing against tip of a corresponding service. It seems like this would help a lot. I agree with Doug that a separate requirements file will help play nice with the requirements checking tools. > > * In the zuul jobs, add something similar to tox-siblings but before > the initial install that will detect a git url that matches a locally > checked out repo and will swap the local copy instead so that we don't > have tox cloning directly in gate jobs. > > At this point, horizon and neutron plugin projects should be able to > use normal tox jobs WITHOUT needing to list anything other than > horizon and neutron themselves in required-projects, and they can also > add project-specific -tips jobs that will add intra-plugin depends to > their required-projects so that they can test both sides of the coin. > > Finally, and this is a thing we need broadly for OpenStack and not > just neutron/horizon plugins: > > * Extract the tox-siblings logic into a standalone tool that can be > installed and used from tox so that it's possible to replicate a -tips > job locally. I've got this pretty much done and just need to get it > finished up. As soon as it exists I'll update python-openstackclient's > tox.ini file to use it - and people can cargo cult from there and/or > we can work it up into a documented recipe for people. Yes, ideally this would be transparent to users, not requiring the use of a different job name, and not being tied to zuul-v3 job definitions (in particular not something working only for jobs inheriting from the tox job template).
Best, -Thomas From balazs.gibizer at ericsson.com Fri Apr 27 11:06:38 2018 From: balazs.gibizer at ericsson.com (Balázs Gibizer) Date: Fri, 27 Apr 2018 13:06:38 +0200 Subject: [openstack-dev] [nova] Next notification subteam meeting is cancelled Message-ID: <1524827198.30869.0@smtp.office365.com> Hi, I have to cancel the next notification subteam meeting as it happens to be on the 1st of May, which is an (inter)national holiday. So the next meeting is expected to be held on the 8th of May. Cheers, gibi From anusha.iiitm at gmail.com Fri Apr 27 12:24:55 2018 From: anusha.iiitm at gmail.com (Anusha Ramineni) Date: Fri, 27 Apr 2018 17:54:55 +0530 Subject: [openstack-dev] [valence] Valence 0.9.0 Release Message-ID: Hi, The Valence team is happy to announce the initial release of Valence 0.9.0 to PyPI. Please find the details below. Valence PyPI url: https://pypi.org/project/valence/ Documentation and Release Notes for the release can be found at: Release Notes: http://valence.readthedocs.io/en/latest/releasenotes/valence-0.9.html Documentation: http://valence.readthedocs.io/en/latest/ Thanks, Anusha -------------- next part -------------- An HTML attachment was scrubbed... URL: From mcdkr at yandex.ru Fri Apr 27 12:29:28 2018 From: mcdkr at yandex.ru (Vitalii Solodilov) Date: Fri, 27 Apr 2018 15:29:28 +0300 Subject: [openstack-dev] [mistral] timeout and retry In-Reply-To: <2e68db48-f75f-43c4-a7d1-ee265e26c27b@Spark> References: <3369991524520817@web43g.yandex.ru> <0c051cf2-f702-4bd2-8882-019ceefd2778@Spark> <6330681524808385@web48g.yandex.ru> <2e68db48-f75f-43c4-a7d1-ee265e26c27b@Spark> Message-ID: <9020451524832168@web18j.yandex.ru> An HTML attachment was scrubbed... URL: From corey.bryant at canonical.com Fri Apr 27 12:43:58 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 27 Apr 2018 08:43:58 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS Message-ID: Hi All, With yesterday’s release of Ubuntu 18.04 LTS (the Bionic Beaver), the Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Queens on Ubuntu 18.04 LTS. This release of Ubuntu is a Long Term Support release that will be supported for 5 years. Further details for the Ubuntu 18.04 release can be found at: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes. And further details for the OpenStack Queens release can be found at: https://www.openstack.org/software/queens.
Installing on Ubuntu 18.04 LTS ------------------------------ No extra steps are required; just start installing OpenStack! Installing on Ubuntu 16.04 LTS ------------------------------ If you’re interested in OpenStack Queens on Ubuntu 16.04, please refer to http://lists.openstack.org/pipermail/openstack-dev/2018-March/127851.html, which coincided with the upstream OpenStack Queens release. Packages -------- The 18.04 archive includes updates for: aodh, barbican, ceilometer, ceph (12.2.4), cinder, congress, designate, designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar. For a full list of packages and versions, please refer to [0]. Branch Package Builds --------------------- If you want to try out the latest updates to stable branches, we are delivering continuously integrated packages on each upstream commit in the following PPA’s: sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata sudo add-apt-repository ppa:openstack-ubuntu-testing/pike sudo add-apt-repository ppa:openstack-ubuntu-testing/queens bear in mind these are built per-commitish (30 min checks for new commits at the moment) so ymmv from time-to-time. Reporting bugs -------------- If you run into any issues, please report bugs using the ‘ubuntu-bug’ tool: sudo ubuntu-bug nova-conductor This will ensure that bugs get logged in the right place in Launchpad. Thank you to all who contributed to OpenStack Queens and Ubuntu Bionic both upstream and in Debian/Ubuntu packaging! Regards, Corey (on behalf of the Ubuntu OpenStack team) [0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/queens_versions.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Fri Apr 27 13:03:59 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 27 Apr 2018 09:03:59 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS In-Reply-To: References: Message-ID: Hi Corey, What are the requirements to include OpenStack Zun in the Ubuntu packages? We have a comprehensive installation guide [1] that is used by a lot of users when installing Zun. However, the lack of Ubuntu packages is inconvenient for our users. How can the Zun team help with adding Zun to Ubuntu? [1] https://docs.openstack.org/zun/latest/install/index.html Best regards, Hongbin On Fri, Apr 27, 2018 at 8:43 AM, Corey Bryant wrote: > Hi All, > > With yesterday’s release of Ubuntu 18.04 LTS (the Bionic Beaver), the > Ubuntu OpenStack team at Canonical is pleased to announce the general > availability of OpenStack Queens on Ubuntu 18.04 LTS. This release of > Ubuntu is a Long Term Support release that will be supported for 5 years. > > Further details for the Ubuntu 18.04 release can be found at: > https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes.
> > And further details for the OpenStack Queens release can be found at: > https://www.openstack.org/software/queens. > > Installing on Ubuntu 18.04 LTS > ------------------------------ > No extra steps are required required; just start installing OpenStack! > > Installing on Ubuntu 16.04 LTS > ------------------------------ > If you’re interested in OpenStack Queens on Ubuntu 16.04, please refer to > http://lists.openstack.org/pipermail/openstack-dev/2018-March/127851.html, > which coincided with the upstream OpenStack Queens release. > > Packages > -------- > The 18.04 archive includes updates for: > > aodh, barbican, ceilometer, ceph (12.2.4), cinder, congress, designate, > designate-dashboard, dpdk (17.11), glance, glusterfs (3.13.2), gnocchi, > heat, heat-dashboard, horizon, ironic, keystone, libvirt (4.0.0), magnum, > manila, manila-ui, mistral, murano, murano-dashboard, networking-bagpipe, > networking-bgpvpn, networking-hyperv, networking-l2gw, networking-odl, > networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, > neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-taas, > neutron-vpnaas, nova, nova-lxd, openstack-trove, openvswitch (2.9.0), > panko, qemu (2.11), rabbitmq-server (3.6.10), sahara, sahara-dashboard, > senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar. > > For a full list of packages and versions, please refer to [0]. > > Branch Package Builds > --------------------- > If you want to try out the latest updates to stable branches, we are > delivering continuously integrated packages on each upstream commit in the > following PPA’s: > > sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka > sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata > sudo add-apt-repository ppa:openstack-ubuntu-testing/pike > sudo add-apt-repository ppa:openstack-ubuntu-testing/queens > > bear in mind these are built per-commitish (30 min checks for new commits > at the moment) so ymmv from time-to-time. > > Reporting bugs > -------------- > If you run into any issues please report bugs using the ‘ubuntu-bug’ tool: > > sudo ubuntu-bug nova-conductor > > this will ensure that bugs get logged in the right place in Launchpad. > > Thank you to all who contributed to OpenStack Queens and Ubuntu Bionic > both upstream and in Debian/Ubuntu packaging! > > Regards, > Corey > (on behalf of the Ubuntu OpenStack team) > > [0] http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud- > archive/queens_versions.html > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cdent+os at anticdent.org Fri Apr 27 13:11:53 2018 From: cdent+os at anticdent.org (Chris Dent) Date: Fri, 27 Apr 2018 14:11:53 +0100 (BST) Subject: [openstack-dev] [nova] [placement] placement update 18-17 Message-ID: Welcome to placement update 18-17. This is an expand update, meaning I've gone searching for new stuff to add to the lists. In other news: I'll be on holiday next week so there won't be one of these next week, unless somebody else wants to do one. # Most Important A great deal of stuff is reliant on nested providers in allocation candidates, so moving it forward is the most important. Next in line are granular resource requests and consumer generations. 
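(For anyone new to this report, the request that all of the above feeds is the one below -- a rough sketch with a made-up microversion and environment variables, just to show the shape of the API:)

  curl -s \
    -H "X-Auth-Token: $TOKEN" \
    -H "OpenStack-API-Version: placement 1.17" \
    "$PLACEMENT/allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:10"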
# What's Changed A race condition in synchronizing os-traits has been corrected, by doing the sync in an independent transaction. Code that handles the "local-delete" situation and cleans up allocations has been merged. # Bugs * Placement related bugs not yet in progress: https://goo.gl/TgiPXb 17, +1 on last week * In progress placement bugs: https://goo.gl/vzGGDQ 8, -4 (woot!) on last week # Specs Total last week: 12. Now: 11 (because one was abandoned) * https://review.openstack.org/#/c/549067/ VMware: place instances on resource pool (using update_provider_tree) * https://review.openstack.org/#/c/552924/ Proposes NUMA topology with RPs * https://review.openstack.org/#/c/544683/ Account for host agg allocation ratio in placement * https://review.openstack.org/#/c/552105/ Support default allocation ratios * https://review.openstack.org/#/c/438640/ Spec on preemptible servers * https://review.openstack.org/#/c/557065/ Proposes Multiple GPU types * https://review.openstack.org/#/c/555081/ Standardize CPU resource tracking * https://review.openstack.org/#/c/502306/ Network bandwidth resource provider * https://review.openstack.org/#/c/509042/ Propose counting quota usage from placement * https://review.openstack.org/#/c/560174/ Add history behind nullable project_id and user_id * https://review.openstack.org/#/c/559466/ Return resources of entire trees in Placement # Main Themes ## Nested providers in allocation candidates Representing nested provides in the response to GET /allocation_candidates is required to actually make use of all the topology that update provider tree will report. That work is in progress at: https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates ## Mirror nova host aggregates to placement This makes it so some kinds of aggregate filtering can be done "placement side" by mirroring nova host aggregates into placement aggregates. https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates This is still in progress but took a little attention break while nested provider discussions took up (and destroyed) brains. ## Consumer Generations This allows multiple agents to "safely" update allocations for a single consumer. The code is in progress: https://review.openstack.org/#/q/topic:bp/add-consumer-generation This is moving along, but is encountering some debate over how best to represent the data and flexibly deal with the at least 3 different ways we need to manage consumer information. ## Granular Ways and means of addressing granular requests when dealing with nested resource providers. Granular in this sense is grouping resource classes and traits together in their own lumps as required. Topic is: https://review.openstack.org/#/q/topic:bp/granular-resource-requests # Extraction I've created patches that adjust devstack and zuul config to use the separate placement database connection. devstack: https://review.openstack.org/#/c/564180/ zuul: https://review.openstack.org/#/c/564067/ db connection: https://review.openstack.org/#/c/362766/ All of these things could merge without requiring any action by anybody. Instead they allow people to use different connections, but don't require it. Jay has made a first pass at an os-resource-classes: https://github.com/jaypipes/os-resource-classes/ which I thought was potentially more heavyweight than required, but other people should have a look too. 
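(If you want to try the optional separate connection mentioned above, it amounts to a nova.conf stanza along these lines -- a sketch with made-up credentials; without it, placement keeps using the nova_api database as before:)

  [placement_database]
  connection = mysql+pymysql://placement:secret@127.0.0.1/placement?charset=utf8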
The other main issue in extraction is that the placement unit and functional tests have a lot of dependence on the fixtures and base classes used in the nova unit and functional tests. For the time being that is okay, but it would be useful to start unwinding that, soon. Same will be true for config. # Other 14 entries last week, 4 of those have merged but we've added some to bring the total to: 17. * https://review.openstack.org/#/c/546660/ Purge comp_node and res_prvdr records during deletion of cells/hosts * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky A huge pile of improvements to osc-placement * https://review.openstack.org/#/c/524425/ General policy sample file for placement * https://review.openstack.org/#/c/527791/ Get resource provider by uuid or name (osc-placement) * https://review.openstack.org/#/c/477478/ placement: Make API history doc more consistent * https://review.openstack.org/#/c/556669/ Handle agg generation conflict in report client * https://review.openstack.org/#/c/537614/ Add unit test for non-placement resize * https://review.openstack.org/#/c/493865/ cover migration cases with functional tests * https://review.openstack.org/#/q/topic:bug/1732731 Bug fixes for sharing resource providers * https://review.openstack.org/#/c/517757/ WIP at granular in allocation candidates * https://review.openstack.org/#/c/561315/ support multiple member_of qparams * https://review.openstack.org/#/q/topic:bug/1763907 member_of with shared providers fixes * https://review.openstack.org/#/q/topic:bp/placement-return-all-resources return resources of entire trees in placement * https://review.openstack.org/#/q/topic:placement-test-base refactor base functional test for allocation candidates and resource providers * https://review.openstack.org/#/q/topic:libvirt-report-local-disk-only-if-no-sharing sharing disk in libvirt * https://review.openstack.org/#/c/535517/ Move refresh time from report client to prov tree * https://review.openstack.org/#/c/561770/ PCPU resource class # End I've inevitably forgotten something in here. Please follow up with anything you think should be added. Also note that there's loads of stuff in here that is exactly the same as last week. One of the main reasons for producing this report is to ensure that stuff gets attention in at least something of a linear fashion. -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent From corey.bryant at canonical.com Fri Apr 27 13:30:15 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 27 Apr 2018 09:30:15 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS In-Reply-To: References: Message-ID: On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote: > Hi Corey, > > What are the requirements to include OpenStack Zun in the Ubuntu > packages? We have a comprehensive installation guide [1] that is used by > a lot of users when installing Zun. However, the lack of > Ubuntu packages is inconvenient for our users. How can the Zun team help > with adding Zun to Ubuntu? > > [1] https://docs.openstack.org/zun/latest/install/index.html > > Best regards, > Hongbin > Hi Hongbin, If we were to get working packages from the community and commitment to test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud archive. Thanks, Corey __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Fri Apr 27 13:40:11 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 27 Apr 2018 15:40:11 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: <038C1AD1-9D57-4A31-9848-C0A4E7121DDF@cern.ch> References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> <038C1AD1-9D57-4A31-9848-C0A4E7121DDF@cern.ch> Message-ID: <9322dbcc-d041-e3bc-c97b-e0699334da3c@redhat.com> Hi Tim, On 04/26/2018 07:16 PM, Tim Bell wrote: > My worry with changing the default is that it would be like adding the following in /etc/environment, > > alias ls=' rm -rf / --no-preserve-root' > > i.e. an operation which was previously read-only now becomes irreversible. Well, deleting instances has never been read-only :) The problem really is that Heat can delete instances during seemingly innocent operations. And I do agree that we cannot just ignore this problem. > > We also have current use cases with Ironic where we are moving machines between projects by 'disowning' them to the spare pool and then reclaiming them (by UUID) into new projects with the same state. I'd be curious to hear how exactly this works. Does it work on the Nova level or on the Ironic level? > > However, other operators may feel differently which is why I suggest asking what people feel about changing the default. > > In any case, changes in default behaviour need to be highly visible. > > Tim > > -----Original Message----- > From: "Arkady.Kanevsky at dell.com" > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 26 April 2018 at 18:48 > To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? > > +1. > It would be good to also identify the use cases. > Surprised that a node should be cleaned up automatically. > I would expect that we want it to be a deliberate request from the administrator. > Maybe by the user when they "return" a node to the free pool after baremetal usage.
URL: From dtantsur at redhat.com Fri Apr 27 13:40:11 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 27 Apr 2018 15:40:11 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: <038C1AD1-9D57-4A31-9848-C0A4E7121DDF@cern.ch> References: <2ebe539b-ec35-17c7-8207-126bf6a0b8f2@redhat.com> <83251d36-81fe-476b-4196-5d44de375e41@nemebean.com> <038C1AD1-9D57-4A31-9848-C0A4E7121DDF@cern.ch> Message-ID: <9322dbcc-d041-e3bc-c97b-e0699334da3c@redhat.com> Hi Tim, On 04/26/2018 07:16 PM, Tim Bell wrote: > My worry with changing the default is that it would be like adding the following in /etc/environment, > > alias ls=' rm -rf / --no-preserve-root' > > i.e. an operation which was previously read-only now becomes irreversible. Well, deleting instances has never been read-only :) The problem really is that Heat can delete instances during a seemingly innocent operations. And I do agree that we cannot just ignore this problem. > > We also have current use cases with Ironic where we are moving machines between projects by 'disowning' them to the spare pool and then reclaiming them (by UUID) into new projects with the same state. I'd be curious to hear how exactly it works. Does it work on Nova level or on Ironic level? > > However, other operators may feel differently which is why I suggest asking what people feel about changing the default. > > In any case, changes in default behaviour need to be highly visible. > > Tim > > -----Original Message----- > From: "Arkady.Kanevsky at dell.com" > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 26 April 2018 at 18:48 > To: "openstack-dev at lists.openstack.org" > Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? > > +1. > It would be good to also identify the use cases. > Surprised that node should be cleaned up automatically. > I would expect that we want it to be a deliberate request from administrator to do. > Maybe user when they "return" a node to free pool after baremetal usage. > Thanks, > Arkady > > -----Original Message----- > From: Tim Bell [mailto:Tim.Bell at cern.ch] > Sent: Thursday, April 26, 2018 11:17 AM > To: OpenStack Development Mailing List (not for usage questions) > Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? > > How about asking the operators at the summit Forum or asking on openstack-operators to see what the users think? > > Tim > > -----Original Message----- > From: Ben Nemec > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Thursday, 26 April 2018 at 17:39 > To: "OpenStack Development Mailing List (not for usage questions)" , Dmitry Tantsur > Subject: Re: [openstack-dev] [tripleo] ironic automated cleaning by default? > > > > On 04/26/2018 09:24 AM, Dmitry Tantsur wrote: > > Answering to both James and Ben inline. > > > > On 04/25/2018 05:47 PM, Ben Nemec wrote: > >> > >> > >> On 04/25/2018 10:28 AM, James Slagle wrote: > >>> On Wed, Apr 25, 2018 at 10:55 AM, Dmitry Tantsur > >>> wrote: > >>>> On 04/25/2018 04:26 PM, James Slagle wrote: > >>>>> > >>>>> On Wed, Apr 25, 2018 at 9:14 AM, Dmitry Tantsur > >>>>> wrote: > >>>>>> > >>>>>> Hi all, > >>>>>> > >>>>>> I'd like to restart conversation on enabling node automated > >>>>>> cleaning by > >>>>>> default for the undercloud. This process wipes partitioning tables > >>>>>> (optionally, all the data) from overcloud nodes each time they > >>>>>> move to > >>>>>> "available" state (i.e. 
on initial enrolling and after each tear > >>>>>> down). > >>>>>> > >>>>>> We have had it disabled for a few reasons: > >>>>>> - it was not possible to skip time-consuming wiping if data from > >>>>>> disks > >>>>>> - the way our workflows used to work required going between > >>>>>> manageable > >>>>>> and > >>>>>> available steps several times > >>>>>> > >>>>>> However, having cleaning disabled has several issues: > >>>>>> - a configdrive left from a previous deployment may confuse > >>>>>> cloud-init > >>>>>> - a bootable partition left from a previous deployment may take > >>>>>> precedence > >>>>>> in some BIOS > >>>>>> - an UEFI boot partition left from a previous deployment is likely to > >>>>>> confuse UEFI firmware > >>>>>> - apparently ceph does not work correctly without cleaning (I'll > >>>>>> defer to > >>>>>> the storage team to comment) > >>>>>> > >>>>>> For these reasons we don't recommend having cleaning disabled, and I > >>>>>> propose > >>>>>> to re-enable it. > >>>>>> > >>>>>> It has the following drawbacks: > >>>>>> - The default workflow will require another node boot, thus becoming > >>>>>> several > >>>>>> minutes longer (incl. the CI) > >>>>>> - It will no longer be possible to easily restore a deleted overcloud > >>>>>> node. > >>>>> > >>>>> > >>>>> I'm trending towards -1, for these exact reasons you list as > >>>>> drawbacks. There has been no shortage of occurrences of users who have > >>>>> ended up with accidentally deleted overclouds. These are usually > >>>>> caused by user error or unintended/unpredictable Heat operations. > >>>>> Until we have a way to guarantee that Heat will never delete a node, > >>>>> or Heat is entirely out of the picture for Ironic provisioning, then > >>>>> I'd prefer that we didn't enable automated cleaning by default. > >>>>> > >>>>> I believe we had done something with policy.json at one time to > >>>>> prevent node delete, but I don't recall if that protected from both > >>>>> user initiated actions and Heat actions. And even that was not enabled > >>>>> by default. > >>>>> > >>>>> IMO, we need to keep "safe" defaults. Even if it means manually > >>>>> documenting that you should clean to prevent the issues you point out > >>>>> above. The alternative is to have no way to recover deleted nodes by > >>>>> default. > >>>> > >>>> > >>>> Well, it's not clear what is "safe" here: protect people who explicitly > >>>> delete their stacks or protect people who don't realize that a previous > >>>> deployment may screw up their new one in a subtle way. > >>> > >>> The latter you can recover from, the former you can't if automated > >>> cleaning is true. > > > > Nor can we recover from 'rm -rf / --no-preserve-root', but it's not a > > reason to disable the 'rm' command :) > > > >>> > >>> It's not just about people who explicitly delete their stacks (whether > >>> intentional or not). There could be user error (non-explicit) or > >>> side-effects triggered by Heat that could cause nodes to get deleted. > > > > If we have problems with Heat, we should fix Heat or stop using it. What > > you're saying is essentially "we prevent ironic from doing the right > > thing because we're using a tool that can invoke 'rm -rf /' at a wrong > > moment." > > > >>> > >>> You couldn't recover from those scenarios if automated cleaning were > >>> true. Whereas you could always fix a deployment error by opting in to > >>> do an automated clean. Does Ironic keep track of it a node has been > >>> previously cleaned? 
Could we add a validation to check whether any > >>> nodes might be used in the deployment that were not previously > >>> cleaned? > > > > It's may be possible possible to figure out if a node was ever cleaned. > > But then we'll force operators to invoke cleaning manually, right? It > > will work, but that's another step on the default workflow. Are you okay > > with it? > > > >> > >> Is there a way to only do cleaning right before a node is deployed? > >> If you're about to write a new image to the disk then any data there > >> is forfeit anyway. Since the concern is old data on the disk messing > >> up subsequent deploys, it doesn't really matter whether you clean it > >> right after it's deleted or right before it's deployed, but the latter > >> leaves the data intact for longer in case a mistake was made. > >> > >> If that's not possible then consider this an RFE. :-) > > > > It's a good idea, but it may cause problems with rebuilding instances. > > Rebuild is essentially a re-deploy of the OS, users may not expect the > > whole disk to be wiped.. > > > > Also it's unclear whether we want to write additional features to work > > around disabled cleaning. > > No matter how good the tooling gets, user error will always be a thing. > Someone will scale down the wrong node or something similar. I think > there's value to allowing recovery from mistakes. We all make them. :-) > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From dtantsur at redhat.com Fri Apr 27 13:43:34 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Fri, 27 Apr 2018 15:43:34 +0200 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: Okay, it seems like the idea was not well received, but I do have some action items out of the discussion (thanks all!): 1. Simplify running cleaning per node. I've proposed patches [0] to add a new command (documentation to follow) to do it. 2. Consider running metadata cleaning during deployment in Ironic. This is a bit difficult right now, but will simplify substantially after the deploy steps work. Any other ideas? I would like to run at least one TripleO CI job with cleaning enabled. Any objections to that? If not, what would be the best job (it has to run ironic, obviously)? 
[0] https://review.openstack.org/#/q/topic:cleaning+status:open On 04/25/2018 03:14 PM, Dmitry Tantsur wrote: > Hi all, > > I'd like to restart conversation on enabling node automated cleaning by default > for the undercloud. This process wipes partitioning tables (optionally, all the > data) from overcloud nodes each time they move to "available" state (i.e. on > initial enrolling and after each tear down). > > We have had it disabled for a few reasons: > - it was not possible to skip time-consuming wiping if data from disks > - the way our workflows used to work required going between manageable and > available steps several times > > However, having cleaning disabled has several issues: > - a configdrive left from a previous deployment may confuse cloud-init > - a bootable partition left from a previous deployment may take precedence in > some BIOS > - an UEFI boot partition left from a previous deployment is likely to confuse > UEFI firmware > - apparently ceph does not work correctly without cleaning (I'll defer to the > storage team to comment) > > For these reasons we don't recommend having cleaning disabled, and I propose to > re-enable it. > > It has the following drawbacks: > - The default workflow will require another node boot, thus becoming several > minutes longer (incl. the CI) > - It will no longer be possible to easily restore a deleted overcloud node. > > What do you think? If I don't hear principal objections, I'll prepare a patch in > the coming days. > > Dmitry From doug at doughellmann.com Fri Apr 27 14:02:08 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 27 Apr 2018 10:02:08 -0400 Subject: [openstack-dev] [all][reno] issue with reno 2.9.0 and duplicate anchors Message-ID: <1524837645-sup-1751@lrrr.local> The latest release of reno tries to add anchors to the page in a way that ensures they are named consistently across builds. For projects with the same version number in multiple series (which can happen for non-milestone projects that haven't tagged for rocky yet), this causes duplicate anchors and causes the release notes build to fail. There is a fix for this in https://review.openstack.org/564763 and we will try to get a new release of reno out as soon as that patch merges. Doug From hongbin034 at gmail.com Fri Apr 27 14:20:16 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Fri, 27 Apr 2018 10:20:16 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS In-Reply-To: References: Message-ID: Corey, Thanks for the information. Would you clarify what is "working packages from the community"? Best regards, Hongbin On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant wrote: > > > On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote: > >> Hi Corey, >> >> What are the requirements to include OpenStack Zun into the Ubuntu >> packages? We have a comprehensive installation guide [1] that are using by >> a lot of users when they were installing Zun. However, the missing of >> Ubuntu packages is inconvenient for our users. What the Zun team can help >> for adding Zun to Ubuntu. >> >> [1] https://docs.openstack.org/zun/latest/install/index.html >> >> Best regards, >> Hongbin >> > > Hi Hongbin, > > If we were to get working packages from the community and commitment to > test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud > achive. 
> > Thanks, > Corey > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Apr 27 14:23:29 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 27 Apr 2018 07:23:29 -0700 Subject: [openstack-dev] [tripleo] ironic automated cleaning by default? In-Reply-To: References: Message-ID: On Fri, Apr 27, 2018 at 6:43 AM, Dmitry Tantsur wrote: [...] I would like to run at least one TripleO CI job with cleaning enabled. Any > objections to that? If not, what would be the best job (it has to run > ironic, obviously)? > > [0] https://review.openstack.org/#/q/topic:cleaning+status:open We "only" have 2 jobs in the (third party) gate: fs001 and fs035. Both are testing the same thing the last time I checked, except fs035 is ipv6. I would pick one of them and just do it. I'll let the CI team comment on that. -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Fri Apr 27 14:56:40 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Fri, 27 Apr 2018 09:56:40 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> References: <045e01d3de06$870220b0$95066210$@homeatcloud.cz> Message-ID: <822c0915-d999-f75f-9632-5fab7d57e4f1@gmail.com> On 4/27/2018 4:02 AM, Tomáš Vondra wrote: > Also, Windows host isolation is done using image metadata. I have filed > a bug somewhere that it does not work correctly with Boot from Volume. Likely because for boot from volume the instance.image_id is ''. The request spec, which the filter has access to, also likely doesn't have the backing image metadata for the volume because the instance isn't created with an image directly. But nova could fetch the image metadata from the volume and put that into the request spec. We fixed a similar bug recently for the IsolatedHostsFilter: https://review.openstack.org/#/c/543263/ If you can find the bug, or report a new one, I could take a look. -- Thanks, Matt

From corey.bryant at canonical.com Fri Apr 27 14:57:43 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 27 Apr 2018 10:57:43 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS In-Reply-To: References: Message-ID: On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu wrote: > Corey, > > Thanks for the information. Would you clarify what is "working packages > from the community"? > > Best regards, > Hongbin > Sorry I guess that comment is probably a bit vague. The OpenStack packages are open source like many other projects. They're Apache 2 licensed and we gladly accept contributions. :) This is a good starting point for working with the Ubuntu OpenStack packages: https://wiki.ubuntu.com/OpenStack/CorePackages If you or someone else were to provide package sources for zun that DTRT to create binary packages, and if they can test them, then I'd be happy to review/sponsor the Ubuntu and cloud-archive uploads.
Thanks, Corey > > On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant > wrote: > >> >> >> On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu wrote: >> >>> Hi Corey, >>> >>> What are the requirements to include OpenStack Zun into the Ubuntu >>> packages? We have a comprehensive installation guide [1] that are using by >>> a lot of users when they were installing Zun. However, the missing of >>> Ubuntu packages is inconvenient for our users. What the Zun team can help >>> for adding Zun to Ubuntu. >>> >>> [1] https://docs.openstack.org/zun/latest/install/index.html >>> >>> Best regards, >>> Hongbin >>> >> >> Hi Hongbin, >> >> If we were to get working packages from the community and commitment to >> test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud >> achive. >> >> Thanks, >> Corey >> >> ____________________________________________________________ >> ______________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscrib >> e >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Fri Apr 27 15:04:24 2018 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Fri, 27 Apr 2018 11:04:24 -0400 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz wrote: > Hi all, > > A CI issue [1] caused by tempest thinking some filters are enabled > when they're really not, and a proposed patch [2] to add > (Same|Different)HostFilter to the default filters as a workaround, has > led to a discussion about what filters should be enabled by default in > nova. > > The default filters should make sense for a majority of real world > deployments. Adding some filters to the defaults because CI needs them > is faulty logic, because the needs of CI are different to the needs of > operators/users, and the latter takes priority (though it's my > understanding that a good chunk of operators run tempest on their > clouds post-deployment as a way to validate that the cloud is working > properly, so maybe CI's and users' needs aren't that different after > all). > > To that end, we'd like to know what filters operators are enabling in > their deployment. If you can, please reply to this email with your > [filter_scheduler]/enabled_filters (or > [DEFAULT]/scheduler_default_filters if you're using an older version) > option from nova.conf. Any other comments are welcome as well :) > At Oath: AggregateImagePropertiesIsolation ComputeFilter CoreFilter DifferentHostFilter SameHostFilter ServerGroupAntiAffinityFilter ServerGroupAffinityFilter AvailabilityZoneFilter AggregateInstanceExtraSpecsFilter // jim -------------- next part -------------- An HTML attachment was scrubbed... 
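For a concrete picture of what these survey replies map to, a filter list like the one above corresponds to a nova.conf block along these lines (the filters shown simply mirror that reply and are an example, not a recommendation; older releases spell the option [DEFAULT]/scheduler_default_filters, as Artom notes):

    [filter_scheduler]
    enabled_filters = AggregateImagePropertiesIsolation,ComputeFilter,CoreFilter,DifferentHostFilter,SameHostFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter,AvailabilityZoneFilter,AggregateInstanceExtraSpecsFilter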
URL: From colleen at gazlene.net Fri Apr 27 15:12:15 2018 From: colleen at gazlene.net (Colleen Murphy) Date: Fri, 27 Apr 2018 17:12:15 +0200 Subject: [openstack-dev] [keystone] Keystone Team Update - Week of 23 April 2018 Message-ID: <1524841935.3083269.1353008184.19E6DEBA@webmail.messagingengine.com> # Keystone Team Update - Week of 23 April 2018 ## News ### scope_types in nova We've had some good discussions incorporating scope_types into nova [0]. Thanks to mriedem and jaypipes for helping out. The discussion flushed out some work needed in keystonemiddleware [1] and oslo.context [2], making the interaction between those components more clear and easier for other services to use system-scoped tokens. Jay's comments/questions are probably going to be asked by other people working on incorporating these changes into their service. If that pertains to you, please see those reviews. [0] https://review.openstack.org/#/c/553613/ [1] https://review.openstack.org/#/c/564072/ [2] https://review.openstack.org/#/c/530509/ ### Milestone 1 retrospective We had our first team retrospective of the cycle after the meeting on Tuesday. We captured our thoughts on a Trello board[3]. [3] https://trello.com/b/PiJecAs4/keystone-rocky-m1-retrospective ### Forum schedule All of the topics we submitted for the Vancouver forum were accepted[4][5][6]. [4] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21761/default-roles [5] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21762/keystone-feedback-session [6] https://www.openstack.org/summit/vancouver-2018/summit-schedule/events/21760/unified-limits ## Open Specs Search query: https://goo.gl/eyTktx We still have four open keystone specs as well as our cross-project spec on default roles[7]. At our milestone retrospective we talked about possibly dropping some of the lower priority specs from the roadmap for this cycle. [7] https://review.openstack.org/#/c/523973/ ## Recently Merged Changes Search query: https://goo.gl/hdD9Kw We merged 14 changes this week. ## Changes that need Attention Search query: https://goo.gl/tW5PiH There are 62 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ## Bugs Report: https://gist.github.com/lbragstad/80862a9111ff821af07e43e217c52190 This week we opened 6 new bugs and closed 2. ## Milestone Outlook https://releases.openstack.org/rocky/schedule.html We're about six weeks away from spec freeze. Feature proposal freeze is just two weeks after that. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter From mnaser at vexxhost.com Fri Apr 27 15:14:58 2018 From: mnaser at vexxhost.com (Mohammed Naser) Date: Fri, 27 Apr 2018 11:14:58 -0400 Subject: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core Message-ID: Hi everyone, I'm proposing that we add Tobias Urdin to the core Puppet OpenStack team as they've been putting great reviews over the past few months and they have directly contributed in resolving all the Ubuntu deployment issues and helped us bring Ubuntu support back and make the jobs voting again. 
Thank you, Mohammed From iurygregory at gmail.com Fri Apr 27 15:21:23 2018 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 27 Apr 2018 15:21:23 +0000 Subject: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core In-Reply-To: References: Message-ID: +1 On Fri, Apr 27, 2018, 12:15 Mohammed Naser wrote: > Hi everyone, > > I'm proposing that we add Tobias Urdin to the core Puppet OpenStack > team as they've been putting great reviews over the past few months > and they have directly contributed in resolving all the Ubuntu > deployment issues and helped us bring Ubuntu support back and make the > jobs voting again. > > Thank you, > Mohammed > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias.urdin at crystone.com Fri Apr 27 15:23:51 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Fri, 27 Apr 2018 15:23:51 +0000 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS References: Message-ID: <2d5b9f1e66664cf6a1333b7837ffb189@mb01.staff.ognet.se> Hello, I was very interested in packaging Zun for Ubuntu however I did not have the time to properly get started. I was able to package kuryr-lib, I've uploaded it here for now https://github.com/tobias-urdin/deb-kuryr-lib Would love to see both Zun and Qinling in Ubuntu to get a good grip on the container world :) Best regards On 04/27/2018 04:59 PM, Corey Bryant wrote: On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu > wrote: Corey, Thanks for the information. Would you clarify what is "working packages from the community"? Best regards, Hongbin Sorry I guess that comment is probably a bit vague. The OpenStack packages are open source like many other projects. They're Apache 2 licensed and we gladly accept contributions. :) This is a good starting point for working with the Ubuntu OpenStack packages: https://wiki.ubuntu.com/OpenStack/CorePackages If you or someone else were to provide package sources for zun that DTRT to create binary packages, and if they can test them, then I'd be happy to review/sponsor the Ubuntu and cloud-archive uploads. Thanks, Corey On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant > wrote: On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu > wrote: Hi Corey, What are the requirements to include OpenStack Zun into the Ubuntu packages? We have a comprehensive installation guide [1] that are using by a lot of users when they were installing Zun. However, the missing of Ubuntu packages is inconvenient for our users. What the Zun team can help for adding Zun to Ubuntu. [1] https://docs.openstack.org/zun/latest/install/index.html Best regards, Hongbin Hi Hongbin, If we were to get working packages from the community and commitment to test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud achive. 
Thanks, Corey __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmsimard at redhat.com Fri Apr 27 15:41:19 2018 From: dmsimard at redhat.com (David Moreau Simard) Date: Fri, 27 Apr 2018 11:41:19 -0400 Subject: [openstack-dev] [all] Recent failures to use ARA or generate reports in the gate Message-ID: Hi, I was made aware today that new installations of ARA were not working or were failing to generate reports in a variety of gate jobs with a stack trace that ends with: AttributeError: 'Blueprint' object has no attribute 'json_encoder' The root cause was identified to be a new release of Flask, 0.12.3, which shipped broken packages to PyPI [1]. This should be fixed momentarily once upstream ships a fixed 0.12.4 package. In the meantime, we're going to merge a requirements.txt update to blacklist 0.12.3, but it won't be effective until we cut a new release of ARA, which we hope to be able to do sometime next week. I'll take the opportunity to remind users of ARA that we're transitioning away from statically generated reports [3] and you should do that too if you haven't already. [1]: https://github.com/pallets/flask/issues/2728 [2]: https://github.com/openstack/requirements/blob/a5537a6f4b9cc477067949e1f9136415ac216f21/upper-constraints.txt#L480 [3]: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128902.html David Moreau Simard Senior Software Engineer | OpenStack RDO dmsimard = [irc, github, twitter]

From richwellum at gmail.com Fri Apr 27 15:49:16 2018 From: richwellum at gmail.com (Richard Wellum) Date: Fri, 27 Apr 2018 15:49:16 +0000 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: +1 On Fri, Apr 27, 2018 at 2:07 AM Paul Bourke wrote: > +1, always great working with Mark :) > > On 26/04/18 16:31, Jeffrey Zhang wrote: > > Kolla core reviewer team, > > > > It is my pleasure to nominate > > mgoddard for the kolla core team. > > > > Mark has been working both upstream and downstream with kolla and > > kolla-ansible for over two years, building bare metal compute clouds with > > ironic for HPC. He's been involved with OpenStack since 2014. He started > > the kayobe deployment project which complements kolla-ansible. He is > > also the most active non-core contributor for the last 90 days[1] > > > > Consider this nomination a +1 vote from me > > > > A +1 vote indicates you are in favor of > > mgoddard as a candidate, a -1 > > is a veto. Voting is open for 7 days until > > May 4th, or a unanimous > > response is reached or a veto vote occurs.
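Circling back to the ARA/Flask breakage above: the interim blacklist David mentions is plain pip requirement-specifier syntax; a sketch of what such a line looks like (the exact version bounds in ARA's requirements.txt may differ):

    # keep the broken Flask release out while still allowing the rest of 0.12.x
    Flask!=0.12.3,>=0.10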
> > > > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Fri Apr 27 15:49:57 2018 From: alee at redhat.com (Ade Lee) Date: Fri, 27 Apr 2018 11:49:57 -0400 Subject: [openstack-dev] [tripleo] validating overcloud config changes on a redeploy Message-ID: <1524844197.3706.24.camel@redhat.com> Hi, Recently I started looking at how we implement password changes in an existing deployment, and found that there were issues. This made me wonder whether we need a test job to confirm that password changes (and other config changes) are in fact executed properly. As far as I understand it, the way to do password changes is to - 1) Create a yaml file containing the parameters to be changed and their new values 2) call openstack overcloud deploy and append -e new_params.yaml Note that the above steps can really describe the testing of setting any config changes (not just passwords). Of course, if we do change passwords, we'll want to validate that the config files have changed, the keystone/db users have been modified, the mistral plan has been updated, services are still running etc. After talking with many folks, it seems there is no clear consensus on where code to do the above tasks should live. Should it be in tripleo-upgrades, or in tripleo-validations, or in a separate repo? Is there anyone already doing something similar? If we end up creating a role to do this, ideally it should be deployment tool agnostic - usable by infrared, quickstart, or others. What's the best way to do this? Thanks, Ade

From corey.bryant at canonical.com Fri Apr 27 15:54:18 2018 From: corey.bryant at canonical.com (Corey Bryant) Date: Fri, 27 Apr 2018 11:54:18 -0400 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS In-Reply-To: <2d5b9f1e66664cf6a1333b7837ffb189@mb01.staff.ognet.se> References: <2d5b9f1e66664cf6a1333b7837ffb189@mb01.staff.ognet.se> Message-ID: On Fri, Apr 27, 2018 at 11:23 AM, Tobias Urdin wrote: > Hello, > > I was very interested in packaging Zun for Ubuntu however I did not have > the time to properly get started. > > I was able to package kuryr-lib, I've uploaded it here for now > https://github.com/tobias-urdin/deb-kuryr-lib > > > Would love to see both Zun and Qinling in Ubuntu to get a good grip on the > container world :) > Best regards > > Awesome Tobias. I can take a closer look next week if you'd like. Thanks, Corey > > On 04/27/2018 04:59 PM, Corey Bryant wrote: > > On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu wrote: > >> Corey, >> >> Thanks for the information. Would you clarify what is "working packages >> from the community"? >> >> Best regards, >> Hongbin >> > > Sorry I guess that comment is probably a bit vague. > > The OpenStack packages are open source like many other projects.
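To make Ade's two steps concrete, here is a minimal sketch. The file name follows his example; RabbitPassword is one real TripleO parameter, but the value and the rest of the deploy arguments are placeholders:

    # new_params.yaml (step 1) -- only the parameters being changed
    parameter_defaults:
      RabbitPassword: <new-rabbit-password>

    # step 2 -- re-run the deploy with the same arguments as the original
    # deployment, appending the new environment file last
    openstack overcloud deploy --templates <original arguments> \
      -e new_params.yaml

Appending the new file last matters: later -e environment files override earlier ones.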
They're > Apache 2 licensed and we gladly accept contributions. :) > > This is a good starting point for working with the Ubuntu OpenStack > packages: > https://wiki.ubuntu.com/OpenStack/CorePackages > > If you or someone else were to provide package sources for zun that DTRT > to create binary packages, and if they can test them, then I'd be happy to > review/sponsor the Ubuntu and cloud-archive uploads. > > Thanks, > Corey > > >> >> On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant >> wrote: >> >>> >>> >>> On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu >>> wrote: >>> >>>> Hi Corey, >>>> >>>> What are the requirements to include OpenStack Zun in the Ubuntu >>>> packages? We have a comprehensive installation guide [1] that is used by >>>> a lot of users when installing Zun. However, the lack of >>>> Ubuntu packages is inconvenient for our users. What can the Zun team do >>>> to help get Zun into Ubuntu? >>>> >>>> [1] https://docs.openstack.org/zun/latest/install/index.html >>>> >>>> Best regards, >>>> Hongbin >>>> >>> >>> Hi Hongbin, >>> >>> If we were to get working packages from the community and commitment to >>> test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud >>> archive. >>> >>> Thanks, >>> Corey >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Fri Apr 27 16:04:18 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 27 Apr 2018 11:04:18 -0500 Subject: [openstack-dev] The Forum Schedule is now live Message-ID: <5AE34A02.8020802@openstack.org> Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Thank you and see you in Vancouver! Jimmy

From tobias.urdin at crystone.com Fri Apr 27 16:06:17 2018 From: tobias.urdin at crystone.com (Tobias Urdin) Date: Fri, 27 Apr 2018 16:06:17 +0000 Subject: [openstack-dev] [Openstack] OpenStack Queens for Ubuntu 18.04 LTS References: <2d5b9f1e66664cf6a1333b7837ffb189@mb01.staff.ognet.se> Message-ID: <5da9e30ebe8b40a49f9ada0f2ae22253@mb01.staff.ognet.se> I got started on kuryr-libnetwork but never finished the init/systemd scripts, but all dependencies in the control file should be ok. I uploaded it here: https://github.com/tobias-urdin/deb-kuryr-libnetwork (not a working package!) After fixing kuryr-libnetwork one can get started packaging Zun. For Qinling you might want kuryr-libkubernetes as well, but I'm unsure.
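For anyone who wants to pick this up, the usual local build loop for such a package looks roughly like this (a sketch only; it assumes the repository carries a complete debian/ tree next to the sources, which may not hold here yet):

    git clone https://github.com/tobias-urdin/deb-kuryr-libnetwork
    cd deb-kuryr-libnetwork
    dpkg-buildpackage -us -uc   # build unsigned binary packages
    lintian ../*.changes        # sanity-check the result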
Best regards On 04/27/2018 05:56 PM, Corey Bryant wrote: On Fri, Apr 27, 2018 at 11:23 AM, Tobias Urdin > wrote: Hello, I was very interested in packaging Zun for Ubuntu however I did not have the time to properly get started. I was able to package kuryr-lib, I've uploaded it here for now https://github.com/tobias-urdin/deb-kuryr-lib Would love to see both Zun and Qinling in Ubuntu to get a good grip on the container world :) Best regards Awesome Tobias. I can take a closer look next week if you'd like. Thanks, Corey On 04/27/2018 04:59 PM, Corey Bryant wrote: On Fri, Apr 27, 2018 at 10:20 AM, Hongbin Lu > wrote: Corey, Thanks for the information. Would you clarify what is "working packages from the community"? Best regards, Hongbin Sorry I guess that comment is probably a bit vague. The OpenStack packages are open source like many other projects. They're Apache 2 licensed and we gladly accept contributions. :) This is a good starting point for working with the Ubuntu OpenStack packages: https://wiki.ubuntu.com/OpenStack/CorePackages If you or someone else were to provide package sources for zun that DTRT to create binary packages, and if they can test them, then I'd be happy to review/sponsor the Ubuntu and cloud-archive uploads. Thanks, Corey On Fri, Apr 27, 2018 at 9:30 AM, Corey Bryant > wrote: On Fri, Apr 27, 2018 at 9:03 AM, Hongbin Lu > wrote: Hi Corey, What are the requirements to include OpenStack Zun into the Ubuntu packages? We have a comprehensive installation guide [1] that are using by a lot of users when they were installing Zun. However, the missing of Ubuntu packages is inconvenient for our users. What the Zun team can help for adding Zun to Ubuntu. [1] https://docs.openstack.org/zun/latest/install/index.html Best regards, Hongbin Hi Hongbin, If we were to get working packages from the community and commitment to test, I'd be happy to sponsor uploads to Ubuntu and backport to the cloud achive. Thanks, Corey __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From doug at doughellmann.com Fri Apr 27 16:25:59 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Fri, 27 Apr 2018 12:25:59 -0400 Subject: [openstack-dev] [all][reno] issue with reno 2.9.0 and duplicate anchors In-Reply-To: <1524837645-sup-1751@lrrr.local> References: <1524837645-sup-1751@lrrr.local> Message-ID: <1524846311-sup-4099@lrrr.local> Excerpts from Doug Hellmann's message of 2018-04-27 10:02:08 -0400: > The latest release of reno tries to add anchors to the page in a way > that ensures they are named consistently across builds. 
For projects > with the same version number in multiple series (which can happen for > non-milestone projects that haven't tagged for rocky yet), this causes > duplicate anchors and causes the release notes build to fail. > > There is a fix for this in https://review.openstack.org/564763 and we > will try to get a new release of reno out as soon as that patch merges. > > Doug Reno 2.9.1 is available now and should fix this issue [1]. The constraint update is working its way through the gate [2]. [1] http://lists.openstack.org/pipermail/release-announce/2018-April/004988.html [2] https://review.openstack.org/#/c/564794/

From jimmy at openstack.org Fri Apr 27 16:31:28 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Fri, 27 Apr 2018 11:31:28 -0500 Subject: [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE34A02.8020802@openstack.org> References: <5AE34A02.8020802@openstack.org> Message-ID: <5AE35060.1040506@openstack.org> PS: If you have general questions on the schedule, additional updates to an abstract, or changes to the speaker list, please send them along to speakersupport at openstack.org. > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Apr 27 17:24:27 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 27 Apr 2018 10:24:27 -0700 Subject: [openstack-dev] [octavia] Sometimes amphoras are not re-created if they are not reached for more than heartbeat_timeout In-Reply-To: <11302_1524654452_5AE06174_11302_207_1_2be855e5b8174bf397106775823399bf@orange.com> References: <11302_1524654452_5AE06174_11302_207_1_2be855e5b8174bf397106775823399bf@orange.com> Message-ID: Hi Mihaela, I am sorry to hear you are having trouble with the queens release of Octavia. It is true that a lot of work has gone into the failover capability, specifically working around a python threading issue and making it more resistant to certain neutron failure situations (missing ports, etc.). I know of one open bug against the failover flows, https://storyboard.openstack.org/#!/story/2001481, "failover breaks in Active/Standby mode if both amphroae are down". Unfortunately the log snippet above does not give me enough information about the problem to help with this issue. From the snippet it looks like the failovers were initiated, but the controllers are unable to reach the amphora-agent on the replacement amphora. It will continue those retry attempts, but eventually will fail the amphora into ERROR if it doesn't succeed. One thought I have is if you created your amphora image in the last two weeks, you may have built an amphora using the master branch of octavia, which had a bug that impacted active/standby images. This was introduced working around the new pip 10 issues.
That patch has been fixed: https://review.openstack.org/#/c/564371/ If neither of these situations match your environment, please open a story (https://storyboard.openstack.org/#!/dashboard/stories) for us and include the health manager logs from the point you delete the amphora up until it starts these connection attempts. We will dig through those logs to see what the issue might be. Michael (johnsom) On Wed, Apr 25, 2018 at 4:07 AM, wrote: > Hello, > > > > I am testing Octavia Queens and I see that the failover behavior is very > much different than the one in Ocata (this is the version we are currently > running in production). > > One example of such behavior is: > > > > I create 4 load balancers and after the creation is successful, I shut off > all the 8 amphoras. Sometimes, even the health-manager agent does not reach > the amphoras, they are not deleted and re-created. The logs look like shown > below even when the heartbeat timeout is long passed. Sometimes the amphoras > are deleted and re-created. Sometimes, they are partially re-created – part > of them remain in shut off. > > Heartbeat_timeout is set to 60 seconds. > > > > > > > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:26.244 11 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded > with url: > /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem > (Caused by NewConnectionError(' object at 0x7f559862c710>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:26.464 13 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded > with url: > /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy > (Caused by NewConnectionError(' object at 0x7f8a0de95e10>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:27.772 11 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded > with url: > /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy > (Caused by NewConnectionError(' object at 0x7f5598491c90>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:34.252 11 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-339b54a7-ab0c-422a-832f-a444cd710497 - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. 
Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.15', port=9443): Max retries exceeded > with url: > /0.5/listeners/285ad342-5582-423e-b654-1f0b50d91fb2/certificates/octaviasrv2.orange.com.pem > (Caused by NewConnectionError(' object at 0x7f5598520790>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > [octavia-health-manager-3662231220-3lssd] 2018-04-25 10:57:34.476 13 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-a63b795a-4b4f-4b90-a201-a4c9f49ac68b - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.14', port=9443): Max retries exceeded > with url: > /0.5/listeners/a45bdef3-e7da-4a18-9f1f-53d5651efe0f/1615c1ec-249e-4fa8-9d73-2397e281712c/haproxy > (Caused by NewConnectionError(' object at 0x7f8a0de953d0>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > [octavia-health-manager-3662231220-nxnt3] 2018-04-25 10:57:35.780 11 WARNING > octavia.amphorae.drivers.haproxy.rest_api_driver > [req-10febb10-85ea-4082-9df7-daa48894b004 - a5f15235c0714365b98a50a11ec956e7 > - - -] Could not connect to instance. Retrying.: ConnectionError: > HTTPSConnectionPool(host='192.168.0.19', port=9443): Max retries exceeded > with url: > /0.5/listeners/96ce5862-d944-46cb-8809-e1e328268a66/fc5b7940-3527-4e9b-b93f-1da3957a5b71/haproxy > (Caused by NewConnectionError(' object at 0x7f55984e2050>: Failed to establish a new connection: [Errno 113] > No route to host',)) > > > > Thank you, > > Mihaela Balas > > _________________________________________________________________________________________________________________________ > > Ce message et ses pieces jointes peuvent contenir des informations > confidentielles ou privilegiees et ne doivent donc > pas etre diffuses, exploites ou copies sans autorisation. Si vous avez recu > ce message par erreur, veuillez le signaler > a l'expediteur et le detruire ainsi que les pieces jointes. Les messages > electroniques etant susceptibles d'alteration, > Orange decline toute responsabilite si ce message a ete altere, deforme ou > falsifie. Merci. > > This message and its attachments may contain confidential or privileged > information that may be protected by law; > they should not be distributed, used or copied without authorisation. > If you have received this email in error, please notify the sender and > delete this message and its attachments. > As emails may be altered, Orange is not liable for messages that have been > modified, changed or falsified. > Thank you. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From emilien at redhat.com Fri Apr 27 17:41:28 2018 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 27 Apr 2018 10:41:28 -0700 Subject: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core In-Reply-To: References: Message-ID: +1, thanks Tobias for your contributions! 
On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory wrote: > +1 > > On Fri, Apr 27, 2018, 12:15 Mohammed Naser wrote: > >> Hi everyone, >> >> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack >> team as they've been putting great reviews over the past few months >> and they have directly contributed in resolving all the Ubuntu >> deployment issues and helped us bring Ubuntu support back and make the >> jobs voting again. >> >> Thank you, >> Mohammed >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Apr 27 17:57:30 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 27 Apr 2018 10:57:30 -0700 Subject: [openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility In-Reply-To: References: Message-ID: Hi Artem, You are correct that the API reference at https://developer.openstack.org/api-ref/network/v2/index.html#pools is incorrect. As you figured out, someone mistakenly merged the long dead/removed LBaaS v1 API specification into the LBaaS v2 API specification at that link. The current, and up to date, load balancing API reference is at: https://developer.openstack.org/api-ref/load-balancer/v2/index.html This documents the Octavia API, which is a superset of the LBaaS v2 API, so it should help you clarify any issues you run into. That said, due to the deprecation of neutron-lbaas and its spin out from neutron, we decided to explicitly not support neutron-lbaas in the OpenStack Client. neutron-lbaas is only supported using the neutron client. You can continue to use the neutron client CLI with neutron-lbaas through the neutron-lbaas deprecation cycle. When you move to using Octavia you can switch to using the python-octaviaclient OSC plugin. Michael

On Wed, Apr 25, 2018 at 5:51 AM, Artem Goncharov wrote: > Hi all, > > after working with OpenStackSDK in my cloud I have found one difference in > the Neutron LBaaS (yes, I know it is deprecated, but it is still used). The > fix would be small and fast; unfortunately, I have faced problems with the > API description: > - https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes, > that the LB pool has healthmonitor_id attribute (what eventually also fits > reality of my cloud) > - https://developer.openstack.org/api-ref/network/v2/index.html#pools (which > is referred to from the previous link in the deprecation note) describes, > that the LB pool has healthmonitors (and healthmonitors_status) as list of > IDs.
Basically in this regards it is same as > https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description > - unfortunately even > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc > describes Pool.healthmonitors (however it also contains > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json > sample with the Pool.healthmonitor_id) > - OpenStackSDK contains network.pool.health_monitors (with underscore) > > I want to bring this all in an order and enable managing of the loadbalancer > through OSC for my OpenStack cloud, but I can't figure out what is the > correct behavior here. > > Can anybody, please, help in figuring out the truth here? > > Thanks, > Artem > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From artem.goncharov at gmail.com Fri Apr 27 18:09:18 2018 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Fri, 27 Apr 2018 18:09:18 +0000 Subject: [openstack-dev] [api] [lbaas] Neutron LBaaS V2 docs incompatibility In-Reply-To: References: Message-ID: Thanks a lot Michael. On Fri, 27 Apr 2018, 19:57 Michael Johnson, wrote: > Hi Artem, > > You are correct that the API reference at > https://developer.openstack.org/api-ref/network/v2/index.html#pools is > incorrect. As you figured out, someone mistakenly merged the long > dead/removed LBaaS v1 API specification into the LBaaS v2 API > specification at that link. > > The current, and up to date load balancing API reference is at: > https://developer.openstack.org/api-ref/load-balancer/v2/index.html > > This documents the Octavia API which is a superset of the the LBaaS v2 > API, so it should help you clarify any issues you run into. > > That said, due to the deprecation of neutron-lbaas and spin out from > neutron we decided to explicitly not support neutron-lbaas in the > OpenStack Client. neutron-lbaas is only supported using the neutron > client. You can continue to use the neutron client CLI with > neutron-lbaas through the neutron-lbaas deprecation cycle. > > When you move to using Octavia you can switch to using the > python-octaviaclient OSC plugin. > > Michael > > On Wed, Apr 25, 2018 at 5:51 AM, Artem Goncharov > wrote: > > Hi all, > > > > after working with OpenStackSDK in my cloud I have found one difference > in > > the Neutron LBaaS (yes, I know it is deprecated, but it is still used). > The > > fix would be small and fast, unfortunately I have faced problems with the > > API description: > > - https://wiki.openstack.org/wiki/Neutron/LBaaS/API_2.0#Pools describes, > > that the LB pool has healthmonitor_id attribute (what eventually also > fits > > reality of my cloud) > > - https://developer.openstack.org/api-ref/network/v2/index.html#pools > (which > > is referred to from the previous link in the deprecation note) describes, > > that the LB pool has healthmonitors (and healthmonitors_status) as list > of > > IDs. 
Basically in this regards it is same as > > https://wiki.openstack.org/wiki/Neutron/LBaaS/API_1.0#Pool description > > - unfortunately even > > > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/lbaas-v2.inc > > describes Pool.healthmonitors (however it also contains > > > https://github.com/openstack/neutron-lib/blob/master/api-ref/source/v2/samples/lbaas/pools-list-response2.json > > sample with the Pool.healthmonitor_id) > > - OpenStackSDK contains network.pool.health_monitors (with underscore) > > > > I want to bring this all in an order and enable managing of the > loadbalancer > > through OSC for my OpenStack cloud, but I can't figure out what is the > > correct behavior here. > > > > Can anybody, please, help in figuring out the truth here? > > > > Thanks, > > Artem > > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Fri Apr 27 21:13:44 2018 From: aschultz at redhat.com (Alex Schultz) Date: Fri, 27 Apr 2018 15:13:44 -0600 Subject: [openstack-dev] [puppet] Proposing Tobias Urdin to join Puppet OpenStack core In-Reply-To: References: Message-ID: +1 On Fri, Apr 27, 2018 at 11:41 AM, Emilien Macchi wrote: > +1, thanks Tobias for your contributions! > > On Fri, Apr 27, 2018 at 8:21 AM, Iury Gregory wrote: >> >> +1 >> >> On Fri, Apr 27, 2018, 12:15 Mohammed Naser wrote: >>> >>> Hi everyone, >>> >>> I'm proposing that we add Tobias Urdin to the core Puppet OpenStack >>> team as they've been putting great reviews over the past few months >>> and they have directly contributed in resolving all the Ubuntu >>> deployment issues and helped us bring Ubuntu support back and make the >>> jobs voting again. 
>>> >>> Thank you, >>> Mohammed >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > > > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From mariusc at redhat.com Fri Apr 27 22:04:24 2018 From: mariusc at redhat.com (Marius Cornea) Date: Fri, 27 Apr 2018 18:04:24 -0400 Subject: [openstack-dev] [tripleo] validating overcloud config changes on a redeploy In-Reply-To: <1524844197.3706.24.camel@redhat.com> References: <1524844197.3706.24.camel@redhat.com> Message-ID: Hi Ade, To the best of my knowledge the closest place where we do a similar sequence of actions (post deployment, build and append environment files to the deploy command and re-run overcloud deploy on top of an already deployed overcloud) is tripleo-upgrade. As we discussed on IRC, I was reluctant to add these kinds of tests to tripleo-upgrade since it was initially created to cover only the minor update and major upgrade use cases. Nevertheless, thinking more about your use case, I realized that configuration change tests could fit quite well in tripleo-upgrade for several reasons: - we already have a mechanism[1] in place for attaching extra environment files to the deploy command - we already have tests that can be run during the stack update which applies the config changes; this could be useful to validate that configuration changes do not break the data plane (e.g. to validate that a neutron config change doesn't leave instances without networking during the stack update) - we can easily segregate the config change plays into their own directory as we do with update/upgrade[2] and add the reusable ones in the common directory - upgrades might benefit from the config change tests by running them in a pre/post minor update/major upgrade step and catching potential parameter changes between releases I'd like to hear what others think about this and see if there could be a better place to host these kinds of tests, but personally I'm ok with adding them to tripleo-upgrade. Best regards, Marius [1] http://git.openstack.org/cgit/openstack/tripleo-upgrade/tree/tasks/upgrade/step_upgrade.yml [2] http://git.openstack.org/cgit/openstack/tripleo-upgrade/tree/tasks On Fri, Apr 27, 2018 at 11:49 AM, Ade Lee wrote: > Hi, > > Recently I started looking at how we implement password changes in an > existing deployment, and found that there were issues. This made me > wonder whether we need a test job to confirm that password changes > (and other config changes) are in fact executed properly.
> > As far as I understand it, the way to do password changes is to - > 1) Create a yaml file containing the parameters to be changed and > their new values > 2) call openstack overcloud deploy and append -e new_params.yaml > > Note that the above steps can really describe the testing of setting > any config changes (not just passwords). > > Of course, if we do change passwords, we'll want to validate that the > config files have changed, the keystone/dbusers have been modified, the > mistral plan has been updated, services are still running etc. > > After talking with many folks, it seems there is no clear consensus > where code to do the above tasks should live. Should it be in tripleo- > upgrades, or in tripleo-validations or in a separate repo? > > Is there anyone already doing something similar? > > If we end up creating a role to do this, ideally it should be > deployment tool agnostic - usable by both infrared or quickstart or > others. > > Whats the best way to do this? > > Thanks, > Ade > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From anlin.kong at gmail.com Sat Apr 28 00:10:33 2018 From: anlin.kong at gmail.com (Lingxian Kong) Date: Sat, 28 Apr 2018 00:10:33 +0000 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: At Catalyst Cloud: RetryFilter AvailabilityZoneFilter RamFilter ComputeFilter AggregateCoreFilter DiskFilter AggregateInstanceExtraSpecsFilter ImagePropertiesFilter ServerGroupAntiAffinityFilter SameHostFilter Cheers, Lingxian Kong On Sat, Apr 28, 2018 at 3:04 AM Jim Rollenhagen wrote: > On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz > wrote: > >> Hi all, >> >> A CI issue [1] caused by tempest thinking some filters are enabled >> when they're really not, and a proposed patch [2] to add >> (Same|Different)HostFilter to the default filters as a workaround, has >> led to a discussion about what filters should be enabled by default in >> nova. >> >> The default filters should make sense for a majority of real world >> deployments. Adding some filters to the defaults because CI needs them >> is faulty logic, because the needs of CI are different to the needs of >> operators/users, and the latter takes priority (though it's my >> understanding that a good chunk of operators run tempest on their >> clouds post-deployment as a way to validate that the cloud is working >> properly, so maybe CI's and users' needs aren't that different after >> all). >> >> To that end, we'd like to know what filters operators are enabling in >> their deployment. If you can, please reply to this email with your >> [filter_scheduler]/enabled_filters (or >> [DEFAULT]/scheduler_default_filters if you're using an older version) >> option from nova.conf. 
Any other comments are welcome as well :) >> > > At Oath: > > AggregateImagePropertiesIsolation > ComputeFilter > CoreFilter > DifferentHostFilter > SameHostFilter > ServerGroupAntiAffinityFilter > ServerGroupAffinityFilter > AvailabilityZoneFilter > AggregateInstanceExtraSpecsFilter > > // jim > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wenranxiao at gmail.com Sat Apr 28 08:39:04 2018 From: wenranxiao at gmail.com (wenran xiao) Date: Sat, 28 Apr 2018 16:39:04 +0800 Subject: [openstack-dev] [neutron] Problem in applying QoS policy to router gateway port Message-ID: I applied a QoS policy to a router gateway port in devstack, following this patch: https://review.openstack.org/#/c/523153/ . I changed the option "ovs_use_veth" to True (the default is False), but I ran into a problem: I can't reach my VM with "ping". The VM and namespace were created before I changed "ovs_use_veth". Is this change supposed to apply smoothly? -------------- next part -------------- An HTML attachment was scrubbed... URL: From simon.leinen at switch.ch Sat Apr 28 08:53:46 2018 From: simon.leinen at switch.ch (Simon Leinen) Date: Sat, 28 Apr 2018 10:53:46 +0200 Subject: [openstack-dev] [designate] Meeting Times - change to office hours? In-Reply-To: (Graham Hayes's message of "Mon, 23 Apr 2018 12:11:12 +0100") References: Message-ID: Graham Hayes writes: > I would like to suggest we have an office hours style meeting, with > one in the UTC evening and one in the UTC morning. > If this seems reasonable - when and what frequency should we do > them? What times suit the current set of contributors? In general, I prefer 0700-1700 UTC, but 1600-2100 UTC is also doable. The current slot (1400 UTC) is ideal for me, except that in Winter (outside EU DST) it collides with another (regional OpenStack-related) teleconference every other week. Moving the current Designate meeting slot to bi-weekly would provide an easy way to fix my collision. -- Simon.

From gmann at ghanshyammann.com Sat Apr 28 10:27:02 2018 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 28 Apr 2018 19:27:02 +0900 Subject: [openstack-dev] [tempest] Proposing Felipe Monteiro for Tempest core Message-ID: Hi Tempest Team, I would like to propose Felipe Monteiro (irc: felipemonteiro) to Tempest core. Felipe has been an active contributor to Tempest since the Pike cycle. He has been doing a lot of reviews and commits since then, filling the gaps on the service clients side, their testing, and a lot of other areas. He has demonstrated good quality and feedback in his reviews. He has a good understanding of the Tempest source code and the project missions & goals. IMO his efforts are highly valuable and it will be great to have him on the team. As per usual practice, please vote +1 or -1 to the nomination. I will keep this nomination open for a week or until everyone has voted.
Felipe Reviews and Commits - https://review.openstack.org/#/q/reviewer:felipe.monteiro at att.com+project:openstack/tempest https://review.openstack.org/#/q/owner:felipe.monteiro at att.com+project:openstack/tempest -gmann

From masayuki.igawa at gmail.com Sat Apr 28 10:47:27 2018 From: masayuki.igawa at gmail.com (Masayuki Igawa) Date: Sat, 28 Apr 2018 19:47:27 +0900 Subject: [openstack-dev] [tempest] Proposing Felipe Monteiro for Tempest core In-Reply-To: References: Message-ID: <1524912447.2668821.1353791304.6089EE37@webmail.messagingengine.com> +1!!!! -- Masayuki Igawa Key fingerprint = C27C 2F00 3A2A 999A 903A 753D 290F 53ED C899 BF89 On Sat, Apr 28, 2018, at 19:27, Ghanshyam Mann wrote: > Hi Tempest Team, > > I would like to propose Felipe Monteiro (irc: felipemonteiro) to Tempest core. > > Felipe has been an active contributor to the Tempest since the Pike > cycle. He has been doing lot of review and commits since then. Filling > the gaps on service clients side and their testing and lot other > areas. He has demonstrated the good quality and feedback while his > review. > > He has good understanding of Tempest source code and project missions > & goal. IMO his efforts are highly valuable and it will be great to > have him in team. > > > As per usual practice, please vote +1 or -1 to the nomination. I will > keep this nomination open for a week or until everyone voted. > > Felipe Reviews and Commit - > https://review.openstack.org/#/q/reviewer:felipe.monteiro at att.com+project:openstack/tempest > https://review.openstack.org/#/q/owner:felipe.monteiro at att.com+project:openstack/tempest > > -gmann > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

From hongbin034 at gmail.com Sat Apr 28 21:14:02 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 28 Apr 2018 17:14:02 -0400 Subject: [openstack-dev] [Zun][Kolla][Kolla-ansible] Verify Zun deployment in Kolla gate Message-ID: Hi Kolla team, Recently, I saw there are users who tried to install Zun by using Kolla-ansible and reported bugs to us whenever they ran into issues (e.g. https://bugs.launchpad.net/kolla-ansible/+bug/1766151). The increase of this usage pattern (Kolla + Zun) made me think that we need CI coverage to verify the Zun deployment set up by Kolla. IMHO, the ideal CI workflow would be: * Create a VM with different distros (e.g. Ubuntu, CentOS). * Use Kolla-ansible to stand up a Zun deployment. * Run Zun's tempest test suite [1] against the deployment. My question for the Kolla team is: is it reasonable to set up a Zuul job as described above, or do such CI jobs already exist? If not, how do we create one? [1] https://github.com/openstack/zun-tempest-plugin Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ken1ohmichi at gmail.com Sat Apr 28 21:43:44 2018 From: ken1ohmichi at gmail.com (Ken'ichi Ohmichi) Date: Sat, 28 Apr 2018 21:43:44 +0000 Subject: [openstack-dev] [tempest] Proposing Felipe Monteiro for Tempest core In-Reply-To: References: Message-ID: +1 Thanks for your contribution, Felipe On Sat, Apr 28, 2018 at 3:29, Ghanshyam Mann wrote: > Hi Tempest Team, > > I would like to propose Felipe Monteiro (irc: felipemonteiro) to Tempest > core. > > Felipe has been an active contributor to the Tempest since the Pike > cycle.
He has been doing a lot of reviews and commits since then, filling > the gaps on the service clients side and their testing, and in a lot of other > areas. He has demonstrated good quality and feedback in his > reviews. > > He has a good understanding of the Tempest source code and the project's missions > & goals. IMO his efforts are highly valuable and it will be great to > have him on the team. > > > As per usual practice, please vote +1 or -1 to the nomination. I will > keep this nomination open for a week or until everyone has voted. > > Felipe's reviews and commits - > > https://review.openstack.org/#/q/reviewer:felipe.monteiro at att.com+project:openstack/tempest > > https://review.openstack.org/#/q/owner:felipe.monteiro at att.com+project:openstack/tempest > > -gmann > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Sat Apr 28 23:46:29 2018 From: hongbin034 at gmail.com (Hongbin Lu) Date: Sat, 28 Apr 2018 19:46:29 -0400 Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun Message-ID: Hi folks, FYI. I wrote a blog post about a comparison between AWS Fargate and OpenStack Zun. It mainly covers the following: * The basic concepts of OpenStack Zun and AWS Fargate * The Kubernetes integration plan Here is the link: https://www.linkedin.com/pulse/aws-fargate-openstack-zun-comparing-serverless-container-hongbin-lu/ Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengh512 at gmail.com Sun Apr 29 00:25:40 2018 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Sun, 29 Apr 2018 08:25:40 +0800 Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun In-Reply-To: References: Message-ID: Thanks Hongbin, this is absolutely awesome. On Sun, Apr 29, 2018 at 7:46 AM, Hongbin Lu wrote: > Hi folks, > > FYI. I wrote a blog post about a comparison between AWS Fargate and > OpenStack Zun. It mainly covers the following: > > * The basic concepts of OpenStack Zun and AWS Fargate > * The Kubernetes integration plan > > Here is the link: https://www.linkedin.com/pulse/aws-fargate-openstack-zun-comparing-serverless-container-hongbin-lu/ > > Best regards, > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Zhipeng (Howard) Huang Standard Engineer IT Standard & Patent/IT Product Line Huawei Technologies Co., Ltd Email: huangzhipeng at huawei.com Office: Huawei Industrial Base, Longgang, Shenzhen (Previous) Research Assistant Mobile Ad-Hoc Network Lab, Calit2 University of California, Irvine Email: zhipengh at uci.edu Office: Calit2 Building Room 2402 OpenStack, OPNFV, OpenDaylight, OpenCompute Aficionado -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stdake at cisco.com Sun Apr 29 01:16:03 2018 From: stdake at cisco.com (Steven Dake (stdake)) Date: Sun, 29 Apr 2018 01:16:03 +0000 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: References: Message-ID: <42FF9A49-2D30-4637-9FB7-64B009C6AB41@cisco.com> +1 From: Jeffrey Zhang Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Thursday, April 26, 2018 at 5:32 PM To: OpenStack Development Mailing List Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member Kolla core reviewer team, It is my pleasure to nominate mgoddard for the kolla core team. Mark has been working both upstream and downstream with kolla and kolla-ansible for over two years, building bare metal compute clouds with ironic for HPC. He's been involved with OpenStack since 2014. He started the kayobe deployment project, which complements kolla-ansible. He is also the most active non-core contributor for the last 90 days [1]. Consider this nomination a +1 vote from me. A +1 vote indicates you are in favor of mgoddard as a candidate, a -1 is a veto. Voting is open for 7 days, until May 4th, or until a unanimous response is reached or a veto vote occurs. [1] http://stackalytics.com/report/contribution/kolla-group/90 -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From ifat.afek at nokia.com Sun Apr 29 06:10:34 2018 From: ifat.afek at nokia.com (Afek, Ifat (Nokia - IL/Kfar Sava)) Date: Sun, 29 Apr 2018 06:10:34 +0000 Subject: [openstack-dev] [Vitrage] Vitrage graph error In-Reply-To: <03ef01d3dba2$1e1d06c0$5a571440$@ssu.ac.kr> References: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> <720B7050-A658-4E3D-BB60-C9AECC5D4186@nokia.com> <03ef01d3dba2$1e1d06c0$5a571440$@ssu.ac.kr> Message-ID: <74014AD9-8C7C-4552-B222-EF11388E4A3E@nokia.com> Hi Minwook, The following change should fix your problem: https://review.openstack.org/#/c/564471/ Let me know if it helped. Thanks, Ifat From: MinWookKim Reply-To: "OpenStack Development Mailing List (not for usage questions)" Date: Tuesday, 24 April 2018 at 10:59 To: "'OpenStack Development Mailing List (not for usage questions)'" Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hello Ifat, I have not checked the alarm yet. (I think it does not work.) However, I confirmed that the entity graph and the topology do not work. Additionally, the CLI does not seem to work either. I'll check it out with you. : ) Thank you. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 24, 2018 4:15 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hi Minwook, Is the problem only in the Entity Graph? Do the Alarms view and the Topology view work? And what about the CLI? I’ll check it and get back to you. Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, 23 April 2018 at 16:02 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: [openstack-dev] [Vitrage] Vitrage graph error Hello Vitrage team, A few days ago I used Devstack to install the OpenStack master version, which included Vitrage. However, I found that the Vitrage graph does not work on the Vitrage-dashboard. The state of all Vitrage components is active. Could you check it once? Thanks.
Best Regards, Minwook. -------------- next part -------------- An HTML attachment was scrubbed... URL: From inc007 at gmail.com Sun Apr 29 07:26:47 2018 From: inc007 at gmail.com (Michał Jastrzębski) Date: Sun, 29 Apr 2018 09:26:47 +0200 Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard (mgoddard) as kolla core member In-Reply-To: <42FF9A49-2D30-4637-9FB7-64B009C6AB41@cisco.com> References: <42FF9A49-2D30-4637-9FB7-64B009C6AB41@cisco.com> Message-ID: strong +1 from me! Great work Mark! On 29 April 2018 at 03:16, Steven Dake (stdake) wrote: > +1 > > From: Jeffrey Zhang > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > > Date: Thursday, April 26, 2018 at 5:32 PM > To: OpenStack Development Mailing List > Subject: [openstack-dev] [kolla][vote]Core nomination for Mark Goddard > (mgoddard) as kolla core member > > Kolla core reviewer team, > > It is my pleasure to nominate mgoddard for the kolla core team. > > Mark has been working both upstream and downstream with kolla and > kolla-ansible for over two years, building bare metal compute clouds with > ironic for HPC. He's been involved with OpenStack since 2014. He started > the kayobe deployment project, which complements kolla-ansible. He is > also the most active non-core contributor for the last 90 days [1]. > > Consider this nomination a +1 vote from me. > > A +1 vote indicates you are in favor of mgoddard as a candidate, a -1 is a > veto. Voting is open for 7 days, until May 4th, or until a unanimous > response is reached or a veto vote occurs. > > [1] http://stackalytics.com/report/contribution/kolla-group/90 > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jean-philippe at evrard.me Sun Apr 29 07:36:15 2018 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Sun, 29 Apr 2018 09:36:15 +0200 Subject: [openstack-dev] [OpenStackAnsible] Tag repos as newton-eol In-Reply-To: References: <20180314212003.GC25428@thor.bakeyournoodle.com> Message-ID: Hello, > I'd like to phase out openstack/openstack-ansible-tests and > openstack/openstack-ansible later. Now that we have had the time to bump the roles in openstack-ansible and adapt the tests, we can now EOL the rest of newton, i.e. openstack/openstack-ansible and openstack/openstack-ansible-tests. Thanks for the help again Tony! JP From fungi at yuggoth.org Sun Apr 29 18:18:42 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sun, 29 Apr 2018 18:18:42 +0000 Subject: [openstack-dev] [All][Election] Last days for Rocky TC Election Voting! In-Reply-To: References: Message-ID: <20180429181842.iphk65cgkjei6k6j@yuggoth.org> We are coming down to the last hours for voting in the TC election. Voting ends 2018-04-30 (Monday) at 23:45 UTC. Search your gerrit preferred email address[0] for the following subject: Poll: Rocky TC Election That is your ballot and links you to the voting application. Please vote. If you have voted, please encourage your colleagues to vote.
Candidate statements are linked to the names of all confirmed candidates: http://governance.openstack.org/election/#rocky-tc-candidates What to do if you don't see the email and have a commit in at least one of the official programs projects[1]: * check the trash of your gerrit Preferred Email address[0], in case it went into trash or spam * wait a bit and check again, in case your email server is a bit slow * find the sha of at least one commit from the program project repos[1] and email the election officials[2]. If we can confirm that you are entitled to vote, we will add you to the voters list and you will be emailed a ballot. Please vote! Thank you, [0] Sign into review.openstack.org: Go to Settings > Contact Information. Look at the email listed as your Preferred Email. That is where the ballot has been sent. [1] https://git.openstack.org/cgit/openstack/governance/tree/reference/projects.yaml?id=apr-2018-elections [2] http://governance.openstack.org/election/#election-officials -- Jeremy Stanley, Election Official -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From alifshit at redhat.com Sun Apr 29 18:34:09 2018 From: alifshit at redhat.com (Artom Lifshitz) Date: Sun, 29 Apr 2018 14:34:09 -0400 Subject: [openstack-dev] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: Thanks everyone for your input! I wrote a small Python script [1] to present all your responses in an understandable format. Here's the output: Filters common to all deployments: {'ComputeFilter', 'ServerGroupAntiAffinityFilter'} Filter counts (out of 9 deployments): ServerGroupAntiAffinityFilter 9 ComputeFilter 9 AvailabilityZoneFilter 8 ServerGroupAffinityFilter 8 AggregateInstanceExtraSpecsFilter 8 ImagePropertiesFilter 8 RetryFilter 7 ComputeCapabilitiesFilter 5 AggregateCoreFilter 4 RamFilter 4 PciPassthroughFilter 3 AggregateRamFilter 3 CoreFilter 2 DiskFilter 2 AggregateImagePropertiesIsolation 2 SameHostFilter 2 AggregateMultiTenancyIsolation 1 NUMATopologyFilter 1 AggregateDiskFilter 1 DifferentHostFilter 1 Based on that, we can definitely say that SameHostFilter and DifferentHostFilter do *not* belong in the defaults. In fact, we got our defaults pretty spot on, based on this admittedly very limited dataset. The only frequently occurring filter that's not in our defaults is AggregateInstanceExtraSpecsFilter. [1] https://gist.github.com/notartom/0819df7c3cb9d02315bfabe5630385c9 On Fri, Apr 27, 2018 at 8:10 PM, Lingxian Kong wrote: > At Catalyst Cloud: > > RetryFilter > AvailabilityZoneFilter > RamFilter > ComputeFilter > AggregateCoreFilter > DiskFilter > AggregateInstanceExtraSpecsFilter > ImagePropertiesFilter > ServerGroupAntiAffinityFilter > SameHostFilter > > Cheers, > Lingxian Kong > > > On Sat, Apr 28, 2018 at 3:04 AM Jim Rollenhagen > wrote: >> >> On Wed, Apr 18, 2018 at 11:17 AM, Artom Lifshitz >> wrote: >>> >>> Hi all, >>> >>> A CI issue [1] caused by tempest thinking some filters are enabled >>> when they're really not, and a proposed patch [2] to add >>> (Same|Different)HostFilter to the default filters as a workaround, has >>> led to a discussion about what filters should be enabled by default in >>> nova. >>> >>> The default filters should make sense for a majority of real world >>> deployments. 
Adding some filters to the defaults because CI needs them >>> is faulty logic, because the needs of CI are different to the needs of >>> operators/users, and the latter takes priority (though it's my >>> understanding that a good chunk of operators run tempest on their >>> clouds post-deployment as a way to validate that the cloud is working >>> properly, so maybe CI's and users' needs aren't that different after >>> all). >>> >>> To that end, we'd like to know what filters operators are enabling in >>> their deployment. If you can, please reply to this email with your >>> [filter_scheduler]/enabled_filters (or >>> [DEFAULT]/scheduler_default_filters if you're using an older version) >>> option from nova.conf. Any other comments are welcome as well :) >> >> >> At Oath: >> >> AggregateImagePropertiesIsolation >> ComputeFilter >> CoreFilter >> DifferentHostFilter >> SameHostFilter >> ServerGroupAntiAffinityFilter >> ServerGroupAffinityFilter >> AvailabilityZoneFilter >> AggregateInstanceExtraSpecsFilter >> >> // jim >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -- -- Artom Lifshitz Software Engineer, OpenStack Compute DFG From mjturek at linux.vnet.ibm.com Sun Apr 29 20:17:45 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Sun, 29 Apr 2018 16:17:45 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: References: Message-ID: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> Awesome! If everyone doesn't mind the short notice, we'll have it again this Thursday @ 1:00 PM to 3:00 PM UTC. I can provide video conferencing through hangouts here https://goo.gl/xSKBS4 Let's give that a shot this time! We can adjust times, tooling, and regular agenda over the next couple meetings and see where we settle. If anyone has any questions or suggestions, don't hesitate to reach out to me! Thanks, Mike Turek On 4/25/18 12:11 PM, Julia Kreger wrote: > On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek > wrote: > >> What does everyone think about having Bug Day the first Thursday of every >> month? > All for it! > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From ed at leafe.com Sun Apr 29 21:29:17 2018 From: ed at leafe.com (Ed Leafe) Date: Sun, 29 Apr 2018 16:29:17 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: > > Based on that, we can definitely say that SameHostFilter and > DifferentHostFilter do *not* belong in the defaults. In fact, we got > our defaults pretty spot on, based on this admittedly very limited > dataset. The only frequently occurring filter that's not in our > defaults is AggregateInstanceExtraSpecsFilter. 
Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? -- Ed Leafe -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From delightwook at ssu.ac.kr Mon Apr 30 02:20:34 2018 From: delightwook at ssu.ac.kr (MinWookKim) Date: Mon, 30 Apr 2018 11:20:34 +0900 Subject: [openstack-dev] [Vitrage] Vitrage graph error In-Reply-To: <74014AD9-8C7C-4552-B222-EF11388E4A3E@nokia.com> References: <01b501d3db03$650d4670$2f27d350$@ssu.ac.kr> <720B7050-A658-4E3D-BB60-C9AECC5D4186@nokia.com> <03ef01d3dba2$1e1d06c0$5a571440$@ssu.ac.kr> <74014AD9-8C7C-4552-B222-EF11388E4A3E@nokia.com> Message-ID: <020201d3e029$d2375940$76a60bc0$@ssu.ac.kr> Hello Ifat, The problem is resolved. :) Thank you. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Sunday, April 29, 2018 3:11 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hi Minwook, The following change should fix your problem: https://review.openstack.org/#/c/564471/ Let me know if it helped. Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Tuesday, 24 April 2018 at 10:59 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hello Ifat, I have not checked the alarm yet. (I think it does not work.) However, I confirmed that the entity graph and the topology do not work. Additionally, the CLI does not seem to work either. I'll check it out with you. : ) Thank you. Best Regards, Minwook. From: Afek, Ifat (Nokia - IL/Kfar Sava) [mailto:ifat.afek at nokia.com] Sent: Tuesday, April 24, 2018 4:15 PM To: OpenStack Development Mailing List (not for usage questions) Subject: Re: [openstack-dev] [Vitrage] Vitrage graph error Hi Minwook, Is the problem only in the Entity Graph? Do the Alarms view and the Topology view work? And what about the CLI? I’ll check it and get back to you. Thanks, Ifat From: MinWookKim > Reply-To: "OpenStack Development Mailing List (not for usage questions)" > Date: Monday, 23 April 2018 at 16:02 To: "'OpenStack Development Mailing List (not for usage questions)'" > Subject: [openstack-dev] [Vitrage] Vitrage graph error Hello Vitrage team, A few days ago I used Devstack to install the OpenStack master version, which included Vitrage. However, I found that the Vitrage graph does not work on the Vitrage-dashboard. The state of all Vitrage components is active. Could you check it once? Thanks. Best Regards, Minwook. -------------- next part -------------- A non-text attachment was scrubbed... Name: winmail.dat Type: application/ms-tnef Size: 13550 bytes Desc: not available URL: From gdubreui at redhat.com Mon Apr 30 03:53:02 2018 From: gdubreui at redhat.com (Gilles Dubreuil) Date: Mon, 30 Apr 2018 13:53:02 +1000 Subject: [openstack-dev] [api] REST limitations and GraphQL inception?
Message-ID: Hi, Remember Boston's Summit presentation [1] about GraphQL [2] and how it addresses REST limitations? I wonder if any project has been thinking about using GraphQL; I haven't found any mentions or pointers about it. GraphQL takes a completely different approach compared to REST. So we can finally forget about REST API description languages (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and HATEOAS (the hypermedia approach, which doesn't describe how to use it). So, once past the point where 'REST vs GraphQL' is like comparing SQL and NoSQL DBMSes, which have different applications, there is no doubt that the complexity of most OpenStack projects makes them good candidates for GraphQL. Besides topics such as efficiency, decoupling, and removing the need for version management, there are many other powerful features, such as an API schema out of the box and better automation down that track. It looks like the dream of a conduit between API services and consumers might have finally come true, so we could move on and worry about other things. So has anyone already started looking into it? [1] https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql [2] http://graphql.org From pawel at suder.info Mon Apr 30 06:26:51 2018 From: pawel at suder.info (Paweł Suder) Date: Mon, 30 Apr 2018 08:26:51 +0200 Subject: [openstack-dev] [neutron] Bug deputy report 23-29 April Message-ID: <1525069611.4621.9.camel@suder.info> Hello Team, Last week, from 23 April until 29 April, I was bug deputy for the Neutron project. The following bugs/RFEs were opened: [RFE] Create host-routes for routed networks (segments) https://bugs.launchpad.net/neutron/+bug/1766380 RFE, importance not set. Seems to be very interesting. Confirmed by Miguel (thx!). Needs to be discussed by the drivers team. Trunk Tests are failing often in dvr-multinode scenario job https://bugs.launchpad.net/neutron/+bug/1766701 High, confirmed based on logs from failing jobs. Periodic job neutron-dynamic-routing-dsvm-tempest-with-ryu-master-scenario-ipv4 fails https://bugs.launchpad.net/neutron/+bug/1766702 High, confirmed based on logs from failing jobs. Rally tests job is reaching job timeout often https://bugs.launchpad.net/neutron/+bug/1766703 High, confirmed based on logs from failing jobs. [NEED ATTENTION] the machine running dhcp agent will have very high cpu load when start dhcp agent after the agent down more than 150 seconds https://bugs.launchpad.net/neutron/+bug/1766812 Not yet clarified; due to scale, it will not be easy to triage. Some logs are attached, but the issue might still be very environmental. Not marked as confirmed, importance not set. [OPEN QUESTION]: should it be reproduced somehow? loadbalancer can't create with chinese character name https://bugs.launchpad.net/neutron/+bug/1767028 It could be related to Octavia. Not confirmed; the version of OpenStack used is unknown. Logs from Neutron attached. Importance not set. [OPEN QUESTION]: how should this be linked with another project? character of set image property multiqueue command is wrong https://bugs.launchpad.net/neutron/+bug/1767267 Confirmed doc issue, some typos/command syntax issues. Importance not set. Neutron agent internal ports remain untagged for some time, which makes them trunk ports https://bugs.launchpad.net/neutron/+bug/1767422 Confirmed. Fix proposed. [DVR] br-int in compute node will send unknown unicast to sg-xxx https://bugs.launchpad.net/neutron/+bug/1767811 Clarifying.
Cheers, Paweł From madhuri.kumari at intel.com Mon Apr 30 06:40:16 2018 From: madhuri.kumari at intel.com (Kumari, Madhuri) Date: Mon, 30 Apr 2018 06:40:16 +0000 Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun In-Reply-To: References: Message-ID: <0512CBBECA36994BAA14C7FEDE986CA6042947EB@BGSMSX102.gar.corp.intel.com> Thank you, Hongbin. The article is very helpful. Regards, Madhuri From: Hongbin Lu [mailto:hongbin034 at gmail.com] Sent: Sunday, April 29, 2018 5:16 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] [Zun][k8s] AWS Fargate and OpenStack Zun Hi folks, FYI. I wrote a blog post about a comparison between AWS Fargate and OpenStack Zun. It mainly covers the following: * The basic concepts of OpenStack Zun and AWS Fargate * The Kubernetes integration plan Here is the link: https://www.linkedin.com/pulse/aws-fargate-openstack-zun-comparing-serverless-container-hongbin-lu/ Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Apr 30 07:12:38 2018 From: skaplons at redhat.com (Slawomir Kaplonski) Date: Mon, 30 Apr 2018 09:12:38 +0200 Subject: [openstack-dev] Is there any way to recheck only one job? Message-ID: Hi, I wonder if there is any way to recheck only one type of job instead of rechecking everything. For example, sometimes I have to debug some random failure in a specific job type, like „neutron-fullstack”, and I want to collect some additional data or test something. In such a case I push a „Do not merge” patch and wait for the job results - but I really don’t care about e.g. pep8 or UT results, so it would be good if I could run (recheck) only the job I want. That could save some resources for other jobs and speed up my tests a little, as I would be able to recheck only my job faster :) Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be considered as a new feature to add? What do you think about it? — Best regards Slawek Kaplonski skaplons at redhat.com From lennyb at mellanox.com Mon Apr 30 07:32:58 2018 From: lennyb at mellanox.com (Lenny Berkhovsky) Date: Mon, 30 Apr 2018 07:32:58 +0000 Subject: [openstack-dev] Is there any way to recheck only one job? In-Reply-To: References: Message-ID: If your CI is using zuul, then you can try updating your zuul gerrit event comment in /etc/zuul/layout/layout.yaml accordingly. Since most (if not all) CIs are triggered by a 'recheck' comment, you can use your own custom one:
precedence: low
trigger:
  gerrit:
    - event: patchset-created
    - event: change-restored
    - event: comment-added
      comment:
Lenny -----Original Message----- From: Slawomir Kaplonski [mailto:skaplons at redhat.com] Sent: Monday, April 30, 2018 10:13 AM To: OpenStack Development Mailing List (not for usage questions) Subject: [openstack-dev] Is there any way to recheck only one job? Hi, I wonder if there is any way to recheck only one type of job instead of rechecking everything. For example, sometimes I have to debug some random failure in a specific job type, like „neutron-fullstack”, and I want to collect some additional data or test something. In such a case I push a „Do not merge” patch and wait for the job results - but I really don’t care about e.g. pep8 or UT results, so it would be good if I could run (recheck) only the job I want.
That could save some resources for other jobs and speed up my tests a little, as I would be able to recheck only my job faster :) Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be considered as a new feature to add? What do you think about it? — Best regards Slawek Kaplonski skaplons at redhat.com __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From j.harbott at x-ion.de Mon Apr 30 08:41:09 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Mon, 30 Apr 2018 08:41:09 +0000 Subject: [openstack-dev] Is there any way to recheck only one job? In-Reply-To: References: Message-ID: 2018-04-30 7:12 GMT+00:00 Slawomir Kaplonski : > Hi, > > I wonder if there is any way to recheck only one type of job instead of rechecking everything. > For example, sometimes I have to debug some random failure in a specific job type, like „neutron-fullstack”, and I want to collect some additional data or test something. In such a case I push a „Do not merge” patch and wait for the job results - but I really don’t care about e.g. pep8 or UT results, so it would be good if I could run (recheck) only the job I want. That could save some resources for other jobs and speed up my tests a little, as I would be able to recheck only my job faster :) > > Is there any way that I can do it with gerrit and zuul currently? Or maybe it could be considered as a new feature to add? What do you think about it? This is intentionally not implemented, as it could be used to trick patches leading to unstable behaviour into passing too easily, hiding possible issues. As an alternative, you could include a change to .zuul.yaml in your test patch, removing all jobs except the one you are interested in. This would still run the jobs defined in project-config, but may be good enough for your scenario. From a.chadin at servionica.ru Mon Apr 30 08:41:15 2018 From: a.chadin at servionica.ru (Чадин Александр) Date: Mon, 30 Apr 2018 08:41:15 +0000 Subject: [openstack-dev] [watcher] May’s holidays Message-ID: <5084BD56-F4BC-484E-B69F-5742BB989197@servionica.ru> Hi Watcher team. I won’t be available on IRC till Wednesday because of national holidays. Some reviews and patch sets will be done during this time. Have a nice day! —— Alex From j.harbott at x-ion.de Mon Apr 30 09:37:21 2018 From: j.harbott at x-ion.de (Jens Harbott) Date: Mon, 30 Apr 2018 09:37:21 +0000 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <5AB9797D.1090209@tipit.net> References: <5AB9797D.1090209@tipit.net> Message-ID: 2018-03-26 22:51 GMT+00:00 Jimmy Mcarthur : > Hi everyone, > > We recently made the tough decision, in conjunction with the dedicated > volunteers that run TryStack, to end the service as of March 29, 2018. For > those of you that used it, thank you for being part of the TryStack > community.
> > The good news is that you can find more resources to try OpenStack at > > http://www.openstack.org/start, including the Passport Program, where you > > can test on any participating public cloud. If you are looking to test > > different tools or application stacks with OpenStack clouds, you should > > check out Open Lab. > > > > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the > > many other volunteers who have managed this valuable service for the last > > several years! Your contribution to OpenStack was noticed and appreciated > > by many in the community. > > Seems it would be great if https://trystack.openstack.org/ would be > updated with this information, according to comments in #openstack > users are still landing on that page and try to get a stack there in > vain. From gael.therond at gmail.com Mon Apr 30 10:16:51 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Mon, 30 Apr 2018 10:16:51 +0000 Subject: [openstack-dev] [api] REST limitations and GraphQL inception? In-Reply-To: References: Message-ID: I would very much like to second that question! Indeed, it has been one of my own wonderings for quite some time. Of course GraphQL is not intended to replace REST as-is and would have to live in parallel with it, but it would very likely accelerate requests considerably in heavily loaded environments. So +1 for this question. On Mon, Apr 30, 2018 at 05:53, Gilles Dubreuil wrote: > Hi, > > Remember Boston's Summit presentation [1] about GraphQL [2] and how it > addresses REST limitations? > I wonder if any project has been thinking about using GraphQL; I haven't > found any mentions or pointers about it. > > GraphQL takes a completely different approach compared to REST. So we can > finally forget about REST API description languages > (OpenAPI/Swagger/WSDL/WADL/JSON-API/etc.) and HATEOAS (the hypermedia > approach, which doesn't describe how to use it). > > So, once past the point where 'REST vs GraphQL' is like comparing SQL > and NoSQL DBMSes, which have different applications, there is no > doubt that the complexity of most OpenStack projects makes them good candidates for > GraphQL. > > Besides topics such as efficiency, decoupling, and removing the need for version management, > there are many other powerful features, such as an API schema out of the > box and better automation down that track. > > It looks like the dream of a conduit between API services and consumers > might have finally come true, so we could move on and worry about other > things. > > So has anyone already started looking into it? > > [1] > > https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql > [2] http://graphql.org > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Apr 30 11:33:38 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 30 Apr 2018 13:33:38 +0200 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> References: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> Message-ID: <6667f21d-81f4-1f61-a365-43aea343c0e9@redhat.com> Hi, On 04/29/2018 10:17 PM, Michael Turek wrote: > Awesome!
If everyone doesn't mind the short notice, we'll have it again this > Thursday @ 1:00 PM to 3:00 PM UTC. ++ > > I can provide video conferencing through hangouts here https://goo.gl/xSKBS4 > Let's give that a shot this time! Note that the last time I checked Hangouts video messaging required a proprietary browser plugin (and hence did not work in Firefox). Using it may exclude people not accepting proprietary software and/or avoiding using Chromium. > > We can adjust times, tooling, and regular agenda over the next couple meetings > and see where we settle. If anyone has any questions or suggestions, don't > hesitate to reach out to me! > > Thanks, > Mike Turek > > > On 4/25/18 12:11 PM, Julia Kreger wrote: >> On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek >> wrote: >> >>> What does everyone think about having Bug Day the first Thursday of every >>> month? >> All for it! >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From mjturek at linux.vnet.ibm.com Mon Apr 30 12:39:24 2018 From: mjturek at linux.vnet.ibm.com (Michael Turek) Date: Mon, 30 Apr 2018 08:39:24 -0400 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: <6667f21d-81f4-1f61-a365-43aea343c0e9@redhat.com> References: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> <6667f21d-81f4-1f61-a365-43aea343c0e9@redhat.com> Message-ID: Just tried this and seems like Firefox does still require a browser plugin. Julia, could we use your bluejeans line again? Thanks! Mike Turek On 4/30/18 7:33 AM, Dmitry Tantsur wrote: > Hi, > > On 04/29/2018 10:17 PM, Michael Turek wrote: >> Awesome! If everyone doesn't mind the short notice, we'll have it >> again this Thursday @ 1:00 PM to 3:00 PM UTC. > > ++ > >> >> I can provide video conferencing through hangouts here >> https://goo.gl/xSKBS4 >> Let's give that a shot this time! > > Note that the last time I checked Hangouts video messaging required a > proprietary browser plugin (and hence did not work in Firefox). Using > it may exclude people not accepting proprietary software and/or > avoiding using Chromium. > >> >> We can adjust times, tooling, and regular agenda over the next couple >> meetings and see where we settle. If anyone has any questions or >> suggestions, don't hesitate to reach out to me! >> >> Thanks, >> Mike Turek >> >> >> On 4/25/18 12:11 PM, Julia Kreger wrote: >>> On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek >>> wrote: >>> >>>> What does everyone think about having Bug Day the first Thursday of >>>> every >>>> month? >>> All for it! 
>>> >>> __________________________________________________________________________ >>> >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: >>> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Mon Apr 30 12:53:33 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 07:53:33 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: References: <5AB9797D.1090209@tipit.net> Message-ID: <5AE711CD.7050607@openstack.org> Hmm. It appears our redirects have stopped working. Checking on this... > Jens Harbott > April 30, 2018 at 4:37 AM > > Seems it would be great if https://trystack.openstack.org/ would be > updated with this information, according to comments in #openstack > users are still landing on that page and try to get a stack there in > vain. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy Mcarthur > March 26, 2018 at 5:51 PM > Hi everyone, > > We recently made the tough decision, in conjunction with the dedicated > volunteers that run TryStack, to end the service as of March 29, > 2018. For those of you that used it, thank you for being part of the > TryStack community. > > The good news is that you can find more resources to try OpenStack at > http://www.openstack.org/start, including the Passport Program > , where you can test on any > participating public cloud. If you are looking to test different tools > or application stacks with OpenStack clouds, you should check out Open > Lab . > > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and > the many other volunteers who have managed this valuable service for > the last several years! Your contribution to OpenStack was noticed > and appreciated by many in the community. > > Cheers, > Jimmy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.lei.fly at gmail.com Mon Apr 30 12:54:28 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 30 Apr 2018 20:54:28 +0800 Subject: [openstack-dev] [kolla][vote]Retire kolla-kubernetes project In-Reply-To: References: Message-ID: Thanks all guys. Per the voting result, we will retire kolla-kubernetes project. 
On Tue, Apr 24, 2018 at 10:48 PM, Surya Singh wrote: > +1 > > As we haven't had an active core team in Kolla-kubernetes for months, > unfortunately going for sunset is reasonable. > > Though I'm happy to help with running OpenStack on Kubernetes. > > --- > Thanks > Surya > > > > On Wed, Apr 18, 2018 at 7:21 AM, Jeffrey Zhang > wrote: > > Since many of the contributors to the kolla-kubernetes project have moved > on to > > other things, there have been no active contributors for months. On the > other > > hand, there is another comparable project, openstack-helm, in the > community. > > To reduce confusion and avoid wasting community resources, I propose to retire > > the kolla-kubernetes project. > > > > For more discussion about this, you can check the mail [0] and patch [1]. > > > > Please vote +1 to retire the repo, or -1 not to retire the repo. The vote > > will be open until everyone has voted, or for 1 week until April 25th, > 2018. > > > > [0] > > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128822.html > > [1] https://review.openstack.org/552531 > > > > -- > > Regards, > > Jeffrey Zhang > > Blog: http://xcodest.me > > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Apr 30 12:58:06 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 30 Apr 2018 14:58:06 +0200 Subject: [openstack-dev] [ironic] [all] The last reminder about the classic drivers removal In-Reply-To: References: Message-ID: Hi all, This is the last reminder that the classic drivers will be removed from ironic. We plan to finish the removal before Rocky-2. See below for the information on migration. If for some reason we need to delay the removal, please speak up NOW. Note that I'm personally not inclined to delay it past Rocky, since it requires my time and effort to track this process. Cheers, Dmitry On 03/06/2018 12:11 PM, Dmitry Tantsur wrote: > Hi all, > > As you may already know, we have deprecated classic drivers in the Queens > release. We don't have specific removal plans yet. But according to the > deprecation policy we may remove them at any time after May 1st, which will be > halfway to Rocky milestone 2. Personally, I'd like to do it around then. > > The `online_data_migrations` script will handle migrating nodes, if all required > hardware interfaces and types are enabled before the upgrade to Queens. > Otherwise, check the documentation [1] on how to update your nodes. > > Dmitry > > [1] https://docs.openstack.org/ironic/latest/admin/upgrade-to-hardware-types.html From zhang.lei.fly at gmail.com Mon Apr 30 13:06:27 2018 From: zhang.lei.fly at gmail.com (Jeffrey Zhang) Date: Mon, 30 Apr 2018 21:06:27 +0800 Subject: [openstack-dev] [Zun][Kolla][Kolla-ansible] Verify Zun deployment in Kolla gate In-Reply-To: References: Message-ID: Thanks Hongbin. In Kolla, one job is used to test multiple OpenStack services.
There are already two test scenarios: 1. without ceph 2. with ceph Each scenario tests a series of OpenStack services, like nova, neutron, cinder, etc. Zun and kuryr are not tested now, but I think it is OK to add a new scenario to test network-related services, like zun and kuryr. For tempest testing, there is a WIP bp for this [0]. [0] https://blueprints.launchpad.net/kolla-ansible/+spec/tempest-gate On Sun, Apr 29, 2018 at 5:14 AM, Hongbin Lu wrote: > Hi Kolla team, > > Recently, I saw there are users who tried to install Zun by using > Kolla-ansible and reported bugs to us whenever they ran into issues (e.g. > https://bugs.launchpad.net/kolla-ansible/+bug/1766151). The increase of > this usage pattern (Kolla + Zun) made me think that we need to have CI > coverage to verify the Zun deployment set up by Kolla. > > IMHO, the ideal CI workflow should be: > > * Create a VM with different distros (e.g. Ubuntu, CentOS). > * Use Kolla-ansible to stand up a Zun deployment. > * Run Zun's tempest test suite [1] against the deployment. > > My question for the Kolla team is whether it is reasonable to set up a Zuul job as > described above, or whether such CI jobs already exist. If not, how do we create one? > > [1] https://github.com/openstack/zun-tempest-plugin > > Best regards, > Hongbin > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -- Regards, Jeffrey Zhang Blog: http://xcodest.me -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihailmed at gmail.com Mon Apr 30 13:18:19 2018 From: mihailmed at gmail.com (Mikhail Medvedev) Date: Mon, 30 Apr 2018 08:18:19 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: On Sun, Apr 29, 2018 at 4:29 PM, Ed Leafe wrote: > > Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. > > So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? > Internal cloud that is used for Power KVM CI single use VMs: AvailabilityZoneFilter AggregateMultiTenancyIsolation RetryFilter RamFilter ComputeFilter ComputeCapabilitiesFilter ImagePropertiesFilter CoreFilter NumInstancesFilter * NUMATopologyFilter NumInstancesFilter is a custom weigher I have added that returns negative number of instances on a host. Using it this way gives an even spread of instances over the compute nodes up to a point the compute cores are filled up evenly, then it overflows to the compute nodes with more CPU cores. Maybe it is possible to achieve the same with existing filters, at the time I did not see how.
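(To make that concrete: a weigher along those lines is only a few lines of Python. The sketch below is hand-written against nova's standard weigher plugin API and is purely illustrative - it is not the actual locally maintained code, and the class name here is made up:)

from nova.scheduler import weights

class FewestInstancesWeigher(weights.BaseHostWeigher):
    """Prefer hosts with fewer instances.

    Returning the negated instance count gives an empty host the
    highest weight, so instances spread evenly until the hosts fill
    up and the scheduler overflows to hosts with more capacity.
    """

    def _weigh_object(self, host_state, weight_properties):
        # HostState tracks how many instances currently run on the host.
        return -host_state.num_instances

Such a class is enabled by pointing nova.conf's scheduler_weight_classes option at its import path, as in the setting mentioned later in this thread.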
--- Mikhail Medvedev IBM From jaypipes at gmail.com Mon Apr 30 13:46:48 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 30 Apr 2018 09:46:48 -0400 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: <02e8e0fb-4b60-e4f3-9c4b-27aac5bb9e62@gmail.com> On 04/30/2018 09:18 AM, Mikhail Medvedev wrote: > On Sun, Apr 29, 2018 at 4:29 PM, Ed Leafe wrote: >> >> Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. >> >> So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? >> > > Internal cloud that is used for Power KVM CI single use VMs: > > AvailabilityZoneFilter > AggregateMultiTenancyIsolation > RetryFilter > RamFilter > ComputeFilter > ComputeCapabilitiesFilter > ImagePropertiesFilter > CoreFilter > NumInstancesFilter * > NUMATopologyFilter > > NumInstancesFilter is a custom weigher I have added that returns > negative number of instances on a host. Using it this way gives an > even spread of instances over the compute nodes up to a point the > compute cores are filled up evenly, then it overflows to the compute > nodes with more CPU cores. Maybe it is possible to achieve the same > with existing filters, at the time I did not see how. Hi Mikhail, Did you mean to say you created a new *weigher*, not filter? Best, -jay From mihailmed at gmail.com Mon Apr 30 14:07:55 2018 From: mihailmed at gmail.com (Mikhail Medvedev) Date: Mon, 30 Apr 2018 09:07:55 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: <02e8e0fb-4b60-e4f3-9c4b-27aac5bb9e62@gmail.com> References: <02e8e0fb-4b60-e4f3-9c4b-27aac5bb9e62@gmail.com> Message-ID: On Mon, Apr 30, 2018 at 8:46 AM, Jay Pipes wrote: > On 04/30/2018 09:18 AM, Mikhail Medvedev wrote: >> >> On Sun, Apr 29, 2018 at 4:29 PM, Ed Leafe wrote: >>> >>> >>> Another data point that might be illuminating is: how many sites use a >>> custom (i.e., not in-tree) filter or weigher? One of the original design >>> tenets of the scheduler was that we did not want to artificially limit what >>> people could use to control their deployments, but inside of Nova there is a >>> lot of confusion as to whether anyone is using anything but the included >>> filters. >>> >>> So - does anyone out there rely on a filter and/or weigher that they >>> wrote themselves, and maintain outside of OpenStack? >>> >> >> Internal cloud that is used for Power KVM CI single use VMs: >> >> AvailabilityZoneFilter >> AggregateMultiTenancyIsolation >> RetryFilter >> RamFilter >> ComputeFilter >> ComputeCapabilitiesFilter >> ImagePropertiesFilter >> CoreFilter >> NumInstancesFilter * >> NUMATopologyFilter >> >> NumInstancesFilter is a custom weigher I have added that returns >> negative number of instances on a host. Using it this way gives an >> even spread of instances over the compute nodes up to a point the >> compute cores are filled up evenly, then it overflows to the compute >> nodes with more CPU cores. Maybe it is possible to achieve the same >> with existing filters, at the time I did not see how. 
Correction: above describes custom weigher I've added, not the in-tree NumInstancesFilter. > > > Hi Mikhail, > > Did you mean to say you created a new *weigher*, not filter? Jay, thanks for spotting this, been awhile since I've done it. NumInstancesFilter is a standard filter, so I obviously did not write it. I've added a custom weigher that I have created (scheduler_weight_classes=pkvmci-os.nova.scheduler.weights.instance.InstanceWeigher) and maintain locally. > > Best, > -jay > > From pabelanger at redhat.com Mon Apr 30 14:23:34 2018 From: pabelanger at redhat.com (Paul Belanger) Date: Mon, 30 Apr 2018 10:23:34 -0400 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: References: <5AB9797D.1090209@tipit.net> Message-ID: <20180430142334.GB10224@localhost.localdomain> On Mon, Apr 30, 2018 at 09:37:21AM +0000, Jens Harbott wrote: > 2018-03-26 22:51 GMT+00:00 Jimmy Mcarthur : > > Hi everyone, > > > > We recently made the tough decision, in conjunction with the dedicated > > volunteers that run TryStack, to end the service as of March 29, 2018. For > > those of you that used it, thank you for being part of the TryStack > > community. > > > > The good news is that you can find more resources to try OpenStack at > > http://www.openstack.org/start, including the Passport Program, where you > > can test on any participating public cloud. If you are looking to test > > different tools or application stacks with OpenStack clouds, you should > > check out Open Lab. > > > > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and the > > many other volunteers who have managed this valuable service for the last > > several years! Your contribution to OpenStack was noticed and appreciated > > by many in the community. > > Seems it would be great if https://trystack.openstack.org/ would be > updated with this information, according to comments in #openstack > users are still landing on that page and try to get a stack there in > vain. > The code is hosted by openstack-infra[1], if somebody would like to propose a patch with the new information. [1] http://git.openstack.org/cgit/openstack-infra/trystack-site From mark at stackhpc.com Mon Apr 30 14:33:52 2018 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 30 Apr 2018 14:33:52 +0000 Subject: [openstack-dev] [Zun][Kolla][Kolla-ansible] Verify Zun deployment in Kolla gate In-Reply-To: References: Message-ID: Hi, This is something I've been thinking about recently. In particular, I noticed a patch go by to fix the same issue in the magnum role that has been broken and fixed previously. Kolla needs to up its game in terms of CI testing. At the very least, we need tests that verify that services can be deployed. Even if we don't verify that the deployed service is functional, this will be an improvement from where we are today. As with many things, we won't get there in a single leap, but should look to incrementally improve test coverage, perhaps with a set of milestones spanning multiple releases. I suggest our first step should be to add a set of experimental jobs for testing particular services. These would not run against every patch, but could be invoked on demand by commenting 'check experimental' on a patch in Gerrit. For many services this could be done simply by setting 'enable_=true' in config. There are many paths we could take from there, but perhaps this would be best discussed at the next PTG? 
Cheers, Mark On Mon, 30 Apr 2018, 14:07 Jeffrey Zhang, wrote: > Thanks hongbin > > In Kolla, one job is used to test multi OpenStack services. there are > already two test scenarios. > > 1. without ceph > 2. with ceph > > each scenario test a serial of OpenStack services. like nova, neutron, > cinder etc. > Zun or kuryr is not tested now. But i think it is OK to add a new > scenario to test network related > service, like zun and kuryr. > > for tempest testing, there is a WIP bp for this[0] > > [0] https://blueprints.launchpad.net/kolla-ansible/+spec/tempest-gate > > On Sun, Apr 29, 2018 at 5:14 AM, Hongbin Lu wrote: > >> Hi Kolla team, >> >> Recently, I saw there are users who tried to install Zun by using >> Kolla-ansible and reported bugs to us whenever they ran into issues (e.g. >> https://bugs.launchpad.net/kolla-ansible/+bug/1766151). The increase of >> this usage pattern (Kolla + Zun) made me think that we need to have CI >> coverage to verify the Zun deployment setup by Kolla. >> >> IMHO, the ideal CI workflow should be: >> >> * Create a VM with different distros (i.e. Ubuntu, CentOS). >> * Use Kolla-ansible to stand up a Zun deployment. >> * Run Zun's tempest test suit [1] against the deployment. >> >> My question for Kolla team is if it is reasonable to setup a Zuul job as >> described above? or such CI jobs already exist? If not, how to create one? >> >> [1] https://github.com/openstack/zun-tempest-plugin >> >> Best regards, >> Hongbin >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> > > > -- > Regards, > Jeffrey Zhang > Blog: http://xcodest.me > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Apr 30 14:34:15 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 09:34:15 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <20180430142334.GB10224@localhost.localdomain> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> Message-ID: <5AE72967.3050100@openstack.org> I'm working on redirecting trystack.openstack.org to openstack.org/software/start. We have redirects in place for trystack.org, but didn't realize trystack.openstack.org as a thing as well. > Paul Belanger > April 30, 2018 at 9:23 AM > The code is hosted by openstack-infra[1], if somebody would like to > propose a > patch with the new information. > > [1] http://git.openstack.org/cgit/openstack-infra/trystack-site > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jens Harbott > April 30, 2018 at 4:37 AM > > Seems it would be great if https://trystack.openstack.org/ would be > updated with this information, according to comments in #openstack > users are still landing on that page and try to get a stack there in > vain. 
> > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy Mcarthur > March 26, 2018 at 5:51 PM > Hi everyone, > > We recently made the tough decision, in conjunction with the dedicated > volunteers that run TryStack, to end the service as of March 29, > 2018. For those of you that used it, thank you for being part of the > TryStack community. > > The good news is that you can find more resources to try OpenStack at > http://www.openstack.org/start, including the Passport Program > , where you can test on any > participating public cloud. If you are looking to test different tools > or application stacks with OpenStack clouds, you should check out Open > Lab . > > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and > the many other volunteers who have managed this valuable service for > the last several years! Your contribution to OpenStack was noticed > and appreciated by many in the community. > > Cheers, > Jimmy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From dms at danplanet.com Mon Apr 30 14:54:32 2018 From: dms at danplanet.com (Dan Smith) Date: Mon, 30 Apr 2018 07:54:32 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: (Chen CH Ji's message of "Fri, 27 Apr 2018 17:40:20 +0800") References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> Message-ID: > According to requirements and comments, now we opened the CI runs with > run_validation = True And according to [1] below, for example, [2] > need the ssh validation passed the test > > And there are a couple of comments need some enhancement on the logs > of CI such as format and legacy incorrect links of logs etc the newest > logs sample can be found [3] (take n-cpu as example and those logs are > with _white.html) > > Also, the blueprint [4] requested by previous discussion post here > again for reference > > please let us know whether the procedure -2 can be removed in order to > proceed . thanks for your help The CI log format issues look fixed to me and validation is turned on for the stuff supported, which is what was keeping it out of the runway. I still plan to leave the -2 on there until the next few patches have agreement, just so we don't land an empty shell driver before we are sure we're going to land spawn/destroy, etc. That's pretty normal procedure and I'll be around to remove it when appropriate. 
--Dan From doug at doughellmann.com Mon Apr 30 15:02:34 2018 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 30 Apr 2018 11:02:34 -0400 Subject: [openstack-dev] [all][tc] final stages of python 3 transition In-Reply-To: <1524766935.803604.1351983704.474E0252@webmail.messagingengine.com> References: <1524689037-sup-783@lrrr.local> <20180426142731.GA18842@sm-xps> <1524766935.803604.1351983704.474E0252@webmail.messagingengine.com> Message-ID: <1525100487-sup-3968@lrrr.local> Excerpts from Clark Boylan's message of 2018-04-26 11:22:15 -0700: > On Thu, Apr 26, 2018, at 7:27 AM, Sean McGinnis wrote: > > On Wed, Apr 25, 2018 at 04:54:46PM -0400, Doug Hellmann wrote: > > > It's time to talk about the next steps in our migration from python > > > 2 to python 3. > > > > > > [...] > > > > > > 2. Change (or duplicate) all functional test jobs to run under > > > python 3. > > > > As a test I ran Cinder functional and unit test jobs on bionic using 3.6. All > > went well. > > > > That made me realize something though - right now we have jobs that explicitly > > say py35, both for unit tests and functional tests. But I realized setting up > > these test jobs that it works to just specify "basepython = python3" or run > > unit tests with "tox -e py3". Then with that, it just depends on whether the > > job runs on xenial or bionic as to whether the job is run with py35 or py36. > > > > It is less explicit, so I see some downside to that, but would it make sense to > > change jobs to drop the minor version to make it more flexible and easy to make > > these transitions? > > One reason to use it would be local user simplicity. Rather than need to explicitly add new python3 releases to the default env list so that it does what we want every year or two we can just list py3,py2,linters in the default list and get most of the way there for local users. Then we can continue to be more specific in the CI jobs if that is desirable. > > I do think we likely want to be explicit about the python versions we are using in CI testing. This makes it clear to developers who may need to reproduce or just understand why failures happen what platform is used. It also makes it explicit that "openstack runs on $pythonversion". > > Clark > Including support for local users to refer to "py3" makes sense, as long as we don't come to rely on it in CI. Users can also always be more explicit if they need to be when running tests locally. Doug From corvus at inaugust.com Mon Apr 30 15:03:32 2018 From: corvus at inaugust.com (James E. Blair) Date: Mon, 30 Apr 2018 08:03:32 -0700 Subject: [openstack-dev] Zuul memory improvements Message-ID: <87wowo4tyz.fsf@meyer.lemoncheese.net> Hi, We recently made some changes to Zuul which you may want to know about if you interact with a large number of projects. Previously, each change to Zuul which updated Zuul's configuration (e.g., a change to a project's zuul.yaml file) would consume a significant amount of memory. If we had too many of these in the queue at a time, the server would run out of RAM. To mitigate this, we asked folks who regularly submit large numbers of configuration changes to only submit a few at a time. We have updated Zuul so it now caches much more of its configuration, and the cost in memory of an additional configuration change is very small. An added bonus: they are computed more quickly as well. 
Of course, there's still a cost to every change pushed up to Gerrit -- each one uses test nodes, for instance, so if you need to make a large number of changes, please do consider the impact to the whole system and other users. However, there's no longer a need to severely restrict configuration changes as a class -- consider them as any other change. -Jim From fungi at yuggoth.org Mon Apr 30 15:09:56 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Apr 2018 15:09:56 +0000 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <20180430142334.GB10224@localhost.localdomain> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> Message-ID: <20180430150955.d22pakf7d5cv2x67@yuggoth.org> On 2018-04-30 10:23:34 -0400 (-0400), Paul Belanger wrote: [...] > The code is hosted by openstack-infra[1], if somebody would like > to propose a patch with the new information. > > [1] http://git.openstack.org/cgit/openstack-infra/trystack-site Yes, ideally it'd just be something along the lines of a README.rst with a sentence or two about what happened, and removing the other content from the repository. Basically it can just follow our https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project directions. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Mon Apr 30 15:12:55 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Apr 2018 15:12:55 +0000 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <5AE72967.3050100@openstack.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> Message-ID: <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> On 2018-04-30 09:34:15 -0500 (-0500), Jimmy McArthur wrote: > I'm working on redirecting trystack.openstack.org to > openstack.org/software/start. We have redirects in place for > trystack.org, but didn't realize trystack.openstack.org as a thing > as well. [...] Yes, before the TryStack effort was closed down, there had been a plan for trystack.org to redirect to a trystack.openstack.org site hosted in the community infrastructure. At this point I expect we can just rip out the section for it from https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp as DNS appears to no longer be pointed there. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From emilien at redhat.com Mon Apr 30 15:33:14 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 30 Apr 2018 08:33:14 -0700 Subject: [openstack-dev] The Forum Schedule is now live In-Reply-To: <5AE34A02.8020802@openstack.org> References: <5AE34A02.8020802@openstack.org> Message-ID: On Fri, Apr 27, 2018 at 9:04 AM, Jimmy McArthur wrote: > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > Why TripleO doesn't have project update? Maybe we could combine it with TripleO - Project Onboarding if needed but it would be great to have it advertised as a project update! Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From amy at demarco.com Mon Apr 30 15:44:15 2018 From: amy at demarco.com (Amy Marrich) Date: Mon, 30 Apr 2018 10:44:15 -0500 Subject: [openstack-dev] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> Message-ID: Emilien, I believe that the Project Updates are separate from the Forum? I know I saw some in the schedule before the Forum submittals were even closed. Maybe contact speaker support or Jimmy will answer here. Thanks, Amy (spotz) On Mon, Apr 30, 2018 at 10:33 AM, Emilien Macchi wrote: > > > On Fri, Apr 27, 2018 at 9:04 AM, Jimmy McArthur > wrote: > >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. >> > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed but > it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Apr 30 15:47:47 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 10:47:47 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> Message-ID: <5AE73AA3.4030408@openstack.org> Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! 
> Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Apr 30 15:58:24 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 30 Apr 2018 15:58:24 +0000 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE73AA3.4030408@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> Both are currently empty. From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 10:48 AM To: Amy Marrich Cc: OpenStack Development Mailing List (not for usage questions); OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 Amy Marrich April 30, 2018 at 10:44 AM Emilien, I believe that the Project Updates are separate from the Forum? I know I saw some in the schedule before the Forum submittals were even closed. Maybe contact speaker support or Jimmy will answer here. Thanks, Amy (spotz) _______________________________________________ OpenStack-operators mailing list OpenStack-operators at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators Emilien Macchi April 30, 2018 at 10:33 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Why TripleO doesn't have project update? Maybe we could combine it with TripleO - Project Onboarding if needed but it would be great to have it advertised as a project update! Thanks, -- Emilien Macchi __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Jimmy McArthur April 27, 2018 at 11:04 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Thank you and see you in Vancouver! Jimmy __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From corvus at inaugust.com Mon Apr 30 15:58:33 2018 From: corvus at inaugust.com (James E. 
Blair) Date: Mon, 30 Apr 2018 08:58:33 -0700 Subject: [openstack-dev] Overriding project-templates in Zuul Message-ID: <87o9i04rfa.fsf@meyer.lemoncheese.net> Hi, If you've had difficulty overriding jobs in project-templates, please read and provide feedback on this proposed change. We tried to make the Zuul v3 configuration language as intuitive as possible, and incorporated a lot that we learned from our years running Zuul v2. One thing that we didn't anticipate was how folks would end up wanting to use a job in both project-templates *and* local project stanzas. Essentially, we had assumed that if you wanted to control how a job was run, you would add it to a project stanza directly rather than use a project-template. It's easy to do so if you use one or the other. However, it turns out there are lots of good reasons to use both. For example, in a project-template we may want to establish a recommended way to run a job, or that a job should always be run with a set of related jobs. Yet a project may still want to indicate that the job should only run on certain changes in that specific repo. To be very specific -- a very commonly expressed frustration is that a project can't specify a "files" or "irrelevant-files" matcher to override a job that appears in a project-template. Reconciling those is difficult, largely because once Zuul decides to run a job (for example, by a declaration in a project-template) it is impossible to dissuade it from running that job by adding any extra configuration to a project. We need to tread carefully when fixing this, because quite a number of related concepts could be affected. For instance, we need to preserve branch independence (a change to stop running a job in one branch shouldn't affect others). And we need to preserve the ability for job variants to layer on to each other (a project-local variant should still be able to alter a variant in a project-template). I propose that we remedy this by making a small change to how Zuul determines that a job should run: When a job appears multiple times on a project (for instance, if it appears in a project-template and also on the project itself), all of the project-local variants which match the item's branch must also match the item in order for the job to run. In other words, if a job appears in a project-template used by a project and on the project, then both must match. This effectively causes the "files" and "irrelevant-files" attributes on all of the project-local job definitions matching a given branch to be combined. The combination of multiple files matchers behaves as a union, and irrelevant-files matchers as an intersection.

================ ======== ======= =======
Matcher          Template Project Result
================ ======== ======= =======
files            AB       BC      ABC
irrelevant-files AB       BC      B
================ ======== ======= =======

I believe this will address the shortcoming identified above, but before we get too far in implementing it, I'd like to ask folks to take a moment and evaluate whether it will address the issues you've seen, or if you foresee any problems which I haven't anticipated. Thanks, Jim
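To make the proposed combination semantics concrete, here is a small illustrative sketch in plain Python -- not Zuul's actual code, and the helper names are invented for the example; real Zuul matchers operate on regular expressions against changed file paths, while sets of file names are used here to keep the logic visible:

    # Project-local "files" matchers union: a job runs if ANY applicable
    # variant's files matcher accepts the change.
    def combined_files(template_files, project_files):
        return set(template_files) | set(project_files)

    # "irrelevant-files" matchers intersect: a change is skipped only if
    # EVERY applicable variant agrees it is irrelevant.
    def combined_irrelevant_files(template_irr, project_irr):
        return set(template_irr) & set(project_irr)

    # Reproducing the table above, with template AB and project BC:
    print(sorted(combined_files({"A", "B"}, {"B", "C"})))             # A B C
    print(sorted(combined_irrelevant_files({"A", "B"}, {"B", "C"})))  # B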
From dtantsur at redhat.com Mon Apr 30 16:00:25 2018 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 30 Apr 2018 18:00:25 +0200 Subject: [openstack-dev] [ironic] Monthly bug day? In-Reply-To: References: <7e2b39d4-d74b-0953-fb61-659c0a6b4e7e@linux.vnet.ibm.com> <6667f21d-81f4-1f61-a365-43aea343c0e9@redhat.com> Message-ID: <4c165699-6602-0528-200a-8a69481b39c5@redhat.com> I've created a bluejeans channel for this meeting: https://bluejeans.com/309964257. I may be late for it, but I've set it up to be usable even without me. On 04/30/2018 02:39 PM, Michael Turek wrote: > Just tried this and seems like Firefox does still require a browser plugin. > > Julia, could we use your bluejeans line again? > > Thanks! > Mike Turek > > > On 4/30/18 7:33 AM, Dmitry Tantsur wrote: >> Hi, >> >> On 04/29/2018 10:17 PM, Michael Turek wrote: >>> Awesome! If everyone doesn't mind the short notice, we'll have it again this >>> Thursday @ 1:00 PM to 3:00 PM UTC. >> >> ++ >> >>> >>> I can provide video conferencing through hangouts here https://goo.gl/xSKBS4 >>> Let's give that a shot this time! >> >> Note that the last time I checked Hangouts video messaging required a >> proprietary browser plugin (and hence did not work in Firefox). Using it may >> exclude people not accepting proprietary software and/or avoiding using Chromium. >> >>> >>> We can adjust times, tooling, and regular agenda over the next couple >>> meetings and see where we settle. If anyone has any questions or suggestions, >>> don't hesitate to reach out to me! >>> >>> Thanks, >>> Mike Turek >>> >>> >>> On 4/25/18 12:11 PM, Julia Kreger wrote: >>>> On Mon, Apr 23, 2018 at 12:04 PM, Michael Turek >>>> wrote: >>>> >>>>> What does everyone think about having Bug Day the first Thursday of every >>>>> month? >>>> All for it! >>>> >>>> __________________________________________________________________________ >>>> OpenStack Development Mailing List (not for usage questions) >>>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >>> >>> >>> __________________________________________________________________________ >>> OpenStack Development Mailing List (not for usage questions) >>> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From jimmy at openstack.org Mon Apr 30 16:07:27 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 11:07:27 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> Message-ID: <5AE73F3F.4040503@openstack.org> > Jeremy Stanley > April 30, 2018 at 10:12 AM > [...] > > Yes, before the TryStack effort was closed down, there had been a > plan for trystack.org to redirect to a trystack.openstack.org site > hosted in the community infrastructure.
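For anyone wanting to confirm where these hostnames actually land once the redirects are in place, a minimal check might look like the following -- a hedged Python 3 sketch, assuming the names still resolve, and not a reflection of the actual infra tooling:

    import urllib.request

    def final_url(url):
        # urlopen follows HTTP redirects by default; geturl() reports
        # the URL we ultimately landed on.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.geturl()

    for url in ("http://trystack.org/", "https://trystack.openstack.org/"):
        try:
            print(url, "->", final_url(url))
        except OSError as exc:
            print(url, "->", exc)  # e.g. the DNS records were deleted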
When we talked to trystack we agreed to redirect trystack.org to https://openstack.org/software/start since that presents alternative options for people to "try openstack". My suggestion would be to redirect trystack.openstack.org to the same spot, but certainly open to other suggestions :) > At this point I expect we > can just rip out the section for it from > https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp > as DNS appears to no longer be pointed there. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 9:34 AM > I'm working on redirecting trystack.openstack.org to > openstack.org/software/start. We have redirects in place for > trystack.org, but didn't realize trystack.openstack.org as a thing as > well. > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Paul Belanger > April 30, 2018 at 9:23 AM > The code is hosted by openstack-infra[1], if somebody would like to > propose a > patch with the new information. > > [1] http://git.openstack.org/cgit/openstack-infra/trystack-site > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jens Harbott > April 30, 2018 at 4:37 AM > > Seems it would be great if https://trystack.openstack.org/ would be > updated with this information, according to comments in #openstack > users are still landing on that page and try to get a stack there in > vain. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy Mcarthur > March 26, 2018 at 5:51 PM > Hi everyone, > > We recently made the tough decision, in conjunction with the dedicated > volunteers that run TryStack, to end the service as of March 29, > 2018. For those of you that used it, thank you for being part of the > TryStack community. > > The good news is that you can find more resources to try OpenStack at > http://www.openstack.org/start, including the Passport Program > , where you can test on any > participating public cloud. If you are looking to test different tools > or application stacks with OpenStack clouds, you should check out Open > Lab . > > Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, and > the many other volunteers who have managed this valuable service for > the last several years! Your contribution to OpenStack was noticed > and appreciated by many in the community. 
> > Cheers, > Jimmy > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Apr 30 16:21:22 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 30 Apr 2018 09:21:22 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> Message-ID: <2e7068d0-417c-0c36-b040-d15cb188afa6@gmail.com> On Fri, 27 Apr 2018 17:40:20 +0800, Chen Ch Ji wrote: > According to requirements and comments, now we opened the CI runs with > run_validation = True > And according to [1] below, for example, [2] need the ssh validation > passed the test > > And there are a couple of comments need some enhancement on the logs of > CI such as format and legacy incorrect links of logs etc > the newest logs sample can be found [3] (take n-cpu as example and those > logs are with _white.html) > > Also, the blueprint [4] requested by previous discussion post here again > for reference Thank you for alerting us about the completion of the work on the z/VM CI. The logs look much improved and ssh connectivity and metadata functionality via config drive is being verified by tempest. The only strange thing I noticed is it appears tempest starts multiple times in the log [0]. Do you know what's going on there? That said, since things are looking good with z/VM CI now, we've added the z/VM patch series back into a review runway today. Cheers, -melanie [0] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17444/logs/tempest.log from https://review.openstack.org/527658 From jimmy at openstack.org Mon Apr 30 16:22:07 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 11:22:07 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> Message-ID: <5AE742AF.2010106@openstack.org> Hmm. I see both populated with all of the relevant sessions. Can you send me a screencap of what you're seeing? > Arkady.Kanevsky at dell.com > April 30, 2018 at 10:58 AM > > Both are currently empty. 
> > *From:*Jimmy McArthur [mailto:jimmy at openstack.org] > *Sent:* Monday, April 30, 2018 10:48 AM > *To:* Amy Marrich > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] [openstack-dev] The Forum > Schedule is now live > > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > Jimmy McArthur > April 30, 2018 at 10:47 AM > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mgagne at calavera.ca Mon Apr 30 16:41:21 2018 From: mgagne at calavera.ca (Mathieu Gagné) Date: Mon, 30 Apr 2018 12:41:21 -0400 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: Hi, On Sun, Apr 29, 2018 at 5:29 PM, Ed Leafe wrote: > On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: >> >> Based on that, we can definitely say that SameHostFilter and >> DifferentHostFilter do *not* belong in the defaults. In fact, we got >> our defaults pretty spot on, based on this admittedly very limited >> dataset. The only frequently occurring filter that's not in our >> defaults is AggregateInstanceExtraSpecsFilter. > > Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. > > So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? Yes, and we have a bunch. Here are our filters and weighers with explanations. Filters for cells: * InstanceTypeClassFilter [0] Filters for cloud/virtual cells: * RetryFilter * AvailabilityZoneFilter * RamFilter * ComputeFilter * AggregateCoreFilter * ImagePropertiesFilter * AggregateImageOsTypeIsolationFilter [1] * AggregateInstanceExtraSpecsFilter * AggregateProjectsIsolationFilter [2] Weighers for cloud/virtual cells: * MetricsWeigher * AggregateRAMWeigher [3] Filters for baremetal cells: * ComputeFilter * NetworkModelFilter [4] * TenantFilter [5] * UserFilter [6] * RetryFilter * AvailabilityZoneFilter * ComputeCapabilitiesFilter * ImagePropertiesFilter * ExactRamFilter * ExactDiskFilter * ExactCoreFilter Weighers for baremetal cells: * ReservedHostForTenantWeigher [7] * ReservedHostForUserWeigher [8] [0] Used to schedule instances based on the flavor class found in extra_specs (virtual/baremetal) [1] Allows hosts to be properly isolated for licensing purposes. The upstream filter is not strict as per bugs/reviews/specs: * https://bugs.launchpad.net/nova/+bug/1293444 * https://bugs.launchpad.net/nova/+bug/1677217 * https://review.openstack.org/#/c/56420/ * https://review.openstack.org/#/c/85399/ Our custom implementation for Mitaka: https://gist.github.com/mgagne/462e7fa8417843055aa6da7c5fd51c00 [2] Similar filter to AggregateImageOsTypeIsolationFilter but for projects. Our custom implementation for Mitaka: https://gist.github.com/mgagne/d729ccb512b0434568ffb094441f643f [3] Allows changing the stacking behavior based on the 'ram_weight_multiplier' aggregate key. (emptiest/fullest) Our custom implementation for Mitaka: https://gist.github.com/mgagne/65f033cbc5fdd4c8d1f45e90c943a5f4 [4] Used to filter Ironic nodes based on supported network models as requested by flavor extra_specs. We support JIT network configuration (flat/bond) and need to know which nodes support what network models beforehand. [5] Used to filter Ironic nodes based on the 'reserved_for_tenant_id' Ironic node property. This is used to reserve Ironic nodes for specific projects. Some customers order a lot of machines in advance. We reserve those for them. [6] Used to filter Ironic nodes based on the 'reserved_for_user_id' Ironic node property. This is mainly used when enrolling existing nodes already living on a different system.
We reserve the node to a special internal user so the customer cannot reserve the node by mistake until the process is completed. The latest version of Nova dropped user_id from RequestSpec. We had to add it back. [7] Used to favor reserved hosts over non-reserved ones based on project. [8] Used to favor reserved hosts over non-reserved ones based on user. The latest version of Nova dropped user_id from RequestSpec. We had to add it back. -- Mathieu From mtreinish at kortar.org Mon Apr 30 16:42:40 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Mon, 30 Apr 2018 12:42:40 -0400 Subject: [openstack-dev] [Nova] z/VM introducing a new config drive format In-Reply-To: <2e7068d0-417c-0c36-b040-d15cb188afa6@gmail.com> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> <2e7068d0-417c-0c36-b040-d15cb188afa6@gmail.com> Message-ID: <20180430164240.GA26359@zeong> On Mon, Apr 30, 2018 at 09:21:22AM -0700, melanie witt wrote: > On Fri, 27 Apr 2018 17:40:20 +0800, Chen Ch Ji wrote: > > According to requirements and comments, now we opened the CI runs with > > run_validation = True And according to [1] below, for example, [2] > > need the ssh validation passed the test > > > > And there are a couple of comments need some enhancement on the logs of > > CI such as format and legacy incorrect links of logs etc the newest > > logs sample can be found [3] (take n-cpu as example and those logs are > > with _white.html) > > > > Also, the blueprint [4] requested by previous discussion post here > > again for reference > > Thank you for alerting us about the completion of the work on the z/VM CI. > The logs look much improved and ssh connectivity and metadata functionality > via config drive is being verified by tempest. > > The only strange thing I noticed is it appears tempest starts multiple times > in the log [0]. Do you know what's going on there? This is normal; it's an artifact of a few things. The first time config is dumped to the logs is because of tempest verify-config being run as part of devstack: https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L590 You also see the API requests this command is making being logged. Then when the tempest tests are actually being run the config is dumped to the logs once per test worker process. Basically every time we parse the config file at debug log levels it gets printed to the log file. FWIW, you can also see this in a gate run too: http://logs.openstack.org/90/539590/10/gate/tempest-full/4b0a136/controller/logs/tempest_log.txt -Matt Treinish > > That said, since things are looking good with z/VM CI now, we've added the > z/VM patch series back into a review runway today. > > Cheers, > -melanie > > [0] http://extbasicopstackcilog01.podc.sl.edst.ibm.com/test_logs/jenkins-check-nova-master-17444/logs/tempest.log > from https://review.openstack.org/527658 > > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed...
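Returning to the out-of-tree filters Mathieu lists above: as a rough illustration of what one can look like, here is a hedged Python sketch of a reservation filter in the spirit of TenantFilter [5]. It assumes the Mitaka-era BaseHostFilter interface and a hypothetical 'reserved_for_tenant_id' entry surfaced into the host's stats; none of this is Mathieu's actual code:

    # Hedged sketch, not production code: assumes nova's Mitaka-era
    # scheduler filter interface and an invented capability key.
    from nova.scheduler import filters


    class TenantReservationFilter(filters.BaseHostFilter):
        """Pass unreserved hosts, or hosts reserved for the requesting
        project."""

        # Reservations don't change during a single scheduling request,
        # so one evaluation per request is enough.
        run_filter_once_per_request = True

        def host_passes(self, host_state, spec_obj):
            # Hypothetical key copied from the Ironic node properties.
            reserved_for = host_state.stats.get('reserved_for_tenant_id')
            if not reserved_for:
                return True  # unreserved: any project may land here
            return reserved_for == spec_obj.project_id

An out-of-tree class like this would then be named alongside the in-tree filters in the scheduler's enabled-filters configuration option (scheduler_default_filters on older releases), which is what lets operators mix custom and stock filters freely.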
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From melwittt at gmail.com Mon Apr 30 16:46:30 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 30 Apr 2018 09:46:30 -0700 Subject: [openstack-dev] [nova] review runway status Message-ID: <60581325-3274-e74f-8098-de864298caec@gmail.com> Howdy everyone, This is just a brief status about the blueprints currently occupying review runways [0] and an ask for the nova-core team to give these reviews priority for their code review focus. * XenAPI: Support a new image handler for non-FS based SRs https://blueprints.launchpad.net/nova/+spec/xenapi-image-handler-option-improvement (jianghuaw_) [END DATE: 2018-05-11] series starting at https://review.openstack.org/497201 * Consoles database backend: https://blueprints.launchpad.net/nova/+spec/convert-consoles-to-objects (melwitt) [END DATE: 2018-05-01] series starting at https://review.openstack.org/325414 * Add z/VM driver https://blueprints.launchpad.net/nova/+spec/add-zvm-driver-rocky (jichen) [END DATE: 2018-05-15] spec amendment https://review.openstack.org/562154 and implementation series starting at https://review.openstack.org/523387 Cheers, -melanie [0] https://etherpad.openstack.org/p/nova-runways-rocky From melwittt at gmail.com Mon Apr 30 16:51:52 2018 From: melwittt at gmail.com (melanie witt) Date: Mon, 30 Apr 2018 09:51:52 -0700 Subject: [openstack-dev] [Nova] z/VM introducing a new config driveformat In-Reply-To: <20180430164240.GA26359@zeong> References: <7efe6916-17fc-59c5-d666-6bdfc19c3329@gmail.com> <2eea2405-2e85-6730-e022-e1be33dc35d5@gmail.com> <35a542f1-2b2f-74bd-b769-eb049a430223@gmail.com> <2e7068d0-417c-0c36-b040-d15cb188afa6@gmail.com> <20180430164240.GA26359@zeong> Message-ID: On Mon, 30 Apr 2018 12:42:40 -0400, Matthew Treinish wrote: > On Mon, Apr 30, 2018 at 09:21:22AM -0700, melanie witt wrote: >> On Fri, 27 Apr 2018 17:40:20 +0800, Chen Ch Ji wrote: >>> According to requirements and comments, now we opened the CI runs with >>> run_validation = True >>> And according to [1] below, for example, [2] need the ssh validation >>> passed the test >>> >>> And there are a couple of comments need some enhancement on the logs of >>> CI such as format and legacy incorrect links of logs etc >>> the newest logs sample can be found [3] (take n-cpu as example and those >>> logs are with _white.html) >>> >>> Also, the blueprint [4] requested by previous discussion post here again >>> for reference >> >> Thank you for alerting us about the completion of the work on the z/VM CI. >> The logs look much improved and ssh connectivity and metadata functionality >> via config drive is being verified by tempest. >> >> The only strange thing I noticed is it appears tempest starts multiple times >> in the log [0]. Do you know what's going on there? > > This is normal, it's an artifact of a few things. The first time config is > dumped to the logs is because of tempest verify-config being run as part of > devstack: > > https://github.com/openstack-dev/devstack/blob/master/lib/tempest#L590 > > You also see the API requests this command is making being logged. Then > when the tempest tests are actually being run the config is dumped to the logs > once per test worker process. Basically every time we parse the config file at > debug log levels it get's printed to the log file. 
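For reference, the per-worker dump described in the quoted explanation above matches oslo.config's standard behavior of logging every parsed option at debug level, once per process that sets the config up. A minimal sketch of that mechanism follows; it assumes oslo.config is installed, and whether tempest goes through exactly this call path is an assumption:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("config-dump-demo")

    CONF = cfg.ConfigOpts()
    CONF.register_opt(cfg.BoolOpt('run_validation', default=False))
    CONF([])  # parse an (empty) command line / config files

    # This call prints the full option dump; run it in N worker
    # processes and you get N copies in the combined log.
    CONF.log_opt_values(LOG, logging.DEBUG)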
> > FWIW, you can also see this in a gate run too: > http://logs.openstack.org/90/539590/10/gate/tempest-full/4b0a136/controller/logs/tempest_log.txt A-ha, thanks for sharing all of that info. I have learned something new. :) -melanie From aschultz at redhat.com Mon Apr 30 16:52:32 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 30 Apr 2018 10:52:32 -0600 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE73AA3.4030408@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: On Mon, Apr 30, 2018 at 9:47 AM, Jimmy McArthur wrote: > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > TripleO is still missing? Thanks, -Alex > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know I saw > some in the schedule before the Forum submittals were even closed. Maybe > contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Emilien Macchi > April 30, 2018 at 10:33 AM > > >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. > > > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed but it > would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From kgiusti at gmail.com Mon Apr 30 16:54:09 2018 From: kgiusti at gmail.com (Ken Giusti) Date: Mon, 30 Apr 2018 12:54:09 -0400 Subject: [openstack-dev] [Oslo][Nova][Sahara][Tempest][Cinder][Magnum] Removing broken tox missing requirements tests Message-ID: Folks, Here in Oslo land a number of projects define a tox test for missing dependencies. These tests are based on a tool - pip-check-reqs - that no longer functions under the latest release of pip. 
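The breakage is consistent with pip 10 moving all of its internals under pip._internal, which broke any tool importing pip's private modules. A hedged illustration of the failure mode -- the exact symbols pip-check-reqs relies on are an assumption here, and parse_requirements is just a representative example of the unsupported imports involved:

    # Tools built on pip's (never-supported) internals broke when pip 10
    # relocated them; a compatibility shim looks roughly like this.
    try:
        from pip.req import parse_requirements            # pip < 10
    except ImportError:
        try:
            from pip._internal.req import parse_requirements  # pip >= 10
        except ImportError:
            parse_requirements = None  # internals moved again

    if parse_requirements is None:
        raise SystemExit("pip internals unavailable; tool cannot run")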
The project's upstream GitHub repo hasn't had any commit activity in a year and appears to no longer be maintained. See my previous email about this tool: http://lists.openstack.org/pipermail/openstack-dev/2018-April/129697.html In the absence of a suitable replacement, I've started removing the broken tox tests from the oslo project to prevent anyone else having that "Hmm - why doesn't this test pass?" moment I hit last week. I've created an epad that lists the projects that define tox tests based on this tool: https://etherpad.openstack.org/p/pip_(missing|check)_reqs There are other non-Oslo projects - Nova, Cinder, etc - that may want to also remove that test. See the epad for details. I've started patches for a couple of projects, but if anyone is willing to help out please use the epad so we don't step on each other's toes. thanks, -- Ken Giusti (kgiusti at gmail.com) From fungi at yuggoth.org Mon Apr 30 17:02:05 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Apr 2018 17:02:05 +0000 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <5AE73F3F.4040503@openstack.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> Message-ID: <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: [...] > When we talked to trystack we agreed to redirect trystack.org to > https://openstack.org/software/start since that presents > alternative options for people to "try openstack". My suggestion > would be to redirect trystack.openstack.org to the same spot, but > certainly open to other suggestions :) [...] Since I don't think the trystack.o.o site ever found its way fully into production, it may make more sense for us to simply delete the records for it from DNS. Someone else probably knows more about the prior state of it than I do, though. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Mon Apr 30 17:05:54 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 12:05:54 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> Message-ID: <5AE74CF2.9010804@openstack.org> Alex, It looks like we have a spot held for you, but did not receive confirmation that TripleO would be moving forward with a Project Update. If you all will be recording this, we have you down for Wednesday from 11:25 - 11:45am. Just let me know and I'll get it up on the schedule. Thanks! Jimmy > Alex Schultz > April 30, 2018 at 11:52 AM > On Mon, Apr 30, 2018 at 9:47 AM, Jimmy McArthur wrote: >> Project Updates are in their own track: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 >>
>> >> Thanks, >> >> Amy (spotz) >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> Emilien Macchi >> April 30, 2018 at 10:33 AM >> >> >>> Hello all - >>> >>> Please take a look here for the posted Forum schedule: >>> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >>> You should also see it update on your Summit App. >> Why TripleO doesn't have project update? >> Maybe we could combine it with TripleO - Project Onboarding if needed but it >> would be great to have it advertised as a project update! >> >> Thanks, >> -- >> Emilien Macchi >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 27, 2018 at 11:04 AM >> Hello all - >> >> Please take a look here for the posted Forum schedule: >> https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 >> You should also see it update on your Summit App. >> >> Thank you and see you in Vancouver! >> Jimmy >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > Jimmy McArthur > April 30, 2018 at 10:47 AM > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Emilien Macchi > April 30, 2018 at 10:33 AM > > > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > > You should also see it update on your Summit App. 
> > Why TripleO doesn't have project update? > Maybe we could combine it with TripleO - Project Onboarding if needed > but it would be great to have it advertised as a project update! > > Thanks, > -- > Emilien Macchi > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 27, 2018 at 11:04 AM > Hello all - > > Please take a look here for the posted Forum schedule: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 > You should also see it update on your Summit App. > > Thank you and see you in Vancouver! > Jimmy > > > __________________________________________________________________________ > > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: > OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Mon Apr 30 17:06:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Mon, 30 Apr 2018 13:06:34 -0400 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: <2c736005-12a5-c619-ba4b-8b0c1cb9f43f@gmail.com> Mathieu, How do you handle issues where compute nodes are associated with multiple aggregates and both aggregates have different values for a particular filter key? Is that a human-based validation process to ensure you don't have that situation? Best, -jay On 04/30/2018 12:41 PM, Mathieu Gagné wrote: > Hi, > > On Sun, Apr 29, 2018 at 5:29 PM, Ed Leafe wrote: >> On Apr 29, 2018, at 1:34 PM, Artom Lifshitz wrote: >>> >>> Based on that, we can definitely say that SameHostFilter and >>> DifferentHostFilter do *not* belong in the defaults. In fact, we got >>> our defaults pretty spot on, based on this admittedly very limited >>> dataset. The only frequently occurring filter that's not in our >>> defaults is AggregateInstanceExtraSpecsFilter. >> >> Another data point that might be illuminating is: how many sites use a custom (i.e., not in-tree) filter or weigher? One of the original design tenets of the scheduler was that we did not want to artificially limit what people could use to control their deployments, but inside of Nova there is a lot of confusion as to whether anyone is using anything but the included filters. >> >> So - does anyone out there rely on a filter and/or weigher that they wrote themselves, and maintain outside of OpenStack? > > Yes and we have a bunch. > > Here are our filters and weighers with explanations. 
> > Filters for cells: > * InstanceTypeClassFilter [0] > > Filters for cloud/virtual cells: > * RetryFilter > * AvailabilityZoneFilter > * RamFilter > * ComputeFilter > * AggregateCoreFilter > * ImagePropertiesFilter > * AggregateImageOsTypeIsolationFilter [1] > * AggregateInstanceExtraSpecsFilter > * AggregateProjectsIsolationFilter [2] > > Weighers for cloud/virtual cells: > * MetricsWeigher > * AggregateRAMWeigher [3] > > Filters for baremetal cells: > * ComputeFilter > * NetworkModelFilter [4] > * TenantFilter [5] > * UserFilter [6] > * RetryFilter > * AvailabilityZoneFilter > * ComputeCapabilitiesFilter > * ImagePropertiesFilter > * ExactRamFilter > * ExactDiskFilter > * ExactCoreFilter > > Weighers for baremetal cells: > * ReservedHostForTenantWeigher [7] > * ReservedHostForUserWeigher [8] > > [0] Used to scheduler instances based on flavor class found in > extra_specs (virtual/baremetal) > [1] Allows to properly isolated hosts for licensing purposes. > The upstream filter is not strict as per bugs/reviews/specs: > * https://bugs.launchpad.net/nova/+bug/1293444 > * https://bugs.launchpad.net/nova/+bug/1677217 > * https://review.openstack.org/#/c/56420/ > * https://review.openstack.org/#/c/85399/ > Our custom implementation for Mitaka: > https://gist.github.com/mgagne/462e7fa8417843055aa6da7c5fd51c00 > [2] Similar filter to AggregateImageOsTypeIsolationFilter but for projects. > Our custom implementation for Mitaka: > https://gist.github.com/mgagne/d729ccb512b0434568ffb094441f643f > [3] Allows to change stacking behavior based on the 'ram_weight_multiplier' > aggregate key. (emptiest/fullest) > Our custom implementation for Mitaka: > https://gist.github.com/mgagne/65f033cbc5fdd4c8d1f45e90c943a5f4 > [4] Used to filter Ironic nodes based on supported network models as requested > by flavor extra_specs. We support JIT network configuration (flat/bond) and > need to know which nodes support what network models beforehand. > [5] Used to filter Ironic nodes based on the 'reserved_for_tenant_id' > Ironic node property. > This is used to reserve Ironic node to specific projects. > Some customers order lot of machines in advance. We reserve those for them. > [6] Used to filter Ironic nodes based on the 'reserved_for_user_id' > Ironic node property. > This is mainly used when enrolling existing nodes already living > on a different system. > We reserve the node to a special internal user so the customer > cannot reserve > the node by mistake until the process is completed. > Latest version of Nova dropped user_id from RequestSpec. We had to > add it back. > [7] Used to favor reserved host over non-reserved ones based on project. > [8] Used to favor reserved host over non-reserved ones based on user. > Latest version of Nova dropped user_id from RequestSpec. We had to > add it back. > > -- > Mathieu > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > From jimmy at openstack.org Mon Apr 30 17:10:29 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 12:10:29 -0500 Subject: [openstack-dev] Thank you TryStack!! 
In-Reply-To: <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> Message-ID: <5AE74E05.90405@openstack.org> Yeah... my only concern is that if traffic is actually getting there, a redirect to the same place trystack.org is going might be helpful. > Jeremy Stanley > April 30, 2018 at 12:02 PM > On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: > [...] > [...] > > Since I don't think the trystack.o.o site ever found its way fully > into production, it may make more sense for us to simply delete the > records for it from DNS. Someone else probably knows more about the > prior state of it than I though. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 11:07 AM > > >> Jeremy Stanley >> April 30, 2018 at 10:12 AM >> [...] >> >> Yes, before the TryStack effort was closed down, there had been a >> plan for trystack.org to redirect to a trystack.openstack.org site >> hosted in the community infrastructure. > When we talked to trystack we agreed to redirect trystack.org to > https://openstack.org/software/start since that presents alternative > options for people to "try openstack". My suggestion would be to > redirect trystack.openstack.org to the same spot, but certainly open > to other suggestions :) >> At this point I expect we >> can just rip out the section for it from >> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp >> as DNS appears to no longer be pointed there. >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 9:34 AM >> I'm working on redirecting trystack.openstack.org to >> openstack.org/software/start. We have redirects in place for >> trystack.org, but didn't realize trystack.openstack.org as a thing as >> well. >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Paul Belanger >> April 30, 2018 at 9:23 AM >> The code is hosted by openstack-infra[1], if somebody would like to >> propose a >> patch with the new information. 
>> >> [1] http://git.openstack.org/cgit/openstack-infra/trystack-site >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jens Harbott >> April 30, 2018 at 4:37 AM >> >> Seems it would be great if https://trystack.openstack.org/ would be >> updated with this information, according to comments in #openstack >> users are still landing on that page and try to get a stack there in >> vain. >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy Mcarthur >> March 26, 2018 at 5:51 PM >> Hi everyone, >> >> We recently made the tough decision, in conjunction with the >> dedicated volunteers that run TryStack, to end the service as of >> March 29, 2018. For those of you that used it, thank you for being >> part of the TryStack community. >> >> The good news is that you can find more resources to try OpenStack at >> http://www.openstack.org/start, including the Passport Program >> , where you can test on any >> participating public cloud. If you are looking to test different >> tools or application stacks with OpenStack clouds, you should check >> out Open Lab . >> >> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, >> and the many other volunteers who have managed this valuable service >> for the last several years! Your contribution to OpenStack was >> noticed and appreciated by many in the community. >> >> Cheers, >> Jimmy >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jeremy Stanley > April 30, 2018 at 10:12 AM > [...] > > Yes, before the TryStack effort was closed down, there had been a > plan for trystack.org to redirect to a trystack.openstack.org site > hosted in the community infrastructure. At this point I expect we > can just rip out the section for it from > https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp > as DNS appears to no longer be pointed there. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 9:34 AM > I'm working on redirecting trystack.openstack.org to > openstack.org/software/start. We have redirects in place for > trystack.org, but didn't realize trystack.openstack.org as a thing as > well. 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Paul Belanger > April 30, 2018 at 9:23 AM > The code is hosted by openstack-infra[1], if somebody would like to > propose a > patch with the new information. > > [1] http://git.openstack.org/cgit/openstack-infra/trystack-site > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From mgagne at calavera.ca Mon Apr 30 17:14:03 2018 From: mgagne at calavera.ca (=?UTF-8?Q?Mathieu_Gagn=C3=A9?=) Date: Mon, 30 Apr 2018 13:14:03 -0400 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: <2c736005-12a5-c619-ba4b-8b0c1cb9f43f@gmail.com> References: <2c736005-12a5-c619-ba4b-8b0c1cb9f43f@gmail.com> Message-ID: On Mon, Apr 30, 2018 at 1:06 PM, Jay Pipes wrote: > Mathieu, > > How do you handle issues where compute nodes are associated with multiple > aggregates and both aggregates have different values for a particular filter > key? > > Is that a human-based validation process to ensure you don't have that > situation? > It's human-based and we are fine with it. We also have automatic reports which generate stats on those aggregates. You will visually see it if some hosts are part of multiple aggregates. If we need an intersection of 2 aggregates, we create a new one and use it instead. -- Mathieu From emilien at redhat.com Mon Apr 30 17:25:33 2018 From: emilien at redhat.com (Emilien Macchi) Date: Mon, 30 Apr 2018 10:25:33 -0700 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE74CF2.9010804@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <5AE74CF2.9010804@openstack.org> Message-ID: On Mon, Apr 30, 2018 at 10:05 AM, Jimmy McArthur wrote: > > It looks like we have a spot held for you, but did not receive > confirmation that TripleO would be moving forward with Project Update. If > you all will be recording this, we have you down for Wednesday from 11:25 - > 11:45am. Just let me know and I'll get it up on the schedule. > This slot is perfect, and I'll run it with one of my tripleo co-workers (Alex won't be here). Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Apr 30 17:29:06 2018 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 30 Apr 2018 17:29:06 +0000 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <5AE74E05.90405@openstack.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> <5AE74E05.90405@openstack.org> Message-ID: <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> On 2018-04-30 12:10:29 -0500 (-0500), Jimmy McArthur wrote: > Yeah... my only concern is that if traffic is actually getting > there, a redirect to the same place trystack.org is going might be > helpful. [...] 
I was thrown by the fact that DNS currently has
trystack.openstack.org as a CNAME alias for trystack.org, but
reviewing logs on static.openstack.org it seems it may have
previously pointed there (it was receiving traffic up until around
13:15 UTC today), so if you want to just glom that onto the current
trystack.org redirect that may make the most sense, and we can move
forward with tearing down the old infrastructure for it.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From andrey.mp at gmail.com Mon Apr 30 17:40:07 2018
From: andrey.mp at gmail.com (Andrey Pavlov)
Date: Mon, 30 Apr 2018 20:40:07 +0300
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To: <1525100618-sup-9669@lrrr.local>
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local>
Message-ID: 

Hi Doug,

There is no ec2-api project on the link, but we do have the voting
openstack-tox-py35 job.

Regards,
Andrey Pavlov.

On Mon, Apr 30, 2018 at 6:06 PM, Doug Hellmann wrote:
> It would be useful to have more input from PTLs on this issue, so I'm
> CCing all of them to get their attention.
>
> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> > It's time to talk about the next steps in our migration from python
> > 2 to python 3.
> >
> > Up to this point we have mostly focused on reaching a state where
> > we support both versions of the language. We are not quite there
> > with all projects, as you can see by reviewing the test coverage
> > status information at
> > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
> >
> > Still, we need to press on to the next phase of the migration, which
> > I have been calling "Python 3 first". This is where we use python
> > 3 as the default, for everything, and set up the exceptions we need
> > for anything that still requires python 2.
> >
> > To reach that stage, we need to:
> >
> > 1. Change the documentation and release notes jobs to use python 3.
> >    (The Oslo team recently completed this, and found that we did
> >    need to make a few small code changes to get them to work.)
> > 2. Change (or duplicate) all functional test jobs to run under
> >    python 3.
> > 3. Change the packaging jobs to use python 3.
> > 4. Update devstack to use 3 by default and require setting a flag to
> >    use 2. (This may trigger other job changes.)
> >
> > At that point, all of our deliverables will be produced using python
> > 3, and we can be relatively confident that if we no longer had
> > access to python 2 we could still continue operating. We could also
> > start updating deployment tools to use either python 3 or 2, so
> > that users could actually deploy using the python 3 versions of
> > services.
> >
> > Somewhere in that time frame our third-party CI systems will need
> > to ensure they have python 3 support as well.
> >
> > After the "Python 3 first" phase is completed we should release
> > one series using the packages built with python 3. Perhaps Stein?
> > Or is that too ambitious?
> >
> > Next, we will be ready to address the prerequisites for "Python 3
> > only," which will allow us to drop Python 2 support.
> >
> > We need to wait to drop python 2 support as a community, rather
> > than going one project at a time, to avoid doubling the work of
> > downstream consumers such as distros and independent deployers.
We > > don't want them to have to package all (or even a large number) of > > the dependencies of OpenStack twice because they have to install > > some services running under python 2 and others under 3. Ideally > > they would be able to upgrade all of the services on a node together > > as part of their transition to the new version, without ending up > > with a python 2 version of a dependency along side a python 3 version > > of the same package. > > > > The remaining items could be fixed earlier, but this is the point > > at which they would block us: > > > > 1. Fix oslo.service functional tests -- the Oslo team needs help > > maintaining this library. Alternatively, we could move all > > services to use cotyledon (https://pypi.org/project/cotyledon/). > > > > 2. Finish the unit test and functional test ports so that all of > > our tests can run under python 3 (this implies that the services > > all run under python 3, so there is no more porting to do). > > > > Finally, after we have *all* tests running on python 3, we can > > safely drop python 2. > > > > We have previously discussed the end of the T cycle as the point > > at which we would have all of those tests running, and if that holds > > true we could reasonably drop python 2 during the beginning of the > > U cycle, in late 2019 and before the 2020 cut-off point when upstream > > python 2 support will be dropped. > > > > I need some info from the deployment tool teams to understand whether > > they would be ready to take the plunge during T or U and start > > deploying only the python 3 version. Are there other upgrade issues > > that need to be addressed to support moving from 2 to 3? Something > > that might be part of the platform(s), rather than OpenStack itself? > > > > What else have I missed in these phases? Other jobs? Other blocking > > conditions? > > > > Doug > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Mon Apr 30 18:01:55 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 13:01:55 -0500 Subject: [openstack-dev] Thank you TryStack!! In-Reply-To: <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> References: <5AB9797D.1090209@tipit.net> <20180430142334.GB10224@localhost.localdomain> <5AE72967.3050100@openstack.org> <20180430151255.bcgaqm5svvtz2rkq@yuggoth.org> <5AE73F3F.4040503@openstack.org> <20180430170204.vvtfq6gktc5i3r6r@yuggoth.org> <5AE74E05.90405@openstack.org> <20180430172905.c3qyjrwucgx5vdww@yuggoth.org> Message-ID: <5AE75A13.4030606@openstack.org> OK - got it :) > Jeremy Stanley > April 30, 2018 at 12:29 PM > [...] > > I was thrown by the fact that DNS currently has > trystack.openstack.org as a CNAME alias for trystack.org, but > reviewing logs on static.openstack.org it seems it may have > previously pointed there (was receiving traffic up until around > 13:15 UTC today) so if you want to just glom that onto the current > trystack.org redirect that may make the most sense and we can move > forward tearing down the old infrastructure for it. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 12:10 PM > Yeah... my only concern is that if traffic is actually getting there, > a redirect to the same place trystack.org is going might be helpful. 
> > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jeremy Stanley > April 30, 2018 at 12:02 PM > On 2018-04-30 11:07:27 -0500 (-0500), Jimmy McArthur wrote: > [...] > [...] > > Since I don't think the trystack.o.o site ever found its way fully > into production, it may make more sense for us to simply delete the > records for it from DNS. Someone else probably knows more about the > prior state of it than I though. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jimmy McArthur > April 30, 2018 at 11:07 AM > > >> Jeremy Stanley >> April 30, 2018 at 10:12 AM >> [...] >> >> Yes, before the TryStack effort was closed down, there had been a >> plan for trystack.org to redirect to a trystack.openstack.org site >> hosted in the community infrastructure. > When we talked to trystack we agreed to redirect trystack.org to > https://openstack.org/software/start since that presents alternative > options for people to "try openstack". My suggestion would be to > redirect trystack.openstack.org to the same spot, but certainly open > to other suggestions :) >> At this point I expect we >> can just rip out the section for it from >> https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp >> as DNS appears to no longer be pointed there. >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy McArthur >> April 30, 2018 at 9:34 AM >> I'm working on redirecting trystack.openstack.org to >> openstack.org/software/start. We have redirects in place for >> trystack.org, but didn't realize trystack.openstack.org as a thing as >> well. >> >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Paul Belanger >> April 30, 2018 at 9:23 AM >> The code is hosted by openstack-infra[1], if somebody would like to >> propose a >> patch with the new information. >> >> [1] http://git.openstack.org/cgit/openstack-infra/trystack-site >> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jens Harbott >> April 30, 2018 at 4:37 AM >> >> Seems it would be great if https://trystack.openstack.org/ would be >> updated with this information, according to comments in #openstack >> users are still landing on that page and try to get a stack there in >> vain. 
>> >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev >> Jimmy Mcarthur >> March 26, 2018 at 5:51 PM >> Hi everyone, >> >> We recently made the tough decision, in conjunction with the >> dedicated volunteers that run TryStack, to end the service as of >> March 29, 2018. For those of you that used it, thank you for being >> part of the TryStack community. >> >> The good news is that you can find more resources to try OpenStack at >> http://www.openstack.org/start, including the Passport Program >> , where you can test on any >> participating public cloud. If you are looking to test different >> tools or application stacks with OpenStack clouds, you should check >> out Open Lab . >> >> Thank you very much to Will Foster, Kambiz Aghaiepour, Rich Bowen, >> and the many other volunteers who have managed this valuable service >> for the last several years! Your contribution to OpenStack was >> noticed and appreciated by many in the community. >> >> Cheers, >> Jimmy >> __________________________________________________________________________ >> OpenStack Development Mailing List (not for usage questions) >> Unsubscribe: >> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Jeremy Stanley > April 30, 2018 at 10:12 AM > [...] > > Yes, before the TryStack effort was closed down, there had been a > plan for trystack.org to redirect to a trystack.openstack.org site > hosted in the community infrastructure. At this point I expect we > can just rip out the section for it from > https://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/manifests/static.pp > as DNS appears to no longer be pointed there. > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Apr 30 18:14:19 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 30 Apr 2018 18:14:19 +0000 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE742AF.2010106@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> <5AE742AF.2010106@openstack.org> Message-ID: Interesting. It does work on Chrome but not on IE. Here is IE screenshot. Thanks, Arkady From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 11:22 AM To: Kanevsky, Arkady Cc: amy at demarco.com; openstack-dev at lists.openstack.org; OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Hmm. I see both populated with all of the relevant sessions. 
Can you send me a screencap of what you're seeing? Arkady.Kanevsky at dell.com April 30, 2018 at 10:58 AM Both are currently empty. From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 10:48 AM To: Amy Marrich Cc: OpenStack Development Mailing List (not for usage questions); OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 Jimmy McArthur April 30, 2018 at 10:47 AM Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Amy Marrich April 30, 2018 at 10:44 AM Emilien, I believe that the Project Updates are separate from the Forum? I know I saw some in the schedule before the Forum submittals were even closed. Maybe contact speaker support or Jimmy will answer here. Thanks, Amy (spotz) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Emilien Macchi April 30, 2018 at 10:33 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Why TripleO doesn't have project update? Maybe we could combine it with TripleO - Project Onboarding if needed but it would be great to have it advertised as a project update! Thanks, -- Emilien Macchi __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Jimmy McArthur April 27, 2018 at 11:04 AM Hello all - Please take a look here for the posted Forum schedule: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=224 You should also see it update on your Summit App. Thank you and see you in Vancouver! Jimmy __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Capture1.png Type: image/png Size: 121994 bytes Desc: Capture1.png URL: From jimmy at openstack.org Mon Apr 30 18:22:10 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 30 Apr 2018 13:22:10 -0500 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> <5AE742AF.2010106@openstack.org> Message-ID: <5AE75ED2.5020506@openstack.org> We don't support deprecated browsers, I'm afraid. > Arkady.Kanevsky at dell.com > April 30, 2018 at 1:14 PM > > Interesting. > > It does work on Chrome but not on IE. > > Here is IE screenshot. > > Thanks, > > Arkady > > *From:*Jimmy McArthur [mailto:jimmy at openstack.org] > *Sent:* Monday, April 30, 2018 11:22 AM > *To:* Kanevsky, Arkady > *Cc:* amy at demarco.com; openstack-dev at lists.openstack.org; > OpenStack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] [openstack-dev] The Forum > Schedule is now live > > Hmm. I see both populated with all of the relevant sessions. Can you > send me a screencap of what you're seeing? > > > Jimmy McArthur > April 30, 2018 at 11:22 AM > Hmm. I see both populated with all of the relevant sessions. Can you > send me a screencap of what you're seeing? > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Arkady.Kanevsky at dell.com > April 30, 2018 at 10:58 AM > > Both are currently empty. > > *From:*Jimmy McArthur [mailto:jimmy at openstack.org] > *Sent:* Monday, April 30, 2018 10:48 AM > *To:* Amy Marrich > *Cc:* OpenStack Development Mailing List (not for usage questions); > OpenStack-operators at lists.openstack.org > *Subject:* Re: [Openstack-operators] [openstack-dev] The Forum > Schedule is now live > > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > Jimmy McArthur > April 30, 2018 at 10:47 AM > Project Updates are in their own track: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 > > As are SIG, BoF and Working Groups: > https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev > Amy Marrich > April 30, 2018 at 10:44 AM > Emilien, > > I believe that the Project Updates are separate from the Forum? I know > I saw some in the schedule before the Forum submittals were even > closed. Maybe contact speaker support or Jimmy will answer here. > > Thanks, > > Amy (spotz) > > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Mon Apr 30 19:21:42 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 30 Apr 2018 21:21:42 +0200 Subject: [openstack-dev] [os-upstream-institute] Meeting reminder Message-ID: Hi, Don’t forget that we switched to the US - Europe slot only till the training in Vancouver. See you on #openstack-meeting-3 at 2000 UTC! Thanks, Ildikó (IRC: ildikov) From Arkady.Kanevsky at dell.com Mon Apr 30 19:40:05 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 30 Apr 2018 19:40:05 +0000 Subject: [openstack-dev] [Openstack-operators] The Forum Schedule is now live In-Reply-To: <5AE75ED2.5020506@openstack.org> References: <5AE34A02.8020802@openstack.org> <5AE73AA3.4030408@openstack.org> <18ce76f6eb3b4b30afedb642f43ce93c@AUSX13MPS308.AMER.DELL.COM> <5AE742AF.2010106@openstack.org> <5AE75ED2.5020506@openstack.org> Message-ID: <0349cc9ad93344b88868d0288ddee485@AUSX13MPS308.AMER.DELL.COM> LOL From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 1:22 PM To: Kanevsky, Arkady Cc: amy at demarco.com; openstack-dev at lists.openstack.org; OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live We don't support deprecated browsers, I'm afraid. Arkady.Kanevsky at dell.com April 30, 2018 at 1:14 PM Interesting. It does work on Chrome but not on IE. Here is IE screenshot. Thanks, Arkady From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 11:22 AM To: Kanevsky, Arkady Cc: amy at demarco.com; openstack-dev at lists.openstack.org; OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Hmm. I see both populated with all of the relevant sessions. Can you send me a screencap of what you're seeing? Jimmy McArthur April 30, 2018 at 11:22 AM Hmm. I see both populated with all of the relevant sessions. Can you send me a screencap of what you're seeing? __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Arkady.Kanevsky at dell.com April 30, 2018 at 10:58 AM Both are currently empty. 
From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Monday, April 30, 2018 10:48 AM To: Amy Marrich Cc: OpenStack Development Mailing List (not for usage questions); OpenStack-operators at lists.openstack.org Subject: Re: [Openstack-operators] [openstack-dev] The Forum Schedule is now live Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 Jimmy McArthur April 30, 2018 at 10:47 AM Project Updates are in their own track: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=223 As are SIG, BoF and Working Groups: https://www.openstack.org/summit/vancouver-2018/summit-schedule#track=218 __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev Amy Marrich April 30, 2018 at 10:44 AM Emilien, I believe that the Project Updates are separate from the Forum? I know I saw some in the schedule before the Forum submittals were even closed. Maybe contact speaker support or Jimmy will answer here. Thanks, Amy (spotz) __________________________________________________________________________ OpenStack Development Mailing List (not for usage questions) Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- An HTML attachment was scrubbed... URL: From alee at redhat.com Mon Apr 30 20:07:09 2018 From: alee at redhat.com (Ade Lee) Date: Mon, 30 Apr 2018 16:07:09 -0400 Subject: [openstack-dev] [barbican] barbican migrated to storyboard Message-ID: <1525118829.3706.33.camel@redhat.com> Hi all, Thanks to the hard work done by Kendall and Jeremy, Barbican has now been been migrated to storyboard. The new link for the Barbican storyboard is https://storyboard.openstac k.org/#!/project_group/81 This is the starting point for : python-barbicanclient, castellan-ui, barbican-tempest-plugin, barbican- specs and openstack-barbican Note, that because castellan is under oslo control, it has not yet been migrated at this time. Thanks Kendall and Jeremy! Ade From pawel at suder.info Mon Apr 30 20:54:17 2018 From: pawel at suder.info (=?UTF-8?Q?Pawe=C5=82?= Suder) Date: Mon, 30 Apr 2018 22:54:17 +0200 Subject: [openstack-dev] [neutron] Bug deputy report 23-29 April In-Reply-To: <1525069611.4621.9.camel@suder.info> References: <1525069611.4621.9.camel@suder.info> Message-ID: <1525121657.9263.0.camel@suder.info> Hello Team, I forgot about this: Deleting port doesnt delete dns records https://bugs.launchpad.net/neutron/+bug/1741079 Re-open, confirmed. Cheers, Paweł W dniu 30.04.2018, pon o godzinie 08∶26 +0200, użytkownik Paweł Suder napisał: > Hello Team, > > Last week starting from 23 April until 29 April I was bug deputy for > Neutron project. > > Following bugs/RFEs were opened: > > [RFE] Create host-routes for routed networks (segments) > https://bugs.launchpad.net/neutron/+bug/1766380 > RFE, importance not set. Seems to be very interesting. Confirmed by > Miguel (thx!). Need to be discussed by drivers team. > > Trunk Tests are failing often in dvr-multinode scenario job > https://bugs.launchpad.net/neutron/+bug/1766701 > High, confirmed based on logs from failing jobs. 
> > Periodic job * neutron-dynamic-routing-dsvm-tempest-with-ryu-master- > scenario-ipv4 fails > https://bugs.launchpad.net/neutron/+bug/1766702 > High, confirmed based on logs from failing jobs. > > Rally tests job is reaching job timeout often > https://bugs.launchpad.net/neutron/+bug/1766703 > High, confirmed based on logs from failing jobs. > > [NEED ATTENTION] the machine running dhcp agent will have very high > cpu > load when start dhcp agent after the agent down more than 150 seconds > https://bugs.launchpad.net/neutron/+bug/1766812 > Not yet clarified, due to scale, it will be not easy to triage it. > Some > logs are attached, but still issue might be very environmental. Not > marked as confirmed, importance not set. > [OPEN QUESTION]: should be reproduced somehow? > > loadbalancer can't create with chinese character name > https://bugs.launchpad.net/neutron/+bug/1767028 > It could be related to Octavia. Not confirmed, do not know version of > used OpenStack. Logs from Neutron attached. Importance not set. > [OPEN QUESTION]: how to link with other project? > > character of set image property multiqueue command is wrong > https://bugs.launchpad.net/neutron/+bug/1767267 > Confirmed doc issue, some typos/command syntax issues. Importance not > set. > > Neutron agent internal ports remain untagged for some time, which > makes > them trunk ports > https://bugs.launchpad.net/neutron/+bug/1767422 > Confirmed. Fix proposed. > > [DVR] br-int in compute node will send unknown unicast to sg-xxx > https://bugs.launchpad.net/neutron/+bug/1767811 > Clarifying. > > Cheers, > Paweł > > _____________________________________________________________________ > _____ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubs > cribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev From openstack at nemebean.com Mon Apr 30 21:16:35 2018 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 30 Apr 2018 16:16:35 -0500 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: <1525100618-sup-9669@lrrr.local> References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> Message-ID: Resending from an address that is subscribed to the list. Apologies to those of you who get this twice. On 04/30/2018 10:06 AM, Doug Hellmann wrote: > It would be useful to have more input from PTLs on this issue, so I'm > CCing all of them to get their attention. > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: >> It's time to talk about the next steps in our migration from python >> 2 to python 3. >> >> Up to this point we have mostly focused on reaching a state where >> we support both versions of the language. We are not quite there >> with all projects, as you can see by reviewing the test coverage >> status information at >> https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects >> >> Still, we need to press on to the next phase of the migration, which >> I have been calling "Python 3 first". This is where we use python >> 3 as the default, for everything, and set up the exceptions we need >> for anything that still requires python 2. >> >> To reach that stage, we need to: >> >> 1. Change the documentation and release notes jobs to use python 3. >> (The Oslo team recently completed this, and found that we did >> need to make a few small code changes to get them to work.) >> 2. 
Change (or duplicate) all functional test jobs to run under >> python 3. >> 3. Change the packaging jobs to use python 3. >> 4. Update devstack to use 3 by default and require setting a flag to >> use 2. (This may trigger other job changes.) >> >> At that point, all of our deliverables will be produced using python >> 3, and we can be relatively confident that if we no longer had >> access to python 2 we could still continue operating. We could also >> start updating deployment tools to use either python 3 or 2, so >> that users could actually deploy using the python 3 versions of >> services. >> >> Somewhere in that time frame our third-party CI systems will need >> to ensure they have python 3 support as well. >> >> After the "Python 3 first" phase is completed we should release >> one series using the packages built with python 3. Perhaps Stein? >> Or is that too ambitious? >> >> Next, we will be ready to address the prerequisites for "Python 3 >> only," which will allow us to drop Python 2 support. >> >> We need to wait to drop python 2 support as a community, rather >> than going one project at a time, to avoid doubling the work of >> downstream consumers such as distros and independent deployers. We >> don't want them to have to package all (or even a large number) of >> the dependencies of OpenStack twice because they have to install >> some services running under python 2 and others under 3. Ideally >> they would be able to upgrade all of the services on a node together >> as part of their transition to the new version, without ending up >> with a python 2 version of a dependency along side a python 3 version >> of the same package. >> >> The remaining items could be fixed earlier, but this is the point >> at which they would block us: >> >> 1. Fix oslo.service functional tests -- the Oslo team needs help >> maintaining this library. Alternatively, we could move all >> services to use cotyledon (https://pypi.org/project/cotyledon/). For everyone's awareness, we discussed this in the Oslo meeting today and our first step is to see how many, if any, services are actually relying on the oslo.service functionality that doesn't work in Python 3 today. From there we will come up with a plan for how to move forward. https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. >> >> 2. Finish the unit test and functional test ports so that all of >> our tests can run under python 3 (this implies that the services >> all run under python 3, so there is no more porting to do). And integration tests? I know for the initial python 3 goal we said just unit and functional, but it seems to me that we can't claim full python 3 compatibility until we can run our tempest jobs against python 3-based OpenStack. >> >> Finally, after we have *all* tests running on python 3, we can >> safely drop python 2. >> >> We have previously discussed the end of the T cycle as the point >> at which we would have all of those tests running, and if that holds >> true we could reasonably drop python 2 during the beginning of the >> U cycle, in late 2019 and before the 2020 cut-off point when upstream >> python 2 support will be dropped. >> >> I need some info from the deployment tool teams to understand whether >> they would be ready to take the plunge during T or U and start >> deploying only the python 3 version. Are there other upgrade issues >> that need to be addressed to support moving from 2 to 3? Something >> that might be part of the platform(s), rather than OpenStack itself? 
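As a concrete reference for phase 4 in the quoted list above: devstack already exposes a python 3 toggle today, so the proposal there amounts to flipping its default. A minimal local.conf sketch (the version pin is only an example):

    [[local|localrc]]
    # USE_PYTHON3 and PYTHON3_VERSION are existing devstack settings;
    # phase 4 would make True the default rather than an opt-in.
    USE_PYTHON3=True
    PYTHON3_VERSION=3.5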
Alex can probably expand on this, but I know TripleO has some challenges in this area. Specifically the fact that CentOS 7 will only ever support Python 2 and CentOS 8 is planned to only support Python 3. Since CentOS 8 is not a thing yet and no release dates are announced they're having to use Fedora for Python 3 testing, which isn't something that will be supported long-term. That makes things...complicated. Some more details are in the PTG discussion wrap-up thread: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128481.html That said, I believe the plan is to be testing on Python 3 by T, so I guess that's ultimately the answer to your question. >> >> What else have I missed in these phases? Other jobs? Other blocking >> conditions? >> >> Doug From mriedemos at gmail.com Mon Apr 30 21:21:18 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 30 Apr 2018 16:21:18 -0500 Subject: [openstack-dev] [api] REST limitations and GraghGL inception? In-Reply-To: References: Message-ID: On 4/29/2018 10:53 PM, Gilles Dubreuil wrote: > Remember Boston's Summit presentation [1] about GraphQL [2] and how it > addresses REST limitations. > I wonder if any project has been thinking about using GraphQL. I haven't > find any mention or pointers about it. > > GraphQL takes a complete different approach compared to REST. So we can > finally forget about REST API Description languages > (OpenAPI/Swagger/WSDL/WADL/JSON-API/ETC) and HATEOS (the hypermedia > approach which doesn't describe how to use it). > > So, once passed the point where 'REST vs GraphQL' is like comparing SQL > and no-SQL DBMS and therefore have different applications, there are no > doubt the complexity of most OpenStack projects are good candidates for > GraphQL. > > Besides topics such as efficiency, decoupling, no version management > need there many other powerful features such as API Schema out of the > box and better automation down that track. > > It looks like the dream of a conduit between API services and consumers > might have finally come true so we could move-on an worry about other > things. > > So has anyone already starting looking into it? > > [1] > https://www.openstack.org/videos/boston-2017/building-modern-apis-with-graphql > > [2] http://graphql.org Not to speak for him, but Sean Dague had a blog post about REST API microversions in OpenStack and there is a Q&A bit at the bottom about GraphQL replacing the need for microversions: https://dague.net/2017/12/11/rest-api-microversions/ Since I don't expect Sean to magically appear to reply to this thread, I thought I'd pass this along. -- Thanks, Matt From mriedemos at gmail.com Mon Apr 30 21:28:55 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 30 Apr 2018 16:28:55 -0500 Subject: [openstack-dev] [Openstack-operators] [nova] Default scheduler filters survey In-Reply-To: References: Message-ID: <5333196f-eab5-3343-7476-cfffddb5c299@gmail.com> On 4/30/2018 11:41 AM, Mathieu Gagné wrote: > [6] Used to filter Ironic nodes based on the 'reserved_for_user_id' > Ironic node property. > This is mainly used when enrolling existing nodes already living > on a different system. > We reserve the node to a special internal user so the customer > cannot reserve > the node by mistake until the process is completed. > Latest version of Nova dropped user_id from RequestSpec. We had to > add it back. See https://review.openstack.org/#/c/565340/ for context on the regression mentioned about RequestSpec.user_id. 
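To make the RequestSpec.user_id dependency concrete, here is a minimal sketch of the kind of out-of-tree weigher that breaks when user_id goes away -- assuming nova's BaseHostWeigher interface; where the reservation attribute on the host state comes from is deployment-specific and purely illustrative here:

    from nova.scheduler import weights

    class ReservedHostForUserWeigher(weights.BaseHostWeigher):
        """Favor hosts reserved for the requesting user (sketch only)."""

        def _weigh_object(self, host_state, weight_properties):
            # weight_properties is the RequestSpec; this comparison is
            # exactly what stops working if the spec no longer carries
            # user_id, which is the regression discussed above.
            reserved = getattr(host_state, 'reserved_for_user_id', None)
            return 1.0 if reserved == weight_properties.user_id else 0.0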
Thanks Mathieu for jumping in #openstack-nova and discussing it. -- Thanks, Matt From mtreinish at kortar.org Mon Apr 30 21:42:20 2018 From: mtreinish at kortar.org (Matthew Treinish) Date: Mon, 30 Apr 2018 17:42:20 -0400 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> Message-ID: <20180430214220.GA15842@zeong> On Mon, Apr 30, 2018 at 04:16:35PM -0500, Ben Nemec wrote: > Resending from an address that is subscribed to the list. Apologies to > those of you who get this twice. > > On 04/30/2018 10:06 AM, Doug Hellmann wrote: > > It would be useful to have more input from PTLs on this issue, so I'm > > CCing all of them to get their attention. > > > > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: > > > It's time to talk about the next steps in our migration from python > > > 2 to python 3. > > > > > > Up to this point we have mostly focused on reaching a state where > > > we support both versions of the language. We are not quite there > > > with all projects, as you can see by reviewing the test coverage > > > status information at > > > https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects > > > > > > Still, we need to press on to the next phase of the migration, which > > > I have been calling "Python 3 first". This is where we use python > > > 3 as the default, for everything, and set up the exceptions we need > > > for anything that still requires python 2. > > > > > > To reach that stage, we need to: > > > > > > 1. Change the documentation and release notes jobs to use python 3. > > > (The Oslo team recently completed this, and found that we did > > > need to make a few small code changes to get them to work.) > > > 2. Change (or duplicate) all functional test jobs to run under > > > python 3. > > > 3. Change the packaging jobs to use python 3. > > > 4. Update devstack to use 3 by default and require setting a flag to > > > use 2. (This may trigger other job changes.) > > > > > > At that point, all of our deliverables will be produced using python > > > 3, and we can be relatively confident that if we no longer had > > > access to python 2 we could still continue operating. We could also > > > start updating deployment tools to use either python 3 or 2, so > > > that users could actually deploy using the python 3 versions of > > > services. > > > > > > Somewhere in that time frame our third-party CI systems will need > > > to ensure they have python 3 support as well. > > > > > > After the "Python 3 first" phase is completed we should release > > > one series using the packages built with python 3. Perhaps Stein? > > > Or is that too ambitious? > > > > > > Next, we will be ready to address the prerequisites for "Python 3 > > > only," which will allow us to drop Python 2 support. > > > > > > We need to wait to drop python 2 support as a community, rather > > > than going one project at a time, to avoid doubling the work of > > > downstream consumers such as distros and independent deployers. We > > > don't want them to have to package all (or even a large number) of > > > the dependencies of OpenStack twice because they have to install > > > some services running under python 2 and others under 3. 
Ideally > > > they would be able to upgrade all of the services on a node together > > > as part of their transition to the new version, without ending up > > > with a python 2 version of a dependency along side a python 3 version > > > of the same package. > > > > > > The remaining items could be fixed earlier, but this is the point > > > at which they would block us: > > > > > > 1. Fix oslo.service functional tests -- the Oslo team needs help > > > maintaining this library. Alternatively, we could move all > > > services to use cotyledon (https://pypi.org/project/cotyledon/). > > For everyone's awareness, we discussed this in the Oslo meeting today and > our first step is to see how many, if any, services are actually relying on > the oslo.service functionality that doesn't work in Python 3 today. From > there we will come up with a plan for how to move forward. > > https://bugs.launchpad.net/manila/+bug/1482633 is the original bug. > > > > > > > 2. Finish the unit test and functional test ports so that all of > > > our tests can run under python 3 (this implies that the services > > > all run under python 3, so there is no more porting to do). > > And integration tests? I know for the initial python 3 goal we said just > unit and functional, but it seems to me that we can't claim full python 3 > compatibility until we can run our tempest jobs against python 3-based > OpenStack. They already are running, and have been since the Atlanta PTG (which was the same cycle as the goal): https://review.openstack.org/#/c/436540/ You can see the gate jobs history here: http://status.openstack.org/openstack-health/#/job/tempest-full-py3 -Matt Treinish > > > > > > > Finally, after we have *all* tests running on python 3, we can > > > safely drop python 2. > > > > > > We have previously discussed the end of the T cycle as the point > > > at which we would have all of those tests running, and if that holds > > > true we could reasonably drop python 2 during the beginning of the > > > U cycle, in late 2019 and before the 2020 cut-off point when upstream > > > python 2 support will be dropped. > > > > > > I need some info from the deployment tool teams to understand whether > > > they would be ready to take the plunge during T or U and start > > > deploying only the python 3 version. Are there other upgrade issues > > > that need to be addressed to support moving from 2 to 3? Something > > > that might be part of the platform(s), rather than OpenStack itself? > > Alex can probably expand on this, but I know TripleO has some challenges in > this area. Specifically the fact that CentOS 7 will only ever support > Python 2 and CentOS 8 is planned to only support Python 3. Since CentOS 8 is > not a thing yet and no release dates are announced they're having to use > Fedora for Python 3 testing, which isn't something that will be supported > long-term. That makes things...complicated. > > Some more details are in the PTG discussion wrap-up thread: > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128481.html > > That said, I believe the plan is to be testing on Python 3 by T, so I guess > that's ultimately the answer to your question. > > > > > > > What else have I missed in these phases? Other jobs? Other blocking > > > conditions? 
> > > > > > Doug > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From aschultz at redhat.com Mon Apr 30 21:43:16 2018 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 30 Apr 2018 15:43:16 -0600 Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition In-Reply-To: References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local> Message-ID: On Mon, Apr 30, 2018 at 3:16 PM, Ben Nemec wrote: > Resending from an address that is subscribed to the list. Apologies to > those of you who get this twice. > > On 04/30/2018 10:06 AM, Doug Hellmann wrote: >> >> It would be useful to have more input from PTLs on this issue, so I'm >> CCing all of them to get their attention. >> >> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400: >>> >>> It's time to talk about the next steps in our migration from python >>> 2 to python 3. >>> >>> Up to this point we have mostly focused on reaching a state where >>> we support both versions of the language. We are not quite there >>> with all projects, as you can see by reviewing the test coverage >>> status information at >>> >>> https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects >>> >>> Still, we need to press on to the next phase of the migration, which >>> I have been calling "Python 3 first". This is where we use python >>> 3 as the default, for everything, and set up the exceptions we need >>> for anything that still requires python 2. >>> >>> To reach that stage, we need to: >>> >>> 1. Change the documentation and release notes jobs to use python 3. >>> (The Oslo team recently completed this, and found that we did >>> need to make a few small code changes to get them to work.) >>> 2. Change (or duplicate) all functional test jobs to run under >>> python 3. >>> 3. Change the packaging jobs to use python 3. >>> 4. Update devstack to use 3 by default and require setting a flag to >>> use 2. (This may trigger other job changes.) >>> >>> At that point, all of our deliverables will be produced using python >>> 3, and we can be relatively confident that if we no longer had >>> access to python 2 we could still continue operating. We could also >>> start updating deployment tools to use either python 3 or 2, so >>> that users could actually deploy using the python 3 versions of >>> services. >>> >>> Somewhere in that time frame our third-party CI systems will need >>> to ensure they have python 3 support as well. >>> >>> After the "Python 3 first" phase is completed we should release >>> one series using the packages built with python 3. Perhaps Stein? >>> Or is that too ambitious? >>> >>> Next, we will be ready to address the prerequisites for "Python 3 >>> only," which will allow us to drop Python 2 support. >>> >>> We need to wait to drop python 2 support as a community, rather >>> than going one project at a time, to avoid doubling the work of >>> downstream consumers such as distros and independent deployers. 
>>> We don't want them to have to package all (or even a large number) of
>>> the dependencies of OpenStack twice because they have to install
>>> some services running under python 2 and others under 3. Ideally
>>> they would be able to upgrade all of the services on a node together
>>> as part of their transition to the new version, without ending up
>>> with a python 2 version of a dependency alongside a python 3 version
>>> of the same package.
>>>
>>> The remaining items could be fixed earlier, but this is the point
>>> at which they would block us:
>>>
>>> 1. Fix oslo.service functional tests -- the Oslo team needs help
>>>    maintaining this library. Alternatively, we could move all
>>>    services to use cotyledon (https://pypi.org/project/cotyledon/).
>
> For everyone's awareness, we discussed this in the Oslo meeting today and
> our first step is to see how many, if any, services are actually relying on
> the oslo.service functionality that doesn't work in Python 3 today. From
> there we will come up with a plan for how to move forward.
>
> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
>
>>> 2. Finish the unit test and functional test ports so that all of
>>>    our tests can run under python 3 (this implies that the services
>>>    all run under python 3, so there is no more porting to do).
>
> And integration tests? I know for the initial python 3 goal we said just
> unit and functional, but it seems to me that we can't claim full python 3
> compatibility until we can run our tempest jobs against python 3-based
> OpenStack.
>
>>> Finally, after we have *all* tests running on python 3, we can
>>> safely drop python 2.
>>>
>>> We have previously discussed the end of the T cycle as the point
>>> at which we would have all of those tests running, and if that holds
>>> true we could reasonably drop python 2 during the beginning of the
>>> U cycle, in late 2019 and before the 2020 cut-off point when upstream
>>> python 2 support will be dropped.
>>>
>>> I need some info from the deployment tool teams to understand whether
>>> they would be ready to take the plunge during T or U and start
>>> deploying only the python 3 version. Are there other upgrade issues
>>> that need to be addressed to support moving from 2 to 3? Something
>>> that might be part of the platform(s), rather than OpenStack itself?
>
> Alex can probably expand on this, but I know TripleO has some challenges in
> this area. Specifically the fact that CentOS 7 will only ever support
> Python 2 and CentOS 8 is planned to only support Python 3. Since CentOS 8 is
> not a thing yet and no release dates are announced, they're having to use
> Fedora for Python 3 testing, which isn't something that will be supported
> long-term. That makes things...complicated.
>
> Some more details are in the PTG discussion wrap-up thread:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128481.html
>
> That said, I believe the plan is to be testing on Python 3 by T, so I guess
> that's ultimately the answer to your question.

Yes, from a TripleO perspective there are a few different ways to
address this, but we will likely need to follow the availability of
python3 on the current release of a given CentOS version. The switch
to containers may allow us to decouple from the base OS python a bit,
but that would mean that we'd need to be able to pull in Fedora images
with python3 packages (via Kolla). The work on this front is very
early on, so I'm not sure we have a timeline to commit to T.
Thanks,
-Alex

>>>
>>> What else have I missed in these phases? Other jobs? Other blocking
>>> conditions?
>>>
>>> Doug

From doug at doughellmann.com Mon Apr 30 21:58:30 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 30 Apr 2018 17:58:30 -0400
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To:
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local>
Message-ID: <1525125208-sup-9821@lrrr.local>

Excerpts from Ben Nemec's message of 2018-04-30 16:16:35 -0500:
> Resending from an address that is subscribed to the list. Apologies to
> those of you who get this twice.
>
> On 04/30/2018 10:06 AM, Doug Hellmann wrote:
> > It would be useful to have more input from PTLs on this issue, so I'm
> > CCing all of them to get their attention.
> >
> > Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> >> It's time to talk about the next steps in our migration from python
> >> 2 to python 3.
> >>
> >> Up to this point we have mostly focused on reaching a state where
> >> we support both versions of the language. We are not quite there
> >> with all projects, as you can see by reviewing the test coverage
> >> status information at
> >> https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
> >>
> >> Still, we need to press on to the next phase of the migration, which
> >> I have been calling "Python 3 first". This is where we use python
> >> 3 as the default, for everything, and set up the exceptions we need
> >> for anything that still requires python 2.
> >>
> >> To reach that stage, we need to:
> >>
> >> 1. Change the documentation and release notes jobs to use python 3.
> >>    (The Oslo team recently completed this, and found that we did
> >>    need to make a few small code changes to get them to work.)
> >> 2. Change (or duplicate) all functional test jobs to run under
> >>    python 3.
> >> 3. Change the packaging jobs to use python 3.
> >> 4. Update devstack to use 3 by default and require setting a flag to
> >>    use 2. (This may trigger other job changes.)
> >>
> >> At that point, all of our deliverables will be produced using python
> >> 3, and we can be relatively confident that if we no longer had
> >> access to python 2 we could still continue operating. We could also
> >> start updating deployment tools to use either python 3 or 2, so
> >> that users could actually deploy using the python 3 versions of
> >> services.
> >>
> >> Somewhere in that time frame our third-party CI systems will need
> >> to ensure they have python 3 support as well.
> >>
> >> After the "Python 3 first" phase is completed we should release
> >> one series using the packages built with python 3. Perhaps Stein?
> >> Or is that too ambitious?
> >>
> >> Next, we will be ready to address the prerequisites for "Python 3
> >> only," which will allow us to drop Python 2 support.
> >>
> >> We need to wait to drop python 2 support as a community, rather
> >> than going one project at a time, to avoid doubling the work of
> >> downstream consumers such as distros and independent deployers.
> >> We don't want them to have to package all (or even a large number) of
> >> the dependencies of OpenStack twice because they have to install
> >> some services running under python 2 and others under 3. Ideally
> >> they would be able to upgrade all of the services on a node together
> >> as part of their transition to the new version, without ending up
> >> with a python 2 version of a dependency alongside a python 3 version
> >> of the same package.
> >>
> >> The remaining items could be fixed earlier, but this is the point
> >> at which they would block us:
> >>
> >> 1. Fix oslo.service functional tests -- the Oslo team needs help
> >>    maintaining this library. Alternatively, we could move all
> >>    services to use cotyledon (https://pypi.org/project/cotyledon/).
>
> For everyone's awareness, we discussed this in the Oslo meeting today
> and our first step is to see how many, if any, services are actually
> relying on the oslo.service functionality that doesn't work in Python 3
> today. From there we will come up with a plan for how to move forward.
>
> https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
>
> >> 2. Finish the unit test and functional test ports so that all of
> >>    our tests can run under python 3 (this implies that the services
> >>    all run under python 3, so there is no more porting to do).
>
> And integration tests? I know for the initial python 3 goal we said
> just unit and functional, but it seems to me that we can't claim full
> python 3 compatibility until we can run our tempest jobs against python
> 3-based OpenStack.

Good point. The wiki page lists the integrated-gate-py35 job for many
projects, but not all will use that particular test job. I'm not sure
what other sort of integration jobs we do have, but I agree we should
have versions of them working for python 3.

>
> >> Finally, after we have *all* tests running on python 3, we can
> >> safely drop python 2.
> >>
> >> We have previously discussed the end of the T cycle as the point
> >> at which we would have all of those tests running, and if that holds
> >> true we could reasonably drop python 2 during the beginning of the
> >> U cycle, in late 2019 and before the 2020 cut-off point when upstream
> >> python 2 support will be dropped.
> >>
> >> I need some info from the deployment tool teams to understand whether
> >> they would be ready to take the plunge during T or U and start
> >> deploying only the python 3 version. Are there other upgrade issues
> >> that need to be addressed to support moving from 2 to 3? Something
> >> that might be part of the platform(s), rather than OpenStack itself?
>
> Alex can probably expand on this, but I know TripleO has some challenges
> in this area. Specifically the fact that CentOS 7 will only ever support
> Python 2 and CentOS 8 is planned to only support Python 3. Since CentOS 8
> is not a thing yet and no release dates are announced, they're having to
> use Fedora for Python 3 testing, which isn't something that will be
> supported long-term. That makes things...complicated.
>
> Some more details are in the PTG discussion wrap-up thread:
> http://lists.openstack.org/pipermail/openstack-dev/2018-March/128481.html
>
> That said, I believe the plan is to be testing on Python 3 by T, so I
> guess that's ultimately the answer to your question.

Yes, that's more or less what I was looking for.

Doug

>
> >> What else have I missed in these phases? Other jobs? Other blocking
> >> conditions?
> >>
> >> Doug
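A sketch of how a project would opt into the python 3 integration testing
Doug mentions, assuming the common Zuul v3 pattern where shared job sets
such as integrated-gate-py35 are published as project-templates; the
project name and the template name other than integrated-gate-py35 are
illustrative, not taken from any real repository:

    - project:
        name: openstack/example-service   # hypothetical project
        templates:
          - openstack-python35-jobs       # python 3.5 unit test jobs
          - integrated-gate-py35          # tempest against py3 services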
From doug at doughellmann.com Mon Apr 30 22:00:27 2018
From: doug at doughellmann.com (Doug Hellmann)
Date: Mon, 30 Apr 2018 18:00:27 -0400
Subject: [openstack-dev] [all][tc][ptls] final stages of python 3 transition
In-Reply-To:
References: <1524689037-sup-783@lrrr.local> <1525100618-sup-9669@lrrr.local>
Message-ID: <1525125561-sup-8369@lrrr.local>

Excerpts from Alex Schultz's message of 2018-04-30 15:43:16 -0600:
> On Mon, Apr 30, 2018 at 3:16 PM, Ben Nemec wrote:
> > Resending from an address that is subscribed to the list. Apologies to
> > those of you who get this twice.
> >
> > On 04/30/2018 10:06 AM, Doug Hellmann wrote:
> >>
> >> It would be useful to have more input from PTLs on this issue, so I'm
> >> CCing all of them to get their attention.
> >>
> >> Excerpts from Doug Hellmann's message of 2018-04-25 16:54:46 -0400:
> >>>
> >>> It's time to talk about the next steps in our migration from python
> >>> 2 to python 3.
> >>>
> >>> Up to this point we have mostly focused on reaching a state where
> >>> we support both versions of the language. We are not quite there
> >>> with all projects, as you can see by reviewing the test coverage
> >>> status information at
> >>>
> >>> https://wiki.openstack.org/wiki/Python3#Python_3_Status_of_OpenStack_projects
> >>>
> >>> Still, we need to press on to the next phase of the migration, which
> >>> I have been calling "Python 3 first". This is where we use python
> >>> 3 as the default, for everything, and set up the exceptions we need
> >>> for anything that still requires python 2.
> >>>
> >>> To reach that stage, we need to:
> >>>
> >>> 1. Change the documentation and release notes jobs to use python 3.
> >>>    (The Oslo team recently completed this, and found that we did
> >>>    need to make a few small code changes to get them to work.)
> >>> 2. Change (or duplicate) all functional test jobs to run under
> >>>    python 3.
> >>> 3. Change the packaging jobs to use python 3.
> >>> 4. Update devstack to use 3 by default and require setting a flag to
> >>>    use 2. (This may trigger other job changes.)
> >>>
> >>> At that point, all of our deliverables will be produced using python
> >>> 3, and we can be relatively confident that if we no longer had
> >>> access to python 2 we could still continue operating. We could also
> >>> start updating deployment tools to use either python 3 or 2, so
> >>> that users could actually deploy using the python 3 versions of
> >>> services.
> >>>
> >>> Somewhere in that time frame our third-party CI systems will need
> >>> to ensure they have python 3 support as well.
> >>>
> >>> After the "Python 3 first" phase is completed we should release
> >>> one series using the packages built with python 3. Perhaps Stein?
> >>> Or is that too ambitious?
> >>>
> >>> Next, we will be ready to address the prerequisites for "Python 3
> >>> only," which will allow us to drop Python 2 support.
> >>>
> >>> We need to wait to drop python 2 support as a community, rather
> >>> than going one project at a time, to avoid doubling the work of
> >>> downstream consumers such as distros and independent deployers.
> >>> We don't want them to have to package all (or even a large number) of
> >>> the dependencies of OpenStack twice because they have to install
> >>> some services running under python 2 and others under 3. Ideally
> >>> they would be able to upgrade all of the services on a node together
> >>> as part of their transition to the new version, without ending up
> >>> with a python 2 version of a dependency alongside a python 3 version
> >>> of the same package.
> >>>
> >>> The remaining items could be fixed earlier, but this is the point
> >>> at which they would block us:
> >>>
> >>> 1. Fix oslo.service functional tests -- the Oslo team needs help
> >>>    maintaining this library. Alternatively, we could move all
> >>>    services to use cotyledon (https://pypi.org/project/cotyledon/).
> >
> > For everyone's awareness, we discussed this in the Oslo meeting today and
> > our first step is to see how many, if any, services are actually relying on
> > the oslo.service functionality that doesn't work in Python 3 today. From
> > there we will come up with a plan for how to move forward.
> >
> > https://bugs.launchpad.net/manila/+bug/1482633 is the original bug.
> >
> >>> 2. Finish the unit test and functional test ports so that all of
> >>>    our tests can run under python 3 (this implies that the services
> >>>    all run under python 3, so there is no more porting to do).
> >
> > And integration tests? I know for the initial python 3 goal we said just
> > unit and functional, but it seems to me that we can't claim full python 3
> > compatibility until we can run our tempest jobs against python 3-based
> > OpenStack.
> >
> >>> Finally, after we have *all* tests running on python 3, we can
> >>> safely drop python 2.
> >>>
> >>> We have previously discussed the end of the T cycle as the point
> >>> at which we would have all of those tests running, and if that holds
> >>> true we could reasonably drop python 2 during the beginning of the
> >>> U cycle, in late 2019 and before the 2020 cut-off point when upstream
> >>> python 2 support will be dropped.
> >>>
> >>> I need some info from the deployment tool teams to understand whether
> >>> they would be ready to take the plunge during T or U and start
> >>> deploying only the python 3 version. Are there other upgrade issues
> >>> that need to be addressed to support moving from 2 to 3? Something
> >>> that might be part of the platform(s), rather than OpenStack itself?
> >
> > Alex can probably expand on this, but I know TripleO has some challenges in
> > this area. Specifically the fact that CentOS 7 will only ever support
> > Python 2 and CentOS 8 is planned to only support Python 3. Since CentOS 8 is
> > not a thing yet and no release dates are announced, they're having to use
> > Fedora for Python 3 testing, which isn't something that will be supported
> > long-term. That makes things...complicated.
> >
> > Some more details are in the PTG discussion wrap-up thread:
> > http://lists.openstack.org/pipermail/openstack-dev/2018-March/128481.html
> >
> > That said, I believe the plan is to be testing on Python 3 by T, so I guess
> > that's ultimately the answer to your question.
>
> Yes, from a TripleO perspective there are a few different ways to
> address this, but we will likely need to follow the availability of
> python3 on the current release of a given CentOS version. The switch
> to containers may allow us to decouple from the base OS python a bit,
> but that would mean that we'd need to be able to pull in Fedora images
> with python3 packages (via Kolla). The work on this front is very
> early on, so I'm not sure we have a timeline to commit to T.
>
> Thanks,
> -Alex

OK, so it sounds like no earlier than T for TripleO. What about some of
the other deployment tools? Can members of those teams give us any sort
of guidance about when python 3 support is expected?

Doug
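On the devstack point in Doug's list (item 4): the flag for opting in to
python 3 already exists, so the change under discussion is only about
flipping the default. A local.conf fragment for deploying with python 3
ahead of that flip might look like the following -- the variable names
follow devstack's existing python 3 support, and the version pin is
illustrative:

    [[local|localrc]]
    USE_PYTHON3=True
    PYTHON3_VERSION=3.5

Once the default changes as proposed, the same mechanism would presumably
work in reverse, with a flag to request python 2.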
From ramamani.yeleswarapu at intel.com Mon Apr 30 23:32:41 2018
From: ramamani.yeleswarapu at intel.com (Yeleswarapu, Ramamani)
Date: Mon, 30 Apr 2018 23:32:41 +0000
Subject: [openstack-dev] [ironic] this week's priorities and subteam reports
Message-ID:

Hi,

We are glad to present this week's priorities and subteam report for
Ironic. As usual, this is pulled directly from the Ironic whiteboard[0]
and formatted.

This Week's Priorities (as of the weekly ironic meeting)
========================================================

Weekly priorities
-----------------
- Bios interface support
  - BIOS Settings: Add BIOSInterface: https://review.openstack.org/507793
  - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200
  - Add Node BIOS support - REST API: https://review.openstack.org/512579
- Hardware type cleanup
  - https://review.openstack.org/#/q/status:open+topic:hw-types
- https://review.openstack.org/#/q/topic:api-jobs to unblock api CI test cleanup
- Python-ironicclient things
  - Accept a version on set_provision_state - https://review.openstack.org/#/c/557850/
  - Wire in header microversion into client negotiation - https://review.openstack.org/#/c/558027/
- Remaining Rescue patches
  - https://review.openstack.org/#/c/528699/ - Tempest tests with nova
    (This can land after nova work is done. But, it should be ready to get the nova patch reviewed.)
    (Rebased by TheJulia 20180416)
- Management interface boot_mode change
  - https://review.openstack.org/#/c/526773/
- Bug Fixes
  - Any this week?
- Housekeeping:

Vendor priorities
-----------------
cisco-ucs: Patches in the works for SDK update, but not posted yet; currently rebuilding third-party CI infra after a disaster...
idrac: RFE and first several patches for adding UEFI support will be posted by Tuesday, 1/9
ilo: None
irmc: None - a few items are work in progress
oneview: None at this time - No subteam at present.
xclarity: Fix XClarity parameters discrepancy: https://review.openstack.org/#/c/561405/

Bugs (dtantsur, vdrok, TheJulia)
--------------------------------
- (TheJulia) Ironic has moved to Storyboard. Dtantsur has indicated he will update the tool that generates these stats.
  - initial version (much fewer features): https://github.com/dtantsur/ironic-bug-report
- Stats (new version, no diff this time):
  - Total bugs: 283
    - of them untriaged: 256
  - Total RFEs: 238
    - of them untriaged: 27
- HIGH bugs with patches to review:
  - Clean steps are not tested in gate https://bugs.launchpad.net/ironic/+bug/1523640:
    Add manual clean step ironic standalone test https://review.openstack.org/#/c/429770/15
    - Needs to be reproposed to the ironic tempest plugin repository.
  - prepare_instance() is not called for whole disk images with 'agent' deploy interface https://bugs.launchpad.net/ironic/+bug/1713916:
    - Fix ``agent`` deploy interface to call ``boot.prepare_instance`` https://review.openstack.org/#/c/499050/ MERGED
    - Backport to stable/queens proposed

Priorities
==========

Deploy Steps (rloo, mgoddard)
-----------------------------
- spec for deployment steps framework has merged: https://review.openstack.org/#/c/549493/
- waiting for code from rloo, no timeframe yet

BIOS config framework (zshi, yolanda, mgoddard, hshiina)
--------------------------------------------------------
- status as of 30 April 2018:
  - Spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/generic-bios-config.html
  - List of ordered patches:
    - BIOS Settings: Add DB model: https://review.openstack.org/511162
      agreed that column type of bios setting value is string; blocked by the gate failure MERGED
    - Add bios_interface db field https://review.openstack.org/528609
      many +2s, can be merged soon after the patch above is merged MERGED
    - BIOS Settings: Add DB API: https://review.openstack.org/511402
      1x +1, actively reviewed and updated MERGED
    - BIOS Settings: Add RPC object https://review.openstack.org/511714 MERGED
    - Add BIOSInterface to base driver class https://review.openstack.org/507793
    - BIOS Settings: Add BIOS caching: https://review.openstack.org/512200
    - Add Node BIOS support - REST API: https://review.openstack.org/512579

Conductor Location Awareness (jroll, dtantsur)
----------------------------------------------
- story: https://storyboard.openstack.org/#!/story/2001795
- (April 30) spec has good feedback, one issue to resolve, should be able to land this week
  - https://review.openstack.org/#/c/559420/ needs update

Reference architecture guide (dtantsur, jroll)
----------------------------------------------
- story: https://storyboard.openstack.org/#!/story/2001745
- status as of 30 April 2018:
  - Dublin PTG consensus was to start with small architectural building blocks.
  - list of cases from the Denver PTG - see in the story
  - nothing new this week

Graphical console interface (mkrai, anup-d-navare, TheJulia)
------------------------------------------------------------
- status as of 30 Apr 2018:
  - No update - Have not had a chance to get to this yet this cycle. Goal for the cycle was a plan, not necessarily implementation.
  - VNC Graphical console spec: https://review.openstack.org/#/c/306074/
    - needs update, address comments
  - nova blueprint: https://blueprints.launchpad.net/nova/+spec/ironic-vnc-console
  - Spec has been updated for review (anupn)

Neutron event processing (vdrok)
--------------------------------
- status as of 30 April 2018:
  - spec at https://review.openstack.org/343684 - Needs update
  - WIP code at https://review.openstack.org/440778
    - code rewrite done, should be able to test it this week and get on review; spec update coming afterwards

Goals
=====

Make nova flexible with ironic API versions (TheJulia)
------------------------------------------------------
Status as of 23 APR 2018:
(TheJulia) No update this week. Alternatively, existing functionality could be used. The rescue patch for nova might end up landing with a version list. I've checked with some nova folks and they are on board with that option as a short-term compromise.
(TheJulia) We need python-ironicclient reviews, which would be required to do this:
https://review.openstack.org/#/c/557850/
https://review.openstack.org/#/c/558027/

Storyboard migration (TheJulia, dtantsur)
-----------------------------------------
Status as of Apr 30th:
- Done with moving data.
- dtantsur to rewrite the bug dashboard - in progress
  https://github.com/dtantsur/ironic-bug-report - suggestions welcome

Management interface refactoring (etingof, dtantsur)
----------------------------------------------------
- Status as of 23 Apr:
  - boot mode in ManagementInterface: https://review.openstack.org/#/c/526773/ active review

Getting clean steps (rloo, TheJulia)
------------------------------------
- Status as of April 22nd 2018:
  - spec: https://review.openstack.org/#/c/507910/ - Updated

Project vision (jroll, TheJulia)
--------------------------------
- Status as of April 16:
  - jroll still trying to find time to collect enough thoughts for an email

SIGHUP support (rloo)
---------------------
- Status as of April 30:
  - ironic: Done
  - ironic-inspector: Done - doesn't use oslo.service because not sure if flask can be used with it
    - https://review.openstack.org/560243 custom signal handling. MERGED
    - https://review.openstack.org/561823 oslo.service approach
  - networking-baremetal: Done https://review.openstack.org/561257 MERGED
- DONE!
  - Reflected in community's goal: https://storyboard.openstack.org/#!/story/2001545, task 6377. MERGED!

Stretch Goals
=============
NOTE: These items will be migrated into storyboard and will be removed from the weekly whiteboard once storyboard is in place.

Classic driver removal (formerly: classic drivers deprecation) (dtantsur)
-------------------------------------------------------------------------
- spec: http://specs.openstack.org/openstack/ironic-specs/specs/not-implemented/classic-drivers-future.html
- status as of 26 Mar 2018:
  - switch documentation to hardware types:
    - api-ref examples: TODO
    - update https://wiki.openstack.org/wiki/Ironic/Drivers: TODO
      - or should we kill it with fire in favour of the docs?
  - ironic-inspector:
    - documentation: https://review.openstack.org/#/c/545285/ MERGED
      - backport: https://review.openstack.org/#/c/554586/
    - enable fake-hardware in devstack: https://review.openstack.org/#/c/550811/ MERGED
    - change the default discovery driver: https://review.openstack.org/#/c/550464/
  - migration of CI to hardware types
    - IPA: https://review.openstack.org/553431 MERGED
    - ironic-lib: https://review.openstack.org/#/c/552537/ MERGED
    - python-ironicclient: https://review.openstack.org/552543 MERGED
    - python-ironic-inspector-client: https://review.openstack.org/552546 +A MERGED
    - virtualbmc: https://review.openstack.org/#/c/555361/ MERGED
  - started an ML thread tagging potentially affected projects: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html
  - bug needs to be fixed: "periodic tasks of non-classic driver Interfaces aren't run" https://storyboard.openstack.org/#!/story/2001884

Redfish OOB inspection (etingof, deray, stendulker)
---------------------------------------------------
- sushy Storage API -- https://review.openstack.org/#/c/563051/1

Before Rocky
============

CI refactoring and missing test coverage
----------------------------------------
- not considered a priority, it's a 'do it always' thing
- Standalone CI tests (vsaienk0)
  - next patch to be reviewed, needed for 3rd party CI: https://review.openstack.org/#/c/429770/
  - localboot with partitioned image patches:
    - Ironic - add localboot partitioned image test: https://review.openstack.org/#/c/502886/
      Rebase/update required
    - when previous are merged TODO (vsaienko):
      - Upload tinycore partitioned image to tarballs.openstack.org
      - Switch ironic to use tinyipa partitioned image by default
- Missing test coverage (all)
  - portgroups and attach/detach tempest tests: https://review.openstack.org/382476
  - adoption: https://review.openstack.org/#/c/344975/
    - should probably be changed to use standalone tests
  - root device hints: TODO
  - node take over
  - resource classes integration tests: https://review.openstack.org/#/c/443628/
  - radosgw (https://bugs.launchpad.net/ironic/+bug/1737957)

Queens High Priorities
======================

Routed network support (sambetts, vsaienk0, bfournie, hjensas)
--------------------------------------------------------------
- status as of 12 Feb 2018:
  - All code patches are merged.
  - One CI patch left, rework devstack baremetal simulation. To be done in Rocky?
    - This is to have actual 'flat' networks in CI.
  - Placement API work to be done in Rocky due to:
    Challenges with integration to Placement due to the way the integration was done in neutron. Neutron will create a resource provider for network segments in Placement, then it creates an os-aggregate in Nova for the segment and adds nova compute hosts to this aggregate. Ironic nodes cannot be added to host-aggregates.
    I (hjensas) had a short discussion with neutron devs (mlavalle) on the issue:
    http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2018-01-12.log.html#t2018-01-12T17:05:38
    There are patches in Nova to add support for ironic nodes in host-aggregates:
    - https://review.openstack.org/#/c/526753/ allow compute nodes to be associated with host agg
    - https://review.openstack.org/#/c/529135/ (Spec)
- Patches:
  - CI Patches:
    - https://review.openstack.org/#/c/392959/ Rework Ironic devstack baremetal network simulation
- RFEs (Rocky)
  - https://bugs.launchpad.net/networking-baremetal/+bug/1749166
    - TheJulia, March 19th 2018: This RFE seems not to contain detail on what is desired to be improved upon, and ultimately just seems like refactoring/improvement work and may not then need an RFE.
  - https://bugs.launchpad.net/networking-baremetal/+bug/1749162
    - TheJulia, March 19th 2018: This RFE makes sense, although I would classify it as a general improvement. If we wish to adhere to strict RFE approval for networking-baremetal work, then I think we should consider this approved since it is a minor enhancement to improve operation.

Rescue mode (rloo, stendulker)
------------------------------
- Status as of 12 Feb 2018:
  - spec: http://specs.openstack.org/openstack/ironic-specs/specs/approved/implement-rescue-mode.html
  - code: https://review.openstack.org/#/q/topic:bug/1526449+status:open+OR+status:merged
  - ironic side:
    - all code patches have merged except for:
      - Rescue mode standalone tests: https://review.openstack.org/#/c/538119/ (failing CI, not ready for reviews)
      - Tempest tests with nova: https://review.openstack.org/#/c/528699/
      - Run the tempest test on the CI: https://review.openstack.org/#/c/528704/
    - succeeded in rescuing: http://logs.openstack.org/04/528704/16/check/ironic-tempest-dsvm-ipa-wholedisk-bios-agent_ipmitool-tinyipa/4b74169/logs/screen-ir-cond.txt.gz#_Feb_02_09_44_12_940007
  - nova side:
    - https://blueprints.launchpad.net/nova/+spec/ironic-rescue-mode:
      - approved for Queens but didn't get the ironic code (client) done in time
      - (TheJulia) Nova has indicated that this is deferred until Rocky.
    - To get the nova patch merged, we need:
      - release new python-ironicclient - Done
      - update ironicclient version in upper-constraints (this patch will be posted automatically)
      - update ironicclient version in global-requirements (this patch needs to be posted manually)
        Posted https://review.openstack.org/554673
    - code patch: https://review.openstack.org/#/c/416487/ Needs revision
    - CI is needed for nova part to land
      - tiendc is working on CI

Clean up deploy interfaces (vdrok)
----------------------------------
- status as of 5 Feb 2017:
  - patch https://review.openstack.org/524433 needs update and rebase

Zuul v3 jobs in-tree (sambetts, derekh, jlvillal)
-------------------------------------------------
- Next TODO is to convert jobs on master to proper ansible. NOT a high priority though.
- (pas-ha) DNM experimental patch with "devstack-tempest" as base job: https://review.openstack.org/#/c/520167/

OpenStack Priorities
====================

Python 3.5 compatibility (Nisha, Ankit)
---------------------------------------
- Topic: https://review.openstack.org/#/q/topic:goal-python35+NOT+project:openstack/governance+NOT+project:openstack/releases
  - this includes all projects, not only ironic
  - please tag all reviews with topic "goal-python35"
- TODO: submit the python3 job for IPA
- for ironic and ironic-inspector, the job is enabled by disabling swift, as swift is still lacking py3.5 support.
- anupn to update the python3 job to build tinyipa with python3
- (anupn): Talked with swift folks and there is a bug opened upstream https://review.openstack.org/#/c/401397 for py3 support in swift. But this is not a priority for them.
  - Right now the patch passes all gate jobs except the agent_* drivers.
- (TheJulia) It seems we might not have py3 compatibility with swift until the T cycle.

Deploying with Apache and WSGI in CI (pas-ha, vsaienk0)
-------------------------------------------------------
- ironic is mostly finished
  - (pas-ha) needs to be rewritten for uWSGI, patches on review:
    - https://review.openstack.org/#/c/507067
- inspector is TODO and depends on https://review.openstack.org/#/q/topic:bug/1525218
  - delayed as the HA work seems to take a different direction
  - (TheJulia, March 19th, 2018) Perhaps because of the different direction, we should consider ourselves done?

Subprojects
===========

Inspector (dtantsur)
--------------------
- trying to flip dsvm-discovery to use the new dnsmasq pxe filter and failing because of bash :D https://review.openstack.org/#/c/525685/6/devstack/plugin.sh at 202
- follow-ups being merged/reviewed; working on state consistency enhancements https://review.openstack.org/#/c/510928/ too (HA demo follow-up)

Bifrost (TheJulia)
------------------
- Also, it seems a recent authentication change in keystoneauth1 has broken processing of the clouds.yaml files, i.e. the `openstack` command does not work.
  - TheJulia will try to look at this this week.

Drivers:
--------

OneView (???)
~~~~~~~~~~~~~
- OneView presently does not have a subteam.

Cisco UCS (sambetts) Last updated 2018/02/05
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Cisco CIMC driver CI back up and working on every patch
- Cisco UCSM driver CI in development
- Patches for updating the UCS python SDKs are in the works and should be posted soon

.........

Until next week,
--rama

[0] https://etherpad.openstack.org/p/IronicWhiteBoard

From emilien at redhat.com Mon Apr 30 23:43:09 2018
From: emilien at redhat.com (Emilien Macchi)
Date: Mon, 30 Apr 2018 16:43:09 -0700
Subject: [openstack-dev] Overriding project-templates in Zuul
In-Reply-To: <87o9i04rfa.fsf@meyer.lemoncheese.net>
References: <87o9i04rfa.fsf@meyer.lemoncheese.net>
Message-ID:

On Mon, Apr 30, 2018 at 8:58 AM, James E. Blair wrote:
[...]

> ================ ======== ======= =======
> Matcher          Template Project Result
> ================ ======== ======= =======
> files            AB       BC      ABC
> irrelevant-files AB       BC      B
> ================ ======== ======= =======
>
> I believe this will address the shortcoming identified above, but before
> we get too far in implementing it, I'd like to ask folks to take a
> moment and evaluate whether it will address the issues you've seen, or
> if you foresee any problems which I haven't anticipated.

It'll address a need we have in TripleO where we have complex file rules
and heavily rely on templates. The matrix proposal looks good to me and
will allow us to simplify a bit our templates.

Thanks,
--
Emilien Macchi
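To make the proposed matrix concrete: under those semantics, files
matchers from a template and a project would be united, while
irrelevant-files matchers would be intersected. A hypothetical sketch --
the job name, template name, and file patterns are all illustrative, not
taken from any real repository:

    - project-template:
        name: example-template
        check:
          jobs:
            - example-job:
                irrelevant-files:
                  - ^doc/.*$           # "A"
                  - ^releasenotes/.*$  # "B"

    - project:
        templates:
          - example-template
        check:
          jobs:
            - example-job:
                irrelevant-files:
                  - ^releasenotes/.*$  # "B"
                  - ^tools/.*$         # "C"

With the intersection rule from the table, example-job would be skipped
only for changes touching nothing but releasenotes ("B"), the one pattern
both lists agree is irrelevant; had these been files matchers instead,
the union rule would run the job for changes matching any of "A", "B",
or "C".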