[openstack-dev] [nova] [placement] placement update 18-14
Jay Pipes
jaypipes at gmail.com
Fri Apr 6 19:18:07 UTC 2018
Thanks, as always, for the excellent summary emails, Chris. Comments inline.
On 04/06/2018 01:54 PM, Chris Dent wrote:
>
> This is "contract" style update. New stuff will not be added to the
> lists.
>
> # Most Important
>
> There doesn't appear to be anything new with regard to most
> important. That which was important remains important. At the
> scheduler team meeting at the start of the week there was talk of
> working out ways to trim the amount of work in progress by using the
> nova priorities tracking etherpad to help sort things out:
>
> https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
>
> Update provider tree and nested allocation candidates remain
> critical basic functionality on which much else is based. With most
> of provider tree done, it's really on nested allocation candidates.
Yup. And that series is deadlocked on a disagreement about whether
granular request groups should be "separate by default" (meaning: if you
request multiple groups of resources, the expectation is that they will
be served by distinct resource providers) or "unrestricted by default"
(meaning: if you request multiple groups of resources, those resources
may or may not be serviced by distinct resource providers).
For folks' information, the latter (unrestricted by default) is the
*existing* behaviour as outlined in the granular request groups spec:
http://specs.openstack.org/openstack/nova-specs/specs/rocky/approved/granular-resource-requests.html
Specifically, it is Requirement 3 in the above spec that is the primary
driver for this debate.
I currently have an action item to resolve this debate and move forward
with a decision, whatever that may be.
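To make the two readings concrete, here's a minimal sketch (mine, not
from the spec review; the trait name is made up) of a granular request
with two numbered groups, using the resourcesN/requiredN query syntax
from the spec:

    # Two numbered groups, each asking for one VF on the same physnet.
    query = (
        "resources=VCPU:1,MEMORY_MB:1024"   # the unnumbered group
        "&resources1=SRIOV_NET_VF:1"
        "&required1=CUSTOM_PHYSNET_NET1"
        "&resources2=SRIOV_NET_VF:1"
        "&required2=CUSTOM_PHYSNET_NET1"
    )
    print("GET /allocation_candidates?" + query)

Under "separate by default", the two VFs must come from distinct
providers (e.g. different PFs) unless the request says otherwise; under
"unrestricted by default", they may or may not, and an explicit hint
would be needed to force anti-affinity.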
> # What's Changed
>
> Quite a bit of provider tree related code has merged.
>
> Some negotiation happened with regard to when/if the fixes for
> shared providers are going to happen. I'm not sure how that was
> resolved; if someone can follow up on that, that would be most
> excellent.
Sharing providers are in a weird place right now, agreed. We have landed
lots of code on the placement side of the house for handling sharing
providers. However, the nova-compute service still does not know about
the providers that share resources with it. This makes it impossible
right now to have a compute node with local disk storage as well as
shared disk resources.
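For anyone who hasn't followed the sharing work closely, here's a rough
sketch (mine; names and numbers are made up) of the shape placement
already understands: the sharing provider is its own resource provider,
carrying the MISC_SHARES_VIA_AGGREGATE trait and sitting in the same
aggregate as the compute node provider it shares with.

    compute_node_rp = {
        "name": "compute1",
        # Local resources only; no DISK_GB inventory of its own here.
        "inventories": {"VCPU": 16, "MEMORY_MB": 32768},
        "aggregates": ["shared-storage-agg"],
    }
    shared_disk_rp = {
        "name": "nfs-share-1",
        "inventories": {"DISK_GB": 10000},
        "traits": ["MISC_SHARES_VIA_AGGREGATE"],  # marks it as sharing
        "aggregates": ["shared-storage-agg"],     # same agg as compute1
    }

With that shape, a request like
GET /allocation_candidates?resources=VCPU:1,MEMORY_MB:512,DISK_GB:50
can return candidates that take VCPU and MEMORY_MB from compute1 and
DISK_GB from nfs-share-1. The missing piece is on the nova-compute
side: reporting (and distinguishing) local DISK_GB alongside such a
sharing provider.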
> Most of the placement-req-filter series merged.
>
> The spec for error codes in the placement API merged (code is in
> progress and ready for review, see below).
>
> # Questions
>
> * Eric and I discussed earlier in the week that it might be a good
> time to start an #openstack-placement IRC channel, for two main
> reasons: break things up so as to limit the crosstalk in the often
> very busy #openstack-nova channel and to lend a bit of momentum
> for going in that direction. Is this okay with everyone? If not,
> please say so, otherwise I'll make it happen soon.
Cool with me. I know Matt has wanted a separate placement channel for a
while now.
> * Shared providers status?
>    (I really think we need to make this go. It was one of the
>    original value propositions of placement: being able to accurately
>    manage shared disk.)
Agreed, but you know.... NUMA. And CPU pinning. And vGPUs. And FPGAs.
And physnet network bandwidth scheduling. And... well, you get the idea.
Best,
-jay
> # Bugs
>
> * Placement related bugs not yet in progress: https://goo.gl/TgiPXb
> 15, -1 on last week
> * In progress placement bugs: https://goo.gl/vzGGDQ
> 13, +1 on last week
>
> # Specs
>
> These seem to be divided into three classes:
>
> * Normal stuff
> * Old stuff not getting attention or newer stuff that ought to be
> abandoned because of lack of support
> * Anything related to the client side of using nested providers
> effectively. This apparently needs a lot of thinking. If there are
> some general sticking points we can extract and resolve, that
> might help move the whole thing forward?
>
> * https://review.openstack.org/#/c/549067/
> VMware: place instances on resource pool
> (using update_provider_tree)
>
> * https://review.openstack.org/#/c/545057/
> mirror nova host aggregates to placement API
>
> * https://review.openstack.org/#/c/552924/
> Proposes NUMA topology with RPs
>
> * https://review.openstack.org/#/c/544683/
> Account for host agg allocation ratio in placement
>
> * https://review.openstack.org/#/c/552927/
> Spec for isolating configuration of placement database
> (This has a strong +2 on it but needs one more.)
>
> * https://review.openstack.org/#/c/552105/
> Support default allocation ratios
>
> * https://review.openstack.org/#/c/438640/
> Spec on preemptible servers
>
> * https://review.openstack.org/#/c/556873/
> Handle nested providers for allocation candidates
>
> * https://review.openstack.org/#/c/556971/
> Add Generation to Consumers
>
> * https://review.openstack.org/#/c/557065/
> Proposes Multiple GPU types
>
> * https://review.openstack.org/#/c/555081/
> Standardize CPU resource tracking
>
> * https://review.openstack.org/#/c/502306/
> Network bandwidth resource provider
>
> * https://review.openstack.org/#/c/509042/
> Propose counting quota usage from placement
>
> # Main Themes
>
> ## Update Provider Tree
>
> Most of the main guts of this have merged (huzzah!). What's left are
> some loose end details, and clean handling of aggregates:
>
> https://review.openstack.org/#/q/topic:bp/update-provider-tree
>
> ## Nested providers in allocation candidates
>
> Representing nested providers in the response to GET
> /allocation_candidates is required to actually make use of all the
> topology that update provider tree will report. That work is in
> progress at:
>
> https://review.openstack.org/#/q/topic:bp/nested-resource-providers
>
> https://review.openstack.org/#/q/topic:bp/nested-resource-providers-allocation-candidates
>
>
> Note that some of this includes the up-for-debate shared handling.
>
> ## Request Filters
>
> As far as I can tell this is mostly done (yay!) but there is a loose
> end: we merged an updated spec to support multiple member_of
> parameters, but it's not clear that anybody currently owns it:
>
> https://review.openstack.org/#/c/555413/
>
> ## Mirror nova host aggregates to placement
>
> This makes it so some kinds of aggregate filtering can be done
> "placement side" by mirroring nova host aggregates into placement
> aggregates.
>
>
> https://review.openstack.org/#/q/topic:bp/placement-mirror-host-aggregates
>
> It's part of what will make the req filters above useful.
>
> ## Forbidden Traits
>
> A way of expressing "I'd like resources that do _not_ have trait X".
> This is ready for review:
>
> https://review.openstack.org/#/q/topic:bp/placement-forbidden-traits
>
> ## Consumer Generations
>
> This allows multiple agents to "safely" update allocations for a
> single consumer. There is both a spec and code in progress for this:
>
> https://review.openstack.org/#/q/topic:bp/add-consumer-generation
>
> # Extraction
>
> Small bits of work on extraction continue on the
> bp/placement-extract topic:
>
> https://review.openstack.org/#/q/topic:bp/placement-extract
>
> The spec for optional database handling got some nice support
> but needs more attention:
>
> https://review.openstack.org/#/c/552927/
>
> Jay has declared that he's going to start work on the
> os-resource-classes library.
>
> I've posted a 6th in my placement container playground series:
>
> https://anticdent.org/placement-container-playground-6.html
>
> Though not directly related to extraction, that experimentation has
> exposed a lot of the areas where work remains to be done to make
> placement independent of nova.
>
> A recent experiment with shrinking the repo to just the placement
> dir reinforced a few things we already know:
>
> * The placement tests need their own base test to avoid 'from nova
> import test'
> * That will need to provide database and other fixtures (such as
> config and the self.flags feature).
> * And, of course, eventually, config handling. The container
> experiments above demonstrate just how little config placement
> actually needs (by design, let's keep it that way).
>
> # Other
>
> This is a contract week, so nothing new has been added here, despite
> there being new work. Part of the intent here is to make sure we are
> queue-like where we can be. This list maintains its ordering from
> week to week: newly discovered things are added to the end.
>
> There are 14 entries here, -7 on last week.
>
> That's good. However, some of the removals are the result of some
> code changing topic (and having been listed here by topic). Some of
> the oldest stuff at the top of the list has not moved.
>
> * https://review.openstack.org/#/c/546660/
> Purge comp_node and res_prvdr records during deletion of
> cells/hosts
>
> * https://review.openstack.org/#/q/topic:bp/placement-osc-plugin-rocky
> A huge pile of improvements to osc-placement
>
> * https://review.openstack.org/#/c/546713/
> Add compute capabilities traits (to os-traits)
>
> * https://review.openstack.org/#/c/524425/
> General policy sample file for placement
>
> * https://review.openstack.org/#/c/546177/
> Provide framework for setting placement error codes
>
> * https://review.openstack.org/#/c/527791/
> Get resource provider by uuid or name (osc-placement)
>
> * https://review.openstack.org/#/c/477478/
> placement: Make API history doc more consistent
>
> * https://review.openstack.org/#/c/556669/
> Handle agg generation conflict in report client
>
> * https://review.openstack.org/#/c/556628/
> Slugification utilities for placement names
>
> * https://review.openstack.org/#/c/557086/
> Remove usage of [placement]os_region_name
>
> * https://review.openstack.org/#/c/556633/
> Get rid of 406 paths in report client
>
> * https://review.openstack.org/#/c/537614/
> Add unit test for non-placement resize
>
> * https://review.openstack.org/#/c/554357/
> Address issues raised in adding member_of to GET /a-c
>
> * https://review.openstack.org/#/c/493865/
> cover migration cases with functional tests
>
> # End
>
> 2 runway slots open up this coming Wednesday, the 11th.
>