[openstack-dev] [nova] placement/resource providers update 4

Jay Pipes jaypipes at gmail.com
Fri Dec 2 23:29:16 UTC 2016


On Dec 2, 2016 5:21 PM, "Matt Riedemann" <mriedem at linux.vnet.ibm.com> wrote:

On 12/2/2016 12:04 PM, Chris Dent wrote:

>
>
> Latest news on what's going on with resource providers and the
> placement API. I've made some adjustments in the structure of this
> since last time[0]. The new structure tries to put the stuff we need to
> talk about, including medium and long term planning, at the top and
> move the stuff that is summaries of what's going on on gerrit towards
> the bottom. I think we need to do this to enhance the opportunities for
> asynchronous resolution of some of the topics on our plates. If we
> keep waiting until the next meeting where we are all there at the same
> time, stuff will sit for too long.
>
> [0]
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107982.html
>
>
> # Things to Think About
>
> (Note that I'm frequently going to be wrong or at least incomplete
> about the things I say here, because I'm writing off the top of my
> head. Half the point of writing this is to get it correct by
> collaborative action. If you see something that is wrong, please
> shout out in a response. This section is for discussion of stuff that
> isn't yet being tracked well or has vague conflicts.)
>
> The general goal with placement for Ocata is to have both the nova
> scheduler and resource tracker talking to the API to usefully limit
> the number of hosts that the scheduler evaluates when selecting
> destinations. There are several segments of work coming together to
> make this possible, some of which are further along than others.
>
> ## Update Client Side to Consider Aggregates
>
> When the scheduler requests a list of resource providers, that list
> ought to include compute nodes that are associated, via aggregates,
> with any shared resource providers (such as shared disk) that can
> satisfy the resource requirements in the request.
>
> Meanwhile, when a compute node places a VM that uses shared disk, the
> allocations of resources made by the resource tracker need to go to
> the right resource providers.
>
> This is a thing we know we need to do, but (as far as I know) it is not
> something we've articulated a clear plan for or really started on.
>

I'm glad I'm not the only one that was wondering what's going on with the
client side aggregates handling stuff.


I have it all done locally. Will push tomorrow...

Best,
- jay

I see the aggregates PUT/GET patches have merged, but the resource tracker
stuff hasn't started, at least as far as I'm aware. I was looking into this a
bit this week when writing up the Ocata priorities docs and needed to go
back into the generic-resource-pools spec from Newton to dig into the notes
on aggregates:

https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/generic-resource-pools.html

There is a lot of detail in there, which is good - even though we
retrospected at the summit that we spent too much time on details in the
specs in Newton, I guess in this case it might pay off. :)

If I'm understanding correctly, a 'resource pool' in that spec, when talking
about aggregates, is really a set of resource providers tied to an
aggregate. So I could have 3 compute nodes A, B and C all using the same
shared storage cluster X. A, B and C are then in an aggregate for X, and we
have the resource providers for compute nodes A, B and C all related to
that aggregate X in the placement service. How that ties back into the
scheduler and resource tracker is a bit fuzzy to me at the moment, but if
the above is correct then I could probably figure the rest out by digging
back into the spec details.
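
As a note to self, here is roughly what I think the association step looks
like now that the aggregates PUT/GET has merged. The endpoint is the one
from the spec, but the payload shape, the microversion and the uuids below
are illustrative assumptions rather than the final contract, so treat this
as a sketch:

    # Sketch: associate shared storage cluster X and compute nodes A, B
    # and C with the same aggregate in the placement service. Assumes the
    # aggregates PUT takes a JSON list of aggregate uuids; details may
    # differ from what actually merged.
    import requests

    PLACEMENT = 'http://placement.example.com'      # illustrative endpoint
    HEADERS = {
        'X-Auth-Token': 'ADMIN_TOKEN',              # keystone token, elided
        'OpenStack-API-Version': 'placement 1.1',   # assumed microversion
    }

    AGG_X = '11111111-2222-3333-4444-555555555555'  # aggregate for shared storage X

    # Resource provider uuids for the shared storage cluster and computes.
    providers = {
        'shared-storage-x': 'aaaaaaaa-0000-0000-0000-000000000001',
        'compute-a': 'aaaaaaaa-0000-0000-0000-00000000000a',
        'compute-b': 'aaaaaaaa-0000-0000-0000-00000000000b',
        'compute-c': 'aaaaaaaa-0000-0000-0000-00000000000c',
    }

    for rp_uuid in providers.values():
        # Tie each provider to aggregate X so the placement service can
        # treat them as one 'resource pool' for scheduling purposes.
        resp = requests.put(
            '%s/resource_providers/%s/aggregates' % (PLACEMENT, rp_uuid),
            json=[AGG_X],
            headers=HEADERS,
        )
        resp.raise_for_status()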



> ## Update Scheduler to Request Limited Resource Providers
>
> The "Scheduler Filters in DB" spec[1] has merged along with its
> pair, "Filter Resource Providers by Request"[2], and the work has
> started[3].
>
> There are some things to consider as that work progresses:
>
> * The bit about aggregates in the previous section: the list of
>   returned resource providers needs to include associated providers.
>

nit: I think you mean associated _aggregates_ here.


>   To quote Mr. Pipes:
>
>       we will only return resource providers to the scheduler that
>       are compute nodes in Ocata. the resource providers that the
>       placement service returns will either have the resources
>       requested or will be associated with aggregates that have
>       providers that match the requested resources.
>

An example might be useful here, but there is probably already one in the
generic resource pools spec linked above. I think it means:

"have the resources requested"

- means this is a resource provider that satisfies a request for some type
of resource class, maybe DISK_GB.


"or will be associated with aggregates that have providers that match the
requested resources."

- means there is a shared storage resource provider that's associated with
an aggregate in the placement service, and that aggregate is associated with
some compute node resource providers? So in my example up above, does that
mean we have a resource provider for the shared storage cluster, let's call
it X, which is associated with aggregate (again, X), while compute nodes A, B
and C are in that aggregate and are resource providers themselves?
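
To make that concrete for myself, here's how I picture the layout. This is
purely illustrative data, not actual API payloads:

    # Illustrative layout of the shared-storage example. X owns the
    # DISK_GB inventory; A, B and C own VCPU and MEMORY_MB; the aggregate
    # is what ties them all together.
    AGG_X = 'agg-x-uuid'   # hypothetical aggregate uuid

    shared_storage_x = {
        'uuid': 'x-uuid',
        'inventories': {'DISK_GB': {'total': 100000}},
        'aggregates': [AGG_X],
    }

    compute_nodes = {
        name: {
            'uuid': '%s-uuid' % name,
            'inventories': {'VCPU': {'total': 32},
                            'MEMORY_MB': {'total': 131072}},
            'aggregates': [AGG_X],
        }
        for name in ('a', 'b', 'c')
    }

    # A request for, say, VCPU:2, MEMORY_MB:2048 and DISK_GB:100 should
    # then return the compute node providers a, b and c: they satisfy the
    # VCPU and MEMORY_MB parts themselves, and they are associated, via
    # AGG_X, with a provider (X) that satisfies the DISK_GB part.

If that reading is wrong, please correct me.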



> * There is unresolved debate about the structure of the request being
>   made to the API. Is it a POST or a GET? Does it have a body or use
>   query strings? The plan is to resolve this discussion in the review
>   of the code at [3].
>

I personally prefer the POST after reading about the differences between
the two, and when reviewing the spec on this. I'm not crazy about the
scheduler having to pass a giant json string as a query parameter to a GET
request on the placement API; I'd rather do that with a request body.
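
Just to illustrate what I mean, the two shapes under debate look roughly
like the sketch below. Both are hypothetical; the endpoint, the
resource-class encoding and the microversion are placeholders until the
review at [3] settles it:

    # Hypothetical only: the two request shapes being argued about.
    import requests

    PLACEMENT = 'http://placement.example.com'     # illustrative endpoint
    HEADERS = {
        'X-Auth-Token': 'TOKEN',                   # keystone token, elided
        'OpenStack-API-Version': 'placement 1.0',  # version for this feature TBD
    }

    requested = {'VCPU': 2, 'MEMORY_MB': 2048, 'DISK_GB': 100}

    # Shape 1: GET with the request crammed into a query string.
    qs = ','.join('%s:%d' % (rc, amount)
                  for rc, amount in sorted(requested.items()))
    resp = requests.get('%s/resource_providers' % PLACEMENT,
                        params={'resources': qs}, headers=HEADERS)

    # Shape 2: POST with the request expressed as a JSON body.
    resp = requests.post('%s/resource_providers' % PLACEMENT,
                         json={'resources': requested}, headers=HEADERS)

The query string version gets ugly fast as the request grows (aggregates,
more resource classes, etc.), which is the main reason I lean toward the
POST with a body.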



> [1]
> http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-scheduler-db-filters.html
>
> [2]
> http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/resource-providers-get-by-request.html
>
> [3] https://review.openstack.org/#/c/386242/
>
> ## Docs
>
> In addition to needing an api-ref we also need a placement-dev.rst to
> go alongside the placement.rst. The -dev would mostly explain the how
> and the why of the placement API architecture, how the testing works,
> etc. That's mostly on me.
>
> ## Placement Upgrade/Installation issues
>
> (This is a straight copy from the previous message)
>
> In his response[4] to this topic, Matt R pointed out a couple of todos:
>
> * get the placement-api enabled by default in the various bits of
>   ocata CI
> * ensure that microversions are being used on both sides of the
>   placement API transactions (that's true in pending changes to
>   both the API and the resource tracker)
>
> [4]
> http://lists.openstack.org/pipermail/openstack-dev/2016-November/107177.html
>
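
(On the second item, the microversion negotiation is just a header on
every request plus the version document at the API root. A rough sketch,
assuming the version document looks more or less like other OpenStack
services'; the exact shape here is an assumption:)

    # Sketch: check that the server supports the microversion the client
    # wants before sending versioned requests. The shape of the version
    # document is assumed, not copied from the actual API.
    import requests

    PLACEMENT = 'http://placement.example.com'   # illustrative endpoint
    TOKEN = {'X-Auth-Token': 'TOKEN'}            # keystone token, elided

    WANT = (1, 1)   # microversion the client-side code wants to speak

    doc = requests.get(PLACEMENT, headers=TOKEN).json()
    version = doc['versions'][0]
    max_supported = tuple(int(x) for x in version['max_version'].split('.'))

    if WANT <= max_supported:
        headers = dict(TOKEN)
        headers['OpenStack-API-Version'] = 'placement %d.%d' % WANT
        # ...make versioned requests with these headers...
    else:
        raise RuntimeError('placement service is too old for this client')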

So...if step 1 is just enabling the placement service in all of the master
branch jobs, I think that's probably easy enough.

Here is an example of enabling services via devstack-gate:

https://review.openstack.org/#/c/345626/

The problem would be if that's not controlled per branch, but I think it
can be: you can have the 'base' set of services enabled on all branches,
and then filter by branch. We could probably do that just like the way
'tlsproxy' is done here:

https://github.com/openstack-infra/devstack-gate/blob/eb895ca90282019493c7889f57e8c4143468cfa9/features.yaml#L174

sdague would be the person to ask for sure.

--

The other thing we've talked about some this week is adding a
'ready-for-upgrade' nova-manage command which would do some basic sniff
testing of your deployment and let you know if you're ready to start
running online data migrations, schema migrations, etc. This would check
some basic things, like whether placement is set up and whether we can make
requests to the REST API from the command. Given that the client making
these requests today lives in the compute nodes, the configuration is
needed there, but I expect most people are running nova-manage from their
control nodes, so I'm not sure how this is going to work; dansmith probably
has ideas. My point is, it'd be odd for the command to require the
placement config on the control node just for the nova-manage command to
work, even though the control node doesn't use the placement REST API
anywhere else (well, except maybe the scheduler now in Ocata).
Alternatively, it's odd to run nova-manage from all of your compute nodes,
but...maybe it makes sense to also be running that check on your compute
nodes before upgrading them too. I'll stop rambling now.
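
Just to make the idea concrete, the placement piece of such a check could
be something as simple as the sketch below. This is completely
hypothetical: the command name, the wiring into nova-manage and where the
config comes from are all undecided; it only shows the kind of sniff test
being discussed (can we reach the placement REST API at all?).

    # Hypothetical sketch only; not an actual nova-manage interface.
    import requests

    def check_placement(endpoint, token):
        """Return (ok, message) for a basic placement reachability check."""
        try:
            resp = requests.get(endpoint,
                                headers={'X-Auth-Token': token},
                                timeout=10)
        except requests.RequestException as exc:
            return False, 'placement API not reachable: %s' % exc
        if resp.status_code != 200:
            return False, 'placement API returned %d' % resp.status_code
        # The root document lists the supported versions, which is enough
        # to prove the service is up and the credentials work.
        return True, 'placement API reachable: %s' % resp.json()

    # e.g. ok, msg = check_placement('http://placement.example.com',
    #                                'ADMIN_TOKEN')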


>
> ## Long Term Stuff
>

Honestly I skimmed this part because I'm mostly concerned with immediate
priorities, but I understand if you need to do a brain dump for posterity
and tire kicking.



> ### Making Claims in the Placement API
>
> After Ocata the placement API will evolve to make claims, on the
> /allocations endpoint. When presented with a set of resource
> requirements, _the_ resource provider that satisfies those requirements
> will be returned and the claim of resources made in a single step. To
> quote Mr. Pipes again:
>
>     once we have a placement service actually doing claims, the
>     returned resource providers for an allocation will be the actual
>     resource providers that were allocated against (which include
>     *both* compute node providers as well as any resource provider of
>     a shared resource that was allocated)
>
> Just so folk are aware.
>
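
For anyone who hasn't looked at the allocations side yet, an allocation
that spans both a compute node provider and a shared disk provider looks
roughly like the sketch below. The endpoint is the existing PUT
/allocations/{consumer_uuid}; the uuids are made up and the payload
details may well shift by the time claims land, so treat it as a sketch:

    # Sketch: write an allocation for one instance against *both* a
    # compute node provider and a shared storage provider. Uuids and
    # payload details are illustrative assumptions.
    import requests

    PLACEMENT = 'http://placement.example.com'   # illustrative endpoint
    HEADERS = {
        'X-Auth-Token': 'TOKEN',                 # keystone token, elided
        'OpenStack-API-Version': 'placement 1.0',
    }

    INSTANCE = 'dddddddd-0000-0000-0000-000000000001'    # the consumer
    COMPUTE_A = 'aaaaaaaa-0000-0000-0000-00000000000a'   # compute node provider
    STORAGE_X = 'aaaaaaaa-0000-0000-0000-000000000001'   # shared disk provider

    allocations = {
        'allocations': [
            {'resource_provider': {'uuid': COMPUTE_A},
             'resources': {'VCPU': 2, 'MEMORY_MB': 2048}},
            {'resource_provider': {'uuid': STORAGE_X},
             'resources': {'DISK_GB': 100}},
        ],
    }

    resp = requests.put('%s/allocations/%s' % (PLACEMENT, INSTANCE),
                        json=allocations, headers=HEADERS)
    resp.raise_for_status()
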
> ### Moving Placement out of Nova
>
> If this is something we ever plan to do (there appear to be multiple
> points of view) then it is something we need to prepare for to ease
> the eventual transition. Some of these things include:
>
> * Removing as many 'nova.*' packages as possible from the hierarchy of
>   placement modules.
> * Getting the new placement DB[5][6] handled in some way
> * Removing remotable from the resource provider objects. The intent is
>   that these will never be accessed other than through the HTTP API and
>   since that scales horizontally, no RPC should be required. If we
>   ever plan to remove it, sooner is better than later. A POC has been
>   submitted[7], but there's disagreement about whether we should do
>   it. We need to resolve that.
>
> [5] https://review.openstack.org/#/c/362766/
> [6] https://etherpad.openstack.org/p/placement-optional-db-spec
> [7] https://review.openstack.org/#/c/404279/
>
> # Pending Planned Work
>
> ## Custom Resource Classes
>
> Jay just posted a big update[8] on that so go look at that. A lot of
> code has merged, but a lot of code[9] is still in flight.
>
> [8]
> http://lists.openstack.org/pipermail/openstack-dev/2016-December/108393.html
>
> [9] https://review.openstack.org/#/q/topic:bp/custom-resource-classes
>
> ## Filtering compute nodes with the placement API
>
> Already mentioned (with links) above.
>
> ## Nested Resource Providers
>
> In discussions yesterday about Ocata priorities[10] we clarified that
> while nested resource providers matter, they are a stretch for Ocata. The
> primary goal is to have enough discussion and experimentation now so
> that we can have useful discussions at the PTG.
>
> The spec[11] has merged, the code is in a stack[12]. There's general
> agreement on the implementation, but there still seems to be some
> concern about how it is all going to work and what it all means in
> actual practice. The expectation is that we'll figure things out when
> we're doing that actual practice.
>
> [10] https://review.openstack.org/#/c/404456/
> [11]
> http://specs.openstack.org/openstack/nova-specs/specs/ocata/approved/nested-resource-providers.html
>
> [12] https://review.openstack.org/#/c/377138/
>
> ## Allocations for generic PCI devices
>
> This code was abandoned because it was making some bad assumptions
> about how PCI device handling is done. See the abandoned review[13]
> for more information.
>
> [13] https://review.openstack.org/#/c/374681/
>
> # Pending Pickup Work
>
> (Bugs[14], stuff from the leftovers etherpad[15], other random bits of
> improvement.)
>
> [14] https://bugs.launchpad.net/nova/+bugs?field.tag=placement
> [15] https://etherpad.openstack.org/p/placement-newton-leftovers
>
> * Demo inventory update script:
>   https://review.openstack.org/#/c/382613/
>
>   This one might be considered a WIP because how it chooses to do
>   things (rather simply and dumbly) may not be in line with expectations.
>
> * CORS support in placement API:
>   https://review.openstack.org/#/c/392891/
>
>   John Garbutt's review led to finding a huge bug in this (service
>   wouldn't start in an actual deployment). That's been fixed.
>
> * Handling limits in schema better
>   https://review.openstack.org/#/c/399002/ (needs review)
>   https://review.openstack.org/#/c/398998/ (needs fixes from submitter)
>
> * [WIP] Placement api: Add json_error_formatter to defaults
>   https://review.openstack.org/#/c/395194/
>
>   This is an effort to avoid boilerplate, but no good solution has
>   been determined yet. Reviewers can help us figure out a good way to
>   handle things.
>
> * Small improvements to placement.rst
>   https://review.openstack.org/#/c/403811/
>
> # End
>
> As usual, I hope this is useful to people. If something is missing
> or incorrect please say so. It's quite a bit of work to assemble this,
> but it's useful to me, so I'd be doing it anyway, even if I wasn't
> sending it out. I hope other people find it useful. If there's
> something I can do to make it more useful, let me know.
>

I think this is useful. Honestly I didn't read the last one, but I read
this one, and it's something that helps me since there are a lot of moving
parts going on with resource providers and I need to make sure we aren't
starving parts of the priority work, especially as we're 2 weeks from o-2.
Plus I generally don't make it to the weekly scheduler meeting so this is a
nice recap of the weekly events.

-- 

Thanks,

Matt Riedemann


