[openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

Jay Pipes jaypipes at gmail.com
Wed Jul 20 18:08:43 UTC 2016


On 07/18/2016 01:45 PM, Matt Riedemann wrote:
> On 7/15/2016 8:06 PM, Alex Xu wrote:
>>
>> Actually I still think aggregates aren't a good fit for managing
>> capabilities, just as I said in my previous reply about aggregates. One
>> reason is the same as the #2 you mentioned :) And it's not really
>> manageable: a user has no way to query which host-aggregate a specific
>> host is in, and there isn't an interface to query what metadata is
>> associated with a host via its host-aggregate. I'd prefer to keep
>> aggregates just as a tool to group hosts. But yes, users can still
>> manage hosts with host-aggregates the old way; let the user decide what
>> is more convenient.
>
> +1 to Alex's point. I just read through this thread and had the same
> thought. If the point is to reduce complexity in the system and surface
> capabilities to the end user, let's do that with resource provider tags,
> not a mix of host aggregate metadata and resource provider tags where an
> operator has to set both and also has to know in which situations to set
> each and where.

Yeah, having the resource provider be tagged with capabilities versus 
having to manage aggregate tags may make some of the qualitative 
matching queries easier to grok. The query performance won't necessarily 
be any better, but they will likely be easier to read...

> I'm hoping Jay or someone channeling Jay can hold my hand and walk me
> safely through the evil forest that is image properties / flavor extra
> specs / scheduler hints / host aggregates / resource providers / and the
> plethora of scheduler filters that use them to build a concrete
> picture/story tying this all together. I'm thinking like use cases, what
> does the operator need to do

Are you asking how things are *currently* done in Nova? If so, I'll need 
to find some alcohol.

If you are asking about how we'd *like* all of the qualitative things to 
be requested and queried in the new placement API, then less alcohol is 
required.

The schema I'm thinking about on the placement engine side looks like this:

CREATE TABLE tags (
   id INT NOT NULL,
   name VARCHAR(200) NOT NULL,
   PRIMARY KEY (id),
   UNIQUE INDEX (name)
);

CREATE TABLE resource_provider_tags (
   resource_provider_id INT NOT NULL,
   tag_id INT NOT NULL,
   PRIMARY KEY (resource_provider_id, tag_id),
   INDEX (tag_id)
);
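
For illustration only (hypothetical provider ids, with the tag names
from the request example further down), tagging a couple of providers
might look like:

INSERT INTO tags (id, name) VALUES
   (1, 'storage:ssd'),
   (2, 'compute:hw:x86:avx2'),
   (3, 'compute:virt:accelerated_whizzybang');

-- provider 100 has all three capabilities; provider 200 has only
-- the first two
INSERT INTO resource_provider_tags (resource_provider_id, tag_id) VALUES
   (100, 1), (100, 2), (100, 3),
   (200, 1), (200, 2);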

On the Nova side, we need a mechanism for associating a set of 
capabilities that may either be required or preferred. The thing that we 
currently use for associating requested things in Nova is the flavor, so 
we'd need to define a mapping in Nova for the tags a flavor would 
require or prefer.

CREATE TABLE flavor_tags (
   flavor_id INT NOT NULL,
   tag_name VARCHAR(200) NOT NULL,
   is_required INT NOT NULL,
   PRIMARY KEY (flavor_id, tag_name)
);
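
To sketch how that mapping might be populated (hypothetical flavor id,
with is_required acting as a boolean flag), a flavor requiring SSD
storage and AVX2 while merely preferring the accelerated capability
would get:

INSERT INTO flavor_tags (flavor_id, tag_name, is_required) VALUES
   (42, 'storage:ssd', 1),
   (42, 'compute:hw:x86:avx2', 1),
   (42, 'compute:virt:accelerated_whizzybang', 0);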

We would need to have a call in the placement REST API to find the 
resource providers that matched a particular set of required or 
preferred capability tags. Such a call might look like the following:

GET /resource_providers
{
   "resources": {
     "VCPU": 2,
     "MEMORY_MB": 2048,
     "DISK_GB": 100
   },
   "requires": [
     "storage:ssd",
     "compute:hw:x86:avx2",
   ],
   "prefers": [
     "compute:virt:accelerated_whizzybang"
   ]
}
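
The response format is equally unsettled, but one possible shape would
be a simple list of matching providers, best matches first:

{
   "resource_providers": [
     {"uuid": "<uuid-of-best-match>"},
     {"uuid": "<uuid-of-next-best-match>"}
   ]
}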

Disregard the quantitative side of the above request right now. We could 
answer the qualitative side of the equation with the following SQL query 
in the placement engine:

SELECT rp.uuid
FROM resource_providers AS rp
INNER JOIN tags AS t1
ON t1.name = 'storage:ssd'
INNER JOIN tags AS t2
ON t2.name = 'compute:hw:x86:avx2'
INNER JOIN tags AS t3
ON t3.name IN ('compute:virt:accelerated_whizzybang')
INNER JOIN resource_provider_tags AS rpt1
ON rp.id = rpt1.resource_provider_id
AND rpt1.tag_id = t1.id
INNER JOIN resource_provider_tags AS rpt2
ON rp.id = rpt2.resource_provider_id
AND rpt2.tag_id = t2.id
LEFT JOIN resource_provider_tags AS rpt3
ON rp.id = rpt3.resource_provider_id
AND rpt3.tag_id = t3.id
GROUP BY rp.uuid
ORDER BY COUNT(rpt3.resource_provider_id) DESC

The above returns all resource providers having both the 'storage:ssd' 
and 'compute:hw:x86:avx2' tags, sorting the providers that also have the 
'compute:virt:accelerated_whizzybang' tag *first*.

>, what does the end user of the cloud need
> to do, etc. I think if we're going to do resource providers tags for
> capabilities we also need to think about what we're replacing. Maybe
> that's just host aggregate metadata, but what's the deprecation plan for
> that?

Good question, as usual. My expectation would be that in Ocata, when we 
start adding the qualitative aspects to the placement REST API, we would 
introduce documentation that operators could use to translate the common 
use cases they currently handle with flavor extra_specs and aggregate 
metadata in the pre-placement world into the resource provider tags 
setup that would replace that functionality in the placement API world. 
Unlike most of the quantitative side of things, there really isn't a 
good way to "autoheal" or "autosetup" these things.

> There is a ton to talk about here, so I'll leave that for the midcycle.
> But let's think about what, if anything, needs to land in Newton to
> enable us to work on this in Ocata - but our priority for the midcycle
> is really going to be focused on what things we need to get done yet in
> Newton based on what we said we'd do in Austin.
>
> Also, a final nit - can we please be specific about roles in this thread
> and any specs? I see 'user' thrown around a lot, but there are different
> kinds of users. Only admins can see host aggregates and their metadata.
> And when we're talking about how these tags will be used, let's be clear
> about who the actors are - admins or cloud users. It helps avoid some
> confusion.

Correct. ONLY administrators can set, delete and associate tags with 
resource providers. End users only see a flavor name IMHO. It would be 
up to the deployer to document for end users whether and what 
capabilities a particular flavor provides...

Best,
-jay
