[openstack-dev] [Nova] [RFC] ResourceProviderTags - Manage Capabilities with ResourceProvider

Alex Xu soulxu at gmail.com
Tue Jul 19 06:21:14 UTC 2016


2016-07-18 13:45 GMT-07:00 Matt Riedemann <mriedem at linux.vnet.ibm.com>:

> On 7/15/2016 8:06 PM, Alex Xu wrote:
>
>>
>> Actually, I still think aggregates aren't a good fit for managing
>> capabilities, as I said in my previous reply about aggregates. One
>> reason is the same as your point #2 :) They are also hard to manage:
>> there is no way for a user to query which host aggregate a specific
>> host belongs to, and no interface to query what metadata is attached
>> to a host through its aggregates. I would prefer to keep aggregates
>> purely as a tool for grouping hosts. But yes, users can still manage
>> hosts with host aggregates the old way; let them decide which is more
>> convenient.
>>
>>
> +1 to Alex's point. I just read through this thread and had the same
> thought. If the point is to reduce complexity in the system and surface
> capabilities to the end user, let's do that with resource provider tags,
> not a mix of host aggregate metadata and resource provider tags so that an
> operator has to set both, but also know in what situations he/she has to
> set it and where.
>
> I'm hoping Jay or someone channeling Jay can hold my hand and walk me
> safely through the evil forest that is image properties / flavor extra
> specs / scheduler hints / host aggregates / resource providers / and the
> plethora of scheduler filters that use them to build a concrete
> picture/story tying this all together. I'm thinking like use cases, what
> does the operator need to do, what does the end user of the cloud need to
> do, etc. I think if we're going to do resource providers tags for
> capabilities we also need to think about what we're replacing. Maybe that's
> just host aggregate metadata, but what's the deprecation plan for that?
>

Yes, there is a lot of confusion around the existing image properties and
extra_specs. I have tried to list all of the properties and extra_specs here:
https://etherpad.openstack.org/p/nova_existed_extra_spec_and_metadata

Looking at them, I don't think any of them are capabilities (after Jay
pointed out to me that disk_type isn't a capability). They are all very
hypervisor-specific, or VM hardware configuration details.

The Nova API shouldn't expose any hypervisor-specific details, nor the VM
hardware configuration details. Users shouldn't have to care about those
details; they should just request the capabilities they need, and Nova
should decide the VM hardware configuration based on those capabilities.
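To illustrate the idea, here is a minimal sketch of capability-based matching. The tag names and data structures are hypothetical, not an actual Nova or placement API: the user's request names only capability tags, the scheduler keeps the providers whose tag set satisfies the request, and the hypervisor-specific configuration stays internal to Nova.

```python
# Hypothetical sketch: capability tags on resource providers.
# Names like CUSTOM_SSD / CUSTOM_SRIOV are illustrative only.

providers = {
    "compute1": {"CUSTOM_SSD", "CUSTOM_SRIOV"},
    "compute2": {"CUSTOM_SSD"},
    "compute3": set(),
}

def hosts_with_capabilities(providers, required_tags):
    """Return providers whose tag set contains every requested capability."""
    return sorted(
        name for name, tags in providers.items()
        if required_tags <= tags  # set containment: all requested tags present
    )

# The user only names capabilities; mapping a capability to the concrete
# hypervisor/VM configuration would happen inside Nova, not in the request.
print(hosts_with_capabilities(providers, {"CUSTOM_SSD"}))
# -> ['compute1', 'compute2']
print(hosts_with_capabilities(providers, {"CUSTOM_SSD", "CUSTOM_SRIOV"}))
# -> ['compute1']
```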

My initial thought is that we leave the existing properties and extra_specs
alone and deal with capabilities separately. I'm just dumping my thoughts here.

As for the deprecation of host aggregate metadata, I haven't thought that
through yet. Perhaps we can keep it for a release after we have
ResourceProviderTags? Anyway, I will think about it more; thanks for
pointing this out.



>
> There is a ton to talk about here, so I'll leave that for the midcycle.
> But let's think about what, if anything, needs to land in Newton to enable
> us working on this in Ocata - but our priority for the midcycle is really
> going to be focused on what things we need to get done yet in Newton based
> on what we said we'd do in Austin.
>
> Also, a final nit - can we please be specific about roles in this thread
> and any specs? I see 'user' thrown around a lot, but there are different
> kinds of users. Only admins can see host aggregates and their metadata. And
> when we're talking about how these tags will be used, let's be clear about
> who the actors are - admins or cloud users. It helps avoid some confusion.


Got it, I will clarify the user roles in the specs later. Thanks for
pointing this out too.


>
>
> --
>
> Thanks,
>
> Matt Riedemann
>
>
>
