[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Robert Collins robertc at robertcollins.net
Mon Dec 9 22:06:50 UTC 2013


On 10 December 2013 10:57, Jay Dobies <jason.dobies at redhat.com> wrote:

>>
>> So we have:
>>   - node - a physical general purpose machine capable of running in
>> many roles. Some nodes may have hardware layout that is particularly
>> useful for a given role.
>>   - role - a specific workload we want to map onto one or more nodes.
>> Examples include 'undercloud control plane', 'overcloud control
>> plane', 'overcloud storage', 'overcloud compute' etc.
>>   - instance - A role deployed on a node - this is where work actually
>> happens.
>>   - scheduling - the process of deciding which role is deployed on which
>> node.
>
>
> This glossary is really handy to make sure we're all speaking the same
> language.
>
>
>> The way TripleO works is that we define a Heat template that lays out
>> policy: '5 instances of overcloud control plane please', '20
>> hypervisors', etc. Heat passes that to Nova, which pulls the image for
>> the role out of Glance, picks a node, and deploys the image to the
>> node.
>>
>> Note in particular the order: Heat -> Nova -> Scheduler -> Node chosen.
>>
>> The user action is not 'allocate a Node to the overcloud control
>> plane'; it is 'size the control plane through Heat'.
>>
>> So when we talk about 'unallocated Nodes', the implication is that
>> users 'allocate Nodes', but they don't: they size roles, and after
>> doing all that there may be some Nodes that are - yes - unallocated,
>
>
> I'm not sure if I should ask this here or to your point above, but what
> about multi-role nodes? Is there any piece in here that says "The policy
> wants 5 instances but I can fit two of them on this existing underutilized
> node and three of them on unallocated nodes", or, since it's all at the
> image level, do you get just what's in the image, making that the finest
> level of granularity?

The way we handle that today is to create a composite role that says
'overcloud-compute + cinder storage', for instance - because image is
the level of granularity. If/when we get automatic container
subdivision - see the other really interesting long-term thread - we
could subdivide, but I'd still use image as the level of granularity;
it's just that we'd have the host image + the container images.
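
To make 'size roles, don't allocate nodes' concrete, here's a rough
sketch of driving it with python-heatclient. The template shape, image
names, counts, endpoint and token are all invented for the example -
the real tripleo-heat-templates are considerably more involved - but it
shows the policy: a count per role, with the role's image as the unit
of granularity (including a hypothetical composite compute+cinder
image):

import yaml
from heatclient.client import Client

TEMPLATE = '''
heat_template_version: 2013-05-23
resources:
  control_plane:
    type: OS::Heat::ResourceGroup
    properties:
      count: 5                  # '5 instances of overcloud control plane please'
      resource_def:
        type: OS::Nova::Server
        properties:
          image: overcloud-control          # the role is the image
          flavor: baremetal
  compute:
    type: OS::Heat::ResourceGroup
    properties:
      count: 20                 # '20 hypervisors'
      resource_def:
        type: OS::Nova::Server
        properties:
          image: overcloud-compute-cinder   # composite role in one image
          flavor: baremetal
'''

heat = Client('1', endpoint='http://undercloud:8004/v1/TENANT_ID',
              token='ADMIN_TOKEN')
heat.stacks.create(stack_name='overcloud',
                   template=yaml.safe_load(TEMPLATE))

Resizing a role is then just a stack update with a new count - no node
is ever named.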

>> or have nothing scheduled to them. So... I'm not debating that we
>> should have a list of free hardware - we totally should - I'm debating
>> how we frame it. 'Available Nodes' or 'Undeployed machines' or
>> whatever. I just want to get away from talking about something
>> ([manual] allocation) that we don't offer.
>
>
> My only concern here is that we're not talking about cloud users, we're
> talking about admins adminning (we'll pretend it's a word, come with me) a
> cloud. To a cloud user, "give me some power so I can do some stuff" is a
> safe use case if I trust the cloud I'm running on. I trust that the cloud
> provider has taken the proper steps to ensure that my CPU isn't in New York
> and my storage in Tokyo.

Sure :)

> The admins setting up an overcloud are the ones providing that trust
> to eventual cloud users. That's where I feel like more visibility and
> control are going to be desired/appreciated.
>
> I admit what I just said isn't at all concrete. Might even be flat out
> wrong. I was never an admin, I've just worked on sys management software
> long enough to have the opinion that their levels of OCD are legendary. I
> can't shake this feeling that someone is going to slap some fancy new
> jacked-up piece of hardware onto the network and have a specific purpose
> they are going to want to use it for. But maybe that's antiquated thinking
> on my part.

I think concrete use cases are the only way we'll get light at the end
of the tunnel.

So let's say someone puts a new bit of fancy kit onto their network and
wants it for, e.g., GPU VM instances only. That's a reasonable desire.

The basic stuff we're talking about so far is just about saying each
role can run on some set of undercloud flavors. If that new bit of kit
has the same coarse metadata as other kit, Nova can't tell it apart.
So the way to solve the problem (sketched in code below) is:
 - a) teach Ironic about the specialness of the node (e.g. a tag 'GPU'),
 - b) teach Nova that there is a flavor that maps to the presence of
that specialness, and
 - c) teach Nova that other flavors must not map to that specialness.
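
As a rough sketch of (a)-(c), assuming python-ironicclient and
python-novaclient - the 'gpu' capability name, flavor names, endpoints
and credentials are all invented, and it presumes the undercloud
scheduler has ComputeCapabilitiesFilter enabled:

from ironicclient import client as ironic_client
from novaclient.v1_1 import client as nova_client

ironic = ironic_client.get_client(1, os_auth_token='ADMIN_TOKEN',
                                  ironic_url='http://undercloud:6385')
nova = nova_client.Client('admin', 'PASSWORD', 'admin',
                          'http://undercloud:5000/v2.0')

# a) record the node's specialness as an Ironic capability
ironic.node.update('NODE_UUID', [{'op': 'add',
                                  'path': '/properties/capabilities',
                                  'value': 'gpu:true'}])

# b) an undercloud flavor whose extra spec requires that capability
gpu = nova.flavors.create('baremetal-gpu', 8192, 4, 80)
gpu.set_keys({'capabilities:gpu': 'true'})

# c) one way to keep ordinary flavors off the special kit (exact
#    exclusion semantics depend on the scheduler filters in use)
plain = nova.flavors.find(name='baremetal')
plain.set_keys({'capabilities:gpu': 'false'})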

Then in Tuskar, whatever Nova configuration is needed to use that GPU
becomes a special role ('GPU compute' for instance), and only that role
would be given that flavor to use. The special config is probably
membership in a host aggregate, with an overcloud flavor that targets
that aggregate. That means at the TripleO level we need to put the
aggregate in the config metadata for that role, and the admin does a
one-time setup in the Nova Horizon UI to configure their GPU compute
flavor.
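
A sketch of the equivalent of that one-time setup with python-novaclient
rather than Horizon - again, names and credentials are invented, and it
assumes AggregateInstanceExtraSpecsFilter is enabled in the overcloud
scheduler:

from novaclient.v1_1 import client as nova_client

nova = nova_client.Client('admin', 'PASSWORD', 'admin',
                          'http://overcloud:5000/v2.0')

# the aggregate that hosts from the 'GPU compute' role join
agg = nova.aggregates.create('gpu-hosts', None)
nova.aggregates.set_metadata(agg.id, {'gpu': 'true'})
nova.aggregates.add_host(agg.id, 'overcloud-gpucompute0')

# an overcloud flavor pinned to that aggregate
flavor = nova.flavors.create('g1.large', 16384, 8, 160)
flavor.set_keys({'aggregate_instance_extra_specs:gpu': 'true'})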

This isn't 'manual allocation' to me - it's surfacing the capabilities
from the bottom ('has GPU') and the constraints from the top ('needs
GPU') and letting Nova and Heat sort it out.

-Rob

-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


