[openstack-dev] [TripleO][Tuskar] Terminology

Robert Collins robertc at robertcollins.net
Fri Dec 13 03:20:47 UTC 2013


On 12 December 2013 21:59, Jaromir Coufal <jcoufal at redhat.com> wrote:
> On 2013/12/12 01:21, Robert Collins wrote:
>>
>> On 12 December 2013 08:15, Tzu-Mainn Chen <tzumainn at redhat.com
>>>
>>>           * MANAGEMENT NODE - a node that has been mapped with an
>>> undercloud role
>>
>>
>> Pedantically, this is 'A node with an instance of a management role
>> running on it'. I think calling it 'management node' is too sticky.
>> What if we cold migrate it to another machine when a disk fails and we
>> want to avoid data loss if another disk were to fail?
>>
>> Management instance?
>
> I think the difference here is whether I am looking at Nodes as HW or whether
> I am interested in the services running on them. In the first case, I want to
> see 'Management Node'; in the second case I want to see 'Management Instance'.
> So in the context of Resources/Nodes it is valid to say 'Management Node'.

Mmm, I don't really agree. Or rather, I agree that if you're looking at
HW you want to look at Nodes, but we might migrate services between
nodes while keeping the same instance. Nodes should only be surfaced
when folk are actually addressing hardware, IMO.

>>>           * SERVICE NODE - a node that has been mapped with an overcloud
>>> role
>>
>>
>> Again, the binding to node is too sticky IMNSHO.
>>
>> Service instance? Cloud instance?
>
> Same as above - depends on context of what I want to see.
>
> Service Instance is misleading. One service instance is, for example,
> nova-scheduler, not the whole node itself.
>
> I would avoid the 'cloud' wording here. Service Node sounds fine to me, since
> the context is within Nodes/Resources.

Avoiding cloud - ack.

However, on instance - 'instance' is a very well defined term in Nova
and thus OpenStack: nova boot gets you an instance, nova delete gets
rid of an instance, nova rebuild recreates it, etc. Instances are
[virtual|baremetal] machines managed by a hypervisor. So
nova-scheduler is never going to be confused with an instance in the
OpenStack space, IMO. But it brings up a broader question: what should
we do when terms that are well defined in OpenStack - like Node,
Instance, Flavor - are not so well defined for new users? We could use
different terms, but that may confuse 'stackers, and would mean that
our UI needs its own dedicated terminology to map back to e.g. the
manuals for Nova and Ironic. I'm inclined to suggest, as a principle,
that where there is a well-defined OpenStack concept we use it, even
if it is not ideal, because the consistency will be valuable.
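
As an aside, here's how nailed down that instance lifecycle already is
in python-novaclient - a minimal sketch only; the credentials, image
and flavor names below are placeholders, nothing TripleO-specific:

    from novaclient import client

    # Placeholder credentials - substitute your own environment's values.
    nova = client.Client('2', 'admin', 'password', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    image = nova.images.find(name='some-image')    # placeholder image name
    flavor = nova.flavors.find(name='baremetal')   # placeholder flavor name

    # 'nova boot' - creates an instance
    server = nova.servers.create(name='demo-0', image=image, flavor=flavor)

    # 'nova rebuild' - recreates the instance from an image, keeping identity
    nova.servers.rebuild(server, image)

    # 'nova delete' - gets rid of the instance
    nova.servers.delete(server)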

...
>>>              * BLOCK STORAGE NODE - a service node that has been mapped
>>> to an overcloud block storage role
>>
>>
>> s/Node/instance/ ?
>
> Within the deployment section, +1 for the substitution - though see my
> note above about the meaning of 'service instance'.

Yeah - but see above: I don't think there is room for confusion with
e.g. nova-compute. There may be room for confusion between
instance-running-on-baremetal and instance-deployed-as-virt, but TBH
I don't think that matters too much. If we go to Docker or something
in future, we'd have physical and container instances both serving
stuff out, and it's not clear to me that we'd want to surface them as
separate things at the top level.


> -1. Availability is a very broad term and might mean various things. I can
> have nodes assigned to some role which are available to me - in terms of
> reachability, for example.
>
> I vote for unallocated, unassigned, or free?

"Free nodes" works well IMO. It's a positive, direct statement.

>>>       * INSTANCE - A role deployed on a node - this is where work
>>> actually happens.
>
> Yes. However this term is overloaded as well. Can we find something better?

See above - it is, but I think something different would cause
confusion, not reduce it.

>>> * DEPLOYMENT
>>>       * SIZE THE ROLES - the act of deciding how many nodes will need to
>>> be assigned to each role
>>>             * another option - DISTRIBUTE NODES (?)
>>>                                   - (I think the former is more accurate,
>>> but perhaps there's a better way to say it?)
>>
>>
>> Perhaps 'Size the cloud' ? "How big do you want your cloud to be?"
>
> * Design the deployment?
>
> (I am sorry for the aversion to 'cloud' - it's just used everywhere :))

I get that - that's fine.

>>>       * SCHEDULING - the process of deciding which role is deployed on
>>> which node
>>
>>
>> This possibly should be a sub-step of deployment.
>>
>>>       * SERVICE CLASS - a further categorization within a service role
>>> for a particular deployment.
>>
>>
>> See the other thread where I suggested perhaps bringing the image +
>> config aspects all the way up - I think that renames 'service class'
>> to 'Role configuration'. KVM Compute is a role configuration. KVM
>> compute(GPU) might be another.
>
> Role configuration sounds good to me.
>
> My only concern is - if/when we add multiple classes, 'role configuration'
> doesn't sound accurate to me, because Compute is a Role, and if I have
> multiple Compute classes it still feels like the same Role to me (Compute).
> Or would you expect it to be a different role?

It's the image + config -> heat -> deployed thing again.

So the question I'm asking is whether we want separate concepts in the
UI vs in the plumbing. In the plumbing we have:
elements -> image -> +config -> heat

So maybe we should have those as the top-level things:
- image definition = 'what we need to build or download an image' -
arch, os, the elements we put on it
- service configuration = 'what we need to ask Heat to deploy
something' - an image (from one of the defined images) + config metadata
(the Heat data the image needs, the flavors this should be deployed
onto, the number of instances of it to deploy).

The closer the UI and the plumbing are, the less dissonance we have
(and the more pressure to fix the plumbing if the result would be a
poor UX).
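
To make that split concrete, here is a rough sketch of what those two
objects might carry - the field names are purely illustrative, not an
existing Tuskar or TripleO data model:

    # Purely illustrative - not an existing Tuskar/TripleO data model.

    # 'image definition' - what we need to build or download an image
    image_definition = {
        'name': 'kvm-compute',
        'arch': 'x86_64',
        'os': 'fedora',
        'elements': ['nova-compute', 'ntp'],  # diskimage-builder elements
    }

    # 'service configuration' - what we need to ask Heat to deploy something
    service_configuration = {
        'image': 'kvm-compute',        # one of the defined images
        'config_metadata': {           # the Heat data the image needs
            'nova': {'compute_driver': 'libvirt.LibvirtDriver'},
        },
        'flavors': ['baremetal'],      # flavors this should be deployed onto
        'instance_count': 12,          # number of instances to deploy
    }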

>> Today the implementation at the plumbing layer can only do 'flavour',
>> though Heat is open to letting us 'find an instance from any of X
>> flavors' in future. Let's not be -too- generic:
>> 'Flavor': The Nova description of a particular machine configuration,
>> and choosing one is part of setting up the 'role configuration'.
>
> NP with this, I will just avoid using the term Flavors in the UI for the
> user (again an overloaded term). Better is HW configuration or Node Profile.

I think we should use Flavor, because that's what the thing is; it's a
machine flavor in the undercloud. We could subdivide the configuration
bits we need into those-that-relate-to-node-selection and
those-that-relate-to-the-image-we're-deploying if you like - I don't
have a particular view on that. Note that the Flavor might not be the
only thing - in the long term users might want to use host aggregates
or other things that you can pass to nova boot - which Heat exposes,
of course. But the point here is that we need to avoid /requiring/
that: the minimum and only mandatory thing is the selection of a flavor.
Basic features like HA need to be expressed to Heat as an HA policy,
not as scheduler hints.
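
By way of illustration, the only scheduling-related thing the UI's
choices have to boil down to in the Heat template is the flavor on the
server resource - a sketch only, expressed as a Python dict, not our
actual overcloud templates:

    # A sketch of the minimal Heat server resource the UI's choices boil
    # down to; not our actual overcloud templates.
    compute_server = {
        'Type': 'OS::Nova::Server',
        'Properties': {
            'image': 'some-compute-image',  # from the image definition
            'flavor': 'baremetal',          # the one mandatory selection
            # host aggregates / scheduler hints could be layered on later,
            # but must remain optional
        },
    }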

-Rob


-- 
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud


