[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Jay Dobies jason.dobies at redhat.com
Thu Dec 12 21:48:28 UTC 2013

On 12/12/2013 04:25 PM, Keith Basil wrote:
> On Dec 12, 2013, at 4:05 PM, Jay Dobies wrote:
>>> Maybe this is a valid use case?
>>> Cloud operator has several core service nodes of differing configuration
>>> types.
>>> [node1]  <-- balanced mix of disk/cpu/ram for general core services
>>> [node2]  <-- lots of disks for Ceilometer data storage
>>> [node3]  <-- low-end "appliance like" box for a specialized/custom core service
>>> 	     (SIEM box for example)
>>> All nodes[1,2,3] are in the same deployment grouping ("core services").  As such,
>>> this is a heterogeneous deployment grouping.  Heterogeneity in this case is defined by
>>> differing roles and hardware configurations.
>>> This is a real use case.
>>> How do we handle this?
>> This is the sort of thing I had been concerned with, but I think this is just a variation on Robert's GPU example. Rather than butcher it by paraphrasing, I'll just include the relevant part:
>> "The basic stuff we're talking about so far is just about saying each
>> role can run on some set of undercloud flavors. If that new bit of kit
>> has the same coarse metadata as other kit, Nova can't tell it apart.
>> So the way to solve the problem is:
>> - a) teach Ironic about the specialness of the node (e.g. a tag 'GPU')
>> - b) teach Nova that there is a flavor that maps to the presence of
>> that specialness, and
>> - c) teach Nova that other flavors may not map to that specialness
>> then in Tuskar whatever Nova configuration is needed to use that GPU
>> is a special role ('GPU compute' for instance) and only that role
>> would be given that flavor to use. That special config is probably
>> being in a host aggregate, with an overcloud flavor that specifies
>> that aggregate, which means at the TripleO level we need to put the
>> aggregate in the config metadata for that role, and the admin does a
>> one-time setup in the Nova Horizon UI to configure their GPU compute
>> flavor."
> Yes, the core services example is a variation on the above.  The idea
> of _undercloud_ flavor assignment (flavor to role mapping) escaped me
> when I read that earlier.
> It appears to be very elegant and provides another attribute for Tuskar's
> notion of resource classes.  So +1 here.
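For what it's worth, the mapping in Robert's steps a) through c) can be sketched as a toy filter. This is emphatically not Nova's scheduler code, just an illustration of the idea: Ironic exposes a capability tag on the special node, the special flavor's extra_specs require that tag, and ordinary flavors don't request it. All names and structures here are made up for illustration.

```python
# Toy sketch of capability-based flavor matching (illustrative only,
# not Nova's actual ComputeCapabilitiesFilter implementation).

def candidate_nodes(flavor_extra_specs, nodes):
    """Return the nodes whose capabilities satisfy every
    'capabilities:*' requirement in the flavor's extra_specs."""
    required = {
        key.split(":", 1)[1]: value
        for key, value in flavor_extra_specs.items()
        if key.startswith("capabilities:")
    }
    return [
        node for node in nodes
        if all(node["capabilities"].get(k) == v for k, v in required.items())
    ]

# Step a): the special node carries a capability tag.
nodes = [
    {"name": "node1", "capabilities": {}},               # balanced box
    {"name": "node2", "capabilities": {"gpu": "true"}},  # the special kit
]

# Step b): a flavor that maps to the specialness.
gpu_flavor = {"capabilities:gpu": "true"}

# Step c): plain flavors don't require the tag.  Note that in this toy
# model a plain flavor would still match the GPU node; keeping the GPU
# node reserved for the special role is what the host-aggregate config
# in the quoted text handles.
plain_flavor = {}
```

With that sketch, `candidate_nodes(gpu_flavor, nodes)` lands only on node2, while the plain flavor matches everything, which is why the aggregate-based exclusion matters for the reverse direction.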
>> You mention three specific nodes, but what you're describing is more likely three concepts:
>> - Balanced Nodes
>> - High Disk I/O Nodes
>> - Low-End Appliance Nodes
>> They may have one node in each, but I think your example of three nodes is potentially *too* simplified to be a proper sample size. I'd guess there are commonly more than three in play, in which case the breakdown into concepts starts to be more appealing.
> Correct - definitely more than three, I just wanted to illustrate the use case.

I'm not sure I explained what I was getting at properly. I wasn't implying 
you thought it was limited to just three. I do the same thing, simplifying 
down for discussion purposes (I've done so in my head about this very topic).

But I think this may be a rare case where simplifying actually masks the 
concept rather than exposes it. Manual allocation feels a bit more desirable in 
small sample groups, but when looking at larger sets of nodes, the flavor 
concept feels less odd than it does when defining a flavor for a single node.

That's all. :) Maybe that was clear already, but I wanted to make sure I 
didn't come off as attacking your example. It certainly wasn't my 
intention. The balanced v. disk machine thing is the sort of thing I'd 
been thinking about for a while but hadn't found a good way to make concrete.

>> I think the disk flavor in particular has quite a few use cases, especially until SSDs are ubiquitous. I'd want to flag those (in Jay terminology, "the disk hotness") as hosting the data-intensive portions, but where I had previously been viewing that as manual allocation, it sounds like the approach is to properly categorize them for what they are and teach Nova how to use them.
>> Robert - Please correct me if I misread your intention; I don't want to drive people down the wrong path if I'm misinterpreting anything.
> 	-k
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
