[openstack-dev] [TripleO][Tuskar] Icehouse Requirements

Jay Dobies jason.dobies at redhat.com
Thu Dec 12 21:05:41 UTC 2013

> Maybe this is a valid use case?
> Cloud operator has several core service nodes of differing configuration
> types.
> [node1]  <-- balanced mix of disk/cpu/ram for general core services
> [node2]  <-- lots of disks for Ceilometer data storage
> [node3]  <-- low-end "appliance like" box for a specialized/custom core service
> 	     (SIEM box for example)
> All nodes [1,2,3] are in the same deployment grouping ("core services"). As such,
> this is a heterogeneous deployment grouping, where heterogeneity in this case is defined by
> differing roles and hardware configurations.
> This is a real use case.
> How do we handle this?

This is the sort of thing I had been concerned with, but I think this is 
just a variation on Robert's GPU example. Rather than butcher it by 
paraphrasing, I'll just include the relevant part:

"The basic stuff we're talking about so far is just about saying each
role can run on some set of undercloud flavors. If that new bit of kit
has the same coarse metadata as other kit, Nova can't tell it apart.
So the way to solve the problem is:
  - a) teach Ironic about the specialness of the node (e.g. a tag 'GPU'),
  - b) teach Nova that there is a flavor that maps to the presence of
that specialness, and
  - c) teach Nova that other flavors may not map to that specialness

then in Tuskar whatever Nova configuration is needed to use that GPU
is a special role ('GPU compute' for instance) and only that role
would be given that flavor to use. That special config is probably
being in a host aggregate, with an overcloud flavor that specifies
that aggregate, which means at the TripleO level we need to put the
aggregate in the config metadata for that role, and the admin does a
one-time setup in the Nova Horizon UI to configure their GPU compute [...]"

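To make steps a) through c) concrete, here is a minimal sketch (not actual Nova or Ironic code; all names are invented for illustration) of how a capability tag recorded on a node can be matched against a flavor's extra specs, in the spirit of Nova's ComputeCapabilitiesFilter:

```python
# Illustrative sketch only -- how a node's capability tag (step a) and a
# flavor's extra spec (step b) line up, while flavors without the tag
# (step c) simply don't demand it.

def flavor_matches_node(flavor_extra_specs, node_capabilities):
    """Return True if every capability the flavor demands is present
    on the node with the same value."""
    for key, wanted in flavor_extra_specs.items():
        prefix, _, capability = key.partition(":")
        if prefix != "capabilities":
            continue  # ignore unrelated extra specs
        if node_capabilities.get(capability) != wanted:
            return False
    return True

# Step a: Ironic records the node's specialness as a capability tag.
gpu_node = {"gpu": "true"}
plain_node = {}

# Step b: one flavor maps to the presence of that specialness.
gpu_flavor = {"capabilities:gpu": "true"}
# Step c: other flavors carry no such demand.
generic_flavor = {}

assert flavor_matches_node(gpu_flavor, gpu_node)
assert not flavor_matches_node(gpu_flavor, plain_node)
assert flavor_matches_node(generic_flavor, plain_node)
```

The point is just that the special role ('GPU compute') is the only one handed the GPU flavor, so scheduling onto the special kit falls out of the capability match.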
You mention three specific nodes, but what you're describing is more 
likely three concepts:
- Balanced Nodes
- High Disk I/O Nodes
- Low-End Appliance Nodes

They may have one node each, but I think your example of three nodes is 
potentially *too* simplified to be a proper sample size. I'd guess there 
are commonly more than three in play, in which case the breakdown into 
concepts starts to be more appealing.

I think the disk flavor in particular has quite a few use cases, 
especially until SSDs are ubiquitous. I'd want to flag those (in Jay 
terminology, "the disk hotness") as hosting the data-intensive portions, 
but where I had previously been viewing that as manual allocation, it 
sounds like the approach is to properly categorize them for what they 
are and teach Nova how to use them.
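
Put another way, once the nodes are categorized, allocation becomes a role-to-flavor mapping rather than manual placement. A hedged sketch (role and flavor names are invented for illustration, not Tuskar's actual data model):

```python
# Illustrative sketch: each role is handed only the undercloud flavors it
# may be scheduled onto, so e.g. the data-intensive role is the sole
# consumer of the high-disk-I/O flavor ("the disk hotness").

role_flavors = {
    "controller": ["baremetal-balanced"],
    "ceilometer-storage": ["baremetal-high-disk"],
    "siem-appliance": ["baremetal-low-end"],
}

def flavors_for_role(role):
    # A role with no entry gets no flavors, and hence no nodes.
    return role_flavors.get(role, [])

assert "baremetal-high-disk" not in flavors_for_role("controller")
assert flavors_for_role("ceilometer-storage") == ["baremetal-high-disk"]
```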

Robert - Please correct me if I misread any of what your intention was; 
I don't want to drive people down the wrong path if I'm misinterpreting.

> 	-k
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
