[openstack-dev] [heat] Comments/questions on the instance-group-api-extension blueprint

Mike Spreitzer mspreitz at us.ibm.com
Thu Sep 12 18:03:37 UTC 2013


We are currently explicitly considering location and space.  For example, 
a template can require that a volume be on a disk that is directly 
attached to the machine hosting the VM to which the volume is attached. 
Spinning-rust bandwidth is much trickier because it is not something you 
can simply add up when you combine workloads.  The IOPS, as well as the 
bytes per second, that a disk will deliver depend on the workload mix on 
that disk.  While the disk may deliver X IOPS when serving only 
application A and Y IOPS when serving only application B, you cannot 
conclude that it will deliver (X+Y)/2 IOPS when serving an even mix of A 
and B.  While we hope to do better in the future, 
we currently handle disk bandwidth in non-quantitative ways.  One is that 
a template may request that a volume be placed such that it does not 
compete with any other volume (i.e., is the only one on its disk). Another 
is that a template may specify a "type" for a volume, which effectively 
maps to a Cinder volume type that has been pre-defined to correspond to a 
QoS defined in an enterprise storage subsystem.
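To make the non-additivity point concrete, here is a toy sketch in Python 
(the numbers and the "interference" factor are made up purely for 
illustration; this is not a model we actually use):

    # Why per-workload IOPS figures cannot simply be averaged when two
    # workloads end up sharing the same spindle.
    def mixed_iops(iops_a, iops_b, interference=0.6):
        # `interference` is a made-up stand-in for the extra seek
        # overhead introduced when the two request streams interleave.
        naive = (iops_a + iops_b) / 2
        penalized = naive * (1 - interference)
        return naive, penalized

    # Hypothetical figures: A is mostly sequential, B is small random I/O.
    naive, penalized = mixed_iops(iops_a=400, iops_b=120)
    print("naive average: %d IOPS, with interference: %d IOPS"
          % (naive, penalized))

The naive average says 260 IOPS; once seek interference is accounted for, 
the shared disk may deliver far less, which is why we stick to the 
non-quantitative constraints above for now.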

The choice between fast-but-expensive and slow-but-cheap storage is 
currently left to higher layers.  That could be pushed down, provided 
there is a suitably abstract yet accurate way of describing how the 
tradeoff should be made.
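Purely as a strawman, such an abstract description might be a small set 
of hard requirements plus a cost objective; none of the field names below 
exist in any current API, they are hypothetical:

    # Hypothetical sketch: pick the cheapest backend that meets the
    # application's hard floors/ceilings, rather than asking the user
    # to name a storage tier explicitly.
    storage_preference = {
        "min_iops": 500,        # hard floor on IOPS
        "max_latency_ms": 10,   # hard ceiling on average latency
    }

    def choose_backend(backends, pref):
        feasible = [b for b in backends
                    if b["iops"] >= pref["min_iops"]
                    and b["latency_ms"] <= pref["max_latency_ms"]]
        return min(feasible, key=lambda b: b["cost"]) if feasible else None

    backends = [
        {"name": "ssd-tier", "iops": 20000, "latency_ms": 1, "cost": 10},
        {"name": "hdd-tier", "iops": 600,   "latency_ms": 8, "cost": 2},
    ]
    print(choose_backend(backends, storage_preference))  # -> the hdd-tier entry

Whether something that simple is "accurate enough" is exactly the open 
question.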

I think Savanna people are on this list too, so I presume it's a good 
place for this discussion.

Thanks,
Mike



From:   shalz <shalz at hotmail.com>
To:     OpenStack Development Mailing List 
<openstack-dev at lists.openstack.org>
Date:   09/11/2013 09:55 PM
Subject:        Re: [openstack-dev] [heat] Comments/questions on the 
instance-group-api-extension blueprint



Mike,

You mention: "We are now extending that example to include storage, and we 
are also working examples with Hadoop."

In the context of your examples / scenarios, do these placement decisions 
consider storage performance and capacity on a physical node?

For example: based on application needs and IOPS/latency requirements, 
carving out SSD storage or a traditional spinning-disk block volume?  Or, 
say, for cost-efficiency reasons, using SSD caching on Hadoop name nodes? 

I'm investigating a) per-node PCIe SSD deployment needs in an OpenStack / 
Hadoop environment, and b) SSD caching on selected nodes, specifically 
for OpenStack Cinder.  I hope this is the right forum to ask this 
question.

rgds,
S


