<font size=2 face="sans-serif">We are currently explicitly considering
location and space. For example, a template can require that a volume
be in a disk that is directly attached to the machine hosting the VM to
which the volume is attached. Spinning rust bandwidth is much trickier
because it is not something you can simply add up when you combine workloads.
The IOPS, as well as the B/S, that a disk will deliver depends on
the workload mix on that disk. While the disk may deliver X IOPS
when serving only application A, and Y when serving only application B,
you cannot conclude that it will serve (X+Y)/2 when serving (A+B)/2. While
we hope to do better in the future, we currently handle disk bandwidth
in non-quantitative ways. One is that a template may request that
a volume be placed such that it does not compete with any other volume
(i.e., is the only one on its disk). Another is that a template may
specify a "type" for a volume, which effectively maps to a Cinder
volume type that has been pre-defined to correspond to a QoS defined in
an enterprise storage subsystem.</font>
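To make that volume "type" mechanism concrete, here is a minimal sketch of how an operator might pre-define such a type with python-cinderclient. The "gold" naming, the credentials, and the QoS keys (total_iops_sec, total_bytes_sec) are illustrative assumptions; the keys a given back end actually honours are driver-specific, and exact client signatures can differ between releases.

  # Minimal sketch: pre-define a Cinder volume type backed by a QoS spec.
  # Assumptions: python-cinderclient v1 API, placeholder credentials,
  # illustrative QoS keys (real keys are driver-specific).
  from cinderclient import client

  USER, PASSWORD, TENANT = 'admin', 'secret', 'admin'   # placeholders
  AUTH_URL = 'http://keystone.example.com:5000/v2.0'    # placeholder

  cinder = client.Client('1', USER, PASSWORD, TENANT, AUTH_URL)

  # The volume type that a template's volume "type" field would name.
  gold = cinder.volume_types.create('gold')

  # A QoS spec; consumer=back-end hands enforcement to the storage
  # subsystem rather than the hypervisor. Rate values are examples only.
  qos = cinder.qos_specs.create('gold-qos',
                                {'consumer': 'back-end',
                                 'total_iops_sec': '500',
                                 'total_bytes_sec': '104857600'})

  # Tie the QoS spec to the volume type.
  cinder.qos_specs.associate(qos, gold.id)

A template that asks for a "gold" volume then gets whatever service level the back end delivers for that type, and the placement logic never has to reason quantitatively about the disk's remaining IOPS.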
The choice between fast & expensive and slow & cheap storage is currently left to higher layers. That could be pushed down, provided there is a suitably abstract yet accurate way of describing how the trade-off should be made.
<br><font size=2 face="sans-serif">I think Savanna people are on this list
too, so I presume it's a good place for this discussion.</font>
<br>
<br><font size=2 face="sans-serif">Thanks,</font>
<br><font size=2 face="sans-serif">Mike</font>
<br>
<br>
<br>
<br><font size=1 color=#5f5f5f face="sans-serif">From:
</font><font size=1 face="sans-serif">shalz <shalz@hotmail.com></font>
<br><font size=1 color=#5f5f5f face="sans-serif">To:
</font><font size=1 face="sans-serif">OpenStack Development
Mailing List <openstack-dev@lists.openstack.org>, </font>
<br><font size=1 color=#5f5f5f face="sans-serif">Date:
</font><font size=1 face="sans-serif">09/11/2013 09:55 PM</font>
<br><font size=1 color=#5f5f5f face="sans-serif">Subject:
</font><font size=1 face="sans-serif">Re: [openstack-dev]
[heat] Comments/questions on the instance-group-api-extension
blueprint</font>
<br>
<hr noshade>
Mike,

You mention "We are now extending that example to include storage, and we are also working examples with Hadoop."

In the context of your examples/scenarios, do these placement decisions consider storage performance and capacity on a physical node?

For example: based on application needs and IOPS/latency requirements, carving out SSD storage or a traditional spinning-disk block volume? Or, say, for cost-efficiency reasons, using SSD caching on Hadoop name nodes?

I'm investigating a) per-node PCIe SSD deployment in an OpenStack/Hadoop environment, and b) SSD caching on selected nodes, specifically for OpenStack Cinder. Hope this is the right forum to ask this question.

rgds,
S