Hi Adam,

We provide various volume types which differ in

- performance (implemented via different IOPS QoS specifications, not via different hardware),
- service quality (e.g. volumes on a Ceph pool that is on diesel-backed servers, so via separate hardware),
- a combination of the two,
- geographical location (with a second Ceph instance in another data centre).

I think it is absolutely realistic/manageable to use the same Ceph cluster for various use cases. (Rough sketches of what this can look like on the Cinder and Ceph side are below the quoted message.)

HTH,
 Arne


> On 26 Oct 2015, at 14:02, Adam Lawson <alawson@aqorn.com> wrote:
>
> Has anyone deployed Ceph to accommodate different disk/performance requirements? I.e. saving ephemeral storage and boot volumes on SSD, and less important content such as object storage and Glance images on SATA, or something along those lines?
>
> Just looking at whether it's realistic (or to discover best practice) to use the same Ceph cluster for both use cases...
>
> //adam
>
> Adam Lawson
>
> AQORN, Inc.
> 427 North Tatnall Street
> Ste. 58461
> Wilmington, Delaware 19801-2230
> Toll-free: (844) 4-AQORN-NOW ext. 101
> International: +1 302-387-4660
> Direct: +1 916-246-2072
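On the Cinder side, a minimal sketch of how both mechanisms could be wired up: two RBD backends pointing at different Ceph pools (service quality), plus a volume type throttled via front-end IOPS QoS specs (performance). Pool names, backend names and IOPS numbers below are made-up placeholders, not our production values.

  # cinder.conf -- two RBD backends, one per Ceph pool (hypothetical names)
  [DEFAULT]
  enabled_backends = rbd-standard,rbd-critical

  [rbd-standard]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_pool = volumes-standard
  volume_backend_name = rbd-standard

  [rbd-critical]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_pool = volumes-critical          # pool whose OSDs sit on the diesel-backed servers
  volume_backend_name = rbd-critical

  # volume type pinned to a backend (service quality)
  cinder type-create critical
  cinder type-key critical set volume_backend_name=rbd-critical

  # volume type with front-end QoS specs (performance); example numbers only
  cinder type-create high-iops
  cinder qos-create high-iops-qos consumer=front-end read_iops_sec=1600 write_iops_sec=800
  cinder qos-associate <qos-spec-id> <volume-type-id>

Users then just pick the volume type at "cinder create" time and the scheduler/driver takes care of the rest.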
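For the SSD vs. SATA split asked about above, the usual approach is to give each pool a CRUSH rule that only selects the matching OSDs, so both tiers live in the same cluster. A rough sketch; on recent Ceph releases (Luminous and later) device classes make this a couple of commands, while on older releases the same effect is achieved by editing the CRUSH map by hand. Pool names, rule names and PG counts are placeholders:

  # one CRUSH rule per device class (OSD classes are usually auto-detected,
  # otherwise: ceph osd crush set-device-class ssd osd.<N>)
  ceph osd crush rule create-replicated ssd-rule default host ssd
  ceph osd crush rule create-replicated sata-rule default host hdd

  # one pool per tier, each pinned to its rule
  ceph osd pool create volumes-ssd 512
  ceph osd pool set volumes-ssd crush_rule ssd-rule
  ceph osd pool create images-sata 256
  ceph osd pool set images-sata crush_rule sata-rule

Glance and the Cinder/Nova RBD backends are then simply pointed at whichever pool matches the content (rbd_store_pool in glance-api.conf, rbd_pool in cinder.conf, images_rbd_pool in nova.conf).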
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators