There are some cases where it may be easier to run them as separate clusters, for instance when you start adding tuning changes that don't make sense for both types of cluster. The recent tcmalloc/jemalloc discovery for SSDs is a good example: I'm not sure it makes sense for spinners, and it is a change at the library level, not in the config.

It isn't much more overhead to run multi-backend Cinder, which you kind of need to do anyway to guarantee QoS on the default volume type for boot-from-volume (BfV).

--
Warren
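For reference, a minimal multi-backend cinder.conf along the lines Warren describes might look like the sketch below; the backend names, pool names and Ceph user are placeholders rather than anything taken from the thread:

  [DEFAULT]
  enabled_backends = ceph-ssd,ceph-sata

  # backend and pool names below are placeholders
  [ceph-ssd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph-ssd
  rbd_pool = volumes-ssd
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

  [ceph-sata]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  volume_backend_name = ceph-sata
  rbd_pool = volumes-sata
  rbd_ceph_conf = /etc/ceph/ceph.conf
  rbd_user = cinder
  rbd_secret_uuid = <libvirt secret uuid>

Each backend is then exposed through a volume type whose volume_backend_name extra spec matches the value set above, so the scheduler routes requests to the right pool.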
On Oct 27, 2015, at 12:32 AM, Arne Wiebalck <arne.wiebalck@cern.ch> wrote:

> Hi Adam,
>
> We provide various volume types which differ in
>
> - performance (implemented via different IOPS QoS specifications, not via different hardware),
> - service quality (e.g. volumes on a Ceph pool that is on Diesel-backed servers, so via separate hardware),
> - a combination of the two,
> - geographical location (with a second Ceph instance in another data centre).
>
> I think it is absolutely realistic/manageable to use the same Ceph cluster for various use cases.
>
> HTH,
>  Arne
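As a concrete illustration of the QoS-spec approach Arne describes, the cinder CLI of that era looked roughly like the sketch below; the type names, backend name and IOPS numbers are made up for illustration:

  # two volume types on the same backend, differing only in QoS
  cinder type-create standard
  cinder type-create high-iops
  cinder type-key standard set volume_backend_name=ceph-ssd
  cinder type-key high-iops set volume_backend_name=ceph-ssd

  # front-end QoS specs (enforced by the hypervisor at attach time);
  # the IOPS values are examples only
  cinder qos-create standard-iops consumer=front-end read_iops_sec=400 write_iops_sec=200
  cinder qos-create premium-iops consumer=front-end read_iops_sec=2000 write_iops_sec=1000

  # associate each QoS spec with its volume type (both referenced by ID)
  cinder qos-associate <standard-iops-qos-id> <standard-type-id>
  cinder qos-associate <premium-iops-qos-id> <high-iops-type-id>

Because the limits are applied on the front end, the two types can share the same Ceph pool and hardware, which is the point Arne makes above.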
> On 26 Oct 2015, at 14:02, Adam Lawson <alawson@aqorn.com> wrote:
>
>> Has anyone deployed Ceph to accommodate different disk/performance requirements, i.e. keeping ephemeral storage and boot volumes on SSD, and less important content such as object storage and Glance images on SATA, or something along those lines?
>>
>> Just looking at whether it's realistic (or trying to discover the best practice) for using the same Ceph cluster for both use cases...
>>
>> //adam
>>
>> Adam Lawson
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
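On Adam's original question of mixing SSD and SATA in one cluster, the usual pattern at the time was a separate CRUSH root and rule per media type, with one pool per rule. A rough, untested sketch (the bucket, rule and pool names and the PG counts are placeholders, and it assumes the SSD and SATA OSDs have already been grouped under CRUSH roots named 'ssd' and 'sata'):

  # one CRUSH rule per media type, replicating across hosts
  ceph osd crush rule create-simple ssd-rule ssd host
  ceph osd crush rule create-simple sata-rule sata host

  # one pool per media type; PG counts are examples only
  ceph osd pool create volumes-ssd 512
  ceph osd pool create volumes-sata 512

  # point each pool at its rule (rule ids from 'ceph osd crush rule dump');
  # on releases of that era the property is crush_ruleset, later renamed crush_rule
  ceph osd pool set volumes-ssd crush_ruleset <ssd-rule-id>
  ceph osd pool set volumes-sata crush_ruleset <sata-rule-id>

The Cinder backends and the Glance/Nova RBD settings then simply point at whichever pool matches the intended service level.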
_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators