[Openstack-operators] [Ceph] Different storage types on different disk types

Warren Wang warren at wangspeed.com
Mon Oct 26 18:11:36 UTC 2015


There are some cases where it may be easier to run them as separate clusters, e.g. when you start adding tuning changes that only make sense for one type of cluster, like the recent tcmalloc/jemalloc discovery for SSDs. I'm not sure that tuning makes sense for spinners, and it lives at the library level, not in the config.
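
For what it's worth, that kind of tuning is per OSD host, so it only has to touch the SSD cluster. On Ubuntu it looks roughly like this (exact paths, and whether your init scripts actually source this file, depend on your distro and Ceph packaging):

  # /etc/default/ceph on the SSD cluster's OSD hosts
  TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES=134217728
  # or preload jemalloc instead of tcmalloc:
  # LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1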

It isn't much more overhead to run multi-backend Cinder, which you kind of need to do anyway to guarantee QoS on the default volume type for boot-from-volume (BfV).
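
For reference, the multi-backend part is just an extra section per backend in cinder.conf, plus a volume type mapped to each one. A minimal sketch (pool and backend names are placeholders):

  # cinder.conf
  [DEFAULT]
  enabled_backends = ceph-ssd,ceph-sata

  [ceph-ssd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-ssd
  volume_backend_name = ceph-ssd

  [ceph-sata]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-sata
  volume_backend_name = ceph-sata

  # then map a volume type to each backend:
  cinder type-create ssd
  cinder type-key ssd set volume_backend_name=ceph-ssd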

--
Warren

> On Oct 27, 2015, at 12:32 AM, Arne Wiebalck <arne.wiebalck at cern.ch> wrote:
> 
> Hi Adam,
> 
> We provide various volume types which differ in
> 
> - performance (implemented via different IOPS QoS specifications, not via different hardware),
> - service quality (e.g. volumes on a Ceph pool that is on Diesel-backed servers, so via separate hardware),
> - a combination of the two,
> - geographical location (with a second Ceph instance in another data centre).
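> 
> (For the IOPS-based types, these are plain Cinder QoS specs; a minimal
> sketch, where names, IDs and limits are placeholders:
> 
>   cinder qos-create standard-iops consumer="front-end" \
>       read_iops_sec=100 write_iops_sec=100
>   cinder qos-associate <qos-spec-id> <volume-type-id>
> 
> With consumer="front-end" the limits are enforced by the hypervisor
> rather than inside Ceph.)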
> 
> I think it is absolutely realistic/manageable to use the same Ceph cluster for various use cases.
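> 
> (On the "same cluster" point: in current Ceph this is done with
> separate CRUSH roots and rules, e.g. one root for the SSD hosts and
> one for the SATA hosts. A rough sketch, with host, rule and pool
> names and PG counts as placeholders:
> 
>   ceph osd crush add-bucket ssd-root root
>   ceph osd crush move ssd-host-1 root=ssd-root
>   ceph osd crush rule create-simple ssd-rule ssd-root host
>   ceph osd pool create volumes-ssd 512 512 replicated ssd-rule
> 
> Each Cinder backend or Nova ephemeral pool then simply points at the
> pool with the placement you want.)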
> 
> HTH,
>  Arne
> 
>> On 26 Oct 2015, at 14:02, Adam Lawson <alawson at aqorn.com> wrote:
>> 
>> Has anyone deployed Ceph to accommodate different disk/performance requirements? E.g. keeping ephemeral storage and boot volumes on SSD, and less critical content such as object storage and Glance images on SATA, or something along those lines?
>> 
>> Just looking into whether it's realistic (or what the best practice is) to use the same Ceph cluster for both use cases...
>> 
>> //adam
>> 
>> Adam Lawson
>> 
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>> 