[Openstack-operators] [Ceph] Different storage types on different disk types

Adam Lawson alawson at aqorn.com
Wed Oct 28 08:31:15 UTC 2015


Thanks everyone, I got several good ideas on how to proceed. Thanks!!

//adam
On Oct 28, 2015 11:55 AM, "Andrew Woodward" <xarses at gmail.com> wrote:

> Adam,
>
> Most deployments use a different pool for each storage type, as discussed
> above. These pools can then be mapped to different disks, racks and so on
> by updating the CRUSH rules, which control how each pool's data is placed
> across the cluster.
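>
> A rough sketch of what that can look like (host, pool and rule names are
> hypothetical):
>
>   # crushmap excerpt: a dedicated root and rule for the SSD-only hosts
>   root ssd {
>       id -10
>       alg straw
>       hash 0
>       item ssd-host-1 weight 1.000
>       item ssd-host-2 weight 1.000
>   }
>
>   rule ssd_rule {
>       ruleset 1
>       type replicated
>       min_size 1
>       max_size 10
>       step take ssd
>       step chooseleaf firstn 0 type host
>       step emit
>   }
>
>   # point the fast pool at the SSD rule
>   ceph osd pool set volumes-ssd crush_ruleset 1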
>
> On Tue, Oct 27, 2015 at 3:15 AM Warren Wang <warren at wangspeed.com> wrote:
>
>> There are some cases where it may be easier to run them as separate
>> clusters, such as when you start applying tuning changes that only make
>> sense for one type of cluster. Take the recent tcmalloc/jemalloc discovery
>> for SSDs: I'm not sure it makes sense for spinners, and that change is at
>> the library level, not in config.
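>>
>> For context, that library-level change means switching the OSD daemons
>> from tcmalloc to jemalloc, typically by preloading it. A sketch only; the
>> library path and service integration are distribution-specific:
>>
>>   # run an OSD with jemalloc preloaded instead of the default tcmalloc
>>   LD_PRELOAD=/usr/lib64/libjemalloc.so.1 ceph-osd -i 0 -f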
>>
>> It isn't much more overhead to run multi-backend Cinder, which you kind
>> of need to do anyway to guarantee QoS on the default volume type for
>> boot-from-volume (BfV).
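>>
>> A minimal multi-backend cinder.conf sketch (backend and pool names are
>> hypothetical):
>>
>>   [DEFAULT]
>>   enabled_backends = ceph-ssd,ceph-sata
>>
>>   [ceph-ssd]
>>   volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>   rbd_pool = volumes-ssd
>>   rbd_ceph_conf = /etc/ceph/ceph.conf
>>   volume_backend_name = ceph-ssd
>>
>>   [ceph-sata]
>>   volume_driver = cinder.volume.drivers.rbd.RBDDriver
>>   rbd_pool = volumes-sata
>>   rbd_ceph_conf = /etc/ceph/ceph.conf
>>   volume_backend_name = ceph-sata
>>
>> Each backend is then exposed to users through a volume type keyed on its
>> volume_backend_name.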
>>
>> --
>> Warren
>>
>> On Oct 27, 2015, at 12:32 AM, Arne Wiebalck <arne.wiebalck at cern.ch>
>> wrote:
>>
>> Hi Adam,
>>
>> We provide various volume types which differ in
>>
>> - performance (implemented via different IOPS QoS specifications rather
>> than different hardware; see the sketch after this list),
>> - service quality (e.g. volumes on a Ceph pool that is on Diesel-backed
>> servers, so via separate hardware),
>> - a combination of the two,
>> - geographical location (with a second Ceph instance in another data
>> centre).
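>>
>> A rough sketch of setting up such a QoS-based type (names and limits are
>> hypothetical):
>>
>>   cinder type-create standard
>>   cinder qos-create standard-iops consumer=front-end \
>>       read_iops_sec=100 write_iops_sec=100
>>   cinder qos-associate <qos-spec-id> <volume-type-id>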
>>
>> I think it is absolutely realistic/manageable to use the same Ceph
>> cluster for various use cases.
>>
>> HTH,
>>  Arne
>>
>> On 26 Oct 2015, at 14:02, Adam Lawson <alawson at aqorn.com> wrote:
>>
>> Has anyone deployed Ceph to accommodate different disk/performance
>> requirements? I.e. saving ephemeral storage and boot volumes on SSD, and
>> less important content such as object storage and Glance images on SATA,
>> or something along those lines?
>>
>> Just looking into whether it's realistic (or to discover best practice) to
>> use the same Ceph cluster for both use cases...
>>
>> //adam
>>
>> *Adam Lawson*
>>
>> AQORN, Inc.
>> 427 North Tatnall Street
>> Ste. 58461
>> Wilmington, Delaware 19801-2230
>> Toll-free: (844) 4-AQORN-NOW ext. 101
>> International: +1 302-387-4660
>> Direct: +1 916-246-2072
>>
> --
>
> Andrew Woodward
>
> Mirantis
>
> Fuel Community Ambassador
>
> Ceph Community
>

