[Openstack-operators] Multiple Ceph pools for Nova?

Guilherme Steinmuller Pimentel guilherme.pimentel at ccc.ufcg.edu.br
Mon May 21 19:31:40 UTC 2018


2018-05-21 16:17 GMT-03:00 Erik McCormick <emccormick at cirrusseven.com>:

> Do you have enough hypervisors you can dedicate some to each purpose? You
> could make two availability zones each with a different backend.
>

I have about 20 hypervisors: ten use a nova pool backed by SAS disks, and
the other ten use a separate pool backed by SATA disks.
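
A minimal sketch of how that split can look on the compute side, assuming
Nova ephemeral disks are backed by RBD. The pool names (nova-sas,
nova-sata) and the Ceph user are illustrative assumptions, not taken from
this thread:

```ini
# /etc/nova/nova.conf on a SAS-backed hypervisor (illustrative sketch;
# pool and user names are assumptions). A SATA-backed hypervisor would
# set images_rbd_pool = nova-sata instead.
[libvirt]
images_type = rbd
images_rbd_pool = nova-sas
rbd_user = nova
rbd_secret_uuid = <libvirt secret uuid>
```

Each group of hypervisors points at its own pool, so which pool an
instance lands in follows from which host the scheduler picks.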

Yes, creating two availability zones is an option. I didn't dive deep into
that approach when planning the deployment, so I am using the default nova
availability zone and selecting the pool through flavor and host-aggregate
metadata instead.
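
The flavor/aggregate routing described above can be sketched roughly as
follows. This is a hedged example, not the exact commands from my
deployment; the aggregate name, property key, flavor name, and host name
are all illustrative, and it assumes the AggregateInstanceExtraSpecsFilter
scheduler filter is enabled:

```shell
# Group the SAS-backed hypervisors into an aggregate and tag it
# (names and the "pool" property key are illustrative assumptions):
openstack aggregate create sas-hosts
openstack aggregate set --property pool=sas sas-hosts
openstack aggregate add host sas-hosts compute01

# Create a flavor that may only be scheduled onto hosts in that
# aggregate, via the matching extra spec:
openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.sas
openstack flavor set \
    --property aggregate_instance_extra_specs:pool=sas m1.sas
```

Instances booted with m1.sas then land only on the SAS-backed hosts, and
therefore in the SAS-backed pool, without needing a second availability
zone.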


>
> On Mon, May 21, 2018, 11:52 AM Smith, Eric <Eric.Smith at ccur.com> wrote:
>
>> I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks
>> (Separate roots within the CRUSH hierarchy). I’d like to run all instances
>> in a single project / tenant on SSDs and the rest on spinning disks. How
>> would I go about setting this up?
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>