[Openstack-operators] Multiple Ceph pools for Nova?

Matt Riedemann mriedemos at gmail.com
Tue May 22 04:51:36 UTC 2018


On 5/21/2018 11:51 AM, Smith, Eric wrote:
> I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks 
> (Separate roots within the CRUSH hierarchy). I’d like to run all 
> instances in a single project / tenant on SSDs and the rest on spinning 
> disks. How would I go about setting this up?

As mentioned elsewhere, host aggregates would work for the compute hosts 
connected to each storage pool. You can then define different flavors per 
aggregate, charge more for the SSD flavors, and restrict each aggregate 
to specific tenants [1].
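As a rough sketch of that setup (hypothetical host, aggregate, and flavor names; assumes AggregateMultiTenancyIsolation and AggregateInstanceExtraSpecsFilter are in the scheduler's enabled_filters):

```shell
# Aggregate for the SSD-backed computes, restricted to one project
openstack aggregate create ssd-agg
openstack aggregate add host ssd-agg compute-ssd-01
openstack aggregate set \
  --property filter_tenant_id=$PROJECT_UUID \
  --property pool=ssd \
  ssd-agg

# Flavor that only lands on hosts in the SSD aggregate
openstack flavor create --vcpus 2 --ram 4096 --disk 40 m1.ssd
openstack flavor set --property aggregate_instance_extra_specs:pool=ssd m1.ssd
```

The filter_tenant_id key is what AggregateMultiTenancyIsolation matches against, and the pool=ssd pair is matched by AggregateInstanceExtraSpecsFilter so the m1.ssd flavor can only schedule onto the SSD aggregate.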

Alternatively, if this is something you plan to eventually scale to a 
larger size, you could separate the pools into separate cells and use 
resource provider aggregates in placement to mirror the host aggregates 
for tenant-per-cell filtering [2]. That sounds very similar to what CERN 
does (cells per hardware characteristic, with projects assigned to 
specific cells), so Belmiro could probably offer some guidance here too. 
Check out the talk he gave today at the summit [3].
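For the placement-based variant in [2], the behavior is driven by nova.conf scheduler options on top of the same filter_tenant_id aggregate key; a minimal sketch using the option names from the Rocky-era docs:

```ini
[scheduler]
# Ask placement for the requesting tenant's aggregate and confine
# scheduling to hosts in it
limit_tenants_to_placement_aggregate = True
# If False, tenants with no matching aggregate can still land anywhere;
# set True to make an aggregate mapping mandatory for every tenant
placement_aggregate_required_for_tenants = False
```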

[1] 
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#aggregatemultitenancyisolation
[2] 
https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
[3] 
https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern

-- 

Thanks,

Matt
