[Openstack-operators] Multiple Ceph pools for Nova?

Smith, Eric Eric.Smith at ccur.com
Tue May 22 13:57:47 UTC 2018


Thanks everyone for the feedback. I have a pretty small environment (11 nodes), and I was able to find the compute/volume pool settings in nova.conf and cinder.conf. I think I should be able to just export/import my existing RBDs from the spinning-disk compute pool to the SSD compute pool and update nova.conf, then add an extra backend in cinder.conf so that new volumes land in the SSD volumes pool.
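
Roughly, something like this (pool names, backend names, and the image UUID below are just placeholders for my environment; I haven't run it yet):

  # copy each existing RBD image from the spinning-disk pool to the SSD pool
  rbd export vms-hdd/<image-uuid> - | rbd import - vms-ssd/<image-uuid>

  # nova.conf on the compute nodes: point ephemeral disks at the SSD pool
  [libvirt]
  images_type = rbd
  images_rbd_pool = vms-ssd

  # cinder.conf: add a second backend for the SSD volumes pool
  [DEFAULT]
  enabled_backends = ceph-hdd,ceph-ssd

  [ceph-ssd]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes-ssd
  rbd_user = cinder
  rbd_ceph_conf = /etc/ceph/ceph.conf
  volume_backend_name = ceph-ssd

  # and a volume type tied to the new backend so new volumes go to SSD
  openstack volume type create ssd
  openstack volume type set --property volume_backend_name=ceph-ssd ssd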

Thanks for all the help again.
Eric

On 5/22/18, 12:53 AM, "Matt Riedemann" <mriedemos at gmail.com> wrote:

    On 5/21/2018 11:51 AM, Smith, Eric wrote:
    > I have 2 Ceph pools, one backed by SSDs and one backed by spinning disks 
    > (Separate roots within the CRUSH hierarchy). I’d like to run all 
    > instances in a single project / tenant on SSDs and the rest on spinning 
    > disks. How would I go about setting this up?
    
    As mentioned elsewhere, host aggregates would work for the compute 
    hosts connected to each storage pool. Then you can have different 
    flavors per aggregate and charge more for the SSD flavors, or restrict 
    the aggregates based on tenant [1].
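    
    For example, a rough sketch (the project ID, host name and flavor are 
    placeholders, and both filters have to be enabled in nova.conf):
    
        openstack aggregate create ssd-hosts
        openstack aggregate add host ssd-hosts compute-ssd-01
        # AggregateMultiTenancyIsolation keys off this metadata
        openstack aggregate set --property filter_tenant_id=<project-uuid> ssd-hosts
        # tie an SSD-only flavor to the aggregate
        openstack aggregate set --property ssd=true ssd-hosts
        openstack flavor create --vcpus 4 --ram 8192 --disk 40 m1.ssd
        openstack flavor set --property aggregate_instance_extra_specs:ssd=true m1.ssd
    
        # nova.conf on the scheduler nodes
        [filter_scheduler]
        enabled_filters = ...,AggregateMultiTenancyIsolation,AggregateInstanceExtraSpecsFilter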
    
    Alternatively, if this is something you plan to scale up eventually, 
    you could even separate the pools into separate cells and use resource 
    provider aggregates in placement to mirror the host aggregates for 
    tenant-per-cell filtering [2]. It sounds like this is very similar to 
    what CERN does (cells split by hardware characteristics, with projects 
    assigned to specific cells), so Belmiro could probably give some 
    guidance here too. Check out the talk he gave today at the summit [3].
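    
    A minimal sketch of the placement-side isolation, assuming you're on 
    a release new enough to have it (the filter_tenant_id metadata on the 
    host aggregate is what gets checked; see [2] for the details):
    
        # nova.conf on the scheduler nodes
        [scheduler]
        limit_tenants_to_placement_aggregate = True
        # set this to True to refuse requests from tenants that aren't
        # mapped to any aggregate
        placement_aggregate_required_for_tenants = False
    
        # mirror the host aggregates into placement aggregates so the
        # scheduler can filter on them
        nova-manage placement sync_aggregates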
    
    [1] 
    https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#aggregatemultitenancyisolation
    [2] 
    https://docs.openstack.org/nova/latest/admin/configuration/schedulers.html#tenant-isolation-with-placement
    [3] 
    https://www.openstack.org/videos/vancouver-2018/moving-from-cellsv1-to-cellsv2-at-cern
    
    -- 
    
    Thanks,
    
    Matt
    


