[cinder] setting `rbd_exclusive_cinder_pool = True` with multiple cinder-volume writing to the same pool

Gorka Eguileor geguileo at redhat.com
Tue Mar 19 09:54:31 UTC 2019


On 18/03, Krzysztof Klimonda wrote:
> Thanks guys,
>
> @Gorka, Is the recommendation one ceph pool per cinder-volume or rather one cinder-volume (in active/passive HA) per ceph cluster? It seems Active/Active HA hasn’t been finished (looking at https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support) so one cinder-volume would probably become a bottleneck at some point? Or is it going to be mostly idle anyway now that I’ve toggled `rbd_exclusive_cinder_pool = True`?
>
> Regards,
> Chris

Hi Chris,

The recommendation is one Ceph pool per Cinder "backend".  You can have
multiple cinder-volume services accessing the same Ceph pool if they are
deployed Active/Active, or multiple cinder-volume services accessing
different pools from the same cluster if they are deployed
Active/Passive.
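
For example, a cinder.conf along these lines keeps one pool per backend
(the section, pool, and user names here are made up for illustration):

```ini
# Hypothetical cinder.conf fragment: one Ceph pool per Cinder backend.
[DEFAULT]
enabled_backends = rbd-a,rbd-b

[rbd-a]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-a
rbd_pool = volumes-a        # pool used only by this backend
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf

[rbd-b]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = rbd-b
rbd_pool = volumes-b        # a different pool for the second backend
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
```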

One of the reasons it's better not to share the same pool is how the
scheduler decreases the available space on a pool when creating a new
resource: each backend reports the whole pool's stats, so with a shared
pool the free-space accounting becomes inaccurate.
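
A rough sketch of that accounting problem (all numbers and backend names
are hypothetical, and this is not the actual scheduler code):

```python
# Illustration only: why two Cinder backends sharing one Ceph pool can
# mislead the scheduler. All numbers and names are hypothetical.

pool_free_gb = 1000  # real free space in the shared Ceph pool

# Both backends report stats for the *same* pool, so the scheduler
# believes it has 2000 GB available across the two backends.
backend_stats = {
    "rbd-a": {"free_capacity_gb": pool_free_gb},
    "rbd-b": {"free_capacity_gb": pool_free_gb},
}

# The scheduler places a 400 GB volume on rbd-a and decrements only that
# backend's counter; rbd-b still advertises 1000 GB even though the
# real pool now has only 600 GB free.
backend_stats["rbd-a"]["free_capacity_gb"] -= 400

apparent_total = sum(s["free_capacity_gb"] for s in backend_stats.values())
real_free = pool_free_gb - 400

print(apparent_total)  # 1600 -- what the scheduler thinks is left
print(real_free)       # 600  -- what the pool actually has
```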

In current master you can enable Active/Active for cinder-volume on an
RBD backend, but this hasn't been thoroughly tested yet.

I added the rbd_exclusive_cinder_pool option to greatly reduce the
number of connections Cinder makes to the Ceph cluster when gathering
usage stats, and the improvement should be noticeable.
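
A minimal sketch of the difference (not the real RBD driver code): with
the option enabled, Cinder assumes it is the pool's only writer and can
sum the sizes it already tracks; with it disabled, it has to query every
image in the pool. The function and variable names are made up.

```python
# Sketch of the two stats strategies; not the actual RBD driver code.

def provisioned_gb_exclusive(tracked_volume_sizes_gb):
    """rbd_exclusive_cinder_pool = True: Cinder assumes it is the only
    writer, so provisioned capacity is just the sum of sizes it already
    knows -- no per-image queries against the Ceph cluster."""
    return sum(tracked_volume_sizes_gb)

def provisioned_gb_shared(list_images, image_size_gb):
    """rbd_exclusive_cinder_pool = False: the pool may contain foreign
    images, so every image must be inspected -- cluster round-trips
    proportional to the number of images."""
    return sum(image_size_gb(name) for name in list_images())

# Hypothetical pool contents: Cinder's own volumes plus a foreign image.
sizes = {"volume-1": 10, "volume-2": 20, "foreign-img": 5}

cheap = provisioned_gb_exclusive([10, 20])
costly = provisioned_gb_shared(lambda: sizes.keys(), lambda n: sizes[n])

print(cheap)   # 30 -- Cinder-only view, no cluster traffic needed
print(costly)  # 35 -- full pool view, requires touching each image
```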

In general you should be OK with a single pool and a single service, but
that will depend on the Cinder version, whether you are using backups,
the number of concurrent operations, and which operations are most
common.  So real load testing is the best way to determine maximum
load levels.

Cheers,
Gorka.

>
> > On 18 Mar 2019, at 19:00, Gorka Eguileor <geguileo at redhat.com> wrote:
> >
> > On 13/03, Krzysztof Klimonda wrote:
> >> Hi,
> >>
> >> Just doing a quick sanity check - I can set `rbd_exclusive_cinder_pool = True` (in pike, where the change landed first) even if more than one cinder-volume is writing to the same Ceph pool, right? Assuming nothing writes to the pool outside of cinder, we should be good as far as I understand the code.
> >>
> >> -Chris
> >
> > Hi Chris,
> >
> > It's best not to share your pools, but your assessment is correct
> > regarding the rbd_exclusive_cinder_pool configuration option.  The code
> > ruled by that option will work as expected even when sharing the pool
> > between different cinder-volume services.
> >
> > Cheers,
> > Gorka.
>


