Cinder Ceph backup concurrency

Gorka Eguileor geguileo at redhat.com
Mon Jun 10 10:39:09 UTC 2019


On 08/06, Cory Hawkless wrote:
> I'm using Rocky and Cinder's built-in Ceph backup driver, which is working OK, but I'd like to limit each instance of the backup agent to X concurrent backups.
> For example, if I (or a tenant) trigger a backup of 20 volumes, the cinder-backup agent promptly starts backing up all 20 volumes simultaneously. While this works, it has the downside of saturating links, causing high IO on the disks, etc.
> 
> Ideally I'd like each cinder-backup agent limited to running X (perhaps 5) backup jobs at any one time, with the remaining jobs queued until an agent has fewer than X jobs running.
> 
> Is this possible at all?
> Based on my understanding, the Cinder scheduler service handles the allocation and distribution of the backup tasks. Is that correct?
> 
> Thanks in advance
> Cory

Hi Cory,

Cinder doesn't have any kind of throttling mechanism specific to
"heavy" operations.  This also applies to the cinder-backup service,
which does not make use of the cinder-scheduler service, so the
scheduler does not allocate or distribute backup tasks.

I think there may be ways to do throttling for the case you describe,
though I haven't tried them:

You can define "executor_thread_pool_size" (defaults to 64) to reduce
the number of concurrent operations that the cinder-backup service
will execute (backup listings and such will not be affected, as they
are handled by cinder-api).  Requests over that limit will wait in
the oslo.messaging executor's queue, and the rest in the RabbitMQ
message queue.
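
For example, in cinder.conf on the backup node (the value 5 is just
an illustration, pick whatever fits your links and disks):

    [DEFAULT]
    # Limit the RPC executor so at most 5 operations run
    # concurrently on this cinder-backup service.
    executor_thread_pool_size = 5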

For the RBD backend you could also limit the size of the native
thread pool with "backup_native_threads_pool_size", which will limit
the number of concurrent RBD calls (since those calls use native
threads instead of green threads).
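
Something along these lines (again, 5 is only an example value):

    [DEFAULT]
    # Cap the native thread pool the backup service uses for RBD
    # calls, limiting concurrent RBD operations.
    backup_native_threads_pool_size = 5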

Also, don't forget to ensure that "backup_workers" is set to 1,
otherwise you will be running multiple processes, each with the
previously defined limits, resulting in N times the concurrency you
wanted.
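
In other words (untested sketch):

    [DEFAULT]
    # Run a single cinder-backup process so the pool limits above
    # apply to the whole service rather than per worker.
    backup_workers = 1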

I hope this helps.

Cheers,
Gorka.


