[openstack-dev] [openstack-operators][cinder] max_concurrent_builds in Cinder

Alex Meade mr.alex.meade at gmail.com
Mon May 23 18:59:03 UTC 2016

This sounds like a good idea to me. The message queue doesn't handle this, since
we read everything off it immediately anyway. I have seen customers forced to
write scripts that build 5 volumes, sleep, then build more until they reach 100+
volumes, simply because otherwise the cinder-volume service will clobber itself.
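The workaround scripts mentioned above typically look something like the sketch below. This is illustrative only: create_volume here is a hypothetical stand-in for a real API call, and the batch size and pause values are assumptions, not recommendations:

```python
import time

BATCH_SIZE = 5        # volumes created per batch, as in the workaround scripts
TOTAL_VOLUMES = 20    # keep going until we reach the target count
PAUSE_SECONDS = 0.01  # real scripts sleep long enough for the builds to settle

def create_volume(index):
    """Hypothetical stand-in for an actual volume-create API call."""
    return "volume-%d" % index

created = []
while len(created) < TOTAL_VOLUMES:
    # Fire off one batch, then pause so cinder-volume can catch up.
    batch = range(len(created), min(len(created) + BATCH_SIZE, TOTAL_VOLUMES))
    for i in batch:
        created.append(create_volume(i))
    time.sleep(PAUSE_SECONDS)

print(len(created))
```

The point is that throttling ends up in every operator's script instead of in the service, which is exactly what a server-side limit would avoid.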


On Mon, May 23, 2016 at 10:32 AM, Ivan Kolodyazhny <e0ne at e0ne.info> wrote:

> Hi developers and operators,
> I would like to get your feedback on this idea before I start work on a spec.
> In Nova, we've got the max_concurrent_builds option [1] to set the 'Maximum
> number of instance builds to run concurrently' per compute host. There is no
> equivalent in Cinder.
> Why do we need it for Cinder? IMO, it could help us address the following
> issues:
>    - Creation of N volumes at the same time significantly increases resource
>    usage by the cinder-volume service. The image caching feature [2] could help
>    a bit when we create a volume from an image, but we still have to upload N
>    images to the volume backend at the same time.
>    - Deletion of N volumes in parallel. Usually it's not a very hard task for
>    Cinder, but if you have to delete 100+ volumes at once, you can hit various
>    issues with DB connections, CPU, and memory usage. In the case of LVM, it
>    may also run the 'dd' command to clean up volumes.
>    - It would provide some load balancing in HA mode: if one cinder-volume
>    process is busy with its current operations, it will not pick up the message
>    from RabbitMQ, and another cinder-volume service will handle it.
>    - From a user's perspective, it is better to create/delete N volumes a bit
>    more slowly than to fail after X volumes have been created/deleted.
> [1]
> https://github.com/openstack/nova/blob/283da2bbb74d0a131a66703516d16698a05817c7/nova/conf/compute.py#L161-L163
> [2]
> https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
> Regards,
> Ivan Kolodyazhny,
> http://blog.e0ne.info/
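A per-service limit like the one proposed above could be approximated with a bounded semaphore around each build, similar in spirit to Nova's max_concurrent_builds. The sketch below is illustrative only, not Cinder code: build_volume is a hypothetical stand-in for the real create path, and the limit value is an assumption:

```python
import threading
import time

MAX_CONCURRENT_BUILDS = 5  # assumed limit, mirroring Nova's option

_build_slots = threading.BoundedSemaphore(MAX_CONCURRENT_BUILDS)
_lock = threading.Lock()
_active = 0
peak_active = 0  # highest number of simultaneous builds observed

def build_volume(volume_id):
    """Hypothetical stand-in for the real volume-create work."""
    global _active, peak_active
    with _build_slots:  # blocks once MAX_CONCURRENT_BUILDS builds are in flight
        with _lock:
            _active += 1
            peak_active = max(peak_active, _active)
        time.sleep(0.01)  # simulate backend I/O, e.g. an image upload
        with _lock:
            _active -= 1

# 100 concurrent requests, but never more than 5 builds run at once.
threads = [threading.Thread(target=build_volume, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(peak_active)  # at most MAX_CONCURRENT_BUILDS
```

The excess requests simply wait their turn instead of failing, which matches the "slower but successful" behaviour described in the last bullet.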
