[Openstack-operators] cinder-api with rbd driver ignores ceph.conf
zioproto at gmail.com
Thu Nov 12 15:41:55 UTC 2015
So here is my best guess.
Could it be that I am missing this patch?
proto at controller:~$ apt-cache policy python-cinder
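One way to check whether the installed driver is the culprit (a sketch; the dist-packages path is an assumption for a Juno install on Ubuntu): the cinder rbd driver creates images through librbd rather than the rbd CLI, so if it passes an explicit feature bitmask to the create call, the ceph.conf defaults are never consulted.

```shell
# Sketch: confirm the installed package, then look for a hardcoded feature
# bitmask in the packaged driver (path is an assumption for Ubuntu/Juno).
apt-cache policy python-cinder
grep -n "features" /usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py
```

If the grep shows the driver passing its own features value (e.g. layering only) to librbd, that would explain why editing ceph.conf has no effect on cinder-created volumes.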
2015-11-12 16:25 GMT+01:00 Saverio Proto <zioproto at gmail.com>:
> Hello there,
> I am investigating why my cinder is slow deleting volumes.
> you might remember my email from a few days ago with the subject:
> "cinder volume_clear=zero makes sense with rbd ?"
> It turns out that volume_clear has nothing to do with the rbd driver.
> cinder was not to blame; ceph rbd itself is really slow at deleting big volumes.
> I was able to reproduce the slowness just using the rbd client.
> I was also able to fix the slowness just using the rbd client :)
> This is fixed in the ceph Hammer release, which introduces a new feature.
> With the object map feature enabled, rbd is now super fast at deleting large volumes.
> However, now I am in trouble with cinder. It looks like my cinder-api
> (running Juno here) ignores the changes in my ceph.conf file.
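The rbd-client reproduction described above can be sketched as follows; the pool name "volumes" and image name are assumptions, and this needs a Ceph Hammer (0.94+) cluster, so it is not runnable standalone:

```shell
# Create a format-2 image with layering + exclusive-lock + object-map
# (feature bitmask 13); pool "volumes" is an assumed name.
rbd create volumes/bigvol --size 102400 --image-format 2 --image-features 13

# With an object map, deletion does not have to scan every backing object,
# so this returns quickly even for a 100 GB image.
time rbd rm volumes/bigvol
```

Note that object-map requires exclusive-lock, which is why both bits appear in the bitmask.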
> cat cinder.conf | grep rbd
> But when I create a volume with cinder, the options in ceph.conf are ignored:
> cat /etc/ceph/ceph.conf | grep rbd
> rbd default format = 2
> rbd default features = 13
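A side note on the quoted config: "rbd default features = 13" is a bitmask, and 13 decodes to layering (1) + exclusive-lock (4) + object-map (8), matching the feature set the author wants. A quick sanity check of that arithmetic (feature bit values as defined by librbd):

```shell
# librbd feature bits: layering=1, striping=2, exclusive-lock=4, object-map=8
LAYERING=1; EXCLUSIVE_LOCK=4; OBJECT_MAP=8
FEATURES=$(( LAYERING + EXCLUSIVE_LOCK + OBJECT_MAP ))
echo "$FEATURES"   # prints 13, matching "rbd default features = 13"
```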
> But the volume:
> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
> size 102400 MB in 25600 objects
> order 22 (4096 kB objects)
> block_name_prefix: rbd_data.533f4356fe034
> format: 2
> features: layering
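To script-check whether a cinder-created volume actually got the object map, the features list from `rbd info --format json` can be grepped. The JSON below is an illustrative hand-written sample matching the output above, not real cluster output; on a real cluster, substitute the actual `rbd info` call. (As far as I know, in Hammer the feature bits are fixed at image-create time, so the driver has to pass them when creating the volume.)

```shell
# Illustrative sample of `rbd info --format json` output for the volume
# above (real output has more fields); features lists only "layering".
info='{"name": "volume-78ca9968-77e8-4b68-9744-03b25b8068b1", "format": 2, "features": ["layering"]}'

# On a real cluster:  info=$(rbd info --format json volumes/<volume-name>)
if echo "$info" | grep -q '"object-map"'; then
    result="enabled"
else
    result="missing"
fi
echo "object-map: $result"   # prints "object-map: missing" for this sample
```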
> So my first question is:
> does anyone use cinder with the rbd driver and the object map feature
> enabled? Does it work for anyone?
> thank you