[Openstack-operators] cinder-api with rbd driver ignores ceph.conf

Saverio Proto zioproto at gmail.com
Tue Nov 24 12:12:58 UTC 2015


Hello there,

we finally managed to backport the patch to Juno:
https://github.com/zioproto/cinder/tree/backport-ceph-object-map

We are testing this version; everything is good so far.

This requires the following settings in your ceph.conf:
rbd default format = 2
rbd default features = 13
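
(For reference, "rbd default features" is a bitmask. If I read the rbd
feature bits correctly, 13 is the sum of layering (1), exclusive-lock (4)
and object-map (8); object-map needs exclusive-lock enabled as well.)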

If anyone is willing to test this on their Juno setup, I can also share
.deb packages for Ubuntu.

Saverio



2015-11-16 16:21 GMT+01:00 Saverio Proto <zioproto at gmail.com>:
> Thanks,
>
> I tried to backport this patch to Juno, but it is not that trivial for
> me. I have two tests failing, concerning volume cloning and creating a
> volume without layering.
>
> https://github.com/zioproto/cinder/commit/0d26cae585f54c7bda5ba5b423d8d9ddc87e0b34
> https://github.com/zioproto/cinder/commits/backport-ceph-object-map
>
> I guess I will stop trying to backport this patch and wait for the
> upgrade of our OpenStack installation to Kilo to get the feature.
>
> If anyone has ever backported this feature to Juno, it would be nice to
> know, so I can use the patch to generate .deb packages.
>
> thanks
>
> Saverio
>
> 2015-11-12 17:55 GMT+01:00 Josh Durgin <jdurgin at redhat.com>:
>> On 11/12/2015 07:41 AM, Saverio Proto wrote:
>>>
>>> So here is my best guess.
>>> Could be that I am missing this patch ?
>>>
>>>
>>> https://github.com/openstack/cinder/commit/6211d8fa2033c2a607c20667110c5913cf60dd53
>>
>>
>> Exactly, you need that patch for cinder to use rbd_default_features
>> from ceph.conf instead of its own default of only layering.
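>>
>> To illustrate the difference (a minimal sketch using the python-rbd
>> bindings, not the actual cinder code; pool and image names are made up):
>>
>>   import rados
>>   import rbd
>>
>>   cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
>>   cluster.connect()
>>   ioctx = cluster.open_ioctx('volumes')
>>
>>   # Pre-patch behaviour: cinder passed only layering explicitly,
>>   # so the ceph.conf default was never consulted.
>>   rbd.RBD().create(ioctx, 'vol-before', 10 * 1024 ** 3,
>>                    old_format=False, features=rbd.RBD_FEATURE_LAYERING)
>>
>>   # Post-patch behaviour: read "rbd default features" from ceph.conf
>>   # and pass that instead.
>>   features = int(cluster.conf_get('rbd_default_features'))
>>   rbd.RBD().create(ioctx, 'vol-after', 10 * 1024 ** 3,
>>                    old_format=False, features=features)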
>>
>> In Infernalis and later versions of Ceph, you can also add the object
>> map to existing rbd images via the 'rbd feature enable' and 'rbd
>> object-map rebuild' commands.
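>>
>> For example, something like this (the volume name is a placeholder):
>>
>>   rbd feature enable volumes/volume-XXXX exclusive-lock
>>   rbd feature enable volumes/volume-XXXX object-map
>>   rbd object-map rebuild volumes/volume-XXXX
>>
>> (object-map requires exclusive-lock, so that has to be enabled first.)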
>>
>> Josh
>>
>>> proto at controller:~$ apt-cache policy python-cinder
>>> python-cinder:
>>>    Installed: 1:2014.2.3-0ubuntu1.1~cloud0
>>>    Candidate: 1:2014.2.3-0ubuntu1.1~cloud0
>>>
>>>
>>> Thanks
>>>
>>> Saverio
>>>
>>>
>>>
>>> 2015-11-12 16:25 GMT+01:00 Saverio Proto <zioproto at gmail.com>:
>>>>
>>>> Hello there,
>>>>
>>>> I am investigating why my cinder is slow deleting volumes.
>>>>
>>>> You might remember my email from a few days ago with the subject:
>>>> "cinder volume_clear=zero makes sense with rbd ?"
>>>>
>>>> It turns out that volume_clear has nothing to do with the rbd driver.
>>>>
>>>> Cinder was not guilty; it was really Ceph rbd itself that was slow to
>>>> delete big volumes.
>>>>
>>>> I was able to reproduce the slowness just using the rbd client.
>>>>
>>>> I was also able to fix the slowness just using the rbd client :)
>>>>
>>>> This is fixed in the Ceph Hammer release, which introduces a new feature:
>>>>
>>>>
>>>> http://www.sebastien-han.fr/blog/2015/07/06/ceph-enable-the-object-map-feature/
>>>>
>>>> With the object map feature enabled, rbd is now super fast at deleting
>>>> large volumes.
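>>>>
>>>> A quick way to see it (the test image name is made up), with the
>>>> ceph.conf settings below already in place:
>>>>
>>>>   rbd create volumes/rmtest --size 102400 --image-format 2
>>>>   rbd info volumes/rmtest    # features should include object-map
>>>>   time rbd rm volumes/rmtest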
>>>>
>>>> However, now I am in trouble with cinder. It looks like my cinder-api
>>>> (running Juno here) ignores the changes in my ceph.conf file.
>>>>
>>>> cat cinder.conf | grep rbd
>>>>
>>>> volume_driver=cinder.volume.drivers.rbd.RBDDriver
>>>> rbd_user=cinder
>>>> rbd_max_clone_depth=5
>>>> rbd_ceph_conf=/etc/ceph/ceph.conf
>>>> rbd_flatten_volume_from_snapshot=False
>>>> rbd_pool=volumes
>>>> rbd_secret_uuid=secret
>>>>
>>>> But when I create a volume with cinder, the options in ceph.conf are
>>>> ignored:
>>>>
>>>> cat /etc/ceph/ceph.conf | grep rbd
>>>> rbd default format = 2
>>>> rbd default features = 13
>>>>
>>>> But the volume:
>>>>
>>>> rbd image 'volume-78ca9968-77e8-4b68-9744-03b25b8068b1':
>>>>      size 102400 MB in 25600 objects
>>>>      order 22 (4096 kB objects)
>>>>      block_name_prefix: rbd_data.533f4356fe034
>>>>      format: 2
>>>>      features: layering
>>>>      flags:
>>>>
>>>>
>>>> so my first question is:
>>>>
>>>> Does anyone use cinder with the rbd driver and the object map feature
>>>> enabled? Does it work for anyone?
>>>>
>>>> thank you
>>>>
>>>> Saverio
>>>
>>>
>>


