[Openstack-operators] rbd ephemeral storage, very slow deleting...

Jonathan Proulx jon at jonproulx.com
Thu Sep 25 14:18:08 UTC 2014


Ouch! That (and possibly the default of 'volume_clear = zero') is
likely my issue.
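
For reference, the knob I mean looks something like this in nova.conf
(the section it lives in varies by release, so treat this as a sketch
rather than a recipe):

        [libvirt]
        # default is 'zero', which writes zeros over the whole disk on
        # delete; 'none' skips the wipe entirely
        volume_clear = none
        # 0 means wipe everything; a nonzero value wipes only that many MB
        volume_clear_size = 0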

rbd image 'c570a2b5-e4bd-472f-898d-a49451300ecd_disk':
        size 32768 GB in 8388608 objects
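
Sanity-checking Sam's 1024x theory against that output, assuming the
default 4 MB RBD object size (and guessing the flavor disk should
really be 32 GB):

        $ echo $(( 8388608 * 4 / 1024 ))   # objects x 4 MB, in GB
        32768
        $ echo $(( 32768 / 1024 ))         # divide out the bug factor
        32

so the numbers line up exactly with the bug described below.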

Good to see Ceph is (apparently) using sparse allocation, since I have
27 test instances running now and at 32T apiece I sure don't have
864TB to fit all that. Oh wait, replication makes that 1.75PB... Well,
I'm running Icehouse, so lemme go get that patch :)
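
Once the patch is in I can sanity-check new instances with something
like this on the Ceph side (the pool name is whatever images_rbd_pool /
libvirt_images_rbd_pool points at on the compute nodes; 'vms' below is
just a guess at a typical value):

        $ rbd ls -l vms | grep _disk

and make sure the SIZE column shows the flavor's disk size rather than
1024x it.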

-Jon

On Wed, Sep 24, 2014 at 10:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
> There was a bug in Havana where it would create the underlying RBD volume at 1024 times the actual size. We didn’t notice this until we started deleting instances and they took forever.
> Could be the case with you too?
>
> See https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1219658
> https://review.openstack.org/#/q/I3ec53b3617d52f75784ebb3b0dad92ca815f8876,n,z
>
> Sadly, I don’t think this made it into Havana.
>
> Sam
>
>
> On 25 Sep 2014, at 5:45 am, Jonathan Proulx <jon at jonproulx.com> wrote:
>
>> Hi All,
>>
>> Just started experimenting with RBD (ceph) back end for ephemeral
>> storage on some of my compute nodes.
>>
>> I have it launching instances just fine, but when I try to delete
>> them, libvirt shows the instances are gone while OpenStack leaves
>> them in the 'deleting' state and the rbd process on the hypervisor
>> spins madly at about 300% CPU...
>>
>> ...and now, approx. 18 minutes later, they have finally terminated. Why so long?
>>
>> -Jon
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


