[Openstack] Ceph and ephemeral disks

Sergey Motovilovets motovilovets.sergey at gmail.com
Fri Jul 11 13:53:47 UTC 2014


Hi

I've run into a problem with Ceph as a backend for ephemeral disks.
Creation of an instance completes as expected and the instance is functional,
but when I try to delete it, the instance gets stuck in the "Deleting" state forever.

Relevant part of nova.conf on the compute node:
...
[libvirt]
inject_password=false
inject_key=false
inject_partition=-2
images_type=rbd
images_rbd_pool=volumes
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=a652fa87-554c-45cf-a28a-8da845ea620f
...
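
(As a sanity check -- just a minimal sketch, and it assumes the keyring entry
is named client.cinder -- the caps of the user nova-compute connects as, and
its access to the pool, can be verified with:

# ceph auth get client.cinder
# rbd -p volumes ls --id cinder --conf /etc/ceph/ceph.conf

The second command is the same one nova-compute runs in the log below, and it
returns 0 there, so the credentials themselves look fine.)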

nova-compute.log shows:
...
2014-07-11 09:25:44.902 21473 DEBUG nova.openstack.common.processutils
[req-509e4cd5-e27a-487e-a6bf-0a4a70c9541c f5d20c800acb46b2a35dd1dc1030bedb
a5286f8e7e2440ab9e8fcc120d59b872] Running cmd (subprocess): rbd -p volumes
ls --id cinder --conf /etc/ceph/ceph.conf execute
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-07-11 09:25:44.967 21473 DEBUG nova.openstack.common.processutils
[req-509e4cd5-e27a-487e-a6bf-0a4a70c9541c f5d20c800acb46b2a35dd1dc1030bedb
a5286f8e7e2440ab9e8fcc120d59b872] Result was 0 execute
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:187
2014-07-11 09:25:44.969 21473 DEBUG nova.openstack.common.processutils
[req-509e4cd5-e27a-487e-a6bf-0a4a70c9541c f5d20c800acb46b2a35dd1dc1030bedb
a5286f8e7e2440ab9e8fcc120d59b872] Running cmd (subprocess): sudo
nova-rootwrap /etc/nova/rootwrap.conf rbd -p volumes rm
a7669b1e-682d-4b62-a41e-ef5245999403_disk --id cinder --conf
/etc/ceph/ceph.conf execute
/usr/lib/python2.7/dist-packages/nova/openstack/common/processutils.py:154
2014-07-11 09:25:48.614 21473 DEBUG nova.openstack.common.lockutils
[req-c2bacd90-e43f-4b36-9e94-b2dc0e573bf5 cba50a22b1074eafb440d5e39f2ed3a9
014373179bd4492e8f4fa4e55d6993a5] Got semaphore "<function _lock_name at
0x7f4ee5357848>" lock
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:168
2014-07-11 09:25:48.615 21473 DEBUG nova.openstack.common.lockutils
[req-c2bacd90-e43f-4b36-9e94-b2dc0e573bf5 cba50a22b1074eafb440d5e39f2ed3a9
014373179bd4492e8f4fa4e55d6993a5] Got semaphore / lock "_pop_event" inner
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:248
2014-07-11 09:25:48.616 21473 DEBUG nova.openstack.common.lockutils
[req-c2bacd90-e43f-4b36-9e94-b2dc0e573bf5 cba50a22b1074eafb440d5e39f2ed3a9
014373179bd4492e8f4fa4e55d6993a5] Semaphore / lock released "_pop_event"
inner
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
...

# rbd -p volumes ls | grep a76
a7669b1e-682d-4b62-a41e-ef5245999403_disk

The disk is still there, so I tried re-running the rm command manually:
# sudo nova-rootwrap /etc/nova/rootwrap.conf rbd -p volumes rm a7669b1e-682d-4b62-a41e-ef5245999403_disk --id cinder --conf /etc/ceph/ceph.conf
and got:
rbd: error: image still has watchers

But
# rados -p volumes listwatchers rbd_id.a7669b1e-682d-4b62-a41e-ef5245999403_disk
shows nothing.
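
(One possible gap in that check -- and this part is an assumption on my side:
for format 2 RBD images the watch is registered on the header object,
rbd_header.<image id>, rather than on rbd_id.<name>. The id can be read off
the block_name_prefix line of rbd info, so a sketch of the check would be:

# rbd info volumes/a7669b1e-682d-4b62-a41e-ef5245999403_disk
# rados -p volumes listwatchers rbd_header.<id-from-block_name_prefix>

where <id-from-block_name_prefix> is the hex id in the rbd_data.<id> prefix.)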


Only a restart of the nova-compute service helps.

Any ideas how to fix this?
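
(If a stale watcher does show up against the header object, my assumption is
that it could be forced out by blacklisting that client address instead of
restarting nova-compute, something along the lines of:

# ceph osd blacklist add <client_addr_from_listwatchers>

but I haven't verified that; the address above is just a placeholder for
whatever listwatchers reports.)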