[Openstack-operators] ceph rbd root disk unexpected deletion

Mike Lowe jomlowe at iu.edu
Mon Mar 13 14:47:43 UTC 2017


Over the weekend a user reported that his instance was in a stopped state and could not be started. On further examination it appears that the VM had crashed, and, strangely, its root disk is now gone.  Has anybody come across anything like this before?

And why on earth is nova attempting deletion of the rbd device when the instance itself was never deleted?

2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
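
One quick way to check from the Ceph side whether the image is actually gone, and whether any snapshots are left that would make an rbd remove fail, is the python-rbd bindings. This is only a minimal diagnostic sketch, not the nova code path; it assumes python-rados/python-rbd are installed, /etc/ceph/ceph.conf plus a usable keyring are readable on the node, and it takes the pool and image names from the warnings above:

import rados
import rbd

POOL = 'ephemeral-vms'
IMAGE = '4367a2e4-d704-490d-b3a6-129b9465cd0d_disk'

# connect with the defaults from ceph.conf (assumption: client has read access to the pool)
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx(POOL)
try:
    if IMAGE not in rbd.RBD().list(ioctx):
        print('image is no longer in pool %s' % POOL)
    else:
        image = rbd.Image(ioctx, IMAGE)
        try:
            # an rbd remove will keep failing while snapshots (or watchers) remain
            print('image still present, %d bytes' % image.size())
            print('snapshots: %s' % [s['name'] for s in image.list_snaps()])
        finally:
            image.close()
finally:
    ioctx.close()
    cluster.shutdown()

The equivalent checks from the CLI would be "rbd ls -p ephemeral-vms" and "rbd snap ls ephemeral-vms/4367a2e4-d704-490d-b3a6-129b9465cd0d_disk".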

