[Openstack-operators] ceph rbd root disk unexpected deletion

Saverio Proto zioproto at gmail.com
Fri Mar 17 12:28:23 UTC 2017


Hello Mike,

What version of OpenStack are you running?
Is the instance booting from an ephemeral disk or from a Cinder volume?

When you boot from volume, that volume becomes the root disk of your
instance. The user may have selected "Delete Volume on Instance
Delete", which can be chosen when creating a new instance.

Saverio

2017-03-13 15:47 GMT+01:00 Mike Lowe <jomlowe at iu.edu>:
> Over the weekend a user reported that his instance was in a stopped state and could not be started. On further examination it appears that the VM had crashed, and the strange thing is that the root disk is now gone.  Has anybody come across anything like this before?
>
> And why on earth is it attempting deletion of the rbd device without deletion of the instance?
>
> 2017-03-12 10:59:07.591 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
> 2017-03-12 10:59:17.613 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
> 2017-03-12 10:59:26.143 3010 WARNING nova.virt.libvirt.storage.rbd_utils [-] rbd remove 4367a2e4-d704-490d-b3a6-129b9465cd0d_disk in pool ephemeral-vms failed
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>


