Hi,

We are using OpenStack Rocky. One of our OpenStack VMs was running with 3 volumes: 1 bootable and 2 normal volumes. We deleted 1 of the normal volumes without properly detaching it first. The volume is deleted, but the instance still shows 3 volumes attached. Now we can't snapshot the instance and are facing some other issues.

Please advise how we can detach the (deleted) volume from the instance.

Note: We reset the volume's attach status to detached and then deleted the volume.

Regards,
Munna
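The reset mentioned in the note above was presumably done with something along these lines; the exact command isn't shown in the thread, so take this as an assumption:

$ cinder reset-state --state available --attach-status detached $volume_id   # hypothetical invocation; $volume_id is the since-deleted volume's UUID

Note that reset-state only rewrites the status columns in the Cinder database; it does not touch the hypervisor or Nova's block_device_mapping table, which is why the instance below still reports the volume as attached.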
Hi,

if the volume was in use by the VM you'll have to reboot it to properly release all open files. As far as I know, a snapshot of an instance booted from volume will only snapshot the respective volume. That should still be possible, but I guess it would be inconsistent because of the open files. Is a reboot not an option?

Regards,
Eugen
Ewww, I wish we made this harder. Please try to avoid resetting states like this unless you really have to.

The cleanest way of detaching the volume from the instance is going to be to mark the volume attachment as deleted within the Nova database and hard rebooting the instance:

$ mysql nova_cell1
MariaDB [nova_cell1]> update block_device_mapping set deleted = id where volume_id = '$volume_id' and instance_uuid = '$instance_uuid';

Confirm the volume is no longer listed as attached and then hard reboot:

$ openstack server volume list $instance
$ openstack server reboot --hard $instance

Depending on your volume backend you will likely need to manually clean up any now-stale volume connections on the host, for example deleting any mpath devices etc. You might want to consider a full compute host reboot to ensure things are clean.

Anyway, hope this helps,

Lee
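Before running the update above, it is worth confirming that the stale row actually exists; a minimal check, using the same $volume_id and $instance_uuid placeholders:

MariaDB [nova_cell1]> select id, device_name, deleted from block_device_mapping where volume_id = '$volume_id' and instance_uuid = '$instance_uuid';

Setting deleted = id follows Nova's soft-delete convention: rows are never removed outright; instead the deleted column is set to the row's own id, so unique constraints that only consider live rows (deleted = 0) still hold.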
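As for the stale-connection cleanup Lee mentions, on an iSCSI/multipath backend the usual checks look something like the following; which of these apply depends entirely on your volume backend, so treat this as a sketch:

# list multipath maps and look for any with failed paths left behind
$ multipath -ll
# flush a specific stale multipath map
$ multipath -f <mpath_device>
# review iSCSI sessions and log out of a target that no longer backs a volume
$ iscsiadm -m session
$ iscsiadm -m node -T <target_iqn> -p <portal> --logout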
Dear Lee,

Thank you for your reply. We will try this solution by changing the DB.

Regards,
Munna
participants (3):
- Eugen Block
- Lee Yarwood
- Md. Hejbul Tawhid MUNNA