how to remove an image with still-used volumes
Hey.

I have instances with volumes that were once created from some image. The volumes' OS has been upgraded over time to the respective current releases, while the image is long obsolete and just uses up space.

Is there a way to remove those images? The normal commands don't seem to allow it as long as there are volumes that were created from them.

Thanks,
Chris.
Instance disks are changes applied over time on top of a baseline. What this means is, you can't delete the origin without destroying all of its descendants. What you can do is set it to "hidden" so it won't show up in the default image list. You'll still be able to look for it explicitly, and instances that depend on it can find it as well. Check the --hidden option here: https://docs.openstack.org/glance/train/admin/manage-images.html

If you have an older OpenStack, you can set "visibility" to private, which should hide it from most people. I'm not sure how long --hidden has existed.
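A minimal sketch of this, assuming a reasonably recent python-openstackclient (the --hidden flag may not exist on older clients) and a hypothetical image name "old-baseimage":

  # Hide the obsolete image from the default image list
  # (sets Glance's os_hidden, available since around Train):
  openstack image set --hidden old-baseimage

  # On older clouds, restricting visibility is the fallback:
  openstack image set --private old-baseimage

  # The image still exists and remains resolvable for the
  # volumes/instances that depend on it:
  openstack image show old-baseimage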
-Erik
Hey Erik.

On Mon, 2022-11-07 at 22:01 -0500, Erik McCormick wrote:
Instance disks are changes applied over time on top of a baseline. What this means is, you can't delete the origin without destroying all of its descendants.
But isn't that quite inefficient? If one never re-installs the images but only upgrades them over many years, any shared extents will be long gone, and one just keeps the old copy of the original image around for no good reason.

[The whole concept of images doesn't really fit my workflow, TBH. I simply have a number of existing systems I'd like to move into OpenStack... they are already installed, and I'd just like to copy their raw disk image into a storage volume for an instance - without any (OpenStack) images, especially as I'd then have one such (OpenStack) image for each server I want to move.]

I even tried to circumvent this: attach an empty volume, copy the OS from the original volume to it, and try to remove the latter. But OpenStack won't let me, for obscure reasons.

Next I tried to simply use the copied volume (which is then not based on an image) and create a new instance with it. While that works, the new instance no longer boots via UEFI.

Which is another weird thing I don't understand in OpenStack: whether a VM boots via BIOS or UEFI should be completely independent of any storage (volumes or images). However:

Only(!) when I set --property hw_firmware_type=uefi while creating an image (and a volume/instance from that) does the instance actually boot UEFI. When I set the same on either the server or the volume (when the image wasn't created that way - or, as above, when no image was used at all), it simply seems to be ignored, and SeaBIOS is always used.

I think I've experienced the same when I set hw_disk_bus to something else (like sata).

Thanks,
Chris.
hi,

Any input on this?

Regards
Adivya Singh
hi,

you can directly delete them from the database if they are obsolete.

Regards
Adivya Singh
Hi,

yes, accessing the database requires some form of admin rights. IMO it's a bad idea to delete images from the database unless you also clean up the related storage (the Glance store); just deleting the related DB records leaves the actual image file untouched.

Br,
- Eki -

On Tue, 8 Nov 2022 at 06:46, Christoph Anton Mitterer <calestyo@scientia.org> wrote:
On Tue, 2022-11-08 at 10:06 +0530, Adivya Singh wrote:
you can directly delete them from the database if they are obsolete.
Uhm... I guess that would require some form of admin rights (which I don't have on that cluster)?
Thanks, Chris.
Hey.

So is there no way to get some raw data into OpenStack as a volume (without any images) and boot from it via UEFI?

Thanks,
Chris.
On Mon, Nov 7, 2022 at 10:15 PM Christoph Anton Mitterer <calestyo@scientia.org> wrote:
But isn't that quite inefficient? If one never re-installs the images but only upgrades them over many years, any shared extents will be long gone, and one just keeps the old copy of the original image around for no good reason.
I suppose you could consider it inefficient over a very long term, in that you have a source image taking up storage that has very little resemblance to the instances that were spawned from it. However, what you're running into here is the "pets vs. cattle" argument. OpenStack is a *cloud* platform, not a virtualization platform. It is built for cattle; long-lived instances are not what it's targeted at. That being said, it deals with them just fine - you simply have to accept that you're going to end up with these relics. If you're able to nuke and recreate instances frequently and not upgrade them over years, you end up using far less storage and have instances that can quickly migrate around if you're using local storage.
[The whole concept of images doesn't really fit my workflow, TBH. I simply have a number of existing systems I'd like to move into OpenStack... they are already installed, and I'd just like to copy their raw disk image into a storage volume for an instance - without any (OpenStack) images, especially as I'd then have one such (OpenStack) image for each server I want to move.]
You can import an existing disk (after some conversion, depending on your source hypervisor) into a Ceph-backed Cinder volume and boot from it just fine. You have to make sure to tick the box that tells it it's bootable, but otherwise it should be fine.
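One possible shape of that import path, sketched with hypothetical names (server1.raw, the volume's UUID) and assuming a Ceph/RBD-backed Cinder with the usual "volumes" pool plus direct admin access to Ceph - not an official procedure:

  # Convert the source disk to raw first if needed, e.g. from VMDK:
  qemu-img convert -f vmdk -O raw server1.vmdk server1.raw

  # Create an empty Cinder volume of sufficient size, marked bootable:
  openstack volume create --size 20 --bootable server1-root
  openstack volume show -f value -c id server1-root

  # With Ceph admin access, replace the volume's backing RBD image
  # (usually named volume-<UUID> in the "volumes" pool) with the raw disk:
  rbd rm volumes/volume-<UUID>
  rbd import server1.raw volumes/volume-<UUID>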
I even tried to circumvent this: attach an empty volume, copy the OS from the original volume to it, and try to remove the latter. But OpenStack won't let me, for obscure reasons.
Next I tried to simply use the copied volume (which is then not based on an image) and create a new instance with it. While that works, the new instance no longer boots via UEFI.

Which is another weird thing I don't understand in OpenStack: whether a VM boots via BIOS or UEFI should be completely independent of any storage (volumes or images). However:

Only(!) when I set --property hw_firmware_type=uefi while creating an image (and a volume/instance from that) does the instance actually boot UEFI. When I set the same on either the server or the volume (when the image wasn't created that way - or, as above, when no image was used at all), it simply seems to be ignored, and SeaBIOS is always used.

I think I've experienced the same when I set hw_disk_bus to something else (like sata).
Those properties you're setting on images are simply being passed to nova when it boots the instance. You should be able to specify them on the command line when booting from a volume.
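Presumably something like the following, with placeholder flavor/network/volume names (note that later in the thread this turns out not to take effect; see the --image-property suggestion further down):

  # Boot from an existing volume, passing the firmware hint
  # as a server property on the command line:
  openstack server create --flavor m1.small --network internet \
      --volume server1-root --property hw_firmware_type=uefi server1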
For your conversion purposes, you could check out virt-v2v. I used that to convert a bunch of old VMware instances to KVM and import them into OpenStack. It was slow but worked pretty well.
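A sketch of such a conversion, assuming a VMware guest defined by a .vmx file and local KVM-ready output (paths hypothetical; newer virt-v2v releases also have dedicated OpenStack output modes):

  # Convert a VMware guest to a local KVM-compatible disk image:
  virt-v2v -i vmx /vmfs/volumes/datastore1/server1/server1.vmx \
      -o local -os /var/tmp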
-Erik
On Tue, 2022-11-08 at 12:09 -0500, Erik McCormick wrote:
I suppose you could consider it inefficient over a very long term, in that you have a source image taking up storage that has very little resemblance to the instances that were spawned from it.
Well, it ultimately costs me roughly a factor of 2 in storage per instance.
However, what you're running into here is the "pets vs. cattle" argument. OpenStack is a *cloud* platform, not a virtualization platform. It is built for cattle; long-lived instances are not what it's targeted at.
It's clear that my use case is a bit non-standard. :-)
That being said, it deals with them just fine - you simply have to accept that you're going to end up with these relics. If you're able to nuke and recreate instances frequently and not upgrade them over years, you end up using far less storage and have instances that can quickly migrate around if you're using local storage.
In my case such periodic re-creation is not easily possible, as the VMs are rather complex in their setup.

It's clear that otherwise it makes sense to have the CoW sharing of blocks... but still:

Shouldn't it be possible to simply break up that connection? Like telling the backend, when someone wants to "detach" the volume from the image it's based (and CoW-copied) upon, that it should make a full copy of the still-shared blocks?

Also, what is the reason that one cannot remove the volume an instance was originally created with from it?
You can import an existing disk (after some conversion, depending on your source hypervisor) into a Ceph-backed Cinder volume and boot from it just fine. You have to make sure to tick the box that tells it it's bootable, but otherwise it should be fine.
That's what I tried to describe before:

1st: I imported it as an image.
2nd: Made an instance from it.
- so far things work fine -
3rd: Attached an empty (non-image-based) volume of the same size; that one also had --bootable.
4th: Copied everything over and made it bootable.

At this point, however, removing the original volume (the one based on the image) seems to be forbidden (some error that the root volume cannot be removed).

So I tried to trick the system and used the 2nd (non-image-based) volume for instance creation (i.e. server create --volume, not --image). While that did work, it then falls back to booting via SeaBIOS and not UEFI, which the previous instance, based on the image, still did.

And I can't seem to get it to boot via UEFI.
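For reference, the steps above would look roughly like this, with hypothetical image/volume/flavor names and the in-guest copy shown as a plain dd (the last step is where it fails):

  # 1st: import the raw disk as an image (the UEFI hint is set here):
  openstack image create --disk-format raw \
      --property hw_firmware_type=uefi --file server1.raw mytestimage

  # 2nd: create a volume from the image and boot an instance from it:
  openstack volume create --image mytestimage --size 20 --bootable root-vol
  openstack server create --flavor m1.small --volume root-vol test

  # 3rd: attach an empty, image-independent volume of the same size:
  openstack volume create --size 20 --bootable copy-vol
  openstack server add volume test copy-vol

  # 4th: inside the guest, copy the OS over, e.g.:
  #   dd if=/dev/vda of=/dev/vdb bs=4M conv=fsync

  # Removing the original root volume is then refused:
  openstack server remove volume test root-vol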
Those properties you're setting on images are simply being passed to nova when it boots the instance. You should be able to specify them on the command line when booting from a volume.
Well, I'm afraid that doesn't work. Not sure, maybe it's a bug (the OpenStack instance in question is probably a somewhat older version).

When I upload my image with --property hw_firmware_type=uefi, the image gets:

  properties | direct_url='rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', hw_firmware_type='uefi', locations='[{'url': 'rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', 'metadata': {}}]', owner_specified.openstack.md5='', owner_specified.openstack.object='images/mytestimage', owner_specified.openstack.sha256=''

The volume created from that has:

  properties | attached_mode='rw'
  volume_image_metadata | {'container_format': 'bare', 'min_ram': '0', 'owner_specified.openstack.sha256': '', 'disk_format': 'raw', 'image_name': 'mytestimage', 'hw_firmware_type': 'uefi', 'image_id': 'c45be8c7-8ff7-4553-a145-c83ba75fb951', 'owner_specified.openstack.object': 'images/mytestimage', 'owner_specified.openstack.md5': '', 'min_disk': '0', 'checksum': '9ad12344a29cbbf7dbddc1ff4c48ea69', 'size': '21474836480'}

So there, the setting seems to be not in properties but in volume_image_metadata.

The instance created from that volume has an empty properties field.

My 2nd (image-independent) volume has:

  properties | hw_firmware_type='uefi'

and no volume_image_metadata field.

When I create an instance from that (2nd volume), via e.g.:

  $ openstack server create --flavor pn72te.small --network=internet --volume mytestimage-indep-from-image --property 'hw_firmware_type=uefi' test

then it gets:

  properties | hw_firmware_type='uefi'

So it seems whether it boots BIOS or UEFI is determined neither from the instance's properties field (which IMO would be the natural place)... nor from the volume (or image) - or at least setting it there doesn't seem to work either.

Is there any documentation on where/what exactly causes an instance to boot UEFI?
For your conversion purposes, you could check out virt-v2v. I used that to convert a bunch of old VMware instances to KVM and import them into OpenStack. It was slow but worked pretty well.
I'll have a look, but I guess it won't help me with the no-UEFI-boot issue. Thanks, Chris.
What is your storage back end? With Ceph there is a way which I wouldn't really recommend, but in our cloud it accidentally happens from time to time. Basically, it's about flattening images. For example, there are multiple VMs based on the same image which are copy-on-write clones. We back up the most important VMs with 'rbd export', so they become "flat" in the backup store. After a disaster recovery we had to restore some of the VMs ('rbd import'), but that means they lose their "parent" (the base image in Glance). Some time later we cleaned up the Glance store and deleted images without clones, accidentally resulting in VMs with no base image ('openstack server show'), since they were flat and had no more parent information.

One disadvantage is that you have to search the database for which image it could have been; another is that flat images allocate the whole disk space in Ceph (but there's a sparsify command to deal with that).

So one could "flatten" all instances that don't need their "parent clone" and delete the parents from Glance. But I doubt that it's a reasonable approach, just one possible way.
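A sketch of the flattening Eugen describes, with hypothetical pool/image names and assuming direct rbd admin access (flattening detaches a clone from its Glance parent, after which the parent can eventually be deleted):

  # Show the clone's parent (the Glance base image):
  rbd info volumes/volume-<UUID> | grep parent

  # Copy all still-shared blocks into the clone, detaching it
  # from its parent:
  rbd flatten volumes/volume-<UUID>

  # Flat images allocate their full size; reclaim zeroed space:
  rbd sparsify volumes/volume-<UUID>

  # List remaining clones of the base image; once there are none,
  # the Glance image can be deleted:
  rbd children images/<GLANCE-IMAGE-ID>@snap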
Hey Eugen.
Basically, it's about flattening images. For example, there are multiple VMs based on the same image which are copy-on-write clones. We back up the most important VMs with 'rbd export' so they become "flat" in the backup store.
Well, that's effectively what I did when I copied to a bare volume and tried booting from that.

But then the problem is, as I wrote in the other mail, that either I cannot remove the original volume, as it's a "root" volume, and leave just the copy behind. Or, if I create a fresh server, I cannot make it boot with UEFI, for unknown reasons.

Thanks,
Chris.
Hi,
But then the problem is, as I wrote in the other mail, that either I cannot remove the original volume, as it's a "root" volume, and leave just the copy behind.
right, I forgot about the non-removable root volume. Your workaround seems valid though: copy the original volume to a new volume, launch a new instance from the new volume, and remove the old one. But did you also try to set --image-property (not --property, as you wrote before) on the fresh volume?
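Presumably that refers to the volume commands' image-property option, roughly like this (volume name taken from the earlier mails; when booting from a volume, nova reads such hints from the volume's image metadata rather than from server properties):

  # Set the firmware hint as *image* metadata on the volume:
  openstack volume set --image-property hw_firmware_type=uefi \
      mytestimage-indep-from-image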
participants (5)
- Adivya Singh
- Christoph Anton Mitterer
- Erik McCormick
- Erkki Peura
- Eugen Block