how to remove image with still used volumes

Eugen Block eblock at nde.ag
Thu Nov 10 08:47:49 UTC 2022


What is your storage back end? With Ceph there is a way which I
wouldn't really recommend, but in our cloud it accidentally happens
from time to time. Basically, it's about flattening images.
For example, there are multiple VMs based on the same image which are
copy-on-write clones. We back up the most important VMs with 'rbd
export', so they become "flat" in the backup store. After a disaster
recovery we had to restore some of those VMs ('rbd import'), which
means they lose their "parent" (the base image in glance). Some time
later we cleaned up the glance store and deleted images without
clones, accidentally ending up with VMs that have no base image
('openstack server show'), since they were flat and carried no parent
information anymore. One disadvantage is that you have to search the
database to find out which image an instance was based on; another is
that flat images allocate the whole disk space in Ceph (but there's a
sparsify command to deal with that). So one could "flatten" all
instances that don't need their "parent clone" and then delete the
images from glance. But I doubt that it's a reasonable approach, it's
just one possible way.
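
Roughly, the Ceph-level commands involved look like this (pool name
and volume name are just placeholders):

$ rbd info volumes/volume-<uuid> | grep parent   # shows the parent, if any
$ rbd flatten volumes/volume-<uuid>              # copy all shared blocks into the clone
$ rbd sparsify volumes/volume-<uuid>             # reclaim zeroed space (newer Ceph releases)

After flattening, the RBD image no longer references its parent, so
the corresponding glance image can be deleted without affecting it.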

Quoting Christoph Anton Mitterer <calestyo at scientia.org>:

> On Tue, 2022-11-08 at 12:09 -0500, Erik McCormick wrote:
>> I suppose you could consider it inefficient over a very long term in
>> that you have a source image taking up storage that has very little
>> resemblance to the instances that were spawned from it.
>
> Well, it ultimately costs me roughly a factor of 2 in storage per instance.
>
>
>> However, what you're running in to here is the "pets vs.
>> cattle" argument. Openstack is a *cloud* platform, not a
>> virtualization platform. It is built for cattle. Long-lived instances
>> are not what it's targeted to.
>
> It's clear that my use case is a bit non-standard. :-)
>
>
>>  That being said, it deals with them just fine. You simply have to
>> accept you're going to end up with these relics. If you're able to
>> nuke and recreate instances frequently and not upgrade them over
>> years, you end up using far less storage and have instances that can
>> quickly migrate around if you're using local storage. 
>
> In my case such periodic re-creation is not really feasible, as the
> VMs are rather complex in their setup.
>
> It's clear that otherwise it makes sense to have the CoW sharing of
> blocks... but still:
>
> Shouldn't it be possible to simply break up that connection? Like
> telling the backend, when someone wants to "detach" the volume from
> the image it's based on (and CoW-copied from), that it should make a
> full copy of the still-shared blocks?
>
> Also, what is the reason that one cannot remove, from an instance,
> the volume it was originally created with?
>
>
>> You can import an existing disk (after some conversion depending on
>> your source hypervisor) into a Ceph-backed Cinder volume and boot
>> from it just fine. You have to make sure to tick the box that tells
>> it it's bootable, but otherwise should be fine. 
>
> That's what I tried to describe before (rough commands sketched below):
> 1st: I imported it as an image
> 2nd: made an instance from it
> - so far things work fine -
> 3rd: attached an empty (non-image-based) volume of the same size;
>      that one also had --bootable set
> 4th: copied everything over and made it bootable
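>
> (Roughly, steps 3 and 4 were something like the following; the size
> and device paths are just placeholders:
> $ openstack volume create --size 20 --bootable mytestimage-indep-from-image
> $ openstack server add volume test mytestimage-indep-from-image
> ...then copying the disk block for block, e.g. with
> dd if=/dev/vda of=/dev/vdb bs=4M from a rescue system.)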
>
> At this point, however, removing the original volume (based on the
> image) seems to be forbidden (some error that the root volume cannot
> be removed).
>
> So I tried to trick the system and used the 2nd (non-image-based)
> volume for instance creation (i.e. server create --volume, not
> --image).
> While that did work, the resulting instance falls back to booting via
> BIOS (SeaBIOS) and not UEFI, as the previous instance (based on the
> image) still did.
>
> And I can't seem to get it to UEFI-boot.
>
>
>> Those properties you're setting on images are simply being passed to
>> nova when it boots the instance. You should be able to specify them
>> on a command-line boot from a volume.
>
> Well, I'm afraid that doesn't work. Not sure, maybe it's a bug (the
> OpenStack installation in question is probably a somewhat older version).
>
> When I upload my image and use --property hw_firmware_type=uefi, the
> image gets:
> properties       |  
> direct_url='rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', hw_firmware_type='uefi', locations='[{'url': 'rbd://fd2b36a3-4f06-5212-a74b-1f9ea2b3ee83/images/c45be8c7-8ff7-4553-a145-c83ba75fb951/snap', 'metadata': {}}]', owner_specified.openstack.md5='', owner_specified.openstack.object='images/mytestimage',  
> owner_specified.openstack.sha256=''
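>
> (For reference, the upload was essentially the following; the file
> name is just a placeholder:
> $ openstack image create --disk-format raw --container-format bare \
>     --property hw_firmware_type=uefi --file mytestimage.raw mytestimage)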
>
> The volume created from that has:
> properties                   | attached_mode='rw'
> volume_image_metadata        | {'container_format': 'bare',  
> 'min_ram': '0', 'owner_specified.openstack.sha256': '',  
> 'disk_format': 'raw', 'image_name': 'mytestimage',  
> 'hw_firmware_type': 'uefi', 'image_id':  
> 'c45be8c7-8ff7-4553-a145-c83ba75fb951',  
> 'owner_specified.openstack.object': 'images/mytestimage',  
> 'owner_specified.openstack.md5': '', 'min_disk': '0', 'checksum':  
> '9ad12344a29cbbf7dbddc1ff4c48ea69', 'size': '21474836480'}
>
> So there, the setting seems to end up not in properties but in
> volume_image_metadata.
>
>
> The instance created from that volume has:
> an empty properties field.
>
>
> My 2nd (image-independent) volume has:
> properties                   | hw_firmware_type='uefi'
> and no volume_image_metadata field.
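>
> (Presumably one could push the setting into volume_image_metadata
> with something like
> $ openstack volume set --image-property hw_firmware_type=uefi \
>     mytestimage-indep-from-image
> if the client supports --image-property, but that's untested here.)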
>
>
> When I create an instance from that (the 2nd, image-independent
> volume), via e.g.:
> $ openstack server create --flavor pn72te.small --network=internet  
> --volume mytestimage-indep-from-image --property  
> 'hw_firmware_type=uefi' test
>
> Then it gets:
> properties                  | hw_firmware_type='uefi'
>
>
> So it seems that whether it boots BIOS or UEFI is determined not from
> the instance's properties field (which IMO would be the natural
> place), but from the volume (or image)... but there it also doesn't
> seem to work.
>
>
>
> Is there any documentation on where/what exactly causes the instance to
> boot UEFI?
>
>
>> For your conversion purposes, you could check out virt-v2v. I used
>> that to convert a bunch of old vmware instances to KVM and import
>> them into Openstack. It was slow but worked pretty well.
>
> I'll have a look, but I guess it won't help me with the no-UEFI-boot
> issue.
>
>
> Thanks,
> Chris.
