[Openstack] [nova] Cleaning up unused images in the cache

Gary Kotton gkotton at vmware.com
Wed Apr 29 13:23:23 UTC 2015


Hi,
In the case that libvirt is using a shared file system, there is a chance of a race condition: one compute node may blow away an image that it considers aged while another node is spinning up an instance from that same image. There is an outside chance that this is protected by locks on the shared file system, but my guess is that it is not.
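One way to close that window would be an advisory lock taken by both the cleanup path and the spawn path. A minimal sketch of the idea, assuming flock-style locks (the function name remove_cached_image is hypothetical, and whether flock is actually honoured across nodes depends on how locking is configured on the shared mount, e.g. the NFS lock daemon):

```python
import fcntl
import os

def remove_cached_image(path):
    """Delete an aged cache entry only while holding an exclusive
    advisory lock on a sibling lock file.  A node spawning from the
    same image would have to take the same lock first.  Hypothetical
    scheme: flock semantics over a shared FS depend on the mount and
    lock-daemon configuration, so this is not guaranteed everywhere."""
    lock_path = path + '.lock'
    with open(lock_path, 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            if os.path.exists(path):
                os.unlink(path)
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```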
Thanks
Gary

From: Leslie-Alexandre DENIS <contact at ladenis.fr>
Date: Wednesday, April 29, 2015 at 4:11 PM
To: Joe Topjian <joe at topjian.net>
Cc: "openstack-operators at lists.openstack.org" <openstack-operators at lists.openstack.org>, "openstack at lists.openstack.org" <openstack at lists.openstack.org>
Subject: Re: [Openstack] [nova] Cleaning up unused images in the cache

Dear Joe,

Thanks for your kind reply, the information is helpful. I'm reading the imagecache.py[1] source code to understand exactly what happens in the case of a shared filesystem.

I understand the SHA1 hash mechanism and the backing-file check, but I'm not sure how it handles the shared-FS case.

The main function seems to be:
- backing_file = libvirt_utils.get_disk_backing_file(disk_path)

But does libvirt_utils.get_disk_backing_file federate information from all the compute nodes? If not, could it delete images that other nodes still use?
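For what it's worth, that helper only inspects a disk on the local node, so by itself it cannot know which base images other compute nodes still reference. A simplified stand-in (not Nova's actual code, which goes through qemu-img/libvirt) showing the kind of purely local check involved, assuming the standard qcow2 v2/v3 header layout where bytes 8-19 hold the backing-file offset and length:

```python
import struct

def get_backing_file(disk_path):
    """Read the backing-file name straight out of a qcow2 header.
    Illustrative sketch only: like Nova's real helper, it can only
    see the *local* disk it is handed, and returns None for flat
    (raw, or qcow2 without a backing file) images."""
    with open(disk_path, 'rb') as f:
        header = f.read(20)
        magic, version = struct.unpack('>4sI', header[:8])
        if magic != b'QFI\xfb':
            return None  # not qcow2: raw images have no backing file
        offset, size = struct.unpack('>QI', header[8:20])
        if offset == 0 or size == 0:
            return None  # qcow2 with no backing file
        f.seek(offset)
        return f.read(size).decode()
```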

Hope it's not too redundant,
Kind regards

[1] https://github.com/openstack/nova/blob/master/nova/virt/libvirt/imagecache.py

Le 28/04/2015 16:18, Joe Topjian a écrit :
Hello,

I've got a similar question about the cache manager and the presence of a shared filesystem for instance images.
I'm currently reading the source code to find out how this is managed, but first I'd be curious how you handle this on production servers.

For example, an image not used by compute node A will probably be cleaned from the shared FS even though compute node B still uses it; that's the main problem.

This used to be a problem, but AFAIK it should not happen any more. If you're noticing it happening, please raise a flag.

How do you guys handle _base?

We configure Nova so that instances do not rely on _base files. We found _base to be too dangerous a single point of failure. For example, we ran into the scenario you described a few years ago, before it was fixed. Bugs are one thing, but there are many other ways a _base file can become corrupt or be removed. Even if those scenarios are rare, the results are damaging enough for us to forgo reliance on _base files entirely.
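For reference, the usual way to get this behaviour is to disable copy-on-write instance disks in nova.conf, so each instance gets a flat copy of the image rather than a qcow2 overlay backed by a _base file (option names as of Kilo-era Nova; the cache directory still exists, but running instances no longer reference it, so a lost cache entry can't corrupt them):

[DEFAULT]
# Create flat per-instance disks instead of qcow2 overlays on _base
use_cow_images = False

# Convert downloaded images to raw so nothing keeps a backing-file
# reference into the image cache
force_raw_images = True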

Padraig Brady has an awesome article that details the many ways you can configure _base and instance files:

http://www.pixelbeat.org/docs/openstack_libvirt_images/

I'm looping -operators into this thread for input on further ways to handle _base. You might also be able to find some other methods by searching the -operators mailing list archive.

Thanks,
Joe



--
Leslie-Alexandre DENIS
Tel +33 6 83 88 34 01
Skype ladenis-dc4
BBM PIN 7F78C3BD

SIRET 800 458 663 00013