[openstack-dev] [nova] BUG? nova-compute should delete unused instance files on boot

Pavel Kravchenco KPAVEL at il.ibm.com
Tue Oct 8 08:28:47 UTC 2013


In the case of a VM evacuation with the Grizzly code you will have those
disk leftovers due to this bug: https://bugs.launchpad.net/nova/+bug/1156269
It is fixed in Havana.

Pavel

Vishvananda Ishaya <vishvananda at gmail.com> wrote on 08/10/2013 01:34:26 AM:

> From: Vishvananda Ishaya <vishvananda at gmail.com>
> To: OpenStack Development Mailing List 
<openstack-dev at lists.openstack.org>, 
> Date: 08/10/2013 01:36 AM
> Subject: Re: [openstack-dev] [nova] BUG? nova-compute should delete 
> unused instance files on boot
> 
> There is a configuration option stating what to do with instances 
> that are still in the hypervisor but have been deleted from the 
> database. I think you want:
> 
> running_deleted_instance_action=reap
> 
> You probably also want
> 
> resume_guests_state_on_host_boot=true
> 
> to bring back the instances that were running before the node was 
> powered off. We should definitely consider changing the default of 
> these two values since I think the default values are probably not 
> what most people would want.
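Taken together, the two options Vish mentions would go into nova.conf
roughly as below. This is a sketch, not a tested configuration: the option
names are as given above, but the [DEFAULT] section placement is an
assumption.

```ini
[DEFAULT]
# Reap (delete) instances still present on the hypervisor that have
# already been deleted from the database.
running_deleted_instance_action = reap

# Restart guests that were running before the host was powered off.
resume_guests_state_on_host_boot = true
```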
> 
> Vish
> On Oct 7, 2013, at 1:24 PM, Chris Friesen <chris.friesen at windriver.com> 
wrote:
> 
> > On 10/07/2013 12:44 PM, Russell Bryant wrote:
> >> On 10/07/2013 02:28 PM, Chris Friesen wrote:
> >>> 
> >>> I've been doing a lot of instance creation/deletion/evacuation, and
> >>> I've noticed that if I:
> >>> 
> >>> 1) create an instance
> >>> 2) power off the compute node it was running on
> >>> 3) delete the instance
> >>> 4) boot up the compute node
> >>> 
> >>> then the instance rootfs stays around in /var/lib/nova/instances/.
> >>> Eventually this could add up to significant amounts of space.
> >>> 
> >>> 
> >>> Is this expected behaviour?  (This is on Grizzly, so maybe Havana
> >>> is different.)  If not, should I file a bug for it?
> >>> 
> >>> I think it would make sense for the compute node to come up, query
> >>> all the instances in /var/lib/nova/instances/, and delete the ones
> >>> for instances that aren't in the database.
> >> 
> >> How long are you waiting after starting up the compute node?  I would
> >> expect it to get cleaned up by a periodic task, so you might have to
> >> wait roughly 10 minutes (by default).
> > 
> > This is nearly 50 minutes after booting up the compute node:
> > 
> > cfriesen at compute2:/var/lib/nova/instances$ ls -1
> > 39e459b1-3878-41db-aaaf-7c7d0dfa2b19
> > 41a60975-d6b8-468e-90bc-d7de58c2124d
> > 46aec2ae-b6de-4503-a238-af736f81f1a4
> > 50ec3d89-1c9d-4c28-adaf-26c924dfa3ed
> > _base
> > c6ec71a3-658c-4c7c-aa42-cc26296ce7fb
> > c72845e9-0d34-459f-b602-bb2ee409728b
> > compute_nodes
> > locks
> > 
> > Of these, only two show up in "nova list".
> > 
> > Chris
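The cleanup Chris proposes boils down to a set difference: compare the
directory names under /var/lib/nova/instances against the instance UUIDs
the database knows about, skipping the non-instance directories visible
in the listing above (_base, locks, compute_nodes). The sketch below is
illustrative logic only, not Nova's actual implementation; the function
and constant names are made up.

```python
# Non-instance directories that live alongside instance UUID directories,
# as seen in the listing above.
NON_INSTANCE_DIRS = {"_base", "locks", "compute_nodes"}

def find_orphaned_dirs(local_dirs, db_uuids):
    """Return instance directories with no matching database record."""
    return sorted(
        d for d in local_dirs
        if d not in NON_INSTANCE_DIRS and d not in db_uuids
    )

# Example: six directories on disk, but only two instances in the DB
# leaves four orphans to reap.
```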
> > 
> > 
> > _______________________________________________
> > OpenStack-dev mailing list
> > OpenStack-dev at lists.openstack.org
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> 

