Hi George,

Thanks for the reply. Actually, I'm not sure what type of image I'm using. I'm trying to create an LXC instance here, so I have this entry in /etc/nova/nova.conf: --libvirt_type=lxc.

I also noted that once instances are spawned, some of the files get stored in "/var/lib/nova/instances/_base". After your reply I realized that qcow images were being used, so to avoid that I added the entry "--use_cow_images=false" to nova.conf. But files still go into "/var/lib/nova/instances/_base" when instances are spawned, and I still have the issue terminating instances.
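By the way, to double-check whether the flag actually took effect, I believe something like this should show it (the instance directory name below is just an example from my host):

    # restart nova-compute so the changed flag is picked up (Ubuntu packaging)
    sudo service nova-compute restart
    # spawn a fresh instance, then inspect its disk image; with --use_cow_images=false
    # it should report "file format: raw" instead of qcow2 with a backing file in _base
    sudo qemu-img info /var/lib/nova/instances/instance-XXXXXXXX/disk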

What configuration should I have in order to avoid this issue?

For your convenience, I have posted my nova.conf file below:

--daemonize=1
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--force_dhcp_release
--logdir=/var/log/nova
--state_path=/var/lib/nova
--verbose
--connection_type=libvirt
--libvirt_type=lxc
--libvirt_use_virtio_for_bridges
--sql_connection=mysql://nova:openstack@172.16.0.1/nova
--s3_host=172.16.0.1
--s3_dmz=172.16.0.1
--rabbit_host=172.16.0.1
--ec2_host=172.16.0.1
--ec2_dmz_host=172.16.0.1
--ec2_url=http://172.16.0.1:8773/services/Cloud
--fixed_range=10.1.0.0/16
--network_size=512
--num_networks=1
--FAKE_subdomain=ec2
--public_interface=eth1
--auto_assign_floating_ip
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.16.0.1:9292
--vlan_start=100
--vlan_interface=eth2
--root_helper=sudo nova-rootwrap
--zone_name=nova
--node_availability_zone=nova
--storage_availability_zone=nova
--allow_admin_api
--enable_zone_routing
--api_paste_config=/etc/nova/api-paste.ini
--vncserver_host=0.0.0.0
--vncproxy_url=http://172.16.0.1:6080
--ajax_console_proxy_url=http://172.16.0.1:8000
--osapi_host=172.16.0.1
--rabbit_host=172.16.0.1
--auth_strategy=keystone
--keystone_ec2_url=http://172.16.0.1:5000/v2.0/ec2tokens
--multi_host
--send_arp_for_ha
--novnc_enabled=true
--novncproxy_base_url=http://172.16.0.1:6080/vnc_auto.html
--vncserver_proxyclient_address=172.16.0.1
--vncserver_listen=172.16.0.1
--use_cow_images=false

Thanks,
Sajith

On Wed, Jun 20, 2012 at 6:48 PM, George Mihaiescu <George.Mihaiescu@q9.com> wrote:
Hi Sajith,

I noticed this error in the logs you sent:

2012-06-18 18:44:21 TRACE nova.rpc.amqp     os.remove(fullname)
2012-06-18 18:44:21 TRACE nova.rpc.amqp OSError: [Errno 13] Permission denied: '/var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin'

Check who owns that file, and whether it is somehow shared between instances, because there might be an issue deleting it if it's used by another instance.
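For example, something along these lines should tell you (paths taken from your log; on Ubuntu the compute service runs as the "nova" user, so the last command reproduces by hand what rmtree is attempting):

    # who owns the file, and is its directory writable?
    ls -l /var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin
    ls -ld /var/lib/nova/instances/instance-00000037/rootfs/boot
    # try the same unlink the nova user would perform
    sudo -u nova rm /var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin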

I'm not sure what type of image you use, but qcow2 base images stay in "/var/lib/nova/instances/_base" and they are shared among similar instances.
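You can see what is cached and shared there with something like:

    # base images are stored once (named by hash) and reused by similar instances
    ls -lh /var/lib/nova/instances/_base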

This is just a starting point, so you might have to dig some more through the logs.
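On a busy compute node it is usually quicker to filter for the failures first, for example:

    # show only errors and tracebacks from the compute log
    grep -E 'ERROR|TRACE' /var/log/nova/nova-compute.log | less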

George

________________________________

From: openstack-bounces+george.mihaiescu=q9.com@lists.launchpad.net [mailto:openstack-bounces+george.mihaiescu=q9.com@lists.launchpad.net] On Behalf Of Sajith Kariyawasam
Sent: Tuesday, June 19, 2012 1:22 PM
To: openstack@lists.launchpad.net
Subject: Re: [Openstack] Instance termination is not stable

Any clue on this, guys?

On Mon, Jun 18, 2012 at 7:08 PM, Sajith Kariyawasam <sajhak@gmail.com> wrote:


Hi all,

I have OpenStack Essex installed, and I have created several instances in OpenStack based on an Ubuntu 12.04 UEC image; those are up and running.

When I try to terminate an instance I get an exception (log below); in the console its status is shown as "Shutoff" and the task as "Deleting". Even though I tried terminating the instance again and again, nothing happens. But after I restart the (nova) machine, those instances can be terminated.

This issue does not occur every time, only occasionally; as far as I have noted, it occurs when there are more than 2 instances up and running at the same time. If I create one instance, terminate it, create another one, terminate that, and so on, there won't be any issue terminating.
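For what it's worth, comparing what nova and what libvirt each believe shows the mismatch; roughly (Essex CLI, and lxc:/// is the libvirt URI this driver uses):

    # what nova believes: the stuck instance stays in SHUTOFF with task "deleting"
    nova list
    # what libvirt believes: the container is already gone at this point
    virsh -c lxc:/// list --all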

What could be the problem here? Any suggestions are highly appreciated.

Thanks

ERROR LOG (/var/log/nova/nova-compute.log)
==========
2012-06-18 18:43:55 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 17 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:43:55 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:43:55 INFO nova.compute.manager [-] Updating host status
2012-06-18 18:43:55 DEBUG nova.virt.libvirt.connection [-] Updating host stats from (pid=24151) update_status /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
2012-06-18 18:43:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [-] received {u'_context_roles': [u'swiftoperator', u'Member', u'admin'], u'_context_request_id': u'req-01ca70c8-2240-407b-92d1-5a59ee497291', u'_context_read_deleted': u'no', u'args': {u'instance_uuid': u'9999d250-1d8b-4973-8320-e6058a2058b9'}, u'_context_auth_token': '<SANITIZED>', u'_context_is_admin': True, u'_context_project_id': u'194d6e24ec1843fb8fbd94c3fb519deb', u'_context_timestamp': u'2012-06-18T13:14:17.013212', u'_context_user_id': u'f8a75778c36241479693ff61a754f67b', u'method': u'terminate_instance', u'_context_remote_address': u'172.16.0.254'} from (pid=24151) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] unpacked context: {'user_id': u'f8a75778c36241479693ff61a754f67b', 'roles': [u'swiftoperator', u'Member', u'admin'], 'timestamp': '2012-06-18T13:14:17.013212', 'auth_token': '<SANITIZED>', 'remote_address': u'172.16.0.254', 'is_admin': True, 'request_id': u'req-01ca70c8-2240-407b-92d1-5a59ee497291', 'project_id': u'194d6e24ec1843fb8fbd94c3fb519deb', 'read_deleted': u'no'} from (pid=24151) _safe_log /usr/lib/python2.7/dist-packages/nova/rpc/common.py:160
2012-06-18 18:44:17 INFO nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: decorating: |<function terminate_instance at 0x2bd3050>|
2012-06-18 18:44:17 INFO nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: arguments: |<nova.compute.manager.ComputeManager object at 0x20ffb90>| |<nova.rpc.amqp.RpcContext object at 0x4d2a450>| |9999d250-1d8b-4973-8320-e6058a2058b9|
2012-06-18 18:44:17 DEBUG nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] instance 9999d250-1d8b-4973-8320-e6058a2058b9: getting locked state from (pid=24151) get_lock /usr/lib/python2.7/dist-packages/nova/compute/manager.py:1597
2012-06-18 18:44:17 INFO nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: locked: |False|
2012-06-18 18:44:17 INFO nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: admin: |True|
2012-06-18 18:44:17 INFO nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] check_instance_lock: executing: |<function terminate_instance at 0x2bd3050>|
2012-06-18 18:44:17 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab semaphore "9999d250-1d8b-4973-8320-e6058a2058b9" for method "do_terminate_instance"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:927
2012-06-18 18:44:17 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Got semaphore "9999d250-1d8b-4973-8320-e6058a2058b9" for method "do_terminate_instance"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:931
2012-06-18 18:44:17 AUDIT nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Terminating instance
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Making asynchronous call on network ... from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] MSG_ID is 2fba7314616d480fa39d5d4d1d942c46 from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-06-18 18:44:17 DEBUG nova.compute.manager [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Deallocating network for instance from (pid=24151) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
2012-06-18 18:44:17 DEBUG nova.rpc.amqp [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Making asynchronous cast on network... from (pid=24151) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
2012-06-18 18:44:20 WARNING nova.virt.libvirt.connection [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Error from libvirt during saved instance removal. Code=3 Error=this function is not supported by the connection driver: virDomainHasManagedSaveImage
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab semaphore "iptables" for method "apply"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:927
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Got semaphore "iptables" for method "apply"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:931
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Attempting to grab file lock "iptables" for method "apply"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:935
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Got file lock "iptables" for method "apply"... from (pid=24151) inner /usr/lib/python2.7/dist-packages/nova/utils.py:942
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap iptables-save -t filter from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:20 INFO nova.virt.libvirt.connection [-] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Instance destroyed successfully.
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap iptables-restore from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:20 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap iptables-save -t nat from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:21 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap iptables-restore from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:21 DEBUG nova.network.linux_net [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] IPTablesManager.apply completed with success from (pid=24151) apply /usr/lib/python2.7/dist-packages/nova/network/linux_net.py:335
2012-06-18 18:44:21 INFO nova.virt.libvirt.connection [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Deleting instance files /var/lib/nova/instances/instance-00000037
2012-06-18 18:44:21 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap umount /dev/nbd11 from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:21 DEBUG nova.utils [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Running cmd (subprocess): sudo nova-rootwrap qemu-nbd -d /dev/nbd11 from (pid=24151) execute /usr/lib/python2.7/dist-packages/nova/utils.py:219
2012-06-18 18:44:21 ERROR nova.rpc.amqp [req-01ca70c8-2240-407b-92d1-5a59ee497291 f8a75778c36241479693ff61a754f67b 194d6e24ec1843fb8fbd94c3fb519deb] Exception during message handling
2012-06-18 18:44:21 TRACE nova.rpc.amqp Traceback (most recent call last):
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
2012-06-18 18:44:21 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
2012-06-18 18:44:21 TRACE nova.rpc.amqp     return f(*args, **kw)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 153, in decorated_function
2012-06-18 18:44:21 TRACE nova.rpc.amqp     function(self, context, instance_uuid, *args, **kwargs)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
2012-06-18 18:44:21 TRACE nova.rpc.amqp     sys.exc_info())
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-06-18 18:44:21 TRACE nova.rpc.amqp     self.gen.next()
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
2012-06-18 18:44:21 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 747, in terminate_instance
2012-06-18 18:44:21 TRACE nova.rpc.amqp     do_terminate_instance()
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
2012-06-18 18:44:21 TRACE nova.rpc.amqp     retval = f(*args, **kwargs)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 740, in do_terminate_instance
2012-06-18 18:44:21 TRACE nova.rpc.amqp     self._delete_instance(context, instance)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 718, in _delete_instance
2012-06-18 18:44:21 TRACE nova.rpc.amqp     self._shutdown_instance(context, instance, 'Terminating')
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 687, in _shutdown_instance
2012-06-18 18:44:21 TRACE nova.rpc.amqp     block_device_info)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 484, in destroy
2012-06-18 18:44:21 TRACE nova.rpc.amqp     cleanup=True)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 478, in _destroy
2012-06-18 18:44:21 TRACE nova.rpc.amqp     self._cleanup(instance)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 493, in _cleanup
2012-06-18 18:44:21 TRACE nova.rpc.amqp     shutil.rmtree(target)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/shutil.py", line 245, in rmtree
2012-06-18 18:44:21 TRACE nova.rpc.amqp     rmtree(fullname, ignore_errors, onerror)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/shutil.py", line 245, in rmtree
2012-06-18 18:44:21 TRACE nova.rpc.amqp     rmtree(fullname, ignore_errors, onerror)
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/shutil.py", line 250, in rmtree
2012-06-18 18:44:21 TRACE nova.rpc.amqp     onerror(os.remove, fullname, sys.exc_info())
2012-06-18 18:44:21 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/shutil.py", line 248, in rmtree
2012-06-18 18:44:21 TRACE nova.rpc.amqp     os.remove(fullname)
2012-06-18 18:44:21 TRACE nova.rpc.amqp OSError: [Errno 13] Permission denied: '/var/lib/nova/instances/instance-00000037/rootfs/boot/memtest86+.bin'
2012-06-18 18:44:21 TRACE nova.rpc.amqp
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=24151) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:203
2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout cast... from (pid=24151) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._sync_power_states from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 WARNING nova.compute.manager [-] Found 5 in the database and 4 on the hypervisor.
2012-06-18 18:44:55 WARNING nova.compute.manager [-] [instance: 9999d250-1d8b-4973-8320-e6058a2058b9] Instance found in database but not known by hypervisor. Setting power state to NOSTATE
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 INFO nova.virt.libvirt.connection [-] Compute_service record updated for sajithvb2
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 27 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on network ... from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-06-18 18:44:55 DEBUG nova.rpc.amqp [-] MSG_ID is e220048302744b3180c048d0e410aa29 from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-06-18 18:44:55 DEBUG nova.compute.manager [-] Updated the info_cache for instance fa181c09-f78e-441f-afda-8c8eb84f24bd from (pid=24151) _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
2012-06-18 18:44:55 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 16 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:44:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=24151) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:203
2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout cast... from (pid=24151) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping ComputeManager._sync_power_states, 10 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 INFO nova.virt.libvirt.connection [-] Compute_service record updated for sajithvb2
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 26 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on network ... from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-06-18 18:45:55 DEBUG nova.rpc.amqp [-] MSG_ID is 4dfad20f5f9f498997cf6ee7141e563d from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-06-18 18:45:55 DEBUG nova.compute.manager [-] Updated the info_cache for instance 8ff80284-ff59-4eb7-b5e7-5e3a36fc4144 from (pid=24151) _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
2012-06-18 18:45:55 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 15 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:45:55 INFO nova.compute.manager [-] Updating host status
2012-06-18 18:45:55 DEBUG nova.virt.libvirt.connection [-] Updating host stats from (pid=24151) update_status /usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py:2467
2012-06-18 18:45:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._publish_service_capabilities from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Notifying Schedulers of capabilities ... from (pid=24151) _publish_service_capabilities /usr/lib/python2.7/dist-packages/nova/manager.py:203
2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] Making asynchronous fanout cast... from (pid=24151) fanout_cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:354
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rescued_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping ComputeManager._sync_power_states, 9 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_bandwidth_usage from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager.update_available_resource from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 INFO nova.virt.libvirt.connection [-] Compute_service record updated for sajithvb2
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_rebooting_instances from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping ComputeManager._cleanup_running_deleted_instances, 25 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._heal_instance_info_cache from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] Making asynchronous call on network ... from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:321
2012-06-18 18:46:55 DEBUG nova.rpc.amqp [-] MSG_ID is fcebfbce0666469dbefdd4b1a2c4df38 from (pid=24151) multicall /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:324
2012-06-18 18:46:55 DEBUG nova.compute.manager [-] Updated the info_cache for instance dda6d890-72bf-4538-816d-5e19702902a4 from (pid=24151) _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2227
2012-06-18 18:46:55 DEBUG nova.manager [-] Skipping ComputeManager._run_image_cache_manager_pass, 14 ticks left until next run from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:147
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._reclaim_queued_deletes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.compute.manager [-] FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=24151) _reclaim_queued_deletes /usr/lib/python2.7/dist-packages/nova/compute/manager.py:2380
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._report_driver_status from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152
2012-06-18 18:46:55 DEBUG nova.manager [-] Running periodic task ComputeManager._poll_unconfirmed_resizes from (pid=24151) periodic_tasks /usr/lib/python2.7/dist-packages/nova/manager.py:152

--
Best Regards
Sajith


--
Best Regards
Sajith


--
Best Regards
Sajith