[Openstack] openstack fails to launch vm -- Unable to destroy VDI OpaqueRef:f16be09

Afef MDHAFFAR afef.mdhaffar at gmail.com
Wed Jan 23 18:24:23 UTC 2013


Hi,

I am running OpenStack on Ubuntu with XCP as the hypervisor platform. Every
attempt to launch a VM fails: VBD.unplug times out on the XenAPI side with
['INTERNAL_ERROR', 'Watch.Timeout(300.)'], the rollback then cannot destroy
the VDI because the VBD is still attached (['VDI_IN_USE', ...]), and the
instance ends up in the ERROR state. Does anyone have an idea what could
cause the unplug to time out?
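
For reference, this is roughly how I poke at the stuck VBDs by hand, using
the same XenAPI bindings the traceback goes through (XenAPI.py). This is
only a sketch: the host URL and credentials are placeholders for my setup,
and the unplug call is commented out since it is exactly the call that
times out for nova:

    import XenAPI

    # Connect to the XCP host (placeholder URL and credentials).
    session = XenAPI.Session('http://my-xcp-host')
    session.login_with_password('root', 'secret')
    try:
        # List every VBD that XCP still considers plugged in.
        for vbd_ref in session.xenapi.VBD.get_all():
            rec = session.xenapi.VBD.get_record(vbd_ref)
            if rec['currently_attached']:
                print rec['uuid'], 'attached to VM', rec['VM']
                # session.xenapi.VBD.unplug(vbd_ref)  # the call that times out
    finally:
        session.xenapi.session.logout()

The full nova-compute log from the failed launch is below.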

2013-01-23 18:20:11 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_rescued_instances from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:11 DEBUG nova.manager [-] Skipping
ComputeManager._sync_power_states, 3 ticks left until next run from
(pid=13806) periodic_tasks /opt/stack/nova/nova/manager.py:166
2013-01-23 18:20:11 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_bandwidth_usage from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:11 INFO nova.compute.manager [-] Updating bandwidth usage
cache
2013-01-23 18:20:13 DEBUG nova.manager [-] Running periodic task
ComputeManager._instance_usage_audit from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:13 DEBUG nova.manager [-] Running periodic task
ComputeManager.update_available_resource from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:13 DEBUG nova.virt.xenapi.host [-] Updating host stats
from (pid=13806) update_status /opt/stack/nova/nova/virt/xenapi/host.py:156
2013-01-23 18:20:14 DEBUG nova.openstack.common.lockutils [-] Got semaphore
"compute_resources" for method "update_available_resource"... from
(pid=13806) inner /opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:14 AUDIT nova.compute.resource_tracker [-] Auditing
locally available compute resources
2013-01-23 18:20:14 DEBUG nova.virt.xenapi.host [-] Updating host stats
from (pid=13806) update_status /opt/stack/nova/nova/virt/xenapi/host.py:156
2013-01-23 18:20:14 ERROR nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] ['INTERNAL_ERROR',
'Watch.Timeout(300.)']
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils Traceback (most recent
call last):
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 351, in unplug_vbd
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils
session.call_xenapi('VBD.unplug', vbd_ref)
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 715, in call_xenapi
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     return
session.xenapi_request(method, args)
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in
xenapi_request
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     result =
_parse_result(getattr(self, methodname)(*full_params))
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in
_parse_result
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     raise
Failure(result['ErrorDescription'])
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils Failure:
['INTERNAL_ERROR', 'Watch.Timeout(300.)']
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils
2013-01-23 18:20:14 ERROR nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma]
['OPERATION_NOT_ALLOWED', "VBD 'd3ea624c-9290-db2d-e2cb-12a40b4135db' still
attached to '7e5f4d98-4a77-6e6c-1e2c-01374378741d'"]
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils Traceback (most recent
call last):
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 376, in destroy_vbd
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils
session.call_xenapi('VBD.destroy', vbd_ref)
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 715, in call_xenapi
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     return
session.xenapi_request(method, args)
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in
xenapi_request
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     result =
_parse_result(getattr(self, methodname)(*full_params))
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in
_parse_result
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils     raise
Failure(result['ErrorDescription'])
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils Failure:
['OPERATION_NOT_ALLOWED', "VBD 'd3ea624c-9290-db2d-e2cb-12a40b4135db' still
attached to '7e5f4d98-4a77-6e6c-1e2c-01374378741d'"]
2013-01-23 18:20:14 TRACE nova.virt.xenapi.vm_utils
2013-01-23 18:20:14 DEBUG nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] Destroying VBD for
VDI OpaqueRef:f16be09b-2c1e-54ce-d471-4a92b36578b3 done. from (pid=13806)
vdi_attached_here /opt/stack/nova/nova/virt/xenapi/vm_utils.py:1892
2013-01-23 18:20:14 ERROR nova.utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Failed to spawn, rolling back
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Traceback (most recent call last):
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 396, in spawn
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     attach_disks_step(undo_mgr,
vm_ref, vdis, disk_image_type)
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 134, in inner
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     rv = f(*args, **kwargs)
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 328, in attach_disks_step
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     disk_image_type)
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 502, in _attach_disks
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     instance_type['root_gb'])
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 784, in
auto_configure_disk
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     with vdi_attached_here(session,
vdi_ref, read_only=False) as dev:
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     return self.gen.next()
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1885, in
vdi_attached_here
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     unplug_vbd(session, vbd_ref)
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 364, in unplug_vbd
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     _('Unable to unplug VBD %s') %
vbd_ref)
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] StorageError: Unable to unplug VBD
OpaqueRef:25fd523f-5274-5a83-6684-4165604e801f
2013-01-23 18:20:14 TRACE nova.utils [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]
2013-01-23 18:20:14 WARNING nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] VM already halted, skipping
shutdown...
2013-01-23 18:20:14 DEBUG nova.virt.xenapi.vmops
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Destroying VDIs from (pid=13806)
_destroy_vdis /opt/stack/nova/nova/virt/xenapi/vmops.py:986
2013-01-23 18:20:14 DEBUG nova.virt.xenapi.vmops
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Using RAW or VHD, skipping kernel and
ramdisk deletion from (pid=13806) _destroy_kernel_ramdisk
/opt/stack/nova/nova/virt/xenapi/vmops.py:1014
2013-01-23 18:20:14 DEBUG nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] VM destroyed from (pid=13806)
destroy_vm /opt/stack/nova/nova/virt/xenapi/vm_utils.py:276
2013-01-23 18:20:14 INFO nova.virt.firewall
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Attempted to unfilter instance which
is not filtered
2013-01-23 18:20:15 DEBUG nova.compute.resource_tracker [-] Hypervisor:
free ram (MB): 4706 from (pid=13806) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:313
2013-01-23 18:20:15 DEBUG nova.compute.resource_tracker [-] Hypervisor:
free disk (GB): 66 from (pid=13806) _report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:314
2013-01-23 18:20:15 DEBUG nova.compute.resource_tracker [-] Hypervisor:
VCPU information unavailable from (pid=13806)
_report_hypervisor_resource_view
/opt/stack/nova/nova/compute/resource_tracker.py:321
2013-01-23 18:20:15 AUDIT nova.compute.resource_tracker [-] Free ram (MB):
5631
2013-01-23 18:20:15 AUDIT nova.compute.resource_tracker [-] Free disk (GB):
17
2013-01-23 18:20:15 AUDIT nova.compute.resource_tracker [-] Free VCPU
information unavailable
2013-01-23 18:20:15 INFO nova.compute.resource_tracker [-] Compute_service
record updated for computeDomU05
2013-01-23 18:20:15 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_rebooting_instances from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:15 DEBUG nova.manager [-] Skipping
ComputeManager._cleanup_running_deleted_instances, 23 ticks left until next
run from (pid=13806) periodic_tasks /opt/stack/nova/nova/manager.py:166
2013-01-23 18:20:15 DEBUG nova.manager [-] Running periodic task
ComputeManager._check_instance_build_time from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:15 DEBUG nova.manager [-] Running periodic task
ComputeManager._heal_instance_info_cache from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:15 DEBUG nova.openstack.common.rpc.amqp [-] Making
asynchronous call on network ... from (pid=13806) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:352
2013-01-23 18:20:15 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
65ce36f29eb14aa9aecb6a0282a0f5d5 from (pid=13806) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:355
2013-01-23 18:20:15 DEBUG nova.openstack.common.rpc.amqp [-] received
{u'_context_roles': [], u'_msg_id': u'65ce36f29eb14aa9aecb6a0282a0f5d5',
u'_context_quota_class': None, u'_context_request_id':
u'req-3530b51d-8309-4a22-a8bb-047e76fbc3a7', u'_context_service_catalog':
None, u'_context_user_name': None, u'_context_auth_token': '<SANITIZED>',
u'args': {u'instance_id': 510, u'instance_uuid':
u'2d783ed2-9f27-4e33-9c77-f0031a6f44f9', u'host': u'computeDomU05',
u'project_id': u'2f4008b766f648d4a5b55b9d88ffd23e', u'rxtx_factor': 1.0},
u'_context_instance_lock_checked': False, u'_context_project_name': None,
u'_context_is_admin': True, u'version': u'1.0', u'_context_project_id':
None, u'_context_timestamp': u'2013-01-23T18:20:11.536181',
u'_context_read_deleted': u'no', u'_context_user_id': None, u'method':
u'get_instance_nw_info', u'_context_remote_address': None} from (pid=13809)
_safe_log /opt/stack/nova/nova/openstack/common/rpc/common.py:195
2013-01-23 18:20:15 DEBUG nova.openstack.common.rpc.amqp [-] unpacked
context: {'project_name': None, 'user_id': None, 'roles': [], 'timestamp':
u'2013-01-23T18:20:11.536181', 'auth_token': '<SANITIZED>',
'remote_address': None, 'quota_class': None, 'is_admin': True,
'service_catalog': None, 'request_id':
u'req-3530b51d-8309-4a22-a8bb-047e76fbc3a7', 'instance_lock_checked':
False, 'project_id': None, 'user_name': None, 'read_deleted': u'no'} from
(pid=13809) _safe_log
/opt/stack/nova/nova/openstack/common/rpc/common.py:195
2013-01-23 18:20:16 DEBUG nova.openstack.common.lockutils
[req-3530b51d-8309-4a22-a8bb-047e76fbc3a7 None None] Got semaphore
"get_dhcp" for method "_get_dhcp_ip"... from (pid=13809) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:16 DEBUG nova.openstack.common.lockutils
[req-3530b51d-8309-4a22-a8bb-047e76fbc3a7 None None] Got semaphore
"get_dhcp" for method "_get_dhcp_ip"... from (pid=13809) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:16 DEBUG nova.compute.manager [-] [instance:
2d783ed2-9f27-4e33-9c77-f0031a6f44f9] Updated the info_cache for instance
from (pid=13806) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:2743
2013-01-23 18:20:16 DEBUG nova.manager [-] Skipping
ComputeManager._run_image_cache_manager_pass, 33 ticks left until next run
from (pid=13806) periodic_tasks /opt/stack/nova/nova/manager.py:166
2013-01-23 18:20:16 DEBUG nova.manager [-] Running periodic task
ComputeManager._reclaim_queued_deletes from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:16 DEBUG nova.compute.manager [-]
CONF.reclaim_instance_interval <= 0, skipping... from (pid=13806)
_reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:3085
2013-01-23 18:20:16 DEBUG nova.manager [-] Running periodic task
ComputeManager._report_driver_status from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:16 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_unconfirmed_resizes from (pid=13806) periodic_tasks
/opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:23 ERROR nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] ['VDI_IN_USE',
'OpaqueRef:f16be09b-2c1e-54ce-d471-4a92b36578b3']
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils Traceback (most recent
call last):
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 414, in destroy_vdi
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils
session.call_xenapi('VDI.destroy', vdi_ref)
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 715, in call_xenapi
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils     return
session.xenapi_request(method, args)
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in
xenapi_request
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils     result =
_parse_result(getattr(self, methodname)(*full_params))
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils   File
"/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in
_parse_result
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils     raise
Failure(result['ErrorDescription'])
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils Failure: ['VDI_IN_USE',
'OpaqueRef:f16be09b-2c1e-54ce-d471-4a92b36578b3']
2013-01-23 18:20:23 TRACE nova.virt.xenapi.vm_utils
2013-01-23 18:20:23 ERROR nova.virt.xenapi.vm_utils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] Unable to destroy
VDI OpaqueRef:f16be09b-2c1e-54ce-d471-4a92b36578b3
2013-01-23 18:20:23 ERROR nova.compute.manager
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Instance failed to spawn
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Traceback (most recent call last):
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/compute/manager.py", line 840, in _spawn
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     block_device_info)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 172, in spawn
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     admin_password, network_info,
block_device_info)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 409, in spawn
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/utils.py", line 1154, in rollback_and_reraise
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     self._rollback()
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     self.gen.next()
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 396, in spawn
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     attach_disks_step(undo_mgr,
vm_ref, vdis, disk_image_type)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 134, in inner
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     rv = f(*args, **kwargs)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 328, in attach_disks_step
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     disk_image_type)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 502, in _attach_disks
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     instance_type['root_gb'])
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 784, in
auto_configure_disk
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     with vdi_attached_here(session,
vdi_ref, read_only=False) as dev:
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     return self.gen.next()
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1885, in
vdi_attached_here
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     unplug_vbd(session, vbd_ref)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 364, in unplug_vbd
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     _('Unable to unplug VBD %s') %
vbd_ref)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] StorageError: Unable to unplug VBD
OpaqueRef:25fd523f-5274-5a83-6684-4165604e801f
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]
2013-01-23 18:20:23 DEBUG nova.openstack.common.lockutils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] Got semaphore
"compute_resources" for method "abort"... from (pid=13806) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:23 DEBUG nova.compute.claims
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Aborting claim: [Claim: 1024 MB
memory, 20 GB disk, 1 VCPUS] from (pid=13806) abort
/opt/stack/nova/nova/compute/claims.py:94
2013-01-23 18:20:23 DEBUG nova.compute.manager
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Deallocating network for instance
from (pid=13806) _deallocate_network
/opt/stack/nova/nova/compute/manager.py:866
2013-01-23 18:20:23 DEBUG nova.openstack.common.rpc.amqp [-] Making
asynchronous call on network ... from (pid=13806) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:352
2013-01-23 18:20:23 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
3b93097b972b463c89d632edb538c8f2 from (pid=13806) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:355
2013-01-23 18:20:23 DEBUG nova.manager [-] Running periodic task
FlatDHCPManager.publish_service_capabilities from (pid=13809)
periodic_tasks /opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:23 DEBUG nova.manager [-] Running periodic task
FlatDHCPManager._disassociate_stale_fixed_ips from (pid=13809)
periodic_tasks /opt/stack/nova/nova/manager.py:171
2013-01-23 18:20:23 ERROR nova.compute.manager
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Error trying to reschedule
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] Traceback (most recent call last):
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/compute/manager.py", line 641, in
_reschedule_or_reraise
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     task_state)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]   File
"/opt/stack/nova/nova/compute/manager.py", line 659, in _reschedule
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]     retry =
filter_properties.get('retry', None)
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7] AttributeError: 'unicode' object has
no attribute 'get'
2013-01-23 18:20:23 TRACE nova.compute.manager [instance:
98d6cf48-c70f-4417-ab47-669c9f56e4f7]
2013-01-23 18:20:23 DEBUG nova.openstack.common.lockutils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] Got semaphore
"compute_resources" for method "update_usage"... from (pid=13806) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:24 DEBUG nova.openstack.common.lockutils
[req-78b293e1-aa47-4657-bba0-48254900b87b Afef cep4cma] Got semaphore
"compute_resources" for method "update_usage"... from (pid=13806) inner
/opt/stack/nova/nova/openstack/common/lockutils.py:185
2013-01-23 18:20:24 ERROR nova.openstack.common.rpc.amqp [-] Exception
during message handling
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp Traceback (most
recent call last):
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/openstack/common/rpc/amqp.py", line 276, in
_process_data
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     rval =
self.proxy.dispatch(ctxt, version, method, **args)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/openstack/common/rpc/dispatcher.py", line 145, in
dispatch
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     return
getattr(proxyobj, method)(ctxt, **kwargs)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/exception.py", line 115, in wrapped
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     temp_level,
payload)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/exception.py", line 90, in wrapped
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     return
f(*args, **kw)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 176, in decorated_function
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     pass
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 162, in decorated_function
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     return
function(self, context, *args, **kwargs)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 203, in decorated_function
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
kwargs['instance']['uuid'], e, sys.exc_info())
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 191, in decorated_function
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     return
function(self, context, *args, **kwargs)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 933, in run_instance
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
do_run_instance()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/openstack/common/lockutils.py", line 228, in inner
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     retval =
f(*args, **kwargs)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 932, in do_run_instance
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
admin_password, is_first_time, instance)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 608, in _run_instance
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
self._set_instance_error_state(context, instance['uuid'])
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 596, in _run_instance
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     is_first_time,
request_spec, filter_properties)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 582, in _run_instance
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
injected_files, admin_password)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/compute/manager.py", line 840, in _spawn
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
block_device_info)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 172, in spawn
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
admin_password, network_info, block_device_info)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 409, in spawn
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/utils.py", line 1154, in rollback_and_reraise
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
self._rollback()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 396, in spawn
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
attach_disks_step(undo_mgr, vm_ref, vdis, disk_image_type)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 134, in inner
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     rv = f(*args,
**kwargs)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 328, in attach_disks_step
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
disk_image_type)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 502, in _attach_disks
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
instance_type['root_gb'])
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 784, in
auto_configure_disk
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     with
vdi_attached_here(session, vdi_ref, read_only=False) as dev:
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     return
self.gen.next()
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 1885, in
vdi_attached_here
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
unplug_vbd(session, vbd_ref)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 364, in unplug_vbd
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp     _('Unable to
unplug VBD %s') % vbd_ref)
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp StorageError:
Unable to unplug VBD OpaqueRef:25fd523f-5274-5a83-6684-4165604e801f
2013-01-23 18:20:24 TRACE nova.openstack.common.rpc.amqp
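
One more thing I noticed while reading the trace: independent of the storage
failure, the reschedule path itself crashes ("AttributeError: 'unicode'
object has no attribute 'get'" in _reschedule), because filter_properties
arrives as a unicode string instead of the dict the code expects, so the
original error gets masked instead of triggering a retry on another host.
Just to illustrate the shape of that problem (a sketch, not the actual nova
fix):

    def safe_retry(filter_properties):
        # _reschedule assumes a dict; in my log filter_properties is a
        # unicode string, so .get() blows up. Tolerate that here.
        if not isinstance(filter_properties, dict):
            return None
        return filter_properties.get('retry', None)

    print safe_retry(u'')                              # None, no AttributeError
    print safe_retry({'retry': {'num_attempts': 1}})   # {'num_attempts': 1}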