[Openstack] fails to launch instance

Afef MDHAFFAR afef.mdhaffar at gmail.com
Wed Oct 3 15:44:33 UTC 2012


Hi all,

I am running OpenStack on Ubuntu 12.04 (with XCP).
I created an Ubuntu image (raw format), and the image creation succeeded.
However, when I try to launch an instance from this image, I get the
following error:
-----
2012-10-03 15:39:15 DEBUG nova.virt.xenapi.vm_utils
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Fetched VDIs of type 'os_raw' with
UUID '5674bc5f-4649-435f-9be0-b1aeee201a17' from (pid=5918) _fetch_image
/opt/stack/nova/nova/virt/xenapi/vm_utils.py:948
2012-10-03 15:39:15 ERROR nova.utils
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Failed to spawn, rolling back
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Traceback (most recent call last):
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 355, in spawn
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     vdis = create_disks_step(undo_mgr)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     rv = f(*args, **kwargs)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 257, in create_disks_step
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_meta, block_device_info)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 224, in _create_disks
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]
block_device_info=block_device_info)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 427, in
get_vdis_for_instance
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_type)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 919, in _create_image
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_id, image_type)
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 843, in
_create_cached_image
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     root_vdi = vdis['root']
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] KeyError: 'root'
2012-10-03 15:39:15 TRACE nova.utils [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]
2012-10-03 15:39:15 ERROR nova.compute.manager
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Instance failed to spawn
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Traceback (most recent call last):
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/compute/manager.py", line 748, in _spawn
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     block_device_info)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 179, in spawn
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     admin_password, network_info,
block_device_info)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 369, in spawn
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]
undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/utils.py", line 1315, in rollback_and_reraise
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     self._rollback()
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     self.gen.next()
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 355, in spawn
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     vdis = create_disks_step(undo_mgr)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     rv = f(*args, **kwargs)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 257, in create_disks_step
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_meta, block_device_info)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 224, in _create_disks
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]
block_device_info=block_device_info)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 427, in
get_vdis_for_instance
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_type)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 919, in _create_image
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     image_id, image_type)
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]   File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 843, in
_create_cached_image
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]     root_vdi = vdis['root']
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] KeyError: 'root'
2012-10-03 15:39:15 TRACE nova.compute.manager [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693]
2012-10-03 15:39:15 DEBUG nova.utils
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] Got semaphore
"compute_resources" for method "abort_resource_claim"... from (pid=5918)
inner /opt/stack/nova/nova/utils.py:721
2012-10-03 15:39:15 INFO nova.compute.resource_tracker
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] Aborting claim:
[Claim ea4d2039-64c3-4f6f-a866-84db1f0e3693: 512 MB memory, 0 GB disk, 1
VCPUS]
2012-10-03 15:39:15 DEBUG nova.compute.manager
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Deallocating network for instance
from (pid=5918) _deallocate_network
/opt/stack/nova/nova/compute/manager.py:774
2012-10-03 15:39:15 DEBUG nova.openstack.common.rpc.amqp [-] Making
asynchronous call on network ... from (pid=5918) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:351
2012-10-03 15:39:15 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is
7850c3859077456fab6d8fc9dc49807c from (pid=5918) multicall
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:354
2012-10-03 15:39:15 DEBUG nova.openstack.common.rpc.amqp [-] Pool creating
new connection from (pid=5918) create
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:57
2012-10-03 15:39:15 DEBUG nova.compute.manager [-] [instance:
818fee36-6e63-44c9-8edd-4a76ffd8aa6a] Updated the info_cache for instance
from (pid=5918) _heal_instance_info_cache
/opt/stack/nova/nova/compute/manager.py:2416
2012-10-03 15:39:15 DEBUG nova.manager [-] Skipping
ComputeManager._run_image_cache_manager_pass, 38 ticks left until next run
from (pid=5918) periodic_tasks /opt/stack/nova/nova/manager.py:167
2012-10-03 15:39:15 DEBUG nova.manager [-] Running periodic task
ComputeManager._reclaim_queued_deletes from (pid=5918) periodic_tasks
/opt/stack/nova/nova/manager.py:172
2012-10-03 15:39:15 DEBUG nova.compute.manager [-]
FLAGS.reclaim_instance_interval <= 0, skipping... from (pid=5918)
_reclaim_queued_deletes /opt/stack/nova/nova/compute/manager.py:2752
2012-10-03 15:39:15 DEBUG nova.manager [-] Running periodic task
ComputeManager._report_driver_status from (pid=5918) periodic_tasks
/opt/stack/nova/nova/manager.py:172
2012-10-03 15:39:15 INFO nova.compute.manager [-] Updating host status
2012-10-03 15:39:15 DEBUG nova.virt.xenapi.host [-] Updating host stats
from (pid=5918) update_status /opt/stack/nova/nova/virt/xenapi/host.py:148
2012-10-03 15:39:15 INFO nova.openstack.common.rpc.common [-] Connected to
AMQP server on localhost:5672
2012-10-03 15:39:16 DEBUG nova.manager [-] Running periodic task
ComputeManager._poll_unconfirmed_resizes from (pid=5918) periodic_tasks
/opt/stack/nova/nova/manager.py:172
2012-10-03 15:39:22 DEBUG nova.compute.manager
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Re-scheduling instance: attempt 1
from (pid=5918) _reschedule /opt/stack/nova/nova/compute/manager.py:580
2012-10-03 15:39:22 DEBUG nova.utils
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] Got semaphore
"compute_resources" for method "update_usage"... from (pid=5918) inner
/opt/stack/nova/nova/utils.py:721
2012-10-03 15:39:22 DEBUG nova.openstack.common.rpc.amqp [-] Making
asynchronous cast on scheduler... from (pid=5918) cast
/opt/stack/nova/nova/openstack/common/rpc/amqp.py:376
2012-10-03 15:39:22 ERROR nova.compute.manager
[req-ad542f39-2dff-4567-ae6b-9e11b692781a admin demo] [instance:
ea4d2039-64c3-4f6f-a866-84db1f0e3693] Build error: ['Traceback (most recent
call last):\n', '  File "/opt/stack/nova/nova/compute/manager.py", line
501, in _run_instance\n    injected_files, admin_password)\n', '  File
"/opt/stack/nova/nova/compute/manager.py", line 748, in _spawn\n
 block_device_info)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/driver.py", line 179, in spawn\n
 admin_password, network_info, block_device_info)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 369, in spawn\n
 undo_mgr.rollback_and_reraise(msg=msg, instance=instance)\n', '  File
"/opt/stack/nova/nova/utils.py", line 1315, in rollback_and_reraise\n
 self._rollback()\n', '  File "/usr/lib/python2.7/contextlib.py", line 24,
in __exit__\n    self.gen.next()\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 355, in spawn\n    vdis =
create_disks_step(undo_mgr)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vmops.py", line 139, in inner\n    rv =
f(*args, **kwargs)\n', '  File "/opt/stack/nova/nova/virt/xenapi/vmops.py",
line 257, in create_disks_step\n    image_meta, block_device_info)\n', '
 File "/opt/stack/nova/nova/virt/xenapi/vmops.py", line 224, in
_create_disks\n    block_device_info=block_device_info)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 427, in
get_vdis_for_instance\n    image_type)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 919, in
_create_image\n    image_id, image_type)\n', '  File
"/opt/stack/nova/nova/virt/xenapi/vm_utils.py", line 843, in
_create_cached_image\n    root_vdi = vdis[\'root\']\n', "KeyError:
'root'\n"]
------
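As I read the trace, _fetch_image returns the VDIs keyed by type 'os_raw'
(see the first DEBUG line), while _create_cached_image at vm_utils.py:843
looks them up as vdis['root'], which raises the KeyError. A minimal sketch of
that mismatch (the dict shapes here are my guess, paraphrasing the code, not
the actual vm_utils structures):

```python
# Hypothetical sketch of the failing lookup in _create_cached_image.
# The image was fetched under the key 'os_raw', not 'root'.
vdis = {'os_raw': {'uuid': '5674bc5f-4649-435f-9be0-b1aeee201a17'}}

try:
    root_vdi = vdis['root']  # fails: only 'os_raw' is present
except KeyError as exc:
    print('KeyError:', exc)  # matches the traceback above
```

So it looks like the raw image type is not being mapped to a root disk at
this point, but I am not sure whether the problem is in my image metadata or
in the driver.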
How can I fix this error?

Thank you,
Afef