[Openstack-operators] Cannot create Instances
Steven Barnabas
sbarnabas at frontporch.com
Wed Mar 20 17:25:29 UTC 2013
Yes! Cirros worked. I also imported an Ubuntu image and that worked as well. Both of these images are in QCOW2 format, so I'm thinking it's the qemu conversion process. I'm going to try again with a .iso this time to see if it works. If not, I will update QEMU to 1.3(.1).
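As a sanity check, the same conversion Nova attempts can be run by hand against the downloaded image to see whether qemu-img itself chokes on it (paths below are just examples, substitute the actual file):

qemu-img info /tmp/test-image.img                                 # what format does qemu-img detect?
qemu-img convert -O raw /tmp/test-image.img /tmp/test-image.raw   # same conversion Nova runs

If that reproduces the "error while reading sector" failure outside of Nova, it points at the image file or the qemu-img build rather than OpenStack itself.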
Steven Barnabas
Network Engineer
Front Porch, Inc.
209-288-5580
209-652-7733 mobile
www.frontporch.com
On Mar 19, 2013, at 2:51 PM, Joe Topjian <joe.topjian at cybera.ca> wrote:
Hi Steven,
Yeah, something definitely funny with the image.
The partition that holds /var/lib/nova/instances isn't out of disk space, is it?
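For example, on the compute node:

df -h /var/lib/nova/instances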
Could you try importing the small cirros image and see if you're able to launch that? You can use the following snippet:
. /root/openrc
wget https://launchpad.net/cirros/trunk/0.3.0/+download/cirros-0.3.0-x86_64-disk.img
glance add name='cirros image' is_public=true container_format=bare disk_format=qcow2 < cirros-0.3.0-x86_64-disk.img
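Once the import finishes, you can confirm it registered and try booting it; the flavor and instance name below are only examples, and <image-id> is whatever ID glance prints for the new image:

nova image-list                                            # image should show as ACTIVE
nova boot --image <image-id> --flavor m1.tiny cirros-test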
Joe
On Tue, Mar 19, 2013 at 3:48 PM, Steven Barnabas <sbarnabas at frontporch.com> wrote:
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 buckets
drwxr-xr-x 8 nova nova 4096 Feb 22 13:54 CA
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 images
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 instances
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 keys
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 networks
-rw-r----- 1 nova nova 158720 Feb 22 13:54 nova.sqlite
drwxr-xr-x 2 nova nova 4096 Jan 30 08:42 tmp
Looks like the permissions are there. There is no instance folder under /instances. I deleted the instance and tried creating a new one with a new name. Same thing: it sat for about 10 minutes and then errored out. I checked the nova-compute log this time and I am receiving a completely different error now:
2013-03-19 13:31:53 ERROR nova.compute.manager [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Instance failed to spawn
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Traceback (most recent call last):
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 747, in _spawn
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] block_device_info)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] temp_level, payload)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] self.gen.next()
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] return f(*args, **kw)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1090, in spawn
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] admin_pass=admin_password)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1356, in _create_image
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] project_id=instance['project_id'])
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 126, in cache
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] *args, **kwargs)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 173, in create_image
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] prepare_template(target=base, *args, **kwargs)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 796, in inner
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] retval = f(*args, **kwargs)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 117, in call_if_not_exists
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] fetch_func(target=target, *args, **kwargs)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 440, in fetch_image
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] images.fetch_to_raw(context, image_id, target, user_id, project_id)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 98, in fetch_to_raw
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] staged)
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 210, in execute
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] cmd=' '.join(cmd))
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] ProcessExecutionError: Unexpected error while running command.
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Command: qemu-img convert -O raw /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.part /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.converted
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Exit code: 1
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Stdout: ''
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Stderr: 'qemu-img: error while reading sector 131072: Invalid argument\n'
2013-03-19 13:31:53 23291 TRACE nova.compute.manager [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972]
2013-03-19 13:31:53 DEBUG nova.utils [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] Got semaphore "compute_resources" for method "abort_resource_claim"... inner /usr/lib/python2.7/dist-packages/nova/utils.py:765
2013-03-19 13:31:53 INFO nova.compute.resource_tracker [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] Aborting claim: [Claim c75434ec-677f-4b9e-b5f8-8ab93bcdf972: 16384 MB memory, 170 GB disk, 4 VCPUS]
2013-03-19 13:31:53 DEBUG nova.compute.manager [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Deallocating network for instance _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:773
2013-03-19 13:31:53 DEBUG nova.network.quantumv2.api [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] deallocate_for_instance() for PL7 deallocate_for_instance /usr/lib/python2.7/dist-packages/nova/network/quantumv2/api.py:171
2013-03-19 13:31:53 DEBUG nova.compute.manager [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Re-scheduling instance: attempt 1 _reschedule /usr/lib/python2.7/dist-packages/nova/compute/manager.py:579
2013-03-19 13:31:53 DEBUG nova.utils [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] Got semaphore "compute_resources" for method "update_usage"... inner /usr/lib/python2.7/dist-packages/nova/utils.py:765
2013-03-19 13:31:53 23291 DEBUG nova.openstack.common.rpc.amqp [-] Making asynchronous cast on scheduler... cast /usr/lib/python2.7/dist-packages/nova/openstack/common/rpc/amqp.py:377
2013-03-19 13:31:53 ERROR nova.compute.manager [req-8d23bdc9-80db-4bd7-bd63-e712b1c0615e aa0e0626b30140fd87f26225a202bf58 fd28b03caed749d8b8b014ec350d9f92] [instance: c75434ec-677f-4b9e-b5f8-8ab93bcdf972] Build error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 494, in _run_instance\n injected_files, admin_password)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 747, in _spawn\n block_device_info)\n', ' File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 117, in wrapped\n temp_level, payload)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 92, in wrapped\n return f(*args, **kw)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1090, in spawn\n admin_pass=admin_password)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 1356, in _create_image\n project_id=instance[\'project_id\'])\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 126, in cache\n *args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 173, in create_image\n prepare_template(target=base, *args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 796, in inner\n retval = f(*args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/imagebackend.py", line 117, in call_if_not_exists\n fetch_func(target=target, *args, **kwargs)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/utils.py", line 440, in fetch_image\n images.fetch_to_raw(context, image_id, target, user_id, project_id)\n', ' File "/usr/lib/python2.7/dist-packages/nova/virt/images.py", line 98, in fetch_to_raw\n staged)\n', ' File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 210, in execute\n cmd=\' \'.join(cmd))\n', "ProcessExecutionError: Unexpected error while running command.\nCommand: qemu-img convert -O raw /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.part /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.converted\nExit code: 1\nStdout: ''\nStderr: 'qemu-img: error while reading sector 131072: Invalid argument\\n'\n"]
Kind of seems like it does not like the image?
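If it helps narrow it down, the partially converted file the traceback mentions can be inspected directly on the compute node:

qemu-img info /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.part
md5sum /var/lib/nova/instances/_base/2986816d8cbac0ea14d12b014f770563ab0a0517.part

Comparing that md5sum against the checksum Glance recorded for the image should show whether the download itself is truncated or corrupt.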
Steven Barnabas
Network Engineer
Front Porch, Inc.
209-288-5580
209-652-7733 mobile
www.frontporch.com
On Mar 18, 2013, at 7:04 PM, Joe Topjian <joe.topjian at cybera.ca> wrote:
/var/lib/nova/instances
--
Joe Topjian
Systems Administrator
Cybera Inc.
www.cybera.ca
Cybera is a not-for-profit organization that works to spur and support innovation, for the economic benefit of Alberta, through the use of cyberinfrastructure.