[Openstack] [Nova] Instances which use flavors with disk space fail to spawn

Leander Bessa Beernaert leanderbb at gmail.com
Tue May 29 15:07:52 UTC 2012


For anyone interested, I've figured out that the instances were not spawning
because the amount of memory in the flavor was equal to the maximum memory
available on the underlying hardware.
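The check can be sketched in a few lines of Python: compare the flavor's RAM against the host's total memory as reported by /proc/meminfo. This is an illustrative sketch, not Nova code — the helper names and the 512 MB reserve are my own assumptions, chosen only to show the idea that a flavor must leave the host some headroom:

```python
# Illustrative sketch (not Nova code): a flavor whose RAM equals the
# host's total memory leaves qemu and the host OS nothing to run in.

def host_total_mem_mb(meminfo_text):
    """Parse MemTotal (reported in kB) from /proc/meminfo contents."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) // 1024
    raise ValueError("MemTotal not found in /proc/meminfo output")

def flavor_fits(flavor_ram_mb, host_mb, reserve_mb=512):
    """True if the flavor leaves at least reserve_mb for the host."""
    return flavor_ram_mb <= host_mb - reserve_mb
```

On a live host you would feed it open("/proc/meminfo").read(); the reserve value is an arbitrary example, not anything Nova enforces.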

On Tue, May 29, 2012 at 11:10 AM, Leander Bessa Beernaert <
leanderbb at gmail.com> wrote:

> Hello,
>
> I'm unable to boot any image with a flavor that has disk space
> associated with it. It always fails in the spawning state. Below is the log
> output of nova-compute:
>
> 2012-05-28 16:20:25 ERROR nova.compute.manager [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Instance failed to spawn
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Traceback (most recent call last):
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     self._legacy_nw_info(network_info), block_device_info)
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     return f(*args, **kw)
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     self._create_new_domain(xml)
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     domain.createWithFlags(launch_flags)
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] libvirtError: Unable to read from monitor: Connection reset by peer
> 2012-05-28 16:20:25 TRACE nova.compute.manager [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7]
> 2012-05-28 16:20:25 DEBUG nova.compute.manager [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] [instance: 10d7c8e0-e05b-4e57-b722-dab5771261b7] Deallocating network for instance from (pid=23518) _deallocate_network /usr/lib/python2.7/dist-packages/nova/compute/manager.py:616
> 2012-05-28 16:20:25 DEBUG nova.rpc.amqp [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] Making asynchronous cast on network... from (pid=23518) cast /usr/lib/python2.7/dist-packages/nova/rpc/amqp.py:346
> 2012-05-28 16:20:26 ERROR nova.rpc.amqp [req-1c725f9c-acae-47c4-b5ae-9ed5d2d9830c 9494d025721c4d7bb28a16fa796f9414 04282e9aff474d2383bb4d4417673e0a] Exception during message handling
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp Traceback (most recent call last):
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/rpc/amqp.py", line 252, in _process_data
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     rval = node_func(context=ctxt, **node_args)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     return f(*args, **kw)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 177, in decorated_function
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     sys.exc_info())
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 171, in decorated_function
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     return function(self, context, instance_uuid, *args, **kwargs)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 651, in run_instance
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     do_run_instance()
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 945, in inner
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     retval = f(*args, **kwargs)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 650, in do_run_instance
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._run_instance(context, instance_uuid, **kwargs)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 451, in _run_instance
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._set_instance_error_state(context, instance_uuid)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 432, in _run_instance
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._deallocate_network(context, instance)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self.gen.next()
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 429, in _run_instance
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     injected_files, admin_password)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 592, in _spawn
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._legacy_nw_info(network_info), block_device_info)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/exception.py", line 114, in wrapped
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     return f(*args, **kw)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 922, in spawn
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     self._create_new_domain(xml)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/connection.py", line 1575, in _create_new_domain
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     domain.createWithFlags(launch_flags)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp   File "/usr/lib/python2.7/dist-packages/libvirt.py", line 581, in createWithFlags
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
> 2012-05-28 16:20:26 TRACE nova.rpc.amqp libvirtError: Unable to read from monitor: Connection reset by peer
>
> Any suggestions?
>
>
> Regards,
>
> Leander
>