[Openstack] Nova boot spawn failed with messaging timeout

Shyam Nadiminti nsv.shyam at gmail.com
Tue Jan 20 14:24:02 UTC 2015


tail: cannot open '/var/log/neutron/server.log' for reading: No such file
or directory

There is no such file.
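
If the log file is missing, neutron-server is probably not running on this
node, or it is logging somewhere else. A quick way to check (a rough sketch
assuming a typical packaged install, the default config path, and the default
API port 9696; service names, paths, and <controller-ip> are placeholders to
adjust for your deployment):

  # Is the neutron API service installed and running?
  service neutron-server status

  # Where is neutron configured to log?
  grep -E '^(log_dir|log_file)' /etc/neutron/neutron.conf

  # Does the neutron API answer on its default port?
  curl -v http://<controller-ip>:9696/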

On Tue, Jan 20, 2015 at 7:39 PM, Geo Varghese <gvarghese at aqorn.com> wrote:

> Please check neutron server logs
>
> tail -f /var/log/neutron/server.log
>
> On Tue, Jan 20, 2015 at 7:26 PM, Shyam Nadiminti <nsv.shyam at gmail.com>
> wrote:
>
>> Further debugging shows that nova cannot connect to neutron.  Is something
>> wrong with the config params?  How can I check?
>>
>> 2015-01-20 16:06:50.774 8152 ERROR nova.compute.manager [-] Instance
>> failed network setup after 1 attempt(s)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager Traceback (most
>> recent call last):
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/nova/compute/manager.py", line
>> 1682, in _allocate_network_async
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager
>> dhcp_options=dhcp_options)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
>> line 261, in allocate_for_instance
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager
>> refresh_cache=True, neutron=neutron) else
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
>> line 454, in _has_port_binding_extension
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager
>> self._refresh_neutron_extensions_cache(context, neutron=neutron)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/nova/network/neutronv2/api.py",
>> line 445, in _refresh_neutron_extensions_cache
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager
>> extensions_list = neutron.list_extensions()['extensions']
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line
>> 108, in with_params
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager     ret =
>> self.function(instance, *args, **kwargs)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line
>> 286, in list_extensions
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager     return
>> self.get(self.extensions_path, params=_params)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line
>> 1183, in get
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager
>> headers=headers, params=params)
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager   File
>> "/usr/local/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line
>> 1175, in retry_request
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager     raise
>> exceptions.ConnectionFailed(reason=_("Maximum attempts reached"))
>> 2015-01-20 16:06:50.774 8152 TRACE nova.compute.manager ConnectionFailed:
>> Connection to neutron failed: Maximum attempts reached
>>
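>> (The ConnectionFailed above means nova-compute never gets an answer from
>> the neutron API endpoint. A few things worth verifying, sketched against a
>> Juno-era nova.conf; the option names, paths, and <controller-ip> are
>> assumptions and may differ on other releases:)
>>
>>   # nova.conf should point at a reachable neutron endpoint, e.g.:
>>   #   [neutron]
>>   #   url = http://<controller-ip>:9696
>>   #   admin_username = neutron
>>   #   admin_tenant_name = service
>>   #   admin_auth_url = http://<controller-ip>:35357/v2.0
>>
>>   # From the compute node, confirm the endpoint actually answers:
>>   curl -v http://<controller-ip>:9696/
>>   keystone endpoint-list | grep 9696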
>>
>> On Tue, Jan 20, 2015 at 4:02 PM, Geo Varghese <gvarghese at aqorn.com>
>> wrote:
>>
>>> Restart your RabbitMQ server. It seems to be an issue with the messaging
>>> server.
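>>>
>>> (A quick way to confirm, assuming a standard RabbitMQ package install and
>>> the usual nova.conf broker options:)
>>>
>>>   rabbitmqctl status
>>>   service rabbitmq-server restart
>>>   # and make sure nova points at the right broker:
>>>   grep -E '^rabbit_(host|userid)' /etc/nova/nova.conf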
>>>
>>> On Tue, Jan 20, 2015 at 2:52 PM, Shyam Nadiminti <nsv.shyam at gmail.com>
>>> wrote:
>>>
>>>>
>>>>
>>>> I'm trying to create a new instance and it has been failing with the
>>>> stack trace below: it fails to get network information.
>>>>
>>>>
>>>> - build_and_run_instance is called.
>>>>
>>>> - locked_do_build_and_run_instance is called to get resources.
>>>>
>>>> - The memory and disk space resource claims succeed.
>>>>
>>>> - Then it tries to build the network resource asynchronously via
>>>> _allocate_network_async.
>>>>
>>>>
>>>> This is where the issue starts.  I found that the message is sent but no
>>>> response is received.  Can anyone suggest which logs I should look into
>>>> to see what happens to the response?  Why does it time out?  How can I
>>>> debug it further?  Is there anything in nova.conf that I may be missing?
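>>>>
>>>> (These are the kinds of checks that usually narrow down a
>>>> MessagingTimeout like this, sketched assuming default log locations,
>>>> which may differ on your install:)
>>>>
>>>>   # Is anything alive to consume the RPC call?
>>>>   nova service-list
>>>>
>>>>   # Broker health and queue backlog:
>>>>   rabbitmqctl status
>>>>   rabbitmqctl list_queues name messages consumers | grep network
>>>>
>>>>   # Logs worth tailing while retrying the boot:
>>>>   tail -f /var/log/nova/nova-network.log /var/log/rabbitmq/rabbit@*.log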
>>>>
>>>>
>>>>
>>>> 2015-01-19 21:17:56.350 17307 ERROR nova.compute.manager [-] Instance
>>>> failed network setup after 1 attempt(s)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager Traceback
>>>> (most recent call last):
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/nova/compute/manager.py", line
>>>> 1682, in _allocate_network_async
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> dhcp_options=dhcp_options)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/nova/network/api.py", line 47, in
>>>> wrapped
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     return
>>>> func(self, context, *args, **kwargs)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/nova/network/base_api.py", line 64,
>>>> in wrapper
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     res =
>>>> f(self, context, *args, **kwargs)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/nova/network/api.py", line 277, in
>>>> allocate_for_instance
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     nw_info =
>>>> self.network_rpcapi.allocate_for_instance(context, **args)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/nova/network/rpcapi.py", line 188,
>>>> in allocate_for_instance
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> macs=jsonutils.to_primitive(macs))
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line
>>>> 159, in call
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> retry=self.retry)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/transport.py", line
>>>> 90, in _send
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> timeout=timeout, retry=retry)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
>>>> line 408, in send
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> retry=retry)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
>>>> line 397, in _send
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     result =
>>>> self._waiter.wait(msg_id, timeout)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
>>>> line 298, in wait
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     reply,
>>>> ending, trylock = self._poll_queue(msg_id, timeout)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
>>>> line 238, in _poll_queue
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     message =
>>>> self.waiters.get(msg_id, timeout)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager   File
>>>> "/usr/local/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py",
>>>> line 144, in get
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager     'to
>>>> message ID %s' % msg_id)
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>> MessagingTimeout: Timed out waiting for a reply to message ID
>>>> 56262582185f4a5cb0d11a7f85239c3f
>>>>
>>>> 2015-01-19 21:17:56.350 17307 TRACE nova.compute.manager
>>>>
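>>>> (One detail that stands out in the trace above: the call goes through
>>>> nova/network/api.py and nova/network/rpcapi.py, i.e. the nova-network
>>>> RPC path, not the neutronv2 driver. If neutron is the intended backend,
>>>> it is worth checking that nova.conf actually selects it; a sketch for a
>>>> Juno-era config, where the exact option names may differ on other
>>>> releases:)
>>>>
>>>>   [DEFAULT]
>>>>   network_api_class = nova.network.neutronv2.api.API
>>>>   security_group_api = neutron
>>>>
>>>> (Otherwise the RPC call waits on a nova-network service that may not be
>>>> running at all, which would produce exactly this MessagingTimeout.)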
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Geo Varghese
>>>
>>
>>
>
>
> --
> Regards,
> Geo Varghese
>