[Openstack] nova-compute goes to XXX but is alive

Guillermo Alvarado guillermoalvarado89 at gmail.com
Thu Dec 4 16:52:23 UTC 2014


One compute node was reinstalled and the problem still happens. Everything
is OK, but 40 minutes later all nodes go XXX...
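For context on what "XXX" means here: nova-manage reports a service as XXX/down when its last database heartbeat is older than the configured service_down_time (60 seconds by default). A minimal sketch of that check, assuming the default timeout (the helper name `is_service_up` is illustrative, not nova's actual function):

```python
from datetime import datetime, timedelta

# Sketch of nova's service-liveness decision: a service shows as XXX/down
# when its last reported heartbeat is older than service_down_time
# (CONF.service_down_time, 60 seconds by default).
SERVICE_DOWN_TIME = 60  # seconds

def is_service_up(last_heartbeat: datetime, now: datetime,
                  down_time: int = SERVICE_DOWN_TIME) -> bool:
    """Return True if the heartbeat is recent enough to count as alive."""
    return (now - last_heartbeat) <= timedelta(seconds=down_time)

now = datetime(2014, 12, 4, 0, 45, 0)
print(is_service_up(datetime(2014, 12, 4, 0, 44, 30), now))  # recent -> True
print(is_service_up(datetime(2014, 12, 4, 0, 43, 0), now))   # stale  -> False
```

This is why both clock skew and a stalled message queue produce the same symptom: either way, the heartbeat timestamps stop looking fresh to the controller.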
On Dec 4, 2014 9:53 AM, "Guillermo Alvarado" <guillermoalvarado89 at gmail.com>
wrote:

> Yes, all nodes have the same time...
> On Dec 4, 2014 6:55 AM, "George Mihaiescu" <lmihaiescu at gmail.com>
> wrote:
>
>> Make sure the time is in sync on all your compute nodes and controller.
>> On 4 Dec 2014 02:05, "Guillermo Alvarado" <guillermoalvarado89 at gmail.com>
>> wrote:
>>
>>> I got this log when the computes go XXX
>>>
>>>
>>> *var/log/nova/nova-compute.log*
>>>
>>>
>>> 2014-12-04 00:43:41.921 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Got semaphore "storage-registry-lock" lock
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:168
>>>
>>> 2014-12-04 00:43:41.922 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Attempting to grab file lock "storage-registry-lock" lock
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:178
>>>
>>> 2014-12-04 00:43:41.963 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Got file lock "storage-registry-lock" at
>>> /var/lib/nova/instances/locks/nova-storage-registry-lock lock
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
>>> 2014-12-04 00:43:41.964 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Got semaphore / lock "do_get_storage_users" inner
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:248
>>>
>>> 2014-12-04 00:43:41.965 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Released file lock "storage-registry-lock" at
>>> /var/lib/nova/instances/locks/nova-storage-registry-lock lock
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:210
>>> 2014-12-04 00:43:41.966 32947 DEBUG nova.openstack.common.lockutils [-]
>>> Semaphore / lock released "do_get_storage_users" inner
>>> /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
>>>
>>>
>>> BTW, I am using Ceph with RBD for Cinder.
>>>
>>> Any ideas?
>>>
>>> 2014-12-04 0:32 GMT-06:00 Guillermo Alvarado <
>>> guillermoalvarado89 at gmail.com>:
>>>
>>>> OK, I installed that tool. Everything shows "green", so it looks OK, but
>>>> in the message rate graph on the Overview tab I cannot see the blue line
>>>> that represents deliveries...
>>>>
>>>>
>>>>
>>>> 2014-12-04 0:08 GMT-06:00 Robert van Leeuwen <
>>>> Robert.vanLeeuwen at spilgames.com>:
>>>>
>>>>> > I am having a lot of problems with nova-compute; after 30 minutes
>>>>> > all my computes report XXX when I execute nova-manage service list
>>>>>
>>>>> Usually this is an indication that rabbitmq is no longer processing
>>>>> messages properly, e.g. the disk is too "full" (if you have less than
>>>>> 1 GB of free space or so, rabbitmq keeps running but stops processing
>>>>> messages).
>>>>> Have a look at the rabbitmq management interface:
>>>>> https://www.rabbitmq.com/management.html
>>>>> It usually points you in the right direction.
>>>>>
>>>>> Cheers,
>>>>> Robert van Leeuwen
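The behavior Robert describes matches RabbitMQ's resource alarms: when free disk space drops below disk_free_limit (or memory exceeds its watermark), the broker raises an alarm and blocks publishers while staying up, so heartbeats stop flowing even though rabbitmq is "alive". The management plugin exposes this via GET /api/nodes (default port 15672). A minimal sketch that checks the alarm flags in that response (the sample payload below is illustrative, trimmed to the relevant fields):

```python
import json

# Sketch: flag RabbitMQ resource alarms from the management API's
# /api/nodes response. A raised disk_free_alarm or mem_alarm means the
# broker blocks publishers -- "alive but not delivering", which would
# make nova services go XXX despite rabbitmq still running.

def nodes_with_alarms(nodes_json: str):
    """Return names of nodes that have a memory or disk-free alarm raised."""
    return [n["name"] for n in json.loads(nodes_json)
            if n.get("mem_alarm") or n.get("disk_free_alarm")]

# Example payload, trimmed; in practice fetch it from
# http://<controller>:15672/api/nodes with the management credentials.
sample = json.dumps([
    {"name": "rabbit@ctrl1", "mem_alarm": False, "disk_free_alarm": True,
     "disk_free": 500000000, "disk_free_limit": 1000000000},
    {"name": "rabbit@ctrl2", "mem_alarm": False, "disk_free_alarm": False},
])
print(nodes_with_alarms(sample))  # -> ['rabbit@ctrl1']
```

Checking these flags directly can be quicker than eyeballing the Overview graphs, since a node in alarm still shows as running.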
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>>

