[Openstack] nova-compute goes to XXX but is alive

Guillermo Alvarado guillermoalvarado89 at gmail.com
Thu Dec 4 06:53:23 UTC 2014


I get the following log output when the computes go to XXX:


*/var/log/nova/nova-compute.log*


2014-12-04 00:43:41.921 32947 DEBUG nova.openstack.common.lockutils [-] Got
semaphore "storage-registry-lock" lock
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:168

2014-12-04 00:43:41.922 32947 DEBUG nova.openstack.common.lockutils [-]
Attempting to grab file lock "storage-registry-lock" lock
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:178

2014-12-04 00:43:41.963 32947 DEBUG nova.openstack.common.lockutils [-] Got
file lock "storage-registry-lock" at
/var/lib/nova/instances/locks/nova-storage-registry-lock lock
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
2014-12-04 00:43:41.964 32947 DEBUG nova.openstack.common.lockutils [-] Got
semaphore / lock "do_get_storage_users" inner
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:248

2014-12-04 00:43:41.965 32947 DEBUG nova.openstack.common.lockutils [-]
Released file lock "storage-registry-lock" at
/var/lib/nova/instances/locks/nova-storage-registry-lock lock
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:210
2014-12-04 00:43:41.966 32947 DEBUG nova.openstack.common.lockutils [-]
Semaphore / lock released "do_get_storage_users" inner
/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:252
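
As far as I can tell nothing in that excerpt is an error: the file lock is
grabbed and released within a few milliseconds. For reference, those DEBUG
lines come from nova's vendored lockutils and show the normal acquire/release
cycle of an external file lock, roughly this pattern (a minimal sketch
assuming the oslo-style lockutils API; the function body is an illustrative
placeholder, not nova's actual code):

    from oslo_concurrency import lockutils

    # external=True backs the in-process semaphore with a file lock, which
    # is what produces the "Attempting to grab file lock" / "Got file lock" /
    # "Released file lock" DEBUG messages seen above.
    @lockutils.synchronized('storage-registry-lock', lock_file_prefix='nova',
                            external=True,
                            lock_path='/var/lib/nova/instances/locks')
    def do_get_storage_users():
        # Lock file held on disk while this runs:
        # /var/lib/nova/instances/locks/nova-storage-registry-lock
        return read_storage_registry()  # hypothetical helper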


BTW, I am using Ceph with RBD for Cinder.

Any ideas?

2014-12-04 0:32 GMT-06:00 Guillermo Alvarado <guillermoalvarado89 at gmail.com>:

> Ok, I installed that tool and everything shows "green", so that part looks
> fine. But in the overview tab I cannot see the blue line in the message
> rate graph that represents deliveries... (a scripted check of this is
> sketched after the quoted thread below.)
>
>
>
> 2014-12-04 0:08 GMT-06:00 Robert van Leeuwen <Robert.vanLeeuwen at spilgames.com>:
>
>> > I am having a lot of problems with nova-compute; after 30 minutes all
>> > my computes report XXX when I execute nova-manage service list.
>>
>> Usually this is an indication that rabbitmq is no longer properly
>> processing messages, e.g. because the disk is too full: if you have less
>> than 1GB of free space or so, rabbitmq keeps running but stops
>> processing messages.
>> Have a look at the rabbitmq management interface:
>> https://www.rabbitmq.com/management.html
>> It usually points you in the right direction.
>>
>> Cheers,
>> Robert van Leeuwen
>>
>>
>>
>>
>
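
For reference, both of the things discussed above can be checked from a
script against the management HTTP API (a minimal sketch in Python; the
hostname and guest/guest credentials are placeholders, and it assumes the
default management port 15672 and the requests library):

    import requests

    BASE = 'http://rabbit-host:15672/api'  # placeholder host
    AUTH = ('guest', 'guest')              # placeholder credentials

    # 1) Disk/memory alarms: when free disk drops below disk_free_limit,
    #    rabbitmq raises disk_free_alarm and stops accepting messages while
    #    the process itself stays up -- the failure mode Robert describes.
    for node in requests.get(BASE + '/nodes', auth=AUTH).json():
        print(node['name'],
              'disk_free:', node['disk_free'],
              'disk_free_alarm:', node['disk_free_alarm'],
              'mem_alarm:', node['mem_alarm'])

    # 2) Delivery rate: the delivery line in the overview graph is driven
    #    by message_stats; a missing or zero deliver rate means consumers
    #    are not picking messages up.
    overview = requests.get(BASE + '/overview', auth=AUTH).json()
    stats = overview.get('message_stats', {})
    print('deliver rate:',
          stats.get('deliver_get_details', {}).get('rate', 'n/a'))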