[Openstack] [grizzly] Problems with qpid as RPC backend

minmin ren rmm0811 at gmail.com
Tue May 28 07:21:02 UTC 2013


I think I have found some problems with qpid as the RPC backend, but I am not
sure about it. Could anyone try to reproduce it in your environment?

OpenStack Grizzly version.

The config file needs debug = True; see the snippet below.
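For reference, this is the setting I mean, a minimal sketch of
/etc/cinder/cinder.conf (the qpid option names are the usual Grizzly ones;
adjust the hostname for your deployment):

    [DEFAULT]
    debug = True
    rpc_backend = cinder.openstack.common.rpc.impl_qpid
    qpid_hostname = 192.168.0.1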

1. Stop a service: service openstack-cinder-scheduler stop (the same happens
with nova-compute, nova-scheduler, etc.)
2. Open the log ("vi /var/log/cinder/scheduler.log"); you will find info like
the traceback below.

I deployed two machines (node1 and dev202).

2013-05-27 06:02:46 CRITICAL [cinder] need more than 0 values to unpack
Traceback (most recent call last):
  File "/usr/bin/cinder-scheduler", line 50, in <module>
    service.wait()
  File "/usr/lib/python2.6/site-packages/cinder/service.py", line 613, in wait
    rpc.cleanup()
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/__init__.py", line 240, in cleanup
    return _get_impl().cleanup()
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 649, in cleanup
    return rpc_amqp.cleanup(Connection.pool)
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 671, in cleanup
    connection_pool.empty()
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 80, in empty
    self.get().close()
  File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 386, in close
    self.connection.close()
  File "<string>", line 6, in close
  File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close
    ssn.close(timeout=timeout)
  File "<string>", line 6, in close
  File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close
    if not self._ewait(lambda: self.closed, timeout=timeout):
  File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
    result = self.connection._ewait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait
    result = self._wait(lambda: self.error or predicate(), timeout)
  File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 193, in _wait
    return self._waiter.wait(predicate, timeout=timeout)
  File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 57, in wait
    self.condition.wait(3)
  File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 96, in wait
    sw.wait(timeout)
  File "/usr/lib/python2.6/site-packages/qpid/compat.py", line 53, in wait
    ready, _, _ = select([self], [], [], timeout)
ValueError: need more than 0 values to unpack
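
The last frame looks like the key to me: qpid/compat.py unpacks the result of
select() into three values, but here it apparently got an empty sequence back.
A minimal Python sketch of what that ValueError means (this is my reading of
the traceback, not the actual qpid code):

    import select

    # select() normally returns a 3-tuple (readable, writable, errored),
    # so an unpack like the one in qpid/compat.py is fine:
    ready, _, _ = select.select([], [], [], 0)

    # The CRITICAL above means the select() call returned an empty
    # sequence instead, which reproduces the same error:
    ready, _, _ = ()    # ValueError: need more than 0 values to unpack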


I filed the multi-cinder-volume problem on Launchpad:
https://answers.launchpad.net/cinder/+question/229456
That is where I first hit this, although no service other than cinder-volume
ever showed that problem. Then I noticed that the logs of the other services
print the same critical info, failing at self.connection.close(). As an
experiment I deleted the self.connection.close() call (which of course should
not really be removed), watched the qpid queue information, and the confusing
multi-cinder-volume behaviour disappeared.
As a result, I think the problem I found may be a bug.
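
For discussion: instead of deleting the call outright, I wonder whether
guarding it would be enough. Below is only a sketch; the method body is
reconstructed from the traceback (impl_qpid.py, the frame at line 386 above),
not copied from the Grizzly source, and the try/except is my own addition,
not a proper fix:

    # Defensive variant of Connection.close() in
    # cinder/openstack/common/rpc/impl_qpid.py
    def close(self):
        """Close/release this connection."""
        try:
            self.connection.close()
        except Exception:
            # The qpid client can raise from its internal select() wait
            # while the process is shutting down; ignoring the error here
            # lets cleanup() finish instead of dying with a CRITICAL.
            pass
        self.connection = None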