<div dir="ltr"><div>I think I have found a problem with qpid as the RPC backend, but I'm not sure about it. Could anyone try to reproduce it in your environment?<br></div><div><br>OpenStack Grizzly version<br></div><div><br>
</div><div>The config file needs debug=True.<br><br></div><div>1. Run "service openstack-cinder-scheduler stop" (and likewise for nova-compute, nova-scheduler, etc.)<br></div><div>2. Open /var/log/cinder/scheduler.log; you will find output like this:<br>
</div><div><br>I deployed two machines (node1 and dev202).<br><div><br>2013-05-27 06:02:46 CRITICAL [cinder] need more than 0 values to unpack<br>Traceback (most recent call last):<br> File "/usr/bin/cinder-scheduler", line 50, in <module><br>
service.wait()<br> File "/usr/lib/python2.6/site-packages/cinder/service.py", line 613, in wait<br> rpc.cleanup()<br> File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/__init__.py", line 240, in cleanup<br>
return _get_impl().cleanup()<br> File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 649, in cleanup<br> return rpc_amqp.cleanup(Connection.pool)<br> File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 671, in cleanup<br>
connection_pool.empty()<br> File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 80, in empty<br> self.get().close()<br> File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/impl_qpid.py", line 386, in close<br>
self.connection.close()<br> File "<string>", line 6, in close<br> File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 316, in close<br> ssn.close(timeout=timeout)<br>
File "<string>", line 6, in close<br> File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 749, in close<br> if not self._ewait(lambda: self.closed, timeout=timeout):<br>
File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait<br> result = self.connection._ewait(lambda: self.error or predicate(), timeout)<br> File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 208, in _ewait<br>
result = self._wait(lambda: self.error or predicate(), timeout)<br> File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 193, in _wait<br> return self._waiter.wait(predicate, timeout=timeout)<br>
File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 57, in wait<br> self.condition.wait(3)<br> File "/usr/lib/python2.6/site-packages/qpid/concurrency.py", line 96, in wait<br> sw.wait(timeout)<br>
File "/usr/lib/python2.6/site-packages/qpid/compat.py", line 53, in wait<br> ready, _, _ = select([self], [], [], timeout)<br>ValueError: need more than 0 values to unpack<br><br><br></div><div>I reported the multi-cinder-volume problem on Launchpad:<br>
<a href="https://answers.launchpad.net/cinder/+question/229456">https://answers.launchpad.net/cinder/+question/229456</a><br></div><div>I filed that because I encountered this problem; however, no service other than cinder-volume ever showed it.<br>
</div><div>Then I found that the other services' logs also print CRITICAL info, with the error occurring at self.connection.close().<br></div><div>So, as an experiment, I deleted self.connection.close() (which normally should not be removed) and watched the qpid queue information; the multi-cinder-volume problem that had confused me disappeared.<br>
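For reference, the ValueError at the bottom of the traceback can be reproduced in isolation. This is a minimal sketch, not the real qpid code path, and the function names are made up for illustration: qpid's compat.py unpacks three values from select(), and that unpack raises exactly "need more than 0 values to unpack" if select() has been replaced by something that returns an empty tuple (which can happen, for example, while the interpreter is shutting down):<br>

```python
# Minimal sketch of the failure mode, NOT the real qpid code.
# qpid/compat.py line 53 does: ready, _, _ = select([self], [], [], timeout)
# If select() no longer returns a 3-tuple of lists, the three-way unpack
# raises "need more than 0 values to unpack" (Python 2 wording;
# Python 3 says "not enough values to unpack").

def patched_select(rlist, wlist, xlist, timeout):
    # Stand-in (hypothetical) for a select() that has been replaced or
    # torn down and returns nothing instead of (rlist, wlist, xlist).
    return ()

def wait_like_qpid_compat(timeout):
    # Mimics the unpack qpid's compat.py performs on select()'s result.
    ready, _, _ = patched_select([0], [], [], timeout)  # raises ValueError
    return ready

try:
    wait_like_qpid_compat(3)
except ValueError as exc:
    print("reproduced:", exc)
```

If this is indeed the cause, a safer workaround than deleting self.connection.close() might be to wrap that close() call in a try/except so shutdown-time errors are ignored without leaking the connection, but that is only a guess on my part.<br>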
</div><div>As a result, I think the problem I found may be a bug. <br></div></div></div>