[Openstack-operators] RabbitMQ issues since upgrading to Icehouse
Joe Topjian
joe at topjian.net
Tue Sep 2 19:06:08 UTC 2014
Hi Sam,
We upgraded to Icehouse over the weekend and had some issues with Rabbit.
The number of RabbitMQ connections went from ~70 to ~380 post-upgrade, and
we had reports of users being unable to reach the metadata service and of
instances taking longer to boot while they waited for metadata.
I noticed that there were several more nova-api-metadata processes running
on our compute nodes than before (nova-network multi-host environment). I
found the config option "metadata_workers", which defaults to the number of
CPUs. I changed it to "2" and restarted everything across the board. Our
total number of Rabbit connections is now sitting at ~150.
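For reference, this is roughly what that looks like in nova.conf on the
compute nodes (just a sketch; the option lives in [DEFAULT] on our Icehouse
install, your layout may differ):

    [DEFAULT]
    # cap the number of metadata API worker processes
    # (the Icehouse default is the host's CPU count)
    metadata_workers = 2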
I'm not sure if this helps at all. I haven't been running Icehouse long
enough to see whether any of the reply_xxxxxx queues fill up with messages.
The times I checked showed this was not happening (and it is still not
happening).
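In case it's useful to anyone, the quick check I've been doing is along
these lines (a rough sketch, assuming the default / vhost):

    # list reply_ queues with their unacked message and consumer counts
    rabbitmqctl list_queues name messages_unacknowledged consumers | grep reply_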
Joe
On Mon, Aug 25, 2014 at 5:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
> Hi,
>
> Since upgrading to Icehouse we have seen increased issues with messaging
> relating to RabbitMQ.
>
> 1. We often get reply_xxxxxx queues starting to fill up with unacked
> messages. To fix this we need to restart the offending service, usually
> nova-api or nova-compute.
>
> 2. If you kill a node so as to force an *ungraceful* disconnect from
> rabbit, the connection “object?” still sticks around in rabbit. Starting
> the service again means there are now 2 consumers: the new one and the
> phantom old one. This then leads to messages piling up in the unacked
> queue. This feels like a rabbit bug to me, but I thought I’d mention it
> here too.
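>
> For what it's worth, one rough way to see the extra consumer (assuming
> the default / vhost) would be:
>
>     rabbitmqctl list_queues name consumers messages_unacknowledged
>
> A queue showing 2 consumers right after a single service restart is the
> phantom case.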
>
>
> We have a setup that includes Icehouse computes and Havana computes in
> the same cloud, and we only see this on the Icehouse computes. This is
> using Trusty and RabbitMQ 3.3.4.
>
>
> Has anyone else seen anything like this?
>
> Thanks,
> Sam
>
>