[Openstack] [Openstack-operators] RabbitMQ issues since upgrading to Icehouse
Sam Morrison
sorrison at gmail.com
Wed Sep 3 05:31:39 UTC 2014
No problem setting it to 2; the problem arises when you upgrade to Icehouse: the default for metadata workers has changed from 1 to the number of CPU cores.
So if you don’t change your config you’ll all of a sudden get a lot more nova-api-metadata workers running. (If you’re using multi-host and have 160 compute nodes with 64 cores each, that’s suddenly a *lot* more api-metadata workers...)
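If you want the old behaviour back, pin it explicitly in nova.conf. A minimal sketch (stock option name; pick whatever count suits your hosts):

  [DEFAULT]
  # Icehouse defaults this to the number of CPU cores on the host;
  # Havana effectively ran a single metadata worker.
  metadata_workers = 2
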
Sam
On 3 Sep 2014, at 3:20 pm, Tim Bell <Tim.Bell at cern.ch> wrote:
> What was the problem with metadata workers set to 2?
>
> Tim
>
>> -----Original Message-----
>> From: Sam Morrison [mailto:sorrison at gmail.com]
>> Sent: 03 September 2014 00:55
>> To: Abel Lopez
>> Cc: openstack-operators at lists.openstack.org; openstack at lists.openstack.org
>> Subject: Re: [Openstack-operators] RabbitMQ issues since upgrading to
>> Icehouse
>>
>> Hi Abel,
>>
>> We were running Havana on Precise before. We moved to Icehouse on Trusty.
>> So new kombu etc. too.
>> We also moved to new rabbit servers so all the queues were fresh etc.
>>
>> Joe: Yeah we have metadata_workers set to 2 also, a little gotcha for the
>> Icehouse upgrade.
>>
>> Cheers,
>> Sam
>>
>>
>>
>> On 3 Sep 2014, at 6:28 am, Abel Lopez <alopgeek at gmail.com> wrote:
>>
>>> What release were you running before Icehouse?
>>> I'm curious if you purged/deleted queues during the upgrade.
>>> Might be useful to start fresh with your rabbit, like completely trash your
>>> mnesia during a maintenance window (obviously with your services stopped) so
>>> they recreate the queues at startup.
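>>>
>>> For the record the reset itself is just this (stock rabbitmqctl, assuming a single rabbit node and all openstack services stopped):
>>>
>>>   rabbitmqctl stop_app
>>>   rabbitmqctl reset      # wipes the mnesia db: queues, exchanges, bindings, users, vhosts
>>>   rabbitmqctl start_app
>>>
>>> Note that reset also drops your users/vhosts/permissions, so re-create the openstack user before starting services again.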
>>> Also, was kombu upgraded along with your openstack release?
>>>
>>> On Aug 25, 2014, at 4:17 PM, Sam Morrison <sorrison at gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Since upgrading to Icehouse we have seen increased issues with messaging
>>>> relating to RabbitMQ.
>>>>
>>>> 1. We often get reply_xxxxxx queues starting to fill up with unacked
>>>> messages. To fix this we need to restart the offending service, usually
>>>> nova-api or nova-compute.
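>>>>
>>>> (A quick way to spot the backlog before restarting anything, using stock rabbitmqctl:
>>>>
>>>>   rabbitmqctl list_queues name messages_unacknowledged | grep reply_
>>>>
>>>> any reply_ queue whose unacked count keeps growing points at the service that needs the restart.)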
>>>>
>>>> 2. If you kill a node so as to force an *ungraceful* disconnect from rabbit, the
>>>> connection "object" still sticks around in rabbit. Starting the service again
>>>> means there are now 2 consumers: the new one and the phantom old one. This
>>>> then leads to messages piling up in the unacked queue. This feels like a rabbit
>>>> bug to me but just thought I'd mention it here too.
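>>>>
>>>> You can actually see the phantom: consumers is a valid queue info item, so something like
>>>>
>>>>   rabbitmqctl list_queues name consumers messages_unacknowledged
>>>>
>>>> shows 2 consumers on the affected queue while the phantom connection is still around.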
>>>>
>>>>
>>>> We have a setup that includes Icehouse computes and Havana computes
>>>> in the same cloud, and we only see this on the Icehouse computes. This is
>>>> using Trusty and RabbitMQ 3.3.4.
>>>>
>>>>
>>>> Has anyone seen anything like this too?
>>>>
>>>> Thanks,
>>>> Sam
>>>>
>>>>