[Openstack-operators] [oslo] RabbitMQ queue TTL issues moving to Liberty

Jake Yip jake.yip at unimelb.edu.au
Tue Aug 16 08:27:10 UTC 2016


Hi Matt,

We seem to be doing OK with 3.6.3. IIRC, 3.6.2 was causing the stats DB to
fall over every now and then, causing huge problems.

Regards,
Jake

Jake Yip,
DevOps Engineer,
Core Services, NeCTAR Research Cloud,
The University of Melbourne

On Tue, Aug 16, 2016 at 6:34 AM, Matt Fischer <matt at mattfischer.com> wrote:

> Has anyone had any luck improving the stats DB issue by upgrading rabbit to
> 3.6.3 or newer? We're at 3.5.6 now; 3.6.2 parallelized stats processing,
> and 3.6.3 has additional memory leak fixes for it. What we've been seeing
> is occasional slow and steady climbs in rabbit memory usage until the
> cluster falls over when it hits the memory limit. The last climb played out
> over about 12 hours, once we went back and looked at the charts.
>
> I'm hoping to try 3.6.5, but we have no way to repro this outside of
> production, and even there, short of bouncing neutron and all the agents
> over and over, I'm not sure I could recreate it.
>
> Note: we already have the collect interval set to 30k (ms), per the
> recommendation from the Rabbit Ops talk in Tokyo, but no other
> optimizations for the stats DB. Some folks here are considering a cron job
> to bounce it every few hours.
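>
> For reference, both knobs live in rabbitmq.config; a minimal sketch in the
> classic Erlang-term format (values here are examples, not recommendations):
>
>     %% e.g. /etc/rabbitmq/rabbitmq.config
>     [
>       {rabbit, [
>         %% collect management stats every 30000 ms (the "30k" above)
>         %% instead of the 5000 ms default, to reduce stats DB churn
>         {collect_statistics_interval, 30000},
>         %% fraction of system RAM at which RabbitMQ blocks publishers
>         %% (the memory limit the cluster falls over at)
>         {vm_memory_high_watermark, 0.4}
>       ]}
>     ].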
>
>
> On Thu, Jul 28, 2016 at 9:10 AM, Kris G. Lindgren <klindgren at godaddy.com>
> wrote:
>
>> We also believe the change from auto-delete queues to 10-minute expiration
>> queues was the cause of our rabbit woes a month or so ago, where we had
>> rabbitmq servers filling their stats DB and consuming 20+ GB of RAM before
>> hitting the rabbitmq memory high watermark.  We ran for 6+ months without
>> issue under Kilo, and when we moved to Liberty rabbit consistently started
>> falling on its face.  We eventually turned down the stats collection
>> interval, but I would imagine that keeping stats around for 10 minutes for
>> queues that were used for a single RPC message, when we are passing 1500+
>> messages per second, wasn’t helping anything.  We haven’t tried lowering
>> the timeout values to see if that makes things better, but we did identify
>> this change as something that could contribute to our rabbitmq issues.
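>>
>> For anyone wanting to experiment, the knob in question is oslo.messaging’s
>> rabbit_transient_queues_ttl option (in seconds); a minimal sketch of
>> lowering it in a service’s config, with the value as an example only:
>>
>>     [oslo_messaging_rabbit]
>>     # Transient (reply_* and fanout) queues left unused for this many
>>     # seconds are expired by RabbitMQ; the 10-minute expiration above
>>     # corresponds to 600.
>>     rabbit_transient_queues_ttl = 60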
>>
>>
>> ___________________________________________________________________
>> Kris Lindgren
>> Senior Linux Systems Engineer
>> GoDaddy
>>
>> From: Dmitry Mescheryakov <dmescheryakov at mirantis.com>
>> Date: Thursday, July 28, 2016 at 6:17 AM
>> To: Sam Morrison <sorrison at gmail.com>
>> Cc: OpenStack Operators <openstack-operators at lists.openstack.org>
>> Subject: Re: [Openstack-operators] [oslo] RabbitMQ queue TTL issues
>> moving to Liberty
>>
>> 2016-07-27 2:20 GMT+03:00 Sam Morrison <sorrison at gmail.com>:
>>
>> On 27 Jul 2016, at 4:05 AM, Dmitry Mescheryakov
>> <dmescheryakov at mirantis.com> wrote:
>>
>> 2016-07-26 2:15 GMT+03:00 Sam Morrison <sorrison at gmail.com>:
>>
>> The queue TTL happens on reply queues and fanout queues. I don’t think it
>> should happen on fanout queues; they should auto-delete. I can understand
>> the reason for having a TTL on reply queues, though, so maybe that would
>> be a way forward?
>>
>> Or am I missing something, and is it needed on fanout queues too?
>>
>> I would say we do need fanout queues to expire, for the very same reason
>> we want reply queues to expire instead of auto-deleting. In the case of a
>> broken connection, the expiration gives the client time to reconnect and
>> continue consuming from the queue. With auto-delete queues, it was a
>> frequent occurrence that RabbitMQ deleted the queue before the client
>> reconnected ... along with all unconsumed messages in it.
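>>
>> At the AMQP level the difference is just an auto_delete flag versus a
>> per-queue x-expires argument; a minimal pika sketch of the two styles of
>> declaration (illustrative only, queue names made up, not oslo.messaging’s
>> actual code):
>>
>>     import pika
>>
>>     conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
>>     ch = conn.channel()
>>
>>     # auto-delete: RabbitMQ drops the queue, and any unconsumed
>>     # messages in it, as soon as the last consumer disconnects
>>     ch.queue_declare(queue="reply_auto_delete", auto_delete=True)
>>
>>     # TTL-based expiry: the queue survives a disconnect and is only
>>     # deleted after 600 s with no consumers, giving the client time
>>     # to reconnect and drain it
>>     ch.queue_declare(queue="reply_with_ttl",
>>                      arguments={"x-expires": 600 * 1000})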
>>
>> But in the case of fanout queues, if there is a broken connection, can’t
>> the service just recreate the queue if it doesn’t exist? I guess that
>> means it needs to store the state of what the queue name is, though?
>>
>> Yes, they could lose messages directed at them, but all the services I
>> know of that consume on fanout queues have resync functionality for this
>> very case.
>>
>> If the connection is broken, will oslo.messaging know how to connect to
>> the same queue again anyway? I would have thought it would handle the
>> disconnect and then reconnect, either with the same queue name or a new
>> queue altogether?
>>
>> oslo.messaging handles reconnect perfectly: on connect it just
>> unconditionally declares the queue and starts consuming from it. If the
>> queue already exists, the declaration is simply ignored by RabbitMQ.
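>>
>> That works because queue.declare is idempotent in AMQP; a minimal pika
>> sketch of the declare-then-consume pattern (an illustration, not the
>> actual oslo.messaging implementation):
>>
>>     import pika
>>
>>     def consume_forever(queue_name, on_message):
>>         conn = pika.BlockingConnection(
>>             pika.ConnectionParameters("localhost"))
>>         ch = conn.channel()
>>         # Declaring an existing queue with identical arguments is a
>>         # no-op, so this is safe to run after every reconnect; messages
>>         # that piled up while we were away are still in the queue.
>>         ch.queue_declare(queue=queue_name,
>>                          arguments={"x-expires": 600 * 1000})
>>         ch.basic_consume(queue=queue_name,
>>                          on_message_callback=on_message)
>>         ch.start_consuming()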
>>
>> As for your earlier point that services resync and hence messages lost in
>> fanout queues are not that important, I can’t comment on that. But after
>> some thought I do agree that a long expiration time for fanouts is
>> inadequate for big deployments anyway. How about we split
>> rabbit_transient_queues_ttl into two parameters, one for reply queues and
>> one for fanout ones? In that case, people concerned with messages piling
>> up in fanouts might set it to 1, which will make these queues behave
>> virtually like auto-delete ones (though I strongly recommend leaving it
>> at 20 seconds or more, to give the service a chance to reconnect).
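>>
>> Sketching what that split might look like (these option names are
>> hypothetical; only rabbit_transient_queues_ttl exists today):
>>
>>     [oslo_messaging_rabbit]
>>     # hypothetical: keep reply queues long enough to survive a reconnect
>>     rabbit_reply_queues_ttl = 600
>>     # hypothetical: let fanout queues expire almost immediately,
>>     # approximating the old auto-delete behaviour
>>     rabbit_fanout_queues_ttl = 20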
>>
>> Thanks,
>> Dmitry
>>
>> Sam