[Openstack-operators] RabbitMQ HA

Belmiro Moreira moreira.belmiro.email.lists at gmail.com
Sat Feb 1 20:30:02 UTC 2014


Hi Subbu,
correct,
but my point was that it would be nice if it were possible to define multiple rabbit hosts, as is already possible for other nova services.
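
For context, a minimal nova.conf sketch of what that looks like for the services that do support it (hostnames below are placeholders, not our actual brokers):

```ini
[DEFAULT]
# Comma-separated list of RabbitMQ brokers (host:port pairs) the service
# may connect to; the client fails over to the next entry when one dies.
rabbit_hosts = rabbit1.example.com:5672,rabbit2.example.com:5672,rabbit3.example.com:5672
# Tell the client to declare mirrored (HA) queues on the cluster.
rabbit_ha_queues = true
```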

Belmiro 
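
P.S. Client-side failover over such a host list is conceptually simple; a hedged Python sketch of the idea (illustrative only, not nova's actual messaging-driver code):

```python
import socket

def connect_first_available(hosts, timeout=2.0, connect=socket.create_connection):
    """Try each (host, port) pair in order; return the first live connection.

    Illustration of client-side failover over a list of brokers; real
    OpenStack services delegate this to their messaging driver.
    """
    last_error = None
    for host, port in hosts:
        try:
            return connect((host, port), timeout)
        except OSError as exc:
            last_error = exc  # remember why this broker was skipped
    raise ConnectionError("no broker reachable") from last_error
```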


On Feb 1, 2014, at 20:35 , Allamaraju, Subbu <subbu at subbu.org> wrote:

> But you could use a VIP for each rabbit cluster and use a TCP load balancer. Correct?
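> 
> For example, a minimal haproxy.cfg sketch of that approach (hostnames and addresses are placeholders):
> 
> ```
> # TCP load balancing for a RabbitMQ cluster behind a VIP.
> listen rabbitmq_cluster
>     bind 192.0.2.10:5672        # the VIP; address is a placeholder
>     mode tcp
>     balance roundrobin
>     timeout client 3h           # AMQP connections are long-lived
>     timeout server 3h
>     server rabbit1 10.0.0.1:5672 check inter 5s rise 2 fall 3
>     server rabbit2 10.0.0.2:5672 check inter 5s rise 2 fall 3
>     server rabbit3 10.0.0.3:5672 check inter 5s rise 2 fall 3
> ```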
> 
> Subbu
> 
> On Feb 1, 2014, at 5:19 AM, Belmiro Moreira <moreira.belmiro.email.lists at gmail.com> wrote:
> 
>> Hi,
>> to describe our experience,
>> we are using RabbitMQ in active/active with mirrored queues.
>> We have 4 cells; each cell has 3 RabbitMQ servers, with more than 2000 compute nodes in total.
>> 
>> Until now we didn’t have any particular issue with this configuration (we had a network partition but easily fixed).
>> 
>> The problem with cells is that, between cells, it is only possible to define one rabbit host.
>> There is an open bug: https://bugs.launchpad.net/nova/+bug/1178541
>> 
>> Belmiro
>> 
>> 
>> 
>> On Jan 31, 2014, at 18:48 , Allamaraju, Subbu <subbu at subbu.org> wrote:
>> 
>>> Alexander,
>>> 
>>> Thanks for sharing your experience. We've been using RabbitMQ active-active behind a VIP/TCP LB. Just wanted to check if the HA Guide's recommendation is valid.
>>> 
>>> Subbu
>>> 
>>> On Jan 31, 2014, at 1:27 AM, Papaspyrou, Alexander <papaspyrou at adesso-mobile.de> wrote:
>>> 
>>>> Subbu,
>>>> 
>>>> we are running on RMQ 3.2 with server-side HA policy on all OpenStack queues.
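>>>> 
>>>> For illustration, such a server-side policy can be declared roughly like this (vhost and policy name are placeholders):
>>>> 
>>>> ```
>>>> # Mirror every queue to all cluster nodes and sync new mirrors
>>>> # automatically; the pattern ".*" matches all queue names.
>>>> rabbitmqctl set_policy -p / ha-all ".*" \
>>>>     '{"ha-mode":"all","ha-sync-mode":"automatic"}' \
>>>>     --apply-to queues
>>>> ```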
>>>> 
>>>> Even with two Rabbit servers dispersed over two sufficiently distant data centers (different subnets, different locations, connected via fibre over a number of routers), and besides network partitions here and there (which RMQ fixes automatically most of the time, if configured properly), our setup runs like a charm. Hardware is negligible; RMQ almost never goes beyond a GB or so of memory usage, and the CPU is usually bored to death.
>>>> 
>>>> We found the DRBD setup much more flaky and rather cumbersome to set up, and frankly, I never understood why people take the dark alley of Linux-HA. If you want, you can put ldirectord in front of the two boxes to balance the load (we did that, although from a performance perspective this is not necessary). ldirectord also detects whether one of the boxes is out to lunch (which never happened so far) and reroutes the traffic automatically.
>>>> 
>>>> To be really sure, put a corosync/pacemaker-managed failover (virtual) IP in front of the two boxes, and run corosync/pacemaker/ldirectord/rabbitmq on each box, with properly configured VIP transportation – that’s where I’d invest the effort to dive into the details of Linux HA glory.
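>>>> 
>>>> A minimal crm-shell sketch of such a failover VIP resource (address, netmask, and interface are placeholders):
>>>> 
>>>> ```
>>>> # Pacemaker-managed virtual IP that floats between the two boxes.
>>>> crm configure primitive rabbit_vip ocf:heartbeat:IPaddr2 \
>>>>     params ip=192.0.2.10 cidr_netmask=24 nic=eth0 \
>>>>     op monitor interval=10s
>>>> ```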
>>>> 
>>>> Regards,
>>>> Alexander
>>>> -- 
>>>> adesso mobile solutions GmbH
>>>> Dr.-Ing. Alexander Papaspyrou
>>>> Senior System Architect
>>>> IT Operations
>>>> 
>>>> Freie-Vogel-Str. 391 | 44269 Dortmund
>>>> T +49 231 930 66480 | F +49 231 930 9317 | M +49 172  209 4739
>>>> Mail: papaspyrou at adesso-mobile.de | Web: www.adesso-mobile.de | Mobil-Web: mobil.adesso-mobile.de
>>>> 
>>>> Vertretungsberechtigte Geschäftsführer: Dr. Josef Brewing, Frank Dobelmann
>>>> Registergericht: Amtsgericht Dortmund
>>>> Registernummer: HRB 13763
>>>> Umsatzsteuer-Identifikationsnummer gemäß § 27 a Umsatzsteuergesetz: DE201541832
>>>> 
>>>> 
>>>> 
>>>> 
>>>> Am 20.01.2014 um 05:33 schrieb Allamaraju, Subbu <subbu at subbu.org>:
>>>> 
>>>>> OpenStack HA guide (http://docs.openstack.org/high-availability-guide/content/s-rabbitmq.html) says that Pacemaker/DRBD approach is preferred over active-active mirrored queues. Details are sparse in the guide. Is anyone aware of any data/issues first hand?
>>>>> 
>>>>> Thanks for any pointers.
>>>>> 
>>>>> Subbu
>>>>> _______________________________________________
>>>>> OpenStack-operators mailing list
>>>>> OpenStack-operators at lists.openstack.org
>>>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>>> 
>>> 
>>> 
>> 
> 



