[openstack-dev] [Openstack] High Available queues in rabbitmq

Rosa, Andrea (HP Cloud Services) andrea.rosa at hp.com
Thu Jul 26 11:10:02 UTC 2012


Hi Eugene,

Thanks for the patch.
I have a question:
it seems to me that this patch is a (good) starting point for a broader change that would let us use HA in an active/active configuration with RMQ.
As far as I know, with that configuration we need to add some extra logic on the consumer side to deal with the "consumer cancellation notification" and with messages duplicated by a potential re-send after a failover.
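For illustration, a minimal consumer-side sketch of that kind of extra logic, written against plain Python objects; the class name, handle_message() and the 'unique_id' field are hypothetical placeholders, not nova or kombu APIs:

    import collections

    class DeduplicatingConsumer(object):
        """Wraps a message callback and drops duplicates re-sent after a failover."""

        def __init__(self, callback, history_size=10000):
            self.callback = callback
            self.seen = collections.OrderedDict()  # message id -> True, in arrival order
            self.history_size = history_size

        def handle_message(self, body, message):
            # Assumes the publisher stamps every message body with a 'unique_id'.
            msg_id = body.get('unique_id')
            if msg_id is not None and msg_id in self.seen:
                message.ack()  # duplicate re-delivered by the broker; discard it
                return
            if msg_id is not None:
                self.seen[msg_id] = True
                if len(self.seen) > self.history_size:
                    self.seen.popitem(last=False)  # evict the oldest remembered id
            self.callback(body)
            message.ack()

The consumer cancellation notification itself would additionally require re-declaring the queue and re-subscribing when the broker cancels the consumer after a mirror promotion.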

Is that correct?
Is there some plan to have a blueprint for this change?

Regards
--
Andrea Rosa


>-----Original Message-----
>From: Eugene Kirpichov [mailto:ekirpichov at gmail.com]
>Sent: 26 July 2012 00:46
>To: Alessandro Tagliapietra; rbryant at redhat.com
>Cc: Rosa, Andrea (HP Cloud Services); OpenStack Development Mailing
>List; openstack at lists.launchpad.net
>Subject: Re: [openstack-dev] [Openstack] High Available queues in
>rabbitmq
>
>Gentlemen,
>
>Here is my patch: https://review.openstack.org/#/c/10305/
>It also depends on another small patch
>https://review.openstack.org/#/c/10197
>
>I'd like to ask someone to review it.
>Also, how do we get these changes into nova? It seems that nova has a
>copy-paste of openstack-common inside it; should I just mirror the
>changes to nova once they're accepted in openstack-common?
>
>I'm cc'ing Russell Bryant because he originally created the
>openstack-common module.
>
>On Wed, Jul 25, 2012 at 3:03 AM, Alessandro Tagliapietra
><tagliapietra.alessandro at gmail.com> wrote:
>> Yup, using it as a resource is the "old" way, as described at
>> http://www.rabbitmq.com/ha.html
>> Active/active makes sure that you have no downtime, and it's simpler since
>> you don't need to use DRBD.
>>
>> 2012/7/25 Rosa, Andrea (HP Cloud Services) <andrea.rosa at hp.com>
>>
>>> Sorry for my question; I have just seen from the original thread that
>>> we are talking about HA with an active/active solution.
>>> --
>>> Andrea Rosa
>>>
>>> >-----Original Message-----
>>> >From: Rosa, Andrea (HP Cloud Services)
>>> >Sent: 25 July 2012 10:45
>>> >To: Eugene Kirpichov
>>> >Cc: openstack-dev at lists.openstack.org; Alessandro Tagliapietra;
>>> >openstack at lists.launchpad.net
>>> >Subject: Re: [openstack-dev] [Openstack] High Available queues in
>>> >rabbitmq
>>> >
>>> >Hi
>>> >
>>> >Your patch doesn't use a Resource manager, so are you working on an
>>> >Active/Active
>>> >configuration using mirrored queues? Or are you working on a cluster
>>> >configuration?
>>> >
>>> >I am really interested in that change; thanks for your help.
>>> >Regards
>>> >--
>>> >Andrea Rosa
>>> >
>>> >>-----Original Message-----
>>> >>From: openstack-bounces+andrea.rosa=hp.com at lists.launchpad.net
>>> >>[mailto:openstack-bounces+andrea.rosa=hp.com at lists.launchpad.net] On
>>> >>Behalf Of Alessandro Tagliapietra
>>> >>Sent: 24 July 2012 17:58
>>> >>To: Eugene Kirpichov
>>> >>Cc: openstack-dev at lists.openstack.org; openstack at lists.launchpad.net
>>> >>Subject: Re: [Openstack] High Available queues in rabbitmq
>>> >>
>>> >>Oh, so without the need for a floating IP between the hosts.
>>> >>Good job, thanks for the help.
>>> >>
>>> >>Best
>>> >>
>>> >>Alessandro
>>> >>
>>> >>On 24 Jul 2012, at 17:49, Eugene Kirpichov wrote:
>>> >>
>>> >>> Hi Alessandro,
>>> >>>
>>> >>> My patch is about removing the need for Pacemaker (it's Pacemaker
>>> >>> that I meant by the term "TCP load balancer").
>>> >>>
>>> >>> I didn't submit the patch yesterday because I underestimated the
>>> >>> effort to write unit tests for it and found a few issues on the way. I
>>> >>> hope I'll finish today.
>>> >>>
>>> >>> On Tue, Jul 24, 2012 at 12:00 AM, Alessandro Tagliapietra
>>> >>> <tagliapietra.alessandro at gmail.com> wrote:
>>> >>>> Sorry for the delay, I was away from work.
>>> >>>> Awesome work, Eugene. I don't need the patch immediately as I'm still
>>> >>>> building the infrastructure.
>>> >>>> Will it take a lot of time to land in the Ubuntu repositories?
>>> >>>>
>>> >>>> Why did you say you need load balancing? You could use only the master
>>> >>>> node and, in case rabbitmq-server dies, switch the IP to the new master
>>> >>>> with Pacemaker; that's how I would do it.
>>> >>>>
>>> >>>> Best Regards
>>> >>>>
>>> >>>> Alessandro
>>> >>>>
>>> >>>>
>>> >>>> On 23 Jul 2012, at 21:49, Eugene Kirpichov wrote:
>>> >>>>
>>> >>>>> +openstack-dev@
>>> >>>>>
>>> >>>>> To openstack-dev: this is a discussion of an upcoming patch about
>>> >>>>> native RabbitMQ H/A support in nova. I'll post the patch for code
>>> >>>>> review today.
>>> >>>>>
>>> >>>>> On Mon, Jul 23, 2012 at 12:46 PM, Eugene Kirpichov
>>> >>>>> <ekirpichov at gmail.com> wrote:
>>> >>>>>> Yup, that's basically the same thing that Jay suggested :) Obvious in
>>> >>>>>> retrospect...
>>> >>>>>>
>>> >>>>>> On Mon, Jul 23, 2012 at 12:42 PM, Oleg Gelbukh
>>> >>>>>> <ogelbukh at mirantis.com> wrote:
>>> >>>>>>> Eugene,
>>> >>>>>>>
>>> >>>>>>> I suggest just adding an option 'rabbit_servers' that overrides the
>>> >>>>>>> 'rabbit_host'/'rabbit_port' pair, if present. This won't break
>>> >>>>>>> anything, in my understanding.
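A rough sketch of what that could look like, assuming the cfg module shipped in openstack-common; the option name and the helper below are illustrative only, not the actual patch:

    from openstack.common import cfg

    rabbit_ha_opts = [
        cfg.ListOpt('rabbit_servers',
                    default=None,
                    help='Optional list of host:port pairs; overrides '
                         'rabbit_host/rabbit_port when set'),
    ]

    def effective_rabbit_servers(conf):
        """Prefer the new list option; fall back to the legacy single pair."""
        if conf.rabbit_servers:
            return conf.rabbit_servers
        return ['%s:%d' % (conf.rabbit_host, conf.rabbit_port)]

Registered alongside the existing rabbit options, this would keep old configurations working unchanged.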
>>> >>>>>>>
>>> >>>>>>> --
>>> >>>>>>> Best regards,
>>> >>>>>>> Oleg Gelbukh
>>> >>>>>>> Mirantis, Inc.
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>> On Mon, Jul 23, 2012 at 10:58 PM, Eugene Kirpichov
>>> >>>>>>> <ekirpichov at gmail.com> wrote:
>>> >>>>>>>>
>>> >>>>>>>> Hi,
>>> >>>>>>>>
>>> >>>>>>>> I'm working on a RabbitMQ H/A patch right now.
>>> >>>>>>>>
>>> >>>>>>>> It actually involves more than just using H/A queues (unless you're
>>> >>>>>>>> willing to add a TCP load balancer on top of your RMQ cluster).
>>> >>>>>>>> You also need to add support for multiple RabbitMQ servers directly
>>> >>>>>>>> to nova. This is not hard at all, and I have the patch ready and
>>> >>>>>>>> tested in production.
>>> >>>>>>>>
>>> >>>>>>>> Alessandro, if you need this urgently, I can send you the patch
>>> >>>>>>>> right now, before the code review discussion for inclusion in core
>>> >>>>>>>> nova.
>>> >>>>>>>>
>>> >>>>>>>> The only problem is that it breaks backward compatibility a bit: my
>>> >>>>>>>> patch assumes a flag "rabbit_addresses" which should look like
>>> >>>>>>>> "rmq-host1:5672,rmq-host2:5672", instead of the prior rabbit_host
>>> >>>>>>>> and rabbit_port flags.
>>> >>>>>>>>
>>> >>>>>>>> Guys, can you advise on a way to do this without being ugly and
>>> >>>>>>>> without breaking compatibility?
>>> >>>>>>>> Maybe make "rabbit_host" and "rabbit_port" ListOpts? But that sounds
>>> >>>>>>>> weird, as their names are singular.
>>> >>>>>>>> Maybe have "rabbit_host", "rabbit_port" and also "rabbit_host2",
>>> >>>>>>>> "rabbit_port2" (assuming we only have clusters of 2 nodes)?
>>> >>>>>>>> Something else?
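For illustration only, a hypothetical way of consuming such a comma-separated flag and rotating through the brokers on failover; the helper names and the round-robin strategy below are not the actual patch:

    import itertools

    def parse_addresses(value, default_port=5672):
        """Turn 'rmq-host1:5672,rmq-host2:5672' into [('rmq-host1', 5672), ...]."""
        pairs = []
        for entry in value.split(','):
            host, _, port = entry.strip().partition(':')
            pairs.append((host, int(port) if port else default_port))
        return pairs

    # On a connection error, take the next (host, port) from the cycle and
    # reconnect to it instead of always retrying the same single broker.
    brokers = itertools.cycle(parse_addresses('rmq-host1:5672,rmq-host2:5672'))
    host, port = next(brokers)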
>>> >>>>>>>>
>>> >>>>>>>> On Mon, Jul 23, 2012 at 11:27 AM, Jay Pipes <jaypipes at gmail.com>
>>> >>>>>>>> wrote:
>>> >>>>>>>>> On 07/23/2012 09:02 AM, Alessandro Tagliapietra wrote:
>>> >>>>>>>>>> Hi guys,
>>> >>>>>>>>>>
>>> >>>>>>>>>> just an idea: I'm deploying OpenStack and trying to make it HA.
>>> >>>>>>>>>> The missing piece is rabbitmq, which can easily be started in
>>> >>>>>>>>>> active/active mode, but the queues need to be declared with an
>>> >>>>>>>>>> x-ha-policy entry:
>>> >>>>>>>>>> http://www.rabbitmq.com/ha.html
>>> >>>>>>>>>> It would be nice to add a config entry to be able to declare the
>>> >>>>>>>>>> queues in that way.
>>> >>>>>>>>>> If someone knows where to edit the OpenStack code, please point me
>>> >>>>>>>>>> there; otherwise I'll try to do it myself in the next few weeks.
>>> >>>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>> https://github.com/openstack/openstack-
>>> >>common/blob/master/openstack/common/rpc/impl_kombu.py
>>> >>>>>>>>>
>>> >>>>>>>>> You'll need to add the config options there, and the queue is
>>> >>>>>>>>> declared here with the options supplied to the ConsumerBase
>>> >>>>>>>>> constructor:
>>> >>>>>>>>>
>>> >>>>>>>>>
>>> >>>>>>>>> https://github.com/openstack/openstack-
>>> >>common/blob/master/openstack/common/rpc/impl_kombu.py#L114
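A minimal sketch, assuming a recent kombu, of declaring a mirrored queue with the x-ha-policy argument at that point; the exchange/queue names and the 'all' policy value here are illustrative, not the merged implementation:

    from kombu import Connection, Exchange, Queue

    ha_args = {'x-ha-policy': 'all'}  # ask RabbitMQ to mirror the queue on all cluster nodes

    exchange = Exchange('nova', type='topic', durable=False)
    queue = Queue('compute',
                  exchange=exchange,
                  routing_key='compute',
                  durable=False,
                  queue_arguments=ha_args)  # carried through to queue_declare

    with Connection('amqp://guest:guest@rmq-host1:5672//') as conn:
        queue(conn.channel()).declare()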
>>> >>>>>>>>>
>>> >>>>>>>>> Best,
>>> >>>>>>>>> -jay
>>> >>>>>>>>>
>>> >>>>>>>>> _______________________________________________
>>> >>>>>>>>> Mailing list: https://launchpad.net/~openstack
>>> >>>>>>>>> Post to     : openstack at lists.launchpad.net
>>> >>>>>>>>> Unsubscribe : https://launchpad.net/~openstack
>>> >>>>>>>>> More help   : https://help.launchpad.net/ListHelp
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>>
>>> >>>>>>>> --
>>> >>>>>>>> Eugene Kirpichov
>>> >>>>>>>> http://www.linkedin.com/in/eugenekirpichov
>>> >>>>>>>>
>>> >>>>>>>
>>> >>>>>>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>>
>>> >>>>>> --
>>> >>>>>> Eugene Kirpichov
>>> >>>>>> http://www.linkedin.com/in/eugenekirpichov
>>> >>>>>
>>> >>>>>
>>> >>>>>
>>> >>>>> --
>>> >>>>> Eugene Kirpichov
>>> >>>>> http://www.linkedin.com/in/eugenekirpichov
>>> >>>>>
>>> >>>>
>>> >>>
>>> >>>
>>> >>>
>>> >>> --
>>> >>> Eugene Kirpichov
>>> >>> http://www.linkedin.com/in/eugenekirpichov
>>> >>
>>> >>
>>> >
>>> >_______________________________________________
>>> >OpenStack-dev mailing list
>>> >OpenStack-dev at lists.openstack.org
>>> >http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
>--
>Eugene Kirpichov
>http://www.linkedin.com/in/eugenekirpichov


