[Openstack] HA Openstack with Pacemaker

Samuel Winchenbach swinchen at gmail.com
Thu Feb 14 19:34:49 UTC 2013


Well, I think I will have to go with one IP per service and forget about
load balancing.  It seems that with LVS, routing requests internally
through the VIP is difficult (impossible?), at least with LVS-DR.  It seems
a shame not to be able to distribute the work among the controller
nodes.
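
For reference, the one-IP-per-service layout would look roughly like this in
the crm shell, assuming the OpenStack OCF resource agents are installed (the
addresses and resource names below are just placeholders for my setup, so
treat it as a sketch):

  # each API service gets its own VIP, grouped with its daemon so they fail over together
  primitive p_ip_keystone ocf:heartbeat:IPaddr2 \
      params ip="192.168.42.103" cidr_netmask="24" \
      op monitor interval="30s"
  primitive p_keystone ocf:openstack:keystone \
      op monitor interval="30s" timeout="30s"
  group g_keystone p_ip_keystone p_keystone

  primitive p_ip_glance ocf:heartbeat:IPaddr2 \
      params ip="192.168.42.104" cidr_netmask="24" \
      op monitor interval="30s"
  primitive p_glance-api ocf:openstack:glance-api \
      op monitor interval="30s" timeout="30s"
  group g_glance p_ip_glance p_glance-api

Pacemaker will place each group wherever it likes, so the services can still
be spread across the two controllers - just without any request-level load
balancing.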


On Thu, Feb 14, 2013 at 9:50 AM, Samuel Winchenbach <swinchen at gmail.com> wrote:

> Hi Sébastien,
>
> I have two hosts with public interfaces, with a number (~8) of compute nodes
> behind them.  I am trying to set the two public nodes up for HA and load
> balancing, and I plan to run all the OpenStack services on these two nodes in
> Active/Active where possible.  I currently have MySQL and RabbitMQ set up
> in Pacemaker with a DRBD backend.
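>
> In case it is useful, the MySQL half of that looks roughly like the following
> in crm shell (a heavily trimmed sketch; the DRBD resource name, device and
> mount point are specific to my lab):
>
>   primitive p_drbd_mysql ocf:linbit:drbd \
>       params drbd_resource="mysql" \
>       op monitor interval="15s" role="Master"
>   ms ms_drbd_mysql p_drbd_mysql \
>       meta master-max="1" clone-max="2" notify="true"
>   primitive p_fs_mysql ocf:heartbeat:Filesystem \
>       params device="/dev/drbd0" directory="/var/lib/mysql" fstype="ext4"
>   primitive p_mysql ocf:heartbeat:mysql \
>       op monitor interval="30s" timeout="30s"
>   group g_mysql p_fs_mysql p_mysql
>   colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
>   order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start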
>
> That is a quick summary.   If there is anything else I can answer about my
> setup please let me know.
>
> Thanks,
> Sam
>
>
> On Thu, Feb 14, 2013 at 9:26 AM, Sébastien Han <han.sebastien at gmail.com> wrote:
>
>> Well, I don't know your setup - whether you use a load balancer for the API
>> services or an active/passive Pacemaker setup - but in the end it isn't that
>> many IPs, I guess. I dare say that Keepalived sounds outdated to me...
>>
>> If you use Pacemaker and want to have the same IP for all the resources,
>> simply create a resource group with all the OpenStack services inside it
>> (it's ugly, but if that's what you want :)). Give me more info about your
>> setup and we can take the discussion further :).
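>>
>> Something along these lines, for example (just a sketch - reuse whatever
>> primitives you already have defined, and the VIP address is made up):
>>
>>   primitive p_ip_api ocf:heartbeat:IPaddr2 \
>>       params ip="192.168.42.100" cidr_netmask="24" \
>>       op monitor interval="30s"
>>   # one group: the VIP plus every API daemon, so they all run on the same
>>   # node and move together - that is the "ugly" part
>>   group g_openstack_api p_ip_api p_keystone p_glance-api p_nova-api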
>>
>> --
>> Regards,
>> Sébastien Han.
>>
>>
>> On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach <swinchen at gmail.com> wrote:
>>
>>> The only real problem is that it would consume a lot of IP addresses when
>>> exposing the public interfaces.  I _think_ I may have the solution in your
>>> blog, actually:
>>> http://www.sebastien-han.fr/blog/2012/10/19/highly-available-lvs/
>>> and
>>> http://clusterlabs.org/wiki/Using_ldirectord
>>>
>>> I am trying to weigh the pros and cons of this method versus
>>> keepalived/haproxy, versus just biting the bullet and using one IP per service.
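>>>
>>> For comparison, the ldirectord side of it would be something like this in
>>> ldirectord.cf (sketch only - the VIP and real-server addresses are made up,
>>> and with LVS-DR each real server also needs the VIP on a loopback alias with
>>> ARP replies suppressed):
>>>
>>>   checktimeout=10
>>>   checkinterval=5
>>>
>>>   # Keystone public API as an LVS-DR virtual service across both controllers
>>>   virtual=192.168.42.100:5000
>>>       real=192.168.42.1:5000 gate
>>>       real=192.168.42.2:5000 gate
>>>       scheduler=wlc
>>>       protocol=tcp
>>>       checktype=connect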
>>>
>>>
>>> On Thu, Feb 14, 2013 at 4:17 AM, Sébastien Han <han.sebastien at gmail.com> wrote:
>>>
>>>> What's the problem with having one IP per service pool?
>>>>
>>>> --
>>>> Regards,
>>>> Sébastien Han.
>>>>
>>>>
>>>> On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach <swinchen at gmail.com> wrote:
>>>>
>>>>> What if the VIP is created on a different host than the one Keystone is
>>>>> started on?  It seems like you either need to set net.ipv4.ip_nonlocal_bind
>>>>> = 1 or create a colocation constraint in Pacemaker (which would either
>>>>> require all services to be on the same host, or an IP per service).
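>>>>>
>>>>> Concretely, the two options I see look roughly like this (the resource
>>>>> names are just examples):
>>>>>
>>>>>   # option 1: sysctl on every controller, so a daemon can bind the VIP
>>>>>   # even when the address is not currently local (e.g. in /etc/sysctl.conf)
>>>>>   net.ipv4.ip_nonlocal_bind = 1
>>>>>
>>>>>   # option 2: a crm colocation constraint, pinning the service to whichever
>>>>>   # node currently holds the VIP
>>>>>   colocation c_keystone_with_vip inf: p_keystone p_ip_api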
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On Wed, Feb 13, 2013 at 2:28 PM, Razique Mahroua <
>>>>> razique.mahroua at gmail.com> wrote:
>>>>>
>>>>>> There we go
>>>>>> https://review.openstack.org/#/c/21581/
>>>>>>
>>>>>> Razique Mahroua - Nuage & Co
>>>>>> razique.mahroua at gmail.com
>>>>>> Tel : +33 9 72 37 94 15
>>>>>>
>>>>>>
>>>>>> On 13 Feb 2013, at 20:15, Razique Mahroua <razique.mahroua at gmail.com> wrote:
>>>>>>
>>>>>> I'm currently updating that part of the documentation - indeed, it
>>>>>> states that two IPs are used, but in fact you end up with only one VIP for
>>>>>> the API services.
>>>>>> I'll send the patch tonight.
>>>>>>
>>>>>> Razique Mahroua - Nuage & Co
>>>>>> razique.mahroua at gmail.com
>>>>>> Tel : +33 9 72 37 94 15
>>>>>>
>>>>>>
>>>>>> On 13 Feb 2013, at 20:05, Samuel Winchenbach <swinchen at gmail.com> wrote:
>>>>>>
>>>>>> In that documentation it looks like each OpenStack service gets its
>>>>>> own IP (Keystone is being assigned 192.168.42.103 and Glance is getting
>>>>>> 192.168.42.104).
>>>>>>
>>>>>> I might be missing something too, because the section titled
>>>>>> "Configure the VIP" creates a primitive called "p_api-ip" (or p_ip_api, if
>>>>>> you read the text above it), and then "Adding Keystone resource to
>>>>>> Pacemaker" creates a group with "p_ip_keystone"???
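>>>>>>
>>>>>> As far as I can tell, the intent is a single VIP primitive grouped with
>>>>>> the service, something like this (guessing at the names the doc means to
>>>>>> use):
>>>>>>
>>>>>>   primitive p_ip_api ocf:heartbeat:IPaddr2 \
>>>>>>       params ip="192.168.42.103" cidr_netmask="24" \
>>>>>>       op monitor interval="30s"
>>>>>>   group g_keystone p_ip_api p_keystone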
>>>>>>
>>>>>>
>>>>>> Stranger yet, "Configuring OpenStack Services to use High Available
>>>>>> Glance API" says:  "For Nova, for example, if your Glance API
>>>>>> service IP address is 192.168.42.104 as in the configuration explained
>>>>>> here, you would use the following line in your nova.conf file : glance_api_servers
>>>>>> = 192.168.42.103"  But the step before it set "registry_host =
>>>>>> 192.168.42.104"?
>>>>>>
>>>>>> So I am not sure which IP you would connect to here...
>>>>>>
>>>>>> Sam
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Feb 13, 2013 at 1:29 PM, JuanFra Rodriguez Cardoso <
>>>>>> juanfra.rodriguez.cardoso at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Samuel:
>>>>>>>
>>>>>>> Yes, it's possible with pacemaker. Look at
>>>>>>> http://docs.openstack.org/trunk/openstack-ha/content/ch-intro.html.
>>>>>>>
>>>>>>> Regards,
>>>>>>> JuanFra
>>>>>>>
>>>>>>>
>>>>>>> 2013/2/13 Samuel Winchenbach <swinchen at gmail.com>
>>>>>>>
>>>>>>>>  Hi All,
>>>>>>>>
>>>>>>>> I currently have an HA OpenStack cluster running where the OpenStack
>>>>>>>> services are kept alive with a combination of haproxy and keepalived.
>>>>>>>>
>>>>>>>> Is it possible to configure Pacemaker so that all the OpenStack
>>>>>>>> services are served by the same IP?  With keepalived I have a virtual IP
>>>>>>>> that can move from server to server, and haproxy sends each request to a
>>>>>>>> machine that has a "live" service.  This allows one (public) IP to handle
>>>>>>>> all incoming requests.  I believe it is the combination of VRRP/IPVS that
>>>>>>>> allows this.
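>>>>>>>>
>>>>>>>> For context, the current setup is roughly this, heavily trimmed (the
>>>>>>>> addresses and ports are examples, and the backup node needs
>>>>>>>> net.ipv4.ip_nonlocal_bind = 1 so haproxy can bind the VIP it does not
>>>>>>>> currently hold):
>>>>>>>>
>>>>>>>>   # keepalived.conf - VRRP moves the public VIP between the two nodes
>>>>>>>>   vrrp_instance VI_1 {
>>>>>>>>       state MASTER
>>>>>>>>       interface eth0
>>>>>>>>       virtual_router_id 51
>>>>>>>>       priority 100
>>>>>>>>       virtual_ipaddress {
>>>>>>>>           192.168.42.100
>>>>>>>>       }
>>>>>>>>   }
>>>>>>>>
>>>>>>>>   # haproxy.cfg - one listen section per API, health-checking both nodes
>>>>>>>>   listen keystone_public
>>>>>>>>       bind 192.168.42.100:5000
>>>>>>>>       server node1 192.168.42.1:5000 check
>>>>>>>>       server node2 192.168.42.2:5000 check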
>>>>>>>>
>>>>>>>>
>>>>>>>> Is it possible to do something similar with Pacemaker?  I really
>>>>>>>> don't want to have an IP for each service, and I don't want to make it a
>>>>>>>> requirement that all OpenStack services must be running on the same server.
>>>>>>>>
>>>>>>>> Thanks... I hope this question is clear; I feel like I sort of
>>>>>>>> butchered the wording a bit.
>>>>>>>>
>>>>>>>> Sam
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Mailing list: https://launchpad.net/~openstack
>>>>>>>> Post to     : openstack at lists.launchpad.net
>>>>>>>> Unsubscribe : https://launchpad.net/~openstack
>>>>>>>> More help   : https://help.launchpad.net/ListHelp
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>
>

