[openstack-dev] [Quantum][LBaaS] Default insertion mode for balancer device
Dan Wendlandt
dan at nicira.com
Wed Jan 30 22:28:36 UTC 2013
Hi Eugene,
I haven't worked through the details, so perhaps this is impossible, but to
me the simplest model for introducing load-balancing in G-3 would be to do
something very close to what we did for L3 routers and DHCP already. An admin
decides to run a quantum-lb-agent on one or more hosts. When a new API
request comes in, a call is dispatched to this agent, which creates an
isolated network namespace, creates interfaces in that namespace, configures
IPs, plugs those interfaces into an OVS/Linux bridge, and then configures and
runs a separate instance of HAProxy bound specifically to that namespace
(enabling overlapping IPs, etc.).
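Just to make the idea concrete, something along these lines (purely a sketch,
not actual agent code; the function, device names and paths below are
illustrative, and it assumes the tap interface has already been created and
plugged by the usual interface driver):

    # Rough sketch only -- illustrative names and paths, not real
    # quantum-lb-agent code.
    import subprocess

    def spawn_lb_instance(vip_id, vip_cidr, tap_name, haproxy_cfg, pid_file):
        ns = "qlbaas-%s" % vip_id
        subprocess.check_call(["ip", "netns", "add", ns])
        # Move the pre-plugged tap device into the namespace and give it the
        # VIP address there, so overlapping tenant IPs never collide.
        subprocess.check_call(["ip", "link", "set", tap_name, "netns", ns])
        subprocess.check_call(["ip", "netns", "exec", ns,
                               "ip", "addr", "add", vip_cidr, "dev", tap_name])
        subprocess.check_call(["ip", "netns", "exec", ns,
                               "ip", "link", "set", tap_name, "up"])
        # One dedicated HAProxy process per VIP, daemonized inside the
        # namespace with its own config and pid file.
        subprocess.check_call(["ip", "netns", "exec", ns,
                               "haproxy", "-f", haproxy_cfg,
                               "-p", pid_file, "-D"])

Tearing an LB down would then just be killing that pid and deleting the
namespace.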
I had originally thought this was basically the design you were proposing
based on the picture here: http://wiki.openstack.org/Quantum/LBaaS/Agent .
However, it seems like you're assuming a design where the drivers connect
remotely to other devices. This complicates things a lot, as you have to
worry about device management and scheduling. In part, it seems like we
may be trying to run before we can walk. It might be nice to focus on a
simple model where we can locally spawn new LBs on demand; then, in the
Havana release, we can focus on building out richer capabilities.
It's up to you guys, but given the time frame, simpler seems to be the
winning strategy, as it would be a shame not to deliver end-to-end LBaaS
in Grizzly.
Dan
On Tue, Jan 29, 2013 at 4:10 AM, Eugene Nikanorov
<enikanorov at mirantis.com> wrote:
> Hi, Sam,
>
> The Driver API has been available on the wiki at
> http://wiki.openstack.org/Quantum/LBaaS/DriverAPI for a couple of weeks.
>
> Regarding different use cases: they're certainly a must have, but
> considering time remaining to g-3, it's highly unlikely that anything new
> will be implemented, reviewed and merged.
>
> Thanks,
> Eugene.
>
>
> On Sun, Jan 27, 2013 at 6:50 PM, Samuel Bercovici <SamuelB at radware.com> wrote:
>>
>>> Hi Eugene,
>>>
>>> My comments below.
>>>
>>> Regards,
>>> -Sam.
>>>
>>> From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>>> Sent: Friday, January 25, 2013 10:29 PM
>>> To: OpenStack Development Mailing List
>>> Subject: [openstack-dev] [Quantum][LBaaS] Default insertion mode for
>>> balancer device
>>>
>>> Hi folks,
>>>
>>> Currently we're working on integrating the parts of LBaaS that will
>>> allow the whole service to work.
>>>
>>> As you know, the only driver under development right now is the HAProxy
>>> driver.
>>>
>>> Sam> We (Radware) are waiting for the Driver API to be available so we
>>> can complete our support for Grizzly.
>>>
>>> Also, it seems to be the only type of balancer going into Grizzly.
>>>
>>> We faced a certain problem with that: it seems that the most viable use
>>> case for the HAProxy load balancer is a private load balancer brought up
>>> by a tenant in a private network (i.e. the balancer is in the same L2
>>> domain as the pool members).
>>>
>>> It is then configured via the LBaaS service, and the tenant may assign a
>>> floating IP to the balancer device, making it possible to access its VMs
>>> from the internet through the balancer.
>>>
>>> However, it became obvious that this use case is incompatible with our
>>> simplified device management based on a configuration file.
>>>
>>> A configuration file implies that it is edited by the admin and that any
>>> modification requires a quantum restart, so a tenant can't add its own
>>> private device, etc.
>>>
>>> Sam> The alternative is a two-legged approach, so that the LB service
>>> connects the private network to the other network and the VIP does not
>>> sit behind a floating IP. We might want to consider using the floating
>>> IP as the VIP address.
>>>
>>> If we go with conf-based device management, defining HAProxy as a
>>> shared device, we will need some additional configuration of the machine
>>> running HAProxy so that it has connectivity with a given tenant network,
>>> possibly with OVS on it, etc.
>>>
>>> Also, pool members from different tenants whose subnets have the same IP
>>> ranges are indistinguishable from the HAProxy conf-file perspective.
>>>
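>>> For illustration only (hypothetical pool names and addresses): once both
>>> tenants' members are rendered into one flat config for a single shared
>>> HAProxy process, each server line carries only ip:port and no tenant or
>>> network context, so the shared host cannot tell whose 10.0.0.2 is meant:
>>>
>>>     # Illustrative sketch, not real plugin code: render members of two
>>>     # hypothetical tenants into one shared haproxy config.
>>>     pools = {
>>>         "tenant_a_web": [("10.0.0.2", 80), ("10.0.0.3", 80)],
>>>         "tenant_b_web": [("10.0.0.2", 80), ("10.0.0.3", 80)],
>>>     }
>>>     for name, members in pools.items():
>>>         print("backend %s" % name)
>>>         for ip, port in members:
>>>             # Only ip:port reaches the conf file; both backends end up
>>>             # pointing at "the same" addresses from the shared host's
>>>             # point of view.
>>>             print("    server member %s:%s check" % (ip, port))
>>>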
>>> Solving any of the mentioned problems with a shared HAProxy would take
>>> time to implement, test and review, and that will obviously miss Grizzly.
>>>
>>> Sam> I agree that to solve this challenge (and also the other topology,
>>> in which HAProxy is connected to the 2nd network), it would be easier
>>> for HAProxy to be assigned per network/tenant and not be shared.
>>>
>>> So currently we're again thinking about having device management as a
>>> plugin with its own extension, to focus on the "private devices" use
>>> case.
>>>
>>> We had already written the code a while ago, but then went with the
>>> simpler conf-based approach.
>>>
>>> I'd estimate we could get the device management plugin code on gerrit
>>> within a week from now, which would give ~2 weeks of time for review
>>> plus a week until g-3.
>>>
>>> Sam> As with the proposed scheduler, this should be done as part of the
>>> HAProxy device. It makes sense to have HAProxy device management
>>> capabilities.
>>>
>>> We'd like to hear your opinion:
>>>
>>> 1) Is the mentioned use case of a private HAProxy the one we should
>>> focus on for G-3?
>>>
>>> If not, then please describe alternatives.
>>>
>>> Sam> I think there are two primary use cases. The first is a
>>> public/private network in which a pool of HAProxies provides service to
>>> such a network.
>>>
>>> Sam> The second is the one discussed above, either one-legged + fixed IP
>>> or two-legged using the fixed IP as the VIP.
>>>
>>> I'd like to emphasize that this use case is in line with our future
>>> plans for automatic provisioning of private tenant HAProxies via Nova.
>>>
>>> 2) Do you think it's ok to go further with the device management
>>> plugin? We need it to give tenants the ability to add their private
>>> devices.
>>>
>>> If you feel that we don't have enough time for that, please advise an
>>> alternative.
>>>
>>> Sam> This looks reasonable if implemented for the HAProxy device driver
>>> and exposed as utility classes to anyone who wants to use it.
>>>
>>> Sam> This should also be done after the APIs for the driver and driver
>>> selection are available; those should be made available ASAP so that
>>> vendors who wish to start integration can do so.
>>>
>>> Thanks,
>>>
>>> Eugene.
>>>
>>
>
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dan Wendlandt
Nicira, Inc: www.nicira.com
twitter: danwendlandt
~~~~~~~~~~~~~~~~~~~~~~~~~~~