[openstack-dev] [Quantum][LBaaS] Default insertion mode for balancer device
Eugene Nikanorov
enikanorov at mirantis.com
Tue Jan 29 12:10:53 UTC 2013
Hi, Sam,
Driver API is available on wiki
http://wiki.openstack.org/Quantum/LBaaS/DriverAPI for a couple of weeks.
Regarding different use cases: they're certainly a must have, but
considering time remaining to g-3, it's highly unlikely that anything new
will be implemented, reviewed and merged.
Thanks,
Eugene.
On Sun, Jan 27, 2013 at 6:50 PM, Samuel Bercovici <SamuelB at radware.com>wrote:
>
>> Hi Eugene,****
>>
>> ** **
>>
>> My comments bellow.****
>>
>> ** **
>>
>> Regards,****
>>
>> -Sam.****
>>
>> ** **
>>
>> ** **
>>
>> ** **
>>
>> *From:* Eugene Nikanorov [mailto:enikanorov at mirantis.com]
>> *Sent:* Friday, January 25, 2013 10:29 PM
>>
>> *To:* OpenStack Development Mailing List
>> *Subject:* [openstack-dev] [Quantum][LBaaS] Default insertion mode for
>> balancer device****
>>
>> ** **
>>
>> Hi folks,****
>>
>> ** **
>>
>> Currently we're working on integrating together parts of LBaaS which will
>> allow the whole service to work. ****
>>
>> As you know, the only driver that is under development now is HAProxy
>> driver.****
>>
>> Sam> We (Radware) are waiting for the Driver API to be available so we
>> can complete our support for grizzly.****
>>
>> Also, It seems to be the only type of balancer going into grizzly.****
>>
>> ** **
>>
>> We faced a certain problem with that: it seems that the most viable use
>> case for HAProxy loadbalancer is private loadbalancer which is brought up
>> by a tenant in a private network (E.g. balancer is in the same L2 domain
>> with the pool members). ****
>>
>> Then it is configured via LBaaS service and then tenant may assign
>> floating IP to the balancer device making possible to access its VMs from
>> internet through the balancer.****
>>
>> However it became obvious that this use case is incompatible with our
>> simplified device management based on configuration file.****
>>
>> Configuration file implies that it is edited by admin and modification
>> requires quantum restart. E.g. tenant can't add it's private device, etc.
>> ****
>>
>> Sam> The alternative is a two legged approach so that the LB service is
>> connecting between the private network and the other network and the VIP
>> does not sit behind a floating IP. We might want to consider using the
>> floating IP as the VIP address.****
>>
>> ** **
>>
>> If we go with conf-based device management, defining HAProxy to be shared
>> device, we will need some additional configuration of machine with HAProxy
>> to have connectivity with certain tenant network, possibly having OVS on
>> it, etc. ****
>>
>> Also, pool members from different tenants having subnets with the same IP
>> ranges are indistinguishable from HAProxy conf-file perspective.****
>>
>> Solving any of mentioned problems with shared HAProxy seems to take some
>> time to implement, test and review and obviously that will miss grizzly.*
>> ***
>>
>> Sam>I agree that to solve this challenge (or also the other topology in
>> which HA Proxy is connected to the 2nd network), it would be easier for
>> HA Proxy to be assigned per network/tenant and not be shared.****
>>
>> ** **
>>
>> So currently we're thinking again about having device management as a
>> plugin with it's extension, to focus "private devices" use case.****
>>
>> We have already written the code a while ago, but then went
>> with simpler conf-based approach. ****
>>
>> I'd estimate we could get device management plugin code on gerrit in a
>> week from now, which will give ~2 weeks of time for review plus a week till
>> g-3****
>>
>> Sam>As with the proposed scheduler, this should be done as part of the HA
>> Proxy Device. It makes sense to have an HA proxy Device Management
>> capabilities.****
>>
>> ** **
>>
>> We'd like to hear your opinion:****
>>
>> 1) Is the mentioned use case of private HAProxy the one we should focus
>> on for G-3? ****
>>
>> If not, then please describe alternatives.****
>>
>> Sam> I think there are two primary use cases. The first is a
>> public/private network in which a pool of HA Proxies provide service to
>> such a network. ****
>>
>> Sam> The second is the one discussed above either one legged + fixed IP
>> or two legged using the fixed IP as VIP.****
>>
>> I'd like to emphasize that this use case goes inline with our future
>> plans to make automatic provisioning of private tenant HAProxies via Nova.
>> ****
>>
>> ** **
>>
>> 2) Do you think it's ok to go further with dev management plugin? We need
>> it to provide tenants ability to add their private devices.****
>>
>> If you feel that we don't have enough time for that, please advise
>> alternative.****
>>
>> Sam> This looks like reasonable if implemented for the HA Proxy Device
>> Driver and exposed as utility classes to anyone who want to use it.****
>>
>> Sam> This should also be done after APIs for Driver and Driver selection
>> are available ASAP so that vendors who wishes to start integration could do
>> so.****
>>
>> ** **
>>
>> Thanks,****
>>
>> Eugene.****
>>
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.openstack.org/pipermail/openstack-dev/attachments/20130129/6831d691/attachment.html>
More information about the OpenStack-dev
mailing list