[openstack-dev] [Quantum][LBaaS] Private Haproxy devices & service agent

Samuel Bercovici SamuelB at Radware.com
Thu Feb 7 16:45:11 UTC 2013


Hi,

I think that the VM approach better suits us (Radware) and probably most of the other commercial load balancers.
Radware provides a VM prepackaged with Alteon (our load balancer) functionality (I believe this is also the case with F5 and Citrix).
This means that we (Radware) will deploy service VMs and will not use the option of deploying service processes.

Regards,
                -Sam.





From: Eugene Nikanorov [mailto:enikanorov at mirantis.com]
Sent: Thursday, February 07, 2013 6:29 PM
To: OpenStack Development Mailing List
Subject: [openstack-dev] [Quantum][LBaaS] Private Haproxy devices & service agent

Hi folks,

Currently we're in the process of implementing the suggestion to run haproxy on hosts rather than in VMs.
All is going pretty smoothly, except that we have found that this method puts a restriction on the overall deployment scheme.
Initially we thought we would have one service agent (or several listening on the same MQ, just to spread the load of accessing the actual devices)
which could reach any balancing device using information from the device database.

Now, when we bring a device up on the host running the agent, it turns out that we can't have more than one agent running (and so can effectively use only one host to bring up haproxies).
Here is why:
The plugin is unaware of the hosts the agents are running on and does not distinguish between them,
so each agent will receive all requests, including the ones that are not addressed to it.
This happens because there is no simple way to send a request to a specific service agent if all of them are listening on the same queue (am I correct?).
In the l3/dhcp agents this is solved by the agent proactively polling the plugin via quantum-client or RPC.
I think that would add significant complexity to the load balancer agent, since the LBaaS object model is quite complex itself; also, we initially designed the plugin to call the agent, not vice versa.
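
To make the routing issue concrete, here is a minimal sketch of what per-host topic addressing could look like if the plugin recorded which host each device lives on. The names (LbaasAgentNotifier, per_host_topic, the rpc/device_db helpers) are purely illustrative, not existing Quantum code:

# Hypothetical sketch, not actual Quantum code: route plugin->agent casts
# over per-host topics so only the intended agent consumes a given message.

LBAAS_AGENT_TOPIC = 'lbaas_agent'   # shared topic all agents would listen on

def per_host_topic(host):
    # e.g. 'lbaas_agent.lb-host-1': only the agent on lb-host-1 consumes it
    return '%s.%s' % (LBAAS_AGENT_TOPIC, host)

class LbaasAgentNotifier(object):
    """Plugin-side helper: look up the host recorded for the device and
    cast the request onto that host's private topic."""

    def __init__(self, rpc, device_db):
        self.rpc = rpc                # illustrative thin wrapper over the MQ
        self.device_db = device_db    # maps pool/vip id -> device record

    def create_vip(self, context, vip):
        device = self.device_db.get_device_for_pool(vip['pool_id'])
        self.rpc.cast(context,
                      {'method': 'create_vip', 'args': {'vip': vip}},
                      topic=per_host_topic(device['host']))

The agent side would then consume both the shared topic and its own per-host topic; whether it is worth adding that machinery is exactly the trade-off described above.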

We're also working on an alternative approach, as we discussed on Monday: bringing up a VM in the tenant's network.
It appears to be a bit simpler in terms of code than the haproxy-on-host approach (a rough sketch of what booting such a service VM could look like follows the list below).
So now we are inclined to go with the VM approach, for several reasons:
1) it lets nova schedule the VM depending on host load;
2) several components (device management, agent, driver) support this approach;
3) it is in line with our initial design.
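
For illustration only, here is a minimal sketch of how a driver might boot a prepackaged haproxy service VM into the tenant network via python-novaclient. The credentials, image/flavor names and network id are placeholders, and the exact calls are my assumption about the novaclient API, not anything in the current lbaas code:

# Hypothetical sketch: boot a service VM carrying haproxy into the tenant
# network, so nova-scheduler places it based on host load.
# Credentials, image/flavor names and ids below are placeholders.
from novaclient.v1_1 import client as nova_client

pool_id = 'pool-uuid-placeholder'
tenant_network_id = 'net-uuid-placeholder'

nova = nova_client.Client('lbaas-service-user', 'secret', 'service-tenant',
                          auth_url='http://keystone:5000/v2.0')

image = nova.images.find(name='haproxy-service-vm')   # prebuilt appliance image
flavor = nova.flavors.find(name='m1.small')

server = nova.servers.create(
    name='lbaas-haproxy-%s' % pool_id,
    image=image,
    flavor=flavor,
    nics=[{'net-id': tenant_network_id}])              # plug into tenant network

The point is just that placement comes for free from nova, which is reason (1) above.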

Please share your thoughts on this as we're going to finalize our decision tomorrow.
Also, I'm available on #openstack-dev or #quantum-lbaas if anyone is willing to discuss this.

Thanks,
Eugene.
