[openstack-dev] [Neutron][LBaaS] Multiple VIPs per loadbalancer
Stephen Balukoff
sbalukoff at bluebox.net
Sat May 10 03:01:15 UTC 2014
Hi Eugene,
On Fri, May 9, 2014 at 1:36 PM, Eugene Nikanorov <enikanorov at mirantis.com> wrote:
>
>
>
> On Fri, May 9, 2014 at 7:40 PM, Brandon Logan <brandon.logan at rackspace.com> wrote:
>
>> Yes, Rackspace has users that have multiple IPv4 and IPv6 VIPs on a
>> single load balancer.
>
> For sure that can be supported by a particular physical appliance, but I
> doubt we need to translate it to the logical loadbalancer.
>
>
>> However, I don't think it is a matter of it being
>> needed. It's a matter of having an API that makes sense to a user.
>> Just because the API has multiple VIPs doesn't mean every VIP needs its
>> own port. In fact, creating a port is an implementation detail (you know,
>> that phrase that everyone throws out to stonewall any discussions?).
>> The user doesn't care how many neutron ports are set up underneath, they
>> only care about the VIPs.
>>
> Right, port creation is an implementation detail; however, L2 connectivity
> for the frontend is a definite API expectation.
> I think VIP creation should have clear semantics: the user creates an L2
> endpoint, e.g. an L2 port + IPv4 [+ IPv6] address.
> If we agree that we only need one L2 port per logical loadbalancer, then it
> could be handled by either of two API/object-model approaches:
>
> 1) loadbalancer + VIPs, 1:n relationship
> 2) VIP + listeners, 1:n relationship
> You can see that, from an API and object-model structure perspective, those
> approaches are exactly the same.
> However, in (1) we would need to specify L3 information (IPv4 + IPv6
> addresses, subnet_id) on the loadbalancer, and that would be inherited by the
> VIPs, which would keep the L4+ information.
> To me that seems a little confusing (per our glossary).
>
> In the second approach, the VIP remains the keeper of L2/L3 information,
> while listeners keep the L4+ information.
> That seems clearer.
>
There's a complication, though: Pools may also need some L2/L3 information
(per the discussion of adding subnet_id as an attribute of the pool, eh?)
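To make sure we're picturing the same thing, here's a rough sketch of where
the L2/L3 and L4+ attributes would land under each approach. This is purely my
own illustration; the class and field names are assumptions, not the actual
Neutron LBaaS schema.

    # Rough illustration only -- names are assumptions, not the real
    # Neutron LBaaS object model.
    from dataclasses import dataclass, field
    from typing import List, Optional

    # Approach (1): loadbalancer + VIPs, 1:n. L2/L3 info lives on the
    # loadbalancer and is inherited by the VIPs, which keep L4+ info.
    @dataclass
    class LoadBalancer:
        subnet_id: str
        ipv4_address: str
        ipv6_address: Optional[str] = None
        vips: List["VipL4"] = field(default_factory=list)

    @dataclass
    class VipL4:
        protocol: str        # L4+ only
        protocol_port: int

    # Approach (2): VIP + Listeners, 1:n. The VIP keeps the L2/L3 info,
    # listeners keep the L4+ info.
    @dataclass
    class Vip:
        subnet_id: str
        ipv4_address: str
        ipv6_address: Optional[str] = None
        listeners: List["Listener"] = field(default_factory=list)

    @dataclass
    class Listener:
        protocol: str
        protocol_port: int

    # And, per the complication above, a pool may carry its own subnet_id too.
    @dataclass
    class Pool:
        subnet_id: str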
> If we want more than one L2 port, then we need to combine those approaches
> and have loadbalancer + VIPs + Listeners, where the loadbalancer is a
> container that maps to a backend.
> However, as discussed at the last meeting, we don't want to let the user have
> direct control over the backend.
>
If the VIP subnet/neutron network and the Pool subnet/neutron network are not
the same, then the load balancer is going to need a separate L2 interface on
each. In fact, a VIP with a Listener that references several different pools
via L7 policies, where those pools are on different subnets, is going to need
an L2 interface on all of them. Unless I'm totally misunderstanding something
(which is always a possibility; this stuff is hard, eh!)
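Put another way (a sketch under my reading of the model, with invented names,
not anything taken from an actual driver):

    # Sketch only: the backend serving a VIP needs an L2 interface on the
    # VIP's subnet plus every distinct subnet of the pools reachable via
    # its listeners (default pools and L7-policy pools alike).
    def required_l2_subnets(vip, listeners):
        subnets = {vip["subnet_id"]}
        for listener in listeners:
            for pool in listener.get("pools", []):
                subnets.add(pool["subnet_id"])
        return subnets

    # If this returns more than one subnet, the appliance or process
    # behind the VIP needs a separate port on each of them.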
And actually, there are a few cases that have been discussed where operators
do want users to have some (limited) control over the backend. These almost
all have to do with VIP affinity.
> Also, we've heard objections to this approach several times from other core
> team members (this discussion has been going on for more than half a year
> now), so I would suggest moving forward with the single-L2-port approach.
> Then the question comes down to terminology: loadbalancer/VIPs or
> VIP/Listeners.
>
To be fair, this is definitely about more than terminology. In the examples
you've listed mentioning loadbalancer objects, it seems to me that you're
overlooking that this model still contains Listeners as well. So, to be more
accurate, it's really about:
loadbalancer/VIPs/Listeners or VIPs/Listeners.
To me, that says it's all about: Does the loadbalancer object add something
meaningful to this model? And I think the answer is:
* To smaller users with very basic load balancing needs: No (mostly, though
to many it's still "yes")
* To larger customers with advanced load balancing needs: Yes.
* To operators of any size: Yes.
I've outlined my reasoning for thinking so in the other discussion thread.
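For what it's worth, for a small user the practical difference is roughly one
extra object to create up front. The client calls below are invented purely to
illustrate the shape of the two workflows; they're not a real or proposed
client API.

    # Hypothetical client calls, for illustration only.

    def basic_lb_with_loadbalancer(client):
        # loadbalancer/VIPs/Listeners: one extra top-level object up front.
        lb = client.create_loadbalancer(name="web")
        vip = client.create_vip(loadbalancer_id=lb["id"],
                                subnet_id="<subnet>", address="10.0.0.10")
        client.create_listener(vip_id=vip["id"],
                               protocol="HTTP", protocol_port=80)

    def basic_lb_without_loadbalancer(client):
        # VIPs/Listeners: the VIP itself is the top-level object.
        vip = client.create_vip(subnet_id="<subnet>", address="10.0.0.10")
        client.create_listener(vip_id=vip["id"],
                               protocol="HTTP", protocol_port=80)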
Thanks,
Stephen
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807