[openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Samuel Bercovici SamuelB at Radware.com
Sat May 10 12:52:09 UTC 2014


Jorge,

I agree with you that affinity should be supported. I am questioning whether a mandatory load balancer root object is the proper way.
I think that using a mechanism such as Nova's scheduler hints, with affinity hints provided optionally, is a better way.
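
To make this concrete, here is a minimal sketch of what such optional hints could look like on a VIP create call; every field name below is hypothetical and nothing like it exists in today's API:

# Hypothetical sketch: optional Nova-style scheduler hints on VIP
# creation. All field names are invented for illustration; this is
# not a proposed or existing schema.
vip_request = {
    "vip": {
        "name": "web-vip",
        "subnet_id": "PRIVATE_SUBNET_ID",
        "protocol": "HTTP",
        "protocol_port": 80,
        # Tenants that do not care about placement simply omit this;
        # no mandatory root object is involved.
        "scheduler_hints": {
            "different_host": ["OTHER_VIP_ID"],  # anti-affinity
        },
    }
}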

It looks like there are two different discussions here:

1.       A root object with IP properties (IPv4 and/or IPv6) and multiple listeners - as far as I understand, the discussion is whether such a root object is called a VIP or a load-balancer.

Is calling this object "load-balancer" enough to address your needs?

2.       A root object that models a virtual load balancer instance containing multiple different VIPs, about which I have raised my concerns.
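
To keep the two shapes distinct, rough sketches of both (all field names are illustrative only, not an agreed schema):

# 1. A root object with IP properties and multiple listeners
#    (the naming question: is this a "vip" or a "load_balancer"?).
root_with_ips = {
    "load_balancer": {  # or "vip", depending on the naming outcome
        "ipv4_address": "203.0.113.10",
        "ipv6_address": "2001:db8::10",
        "listeners": [
            {"protocol": "HTTP", "protocol_port": 80},
            {"protocol": "HTTPS", "protocol_port": 443},
        ],
    }
}

# 2. A root object modeling a virtual load balancer instance that
#    contains multiple different VIPs (the variant I am questioning).
instance_with_vips = {
    "load_balancer": {
        "flavor": "dedicated",
        "vips": [
            {"ipv4_address": "203.0.113.10", "listeners": []},  # listeners elided
            {"ipv4_address": "198.51.100.7", "listeners": []},
        ],
    }
}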

-Sam.


From: Jorge Miramontes [mailto:jorge.miramontes at RACKSPACE.COM]
Sent: Saturday, May 10, 2014 1:00 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Sam, our larger customers especially care about affinity since they have many load balancer instances; their use case usually centers around being re-sellers. Also, customers with deployments that utilize several load balancers have created tickets asking us to ensure those load balancers are on different host machines, so that they can mitigate host machine outages (we currently don't allow the tenant to choose affinity within a cluster, only across DCs). In a nutshell, the use cases are relevant, especially to our larger customers/tenants (i.e. the customers we pay special attention to since they bring in the majority of revenue).

Let me know if I am misunderstanding this, and please explain it
further.
A single neutron port can have many fixed ips on many subnets.  Since
this is the case you're saying that there is no need for the API to
define multiple VIPs since a single neutron port can represent all the
IPs that all the VIPs require?
Right, if you want to have both IPv4 and IPv6 addresses on the VIP, then it's possible with a single neutron port.
So multiple VIPs for this case are not needed.

Eugene/Sam, a single Neutron port does allow fixed IPs from multiple subnets. However, those subnets must all belong to the same network, which precludes tenants from having a load balancer that serves multiple networks. An example use case is the following:

"As a tenant I have several isolated private networks that were created to host different aspects of my business. They have been in use for a while. I also have a new shared service (i.e. a database, wiki, etc.) that needs to be load balanced. I want each isolated private network to access the load balanced service."

As you can see, this requires multiple VIPs. I can think of several other use cases, but I agree with others that even if multiple VIPs weren't needed (which they are), a load balancer object is still needed for everything that Stephen presented.
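
A hedged sketch of what that use case could look like with a multi-VIP load balancer object (field names and IDs are hypothetical, not a concrete proposal):

# Hypothetical payload: one load balancer object exposing a VIP on
# each isolated private network, so every network can reach the
# shared service. Names and IDs are placeholders.
shared_service_lb = {
    "load_balancer": {
        "name": "shared-wiki",
        "vips": [
            {"subnet_id": "FINANCE_SUBNET_ID"},
            {"subnet_id": "ENGINEERING_SUBNET_ID"},
            {"subnet_id": "SALES_SUBNET_ID"},
        ],
        "listeners": [{"protocol": "HTTP", "protocol_port": 80}],
        "pool_id": "WIKI_POOL_ID",
    }
}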

Cheers,
--Jorge

From: Samuel Bercovici <SamuelB at Radware.com>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Friday, May 9, 2014 3:37 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

It boils down to two aspects:

1.      How common is it for a tenant to care about affinity, or to have more than a single VIP, such that adding an additional (mandatory) construct makes sense for them to handle?

For example, if 99% of users do not care about affinity or will only use a single VIP (with multiple listeners), does adding an additional object that tenants need to know about make sense?

2.      Scheduling this so that it can be handled efficiently by different vendors and SLAs. We can elaborate on this F2F next week.

Can providers share their statistics to help us understand how common those use cases are?

Regards,
                -Sam.



From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Friday, May 09, 2014 9:26 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Hi Eugene,

This assumes that 'VIP' is an entity that can contain both an IPv4 address and an IPv6 address. This is how it is in the API proposal and corresponding object model that I suggested, but it is a slight re-definition of the term "virtual IP" as it's used in the rest of the industry. (And again, we're not yet in agreement that 'VIP' should actually contain two ip addresses like this.)

In my mind, the main reasons I would like to see the container object are:

- It solves the colocation / apolocation (i.e. affinity / anti-affinity) problem for VIPs in a way that is much more intuitive to understand and less confusing for users than either the "hints" included in my API, or something based off the nova blueprint for doing the same for virtual servers/containers. (Full disclosure: There would probably still be a need for some anti-affinity logic at the logical load balancer level as well, though at that point it would be an operator concern only, expressed to the user in the "flavor" of the logical load balancer object, and probably associated with different billing strategies. "The user wants a dedicated physical load balancer? Then he should create one with this flavor, and note that it costs this much more...")
- From my experience, users are already familiar with the concept of what a logical load balancer actually is (i.e. something that resembles a physical or virtual appliance from their perspective). So this probably fits into their view of the world better.
- It makes sense for "Load Balancer as a Service" to hand out logical load balancer objects. I think this will aid in a more intuitive understanding of the service for users who otherwise don't want to be concerned with operations.
- This opens up the option for private cloud operators / providers to bill based on the number of physical load balancers used (if the "logical load balancer" happens to coincide with physical load balancer appliances in their implementation) in a way that is going to be seen as "more fair" and "more predictable" by the user, because the user has more control over it. And it seems to me this is accomplished without placing any undue burden on public cloud providers, those who don't bill this way, or those for whom the "logical load balancer" doesn't coincide with physical load balancer appliances.
- Attaching a "flavor" attribute to a logical load balancer seems like a better idea than attaching it to the VIP. What if the user wants to change the flavor on which their VIP is deployed (i.e. without changing IP addresses)? What if they want to do this for several VIPs at once? I can definitely see this happening in our customer base through the lifecycle of many of our customers' applications. (See the sketch after this list.)
- Having flavors associated with load balancers and not VIPs also allows operators to offer many more differentiated products in a way that is simple for the user to understand. For example:
  - "Flavor A" is the cheap load balancer option, deployed on a "shared" platform used by many tenants, that has fewer guarantees around performance and costs X.
  - "Flavor B" is guaranteed to be deployed on "vendor Q's Super Special Product (tm)" but, to keep down costs, may be shared with other tenants, though not among a single tenant's "load balancers" unless the tenant uses the same load balancer id when deploying their VIPs (i.e. the user controls affinity among their own VIPs, but not whether affinity happens with other tenants). It may experience variable performance as a result, but has higher guarantees than the above and costs a little more.
  - "Flavor C" is guaranteed to be deployed on "vendor P's Even Better Super Special Product (tm)" and is also guaranteed not to be shared among tenants. This is essentially the "dedicated load balancer" option that gets you the best guaranteed performance, but costs a lot more than the above.
  - ...and so on.
- A logical load balancer object is a great demarcation point <http://en.wikipedia.org/wiki/Demarcation_point> between operator concerns and user concerns. It seems likely that an operator API will be created, and it will need to interface with the user API at some well-defined boundary. (If you like, I can provide a couple of specific operator concerns that are much more easily addressed, without disrupting the user experience, by putting the demarc at the 'load balancer' instead of at the 'VIP'.)
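
A minimal sketch of the flavor-change point above, assuming a hypothetical PUT endpoint on the logical load balancer; the URL, field names, and IDs are all invented:

import json
import httplib  # Python 2 stdlib

# One call moves the whole logical load balancer (and every VIP on
# it) to a new flavor; no VIP addresses change. Endpoint, IDs, and
# token are placeholders, not an existing API.
conn = httplib.HTTPConnection("neutron.example.com", 9696)
conn.request(
    "PUT",
    "/v2.0/lb/loadbalancers/LB_ID",
    json.dumps({"load_balancer": {"flavor": "vendor-q-shared"}}),
    {"Content-Type": "application/json", "X-Auth-Token": "TOKEN"},
)
print conn.getresponse().status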

So what are the main arguments against having this container object? In answering this question, please keep in mind:

- If you say "implementation details," please just go ahead and be more specific, because that's what I'm going to ask you to do anyway. If "implementation details" is the concern, please follow this with a hypothetical or concrete example of what kinds of implementations this object would invalidate simply by existing in the model, or what restrictions this object introduces.
- If you say "I don't see a need," then you're really just asking people to come up with a use case that is more easily solved using the logical load balancer object rather than the VIP without the load balancer. I hope my reasons above address this, but I'm happy to be more specific if you'd like: please point out how my examples above are not convincing reasons for having this object, and I will be more specific.

Thanks,
Stephen


On Fri, May 9, 2014 at 1:36 AM, Eugene Nikanorov <enikanorov at mirantis.com> wrote:
Hi Brandon

Let me know if I am misunderstanding this, and please explain it
further.
A single neutron port can have many fixed ips on many subnets.  Since
this is the case you're saying that there is no need for the API to
define multiple VIPs since a single neutron port can represent all the
IPs that all the VIPs require?
Right, if you want to have both IPv4 and IPv6 addresses on the VIP, then it's possible with a single neutron port.
So multiple VIPs for this case are not needed.
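
For concreteness, a minimal sketch of this with python-neutronclient, assuming a network that already has one IPv4 and one IPv6 subnet (all IDs and credentials are placeholders):

from neutronclient.v2_0 import client

neutron = client.Client(
    username="demo", password="secret", tenant_name="demo",
    auth_url="http://keystone.example.com:5000/v2.0",
)
# One port, two fixed IPs: one from the IPv4 subnet and one from
# the IPv6 subnet of the same network.
port = neutron.create_port({
    "port": {
        "network_id": "NETWORK_ID",
        "fixed_ips": [
            {"subnet_id": "IPV4_SUBNET_ID"},
            {"subnet_id": "IPV6_SUBNET_ID"},
        ],
    }
})
print port["port"]["fixed_ips"]  # both addresses on the single port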

Eugene.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807