[openstack-dev] [Neutron][LBaaS] API proposal review thoughts
Brandon Logan
brandon.logan at RACKSPACE.COM
Sat May 10 22:49:01 UTC 2014
Hi Sam,
I do not have access to those statistics. I can say, though, that with
our current networking infrastructure, customers that have multiple IPv4
or multiple IPv6 VIPs are in the minority. However, we have received
feature requests for allowing VIPs on our two main networks (public and
private). This is mainly because we do not charge for bandwidth on the
private network, provided the client resides in the same datacenter as
the load balancer (otherwise, it's not accessible by the client).

Having said that, I would still argue that the main reason for having a
load balancer to many VIPs to many listeners is user expectations. A
user expects to configure a load balancer, send that configuration to
our service, and get the details of that fully configured load balancer
back. Is your argument 1) that a user does not expect LBaaS to accept
and return load balancers, or 2) that even if a user expects this, it's
not that important a detail?
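To make that expectation concrete, here is a rough sketch of the kind of
single-call interaction I mean. The endpoint path, field names, and auth
handling below are all hypothetical, not from any current proposal:

    import json
    import requests

    # Hypothetical request: the user describes the whole load balancer
    # up front; names and the endpoint are illustrative only.
    lb_config = {
        "loadbalancer": {
            "name": "web-lb",
            "vips": [{
                "subnet_id": "SUBNET_ID",
                "listeners": [
                    {"protocol": "HTTP", "protocol_port": 80},
                    {"protocol": "HTTPS", "protocol_port": 443},
                ],
            }],
        }
    }

    resp = requests.post(
        "https://lbaas.example.com/v2/loadbalancers",  # hypothetical URL
        headers={"X-Auth-Token": "TOKEN",
                 "Content-Type": "application/json"},
        data=json.dumps(lb_config),
    )

    # The service hands back the fully configured load balancer,
    # including whatever addresses were actually allocated for its VIPs.
    print(resp.json())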
Thanks,
Brandon
On Fri, 2014-05-09 at 20:37 +0000, Samuel Bercovici wrote:
> It boils down to two aspects:
>
> 1. How common is it for a tenant to care about affinity, or to have
> more than a single VIP, in a way that makes an additional (mandatory)
> construct worth handling?
>
> For example, if 99% of users do not care about affinity or will only
> use a single VIP (with multiple listeners), does adding an additional
> object that tenants need to know about make sense?
>
> 2. Scheduling, so that it can be handled efficiently by different
> vendors and SLAs. We can elaborate on this F2F next week.
>
> Can providers share their statistics to help us understand how common
> those use cases are?
>
> Regards,
>
> -Sam.
>
> From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> Sent: Friday, May 09, 2014 9:26 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [Neutron][LBaaS] API proposal review
> thoughts
>
> Hi Eugene,
>
> This assumes that 'VIP' is an entity that can contain both an IPv4
> address and an IPv6 address. This is how it is in the API proposal and
> corresponding object model that I suggested, but it is a slight
> re-definition of the term "virtual IP" as it's used in the rest of the
> industry. (And again, we're not yet in agreement that 'VIP' should
> actually contain two IP addresses like this.)
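>
> As a purely illustrative sketch of that re-definition: the VIP entity
> would carry both address families, along these lines (the field names
> here are mine, nothing is settled):
>
>     from dataclasses import dataclass, field
>     from typing import List, Optional
>
>     @dataclass
>     class Listener:
>         protocol: str        # e.g. "HTTP"
>         protocol_port: int   # e.g. 80
>
>     # Sketch only: a 'VIP' that may hold one address of each family,
>     # plus the listeners bound to it.
>     @dataclass
>     class VIP:
>         ipv4_address: Optional[str] = None
>         ipv6_address: Optional[str] = None
>         listeners: List[Listener] = field(default_factory=list)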
>
> In my mind, the main reasons I would like to see the container object
> are:
>
> * It solves the colocation / apolocation (or affinity /
>   anti-affinity) problem for VIPs in a way that is much more
>   intuitive to understand and less confusing for users than
>   either the "hints" included in my API, or something based off
>   the nova blueprint for doing the same for virtual
>   servers/containers. (Full disclosure: there would probably
>   still be a need for some anti-affinity logic at the logical
>   load balancer level as well, though at that point it would be
>   an operator concern only, expressed to the user in the
>   "flavor" of the logical load balancer object, and probably
>   associated with different billing strategies. "The user wants
>   a dedicated physical load balancer? Then he should create one
>   with this flavor, and note that it costs this much more...")
> * From my experience, users are already familiar with the
>   concept of what a logical load balancer actually is (i.e.
>   something that resembles a physical or virtual appliance from
>   their perspective), so this probably fits their view of the
>   world better.
> * It makes sense for "Load Balancer as a Service" to hand out
>   logical load balancer objects. I think this will aid a more
>   intuitive understanding of the service for users who otherwise
>   don't want to be concerned with operations.
> * This opens up the option for private cloud operators /
>   providers to bill based on the number of physical load
>   balancers used (if the "logical load balancer" happens to
>   coincide with physical load balancer appliances in their
>   implementation) in a way that the user will see as "more
>   fair" and "more predictable", because the user has more
>   control over it. And it seems to me this is accomplished
>   without placing any undue burden on public cloud providers,
>   those who don't bill this way, or those for whom the "logical
>   load balancer" doesn't coincide with physical load balancer
>   appliances.
> * Attaching a "flavor" attribute to a logical load balancer
>   seems like a better idea than attaching it to the VIP. What if
>   the user wants to change the flavor on which their VIP is
>   deployed (i.e. without changing IP addresses)? What if they
>   want to do this for several VIPs at once? I can definitely see
>   this happening in our customer base through the lifecycle of
>   many of our customers' applications.
> * Having flavors associated with load balancers and not VIPs
>   also allows operators to offer a much wider range of products
>   in a way that is simple for the user to understand. For
>   example:
>   * "Flavor A" is the cheap load balancer option, deployed
>     on a "shared" platform used by many tenants that has
>     fewer guarantees around performance, and costs X.
>   * "Flavor B" is guaranteed to be deployed on "vendor Q's
>     Super Special Product (tm)" but, to keep down costs,
>     may be shared with other tenants, though not among a
>     single tenant's "load balancers" unless the tenant
>     uses the same load balancer id when deploying their
>     VIPs (i.e. the user has control of affinity among their
>     own VIPs, but no control over whether affinity happens
>     with other tenants; see the sketch after this list). It
>     may experience variable performance as a result, but has
>     higher guarantees than the above and costs a little more.
>   * "Flavor C" is guaranteed to be deployed on "vendor P's
>     Even Better Super Special Product (tm)" and is also
>     guaranteed not to be shared among tenants. This is
>     essentially the "dedicated load balancer" option that
>     gets you the best guaranteed performance, but costs a
>     lot more than the above.
>   * ...and so on.
> * A logical load balancer object is a great demarcation point
>   between operator concerns and user concerns. It seems likely
>   that an operator API will be created, and it will need to
>   meet the user API at some well-defined boundary. (If you
>   like, I can provide a couple of specific operator concerns
>   that are much more easily handled, without disrupting the
>   user experience, with the demarc at the 'load balancer'
>   instead of at the 'VIP'.)
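>
> To make the affinity point above concrete, here is a rough sketch of
> the interaction (the endpoint, resource paths, flavor names, and fields
> are all hypothetical, not a settled API):
>
>     import json
>     import requests
>
>     API = "https://lbaas.example.com/v2"   # hypothetical endpoint
>     HDRS = {"X-Auth-Token": "TOKEN",
>             "Content-Type": "application/json"}
>
>     def post(path, body):
>         # Helper: POST a JSON body and return the decoded response.
>         return requests.post(API + path, headers=HDRS,
>                              data=json.dumps(body)).json()
>
>     # Two VIPs referencing the same load balancer id: the tenant has
>     # asked for them to be colocated (affinity).
>     lb = post("/loadbalancers", {"loadbalancer": {"flavor": "B"}})
>     vip_v4 = post("/vips", {"vip": {"loadbalancer_id": lb["id"],
>                                     "subnet_id": "V4_SUBNET_ID"}})
>     vip_v6 = post("/vips", {"vip": {"loadbalancer_id": lb["id"],
>                                     "subnet_id": "V6_SUBNET_ID"}})
>
>     # A VIP that must be kept apart goes on a separate load balancer
>     # (anti-affinity), possibly with a different flavor.
>     lb2 = post("/loadbalancers", {"loadbalancer": {"flavor": "C"}})
>     vip_iso = post("/vips", {"vip": {"loadbalancer_id": lb2["id"],
>                                      "subnet_id": "V4_SUBNET_ID"}})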
>
> So what are the main arguments against having this container object?
> In answering this question, please keep in mind:
>
> * If you say "implementation details," please just go ahead and
>   be more specific, because that's what I'm going to ask you to
>   do anyway. If "implementation details" is the concern, please
>   follow this with a hypothetical or concrete example of what
>   kinds of implementations this object would invalidate simply
>   by existing in the model, or what restrictions this object
>   introduces.
> * If you say "I don't see a need," then you're really just asking
>   people to come up with a use case that is more easily solved
>   using the logical load balancer object than the VIP without
>   the load balancer. I hope my reasons above address this, but
>   I'm happy to be more specific if you'd like: please point out
>   how my examples above are not convincing reasons for having
>   this object, and I will elaborate.
>
> Thanks,
>
> Stephen
>
> On Fri, May 9, 2014 at 1:36 AM, Eugene Nikanorov
> <enikanorov at mirantis.com> wrote:
>
> Hi Brandon,
>
> > Let me know if I am misunderstanding this, and please explain it
> > further.
> > A single neutron port can have many fixed IPs on many subnets. Since
> > this is the case, you're saying that there is no need for the API to
> > define multiple VIPs, since a single neutron port can represent all
> > the IPs that all the VIPs require?
>
>
> Right, if you want to have both IPv4 and IPv6 addresses on the VIP,
> then it's possible with a single neutron port.
>
>
> So multiple VIPs for this case are not needed.
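>
> For example, creating such a dual-stack port with python-neutronclient
> might look roughly like this (ids and credentials are placeholders, and
> the exact auth arguments vary by deployment):
>
>     from neutronclient.v2_0 import client
>
>     neutron = client.Client(username="USER", password="PASS",
>                             tenant_name="TENANT",
>                             auth_url="http://keystone:5000/v2.0")
>
>     port = neutron.create_port({
>         "port": {
>             "network_id": "NETWORK_ID",
>             "fixed_ips": [
>                 {"subnet_id": "IPV4_SUBNET_ID"},  # v4 address from here
>                 {"subnet_id": "IPV6_SUBNET_ID"},  # v6 address from here
>             ],
>         }
>     })
>     # One port, two fixed IPs: one IPv4 and one IPv6.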
>
> Eugene.
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807