[openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Stephen Balukoff sbalukoff at bluebox.net
Sat May 10 16:50:21 UTC 2014


Hi Eugene,

A couple notes of clarification:

On Sat, May 10, 2014 at 2:30 AM, Eugene Nikanorov
<enikanorov at mirantis.com> wrote:

>
> On Fri, May 9, 2014 at 10:25 PM, Stephen Balukoff <sbalukoff at bluebox.net> wrote:
>
>> Hi Eugene,
>>
>> This assumes that 'VIP' is an entity that can contain both an IPv4
>> address and an IPv6 address. This is how it is in the API proposal and
>> corresponding object model that I suggested, but it is a slight
>> re-definition of the term "virtual IP" as it's used in the rest of the
>> industry. (And again, we're not yet in agreement that 'VIP' should actually
>> contain two ip addresses like this.)
>>
> That seems like a minor issue to me. Maybe we can just introduce a statement
> that a VIP has an L2 endpoint first of all?
>

Well, sure, except the user is going to want to know what the IP
address(es) are for obvious reasons, and expect them to be taken from
subnet(s) the user specifies. Asking the user to provide a Neutron
network_id (ie. where we'll attach the L2 interface) isn't definitive here
because a neutron network can contain many subnets, and these subnets might
be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
might cause us problems if the IPv4 subnet provided and the IPv6 subnet
provided are not on the same neutron network. In that scenario, we'd need
two L2 interfaces / neutron ports to service this, and of course some way
to record this information in the model.

We could introduce the restriction that all of the IP addresses / subnets
associated with the VIP must come from the same neutron network, but this
raises the question: why? Why shouldn't a VIP be allowed to connect to
multiple neutron networks to service all its front-end IPs?

If the answer to the above is "there's no reason" or "because it's easier
to implement," then I don't think those are good reasons to apply these
restrictions. If the answer to the above is "because nobody deploys their
IPv4 and IPv6 networks separately like that," then I think you are
unfamiliar with the environments in which many operators must survive, and
with the requirements imposed on us by our users. :P

In any case, if you agree that in the IPv4 + IPv6 case it might make sense
to allow for multiple L2 interfaces on the VIP, doesn't it then also make
more sense to define a VIP as a single IP address (ie. what the rest of the
industry calls a "VIP"), and to call the grouping of all these IP addresses
a 'load balancer'? At that point the number of L2 interfaces required to
service all the IPs in this grouping becomes an implementation problem.
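
To make that concrete, here's a rough sketch (as a Python-ish structure) of
the grouping I have in mind. To be clear, every field name below is
invented purely for illustration and doesn't come from either proposal
verbatim:

    # Hypothetical sketch only -- field names invented for illustration.
    # A 'load balancer' groups one or more VIPs, where each VIP is a single
    # IP address on a single subnet (the usual industry sense of "VIP").
    load_balancer = {
        'id': 'lb-1',
        'flavor': 'gold',
        'vips': [
            {'subnet_id': 'ipv4-subnet-a', 'address': '203.0.113.10'},
            {'subnet_id': 'ipv6-subnet-b', 'address': '2001:db8::10'},
        ],
        # If those two subnets live on different neutron networks, the
        # backend simply creates two neutron ports / L2 interfaces; that
        # detail never has to surface in the user-facing API.
    }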

For what it's worth, I do go back and forth on my opinion on this one, as
you can probably tell. I'm trying to get us to a model that is first and
foremost simple to understand for users, and relatively easy for operators
and vendors to implement.


> In my mind, the main reasons I would like to see the container object are:
>>
>>
>>    - It solves the colocation / apolocation (or affinity / anti-affinity)
>>    problem for VIPs in a way that is much more intuitive to understand and
>>    less confusing for users than either the "hints" included in my API, or
>>    something based off the nova blueprint for doing the same for virtual
>>    servers/containers. (Full disclosure: There probably would still be a need
>>    for some anti-affinity logic at the logical load balancer level as well,
>>    though at this point it would be an operator concern only and expressed to
>>    the user in the "flavor" of the logical load balancer object, and probably
>>    be associated with different billing strategies. "The user wants a
>>    dedicated physical load balancer? Then he should create one with this
>>    flavor, and note that it costs this much more...")
>>
> In fact, that can be solved by scheduling, without letting the user
> control that. The Flavor Framework will be able to address that.
>

I never said it couldn't be solved by scheduling. In fact, my original API
proposal solves it this way!

I was saying that it's *much more intuitive to understand and less
confusing for users* to do it using a logical load balancer construct. I've
yet to see a good argument for why working with colocation_hints /
apolocation_hints or affinity grouping rules (akin to the nova model) is
*easier for the user to understand* than working with a logical load
balancer model.

And by the way--  maybe you didn't see this in my example below, but just
because a user is using separate load balancer objects doesn't mean the
vendor or operator needs to implement these on separate pieces of hardware.
Whether or not the operator decides to let the user have this level of
control will be expressed in the flavor.
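
To illustrate what I mean by "much more intuitive," here's a rough
side-by-side sketch of the two user-facing approaches. Again, everything
here is hypothetical -- the field names are invented for this example only:

    # Hypothetical request bodies -- field names invented for illustration.

    # (a) Hints-based affinity: the user has to know about some other VIP
    #     and reference it explicitly when creating a new one.
    vip_with_hints = {
        'subnet_id': 'ipv4-subnet-a',
        'colocation_hints': ['<id-of-previously-created-vip>'],
    }

    # (b) Load-balancer-based affinity: the user just attaches both VIPs to
    #     the same parent object; affinity (or anti-affinity, via a
    #     different parent) falls out of the structure itself.
    vip_on_loadbalancer = {
        'loadbalancer_id': 'lb-1',
        'subnet_id': 'ipv4-subnet-a',
    }

    # Whether 'lb-1' lands on dedicated hardware, a shared appliance, or a
    # software instance is the operator's call, expressed through flavor.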


>
>>    - From my experience, users are already familiar with the concept of
>>    what a logical load balancer actually is (ie. something that resembles a
>>    physical or virtual appliance from their perspective). So this probably
>>    fits into their view of the world better.
>>
> That might be so, but apparently it goes in the opposite direction from
> Neutron in general (i.e. more abstraction).
>

Doesn't more abstraction give vendors and operators more flexibility in how
they implement it? Isn't that seen as a good thing in general? In any case,
this sounds like "your opinion" more than an actual stated or implied
agenda from the Neutron team. And even if it is an implied or stated
agenda, perhaps it's worth revisiting the reason for having it?


>
>>    - It makes sense for "Load Balancer as a Service" to hand out logical
>>    load balancer objects. I think this will aid in a more intuitive
>>    understanding of the service for users who otherwise don't want to be
>>    concerned with operations.
>>    - This opens up the option for private cloud operators / providers to
>>    bill based on number of physical load balancers used (if the "logical load
>>    balancer" happens to coincide with physical load balancer appliances in
>>    their implementation) in a way that is going to be seen as "more fair" and
>>    "more predictable" to the user because the user has more control over it.
>>    And it seems to me this is accomplished without producing any undue burden
>>    on public cloud providers, those who don't bill this way, or those for whom
>>    the "logical load balancer" doesn't coincide with physical load balancer
>>    appliances.
>>
> I don't see how 'loadbalancer' is better than 'VIP' here, other than
> being a term a bit closer to 'logical loadbalancer'.
>

You have no idea how many support requests my team is going to have to
answer if we name this the wrong thing. And before you start: it's not
really about the name, it's about the concept.


>
>>    - Attaching a "flavor" attribute to a logical load balancer seems
>>    like a better idea than attaching it to the VIP. What if the user wants to
>>    change the flavor on which their VIP is deployed (ie. without changing IP
>>    addresses)? What if they want to do this for several VIPs at once? I can
>>    definitely see this happening in our customer base through the lifecycle of
>>    many of our customers' applications.
>>
> I don't see any problems with the above cases if VIP is the root object.
>

Ok, I'll concede that-- this is probably my weakest argument. But again, I
still think it's easier for the user to understand how to do the above, and
how flavor interacts with affinity / colocation rules using the logical
load balancer object.
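
As a purely hypothetical illustration of that use case (invented field
names again):

    # Hypothetical update -- change the flavor on the parent load balancer;
    # every VIP attached to it keeps its IP address because the VIPs
    # themselves aren't touched at all.
    loadbalancer_update = {
        'loadbalancer': {
            'flavor': 'dedicated',  # was 'shared'
        }
    }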

>
>>    - Having flavors associated with load balancers and not VIPs also
>>    allows for operators to provide a lot more differing product offerings to
>>    the user in a way that is simple for the user to understand. For example:
>>       - "Flavor A" is the cheap load balancer option, deployed on a
>>       "shared" platform used by many tenants that has fewer guarantees around
>>       performance and costs X.
>>       - "Flavor B" is guaranteed to be deployed on "vendor Q's Super
>>       Special Product (tm)" but to keep down costs, may be shared with other
>>       tenants, though not among a single tenant's "load balancers" unless the
>>       tenant uses the same load balancer id when deploying their VIPs (ie. user
>>       has control of affinity among their own VIPs, but no control over whether
>>       affinity happens with other tenants). It may experience variable
>>       performance as a result, but has higher guarantees than the above and costs
>>       a little more.
>>       - "Flavor C" is guaranteed to be deployed on "vendor P's Even
>>       Better Super Special Product (tm)" and is also guaranteed not to be shared
>>       among tenants. This is essentially the "dedicated load balancer" option
>>       that gets you the best guaranteed performance, but costs a lot more than
>>       the above.
>>       - ...and so on.
>>
> Right, that's how flavors are supposed to work, but that's again
> unrelated to whether we make VIP or loadbalancer our root object.
>
>>
>>    - A logical load balancer object is a great demarcation point <http://en.wikipedia.org/wiki/Demarcation_point> between
>>    operator concerns and user concerns. It seems likely that there will be an
>>    operator API created, and this will need to interface with the user API at
>>    some well-defined interface. (If you like, I can provide a couple specific
>>    operator concerns which are much more easily accomplished without
>>    disrupting the user experience using the demarc at the 'load balancer'
>>    instead of at the 'VIP'.)
>>
> It might be fine to have 'loadbalancer' in the admin API, but we're
> discussing the tenant API right now.
> For the admin API, 'loadbalancer' could be a direct representation of a backend.
>

Having different definitions of 'load balancer' for the admin vs. user APIs
is about the scariest idea I've heard in this discussion so far.

And yes, we're discussing the tenant / user API right now. But I don't like
going into these kinds of discussions with blinders on. *Obviously* we're
going to make an admin / operator API, and *obviously* it's going to need
to interact with the user / tenant API in some well-defined fashion. If we
ignore this now, we're only going to be revisiting it (possibly forcing a
revision of the user API) when we get down to defining the admin API. :P



>> So what are the main arguments against having this container object? In
>> answering this question, please keep in mind:
>>
>>
>>    - If you say "implementation details," please just go ahead and be
>>    more specific because that's what I'm going to ask you to do anyway. If
>>    "implementation details" is the concern, please follow this with a
>>    hypothetical or concrete example as to what kinds of implementations this
>>    object would invalidate simply by existing in the model, or what
>>    restrictions this object introduces.
>>
> I personally have never used this as an argument.
>

Right. But those whose opinion you're arguing for have. If they're still
voicing this as an objection, can you please get them to be more specific
about this?


>
>>    - If you say "I don't see a need" then you're really just asking
>>    people to come up with a use case that is more easily solved using the
>>    logical load balancer object rather than the VIP without the load balancer.
>>
> Right, there could be cases that are more 'easily' solved by a loadbalancer
> rather than by other methods, like the aforementioned colocation problem.
> But that's where project-wide design considerations apply. It's better if
> we go with the project's direction, which is going to address those cases by
> other methods rather than by direct user control.
>

Correct me if I'm wrong, but wasn't "the existing API is confusing and
difficult to use" one of the major complaints about it (as voiced in the
IRC meeting on April 10th, starting around... I dunno... 14:13 GMT)? If
that's the case, then the user experience seems like an important concern,
and it possibly trumps some vaguely defined "project direction" which
apparently doesn't take this into account if it vetoes an approach that is
more easily done / understood by the user.

Thanks,
Stephen



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807