[openstack-dev] [Neutron][LBaaS] API proposal review thoughts

Eugene Nikanorov enikanorov at mirantis.com
Sun May 11 05:31:55 UTC 2014


Hi Stephen,

Well, sure, except the user is going to want to know what the IP
> address(es) are for obvious reasons, and expect them to be taken from
> subnet(s) the user specifies. Asking the user to provide a Neutron
> network_id (ie. where we'll attach the L2 interface) isn't definitive here
> because a neutron network can contain many subnets, and these subnets might
> be either IPv4 or IPv6. Asking the user to provide an IPv4 and IPv6 subnet
> might cause us problems if the IPv4 subnet provided and the IPv6 subnet
> provided are not on the same neutron network. In that scenario, we'd need
> two L2 interfaces / neutron ports to service this, and of course some way
> to record this information in the model.
>
Right, that's why the VIP needs a clear definition in relation to the L2 port:
we allow one L2 port per VIP, hence only addresses from subnets on one
network are allowed. That seems like a fair limitation.
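
To make that limitation concrete, here is a minimal sketch of what a VIP
create request and its validation could look like under that rule. The field
names and the helper below are hypothetical, not the actual API:

    # Hypothetical request body: one VIP backed by a single Neutron port,
    # carrying an IPv4 and an IPv6 address whose subnets must belong to the
    # same Neutron network. All field names are illustrative only.
    vip_create_request = {
        "vip": {
            "name": "web-frontend",
            "network_id": "NET-1",           # the single L2 attachment point
            "addresses": [
                {"subnet_id": "SUBNET-V4"},  # IPv4 subnet on NET-1
                {"subnet_id": "SUBNET-V6"},  # IPv6 subnet on NET-1
            ],
        }
    }

    def validate_vip(request, subnet_to_network):
        """Reject the request if its subnets span more than one network."""
        networks = {subnet_to_network[addr["subnet_id"]]
                    for addr in request["vip"]["addresses"]}
        if len(networks) > 1:
            raise ValueError("all VIP addresses must come from subnets on "
                             "the same Neutron network (one L2 port per VIP)")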

We could introduce the restriction that all of the IP addresses / subnets
> associated with the VIP must come from the same neutron network,
>
Right.

> but this begs the question:  Why? Why shouldn't a VIP be allowed to
> connect to multiple neutron networks to service all its front-end IPs?
>

> If the answer to the above is "there's no reason" or "because it's easier
> to implement," then I think these are not good reasons to apply these
> restrictions. If the answer to the above is "because nobody deploys their
> IPv4 and IPv6 networks separately like that," then I think you are unfamiliar
> with the environments in which many operators must survive, or with the
> requirements imposed on us by our users. :P
>
I approach this question from the opposite side: if we allow this, we're
exposing a 'virtual appliance' API, where the user fully controls how the lb
instance is wired, how many VIPs it has, etc.
As I said in the other thread, that is the 'virtual functions vs. virtualized
appliance' question, which is about the general goal of the Neutron project.
The fact that something seems to map more easily onto physical infrastructure
(or onto a concept of physical infra) doesn't mean that the cloud API needs
to follow that.


> In any case, if you agree that in the IPv4 + IPv6 case it might make sense
> to allow for multiple L2 interfaces on the VIP, doesn't it then also make
> more sense to define a VIP as a single IP address (ie. what the rest of the
> industry calls a "VIP"), and call the groupings of all these IP addresses
> together a 'load balancer' ? At that point the number of L2 interfaces
> required to service all the IPs in this VIP grouping becomes an
> implementation problem.
>
> For what it's worth, I do go back and forth on my opinion on this one, as
> you can probably tell. I'm trying to get us to a model that is first and
> foremost simple to understand for users, and relatively easy for operators
> and vendors to implement.
>
Users differ, and you apparently have in mind those who understand networks
and load balancing.

I was saying that it's *much more intuitive to understand and less
> confusing for users* to do it using a logical load balancer construct.
> I've yet to see a good argument for why working with colocation_hints /
> apolocation_hints or affinity grouping rules (akin to the nova model) is
> *easier* *for the user to understand* than working with a logical load
> balancer model.
>
Something done by hand may be much more intuitive than something performed by
the magic of scheduling, flavors, etc.
But that doesn't seem like a good reason to me to put the user in charge of
defining resource placement.
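
To show the contrast we're discussing, here is a rough sketch of the two
request styles; both are purely hypothetical and none of these fields exist
in the current API:

    # Style A: the user groups VIPs explicitly via a loadbalancer container
    # and thereby controls their placement directly.
    loadbalancer = {"loadbalancer": {"name": "lb1", "flavor": "gold"}}
    vip_v4 = {"vip": {"loadbalancer_id": "LB-1", "subnet_id": "SUBNET-V4"}}
    vip_v6 = {"vip": {"loadbalancer_id": "LB-1", "subnet_id": "SUBNET-V6"}}

    # Style B: VIPs stand alone; the user only expresses intent through a
    # hint, and the service's scheduler decides the actual grouping.
    vip_v4 = {"vip": {"subnet_id": "SUBNET-V4"}}
    vip_v6 = {"vip": {"subnet_id": "SUBNET-V6",
                      "colocation_hint": "VIP-V4"}}  # "place with vip_v4"

In style A the placement decision is in the user's hands; in style B it stays
with the scheduler, which is the direction I'm arguing for.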


>
> And by the way--  maybe you didn't see this in my example below, but just
> because a user is using separate load balancer objects doesn't mean the
> vendor or operator needs to implement these on separate pieces of hardware.
> Whether or not the operator decides to let the user have this level of
> control will be expressed in the flavor.
>
Yes, and without the container the user has less than that: only balancing
endpoints (VIPs), without direct control over how they are grouped within
instances.
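
As a purely illustrative sketch (nothing here is a defined flavor format), in
a container-less model the grouping policy would sit on the operator's side,
e.g.:

    # Hypothetical operator-side flavor definitions: the user only picks a
    # flavor name; the operator decides whether VIPs created with it may
    # share a backend instance. The user never manipulates the grouping.
    FLAVORS = {
        "gold":   {"dedicated_instance": True},   # one backend per VIP
        "bronze": {"dedicated_instance": False},  # VIPs may be packed together
    }

    def schedule_vip(flavor_name, backends):
        """Toy scheduler: pick a backend according to the flavor policy."""
        if FLAVORS[flavor_name]["dedicated_instance"]:
            return backends.allocate_new()   # hypothetical helper
        return backends.least_loaded()       # hypothetical helper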

>> That might be so, but apparently it goes in the opposite direction from
>> Neutron in general (i.e. more abstraction).
>>
>
> Doesn't more abstraction give vendors and operators more flexibility in
> how they implement it? Isn't that seen as a good thing in general? In any
> case, this sounds like "your opinion" more than an actual stated or implied
> agenda from the Neutron team. And even if it is an implied or stated
> agenda, perhaps it's worth revisiting the reason for having it?
>
I'm relaying the argument of other team members, and it seems valid to me.
You can certainly try to revisit those reasons ;)


>
>> So what are the main arguments against having this container object? In
>>> answering this question, please keep in mind:
>>>
>>>
>>>    - If you say "implementation details," please just go ahead and be
>>>    more specific because that's what I'm going to ask you to do anyway. If
>>>    "implementation details" is the concern, please follow this with a
>>>    hypothetical or concrete example as to what kinds of implementations this
>>>    object would invalidate simply by existing in the model, or what
>>>    restrictions this object introduces.
>>>
>>> I personally never used this as an argument.
>>
>
> Right. But those whose opinion you're arguing for have. If they're still
> voicing this as an objection, can you please get them to be more specific
> about this?
>
>
>>
>>>    -
>>>    - If you say "I don't see a need" then you're really just asking
>>>    people to come up with a use case that is more easily solved using the
>>>    logical load balancer object rather than the VIP without the load balancer.
>>>
>>> Right, there could be cases that are more 'easily' solved by the
>> loadbalancer rather than by other methods, like the aforementioned
>> collocation problem.
>> But that's where project-wide design considerations apply. It's better if
>> we go with the project's direction, which is going to address those cases
>> by other methods rather than by direct user control.
>>
>
> Correct me if I'm wrong, but wasn't "the existing API is confusing and
> difficult to use" one of the major complaints with it (as voiced in the IRC
> meeting, say on April 10th in IRC, starting around... I dunno... 14:13
> GMT)?  If that's the case, then the user experience seems like an important
> concern, and possibly trumps some vaguely defined "project direction" which
> apparently doesn't take this into account if it's vetoing an approach which
> is more easily done / understood by the user.
>

The existing API is confusing because the virtual functions it provides don't
address the use cases well.
The idea of a 'loadbalancer' is about grouping those virtual functions into a
virtual appliance, which is what Neutron is trying to move away from (per my
understanding).
That's the whole argument, and it suggests that the use cases for which you
want a container could be addressed differently.

Thanks,
Eugene.

>
> Thanks,
> Stephen
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>