[openstack-dev] [Neutron][LBaaS] BBG edit of new API proposal
Stephen Balukoff
sbalukoff at bluebox.net
Thu Apr 24 00:07:54 UTC 2014
Hi Brandon!
Thanks for the questions. Responses inline:
On Wed, Apr 23, 2014 at 2:51 PM, Brandon Logan
<brandon.logan at rackspace.com> wrote:
> Hey Stephen!
> Thanks for the proposal and spending time on it (I know it is a large time
> investment). This is actually very similar in structure to something I had
> started on except a load balancer object was the root and it had a
> one-to-many relationship to VIPs and each VIP had a one-to-many
> relationship to listeners. We decided to scrap that because it became a
> bit complicated, and the concept of sharing VIPs across load balancers
> (single port and protocol this time) accomplished the same thing but with
> a more streamlined API. The multiple VIPs having multiple listeners was
> the main complexity and your proposal does not have that either.
>
> Anyway, some comments and questions on your proposal are listed below.
> Most are minor quibbles, questions, and suggestions that can probably be
> fleshed out later when we decide on one proposal. I am going to use your
> object names as terminology.
>
> 1. If a VIP can have IPv4 and IPv6 IPs at the same time is that really a
> single VIP? Why not call that a load balancer? I'm always going to
> advocate for calling the root object a load balancer, and I think even in
> this proposal calling the VIP a load balancer makes sense. Renaming your
> model's load balancer to something else should be trivial.
>
A couple things about this: The more I think about it, the more I like the
idea of not using the term 'loadbalancer' for any of the primitives in the
model. In my original version of this we had called the thing presently
known as a load balancer a "cluster" but then that's also a pretty
over-used term. The problem with the term "loadbalancer" is that people
seem to jump to conclusions too readily as to what it means (which slows
down discussions all over the place-- both talking to users and talking to
Neutron core developers). I was *really* tempted to call it what we've
started to call it internally ("Efkalb," reminiscent of the "squonky"
discussion we had earlier)... but then I figured it probably made more
sense to stick with the glossary this group has already agreed upon for
now. I'm still for changing the name and making the policy decision that no
single primitive will be called a "loadbalancer." My thought is that this
whole project we're working on is "the load balancer," but when we're
talking about the nitty-gritty details, we should use more specific terms.
:/
Having said this, I think VIP still makes good sense for the name of the
object that is the root of the user API. One of the key characteristics of
a "virtual IP" as the world knows it is that it can "float" from machine to
machine as necessary. The fact that this object can have both an IPv4 and
an IPv6 address just means these addresses should "float" together.
We had considered creating an object that just has an 'ip_address' field
and an 'address_type' field (or something similar) to distinguish between
IPv4 and IPv6. But we decided to go with what's there because of practical
considerations:
The *only* case we've seen in use by our customers where they want services
exposed on both IPv4 and IPv6 is where they're delivering the exact same
functionality or content, just over a different IP version. This means
that in a load balancer configuration, the listeners, pools, rules, SSL
certificates, etc. are all the same for both the IPv4 and IPv6 service. To
allow for an IPv4 and IPv6 VIP to share all the same child primitives, we'd
have to make some kind of other primitive which acts as a join between VIPs
and Listeners (as you alluded to above was in your original model), which
seemed like a lot more complication than was necessary, considering the
only case where we'd likely see it in use.
Also, in a lot of actual implementations, it's more efficient to have
a single process listen on multiple IPs if they're going to have the same
back-ends, rules, SSL certs, etc.
So we cheated a bit, and said a 'VIP' can have both a single IPv4 and a
single IPv6 address.
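To make that concrete, here's roughly what a VIP create request might look
like under this model. This is a minimal sketch in Python; the
'ipv4_address'/'ipv6_address' field names are just my shorthand here, not
settled names from the proposal:

    import json

    # One VIP carrying both address families. The two addresses "float"
    # together, and every child listener / pool applies to both of them.
    # All field names are illustrative.
    vip_request = {
        "vip": {
            "name": "web-frontend",
            "ipv4_address": "203.0.113.10",
            "ipv6_address": "2001:db8::10",
        }
    }
    print(json.dumps(vip_request, indent=2))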
It's also worth saying that in designing this object model, we took pains
to minimize the proliferation of primitive objects, because that leads to a
lot of complication most people never need anyway. It's also hard to get
this group to agree to drastic changes, since backward compatibility and
the need to support whatever models we do agree on for a very long time
both factor into the cost of maintaining things.
> 2. How would a user be able to add another IPv4 or IPv6 IP to the same VIP?
>
A single VIP can have only a single IPv4 address and a single IPv6 address.
If you need another IPv4 address, create another VIP. :)
> 3. Pool does not have a subnet attribute, how do you define what subnet
> the pool members should be on?
>
So, this is one where I'm a bit torn: It's clear from some of the
re-encryption discussion that sometimes members of the same pool will be in
different subnets. (The example given was that some members are local, and
some are hosted with an entirely different provider.) Given this, it
doesn't make sense to assume that all members of the same pool
will be in the same subnet.
What's really important, in any case, is that the appliance or service
which actually does the load balancing is able to communicate directly or
indirectly with each of the members. We solved this at our company by
having the appliance talk IPv6 to each member when it wasn't hosted on
the same back-end network as the members. Since IPv6 addresses are globally
routable, this becomes a cinch to manage. However, in the world of Neutron
networking, where tenants can potentially create an arbitrary number of
virtual networks with overlapping RFC 1918 IP space... things
get a lot more complicated.
So maybe it makes sense to have 'neutron_subnet' as an optional attribute
on the member objects? (Again, it only becomes important if the device(s)
doing the load balancing need to have something like a neutron network port
connected to the subnets the members are using.)
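Something like this, say (a sketch only-- the field name and the
optionality of the subnet hint are my assumptions, not part of the posted
proposal):

    # Two members of the same pool. The first sits on a tenant network the
    # load balancing device may need to plug into, so it carries a subnet
    # hint; the second is reachable via ordinary routing, so it doesn't.
    members = [
        {"address": "10.0.0.5",
         "protocol_port": 80,
         "neutron_subnet": "TENANT-SUBNET-UUID"},  # optional hint
        {"address": "198.51.100.7",
         "protocol_port": 80},                     # routable member
    ]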
Keep in mind, also, that I don't know of any load balancing solution at
present that's prepared to deal with overlapping IP space in a single
instance like this either. (For example, if I have member A on
neutron_subnet B with IP address 10.0.0.5 in the pool, and member X on
neutron_subnet Y with IP address 10.0.0.5 in the same pool... is there any
solution available today that can actually deal with this?)
Again, I'm still a bit torn on this and would love to hear others' ideas...
Everything I've come up with so far looks like it's so dependent on the
implementation details for the cloud operator that it's hard to know what's
appropriate to make part of the model.
> 4. In the single create call, how would a user reuse an object that is
> defined inside that request body since they will not have an actual id.
>
I understand there are actually ways to specify this in JSON, but I also
understand these are nasty, difficult to understand, and easy to screw up.
If you know you're going to want to re-use a specific primitive a whole
bunch, I would recommend creating that primitive as a separate operation
prior to the "single-call," then just referencing its ID in your single
call.
Yes, technically this isn't single-call anymore, but it's probably easier
than the alternative.
Alternatively, you can just repeat the same object over and over again (i.e.,
don't reuse). The LBaaS code shouldn't care if two primitives are identical
in every way except ID.
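Putting both options side by side (a hedged sketch-- the body layout, field
names, and ID are illustrative, not the proposal's exact schema):

    import json

    # Option 1: pre-create the pool in its own call, then reference its ID
    # from the single-call body so both listeners share it.
    shared_pool_id = "POOL-UUID-FROM-EARLIER-CREATE"  # illustrative

    single_call = {
        "vip": {
            "name": "web",
            "listeners": [
                {"protocol": "HTTP", "protocol_port": 80,
                 "pool_id": shared_pool_id},
                {"protocol": "HTTPS", "protocol_port": 443,
                 "pool_id": shared_pool_id},
            ],
        }
    }

    # Option 2: skip the reuse entirely and inline an identical pool
    # definition under each listener; the LBaaS code shouldn't care that
    # the two pools match in everything but ID.
    print(json.dumps(single_call, indent=2))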
> 5. I would like to see expanded details of child objects when getting the
> details of an object (i.e. GET /pools shows details of a health monitor)
>
We considered this as well-- and it would be pretty easy to add. We didn't
do it in the proposal just because we wanted to get the proposal out the
door. :)
> 6. Why is there a protocol on the pool object and the listener object? Is
> this for translating from secure protocols to insecure protocols (i.e.
> HTTPS to HTTP).
>
Yep. It's important that the protocol of the listener be compatible with
the protocol of the pool (and the protocol of the health check). It doesn't
make sense to have an HTTPS listener that points to an IMAP pool. (Not that
we're proposing supporting IMAP at this time... but... yeah.)
Note that I didn't say they had to be equivalent, just compatible. As you
point out, there are valid use cases for having an HTTPS listener and an
HTTP pool.
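If it helps, here's one way a validator might encode "compatible, not
equivalent" (purely illustrative Python-- the protocol table is my
assumption, not something from the proposal):

    # Which pool protocols each listener protocol may point at.
    # HTTPS -> HTTP covers the terminate-at-the-load-balancer case.
    COMPATIBLE = {
        "HTTP": {"HTTP"},
        "HTTPS": {"HTTPS", "HTTP"},
        "TCP": {"TCP"},
    }

    def protocols_compatible(listener_proto, pool_proto):
        return pool_proto in COMPATIBLE.get(listener_proto, set())

    assert protocols_compatible("HTTPS", "HTTP")      # TLS termination
    assert not protocols_compatible("HTTPS", "IMAP")  # nonsense pairing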
> 7. When returning lists of objects (i.e. GET /vips, GET /pools) I'd like
> to see the name returned as well.
>
Yep, that makes sense-- see the answer to #5 above.
> 8. Can all primitives be shared among other parent objects belonging to
> the same tenant?
>
No. We didn't see great enough need to allow members to be shared among
pools, nor L7Rules among L7Policies. (In fact, I was really tempted to
roll all the attributes of the Health monitor into the Pool primitive since
I don't see much use in sharing health monitors either.) If we allowed
this kind of sharing, we'd need to add more "join" primitives to allow for
the n:m relationship between pools and members, for example.
As a rule of thumb that I try to follow: If you have a 'join' primitive
that has no attributes other than the IDs of the primitives you're
joining... take a closer look at your object model. There's probably a way
to do it without the join. :)
> 9. Can pool members be shared between pools on the same tenant?
> -if so, what happens if two pools are sharing the same pool member, one
> pool has a health monitor, the other does not. The pool member's status
> will get updated to "DOWN" for both pools.
> -if not, why not just make them children resources of /pools (i.e.
> /pools/{pool_id}/members).
>
The answer is: No. And you're right! We could easily make members a pure
leaf primitive connected to the pool primitive. This would mean that when
the pool is destroyed, the members are implicitly destroyed. The same is
true for L7Rules under L7Policies.
Can anyone think of an instance where a user might want to destroy a pool,
but leave its member primitives intact (presumably so they could be
attached to another pool at a later time)?
Absent any counterexamples, I'm all for making Members a leaf primitive of
Pools, and L7Rules a leaf primitive of L7Policies.
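In REST terms, that would mean member URLs nest under their pool and go
away with it-- something like this (illustrative paths, not the proposal's
exact routes):

    POST   /pools/{pool_id}/members        create a member inside a pool
    GET    /pools/{pool_id}/members/{id}   show one member
    DELETE /pools/{pool_id}                implicitly deletes its members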
>
> Again, thanks for spending the time on this. It has a lot of good ideas
> and things we did not think about. We've been requested to do a POC of our
> proposal, will you and your team be able to do the same?
>
No problem, eh! I'm glad you've apparently found it useful. :)
And... I guess it depends on what form the POC has to take. Blue Box is a
much, much smaller organization than Rackspace, so we don't have nearly as
much man-power for pursuing POCs and whatnot as I would like. (It's a
normal course of events for prototypes we build to end up being put into
production for a given customer shortly thereafter. We do a lot of custom
things for our customers, and occasionally we get to build something more
general purpose, like the load balancer software appliances we made...)
Assuming we're still making progress in the discussion on things like this
API revision, or how exactly we're going to solve the operator concerns
around HA functionality... I'm hoping to spend more time porting our
software appliance to a form that can be deployed on-demand for OpenStack.
(We're open-sourcing the software appliance for this purpose-- so far,
though, I've only had time to work on refining its API documentation...
which is of course not the same one we've been discussing here. XD)
What did you have in mind, as far as a POC is concerned?
Thanks,
Stephen
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807