[openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Stephen Balukoff sbalukoff at bluebox.net
Wed May 28 02:42:01 UTC 2014


Hi y'all!


On Tue, May 27, 2014 at 12:32 PM, Brandon Logan
<brandon.logan at rackspace.com> wrote:

> Referencing this blueprint:
>
> https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst
>
> Anyone who has suggestions to possible issues or can answer some of
> these questions please respond.
>
>
> 1. LoadBalancer to Listener relationship M:N vs 1:N
> The main reason we went with the M:N was so IPv6 could use the same
> listener as IPv4.  However this can be accomplished by the user just
> creating a second listener and pool with the same configuration.  This
> will end up being a bad user experience when the listener and pool
> configuration starts getting complex (adding in TLS, health monitors,
> SNI, etc). A good reason to not do the M:N is because the logic might
> get complex when dealing with status.  I'd like to get people's opinions
> on whether we should do M:N or just 1:N.  Another option is to just
> implement 1:N right now and later implement the M:N in another blueprint
> if it is decided that the user experience suffers greatly.
>
> My opinion: I like the idea of leaving it to another blueprint to
> implement.  However, we would need to watch out for any major
> architecture changes in the time it is not implemented that could make
> this more difficult than it needs to be.
>

Is there such a thing as a 'possibly planned but not implemented design' to
serve as a placeholder when considering other in-parallel blueprints and
designs which could potentially conflict with the ability to implement an
anticipated design like this?  (I'm guessing "no." I really wish we had a
better design tracking tool than blueprint.)

Anyway, I don't have a problem with implementing 1:N right now. But I do
want to point out: the one and only common case I've seen where listener
re-use actually makes a lot of sense (IPv4 and IPv6 for the same listener)
could be addressed by adding separate ipv4 and ipv6 attributes to the
loadbalancer object. I believe this was shot down for philosophical reasons
back when people were still calling the object a VIP. Are people more open
to this idea now that we're calling it a 'load balancer'?  ;)
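
For concreteness, here's a rough sketch of the shape I'm picturing
(illustrative Python only; the attribute names are placeholders I made up,
not anything from the blueprint):

class LoadBalancer(object):
    """Illustrative only; not the blueprint's actual model."""

    def __init__(self, name, ipv4_address=None, ipv6_address=None):
        # At least one address family must be supplied; both are allowed.
        if ipv4_address is None and ipv6_address is None:
            raise ValueError("need at least one of ipv4_address/ipv6_address")
        self.name = name
        self.ipv4_address = ipv4_address
        self.ipv6_address = ipv6_address
        self.listeners = []  # 1:N, each listener belongs to this LB only


lb = LoadBalancer(name='web-lb',
                  ipv4_address='203.0.113.10',
                  ipv6_address='2001:db8::10')

The point being that a single logical load balancer could carry both address
families, so every listener attached to it is implicitly reachable over both
without the user duplicating listener and pool configuration.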

Does anyone have any other use cases where listener re-use makes sense?


>
> 2. Pool to Health Monitor relationship 1:N vs 1:1
> Currently, I believe this is 1:N; however, Susanne suggested deprecating
> this in favor of 1:1 and Kyle agreed.  Are there any objections to
> changing to 1:1?
>
> My opinion: I'm for 1:1 as long as there aren't any major reasons why
> there needs to be 1:N.
>
>
Yep, totally on-board with 1:1 for pool and health monitor.
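
Just to spell out what I understand 1:1 to mean at the schema level, a rough
sketch (these are not the actual Neutron LBaaS models, and the table and
column names here are just for illustration):

import sqlalchemy as sa
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class HealthMonitor(Base):
    __tablename__ = 'lbaas_healthmonitors'
    id = sa.Column(sa.String(36), primary_key=True)
    type = sa.Column(sa.String(16), nullable=False)  # e.g. HTTP, TCP


class Pool(Base):
    __tablename__ = 'lbaas_pools'
    id = sa.Column(sa.String(36), primary_key=True)
    name = sa.Column(sa.String(255))
    # 1:1: nullable so a pool may exist without a monitor, unique so a
    # monitor can't be shared between pools.
    healthmonitor_id = sa.Column(sa.String(36),
                                 sa.ForeignKey('lbaas_healthmonitors.id'),
                                 unique=True, nullable=True)

That is, the monitor hangs directly off the pool rather than going through an
association table, which keeps both the API and the status logic simpler.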


> 3. Does the Pool object need a status field now that it is a pure
> logical object?
>
> My opinion: I don't think it needs the status field.  I think the
> LoadBalancer object may be the only thing that needs a status, other
> than the pool members for health monitoring.  I might be corrected on
> this though.
>

So, I think the pool does need a status field when using L7 rules, and
specifically like this:

A pool is 'UP' if at least one non-backup member is 'UP', and 'DOWN'
otherwise. This can be an important monitoring point if, for example,
operations wants to be informed in some meaningful way when the 'api' pool
is down while the 'web frontend' pool is still up (i.e. other than having
to sort through a potential barrage of member-down notifications to see
whether any members of a given pool are still up). Also, given that member
status might be more than just 'UP' and 'DOWN' (e.g. "PROVISIONING" or
"DRAINING" might also be valid status types for certain kinds of
back-ends), it's harder for an operator to know whether a given pool is
'UP' or 'DOWN' without some kind of indication.

Also note that the larger the pool, the less meaningful individual members'
statuses are: If you've got 1000 servers in a pool, it's probably OK if 3-4
of them are 'down' at any given time.

Having said this, if someone asks for the status of a pool, I'd be OK if
the response were simply an array of the statuses of each member in the
pool, plus one 'summary' status (i.e. as described above), leaving it to
the user to do with that data whatever makes sense for their application.
This would allow one to see, for example, that a pool with no members is
by definition 'DOWN'.

Are we going to allow a pool to be set administratively down (e.g. for
maintenance on one pool of servers powering a listener, while other pools
remain online and available)? In this case, having a response that says the
pool is 'ADMIN_DOWN' probably also makes sense.
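
To make the 'summary' idea concrete, here's a minimal sketch of the logic I'm
describing (the status names beyond 'UP'/'DOWN' are hypothetical, and whether
admin-down lives on the pool is exactly the open question above):

def pool_summary_status(pool_admin_state_up, members):
    """Return a single summary status for a pool.

    members is a list of (status, is_backup) tuples, one per pool member.
    """
    if not pool_admin_state_up:
        return 'ADMIN_DOWN'  # pool taken down for maintenance
    # A pool with no members, or with no non-backup member UP, is DOWN.
    if any(status == 'UP' for status, is_backup in members if not is_backup):
        return 'UP'
    return 'DOWN'


# A 1000-member pool with a handful of members down is still 'UP':
members = ([('UP', False)] * 996 + [('DOWN', False)] * 3 +
           [('DRAINING', False)])
print(pool_summary_status(True, members))  # UP

The API response would then just be that summary status plus the per-member
status array, and the user does with that whatever makes sense for their
application.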

Doug:  What do you think of the idea of having both IPv4 and IPv6
attributes on a 'load balancer' object? One doesn't need to have a single
appliance serving both types of addresses for the listener, but there's
certainly a chance (albeit small) of hitting an async scenario if they're
not served by the same appliance.

Vijay:  I think the plan here was to come up with and use an entirely
different DB schema than the legacy Neutron LBaaS, and then retro-fit
backward compatibility with the old user API once the new LBaaS service is
functional. I don't think anyone here is suggesting we try to make the old
schema work for the new API.

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807