[openstack-dev] [Neutron][LBaaS] Unanswered questions in object model refactor blueprint

Brandon Logan brandon.logan at RACKSPACE.COM
Wed May 28 18:10:58 UTC 2014


Hi Stephen,

On Tue, 2014-05-27 at 19:42 -0700, Stephen Balukoff wrote:
> Hi y'all!
> 
> 
> On Tue, May 27, 2014 at 12:32 PM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
>         Referencing this blueprint:
>         https://review.openstack.org/#/c/89903/5/specs/juno/lbaas-api-and-objmodel-improvement.rst
>         
>         Anyone who has suggestions to possible issues or can answer
>         some of
>         these questions please respond.
>         
>         
>         1. LoadBalancer to Listener relationship M:N vs 1:N
>         The main reason we went with the M:N was so IPv6 could use
>         the same listener as IPv4.  However, this can be
>         accomplished by the user just creating a second listener and
>         pool with the same configuration.  This will end up being a
>         bad user experience when the listener and pool configuration
>         starts getting complex (adding in TLS, health monitors, SNI,
>         etc.).  A good reason not to do the M:N is that the status
>         logic might get complex.  I'd like to get people's opinions
>         on whether we should do M:N or just 1:N.  Another option is
>         to implement 1:N right now and later implement the M:N in
>         another blueprint if it is decided that the user experience
>         suffers greatly.
>         
>         My opinion: I like the idea of leaving it to another
>         blueprint to implement.  However, we would need to watch out
>         for any major architecture changes in the time it is not
>         implemented that could make this more difficult than it
>         needs to be.
> 
> 
> Is there such a thing as a 'possibly planned but not implemented
> design' to serve as a placeholder when considering other in-parallel
> blueprints and designs which could potentially conflict with the
> ability to implement an anticipated design like this?  (I'm guessing
> "no." I really wish we had a better design tracking tool than
> blueprint.)
I know I have seen blueprints that act as a container for many other
blueprints, all of which are intended to accomplish the container's
goal.  It's much like an epic in scrum.  I wonder if we should create
something like that for the object model refactor, since it is now in
two blueprints and possibly more with this one.
> 
> 
> Anyway, I don't have a problem with implementing 1:N right now. But, I
> do want to point out: The one and only common case I've seen where
> listener re-use actually makes a lot of sense (IPv4 and IPv6 for same
> listener) could be alleviated by adding separate ipv4 and ipv6
> attributes to the loadbalancer object. I believe this was shot down
> when people were still calling it a VIP for philosophical reasons. Are
> people more open to this idea now that we're calling the object a
> 'load balancer'?  ;)
> 

I'm not sure about this.  It can lead down a rabbit hole of being able
to add multiple IPv4 and/or IPv6 addresses on the same LB; it would be
kind of weird to me if we didn't allow that, though it's not a big
deal.  What I am strongly opposed to, though, is adding the IPv6
attribute now and then deprecating it once M:N is implemented, since
listener sharing would solve the same problem.  I'm not a big fan of
implementing something just to deprecate it, but sometimes it can be
necessary.  In this case I'd rather just say we are only ever going to
do 1:N and add the IPv6 address attribute to the load balancer, or
that we are definitely going to do the M:N and not add it at all.
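
For reference, the alternative Stephen describes would look roughly
like this on the API side (field names here are purely hypothetical,
not from the spec), with no listener sharing needed for the
dual-stack case:

    # One load balancer carrying both address families directly.
    loadbalancer = {
        'name': 'web-lb',
        'vip_address': '203.0.113.10',        # IPv4 VIP
        'ipv6_vip_address': '2001:db8::10',   # optional IPv6 VIP
        'listeners': ['<listener-id>'],       # single listener config
    }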

> Does anyone have any other use cases where listener re-use makes
> sense?

It really only ever makes sense, even with IPv4 and IPv6, as a
mechanism to keep a user from having to duplicate their listener and
pool configuration.  If two load balancers are sharing a listener,
it's really up to the driver/backend where the load balancers and
VIPs are located.
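
For concreteness, here's a rough sketch of what the two options might
look like as SQLAlchemy models (all names are illustrative, not from
the blueprint):

    from sqlalchemy import Column, ForeignKey, String, Table
    from sqlalchemy.ext.declarative import declarative_base
    from sqlalchemy.orm import relationship

    Base = declarative_base()

    class Listener(Base):
        __tablename__ = 'listeners'
        id = Column(String(36), primary_key=True)
        # Option A (1:N): just a plain foreign key on the listener,
        # so each listener belongs to exactly one load balancer.
        loadbalancer_id = Column(String(36),
                                 ForeignKey('loadbalancers.id'),
                                 nullable=True)

    # Option B (M:N): a join table lets several load balancers (e.g.
    # an IPv4 one and an IPv6 one) share a single listener, at the
    # cost of tracking status per (load balancer, listener) pair.
    loadbalancer_listener = Table(
        'loadbalancer_listeners', Base.metadata,
        Column('loadbalancer_id', String(36),
               ForeignKey('loadbalancers.id'), primary_key=True),
        Column('listener_id', String(36),
               ForeignKey('listeners.id'), primary_key=True))

    class LoadBalancer(Base):
        __tablename__ = 'loadbalancers'
        id = Column(String(36), primary_key=True)
        listeners = relationship('Listener',
                                 secondary=loadbalancer_listener)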

>  
>         
>         2. Pool to Health Monitor relationship 1:N vs 1:1
>         Currently, I believe this is 1:N; however, it was suggested
>         by Susanne to deprecate this in favor of 1:1, and Kyle
>         agreed.  Are there any objections to changing to 1:1?
>         
>         My opinion: I'm for 1:1 as long as there aren't any major
>         reasons why there needs to be 1:N.
>         
> 
> 
> Yep, totally on-board with 1:1 for pool and health monitor.
>  
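
For what it's worth, the 1:1 change looks small on the model side.  A
rough sketch, with illustrative names only:

    from sqlalchemy import Column, ForeignKey, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Pool(Base):
        __tablename__ = 'pools'
        id = Column(String(36), primary_key=True)
        # 1:1 instead of 1:N: a single, optional health monitor per
        # pool; unique=True keeps two pools from sharing one monitor.
        healthmonitor_id = Column(String(36),
                                  ForeignKey('healthmonitors.id'),
                                  unique=True, nullable=True)
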
>         3. Does the Pool object need a status field now that it is
>         a purely logical object?
>         
>         My opinion: I don't think it needs the status field.  I
>         think the LoadBalancer object may be the only thing that
>         needs a status, other than the pool members for health
>         monitoring.  I might be corrected on this though.
> 
> 
> So, I think it does make sense when using L7 rules. And it's
> specifically like this:
> 
> 
> A pool is 'UP' if at least one non-backup member is 'UP', and 'DOWN'
> otherwise. This can be an important monitoring point if, for example,
> operations wants to be informed if the 'api pool' is down while the
> 'web frontend' pool is still up, in some meaningful way (i.e. other
> than having to sort through a potential barrage of member-down
> notifications to see whether any members of a given pool are still
> up). Also, member status might be more than just 'UP' and 'DOWN'
> (e.g. "PROVISIONING" or "DRAINING" might also be valid status types
> for certain kinds of back-ends), so it's harder for an operator to
> know whether a given pool is 'UP' or 'DOWN' without some kind of
> indication.
> 
> 
> Also note that the larger the pool, the less meaningful individual
> members' statuses are: If you've got 1000 servers in a pool, it's
> probably OK if 3-4 of them are 'down' at any given time.
> 
> 
> Having said this, if someone asks for the status of a pool, I'd be OK
> if the response was simply an array of statuses of each member in the
> pool, plus one 'summary' status (i.e. as described above), and it's left
> to the user to do with that data whatever makes sense for their
> application. This would allow one to see, for example, that a pool
> with no members is by definition 'DOWN'.
> 

This would be a good feature to add later for sure.  Until that
feature is actually enabled, would it make sense for the pool not to
have a status field?  Then again, that'd be deprecating something
only to add it back in later.
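
If/when we do add it back, something like this (hypothetical status
and field names) is roughly what Stephen's summary semantics would
boil down to:

    def pool_status(members):
        """Return per-member statuses plus a derived summary.

        `members` is a sequence of objects with hypothetical `id`,
        `status`, and `backup` attributes.  A pool is 'UP' if at
        least one non-backup member is 'UP', and 'DOWN' otherwise,
        so an empty pool is 'DOWN' by definition.
        """
        member_statuses = [{'id': m.id, 'status': m.status}
                           for m in members]
        summary = ('UP' if any(m.status == 'UP'
                               for m in members if not m.backup)
                   else 'DOWN')
        return {'summary': summary, 'members': member_statuses}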

> 
> Are we going to allow a pool to be set administratively down? (e.g. for
> maintenance on one pool of servers powering a listener, while other
> pools remain online and available.) In this case, having a response
> that says the pool is 'ADMIN_DOWN' probably also makes sense.
>  
> Doug:  What do you think of the idea of having both IPv4 and IPv6
> attributes on a 'load balancer' object? One doesn't need to have a
> single appliance serving both types of addresses for the listener, but
> there's certainly a chance (albeit small) to hit an async scenario if
> they're not.
> 
> 
> Vijay:  I think the plan here was to come up with and use an entirely
> different DB schema than the legacy Neutron LBaaS, and then retro-fit
> backward compatibility with the old user API once the new LBaaS
> service is functional. I don't think anyone here is suggesting we try
> to make the old schema work for the new API.
> 

Thanks,
Brandon Logan



