[openstack-dev] [Neutron][LBaaS] Object Model discussion

Eugene Nikanorov enikanorov at mirantis.com
Wed Feb 19 07:00:05 UTC 2014


Thanks for quick response, Stephen,

See my comments inline:


On Wed, Feb 19, 2014 at 6:28 AM, Stephen Balukoff <sbalukoff at bluebox.net>
 wrote:

> Hi y'all!
>
> Eugene:  Are the arrows in your diagrams meaningful?
>
An arrow means 'one object references another'.


> Regarding existing workflows: Do we have any idea how widely the existing
> Neutron LBaaS is being deployed / used in the wild?  (In our environment,
> we don't use it yet because it's not sophisticated enough for many of our
> customers' needs.)  In other words, is breaking backward compatibility a
> huge concern?  In our environment, obviously it's not.
>
It's more a policy than a concern: we need to maintain compatibility for at
least one release cycle before deprecating workflow/API parts.

>
> I personally favor #3 as suggested, but again, I do doubt the need to have
> pools associated with a vip_id, or listener_id:  That is, in larger
> clusters it may be advantageous to have a single pool that is referenced by
> several listeners and VIPs.
>
Agree, the pool can be shared; we concluded this as well during the
discussion with Mark.
Just one concern here: a pool currently has a 'subnet' attribute, which
means the subnet where the members reside. More formally, it means that the
loadbalancer device should have a port on that subnet. (Say, in routed mode
the vip and the pool may be on different subnets, and then the device should
have two ports, one on each of those subnets.)
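Roughly, in code (a minimal sketch; the class and attribute names are
illustrative, not the actual Neutron models):

```python
from dataclasses import dataclass

@dataclass
class Vip:
    id: str
    subnet_id: str  # subnet the VIP address lives on

@dataclass
class Pool:
    id: str
    subnet_id: str  # subnet where the members reside
    vip_id: str     # reference: pool -> vip (an arrow in the diagrams)

def required_ports(vip: Vip, pools: list[Pool]) -> set[str]:
    # In routed mode the vip and a pool may sit on different subnets,
    # so the device needs a port on each distinct subnet involved.
    return {vip.subnet_id} | {p.subnet_id for p in pools}

vip = Vip(id="vip-1", subnet_id="subnet-front")
pool = Pool(id="pool-1", subnet_id="subnet-back", vip_id="vip-1")
print(required_ports(vip, [pool]))  # two subnets -> two device ports
```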

> If we keep the vip_id as an attribute of a pool (implying a pool can belong
> to only one vip), this isn't too bad--  you can duplicate the behavior by
> having multiple pools with the same actual member ips associated (though
> different member object instantiations, of course), and just make sure you
> update all of these "clone" pools whenever adding / removing members or
> changing healthmonitors, etc. It's certainly more housekeeping on the part
> of the application developer, though.
>
Right, it isn't a big deal.
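The "clone pools" housekeeping Stephen describes would look roughly like
this (`client` here is a hypothetical neutronclient-style object, not a
real API; a sketch only):

```python
def add_member_everywhere(client, clone_pool_ids, address, port):
    # Each clone pool gets its own member object with the same address,
    # so every add must be replayed across all clones.
    for pool_id in clone_pool_ids:
        client.create_member(pool_id=pool_id, address=address,
                             protocol_port=port)

def remove_member_everywhere(client, clone_pool_ids, address):
    # Likewise, a removal has to delete the matching member object
    # from every clone pool.
    for pool_id in clone_pool_ids:
        for member in client.list_members(pool_id=pool_id):
            if member["address"] == address:
                client.delete_member(member["id"])
```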


> You mention in the notes that having the pools with a vip_id attribute
> solves a collocation problem. What is this specific collocation problem?
>
When making complex configurations with several pools (L7 rules usage) or
multiple VIPs (or Listeners), the user may want to have all of this in a
single 'logical configuration' for various reasons. One of the important
reasons is resource consumption: the user may want to consume (and pay for)
just one backend, where the whole configuration will be deployed. With the
existing API and workflow that's not quite possible, because 1) the pool is
the root object, and 2) the pool is associated with a backend at the point
when it is created.
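The limitation can be restated in code (purely illustrative scheduling
logic, not the actual plugin code):

```python
# With the pool as the root object, a backend is chosen the moment each
# pool is created, so two pools cannot be forced onto one backend:
def create_pool_current_model(schedule_backend, pool):
    pool["backend"] = schedule_backend(pool)  # binding happens per pool
    return pool

# With a root object (vip or loadbalancer), the backend is chosen once
# for the root, and every pool attached later shares it:
def create_pool_rooted_model(root, pool):
    pool["backend"] = root["backend"]  # no extra backend is consumed
    return pool
```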

If we go with #3, I would keep IP address as an attribute of the VIP in
> this model.
>
Yes, that makes sense. I'd say that we need port_id there, rather than ip.


> As far as not having a 'loadbalancer' entity: Again, I think we're going
> to need something like this when we solve the HA or horizontal scalability
> problem. If we're going to break workflows with the introduction of L7
> rules, then I would prefer to approach the model changes that will need to
> happen for HA and horizontal scalability sooner rather than later, so that
> we don't have to (potentially) contemplate yet another
> workflow-backward-compatibility model change.
>

That will need further clarification. So far we have planned to introduce
HA in such a way that the user can't control it other than via
'enable/disable', so everything related to HA isn't really exposed through
the API. With either approach #2 or #3, HA is just a flag on the instance
indicating that it is deployed in HA mode; the driver then does whatever it
thinks HA is. While HA may require additional DB objects, like additional
associations/bindings between the logical config and the backends, those
objects are not part of the public API.
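In other words, something like this (all names are illustrative, not
actual Neutron code):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    id: str
    ha_enabled: bool = False  # the only HA control exposed to the user

def deploy(driver, instance):
    # The driver decides what an HA deployment actually looks like;
    # any extra bindings it needs stay out of the public API.
    if instance.ha_enabled:
        return driver.deploy_ha(instance)
    return driver.deploy_single(instance)
```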


> Could you please describe what is the 'provider/flavor' attribute of the
> VIP being proposed?
>
Currently we have the notion of a 'provider', which is a user-facing
representation of a 'driver', i.e. vendor-specific code that sits behind the
persistence layer and communicates with the physical backend. Currently the
Pool, as the root of a configuration, has such an attribute, so when any
call is handled for the pool or its child objects, that attribute is used to
dispatch the call to the appropriate driver.

Flavor is something more flexible (it's not there right now; we're working
on designing the framework) that would allow the user to choose capabilities
rather than vendors. In particular, that will allow having several
configurations for one driver.
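The provider-based dispatch described above is essentially this (a sketch
with made-up driver classes, not the real plugin code):

```python
class HaproxyDriver:
    def create_pool(self, root, pool):
        return f"haproxy: created {pool}"

class VendorXDriver:
    def create_pool(self, root, pool):
        return f"vendor_x: created {pool}"

DRIVERS = {"haproxy": HaproxyDriver(), "vendor_x": VendorXDriver()}

def dispatch(root, method, *args):
    # The 'provider' attribute on the root object selects the driver;
    # calls on child objects look up the root first, then dispatch.
    driver = DRIVERS[root["provider"]]
    return getattr(driver, method)(root, *args)

root = {"id": "pool-1", "provider": "haproxy"}
print(dispatch(root, "create_pool", "pool-2"))  # -> haproxy: created pool-2
```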

As for the pictures - I have intentionally omitted some objects like L7
rules, SSL objects, and health monitors, since the existing API around these
objects makes sense and at this point we don't have plans to change it.

Regarding picture #4, could you describe once again, in more detail, what
the cluster and the loadbalancer are in this scheme?

>
> Thoughts?
>
> Thanks,
> Stephen
>


On Wed, Feb 19, 2014 at 6:28 AM, Stephen Balukoff <sbalukoff at bluebox.net> wrote:

> [...]
> Anyway, the devil is often in the details on this, so I've created a few
> graphs to illustrate my understanding on these things, and my proposal for
> alterations of the ideas you've described on the wiki pages above.
>
> These graphs are:
>
> #3 VIP-centric solution (full object view):  This is the #3 proposal
> filled out with the L7 proposal as detailed here:
> https://wiki.openstack.org/wiki/Neutron/LBaaS/l7 and the objects having
> to do with load balancing and all their attributes as they currently exist
> in the Havana database.
>
> #3 VIP-centric solution (abridged view): Only those objects corresponding
> to the #3 graph with all their attributes as I understand Mark is proposing.
>
> #3 VIP-centric solution (sbalukoff edit): My proposed change to the #3
> graph. That is, IP address should be an attribute of the VIP, and pool
> doesn't need to care about which VIP(s) it's a part of (from what I can
> tell-- again, let me know what that 'collocation problem' is, eh).
>
> While we're at it, though, I would like to propose option #4: This is a
> model that you could almost look at as a happy medium between #2 and #3,
> that allows application developers / tenants to develop automation for
> business needs around load balancer objects, helps separate operational
> concerns that cloud administrators need to worry about from application
> development concerns (ie. physical or virtual load balancers are hidden
> from tenants), and provides a model that works well with HA and auto-scale
> topologies.
>
> As usual, I'm happy to provide the .dot files for any of the above graphs.
>
> Thoughts?
>
> Thanks,
> Stephen
>
>
>
>
> On Tue, Feb 18, 2014 at 11:34 AM, Eugene Nikanorov <
> enikanorov at mirantis.com> wrote:
>
>> Hi folks,
>>
>> Recently we were discussing LBaaS object model with Mark McClain in order
>> to address several problems that we faced while approaching L7 rules and
>> multiple vips per pool.
>>
>> To cut a long story short: with the existing workflow and model it's
>> impossible to use L7 rules, because each pool being created is an
>> 'instance' object in itself: it defines another logical configuration and
>> can't be attached to an existing configuration.
>> To address this problem, plus to create a base for multiple vips per
>> pool, the 'loadbalancer' object was introduced (see
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance ).
>> However, this approach raised the concern of whether we want to make the
>> user care about an 'instance' object.
>>
>> My personal opinion is that letting the user work with a 'loadbalancer'
>> entity is no big deal (and it might even be useful for terminological
>> clarity; the Libra and AWS APIs have such an entity), especially if the
>> existing simple workflow is preserved, so that the 'loadbalancer' entity
>> is only required when working with L7 or multiple vips per pool.
>>
>> The alternative solution proposed by Mark is described here under #3:
>>
>> https://wiki.openstack.org/wiki/Neutron/LBaaS/LoadbalancerInstance/Discussion
>> In (3) the root object of the configuration is the VIP, where all kinds
>> of bindings are made (such as provider, agent, device, router). To
>> address the 'multiple vips' case, another entity, 'Listener', is
>> introduced, which receives most attributes of the former 'VIP'.
>> (Attribute sets are not finalized on those pictures, so don't pay much
>> attention to them.)
>> If you take a closer look at the #2 and #3 proposals, you'll see that
>> they are essentially similar, with the VIP object in #3 taking the
>> instance/loadbalancer role from #2.
>> Both the #2 and #3 proposals make sense to me, because they address both
>> problems: L7 and multiple vips (or listeners).
>> My concern about #3 is that it redefines lots of workflow and API
>> aspects, and even if we manage to make the transition to #3 in a
>> backward-compatible way, it will be more complex in terms of code and
>> testing than #2 (which is on review already and works).
>>
>> The whole thing is an important design decision, so please share your
>> thoughts, everyone.
>>
>> Thanks,
>> Eugene.
>>
>
>
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>