[openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
Stephen Balukoff
sbalukoff at bluebox.net
Mon Dec 8 22:23:06 UTC 2014
So... I should probably note that I see the case where a user actually
shares objects as being the exception. I expect that 90% of deployments
will never need to share objects, except for a few cases of 1:N
relationships. Those cases are:
* Loadbalancers must be able to have many Listeners
* When L7 functionality is introduced, L7 policies must be able to refer to
the same Pool under a single Listener. (That is to say, sharing Pools under
the scope of a single Listener makes sense, but only after L7 policies are
introduced.)
I specifically see the following kind of sharing having near zero demand:
* Listeners shared across multiple loadbalancers
* Pools shared across multiple listeners
* Members shared across multiple pools
So, despite the fact that sharing doesn't make status reporting any more or
less complex, I'm still in favor of starting with 1:1 relationships between
most kinds of objects and then changing those to 1:N or M:N as we get user
demand for this. As I said in my first response, allowing too many many to
many relationships feels like a solution to a problem that doesn't really
exist, and introduces a lot of unnecessary complexity.
Stephen
On Sun, Dec 7, 2014 at 11:43 PM, Samuel Bercovici <SamuelB at radware.com>
wrote:
> +1
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Friday, December 05, 2014 7:59 PM
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> German-- but the point is that sharing apparently has no effect on the
> number of permutations for status information. The only difference here is
> that without sharing it's more work for the user to maintain and modify
> trees of objects.
>
>
>
> On Fri, Dec 5, 2014 at 9:36 AM, Eichberger, German <
> german.eichberger at hp.com> wrote:
>
> Hi Brandon + Stephen,
>
>
>
> Having all those permutations (and potentially testing them) made us lean
> against the sharing case in the first place. It’s just a lot of extra work
> for only a small number of our customers.
>
>
>
> German
>
>
>
> *From:* Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> *Sent:* Thursday, December 04, 2014 9:17 PM
>
>
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
>
>
>
> Hi Brandon,
>
>
>
> Yeah, in your example that topology yields 8 distinct member statuses --
> member1 alone appears via 5 different lb -> listener -> pool paths (and
> this is a small example!)... If that member starts flapping, every flap
> means one notification per path being passed upstream.
>
>
>
> Note that this problem actually doesn't get any better if we're not
> sharing objects but are just duplicating them (i.e., no shared objects,
> but the user creates 8 different members that all reference the same
> back-end machine).
>
>
>
> To be honest, I don't see sharing entities at many levels like this being
> the rule for most of our installations -- maybe a few percent of
> installations will share members this extensively, if that. So really,
> even though reporting status like this is likely to generate a pretty big
> tree of data, I don't think this is actually a problem, eh. And I don't
> see sharing entities actually reducing the workload of what needs to
> happen behind the scenes. (It just allows us to conceal more of that work
> from the user.)
>
>
>
> Stephen
>
> On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan <brandon.logan at rackspace.com>
> wrote:
>
> Sorry it's taken me a while to respond to this.
>
> So I wasn't thinking about this correctly. I was afraid you would have
> to pass in a full tree of parent-child representations to /loadbalancers
> to update anything associated with a load balancer (all the way down
> to members). However, after thinking about it, a user would just make
> an association call on each object. For example: associate member1 with
> pool1, associate pool1 with listener1, then associate listener1 with
> loadbalancer1. Updating is just as simple as updating each entity.
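>
> A minimal sketch of that per-object workflow (the endpoints and URL
> shapes here are hypothetical, purely to make the association calls
> concrete -- this is not the actual Neutron LBaaS v2 API):
>
>     import requests
>
>     BASE = "http://neutron.example.com:9696/v2.0/lbaas"  # hypothetical
>
>     # One association call per object pair; no full-tree payload needed.
>     requests.put("%s/pools/pool1/members/member1" % BASE)
>     requests.put("%s/listeners/listener1/pools/pool1" % BASE)
>     requests.put("%s/loadbalancers/loadbalancer1/listeners/listener1" % BASE)
>
>     # Updating a single attribute is likewise a call on just that entity:
>     requests.put("%s/pools/pool1" % BASE, json={"pool": {"name": "renamed"}})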
>
> This does bring up another problem, though. If a listener can live on
> many load balancers, a pool can live on many listeners, and a member
> can live on many pools, there are a lot of permutations to keep track
> of for status. You can't just link a member's status to a load balancer,
> because a member can exist in many pools under that load balancer, and
> each pool can exist under many listeners under that load balancer. For
> example, say I have these:
>
> lb1
> lb2
> listener1
> listener2
> pool1
> pool2
> member1
> member2
>
> lb1 -> [listener1, listener2]
> lb2 -> [listener1]
> listener1 -> [pool1, pool2]
> listener2 -> [pool1]
> pool1 -> [member1, member2]
> pool2 -> [member1]
>
> member1 can now have a different status under pool1 than under pool2.
> Since listener1 and listener2 both have pool1, member1 will also have a
> different status for the listener1 -> pool1 and listener2 -> pool1
> combinations. And so forth for load balancers.
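>
> A quick sketch that enumerates those combinations for the topology
> above (plain Python, just to make the fan-out concrete):
>
>     lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}
>     listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}
>     pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}
>
>     # Every lb -> listener -> pool -> member path is one status that
>     # has to be tracked and reported.
>     paths = [(lb, lsn, pool, mbr)
>              for lb, lsns in lbs.items()
>              for lsn in lsns
>              for pool in listeners[lsn]
>              for mbr in pools[pool]]
>     for path in paths:
>         print(" -> ".join(path))
>     print(len(paths))  # 8 member statuses for this tiny example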
>
> Basically, there are a lot of permutations and combinations to keep
> track of for statuses with this model, and showing them all in the body
> of the load balancer details can get quite large.
>
> I hope this makes sense because my brain is ready to explode.
>
> Thanks,
> Brandon
>
>
> On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote:
> > Brandon, can you please explain further on (1) below?
> >
> > -----Original Message-----
> > From: Brandon Logan [mailto:brandon.logan at RACKSPACE.COM]
> > Sent: Tuesday, November 25, 2014 12:23 AM
> > To: openstack-dev at lists.openstack.org
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS -
> Use Cases that led us to adopt this.
> >
> > My impression is that the statuses of each entity will be shown on a
> > detailed info request of a load balancer. The root-level objects would
> > not have any statuses. For example, a user makes a GET request to
> > /loadbalancers/{lb_id} and the status of every child of that load
> > balancer is shown in a "status_tree" JSON object. For example:
> >
> > {"name": "loadbalancer1",
> >  "status_tree":
> >   {"listeners":
> >     [{"name": "listener1", "operating_status": "ACTIVE",
> >       "default_pool":
> >         {"name": "pool1", "status": "ACTIVE",
> >          "members":
> >           [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}
> >
> > Sam, correct me if I am wrong.
> >
> > I generally like this idea. I do have a few reservations with this:
> >
> > 1) Creating and updating a load balancer requires a full tree
> > configuration with the current extension/plugin logic in Neutron. Since
> > updates will require a full tree, the user would have to know the full
> > tree configuration just to update something as simple as a name. Solving
> > this would require nested child resources in the URL, which the current
> > Neutron extension/plugin framework does not allow. Maybe the new one
> > will.
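> >
> > (For illustration only -- these URL shapes are hypothetical, not an
> > existing API: a nested child resource would let a user send
> > PUT /loadbalancers/{lb_id}/listeners/{listener_id} with just
> > {"name": "new-name"} in the body, instead of PUT
> > /loadbalancers/{lb_id} carrying the entire tree of listeners, pools,
> > and members.)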
> >
> > 2) The status_tree can get quite large depending on the number of
> > listeners and pools in use. This is a minor issue, really, and the
> > tree will make Horizon's (or any other UI tool's) job of showing
> > statuses easier.
> >
> > Thanks,
> > Brandon
> >
> > On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
> > > Hi Samuel,
> > >
> > >
> > > We've actually been avoiding having a deeper discussion about status
> > > in Neutron LBaaS, since this can get pretty hairy as the back-end
> > > implementations get more complicated. I suspect managing that is
> > > probably one of the bigger reasons we have disagreements around object
> > > sharing. Perhaps it's time we discussed representing state "correctly"
> > > (whatever that means), instead of having a roundabout discussion about
> > > object sharing (which, I think, is really just avoiding this issue)?
> > >
> > >
> > > Do you have a proposal about how status should be represented
> > > (possibly including a description of the state machine) if we collapse
> > > everything down to be logical objects except the loadbalancer object?
> > > (From what you're proposing, I suspect it might be too general to, for
> > > example, represent the UP/DOWN status of members of a given pool.)
> > >
> > >
> > > Also, from an haproxy perspective, sharing pools within a single
> > > listener actually isn't a problem. That is to say, having several
> > > L7 policies pointing at the same pool is OK, so I personally don't
> > > have a problem allowing sharing of objects within the scope of parent
> > > objects. What do the rest of y'all think?
> > >
> > > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici
> > > <SamuelB at radware.com> wrote:
> > > Hi Stephen,
> > >
> > >
> > >
> > > 1. The issue is that if we do 1:1 and allow status/state
> > > to proliferate throughout all objects, we will then have an
> > > issue to fix later. Hence, even if we do not do sharing, I
> > > would still like to have all objects besides the LB be
> > > treated as logical.
> > >
> > > 2. The 3rd use case below will not be reasonable without
> > > pool sharing between different policies. Specifying different
> > > pools which are in fact the same for each policy makes it a
> > > non-starter to me.
> > >
> > >
> > >
> > > -Sam.
> > >
> > > From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> > > Sent: Friday, November 21, 2014 10:26 PM
> > > To: OpenStack Development Mailing List (not for usage
> > > questions)
> > > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects
> > > in LBaaS - Use Cases that led us to adopt this.
> > >
> > >
> > >
> > > I think the idea was to implement 1:1 initially to reduce the
> > > amount of code and operational complexity we'd have to deal
> > > with in initial revisions of LBaaS v2. Many to many can be
> > > simulated in this scenario, though it does shift the burden of
> > > maintenance to the end user. It does greatly simplify the
> > > initial code for v2, in any case, though.
> > >
> > > Did we ever agree to allowing listeners to be shared among
> > > load balancers? I think that still might be an N:1
> > > relationship even in our latest models.
> > >
> > >
> > >
> > >
> > > There's also the difficulty introduced by supporting different
> > > flavors: Since flavors are essentially an association between
> > > a load balancer object and a driver (with parameters), once
> > > flavors are introduced, any sub-objects of a given load
> > > balancer must necessarily be purely logical until they
> > > are associated with a load balancer. I know there was talk of
> > > forcing these objects to be sub-objects of a load balancer
> > > which can't be accessed independently of the load balancer
> > > (which would have much the same effect as what you discuss:
> > > state / status only make sense once logical objects have an
> > > instantiation somewhere). However, the currently proposed API
> > > treats most objects as root objects, which breaks this
> > > paradigm.
> > >
> > > How we handle status and updates once there's an instantiation
> > > of these logical objects is where we start getting into real
> > > complexity.
> > >
> > > It seems to me there's a lot of complexity introduced when we
> > > allow a lot of many to many relationships without a whole lot
> > > of benefit in real-world deployment scenarios. In most cases,
> > > objects are not going to be shared, and in those cases with
> > > sufficiently complicated deployments in which shared objects
> > > could be used, the user is likely to be sophisticated enough
> > > and skilled enough to manage updating what are essentially
> > > "copies" of objects, and would likely have an opinion about
> > > how individual failures should be handled which wouldn't
> > > necessarily coincide with what we developers of the system
> > > would assume. That is to say, allowing too many many to many
> > > relationships feels like a solution to a problem that doesn't
> > > really exist, and introduces a lot of unnecessary complexity.
> > >
> > > In any case, though, I feel like we should walk before we run:
> > > Implementing 1:1 initially is a good idea to get us rolling.
> > > Whether we then implement 1:N or M:N after that is another
> > > question entirely. But in any case, it seems like a bad idea
> > > to try to start with M:N.
> > >
> > > Stephen
> > >
> > > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici
> > > <SamuelB at radware.com> wrote:
> > >
> > > Hi,
> > >
> > > Per the discussion I had at the OpenStack Summit in Paris with
> > > Brandon and Doug, I would like to remind everyone why we chose to
> > > follow a model where pools and listeners are shared (many to many
> > > relationships).
> > >
> > > Use Cases:
> > > 1. The same application is being exposed via different LB
> > > objects.
> > > For example: users coming from the internal "private"
> > > organization network have LB1(private_VIP) -->
> > > Listener1(TLS) --> Pool1, and users coming from the "internet"
> > > have LB2(public_VIP) --> Listener1(TLS) --> Pool1.
> > > This may also happen to support IPv4 and IPv6:
> > > LB_v4(ipv4_VIP) --> Listener1(TLS) --> Pool1 and
> > > LB_v6(ipv6_VIP) --> Listener1(TLS) --> Pool1.
> > > The operator would like to be able to manage the pool
> > > membership, in cases of updates and errors, in a single place.
> > >
> > > 2. The same group of servers is being used via different
> > > listeners, optionally also connected to different LB objects.
> > > For example: users coming from the internal "private"
> > > organization network have LB1(private_VIP) -->
> > > Listener1(HTTP) --> Pool1, and users coming from the "internet"
> > > have LB2(public_VIP) --> Listener2(TLS) --> Pool1.
> > > The LBs may use different flavors, as LB2 needs TLS termination
> > > and may prefer a different, "stronger" flavor.
> > > The operator would like to be able to manage the pool
> > > membership, in cases of updates and errors, in a single place.
> > >
> > > 3. The same group of servers is being used in several
> > > different L7 policies connected to a listener. Such a listener
> > > may be reused as in use case 1.
> > > For example:
> > >
> > > LB1(VIP1) --> Listener_L7(TLS)
> > >                |
> > >                +--> L7_Policy1(rules..) --> Pool1
> > >                |
> > >                +--> L7_Policy2(rules..) --> Pool2
> > >                |
> > >                +--> L7_Policy3(rules..) --> Pool1
> > >                |
> > >                +--> L7_Policy4(rules..) --> Reject
> > >
> > >
> > > I think that the "key" issue is handling correctly the
> > > "provisioning" status and the operating status in a many to
> > > many model.
> > > This is an issue because we have attached status fields to each
> > > and every object in the model.
> > > A side effect of the above is that to understand the
> > > provisioning/operating status, one needs to check many
> > > different objects.
> > >
> > > To remedy this, I would like to turn all objects besides the
> > > LB into logical objects. This means that the only place to
> > > manage status/state will be on the LB object.
> > > Such status should be hierarchical, so that logical objects
> > > attached to an LB would have their status consumed out of the
> > > LB object itself (in case of an error).
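> > >
> > > A minimal sketch of what "consuming status out of the LB" could
> > > look like (a hypothetical helper, illustrative only; the tree
> > > shape follows the status_tree example earlier in this thread):
> > >
> > >     def find_status(node, name):
> > >         # Status lives only on the LB's status_tree; a logical
> > >         # object looks its status up by walking that tree.
> > >         if node.get("name") == name:
> > >             return node.get("operating_status") or node.get("status")
> > >         children = list(node.get("listeners", []))
> > >         children += node.get("members", [])
> > >         if node.get("default_pool"):
> > >             children.append(node["default_pool"])
> > >         for child in children:
> > >             status = find_status(child, name)
> > >             if status:
> > >                 return status
> > >         return None
> > >
> > >     lb = {"name": "loadbalancer1",
> > >           "status_tree":
> > >            {"listeners":
> > >              [{"name": "listener1", "operating_status": "ACTIVE",
> > >                "default_pool":
> > >                  {"name": "pool1", "status": "ACTIVE",
> > >                   "members":
> > >                    [{"name": "member1", "status": "OFFLINE"}]}}]}}
> > >
> > >     print(find_status(lb["status_tree"], "member1"))  # OFFLINE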
> > > We also need to discuss how modifications of a logical object
> > > will be "rendered" to the concrete LB objects.
> > > You may want to revisit
> > > https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r
> > > (the "Logical Model + Provisioning Status + Operation Status +
> > > Statistics" section) for a somewhat more detailed explanation,
> > > albeit one that uses the LBaaS v1 model as a reference.
> > >
> > > Regards,
> > > -Sam.
> > >
> > > --
> > >
> > > Stephen Balukoff
> > > Blue Box Group, LLC
> > > (800)613-4305 x807
> > >
> > > --
> > > Stephen Balukoff
> > > Blue Box Group, LLC
> > > (800)613-4305 x807
> >
> --
>
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> --
>
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807