<div dir="ltr">Hi Brandon,<div><br></div><div>Yeah, in your example, member1 could potentially have 8 different statuses (and this is a small example!)... If that member starts flapping, it means that every time it flaps there are 8 notifications being passed upstream.</div><div><br></div><div>Note that this problem actually doesn't get any better if we're not sharing objects but are just duplicating them (ie. not sharing objects but the user makes references to the same back-end machine as 8 different members.)</div><div><br></div><div>To be honest, I don't see sharing entities at many levels like this being the rule for most of our installations-- maybe a few percentage points of installations will do an excessive sharing of members, but I doubt it. So really, even though reporting status like this is likely to generate a pretty big tree of data, I don't think this is actually a problem, eh. And I don't see sharing entities actually reducing the workload of what needs to happen behind the scenes. (It just allows us to conceal more of this work from the user.)</div><div><br></div><div>Stephen</div><div><br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Dec 4, 2014 at 4:05 PM, Brandon Logan <span dir="ltr"><<a href="mailto:brandon.logan@rackspace.com" target="_blank">brandon.logan@rackspace.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Sorry it's taken me a while to respond to this.<br>
<br>
So I wasn't thinking about this correctly. I was afraid you would have<br>
to pass in a full tree of parent-child representations to /loadbalancers<br>
to update anything a load balancer is associated with (including down<br>
to members). However, after thinking about it, a user would just make<br>
an association call on each object. For example, associate member1 with<br>
pool1, associate pool1 with listener1, then associate listener1 with<br>
loadbalancer1. Updating is just as simple as updating each entity.<br>
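Concretely, that flow might look something like this rough sketch (plain<br>
Python against a stand-in HTTP client; the URL shapes and payload fields<br>
here are illustrative assumptions, not the proposed API):<br>
<br>
# Sketch of the per-object association calls described above.<br>
# "client" stands in for whatever HTTP client/SDK ends up being used;<br>
# the URLs and body fields are assumptions for illustration only.<br>
def wire_up(client, base="/v2.0/lbaas"):<br>
    # associate member1 with pool1<br>
    client.put(base + "/pools/pool1/members/member1", json={})<br>
    # associate pool1 with listener1 (say, as its default pool)<br>
    client.put(base + "/listeners/listener1",<br>
               json={"default_pool_id": "pool1"})<br>
    # associate listener1 with loadbalancer1<br>
    client.put(base + "/loadbalancers/loadbalancer1",<br>
               json={"listener_ids": ["listener1"]})<br>
<br>
Each call touches exactly one object, so updating a name (or any other<br>
attribute) never requires resending the whole tree.<br>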
<br>
This does bring up another problem though. If a listener can live on<br>
many load balancers, a pool can live on many listeners, and a member<br>
can live on many pools, there are a lot of permutations to keep track<br>
of for status. You can't just link a member's status to a load balancer<br>
because a member can exist in many pools under that load balancer, and<br>
each pool can exist under many listeners under that load balancer. For<br>
example, say I have these:<br>
<br>
lb1<br>
lb2<br>
listener1<br>
listener2<br>
pool1<br>
pool2<br>
member1<br>
member2<br>
<br>
lb1 -> [listener1, listener2]<br>
lb2 -> [listener1]<br>
listener1 -> [pool1, pool2]<br>
listener2 -> [pool1]<br>
pool1 -> [member1, member2]<br>
pool2 -> [member1]<br>
<br>
member1 can now have different statuses under pool1 and pool2. Since<br>
listener1 and listener2 both have pool1, member1 will also have<br>
different statuses for the listener1 -> pool1 and listener2 -> pool1<br>
combinations. And so on up through the load balancers.<br>
<br>
Basically, there are a lot of permutations and combinations to keep<br>
track of for statuses with this model. Showing all of them in the body<br>
of a load balancer details response can get quite large.<br>
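To put a number on it, here's a quick sketch (plain Python, using the<br>
topology above) that enumerates the distinct lb -> listener -> pool<br>
paths reaching each member; each path is a separate member status that<br>
would have to be tracked and reported:<br>
<br>
# Example topology from above, as adjacency lists.<br>
lbs = {"lb1": ["listener1", "listener2"], "lb2": ["listener1"]}<br>
listeners = {"listener1": ["pool1", "pool2"], "listener2": ["pool1"]}<br>
pools = {"pool1": ["member1", "member2"], "pool2": ["member1"]}<br>
<br>
statuses = {}  # member -> list of (lb, listener, pool) combinations<br>
for lb, lsn_list in lbs.items():<br>
    for lsn in lsn_list:<br>
        for pool in listeners[lsn]:<br>
            for member in pools[pool]:<br>
                statuses.setdefault(member, []).append((lb, lsn, pool))<br>
<br>
for member, combos in sorted(statuses.items()):<br>
    print(member, "has", len(combos), "distinct statuses:", combos)<br>
print("total statuses to track:", sum(len(c) for c in statuses.values()))<br>
<br>
For this small example that already works out to 8 member-status entries<br>
(5 for member1, 3 for member2), and the count grows multiplicatively as<br>
more objects are shared.<br>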
<br>
I hope this makes sense because my brain is ready to explode.<br>
<br>
Thanks,<br>
Brandon<br>
<div class="HOEnZb"><div class="h5"><br>
On Thu, 2014-11-27 at 08:52 +0000, Samuel Bercovici wrote:<br>
> Brandon, can you please explain further (1) below?<br>
><br>
> -----Original Message-----<br>
> From: Brandon Logan [mailto:<a href="mailto:brandon.logan@RACKSPACE.COM">brandon.logan@RACKSPACE.COM</a>]<br>
> Sent: Tuesday, November 25, 2014 12:23 AM<br>
> To: <a href="mailto:openstack-dev@lists.openstack.org">openstack-dev@lists.openstack.org</a><br>
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.<br>
><br>
> My impression is that the statuses of each entity will be shown on a detailed info request of a load balancer. The root-level objects would not have any statuses. For example, a user makes a GET request to /loadbalancers/{lb_id} and the status of every child of that load balancer is shown in a "status_tree" JSON object. For example:<br>
><br>
> {"name": "loadbalancer1",<br>
> "status_tree":<br>
> {"listeners":<br>
> [{"name": "listener1", "operating_status": "ACTIVE",<br>
> "default_pool":<br>
> {"name": "pool1", "status": "ACTIVE",<br>
> "members":<br>
> [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}<br>
><br>
> Sam, correct me if I am wrong.<br>
><br>
> I generally like this idea. I do have a few reservations with this:<br>
><br>
> 1) Creating and updating a load balancer requires a full tree configuration with the current extension/plugin logic in neutron. Since updates will require a full tree, the user would have to know the full tree configuration just to update a name. Solving this would require nested child resources in the URL, which the current neutron extension/plugin does not allow. Maybe the new one will.<br>
><br>
> 2) The status_tree can get quite large depending on the number of listeners and pools being used. Really, though, this is a minor issue, since it will make it easier for horizon (or any other UI tool) to show statuses.<br>
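> For example, a UI could flatten the member statuses out of a tree shaped<br>
> like the one above with something along these lines (a rough Python<br>
> sketch; the field names are just taken from the example, not a settled<br>
> schema):<br>
><br>
> # Walk a status_tree shaped like the example and yield one row per<br>
> # member status found under each listener's default pool.<br>
> def member_statuses(lb):<br>
>     for listener in lb.get("status_tree", {}).get("listeners", []):<br>
>         pool = listener.get("default_pool") or {}<br>
>         for member in pool.get("members", []):<br>
>             yield (listener.get("name"), pool.get("name"),<br>
>                    member.get("ip_address"), member.get("status"))<br>
><br>
> # usage: for row in member_statuses(response_json): print(row)<br>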
><br>
> Thanks,<br>
> Brandon<br>
><br>
> On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:<br>
> > Hi Samuel,<br>
> ><br>
> ><br>
> > We've actually been avoiding having a deeper discussion about status<br>
> > in Neutron LBaaS since this can get pretty hairy as the back-end<br>
> > implementations get more complicated. I suspect managing that is<br>
> > probably one of the bigger reasons we have disagreements around object<br>
> > sharing. Perhaps it's time we discussed representing state "correctly"<br>
> > (whatever that means), instead of a roundabout discussion about<br>
> > object sharing (which, I think, is really just avoiding this issue)?<br>
> ><br>
> ><br>
> > Do you have a proposal about how status should be represented<br>
> > (possibly including a description of the state machine) if we collapse<br>
> > everything down to be logical objects except the loadbalancer object?<br>
> > (From what you're proposing, I suspect it might be too general to, for<br>
> > example, represent the UP/DOWN status of members of a given pool.)<br>
> ><br>
> ><br>
> > Also, from an haproxy perspective, sharing pools within a single<br>
> > listener actually isn't a problem. That is to say, having the same<br>
> > L7Policy pointing at the same pool is OK, so I personally don't have a<br>
> > problem allowing sharing of objects within the scope of parent<br>
> > objects. What do the rest of y'all think?<br>
> ><br>
> ><br>
> > Stephen<br>
> ><br>
> ><br>
> ><br>
> > On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici<br>
> > <<a href="mailto:SamuelB@radware.com">SamuelB@radware.com</a>> wrote:<br>
> > Hi Stephen,<br>
> ><br>
> ><br>
> ><br>
> >         1. The issue is that if we do 1:1 and allow status/state<br>
> >         to proliferate throughout all objects, we will then have an<br>
> >         issue to fix later. Hence, even if we do not do sharing, I<br>
> >         would still like to have all objects besides the LB treated<br>
> >         as logical.<br>
> ><br>
> >         2. The 3rd use case below will not be reasonable without<br>
> >         pool sharing between different policies. Specifying separate<br>
> >         pools that are really the same pool for each policy makes it<br>
> >         a non-starter for me.<br>
> ><br>
> ><br>
> ><br>
> > -Sam.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > From: Stephen Balukoff [mailto:<a href="mailto:sbalukoff@bluebox.net">sbalukoff@bluebox.net</a>]<br>
> > Sent: Friday, November 21, 2014 10:26 PM<br>
> > To: OpenStack Development Mailing List (not for usage<br>
> > questions)<br>
> > Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects<br>
> > in LBaaS - Use Cases that led us to adopt this.<br>
> ><br>
> ><br>
> ><br>
> > I think the idea was to implement 1:1 initially to reduce the<br>
> > amount of code and operational complexity we'd have to deal<br>
> > with in initial revisions of LBaaS v2. Many to many can be<br>
> > simulated in this scenario, though it does shift the burden of<br>
> > maintenance to the end user. It does greatly simplify the<br>
> > initial code for v2, in any case, though.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > Did we ever agree to allowing listeners to be shared among<br>
> >         load balancers? I think that still might be an N:1<br>
> > relationship even in our latest models.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> > There's also the difficulty introduced by supporting different<br>
> > flavors: Since flavors are essentially an association between<br>
> > a load balancer object and a driver (with parameters), once<br>
> > flavors are introduced, any sub-objects of a given load<br>
> >         balancer must necessarily be purely logical until they<br>
> > are associated with a load balancer. I know there was talk of<br>
> > forcing these objects to be sub-objects of a load balancer<br>
> > which can't be accessed independently of the load balancer<br>
> > (which would have much the same effect as what you discuss:<br>
> > State / status only make sense once logical objects have an<br>
> > instantiation somewhere.) However, the currently proposed API<br>
> > treats most objects as root objects, which breaks this<br>
> > paradigm.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > How we handle status and updates once there's an instantiation<br>
> > of these logical objects is where we start getting into real<br>
> > complexity.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > It seems to me there's a lot of complexity introduced when we<br>
> > allow a lot of many to many relationships without a whole lot<br>
> > of benefit in real-world deployment scenarios. In most cases,<br>
> > objects are not going to be shared, and in those cases with<br>
> > sufficiently complicated deployments in which shared objects<br>
> > could be used, the user is likely to be sophisticated enough<br>
> > and skilled enough to manage updating what are essentially<br>
> > "copies" of objects, and would likely have an opinion about<br>
> > how individual failures should be handled which wouldn't<br>
> > necessarily coincide with what we developers of the system<br>
> > would assume. That is to say, allowing too many many to many<br>
> > relationships feels like a solution to a problem that doesn't<br>
> > really exist, and introduces a lot of unnecessary complexity.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > In any case, though, I feel like we should walk before we run:<br>
> > Implementing 1:1 initially is a good idea to get us rolling.<br>
> > Whether we then implement 1:N or M:N after that is another<br>
> > question entirely. But in any case, it seems like a bad idea<br>
> > to try to start with M:N.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > Stephen<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici<br>
> > <<a href="mailto:SamuelB@radware.com">SamuelB@radware.com</a>> wrote:<br>
> ><br>
> > Hi,<br>
> ><br>
> > Per discussion I had at OpenStack Summit/Paris with Brandon<br>
> >         and Doug, I would like to remind everyone why we chose to<br>
> > follow a model where pools and listeners are shared (many to<br>
> > many relationships).<br>
> ><br>
> > Use Cases:<br>
> > 1. The same application is being exposed via different LB<br>
> > objects.<br>
> >         For example: users coming from the internal "private"<br>
> >         organization network have an LB1(private_VIP) --><br>
> >         Listener1(TLS) -->Pool1, and users coming from the "internet"<br>
> >         have LB2(public_vip)-->Listener1(TLS)-->Pool1.<br>
> > This may also happen to support ipv4 and ipv6: LB_v4(ipv4_VIP)<br>
> > --> Listener1(TLS) -->Pool1 and LB_v6(ipv6_VIP) --><br>
> > Listener1(TLS) -->Pool1<br>
> > The operator would like to be able to manage the pool<br>
> > membership in cases of updates and error in a single place.<br>
> ><br>
> > 2. The same group of servers is being used via different<br>
> > listeners optionally also connected to different LB objects.<br>
> >         For example: users coming from the internal "private"<br>
> >         organization network have an LB1(private_VIP) --><br>
> >         Listener1(HTTP) -->Pool1, and users coming from the "internet"<br>
> >         have LB2(public_vip)-->Listener2(TLS)-->Pool1.<br>
> > The LBs may use different flavors as LB2 needs TLS termination<br>
> > and may prefer a different "stronger" flavor.<br>
> > The operator would like to be able to manage the pool<br>
> > membership in cases of updates and error in a single place.<br>
> ><br>
> > 3. The same group of servers is being used in several<br>
> >         different L7_Policies connected to a listener. Such a listener<br>
> > may be reused as in use case 1.<br>
> >         For example: LB1(VIP1)-->Listener_L7(TLS)<br>
> >                                   |<br>
> >                                   +-->L7_Policy1(rules..)-->Pool1<br>
> >                                   |<br>
> >                                   +-->L7_Policy2(rules..)-->Pool2<br>
> >                                   |<br>
> >                                   +-->L7_Policy3(rules..)-->Pool1<br>
> >                                   |<br>
> >                                   +-->L7_Policy4(rules..)-->Reject<br>
> ><br>
> ><br>
> >         I think that the "key" issue is handling the "provisioning"<br>
> >         state and the operational state correctly in a many-to-many<br>
> >         model.<br>
> > This is an issue as we have attached status fields to each and<br>
> > every object in the model.<br>
> > A side effect of the above is that to understand the<br>
> > "provisioning/operation" status one needs to check many<br>
> > different objects.<br>
> ><br>
> >         To remedy this, I would like to turn all objects besides the<br>
> >         LB into logical objects. This means that the only place to<br>
> >         manage the status/state will be on the LB object.<br>
> >         Such status should be hierarchical, so that logical objects<br>
> >         attached to an LB would have their status consumed out of the<br>
> >         LB object itself (in case of an error).<br>
> > We also need to discuss how modifications of a logical object<br>
> > will be "rendered" to the concrete LB objects.<br>
> > You may want to revisit<br>
> > <a href="https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r" target="_blank">https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r</a> the "Logical Model + Provisioning Status + Operation Status + Statistics" section for a somewhat more detailed explanation, though it uses the LBaaS v1 model as a reference.<br>
> ><br>
> > Regards,<br>
> > -Sam.<br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> ><br>
> > Stephen Balukoff<br>
> > Blue Box Group, LLC<br>
> > <a href="tel:%28800%29613-4305%20x807" value="+18006134305">(800)613-4305 x807</a><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> ><br>
> > --<br>
> > Stephen Balukoff<br>
> > Blue Box Group, LLC<br>
> > <a href="tel:%28800%29613-4305%20x807" value="+18006134305">(800)613-4305 x807</a><br>
><br>
><br>
<br>
_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br><div class="gmail_signature"><span></span>Stephen Balukoff
<br>Blue Box Group, LLC
<br>(800)613-4305 x807
</div>
</div>