[openstack-dev] [neutron][lbaas] Shared Objects in LBaaS - Use Cases that led us to adopt this.
Brandon Logan
brandon.logan at RACKSPACE.COM
Mon Nov 24 22:23:20 UTC 2014
My impression is that the statuses of each entity will be shown on a
detailed info request of a load balancer. The root-level objects would
not have any statuses of their own. For example, a user makes a GET
request to /loadbalancers/{lb_id} and the status of every child of that
load balancer is shown in a "status_tree" JSON object. For example:
{"name": "loadbalancer1",
 "status_tree":
   {"listeners":
     [{"name": "listener1", "operating_status": "ACTIVE",
       "default_pool":
         {"name": "pool1", "status": "ACTIVE",
          "members":
            [{"ip_address": "10.0.0.1", "status": "OFFLINE"}]}}]}}
Sam, correct me if I am wrong.
I generally like this idea. I do have a few reservations with this:
1) Creating and updating a load balancer requires a full tree
configuration with the current extension/plugin logic in Neutron. Since
updates will require a full tree, the user would have to know the full
tree configuration just to update a name. Solving this would require
nested child resources in the URL, which the current Neutron
extension/plugin framework does not allow. Maybe the new one will. (A
hypothetical sketch of such a nested update follows after the next
point.)
2) The status_tree can get quite large depending on the number of
listeners and pools in use. This is really a minor issue, though, and a
single tree will actually make it easier for Horizon (or any other UI
tool) to show statuses.
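
To illustrate point 1, here is a sketch of what a nested child resource
update might look like. These URLs and payloads are hypothetical, not
part of the current proposal:

PUT /loadbalancers/{lb_id}/listeners/{listener_id}
{"listener": {"name": "new-name"}}

versus today, where the same rename means re-sending the entire tree:

PUT /loadbalancers/{lb_id}
{"name": "loadbalancer1",
 "listeners": [{"name": "new-name",
   "default_pool": {"name": "pool1",
     "members": [{"ip_address": "10.0.0.1"}]}}]}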
Thanks,
Brandon
On Mon, 2014-11-24 at 12:43 -0800, Stephen Balukoff wrote:
> Hi Samuel,
>
>
> We've actually been avoiding having a deeper discussion about status
> in Neutron LBaaS since this can get pretty hairy as the back-end
> implementations get more complicated. I suspect managing that is
> probably one of the bigger reasons we have disagreements around object
> sharing. Perhaps it's time we discussed representing state
> "correctly" (whatever that means), instead of a roundabout
> discussion about object sharing (which, I think, is really just
> avoiding this issue)?
>
>
> Do you have a proposal about how status should be represented
> (possibly including a description of the state machine) if we collapse
> everything down to be logical objects except the loadbalancer object?
> (From what you're proposing, I suspect it might be too general to, for
> example, represent the UP/DOWN status of members of a given pool.)
>
>
> Also, from an haproxy perspective, sharing pools within a single
> listener actually isn't a problem. That is to say, having more than
> one L7Policy pointing at the same pool is OK, so I personally don't
> have a problem allowing sharing of objects within the scope of parent
> objects. What do the rest of y'all think?
>
>
> Stephen
>
>
>
> On Sat, Nov 22, 2014 at 11:06 PM, Samuel Bercovici
> <SamuelB at radware.com> wrote:
> Hi Stephen,
>
>
>
> 1. The issue is that if we do 1:1 and allow status/state
> to proliferate throughout all objects, we will have to fix
> that later; hence, even if we do not do sharing, I would
> still like to have all objects besides the LB treated as
> logical.
>
> 2. The 3rd use case below will not be reasonable without
> pool sharing between different policies. Having to specify
> separate but identical pools for each policy makes it a
> non-starter for me.
>
>
>
> -Sam.
>
> From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
> Sent: Friday, November 21, 2014 10:26 PM
> To: OpenStack Development Mailing List (not for usage
> questions)
> Subject: Re: [openstack-dev] [neutron][lbaas] Shared Objects
> in LBaaS - Use Cases that led us to adopt this.
>
>
>
> I think the idea was to implement 1:1 initially to reduce the
> amount of code and operational complexity we'd have to deal
> with in initial revisions of LBaaS v2. Many-to-many can be
> simulated in this scenario, though it does shift the maintenance
> burden to the end user. In any case, it does greatly simplify the
> initial code for v2.
>
> Did we ever agree to allow listeners to be shared among
> load balancers? I think that still might be an N:1
> relationship even in our latest models.
>
> There's also the difficulty introduced by supporting different
> flavors: Since flavors are essentially an association between
> a load balancer object and a driver (with parameters), once
> flavors are introduced, any sub-objects of a given load
> balancer must necessarily be purely logical until they are
> associated with a load balancer. I know there was talk of
> forcing these objects to be sub-objects of a load balancer
> which can't be accessed independently of the load balancer
> (which would have much the same effect as what you discuss:
> state / status only make sense once logical objects have an
> instantiation somewhere). However, the currently proposed API
> treats most objects as root objects, which breaks this
> paradigm.
>
> How we handle status and updates once there's an instantiation
> of these logical objects is where we start getting into real
> complexity.
>
> It seems to me there's a lot of complexity introduced when we
> allow many-to-many relationships, without a whole lot of
> benefit in real-world deployment scenarios. In most cases,
> objects are not going to be shared, and in those cases with
> sufficiently complicated deployments in which shared objects
> could be used, the user is likely to be sophisticated enough
> and skilled enough to manage updating what are essentially
> "copies" of objects. Such a user would also likely have an
> opinion about how individual failures should be handled which
> wouldn't necessarily coincide with what we developers of the
> system would assume. That is to say, allowing too many
> many-to-many relationships feels like a solution to a problem
> that doesn't really exist, and it introduces a lot of
> unnecessary complexity.
>
> In any case, though, I feel like we should walk before we run:
> Implementing 1:1 initially is a good idea to get us rolling.
> Whether we then implement 1:N or M:N after that is another
> question entirely. But in any case, it seems like a bad idea
> to try to start with M:N.
>
> Stephen
>
> On Thu, Nov 20, 2014 at 4:52 AM, Samuel Bercovici
> <SamuelB at radware.com> wrote:
>
> Hi,
>
> Per the discussion I had at the OpenStack Summit in Paris with
> Brandon and Doug, I would like to remind everyone why we chose
> to follow a model where pools and listeners are shared (many-to-many
> relationships).
>
> Use Cases:
> 1. The same application is being exposed via different LB
> objects.
> For example: users coming from the internal "private"
> organization network have LB1(private_VIP) --> Listener1(TLS)
> --> Pool1, and users coming from the "internet" have
> LB2(public_VIP) --> Listener1(TLS) --> Pool1.
> This may also happen to support IPv4 and IPv6:
> LB_v4(ipv4_VIP) --> Listener1(TLS) --> Pool1 and
> LB_v6(ipv6_VIP) --> Listener1(TLS) --> Pool1.
> The operator would like to be able to manage the pool
> membership for updates and errors in a single place.
>
> 2. The same group of servers is being used via different
> listeners, optionally also connected to different LB objects.
> For example: users coming from the internal "private"
> organization network have LB1(private_VIP) --> Listener1(HTTP)
> --> Pool1, and users coming from the "internet" have
> LB2(public_VIP) --> Listener2(TLS) --> Pool1.
> The LBs may use different flavors, as LB2 needs TLS termination
> and may prefer a different, "stronger" flavor.
> The operator would like to be able to manage the pool
> membership for updates and errors in a single place.
>
> 3. The same group of servers is being used in several
> different L7_Policies connected to a listener. Such a listener
> may be reused as in use case 1.
> For example:
> LB1(VIP1) --> Listener_L7(TLS)
>                 |
>                 +--> L7_Policy1(rules..) --> Pool1
>                 |
>                 +--> L7_Policy2(rules..) --> Pool2
>                 |
>                 +--> L7_Policy3(rules..) --> Pool1
>                 |
>                 +--> L7_Policy4(rules..) --> Reject
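>
> To make use cases 1 and 3 concrete, here is a purely illustrative
> API sketch: a pool created once as a logical object and then
> referenced by ID from two load balancers and from two L7 policies.
> The URLs, field names, and UUID placeholders are all hypothetical,
> not the current proposal:
>
> POST /pools
> {"pool": {"name": "pool1", "protocol": "HTTP"}}
> (returns {"pool": {"id": "POOL1_UUID", "name": "pool1"}})
>
> POST /loadbalancers/LB1_UUID/listeners
> {"listener": {"protocol": "TLS", "default_pool_id": "POOL1_UUID"}}
>
> POST /loadbalancers/LB2_UUID/listeners
> {"listener": {"protocol": "TLS", "default_pool_id": "POOL1_UUID"}}
>
> POST /listeners/LISTENER_UUID/l7policies
> {"l7policy": {"name": "policy1", "action": "POOL",
>               "pool_id": "POOL1_UUID"}}
> (and likewise for policy3, pointing at the same POOL1_UUID)
>
> A member update to pool1 would then take effect everywhere the pool
> is referenced, which is the "single place" of management described
> above.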
>
>
> I think that the "key" issue is handling the "provisioning"
> state and the operational state correctly in a many-to-many
> model.
> This is an issue because we have attached status fields to each
> and every object in the model.
> A side effect of the above is that to understand the
> "provisioning/operation" status, one needs to check many
> different objects.
>
> To remedy this, I would like to turn all objects besides the
> LB into logical objects. This means that the only place to
> manage status/state will be on the LB object.
> Such status should be hierarchical, so that logical objects
> attached to an LB would have their status consumed out of the
> LB object itself (in case of an error).
> We also need to discuss how modifications of a logical object
> will be "rendered" to the concrete LB objects.
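>
> To make the hierarchical idea concrete: a purely illustrative
> sketch of a status tree, in which the same logical pool surfaces
> different statuses under the two LBs that render it (field names
> and URLs are hypothetical, not settled API):
>
> GET /loadbalancers/LB1_UUID
> {"name": "lb1", "status_tree":
>   {"listeners": [{"name": "listener1",
>     "default_pool": {"name": "pool1", "status": "ACTIVE"}}]}}
>
> GET /loadbalancers/LB2_UUID
> {"name": "lb2", "status_tree":
>   {"listeners": [{"name": "listener2",
>     "default_pool": {"name": "pool1", "status": "ERROR"}}]}}
>
> The logical pool itself would carry no status; each LB reports
> the state of its own rendering of that pool.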
> You may want to revisit the "Logical Model + Provisioning Status +
> Operation Status + Statistics" section of
> https://docs.google.com/document/d/1D-1n8nCEFurYzvEBxIRfXfffnImcIPwWSctAG-NXonY/edit#heading=h.3rvy5drl5b5r
> for a somewhat more detailed explanation, although it uses the LBaaS
> v1 model as a reference.
>
> Regards,
> -Sam.
>
> --
>
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev