[openstack-dev] [Neutron][LBaaS] Object Model discussion
Jay Pipes
jaypipes at gmail.com
Mon Feb 24 20:35:20 UTC 2014
Thanks, Eugene! I've given the API a bit of thought today and jotted
down some thoughts below.
On Fri, 2014-02-21 at 23:57 +0400, Eugene Nikanorov wrote:
> > Could you provide some examples -- even in the pseudo-CLI
> > commands like I did below. It's really difficult to understand
> > where the limits are without specific examples.
> You know, I always look at the API proposal from the implementation
> standpoint as well, so here's what I see.
> In the CLI workflow that you described above, everything is fine,
> because the driver knows how and where to deploy each object
> you provide in your command: it's basically a batch.
Yes, that is true.
> When we're talking about the separate objects that form a loadbalancer -
> vips, pools, members - it becomes unclear how to map them to backends,
> and at which point.
Understood, but I think we can make some headway here. Examples below.
> So here's an example I usually give:
> We have 2 VIPs (in fact, one address and 2 ports listening for http
> and https; we now call them listeners),
> both listeners pass requests to a webapp server farm, and the http
> listener also passes requests to static image servers by matching
> incoming request URIs against L7 rules.
> So object topology is:
>
>
> Listener1 (addr:80)      Listener2 (addr:443)
>     |      \                  /
>     |       \                /
>     |        \              /
>     |             X
>     |        /         \
> pool1(webapp)        pool2(static imgs)
> sorry for that stone age pic :)
>
>
> The proposal that we discuss can create such object topology by the
> following sequence of commands:
> 1) create-vip --name VipName address=addr
>    returns vip_id
> 2) create-listener --name listener1 --port 80 --protocol http --vip_id vip_id
>    returns listener_id1
> 3) create-listener --name listener2 --port 443 --protocol https --ssl-params params --vip_id vip_id
>    returns listener_id2
> 4) create-pool --name pool1 <members>
>    returns pool_id1
> 5) create-pool --name pool2 <members>
>    returns pool_id2
> 6) set-listener-pool listener_id1 pool_id1 --default
> 7) set-listener-pool listener_id1 pool_id2 --l7policy policy
> 8) set-listener-pool listener_id2 pool_id1 --default
> That's a generic workflow that allows you to create such a config. The
> question is at which point the backend is chosen.
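To make the quoted workflow concrete, here is a minimal Python sketch
of the object graph those eight steps build. The class and attribute
names are illustrative only, not the actual Neutron models:

from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    members: list = field(default_factory=list)

@dataclass
class Listener:
    name: str
    port: int
    protocol: str
    default_pool: Pool = None
    l7_routes: list = field(default_factory=list)  # (policy, pool) pairs

@dataclass
class Vip:
    address: str
    listeners: list = field(default_factory=list)

vip = Vip(address="addr")
pool1 = Pool("pool1")   # webapp farm
pool2 = Pool("pool2")   # static image servers
listener1 = Listener("listener1", 80, "http", default_pool=pool1)
listener1.l7_routes.append(("policy", pool2))  # L7 rule -> static images
listener2 = Listener("listener2", 443, "https", default_pool=pool1)
vip.listeners = [listener1, listener2]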
From a user's perspective, they don't care about VIPs, listeners or
pools :) All the user cares about is:
* being able to add or remove backend nodes that should be balanced
across
* being able to set some policies about how traffic should be directed
I do realize that AWS ELB uses the term "listener" in its API, but
I'm not convinced this is the best term. And I'm not convinced that
there is a need for a "pool" resource at all.
Could the above steps #1 through #6 be instead represented in the
following way?
# Assume we've created a load balancer with ID $BALANCER_ID using
# something like I showed in my original response:
neutron balancer-create --type=advanced --front=<ip> \
--back=<list_of_ips> --algorithm="least-connections" \
--topology="active-standby"
neutron balancer-configure $BALANCER_ID --front-protocol=http \
--front-port=80 --back-protocol=http --back-port=80
neutron balancer-configure $BALANCER_ID --front-protocol=https \
--front-port=443 --back-protocol=https --back-port=443
Likewise, we could configure the load balancer to send front-end HTTPS
traffic (terminated at the load balancer) to back-end HTTP services:
neutron balancer-configure $BALANCER_ID --front-protocol=https \
--front-port=443 --back-protocol=http --back-port=80
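One way to picture what those commands produce is a simple map from
front-end (port, protocol) to back-end (port, protocol). A hypothetical
sketch in Python -- the routing_entries structure is illustrative only,
not anything in Neutron today:

# Internal state after the configure commands above.
routing_entries = {
    (80, "http"): (80, "http"),
    # The last command replaced the earlier (443, https) -> (443, https)
    # entry, so HTTPS is now terminated at the balancer:
    (443, "https"): (80, "http"),
}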
No mention of listeners, VIPs, or pools at all.
The REST API for the balancer-configure CLI command above might be
something like this:
PUT /balancers/{balancer_id}
with a JSON request body like so:

{
    "front-port": 443,
    "front-protocol": "https",
    "back-port": 80,
    "back-protocol": "http"
}
And the code handling the above request would simply look to see if the
load balancer had a "routing entry" for the front-end port and protocol
of (443, https) and set the entry to route to back-end port and protocol
of (80, http).
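A rough sketch of that handler logic in Python -- the Balancer model
and its routing_entries field are hypothetical, not existing Neutron
code:

def update_balancer(balancer, body):
    # Identify the front-end routing entry named in the request...
    front = (body["front-port"], body["front-protocol"])
    back = (body["back-port"], body["back-protocol"])
    # ...and create or update it to point at the requested back-end.
    balancer.routing_entries[front] = back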
For the advanced L7 policy heuristics, it makes sense to me to use a
similar strategy. For example (using a similar example from ELB):
neutron l7-policy-create --type="ssl-negotiation" \
--attr=ProtocolSSLv3=true \
--attr=ProtocolTLSv1.1=true \
--attr=DHE-RSA-AES256-SHA256=true \
--attr=Server-Defined-Cipher-Order=true
Presume the above returns an ID for the policy, $L7_POLICY_ID. We could then
assign that policy to operate on the front-end of the load balancer by
doing:
neutron balancer-apply-policy $BALANCER_ID $L7_POLICY_ID --port=443
There's no need to specify --front-port of course, since the policy only
applies to the front-end.
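In data terms, applying the policy could be little more than annotating
the front-end entry for that port. Another hypothetical sketch; the
front_policies field is illustrative:

def apply_policy(balancer, policy_id, port):
    # Policies only apply to the front-end, so the port alone
    # identifies which entry to annotate.
    balancer.front_policies.setdefault(port, []).append(policy_id)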
There is also no need to refer to a "listener" object, no need to call
anything a VIP, nor any reason to use the term "pool" in the API.
Best,
-jay
> In our current proposal the backend is chosen at step (1), and all
> further objects implicitly go on the same backend as VipName.
>
>
> The API allows the following addition:
> 9) create-vip --name VipName2 address=addr2
> 10) create-listener ... listener3 ...
> 11) set-listener-pool listener_id3 pool_id1
>
>
> E.g. from the API standpoint the commands above are valid. But that
> particular ability (pool1 being shared by two different backends)
> introduces lots of complexity in the implementation and API, and that
> is what we would like to avoid at this point.
>
>
> So the proposal makes step #11 forbidden: the pool is already
> associated with a listener on one backend, so we don't share it with
> listeners on another.
> That kind of restriction introduces implicit knowledge about the
> object-to-backend mapping into the API.
> In my opinion it's not a big deal. Once we sort out those
> complexities, we can allow that.
>
>
> What do you think?
>
>
> Thanks,
> Eugene.
>
> > > Looking at your proposal, it reminds me of a Heat template for a
> > > loadbalancer.
> > > It's fine, but we need to be able to operate on particular
> > > objects.
>
>
> > I'm not ruling out being able to add or remove nodes from a
> > balancer, if that's what you're getting at?
> >
> > Best,
> > -jay