[openstack-dev] [Octavia] Responsibilities for controller drivers

Stephen Balukoff sbalukoff at bluebox.net
Mon Sep 15 20:33:32 UTC 2014


Hi Brandon!

My responses in-line:

On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan
<brandon.logan at rackspace.com> wrote:

> In IRC the topic came up about supporting many-to-many load balancers
> to amphorae.  I believe a consensus was reached that allowing only
> one-to-many load balancers to amphorae would be the first step forward,
> with a re-evaluation later, since colocation and apolocation will need
> to work (which brings up another topic, namely defining what it actually
> means to be colocated: on the same amphora, on the same amphora host, in
> the same cell/cluster, or in the same data center/availability zone.
> That is something we should discuss later, but not right now).
>
> I am fine with that decision, but Doug brought up a good point that
> this could very well just be a decision for the controller driver, and
> Octavia shouldn't mandate this for all drivers.  So I think we need to
> clearly define which decisions are the responsibility of the controller
> driver versus which are mandated by Octavia's constructs.
>

In my mind, the only thing dictated by the controller to the driver here
would be things related to colocation / apolocation. So in order to fully
have that discussion, we first need a conversation about what those terms
actually mean in the context of Octavia, and/or specific requirements from
operators. The reference driver (i.e. the haproxy amphora) will of course
have to follow a given behavior as well, and even if we don't dictate
behavior one way or the other, operators and users may come to expect the
reference driver's behavior to become the de facto requirement.
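
To make that a bit more concrete, here's a very rough sketch (hypothetical
names, not actual Octavia code) of what an amphora-level apolocation check
in a scheduler might look like; the interesting open question is what the
set of "already placed" LBs should be scoped to (amphora, host, cell, or
availability zone):

    from dataclasses import dataclass, field

    @dataclass
    class Amphora:
        id: str
        host_id: str
        lb_ids: set = field(default_factory=set)  # LBs already placed here

    def violates_apolocation(candidate: Amphora,
                             apolocated_lb_ids: set) -> bool:
        # Amphora-level check: placing the new LB on 'candidate' is not
        # allowed if any LB it must be kept apart from already lives there.
        # A host-, cell-, or AZ-level definition would aggregate lb_ids
        # across all amphorae in that scope instead.
        return bool(candidate.lb_ids & apolocated_lb_ids)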


>
> Items I can come up with off the top of my head:
>
> 1) LB:Amphora - M:N vs 1:N
>

My opinion: for simplicity, the first revision should be 1:N, but leave
open the possibility of M:N at a later date, depending on what people
require. That is to say, we'll do only 1:N at first so we can keep the
scheduling algorithms simple, but let's not paint ourselves into a corner
in other portions of the code by assuming there will only ever be one LB
on an amphora.
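
As a purely illustrative sketch (not the actual Octavia schema), one way
to avoid painting ourselves into that corner is to model the relationship
through an association table, so the data model already permits M:N while
the scheduler simply refuses to place more than one LB per amphora for
now:

    import sqlalchemy as sa

    metadata = sa.MetaData()

    load_balancer = sa.Table(
        'load_balancer', metadata,
        sa.Column('id', sa.String(36), primary_key=True))

    amphora = sa.Table(
        'amphora', metadata,
        sa.Column('id', sa.String(36), primary_key=True))

    # Nothing in the schema limits an amphora to a single LB; the 1:N
    # restriction lives in the scheduling logic, where it's easy to relax.
    lb_amphora = sa.Table(
        'lb_amphora', metadata,
        sa.Column('lb_id', sa.String(36),
                  sa.ForeignKey('load_balancer.id'), nullable=False),
        sa.Column('amphora_id', sa.String(36),
                  sa.ForeignKey('amphora.id'), nullable=False))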


> 2) VIPs:LB - M:N vs 1:N
>

So, I would revise that to be N:1 or 1:1. I don't think we'll ever want to
support a case where multiple LBs share the same VIP. (Multiple amphorae
per VIP, yes... but not multiple LBs per VIP. LBs are logical constructs
that also provide for good separation of concerns, particularly around
security.)

The most solid use case for N:1 that I've heard is IPv6, where a user
wants to expose exactly the same services over IPv4 and IPv6, and
therefore it makes sense to be able to have multiple VIPs per load
balancer. (In fact, I'm not aware of any other use case here that holds
water.) Having said this, we're quite a ways from IPv6 being ready for
use in the underlying networking infrastructure. So again, I would say
let's go with 1:1 for now to keep scheduling simple, but not paint
ourselves into a corner architecturally in other areas of the code by
assuming there will only ever be one VIP per LB.
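
Again, just a hypothetical sketch of how that might look schema-wise: keep
VIPs in their own table with a foreign key back to the load balancer, so
adding a second (e.g. IPv6) VIP later is a data change rather than a
schema change, and the "1:1 for now" rule is nothing more than a unique
constraint that can be dropped later:

    import sqlalchemy as sa

    metadata = sa.MetaData()

    load_balancer = sa.Table(
        'load_balancer', metadata,
        sa.Column('id', sa.String(36), primary_key=True))

    vip = sa.Table(
        'vip', metadata,
        sa.Column('id', sa.String(36), primary_key=True),
        # unique=True enforces one VIP per LB today; dropping it later
        # gives N:1 (e.g. an IPv4 and an IPv6 VIP on the same LB).
        sa.Column('lb_id', sa.String(36),
                  sa.ForeignKey('load_balancer.id'), unique=True),
        sa.Column('ip_address', sa.String(64)),
        sa.Column('ip_version', sa.Integer))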

> 3) Pool:HMs - 1:N vs 1:1
>

Does anyone have a solid use case for having more than one health monitor
per pool?  (And how do you resolve conflicts in health monitor check
results?)  I can't think of one, so 1:1 has my vote here.
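
Just to illustrate the conflict problem (a hypothetical helper, not
anything in the codebase): with more than one monitor per pool, some merge
policy has to exist, and neither obvious choice is clearly right, which is
exactly the ambiguity a 1:1 mapping avoids:

    def member_is_healthy(monitor_results, require_all=True):
        # monitor_results: list of booleans, one per health monitor.
        return all(monitor_results) if require_all else any(monitor_results)

    # Two monitors disagree: is the member up or down?
    print(member_is_healthy([True, False], require_all=True))   # False
    print(member_is_healthy([True, False], require_all=False))  # True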



>
> I'm sure there are others, and each one will need to be evaluated on a
> case-by-case basis.  We will be walking a fine line between flexibility
> and complexity.  We just need to decide how far to either side of that
> line we are willing to go.
>
> Thanks,
> Brandon
>



-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807

