[openstack-dev] [Octavia] Responsibilities for controller drivers
Brandon Logan
brandon.logan at RACKSPACE.COM
Tue Sep 16 02:32:29 UTC 2014
Hi Stephen,
Same drill
On Mon, 2014-09-15 at 13:33 -0700, Stephen Balukoff wrote:
> Hi Brandon!
>
>
> My responses in-line:
>
> On Fri, Sep 12, 2014 at 11:27 AM, Brandon Logan
> <brandon.logan at rackspace.com> wrote:
> In IRC the topic came up about supporting many-to-many load balancers
> to amphorae. I believe a consensus was reached that allowing only
> one-to-many load balancers to amphorae would be the first step forward,
> and that we would re-evaluate later, since colocation and apolocation
> will need to work (which brings up another topic, defining what it
> actually means to be colocated: on the same amphora, on the same
> amphora host, on the same cell/cluster, or in the same data
> center/availability zone. That should be something we discuss later,
> but not right now).
>
> I am fine with that decision, but Doug brought up a good point that
> this could very well just be a decision for the controller driver, and
> Octavia shouldn't mandate this for all drivers. So I think we need to
> clearly define which decisions are the responsibility of the
> controller driver versus which decisions are mandated by Octavia's
> construct.
>
>
> In my mind, the only thing dictated by the controller to the driver
> here would be things related to colocation / apolocation. So in order
> to fully have that discussion here, we first need to have a
> conversation about what these things actually mean in the context of
> Octavia and/or get specific requirements from operators here. The
> reference driver (i.e. the haproxy amphora) will of course have to follow a
> given behavior here as well, and there's the possibility that even if
> we don't dictate behavior in one way or another, operators and users
> may come to expect the behavior of the reference driver here to become
> the de facto requirements.
So since with HA we will want apolocation, are you saying the controller
should dictate that every driver create a load balancer's amphorae on
different hosts? I'm not sure how the controller could enforce this,
other than through code reviews, but I might be short-sighted here.
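Just to make the question concrete, here's a rough sketch of the kind of
post-placement check a controller *could* run to verify that a driver
honored apolocation. This is made-up example code, not actual Octavia
code; it assumes each amphora reports back the compute host it landed on.

from collections import Counter


class ApolocationViolation(Exception):
    """Raised when two amphorae of one load balancer share a compute host."""


def assert_apolocated(amphorae):
    # `amphorae` is assumed to be the amphorae of a single load balancer,
    # each exposing a (hypothetical) `compute_host` attribute reported
    # back by the driver after placement.
    hosts = Counter(a.compute_host for a in amphorae)
    crowded = [host for host, count in hosts.items() if count > 1]
    if crowded:
        raise ApolocationViolation(
            "amphorae share compute host(s): %s" % ", ".join(crowded))

Even then, the controller would only be checking what the driver tells
it, so code review may still end up being the real enforcement mechanism.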
>
>
> Items I can come up with off the top of my head:
>
> 1) LB:Amphora - M:N vs 1:N
>
>
> My opinion: For simplicity, first revision should be 1:N, but leave
> open the possibility of M:N at a later date, depending on what people
> require. That is to say, we'll only do 1:N at first so we can have
> simpler scheduling algorithms for now, but let's not paint ourselves
> into a corner in other portions of the code by assuming there will
> only ever be one LB on an amphora.
This is reasonable. Of course, this brings up the question of whether we
should keep the table structure as-is with an M:N relationship. My
opinion is that we start with the 1:N table structure. My reasons are in
response to your comment on this review:
https://review.openstack.org/#/c/116718/
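For illustration only (invented names, not the models in that review),
the 1:N shape I'm arguing for would look roughly like this:

from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship

Base = declarative_base()


class LoadBalancer(Base):
    __tablename__ = 'load_balancer'
    id = Column(String(36), primary_key=True)
    # 1:N -- each amphora row points back at exactly one load balancer.
    amphorae = relationship('Amphora', backref='load_balancer')


class Amphora(Base):
    __tablename__ = 'amphora'
    id = Column(String(36), primary_key=True)
    load_balancer_id = Column(String(36), ForeignKey('load_balancer.id'),
                              nullable=True)

# The M:N alternative would drop the foreign key above and keep a
# load_balancer_amphora association table (load_balancer_id, amphora_id)
# instead; the scheduler could still choose to only ever populate it 1:N.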
>
> 2) VIPs:LB - M:N vs 1:N
>
>
> So, I would revise that to be N:1 or 1:1. I don't think we'll ever
> want to support a case where multiple LBs share the same VIP.
> (Multiple amphorae per VIP, yes... but not multiple LBs per VIP. LBs
> are logical constructs that also provide for good separation of
> concerns, particularly around security.)
Yeah, sorry about that, brain fart. Unless we want shareable VIPs!?
Anyone? Anyone?
>
>
> The most solid use case for N:1 that I've heard is the IPv6 use case,
> where a user wants to expose the exact same services over IPv4 and
> IPv6, and therefore it makes sense to be able to have multiple VIPs
> per load balancer. (In fact, I'm not aware of other use cases here
> that hold any water.) Having said this, we're quite a ways from IPv6
> being ready for use in the underlying networking infrastructure.
> So... again, I would say let's go with 1:1 for now to make things
> simple for scheduling, but not paint ourselves into a corner here
> architecturally in other areas of the code by assuming there will only
> ever be one VIP per LB.
Yeah, if N:1 ever comes up as something we should and can do, we'll
revisit it then.
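In the same spirit, here's another purely illustrative sketch (again,
invented names, not a proposed schema) of how keeping the VIP in its own
table keyed to a load balancer gives us 1:1 today without boxing us out
of the IPv6 N:1 case later:

from sqlalchemy import Column, ForeignKey, String, UniqueConstraint
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class LoadBalancer(Base):
    __tablename__ = 'load_balancer'
    id = Column(String(36), primary_key=True)


class Vip(Base):
    __tablename__ = 'vip'
    id = Column(String(36), primary_key=True)
    load_balancer_id = Column(String(36), ForeignKey('load_balancer.id'),
                              nullable=False)
    ip_address = Column(String(64))
    # Enforces 1:1 for now; allowing N:1 later (e.g. an IPv4 and an IPv6
    # VIP on the same LB) would just mean dropping this constraint rather
    # than reshaping the schema.
    __table_args__ = (UniqueConstraint('load_balancer_id'),)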
>
>
> 3) Pool:HMs - 1:N vs 1:1
>
>
> Does anyone have a solid use case for having more than one health
> monitor per pool? (And how do you resolve conflicts in health monitor
> check results?) I can't think of one, so 1:1 has my vote here.
I don't know of any strong ones, but it is allowed by some vendors.
>
>
>
>
> I'm sure there are others. I'm sure each one will need to be evaluated
> on a case-by-case basis. We will be walking a fine line between
> flexibility and complexity. We just need to define how far over that
> line and in which direction we are willing to go.
>
> Thanks,
> Brandon
>
> --
> Stephen Balukoff
> Blue Box Group, LLC
> (800)613-4305 x807