[openstack-dev] [Octavia] Question about where to render haproxy configurations

Stephen Balukoff sbalukoff at bluebox.net
Sun Sep 7 09:51:40 UTC 2014


Hi German and Brandon,

Responses in-line:


On Sun, Sep 7, 2014 at 12:21 AM, Brandon Logan <brandon.logan at rackspace.com>
wrote:

> Hi German,
>
> Comments in-line
>
> On Sun, 2014-09-07 at 04:49 +0000, Eichberger, German wrote:
> > Hi Steven,
> >
> >
> >
> > Thanks for taking the time to lay out the components clearly. I think
> > we are pretty much on the same page :)
> >
> >
> >
> > Driver vs. Driver-less
> >
> > I strongly believe that REST is a cleaner interface/integration point
> > – but if even Brandon believes that drivers are the better approach
> > (having suffered through the LBaaS v1 driver world, which is not an
> > advertisement for this approach) I will concede on that front. Let's
> > hope nobody makes an asynchronous driver and/or writes straight to the
> > DB :) That said, I still believe that adding the driver interface now
> > will lead to some more complexity, and I am not sure we will get the
> > interface right in the first version: so let's agree to develop with a
> > driver in mind but not allow third-party drivers before the interface
> > has matured. I think that is something we already sort of agreed to,
> > but I just want to make that explicit.
>
> I think the LBaaS V1/V2 driver approach works well enough.  The problems
> that arose from it were because most entities were root-level objects
> and thus had some independent properties of their own.  For example, a pool
> can exist without a listener, and a listener can exist without a load
> balancer.  The load balancer was the entity tied to the driver.  For
> Octavia, we've already agreed that everything will be a direct or
> indirect child of a load balancer so this should not be an issue.
>
> I agree with you that we will not get the interface right the first
> time.  I hope no one was planning on starting another driver other than
> haproxy anytime before 1.0 because I vaguely remember 2.0 being the time
> that multiple drivers can be used.  By that time the interface should be
> in good shape.
>

I'm certainly comfortable with the self-imposed development restriction
that we develop only the haproxy driver until at least 1.0, and that we
don't allow multiple drivers until 2.0. This also seems reasonable in
order to follow our constitutional mandate that the reference
implementation always be open source with unencumbered licensing. (It
seems to follow logically that the open source reference driver must
necessarily lead any 3rd party drivers in feature development.)
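
Just to make the shape of that concrete: since everything hangs off the
load balancer, the driver interface could end up looking something like
the sketch below. This is purely illustrative on my part; the method
names are hypothetical and shouldn't be read as the agreed-upon interface.

    # Illustrative only: a driver interface keyed entirely off the load
    # balancer, so a driver never has to deal with orphaned child objects.
    # Method names here are hypothetical, not the agreed-upon interface.
    import abc


    class LoadBalancerDriver(abc.ABC):

        @abc.abstractmethod
        def create_load_balancer(self, load_balancer):
            """Provision amphorae and push config for a new load balancer."""

        @abc.abstractmethod
        def update_load_balancer(self, load_balancer):
            """Re-render and push config after any child object changes."""

        @abc.abstractmethod
        def delete_load_balancer(self, load_balancer):
            """Tear down the amphorae serving this load balancer."""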

Also, the protocol the haproxy driver will use to speak to the Octavia
haproxy amphorae will certainly be REST-like, if not completely RESTful. I
don't think anyone is disagreeing about that. (Keep in mind that REST
doesn't demand that JSON or XML be used for resource representations;
"haproxy.cfg" can still be a valid listener resource representation and
the interface still qualifies as RESTful.) Again, I'm still working on that
API spec, so I'd prefer to have a draft in hand before we get too much
further into the specifics of the API; that way we have something concrete
to discuss (and don't waste time on non-specific speculative objections),
eh.
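
To give a rough flavor of what I mean (and only a flavor; the endpoint,
port, and handling below are placeholders until the API spec draft is
out), pushing a rendered config to an amphora could be as simple as:

    # Placeholder sketch: PUT the rendered haproxy.cfg as the listener's
    # representation. The URL, port, and TLS handling are made up for the
    # example; the real amphora API is still being spec'd.
    import requests

    def push_listener_config(amphora_address, listener_id, haproxy_cfg):
        url = "https://%s:9443/listeners/%s/haproxy" % (amphora_address,
                                                        listener_id)
        resp = requests.put(url, data=haproxy_cfg,
                            headers={"Content-Type": "text/plain"},
                            timeout=10)
        resp.raise_for_status()
        return resp.status_code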


> >
> >
> > Multiple drivers/version for the same Controller
> >
> > This is a really contentious point for us at HP: If we allow several
> > drivers, or even different versions of the same driver (e.g. A, B, C),
> > to run in parallel, testing will involve testing all the possible
> > (version) combinations to avoid potential side effects. That can get
> > extensive really quickly. So HP is proposing, given that we will have
> > 100s of controllers anyway, to limit the number of drivers per
> > controller to 1 to aid testing. We can revisit that at a future time
> > when our testing capabilities have improved, but for now I believe we
> > should choose that to speed things up. I personally don't see the need
> > for multiple drivers per controller – in an operator-grade environment
> > we likely don't need to "save" on the number of controllers ;-) The
> > only reason we might need two drivers on the same controller is if an
> > Amphora for whatever reason needs to be talked to by two drivers
> > (e.g. you install nginx and haproxy and have a driver for each). This
> > use case scares me, so we should not allow it.
> >
> > We also see some operational simplifications from supporting only one
> > driver per controller: If we have an update for driver A we don't need
> > to touch any controller running driver B. Furthermore, we can keep the
> > old version running but make sure no new Amphora gets scheduled there,
> > let it wind down through attrition, and then stop that controller when
> > it doesn't have any more Amphorae to serve.
>
> I also agree that we should, for now, only allow 1 driver at a time and
> revisit it after we've got a solid grasp on everything.  I honestly
> don't think we will have multiple drivers for a while anyway, so by the
> time we have a solid grasp on it we will know the complexities it would
> introduce and can either make the restriction permanent or implement it.
>

This sounds reasonable to me. When it comes time to support additional
drivers, it may become more obvious whether it makes sense to support one
driver per controller or multiple drivers per controller (we're a ways off
from needing to worry too much about that right now anyway). But if we try
to keep the interface between the driver and controller clean, we should be
in a good spot to do either.
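
To illustrate what I mean by keeping that interface clean (the names
below are hypothetical, and this is only a sketch of the idea, not a
proposal for the actual code): the controller would be configured with a
single driver name and instantiate exactly one driver, so allowing
multiple drivers later would be a contained change.

    # Hypothetical sketch of "one driver per controller": the controller
    # loads exactly one driver by name. Class and option names are made up.
    class HaproxyDriver(object):
        """Stand-in for the real haproxy amphora driver."""


    DRIVER_REGISTRY = {"haproxy": HaproxyDriver}


    def load_driver(driver_name):
        # A single name, not a list; supporting multiple drivers per
        # controller later would only mean changing this one spot.
        return DRIVER_REGISTRY[driver_name]()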

I think the crux of this discussion is really going to be determined by
our recommended upgrade path for controllers and amphorae (which, again, we
haven't really discussed in depth, and which probably deserves its own
thread when we get to the point that having a recommended process there
becomes pertinent).


>
> I do recognize your worry about the many permutations that could arise
> from having a controller driver version and an amphora version.  I might
> be short-sighted or just blind to it, but would you be testing an nginx
> controller driver against an haproxy amphora?  That shouldn't work, and
> thus I don't see why you would want to test that.  So the only other
> option is testing (as an example) an haproxy 1.5 controller driver with
> amphorae that may have different versions of code, scripts, ancillary
> applications, and/or haproxy.  So it's possible there could be N different
> amphora versions all running haproxy 1.5, if you are planning on keeping
> older versions around.  You would need to test the haproxy 1.5 controller
> driver against N amphora versions.  Obviously if we are allowing
> multiple versions of haproxy controller drivers, then we'd have to test
> N controller drivers against N amphora versions.  Correct me if I am
> interpreting your versioning issue incorrectly.


I'm hoping we can come up with an upgrade workflow that encourages expiring
and recycling obsolete-versioned amphorae in a relatively quick, seamless,
and low-risk way. I *think* this will be possible, but again, we need to
discuss this more thoroughly (probably after we have at least one complete
concrete proposal for a solution here).

Just to point out, again: This is a very different problem from deciding
where we render haproxy configs. :) (Yes, not 100% unrelated, but still, I
think it deserves a different discussion.)


> I see this being a potential issue.  However, right now at least, I
> think the benefit of not having to update all amphorae in a deployment
> if we need to make a simple config rendering change outweighs it.  I
> feel like we will be doing a lot more of those changes than adding new
> haproxy-version controller drivers.  Even
> when a new haproxy version is released, what are the odds that we would
> want to use it if what we already have works for what we need?  If what
> we already have doesn't, then we'd probably not use an old and busted
> controller and an old and busted amphora version.
>
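
Just to illustrate how small the controller-side rendering piece Brandon
mentions could be, here is a rough sketch using Jinja2. The template name
and variables are made up for the example; this isn't meant as the actual
implementation.

    # Rough illustration only: rendering haproxy.cfg on the controller via
    # a Jinja2 template. A change to the template (or to this function)
    # touches the controller, not the amphorae. File and variable names
    # are hypothetical.
    import jinja2

    _env = jinja2.Environment(loader=jinja2.FileSystemLoader("templates"))

    def render_haproxy_config(listener, pool, members):
        template = _env.get_template("haproxy.cfg.j2")
        return template.render(listener=listener, pool=pool, members=members)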

If a new version of haproxy is released that is backward-incompatible with
previous releases, we'd probably be developing a new, separate driver for
it anyway. The same goes if it is backward-compatible but exposes a plethora
of new features that haproxy 1.5 can't do and that we want to take
advantage of.

Seeing how long it took them to release haproxy 1.5 as "stable," I don't
think this is going to be a problem we have to worry about for a long time.
:)


>
> That's my take on it currently.  It is subject to change of course.
>
> >
> >
> > Lastly, I interpreted the word "VM driver" in the spec along the lines
> > of what we have in libra: a driver interface on the Amphora agent that
> > abstracts starting/stopping haproxy (in case we end up on something
> > different) and abstracts writing the haproxy file. But that is for the
> > agent on the Amphora. I am sorry I got confused that way when reading
> > the 0.5 spec, and I am therefore happy we can have this discussion to
> > make things clearer.
>
> I'm sure more things will come up that we've all made assumptions about,
> where in reading the specs we read what we thought they said rather than
> what they actually say.
>
>
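
For what it's worth, the amphora-side abstraction German describes makes
sense to me, and might look roughly like the sketch below. This is only my
reading of it; the method names are not from any spec.

    # Sketch of an agent-side driver on the Amphora, along the lines of
    # libra's approach: it hides how the config is written and how the
    # load balancing process is started/stopped, so something other than
    # haproxy could slot in later. Method names are hypothetical.
    import abc


    class AmphoraAgentDriver(abc.ABC):

        @abc.abstractmethod
        def write_config(self, listener_id, config_text):
            """Persist the rendered config for one listener."""

        @abc.abstractmethod
        def start(self, listener_id):
            """Start (or reload) the load balancing process for a listener."""

        @abc.abstractmethod
        def stop(self, listener_id):
            """Stop the load balancing process for a listener."""
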
It's very easy to miss details and even completely misunderstand certain
concepts or discussion points, both from lack of ability to convey an idea
thoroughly or correctly, and lack of ability to grasp all the subtleties of
someone else's idea, especially if there's confusion on terminology. Heck,
in this thread alone I know I've misunderstood several of your points
several times (and *mostly* by accident ;). But I think the communication
we've been having on this has been good, and that we're all understanding
each other's concerns and aligning our efforts better by doing so, eh!

In any case, I don't want anyone to feel like we can't revisit decisions
made in the past if new knowledge, ideas, or a better understanding
warrants it, eh. We just need to make sure we keep moving *mostly* forward,
eh. :)

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807