[openstack-dev] [Octavia] Question about where to render haproxy configurations

Stephen Balukoff sbalukoff at bluebox.net
Thu Sep 4 05:01:28 UTC 2014


Hello!

At today's Octavia meeting, one of the questions briefly discussed was
where, in the Octavia architecture, haproxy configuration files should be
rendered. This discussion went long and we decided to take it to the
mailing list for more thorough discussion and to gather alternate ideas.

Anyway, the two main ideas are: (1) render the configuration in the back-end
(haproxy) driver and push complete configs out to the VMs/containers running
the haproxy software via the back-end API, or (2) use the back-end API to
push out configuration updates and have the VMs/containers render the
configs themselves.
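
To make the contrast a bit more concrete, here is a rough sketch of what the
two shapes of the back-end API might look like from the driver's side. The
endpoint paths and payloads below are hypothetical and purely illustrative;
they are not actual Octavia code.

    # Hypothetical sketch; endpoint paths and payloads are invented to
    # illustrate the two proposed shapes of the back-end API.
    import json
    import requests

    BACKEND = "https://10.0.0.5:9443"  # hypothetical Octavia VM / container

    def push_rendered_config(haproxy_cfg_text):
        # Option 1: the driver has already rendered a complete haproxy.cfg;
        # the back-end only validates it and (re)starts haproxy.
        return requests.put(BACKEND + "/v1/haproxy/config",
                            data=haproxy_cfg_text,
                            headers={"Content-Type": "text/plain"})

    def push_config_update(pool_id, member_update):
        # Option 2: the driver sends a structured update (here, a changed
        # pool member) and the back-end renders the haproxy.cfg itself.
        return requests.put(BACKEND + "/v1/pools/" + pool_id + "/members",
                            data=json.dumps(member_update),
                            headers={"Content-Type": "application/json"})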

I'm in favor of rendering haproxy configs in the driver, and will present
arguments in favor of this architecture. I understand German @ HP is the
main proponent of rendering the configs on the back-end VMs/containers, so
I'll let him speak to his own points there and respond to mine below.

Why we should render haproxy configurations within the driver:


   - This is analogous to how other back-end drivers for proprietary
   virtual machines will do it. This means our reference implementation is
   more easily used as a model by 3rd-party vendors who want to use Octavia
   for managing their own appliance images.
   - This keeps the back-end API simpler, as the back-end Octavia VM /
   container doesn't need to know anything about how to render a
   configuration; it just needs to know how to run one. (There is a rough
   sketch of driver-side rendering after this list.)
   - A simpler back-end API means fewer failure scenarios to plan for:
   either the config pushed out works or it doesn't.
   - Minor bugfixes, configuration-related security fixes, and minor
   feature improvements can be done centrally without having to update
   potentially tens of thousands of back-end VMs/containers. (The other
   model requires doing this for even the smallest of updates.)
   - Major bugfixes and changes will still need to touch all back-end
   VMs/containers, but this is no different than the other model.
   - If an operator wants to deliver service using other load balancing
   software (e.g. nginx), this can be done either by writing a new driver
   for a new VM / container image which does this, or by "enhancing" the
   haproxy driver to render both kinds of configs and updating the image and
   back-end API to know how to deal with nginx configs. No real advantage or
   disadvantage here, IMO.
   - This better follows the "centralize intelligence / decentralize
   workload" development philosophy for the project. Putting the rendering
   logic on the Octavia VMs / containers unnecessarily pushes a fair bit of
   intelligence out to what should be fairly "dumb" workload-handling
   instances.
   - It is far simpler for the driver, being part of the controller, to
   pull in extra data needed for a complete config (e.g. TLS certificates)
   than for the back-end VM / container to do this directly. The back-end
   VM / container should not have privileged resource access to get this
   kind of information directly, since that could easily lead to security
   problems for the whole of OpenStack.
   - Since the Octavia VM / container image is "dumber", it's also simpler
   to write, test, and troubleshoot.
   - Spinning up new instances is far simpler because the new back-end just
   gets a complete config from the driver / controller and runs with it. No
   need to try to guess state.
   - This design is, overall, more idempotent.  (Something went wrong with
   that update to a pool member configuration? No worries-- the next update
   will get that change, too.)
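
To illustrate the first couple of points above, here is a minimal sketch of
what driver-side rendering could look like, using a Jinja2 template. The
template and field names are invented for illustration and are not actual
Octavia code; the point is only that the controller/driver assembles
everything it needs and ships a complete config to a "dumb" back-end.

    # Hypothetical driver-side rendering sketch; the template and field
    # names are invented for illustration and are not actual Octavia code.
    import jinja2

    HAPROXY_TEMPLATE = jinja2.Template("""\
    frontend {{ listener.id }}
        bind {{ vip }}:{{ listener.port }}
        default_backend {{ pool.id }}

    backend {{ pool.id }}
        balance {{ pool.lb_algorithm }}
    {%- for member in pool.members %}
        server {{ member.id }} {{ member.address }}:{{ member.port }}
    {%- endfor %}
    """)

    def render_haproxy_config(listener, pool, vip):
        # The driver pulls in whatever extra data it needs (e.g. TLS certs
        # from a privileged store), renders a complete haproxy.cfg, and the
        # back-end VM / container just receives and runs it.
        return HAPROXY_TEMPLATE.render(listener=listener, pool=pool, vip=vip)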

The only down-side I see to this is:

   - We will need to keep track of which VMs / containers are running which
   images, and the driver will need to know how to speak to them
   appropriately. However, I think this is a false down-side, since:
      - In a large installation, it's unreasonable to expect the potentially
      tens of thousands of VMs to be running the exact same version of
      software. Forcing this requirement makes things very inflexible and a
      lot riskier for the operator as far as maintenance and upgrade
      schedules are concerned.
      - The above point is to say, having the ability to run with different
      versions of back-end VMs / containers is actually a good thing, and
      probably a requirement for large installations anyway.
      - If you really want the model where you have to update all your
      back-end VM / container images with each update, you can still do
      that with this topology. Again, with the alternate topology suggested
      you are forced to do that even for very minor updates.
      - API versioning (even for the back-end) is also a requirement of
      this project's philosophy. And if you're going to have a versioned
      API:
      - It's pretty simple to write a 'status' API command which, among
      other things, lists the API version in use as well as the versions of
      all the major utilities installed (e.g. haproxy, openssl, nginx,
      etc.). It's then up to the controller to render configurations
      appropriate to the software versions installed, and/or to force an
      image to be retired in favor of spinning up a new one, and/or to
      return an error to a user trying to push out a feature request for an
      image that doesn't support it. Lots of options, all of which can be
      intelligently handled within the driver / controller. (A rough sketch
      of such a 'status' payload follows this list.)
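
As a rough illustration of that last point, here is the kind of thing a
hypothetical 'status' payload might contain and one way the controller
could act on it. The field names and version strings are invented; this is
not an actual Octavia API.

    # Hypothetical 'status' payload from a back-end VM / container, and one
    # way the controller might act on it. Field names are invented.
    EXAMPLE_STATUS = {
        "backend_api_version": "0.5",
        "haproxy_version": "1.5.4",
        "openssl_version": "1.0.1f",
    }

    def pick_template(status):
        # The controller renders a config that matches what the image
        # actually has installed, instead of assuming every image is
        # identical. It could also decide to retire the image, or reject a
        # feature the installed software can't support.
        if status["haproxy_version"].startswith("1.5"):
            return "haproxy-1.5.cfg.j2"   # hypothetical template name
        return "haproxy-1.4.cfg.j2"       # older image gets a compatible config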

Thoughts?

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807