<div dir="ltr">Hello!<div><br></div><div>At today's Octavia meeting, one of the questions briefly discussed was where, in the Octavia architecture, haproxy configuration files should be rendered. This discussion went long and we decided to take it to the mailing list for more thorough discussion and to gather alternate ideas.</div>

Anyway, the two main ideas are: (1) render the configuration in the back-end (haproxy) driver and push complete configs out via API to the VMs/containers running the haproxy software, or (2) use the back-end API to push out configuration updates and have the VMs/containers render the configs themselves.

I'm in favor of rendering haproxy configs in the driver, and will present arguments for this architecture below. I understand German @ HP is the main proponent of rendering the configs on the back-end VMs/containers; I'll let him speak to his own points there and respond to mine.

Why we should render haproxy configurations within the driver:

* This is analogous to how other back-end drivers for proprietary virtual machines will do it. This means our reference implementation is more easily used as a model for 3rd-party vendors who want to use Octavia for managing their own appliance images.
* This keeps the back-end API simpler: the back-end Octavia VM/container doesn't need to know anything about how to render a configuration, it just needs to know how to run one. (See the sketch after this list for what the driver's side of that API could look like.)
* A simpler back-end API means fewer failure scenarios to plan for. Either the config pushed out works or it doesn't.
* Minor bug fixes, configuration-related security fixes, and minor feature improvements can be made centrally without having to update potentially tens of thousands of back-end VMs/containers. (With the other model, we would have to do this for even the smallest of updates.)
* Major bug fixes and changes will still need to touch all back-end VMs/containers, but this is no different from the other model.
* If an operator wants to deliver service using another set of load balancing software (e.g. nginx), this can be done either by writing a new driver for a new VM/container image which does this, or by "enhancing" the haproxy driver to render both kinds of configs and updating the image and back-end API to know how to deal with nginx configs. No real advantage or disadvantage here, IMO.
* This better follows the "centralize intelligence / decentralize workload" development philosophy for the project. Putting the rendering logic on the Octavia VMs/containers unnecessarily pushes a fair bit of intelligence out to what should be fairly "dumb" workload-handling instances.
* It is far simpler for the driver, being part of the controller, to pull in the extra data needed for a complete config (e.g. TLS certificates) than for the back-end VM/container to do this directly. The back-end VM/container should not have the privileged resource access needed to get this kind of information directly, since that could easily lead to security problems for the whole of OpenStack.
* Since the Octavia VM/container image is "dumber," it's also simpler to write, test, and troubleshoot.
* Spinning up new instances is far simpler, because the new back-end just gets a complete config from the driver/controller and runs with it. There is no need to try to guess state.
* This design is, overall, more idempotent. (Something went wrong with that update to a pool member configuration? No worries-- the next complete config push will pick up that change, too.)
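
To make the proposed flow concrete, here is a rough sketch (in Python, since that's what Octavia is written in) of what "render in the driver, push the complete config" could look like. Everything here -- the HaproxyDriver class, the /v1/haproxy/... endpoint, the template fields -- is hypothetical and only illustrates the shape of the design, not actual Octavia code:

import jinja2
import requests

# Minimal haproxy config template; listener and pool are plain dicts.
HAPROXY_TEMPLATE = jinja2.Template("""\
frontend {{ listener.id }}
    bind {{ listener.vip }}:{{ listener.port }}
    default_backend {{ pool.id }}

backend {{ pool.id }}
{%- for member in pool.members %}
    server {{ member.id }} {{ member.ip }}:{{ member.port }}
{%- endfor %}
""")

class HaproxyDriver(object):
    """Renders complete haproxy configs centrally and pushes them out."""

    def __init__(self, api_version='v1'):
        self.api_version = api_version

    def update_listener(self, backend_address, listener, pool):
        # Render the *entire* config from the controller's view of the
        # world. The back-end never has to reconstruct partial state,
        # which is what makes repeated pushes idempotent.
        config = HAPROXY_TEMPLATE.render(listener=listener, pool=pool)

        # Push the finished config over the back-end's (versioned,
        # hypothetical) API. The back-end only needs to know how to
        # run it.
        url = 'https://%s/%s/haproxy/%s' % (
            backend_address, self.api_version, listener['id'])
        resp = requests.put(url, data=config,
                            headers={'Content-Type': 'text/plain'})
        # Either the pushed config works or it doesn't.
        resp.raise_for_status()

# Usage, e.g.:
# HaproxyDriver().update_listener(
#     '10.0.0.5',
#     listener={'id': 'lst1', 'vip': '203.0.113.10', 'port': 80},
#     pool={'id': 'pool1', 'members': [
#         {'id': 'm1', 'ip': '10.0.1.7', 'port': 8080}]})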

The only down-side I see to this is:

* We will need to keep track of which VMs/containers are running which images, and the driver will need to know how to speak to them appropriately. However, I think this is a false down-side, since:
  * In a large installation, it's unreasonable to expect the potentially tens of thousands of VMs to all be running the exact same version of the software. Forcing this requirement makes things very inflexible and a lot riskier for the operator, as far as maintenance and upgrade schedules are concerned.
  * Which is to say: having the ability to run different versions of back-end VMs/containers is actually a good thing, and probably a requirement for large installations anyway.
  * If you really want the model where you have to update all your back-end VM/container images with each update, you can still do that with this topology. With the alternate topology, you are forced to do that even for very minor updates.
  * API versioning (even for the back-end) is also a requirement of this project's philosophy. And if you're going to have a versioned API:
  * It's pretty simple to write a 'status' API command which, among other things, lists the API version in use as well as the versions of the major utilities installed (e.g. haproxy, openssl, nginx, etc.). It's then up to the controller to render configurations appropriate to the installed software versions, and/or force an image to be retired in favor of spinning up a new one, and/or return an error to a user trying to push out a feature request for an image that doesn't support it. Lots of options, all of which can be handled intelligently within the driver/controller. (A sketch of what this could look like follows below.)
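
To illustrate that last point, here's a rough sketch of the controller side of such a 'status' call. Again, the /v1/status endpoint, the response fields, and the version checks are all invented for illustration:

import requests

class UnsupportedImageError(Exception):
    pass

# Hypothetical per-version renderers; in practice these would be
# templates along the lines of the sketch earlier in this mail.
def render_haproxy_15_config(listener, pool):
    pass

def render_haproxy_14_config(listener, pool):
    pass

def get_backend_status(backend_address):
    # Ask the back-end what it is running. It might return, say:
    #   {"api_version": "1.0", "haproxy_version": "1.5.3",
    #    "openssl_version": "1.0.1j"}
    resp = requests.get('https://%s/v1/status' % backend_address)
    resp.raise_for_status()
    return resp.json()

def pick_renderer(status):
    # The controller chooses a rendering strategy to match what is
    # actually installed. Other valid reactions: retire the image and
    # spin up a current one, or return an error to the user. The point
    # is that the decision lives in the driver/controller.
    version = status['haproxy_version']
    if version.startswith('1.5'):
        return render_haproxy_15_config
    if version.startswith('1.4'):
        return render_haproxy_14_config
    raise UnsupportedImageError('no renderer for haproxy %s' % version)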

Thoughts?

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807