[openstack-dev] [Octavia] Proposal to support multiple listeners on one HAProxy instance

Stephen Balukoff sbalukoff at bluebox.net
Thu Aug 21 23:39:42 UTC 2014


Hi Dustin,

Responses in-line:



On Thu, Aug 21, 2014 at 1:56 PM, Dustin Lundquist <dustin at null-ptr.net>
wrote:

> I'm on the fence here; I see a number of advantages to each:
>
> Single HAProxy process per listener:
>
>    - Failure isolation
>    - TLS performance -- for non-TLS services HAProxy is I/O bound, and
>    there is no reason to run it across multiple CPU cores, but with HAProxy
>    terminating TLS there is an increased potential for DoS from a large
>    number of incoming TLS handshakes.
>    - Reduced impact of reconfiguration -- reloading the configuration
>    normally has very little impact, since HAProxy hands the listener
>    sockets off to the new instance while the old instance continues to
>    handle its existing connections, but with a more complex configuration
>    a reload is more likely to affect services on other listeners.
>
> Multiple listeners on a single HAProxy instance:
>
>    - Enables sharing pools between listeners -- this would reduce
>    health monitor traffic, and pipelining requests from multiple
>    listeners is possible
>
I spoke to this point above. Frankly, I'm starting to think this
argument might be premature optimization: I'm guessing that cases where
pools are shared between listeners on a single loadbalancer are going
to be relatively rare-- so few as to not merit consideration in the
overall design. :/
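
(For the sake of discussion, here's roughly what the shared-pool case
looks like in haproxy configuration terms-- the frontend/backend names
and addresses below are invented for illustration:

    frontend listener_http
        bind *:80
        default_backend pool_web

    frontend listener_alt
        bind *:8080
        default_backend pool_web

    # One shared backend means one set of health checks and one set of
    # stats counters covering both listeners.
    backend pool_web
        balance roundrobin
        server member1 10.0.0.10:80 check
        server member2 10.0.0.11:80 check

Note that this layout is only expressible at all if both listeners live
in the same haproxy process.)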


>
>    - Reduced resource usage -- we should perform the benchmarks and
>    quantify this
>
Yep, I'm looking forward to seeing the benchmarks here.


>
>    - Simplified stats/log aggregation
>
I disagree here, especially if we use something like syslog-ng for
gathering logs: it's effectively non-blocking, and non-blocking logging
is probably desirable no matter whether one haproxy process or multiple
haproxy processes are used. I'm not sure the code path haproxy uses to
write logs directly to a file is non-blocking.
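
To be concrete about the syslog option: pointing haproxy at a local
syslog-ng listener is a couple of config lines, and the UDP send won't
block haproxy's event loop. A minimal sketch (the address and facility
are just examples):

    global
        # Ship logs over UDP to a local syslog-ng listener rather
        # than having haproxy write files directly.
        log 127.0.0.1:514 local0

    defaults
        mode http
        log global
        option httplog

And this works identically whether we're running one haproxy process or
twenty.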

Stats parsing from haproxy is simpler if more processes are used. As
far as aggregation goes: well, we've yet to define what people might
want aggregated. But note that sharing a pool across listeners means
sharing stats for that pool: a user might want to see the pool's stats
for listener A versus listener B separately, which isn't possible if
the pool is shared across listeners. :/ (In any case, we're still
talking hypotheticals here...)
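
(To illustrate the stats-parsing point: each haproxy process can expose
its own admin socket-- e.g. 'stats socket /var/run/haproxy-listener_http.sock
level admin' in its global section; the path here is hypothetical-- and
then that process's stats are one command away:

    echo "show stat" | socat stdio /var/run/haproxy-listener_http.sock

The output is plain CSV, one row per frontend/backend/server, so there
is very little parsing to do per process.)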

>
>    - Simplified Octavia instances -- I think each Octavia instance
>    running only a single HAProxy process is a win: it's easier to
>    monitor, and upstart/systemd/init only needs to start a single
>    process.
>
So, in the model proposed by Michael, a single haproxy instance
consists of all the listeners on that loadbalancer running as a single
process. So if more than one loadbalancer is deployed to a single
Octavia VM, you're going to need to start / stop / otherwise control
multiple haproxy processes anyway, and the stock upstart / systemd /
init scripts aren't going to cut it for this set-up. My thought was to
write a new control script (similar to the one we use in our
environment already) which controls all the haproxy processes, and
which can be called on boot to look for and start any processes for
which configuration exists locally (assuming persistent storage for
the VM-- if some operators want to do this). It's just as likely that
we would have a freshly-booted Octavia VM check in with its controller
on boot, download any configurations it should be running, and start
the associated haproxy process(es). Either way, the model proposed by
Michael and the model proposed by me do not differ much in how this
control must work if we're allowing multiple loadbalancers per Octavia
VM.
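
As a sketch of what that control script might look like-- the paths and
naming convention below are hypothetical, nothing we've agreed on:

    #!/bin/sh
    # Start (or hitlessly reload) one haproxy process per
    # loadbalancer config found on local storage.
    for cfg in /etc/octavia/haproxy/*.cfg; do
        [ -e "$cfg" ] || continue
        pidfile="/var/run/octavia/$(basename "$cfg" .cfg).pid"
        if [ -s "$pidfile" ]; then
            # -sf hands the listener sockets to the new process; the
            # old process finishes its existing connections and exits.
            haproxy -f "$cfg" -p "$pidfile" -D -sf $(cat "$pidfile")
        else
            haproxy -f "$cfg" -p "$pidfile" -D
        fi
    done

The same loop works whether the configs come from persistent local
storage or were just downloaded from the controller at boot.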

We can potentially debate whether we allow multiple loadbalancers per
Octavia VM, but I think restricting this to a maximum of one is not
desirable from a hardware utilization perspective. Many production load
balanced services sit nearly idle all day, so there's no reason an Operator
shouldn't be allowed to combine multiple loadbalancers on a single Octavia
VM (perhaps at a lower price tier to the user). This is also similar to how
actual load balancing hardware appliance vendors tend to operate. The
restriction of one loadbalancer per Octavia VM does limit the
operator's options, eh.

Stephen

-- 
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807