[openstack-dev] [TripleO] Haproxy configuration options

Gregory Haynes greg at greghaynes.net
Mon May 26 05:20:39 UTC 2014


Excerpts from Robert Collins's message of 2014-05-25 23:12:26 +0000:
> On 23 May 2014 04:57, Gregory Haynes <greg at greghaynes.net> wrote:
> >>
> >> Eventually we may need to scale traffic beyond one HAProxy, at which
> >> point we'll need to bring something altogether more sophisticated in -
> >> let's design that when we need it.
> >> Sooner than that we're likely going to need to scale load beyond one
> >> control plane server at which point the HAProxy VIP either needs to be
> >> distributed (so active-active load receiving) or we need to go
> >> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
> >> localhost bound service.
> >
> > Putting SSL termination behind HAProxy seems odd. Typically your load
> > balancer wants to be able to grok the traffic sent though it which is
> 
> Not really :). There is a sophistication curve - yes, but generally
> load balancers don't need to understand the traffic *except* when the
> application servers they are sending to have locality of reference
> performance benefits from clustered requests. (e.g. all requests from
> user A on server Z will hit a local cache of user metadata as long as
> they are within 5 seconds). Other than that, load balancers care about
> modelling server load to decide where to send traffic.
> 
> SSL is a particularly interesting thing because you know that all
> requests from that connection are from one user - it's end-to-end,
> whereas HTTP can be multiplexed by intermediaries. This means that
> while you don't know that 'all user A's requests' are on the one
> socket, you do know that all requests on that socket are from user A.
> 
> So for our stock - and thus probably most common - API clients we have
> the following characteristics:
>  - single threaded clients
>  - one socket (rather than N)
> 
> Combine these with SSL and clearly whatever efficiency we *can* get
> from locality of reference, we will get just by taking SSL and
> backending it to one backend. That backend might itself be haproxy
> managing local load across local processes but there is no reason to
> expose the protocol earlier.

This is a good point and I agree that performance-wise there is not an
issue here.

> 
> > not possible in this setup. For an environment where sending unencrypted
> > traffic across the internal network is not allowed I agree with Mark's
> > suggestion of re-encrypting for internal traffic, but IMO it should
> > still pass through the load balancer unencrypted. Basically:
> > User -> External SSL Terminate -> LB -> SSL encrypt -> control plane
> 
> I think this is wasted CPU cycles given the characteristics of the
> APIs we're balancing. We have four protocols that need VIP usage AIUI:
> 

One other, separate issue with letting external SSL pass through to your
backends has to do with security: your app servers (or in our case
control nodes) generally have a larger attack surface and are more
distributed than your load balancers (or an SSL endpoint placed in front
of them). Additionally, compromise of an external-facing SSL cert is far
worse than compromise of an internal-only SSL cert, which could be made
backend-server specific.

I agree that re-encryption is not useful with our current setup, though:
it would occur on a control node, which removes the security benefit (I
still wanted to make sure this point is made :)).

TL;DR - +1 on the 'User -> haproxy -> ssl endpoint -> app' design.
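For concreteness, that design might be sketched as an HAProxy config
along these lines (hypothetical addresses and backend names; a rough
sketch, not a tested config). The key point is that HAProxy runs in TCP
mode so the encrypted stream passes through untouched, and termination
happens at a dedicated SSL endpoint on each control plane node, which
then proxies to the localhost-bound service:

```
# Sketch of: User -> haproxy (VIP) -> SSL endpoint -> localhost service.
# HAProxy balances raw TCP and never sees plaintext.
frontend api_vip
    bind 192.0.2.10:443          # public VIP (example address)
    mode tcp
    option tcplog
    default_backend ssl_endpoints

backend ssl_endpoints
    mode tcp
    balance source               # keep each client on one backend,
                                 # preserving locality of reference
    server ctl0 10.0.0.11:443 check
    server ctl1 10.0.0.12:443 check
```

Since our stock clients are single threaded and hold one socket, each
connection lands on a single backend anyway, which is where the locality
benefit Robert describes comes from.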

Thanks,
Greg

-- 
Gregory Haynes
greg at greghaynes.net


