[openstack-dev] [TripleO] Haproxy configuration options
Robert Collins
robertc at robertcollins.net
Sun May 25 23:12:26 UTC 2014
On 23 May 2014 04:57, Gregory Haynes <greg at greghaynes.net> wrote:
>>
>> Eventually we may need to scale traffic beyond one HAProxy, at which
>> point we'll need to bring something altogether more sophisticated in -
>> lets design that when we need it.
>> Sooner than that we're likely going to need to scale load beyond one
>> control plane server at which point the HAProxy VIP either needs to be
>> distributed (so active-active load receiving) or we need to go
>> user -> haproxy (VIP) -> SSL endpoint (on any control plane node) ->
>> localhost bound service.
>
> Putting SSL termination behind HAProxy seems odd. Typically your load
> balancer wants to be able to grok the traffic sent though it which is
Not really :). There is a sophistication curve, yes, but generally
load balancers don't need to understand the traffic *except* when the
application servers they are sending to get locality-of-reference
performance benefits from clustered requests (e.g. all requests from
user A on server Z will hit a local cache of user metadata as long as
they arrive within 5 seconds). Other than that, load balancers care
about modelling server load to decide where to send traffic.
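In haproxy terms, balancing on modelled server load without inspecting
the payload looks roughly like this (all addresses and server names
below are hypothetical, a minimal sketch only):

```haproxy
# leastconn sends each new connection to the server with the fewest
# active connections - no inspection of the traffic is needed.
listen keystone_api
    bind 192.0.2.10:5000
    mode tcp
    balance leastconn
    server ctl0 10.0.0.10:5000 check
    server ctl1 10.0.0.11:5000 check
```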
SSL is a particularly interesting thing because you know that all
requests on that connection are from one user - it's end-to-end,
whereas HTTP can be multiplexed by intermediaries. This means that
while you don't know that 'all user A's requests' are on the one
socket, you do know that all requests on that socket are from user A.
So for our stock - and thus probably most common - API clients we have
the following characteristics:
- single threaded clients
- one socket (rather than N)
Combine these with SSL and clearly whatever efficiency we *can* get
from locality of reference, we will get just by routing each SSL
connection to a single backend. That backend might itself be haproxy
managing local load across local processes, but there is no reason to
expose the protocol earlier.
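That layering can be sketched in haproxy terms - the VIP runs in plain
TCP mode and never sees plaintext, while each control plane node
terminates SSL and proxies to a localhost-bound service (addresses and
names here are hypothetical):

```haproxy
# VIP: pass the still-encrypted stream straight through.
frontend api_ssl_vip
    bind 192.0.2.10:443
    mode tcp
    default_backend api_ssl_nodes

backend api_ssl_nodes
    mode tcp
    balance leastconn
    # Each server is an SSL endpoint on a control plane node,
    # which in turn forwards decrypted traffic to 127.0.0.1.
    server ctl0 10.0.0.10:443 check
    server ctl1 10.0.0.11:443 check
```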
> not possible in this setup. For an environment where sending unencrypted
> traffic across the internal network is not allowed I agree with Mark's
> suggestion of re-encrypting for internal traffic, but IMO it should
> still pass through the load balancer unencrypted. Basically:
> User -> External SSL Terminate -> LB -> SSL encrypt -> control plane
I think this is wasted CPU cycles given the characteristics of the
APIs we're balancing. We have four protocols that need VIP usage AIUI:
- HTTP API
- HTTP Data (Swift only atm)
- AMQP
- MySQL
For HTTP API see my analysis above. For HTTP Data, unwrapping and
re-wrapping is expensive and must be balanced against expected
benefits: what request characteristic would we be
pinning/balancing/biasing on for Swift?
For AMQP and MySQL we'll be in tunnel mode anyway, so there is no
alternative but SSL to the backend machine and unwrap there.
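In haproxy terms, tunnel (TCP) mode for e.g. MySQL looks roughly like
the sketch below - addresses are hypothetical, and `option mysql-check`
is just a protocol-level health probe; the encrypted stream itself is
passed through untouched to be unwrapped on the backend machine:

```haproxy
listen mysql
    bind 192.0.2.10:3306
    mode tcp
    # Health-check with a MySQL handshake as a dedicated check user;
    # the client's SSL session terminates on the database node itself.
    option mysql-check user haproxy_check
    server db0 10.0.0.10:3306 check
    server db1 10.0.0.11:3306 check backup
```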
> This is a bit overkill given our current state, but I think for now its
> important we terminate external SSL earlier on: See ML thread linked
> above for reasoning.
If I read this correctly, you're arguing yourself back to the "User ->
haproxy (VIP) -> SSL endpoint (on any control plane node) -> localhost
bound service" I mentioned?
-Rob
--
Robert Collins <rbtcollins at hp.com>
Distinguished Technologist
HP Converged Cloud