[openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Carlos Garza carlos.garza at rackspace.com
Tue Apr 22 05:25:18 UTC 2014


On Apr 21, 2014, at 1:51 PM, "Eichberger, German" <german.eichberger at hp.com> wrote:

Hi,

Although there are some good use cases for re-encryption, I think it's out of scope for a load balancer. We can defer that functionality to the VPN: as long as we have a mechanism to insert a load balancer as a VPN node, we should get all kinds of encryption infrastructure "for free".

    I think the feature should be a part of the API, but it should be up to the vendor whether or not to implement it, since some vendors can't. Plus, an end user might not be able to append a VPN tunnel to the tail of the load balancer.

I like the Unix philosophy of little programs that each do one task very well and can be chained. So in our case we might want to chain a firewall to a load balancer to a VPN to get the functionality we want.

    I like that philosophy as well, but I must admit that the chains do break when the versions or interactions of these components change. GNU's Autotools, for example, is a nightmare compared to Maven for Java. Even changes to simpler tools like sort and tail broke some tools I used to rely on. Monolithic tools like emacs, meanwhile, seem to be doing fairly well.

    I get the impression that the simple chained-tool philosophy came from the era when individual programs had to be small enough to fit in memory and data was spooled to tape as the intermediary pipe. Still a nice idea for admins, though.

Thoughts?

German

From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Friday, April 18, 2014 9:07 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question

Hi y'all!

Carlos: When I say 'client cert' I'm talking about the certificate / key combination the load balancer will be using to initiate the SSL connection to the back-end server. The implication here is that if the back-end server doesn't like the client cert, it will reject the connection (as not being from a trusted source). By 'CA cert' I'm talking about the certificate (sans key) that the load balancer will be using to authenticate the back-end server. If the back-end server's "server certificate" isn't signed by the CA, then the load balancer should reject the connection.
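For concreteness, here's a minimal sketch of those two roles using Python's standard ssl module. The file paths, member address, and hostname are made up, and this isn't tied to any particular driver implementation:

    import socket, ssl

    # "CA cert": the load balancer uses this (no key needed) to authenticate the
    # back-end server; a member cert not signed by this CA fails the handshake.
    ctx = ssl.create_default_context(cafile="/etc/lb/backend-ca.pem")

    # "Client cert": the cert/key pair the load balancer presents to the member;
    # a member that requires client auth rejects connections that lack it.
    ctx.load_cert_chain(certfile="/etc/lb/lb-client.pem", keyfile="/etc/lb/lb-client.key")

    with socket.create_connection(("10.0.0.5", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="member1.internal.example") as conn:
            conn.sendall(b"GET / HTTP/1.0\r\nHost: member1.internal.example\r\n\r\n")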

Of course, the use of a client cert or CA cert on the load balancer should be optional: As Clint pointed out, for some users, just using SSL without doing any particular authentication (on the part of either the load balancer or the back-end) is going to be good enough.

Anyway, the case for supporting re-encryption on the load-balancers has been solidly made, and the API proposal we're making will reflect this capability. Next question:

When specific client certs / CAs are used for re-encryption, should these be associated with the pool or member?

I could see an argument for either case:

Pool (ie. one client cert / CA cert will be used for all members in a pool):
* Consistency of back-end nodes within a pool is probably both extremely common, and a best practice. It's likely all will be accessed the same way.
* Less flexible than certs associated with members, but also less complicated config.
* For CA certs, assumes user knows how to manage their own PKI using a CA.

Member (ie. load balancer will potentially use a different client cert / CA cert for each member individually):
* Customers will sometimes run with inconsistent back-end nodes (eg. "local" nodes in a pool treated differently than "remote" nodes in a pool).
* More flexible than certs associated with pools, but more complicated configuration.
* If back-end certs are all individually self-signed (ie. no single CA used for all nodes), then certs must be associated with members.
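
To make the question concrete, the two options could look roughly like the following. Every field name here is hypothetical, purely for illustration, and not part of the actual API proposal:

    # Option 1: pool-level re-encryption settings (hypothetical field names).
    pool = {
        "name": "web-backends",
        "protocol": "HTTPS",
        "reencryption": {
            "client_cert_id": "cert-1234",  # cert/key the LB presents to all members
            "ca_cert_id": "ca-5678",        # CA used to verify every member's certificate
        },
        "members": [
            {"address": "10.0.0.5", "protocol_port": 443},
            {"address": "10.0.0.6", "protocol_port": 443},
        ],
    }

    # Option 2: member-level settings, e.g. a "remote" node pinned to its own
    # self-signed certificate while other members use the pool default.
    member = {
        "address": "10.0.1.7",
        "protocol_port": 443,
        "reencryption": {
            "client_cert_id": "cert-9999",
            "ca_cert_id": None,
            "pinned_server_cert_id": "selfsigned-0042",
        },
    }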

What are people seeing "in the wild"? Are your users using inconsistently-signed or per-node self-signed certs in a single pool?

Thanks,
Stephen




On Fri, Apr 18, 2014 at 5:56 PM, Carlos Garza <carlos.garza at rackspace.com> wrote:

On Apr 18, 2014, at 12:36 PM, Stephen Balukoff <sbalukoff at bluebox.net> wrote:


Dang.  I was hoping this wasn't the case.  (I personally think it's a little silly not to trust your service provider to secure a network when they have root access to all the machines powering your cloud... but I digress.)

Part of the reason I was hoping this wasn't the case, isn't just because it consumes a lot more CPU on the load balancers, but because now we potentially have to manage client certificates and CA certificates (for authenticating from the proxy to back-end app servers). And we also have to decide whether we allow the proxy to use a different client cert / CA per pool, or per member.

    If you choose to support re-encryption on your service, then you are free to charge for the extra CPU cycles. I'm not convinced re-encryption and SSL termination in general need to be mandatory, but I think the API should allow them to be specified.


Yes, I realize one could potentially use no client cert or CA (ie. encryption but no auth)...  but that actually provides almost no extra security over the unencrypted case:  If you can sniff the traffic between proxy and back-end server, it's not much more of a stretch to assume you can figure out how to be a man-in-the-middle.

    Yes, but considering you have no problem advocating pure SSL termination for your customers (decryption on the front end and plain text on the back end), I'm actually surprised this disturbs you. I would recommend users use straight SSL passthrough or re-encryption, but I wouldn't force this on them should they choose naked encryption with no checking.



Do any of you have a use case where some back-end members require SSL authentication from the proxy and some don't? (Again, deciding whether client cert / CA usage should attach to a "pool" or to a "member.")

When you say client cert, are you referring to the end user's X509 certificate (to be rejected by the back-end server), or to the back-end server's X509 certificate, which the load balancer would reject if it discovered the back-end server had a bad signature or mismatched key? I am speaking of the case where the user wants re-encryption but wants to be able to install CA certificates that sign the back-end servers' keys via PKIX path building. I would even like to offer the customer the ability to skip hostname validation, since not everyone wants to expose DNS entries for IPs that are not publicly routable anyway. Unless you're suggesting that we should force this on the user, which likewise forces us to host a name server that maps hosts to the X509 subject CN fields. Users should be free to validate back-end hostnames, just the subject name and key, or nothing at all. It should be up to them.
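
For what it's worth, the three validation levels I have in mind map roughly to something like this sketch using Python's ssl module; the function name and arguments are invented purely for illustration:

    import ssl

    def backend_context(ca_file=None, check_hostname=True):
        if ca_file is None:
            # No validation at all: encrypt, but accept whatever cert the member presents.
            ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            return ctx
        # PKIX path building against the tenant-supplied CA bundle.
        ctx = ssl.create_default_context(cafile=ca_file)
        if not check_hostname:
            # Validate the chain and key, but skip hostname validation for members
            # whose IPs have no public DNS entries.
            ctx.check_hostname = False
        return ctx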





It's a bit of a rabbit hole, eh.
Stephen


On Fri, Apr 18, 2014 at 10:21 AM, Eichberger, German <german.eichberger at hp.com> wrote:
Hi Stephen,

The use case is that the load balancer needs to look at the HTTP requests, be it to add an X-Forwarded-For header or change the timeout, but the network between the load balancer and the nodes is not completely private, and the sensitive information needs to be transmitted encrypted again. This is admittedly an edge case, but we had to implement a similar scheme for HP Cloud's Swift storage.
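Roughly, the flow is: terminate the client's TLS, modify the headers, then re-encrypt toward the member. A toy sketch in Python (not our actual implementation; the names and paths are invented):

    import http.client
    import ssl

    def forward(path, headers, body, client_ip, member_addr):
        headers = dict(headers)
        headers["X-Forwarded-For"] = client_ip      # requires seeing the plaintext request
        ctx = ssl.create_default_context(cafile="/etc/lb/backend-ca.pem")
        conn = http.client.HTTPSConnection(member_addr, 443, context=ctx, timeout=30)
        conn.request("GET", path, body=body, headers=headers)
        return conn.getresponse()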

German

From: Stephen Balukoff [mailto:sbalukoff at bluebox.net]
Sent: Friday, April 18, 2014 8:22 AM

To: OpenStack Development Mailing List (not for usage questions)
Subject: [openstack-dev] [Neutron][LBaaS] SSL re-encryption scenario question


Howdy, folks!

Could someone explain to me the SSL usage scenario where it makes sense to re-encrypt traffic destined for members of a back-end pool?  SSL termination on the load balancer makes sense to me, but I'm having trouble understanding why one would be concerned about then re-encrypting the traffic headed toward a back-end app server. (Why not just use straight TCP load balancing in this case, and save the CPU cycles on the load balancer?)

We terminate a lot of SSL connections on our load balancers, but have yet to have a customer use this kind of functionality.  (We've had a few ask about it, usually because they didn't understand what a load balancer is supposed to do-- and with a bit of explanation they went either with SSL termination on the load balancer + clear text on the back-end, or just straight TCP load balancing.)

Thanks,
Stephen


--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807




--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807



--
Stephen Balukoff
Blue Box Group, LLC
(800)613-4305 x807
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
