[openstack-dev] Secure RPC

Eric Windisch eric at cloudscaling.com
Wed Oct 24 13:21:30 UTC 2012


> 
> Given that modern CPUs include AES support in hardware, the computational
> overheads may not matter significantly. TBD based on real world testing
> of course.
> 
You're making the assumption of AES here. I'm advocating PKI across the board. To use a symmetric cipher such as AES securely, you need a channel or session to support Diffie-Hellman, neither of which we get through the RPC design. Tacking a state machine onto the RPC layer is something I really don't want, and I don't think anyone else wants it either.

A local key cache is fine. A bi-directional channel to negotiate DH keys is not, in my opinion.
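
As a rough sketch of what signing without any session could look like (assuming the Python "cryptography" package; the envelope layout and the sign_envelope/verify_envelope helper names are illustrative only, not our actual RPC format):

    import json

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa


    def sign_envelope(private_key, sender, payload):
        # Canonical serialization, signed with the sender's private key.
        body = json.dumps({"sender": sender, "payload": payload},
                          sort_keys=True).encode()
        signature = private_key.sign(
            body,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())
        return {"body": body.decode(), "signature": signature.hex()}


    def verify_envelope(public_key, envelope):
        # Raises cryptography.exceptions.InvalidSignature on tampering.
        public_key.verify(
            bytes.fromhex(envelope["signature"]),
            envelope["body"].encode(),
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())


    sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    msg = sign_envelope(sender_key, "compute-01", {"method": "report_state"})
    verify_envelope(sender_key.public_key(), msg)

No round trip with the peer is needed; the receiver only needs the sender's public key, which can come from a local cache.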
> 
> For broadcast messages you have the problem of what key to encrypt with.
> So it is probably simplest to just not encrypt broadcast messages.
> 
But we can sign in all instances. This is one of the reasons why I felt signing-only would be a good first step.
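Something like the following rule, building on the sign_envelope() helper sketched above (again just illustrative; encrypt_for() shows one possible hybrid wrapping, not a settled design):

    import base64
    import json

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding


    def encrypt_for(recipient_public_key, envelope):
        # A fresh symmetric key encrypts the envelope; only that small key is
        # RSA-encrypted for the recipient, so no session has to be negotiated.
        data_key = Fernet.generate_key()
        wrapped = recipient_public_key.encrypt(
            data_key,
            padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
        ciphertext = Fernet(data_key).encrypt(json.dumps(envelope).encode())
        return {"wrapped_key": base64.b64encode(wrapped).decode(),
                "ciphertext": ciphertext.decode()}


    def prepare_message(sender_key, sender, payload, recipient_public_key=None):
        # Every message is signed, whatever its destination.
        envelope = sign_envelope(sender_key, sender, payload)
        if recipient_public_key is None:
            # Fanout/broadcast: no single key to encrypt with, so the signed
            # envelope goes out in the clear.
            return envelope
        # Point-to-point: we know the one recipient, so encrypt as well.
        return encrypt_for(recipient_public_key, envelope)
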
 
> I would be interested to know the kind of data volumes / patterns
> seen for the RPC service in the real world, before ruling out the
> use of x509 certs. A supposed 1.5kb overhead per call needs to be
> put in context of the overall traffic & capacity before we judge
> that it is unacceptable overhead. Just sending an x509 fingerprint
> sounds like an interesting idea, but I'm not sure what security implications
> that would have. One of the things I try to remember is that I'm
> not Bruce Schneier, so it is best to aim for tried-and-tested design
> patterns, rather than try to invent some special/clever new security
> system ;-P
> 
> 


Sending certificates in messages and keyserver designs should both be well trusted, conceptually. The trust is in the signature on the certificate, not in the transport and storage mechanisms.

PGP/GPG tends to use, or at least tolerate, keyservers. SSL sends certificates over the wire.
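
To make the fingerprint idea from the quoted paragraph concrete, a rough sketch (fetch_cert_pem() is a hypothetical keyserver lookup, and an RSA CA key is assumed): the message carries only the certificate's SHA-256 fingerprint, the full certificate is resolved locally or fetched once, and trust still comes from checking the CA's signature, not from where the bytes arrived.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    _cert_store = {}  # SHA-256 fingerprint (hex) -> x509.Certificate


    def add_certificate(pem_data, ca_public_key):
        cert = x509.load_pem_x509_certificate(pem_data)
        # Verify the CA's signature over the certificate before trusting it
        # (assumes an RSA CA key).
        ca_public_key.verify(cert.signature,
                             cert.tbs_certificate_bytes,
                             padding.PKCS1v15(),
                             cert.signature_hash_algorithm)
        fingerprint = cert.fingerprint(hashes.SHA256()).hex()
        _cert_store[fingerprint] = cert
        return fingerprint


    def resolve_certificate(fingerprint, ca_public_key):
        cert = _cert_store.get(fingerprint)
        if cert is None:
            # Cache miss: fetch the full certificate from a keyserver
            # (hypothetical helper) and verify it like any other copy.
            add_certificate(fetch_cert_pem(fingerprint), ca_public_key)
            cert = _cert_store[fingerprint]
        return cert
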
> 
> Personally I'm not so worried about the network traffic volumes.
> The more important concerns to me are around the overall resilience
> & computational scalability of the system. In particular making sure
> you don't have to do something crazy like check keystone (or another
> centralized auth service) to validate every single RPC call received.
> Also trying to avoid putting any significant administrative setup or
> ongoing burden on deploying openstack; retaining the flexibility to
> quickly re-configure the location of nova services, and the speed
> with which you can react to a hostile attack and isolate compromised
> services.
> 
> 


In Nova, we'd be able to cache many of these keys locally. Most of our upstream lookups will come from the scheduler verifying replies from compute nodes, since in a large enough deployment an individual scheduler will not speak to a specific compute node often enough to keep its key cached.

However, we shouldn't necessarily assume this to be the general pattern, as it is not a general assumption of the RPC libraries.
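
A rough sketch of such a cache (fetch_public_key() is a hypothetical keyserver lookup; the TTL and invalidate() hook are just illustrative). The point is that a verifier goes upstream once per peer, not once per call, and a compromised host's key can be dropped immediately or ages out on its own:

    import time


    class PublicKeyCache(object):
        def __init__(self, fetch_public_key, ttl=3600):
            self._fetch = fetch_public_key
            self._ttl = ttl
            self._cache = {}  # hostname -> (public_key, expires_at)

        def get(self, hostname):
            entry = self._cache.get(hostname)
            if entry is not None and entry[1] > time.time():
                return entry[0]
            # Miss or expired: go upstream once, then serve locally.
            key = self._fetch(hostname)
            self._cache[hostname] = (key, time.time() + self._ttl)
            return key

        def invalidate(self, hostname):
            # Drop a key immediately, e.g. when a host is known compromised.
            self._cache.pop(hostname, None)
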

Regards, 
Eric Windisch