[openstack-dev] Swift blueprint encrypted-objects
Bhandaru, Malini K
malini.k.bhandaru at intel.com
Wed Jan 23 08:53:50 UTC 2013
Thank you, Caitlin, for your detailed comments. I shall try to address them below ...
Regards
Malini
-----Original Message-----
From: Caitlin Bestler [mailto:Caitlin.Bestler at nexenta.com]
Sent: Tuesday, January 22, 2013 2:43 PM
To: OpenStack Development Mailing List
Subject: Re: [openstack-dev] Swift blueprint encrypted-objects
> Object Encryption: Extending Swift
This is the wrong scope for server-based encryption. Swift and Cinder should use the same encryption solution.
Malini: Both Swift and Cinder seek to encrypt data, and to that end both will be using keys. Where the keys are stored and how they are accessed is a common problem. Bruce Benjamin and I have been communicating in this regard. Bruce also suggested supporting the OASIS open key management protocol. That said, the encryption algorithm used for block storage is typically XTS, to make individual sector retrieval independent of the other sectors associated with the original data stream. Object encryption would be AES with some variant of cipher block chaining.
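To make the contrast concrete, here is a minimal sketch of XTS-style sector encryption, assuming the Python "cryptography" library; the names are illustrative, not Cinder code. Each sector is encrypted independently, with its sector number as the tweak, so one sector can be read or rewritten without touching its neighbours:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    xts_key = os.urandom(64)  # AES-256-XTS takes a double-length (512-bit) key

    def encrypt_sector(sector_no: int, sector: bytes) -> bytes:
        # Illustrative sketch: the per-sector tweak replaces chaining.
        tweak = sector_no.to_bytes(16, "little")
        encryptor = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
        return encryptor.update(sector) + encryptor.finalize()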
I favor the unit of encryption being an object for Swift and a volume for Cinder. A VM user mounts a volume; a Swift user gets and puts objects.
One could also dismiss object encryption as a solved problem by storing all Swift objects on drives that are encrypted; after all, Swift is built on file systems. The Swift background tasks would then work with the objects just as they do today, while under the covers everything is encrypted.
Perhaps that is a jumping-off point for a thought experiment; an implementation expert could speak to typical usage and the performance pros and cons.
Encrypting at the granularity of an object would leave all current Swift background processes for replication and recovery intact. The only overhead at the storage level would be the encryption-related metadata that needs to be stored.
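As a rough sketch of per-object encryption (again assuming the Python "cryptography" library; the metadata names are my own, not the blueprint's), the ciphertext plus a small metadata record is all the object server would need to store:

    import os
    from cryptography.hazmat.primitives import padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def encrypt_object(plaintext: bytes, key: bytes, key_id: str):
        iv = os.urandom(16)                      # fresh IV for each object
        padder = padding.PKCS7(128).padder()     # CBC needs block-aligned input
        padded = padder.update(plaintext) + padder.finalize()
        encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ciphertext = encryptor.update(padded) + encryptor.finalize()
        # The metadata travels with the object; the key itself never does.
        metadata = {"encrypted": "true", "key-id": key_id,
                    "algorithm": "AES-256-CBC", "iv": iv.hex()}
        return ciphertext, metadata

Replication and recovery would then simply move opaque ciphertext plus metadata, exactly as they move plaintext objects today.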
> Protection of data at rest: data encrypted and keys held in a separate location. Stealing the data disk still leaves the data protected.
> Keys will also be encrypted, using a Master-key. One thing to keep safe as opposed to multiple keys. A notion similar to a safe deposit box requiring a bank key and a customer key to open.
You actually want slightly more than protection against the removable disks being stolen.
You also want to protect against the entire server being stolen.
Malini: True. I do not comprehend your lock-box analogy completely.
A Master Key is crucial here. You want a key held on another server that will not be available from another network.
But you also want the Master Key to be specific to the server lockbox holding the keys, so that the master key cannot be intercepted while being sent to another server.
Malini: We are in agreement on the desirability of a master key. In my proposal the Swift/Cinder encryption component holds the master key, while the encrypted keys are held on a separate server; so the lockbox and the master key are on separate physical machines.
> Key Manager will not maintain mapping between keys to objects.
Why is there a key manager at all?
Malini: The key manager will hold a dictionary of <key-id, key-string-encrypted-with-master-key> pairs. The Swift object store would hold <object-id, encrypted-object, metadata: encrypted, key-id, algorithm ...>. Thus if anyone captured the key manager, they would have a bunch of encrypted key strings and key-ids, but neither the object-ids those keys open nor the decrypted key strings.
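A toy sketch of that separation (hypothetical names; assuming the RFC 3394 key wrap from the Python "cryptography" library): the key manager's dictionary holds only wrapped keys, and nothing that maps a key to an object:

    import os
    import uuid
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    master_key = os.urandom(32)  # held by the Swift/Cinder encryption component
    key_manager = {}             # key-id -> key wrapped with the master key

    def create_object_key() -> str:
        key_id = str(uuid.uuid4())
        object_key = os.urandom(32)
        key_manager[key_id] = aes_key_wrap(master_key, object_key)
        return key_id

    def fetch_object_key(key_id: str) -> bytes:
        return aes_key_unwrap(master_key, key_manager[key_id])

The object-id to key-id mapping lives only in Swift's object metadata, so neither store alone is enough to decrypt anything.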
Keys should be generated and stored within server-specific lockboxes.
Malini: If we take the per-object route, the keys would be many in number; per domain or per project, fewer. In the design described, a single master key would have to be shared among the object servers, i.e., those that ensure load balancing and actually serve as primaries for the data.
Operationally, I cannot imagine a "cloud" solution that requires a human to re-enter a pass-phrase every time a server is relaunched. Anything that OpenStack compute can manage on its own is not providing enough encryption for data at rest. Therefore migration of lockboxes needs to be very rare.
Malini: Each server, on reboot or OpenStack process init, could contact the key manager (Keystone and the key manager would need to be up, or the server would retry after some wait) and load its master key; this applies in particular to the Cinder and Swift nodes offering encryption. I am guessing we may want this in Ceilometer too, to encrypt logs etc.
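A minimal sketch of that boot-time behaviour, with a hypothetical key-manager endpoint and retry policy (none of these names come from the blueprint):

    import time
    import requests

    KEY_MANAGER_URL = "https://key-manager.example.com/v1/master-key"  # hypothetical

    def load_master_key(auth_token: str, retries: int = 12, wait: float = 5.0) -> bytes:
        for _ in range(retries):
            try:
                resp = requests.get(KEY_MANAGER_URL,
                                    headers={"X-Auth-Token": auth_token},
                                    timeout=10)
                resp.raise_for_status()
                return resp.content      # cache in memory only, never on disk
            except requests.RequestException:
                time.sleep(wait)         # Keystone or key manager not up yet
        raise RuntimeError("key manager unreachable; cannot serve encrypted data")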
The role of a central "key manager" should be to authorize specific lockbox-to-lockbox transfers of tagged encryption keys, rather than to store the keys itself. Central storage of keys just makes the entire system vulnerable to attacks on the key manager and on the communications with the key manager.
> Authorization and access control support for key manager to protect from unauthorized use.
The real question is how we enable an entire set of machines to reboot without manually re-authorizing each one by hand.
They will never be re-authorized manually; any attempt to require it will result in automated scripting of consoles, which will become the weak link in the entire system.
> Protection from denial of service, either from malicious activity or natural disasters by way of key replication (akin to object replication and recovery in Swift).
Actually this is more of an issue for Cinder than it is for Swift. Given even simple local replication of a lockbox, Swift will recover from the loss of a lockbox the same way it would recover from the loss of the entire server. If you can make the loss of a lockbox relatively rare, then it may be preferable for Swift to *never* transfer keys. Never transferring keys will provide better security.
Malini: In my design the key manager would itself use a Swift system, to leverage all the benefits of Swift. But if the lockbox (my understanding of your suggestion) is a smaller collection of keys, storing an encrypted version of itself alongside the other objects in the cluster would work.
On the other hand, Cinder needs to copy snapshots and clone new volumes from those copied snapshots. That favors allowing keys to be explicitly transferred from lockbox A to lockbox B. Since I think Cinder and Swift should use the same encryption solution this would obviously be available for Swift as well.
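A sketch of what such an explicit transfer could look like (my reading of Caitlin's lockbox idea, with hypothetical names): the key manager's only role is to mint a one-time transaction key for an authorized pair of lockboxes, and the data key crosses the wire only wrapped under it:

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

    class Lockbox:
        def __init__(self):
            self._keys = {}  # key-id -> plaintext key; never leaves the lockbox

        def store(self, key_id: str, key: bytes):
            self._keys[key_id] = key

        def export_wrapped(self, key_id: str, txn_key: bytes) -> bytes:
            # The key leaves only under the transaction key, never in the clear.
            return aes_key_wrap(txn_key, self._keys[key_id])

        def accept_wrapped(self, key_id: str, wrapped: bytes, txn_key: bytes):
            self._keys[key_id] = aes_key_unwrap(txn_key, wrapped)

    # e.g. cloning a Cinder snapshot onto another node:
    txn_key = os.urandom(32)  # minted by the key manager for this one transfer
    lockbox_a, lockbox_b = Lockbox(), Lockbox()
    lockbox_a.store("snap-key-1", os.urandom(32))
    lockbox_b.accept_wrapped("snap-key-1",
                             lockbox_a.export_wrapped("snap-key-1", txn_key),
                             txn_key)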
> Use Cases
> Key Provider:
> * User (would rather not delegate trust, plans to use the same key for
> each object ..)
> * Auto-generation (either by the object storage system or key manager)
The only reason for a user to generate a key is if they are *not* sharing it with OpenStack.
That is called end-to-end encryption, and ultimately both need to be supported. What we are currently describing is Service Provider encryption.
Malini: True end-to-end encryption is user controlled; Swift/OpenStack are agnostic of the encryption. The end user passing along an encryption key is a mid-way solution, and the server dealing with keys and all things encryption is server-side encryption, the thrust of this document.
Every time a key is communicated it is put at risk. Therefore the best keys are generated in server-specific lockboxes, *never* exposed in plain text, and only transferred to other lockboxes under transaction-specific encryption enabled by the key manager.
Malini: Communication between services is typically over HTTPS or SSL; that is the protection when passing encrypted data between Swift nodes internally. The design proposes passing only the encrypted keys between the Swift node and the key manager.
>Key Scope:
>* Per object
>* Per project (within a domain)
>* Per domain
What benefit is there to per-object keys when the entire set of keys is held in the same lockbox?
Malini: Back to my point that keys and objects live in separate places.
Similarly, why have tenant-specific keys when the Service Provider holds them?
Malini: To protect tenant A's data from tenant B. It limits the amount of data exposed should a tenant key be compromised.
Also, tenants may simply like being assured that they have their own key, even if only the service provider is holding it.
Perhaps they even want to hold their own key; maybe we do not want to let them do so, because in case of exposure there could be a blame game.
The most natural granularity is based upon replication of still-encrypted data.
For Cinder that suggests a set of volumes/snapshots/clones; for Swift, a Swift partition.
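For concreteness, a toy sketch (the naming scheme is purely illustrative) of how a key-id lookup might reflect the three scopes listed above; the coarser the scope, the fewer keys to manage, and the more data a single compromised key exposes:

    def key_id_for(scope: str, domain: str, project: str = None,
                   object_id: str = None) -> str:
        if scope == "domain":
            return "dom:%s" % domain
        if scope == "project":
            return "dom:%s:proj:%s" % (domain, project)
        return "obj:%s" % object_id  # per-object: finest grain, most keys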
> Key-Storage:
> * End-User
> * Key Manager
End-user storage of keys is a separate project. But I think the OS vendors have to take the lead on end-to-end encryption solutions. At the minimum it is a different OpenStack project.
A central key manager creates a single point of attack. Using multiple lockboxes is more secure, and simpler.
Malini: That is no different from Swift or Cinder being a single point of attack. With the key manager replicated just like Swift (it is itself a Swift instance holding encrypted keys), it is no worse a failure point than Swift.