[openstack-dev] [barbican] Tech approach blueprint updated
Jarret Raim
jarret.raim at RACKSPACE.COM
Fri May 3 17:50:43 UTC 2013
On 5/3/13 10:34 AM, "John Wood" <john.wood at RACKSPACE.COM> wrote:
>
>[
>3) "expires" and not serving back a key that has expired ..
> If there are objects in the system encrypted with the key that has
>become expired, are we implying this is a way
> to burn/bury the object? (assuming we do not care to retrieve the
>space occupied by said object).
> If not, we need to still serve the key for decryption but no longer
>for encryption.
> How does one distinguish the two use cases via a simple "get"
>request?
> May need to "trust" the application that makes calls into key
>manager specify purpose in the get request.
>]
>
>Hmm...that makes sense. Maybe we could instead consider an approach where
>a notification is sent to a contact list when secrets are expired?
>
>[
>4) I am a little concerned about supporting a "put" on a key ..
> For instance if one changes the secret key string itself, and it was
>in use, all the objects encrypted with the original
> key would no longer be accessible. A put that changed key-string or
>expiration timestamp would need to trigger a work-flow that retrieved all
>objects encrypted with said key and decrypt, then encrypt with new key.
>Time and Compute resources.
>]
>
>This might be another 'trust the application' situation, but could
>certainly leave a customer in a pickle if they didn't do this correctly.
>Immutable keys would be safer from that perspective...any other community
>thoughts here?
The people using our APIs will be devs. I tend to fall on the side of
allowing the app the power and expecting the user to make the correct
choices. Having a key expire and orphan data is a valid use case as well.
My thought on the expires feature is that once expired, the key is not
served at all, for encryption or decryption. The key is dead. We may want
a flag on whether we allow recovery of expired keys or just delete them
as soon as they expire.
We might be able to mitigate this by allowing a per-secret setting on
whether we should store a history of key changes. That trades off some
security for safety.
>[
>7) Based on the input I got at the key manager design session, there was
>a request for the key manager to create symmetric keys.
> A simple 256/512/1024 bit key should not take long to generate (but
>PKI pairs with a certificate would be a different story).
> Would be nice to support a "create" key.
> https:// ../get/{tenant_id}/secrets/new or basically one with no uuid
> Default expiration time, format ..
> Or more parameters in the get url
>]
>
>A POST to secrets with the 'plain-text' omitted could do a similar thing
>(i.e. have Barbican create the secret). However, this behavior would be
>inconsistent with other secret types (such as SSL certs), that have
>long-running asynchronous work flows to generate them. The orders ->
>create-secret flow, though a bit clunky for quick-gen secrets, would at
>least be consistent. Yet another thing to query the community about.
Barbican will be capable of creating a wide variety of keying material.
The order abstraction allows us to treat all create requests the same,
whether or not they require a longer processing workflow. I think it is
also cleaner to not clutter the secret schema with a bunch of provisioning
information, hence the concept of storing all provisioning information in
an Order schema that is linked to the secret.
So the current plan is that all key creations would hit the /orders
resource to submit an order. They would then poll the order resource until
the status is COMPLETE. At that point, the order resource will have a URI
to a valid secret resource that can be retrieved.
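As a sketch, that submit-then-poll flow could look like the following.
The callables and field names (`status`, `secret_ref`, the COMPLETE and
ERROR states) are assumptions for illustration, not a settled Barbican
contract:

```python
import time

def create_secret_via_order(submit_order, get_order, get_secret,
                            poll_interval=0.0, timeout=30.0):
    """Submit an order, poll until COMPLETE, then fetch the secret.

    submit_order()         -> order reference
    get_order(order_ref)   -> dict with 'status' and, when finished,
                              a 'secret_ref' pointing at the secret
    get_secret(secret_ref) -> the secret representation
    """
    order_ref = submit_order()
    deadline = time.time() + timeout
    while time.time() < deadline:
        order = get_order(order_ref)
        if order['status'] == 'COMPLETE':
            # The finished order links to a valid secret resource.
            return get_secret(order['secret_ref'])
        if order['status'] == 'ERROR':
            raise RuntimeError('order %s failed' % order_ref)
        time.sleep(poll_interval)
    raise RuntimeError('order %s did not complete in %.0fs'
                       % (order_ref, timeout))
```

In practice the three callables would wrap HTTP calls against the /orders
resource and the secret URI it eventually returns.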
If people don't like this idea - speak up now :)
>[
>10) kek-metadata I guess would hold any encryption algorithm and details
>that the encryption plug-in engine uses.
> Do we need to /want to support more than one plug-in engine? Given
>a plain text string, a master key and an encryption algorithm, would any
>plugin generate the same cipher text? (then a common format for the
>kek-meta data) ..
>A use case would be good for illustration.
>]
>
>The intent of the kek-metadata is to store whatever info is needed to be
>able to decrypt the data later. Barbican would be agnostic to this data
>and would simply store it. Only the plugin implementation would know how
>to utilize the metadata on the decrypt once Barbican hands this metadata
>and the encrypted cypher text back to the plugin to decrypt it.
>
>If we intend to support more than one plugin type for a given type of
>secret, then we would probably also need to store info about what plugin
>was used to encrypt/decrypt the data.
We might want to just go ahead and add that data to the schema. I could
see a use case of migrating from one backend to another at some point and
knowing which plugin is in use for a key seems like useful information.
We'll just need to add a method to the encryption abstraction layer where
the plugin can return a string indicating its identity. This might also be
how we support upgrades, e.g. from Dogtag v10 to v11.
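One way to sketch that identity hook on the encryption abstraction layer
(the class and method names here are hypothetical, not Barbican's actual
plugin interface):

```python
import abc

class CryptoPlugin(abc.ABC):
    """Hypothetical encryption-plugin abstraction."""

    @abc.abstractmethod
    def identity(self):
        """Return a string naming this plugin and its version.

        Stored alongside the kek-metadata so that, at decrypt time or
        during a backend migration, Barbican knows which plugin (and
        which version of it) produced the cypher text.
        """

    @abc.abstractmethod
    def encrypt(self, plain_text):
        """Return (cypher_text, kek_metadata)."""

    @abc.abstractmethod
    def decrypt(self, cypher_text, kek_metadata):
        """Reverse encrypt() using the stored kek-metadata."""


class ReversePlugin(CryptoPlugin):
    """Toy plugin used only to show the identity string in action."""

    def identity(self):
        return 'reverse/v1'

    def encrypt(self, plain_text):
        return plain_text[::-1], {'scheme': 'reverse'}

    def decrypt(self, cypher_text, kek_metadata):
        assert kek_metadata['scheme'] == 'reverse'
        return cypher_text[::-1]
```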
>[
>11) Would the roles referred to here be the same as that in keystone?
>]
>
>They might be, that needs to be fleshed out more over time.
The roles here are specifically speaking to what relationship the tenant
has to the key. For example, if a customer hits the API and creates a key
directly, their tenant might be listed as CREATOR while another tenant
might have a READER role. We'll use this data to perform authZ checks at
some point in the future.
I think these roles will be specific to us and not come from Keystone.
However, if we could figure out how to use keystone roles / policies to do
this, I'd be happy with that approach.
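A minimal sketch of what such a per-secret role check might look like.
The CREATOR and READER names come from the example above; the rest is
assumed for illustration:

```python
CREATOR = 'CREATOR'
READER = 'READER'

def can_read(secret_roles, tenant_id):
    """Both the creating tenant and any reader may retrieve the secret."""
    return secret_roles.get(tenant_id) in (CREATOR, READER)

def can_modify(secret_roles, tenant_id):
    """Only the creating tenant may change or delete the secret."""
    return secret_roles.get(tenant_id) == CREATOR
```

The same checks could later be delegated to Keystone policies if that
turns out to be workable.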
Jarret