[openstack-dev] [Keystone][Fernet] HA SQL backend for Fernet keys

Dolph Mathews dolph.mathews at gmail.com
Mon Jul 27 18:48:12 UTC 2015


On Mon, Jul 27, 2015 at 1:31 PM, Clint Byrum <clint at fewbar.com> wrote:

> Excerpts from Alexander Makarov's message of 2015-07-27 10:01:34 -0700:
> > Greetings!
> >
> > I'd like to discuss the pros and cons of having Fernet encryption keys
> > stored in a database backend.
> > The idea itself emerged during a discussion about synchronizing rotated
> > keys in an HA environment.
> > Currently, Fernet keys are stored on the filesystem, which has some
> > availability issues in an unstable cluster.
> > OTOH, making SQL highly available is considered easier than doing the
> > same for a filesystem.
> >
>
> I don't think HA is the root of the problem here. The problem is
> synchronization. If I have 3 keystone servers (n+1) and I rotate keys on
> them, I must very carefully restart them all at exactly the right time to
> make sure one of them doesn't issue a token that cannot be validated
> on another. This is a very real possibility because the validation
> request will not come from the user, but from the service, so it's not
> like we can use simple persistence rules. One would need a layer 7
> capable load balancer that can find the token ID and make sure it goes
> back to the server that issued it.
>

This is not true (or if it is, I'd love to see a bug report). keystone-manage
fernet_rotate uses a three-phase rotation strategy (staged -> primary ->
secondary) that allows you to distribute a staged key (used only for token
validation) throughout your cluster before it becomes the primary key (used
for token creation and validation) anywhere. Secondary keys are only used
for token validation.
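
For reference, the key repository is just a directory of integer-named key
files, so the roles above map onto filenames roughly like this (a sketch
based on my understanding of keystone's convention that 0 is the staged key
and the highest index is the primary; not keystone's actual code):

    import os

    def classify_keys(key_repository='/etc/keystone/fernet-keys'):
        """Map key files in the repository to their rotation roles."""
        indexes = sorted(int(name) for name in os.listdir(key_repository))
        primary = max(indexes)
        roles = {}
        for i in indexes:
            if i == primary:
                roles[i] = 'primary (token creation and validation)'
            elif i == 0:
                roles[i] = 'staged (validation only; next primary)'
            else:
                roles[i] = 'secondary (validation only)'
        return roles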

All you have to do is atomically replace the fernet key directory with a
new key set.

You also don't have to restart keystone for it to pick up new keys dropped
onto the filesystem beneath it.
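
For what it's worth, here is one way to get that atomic replacement (a
minimal sketch, assuming the repository path is a symlink you can repoint,
which is not keystone's default layout):

    import os

    def swap_key_directory(new_keys_dir, live_link='/etc/keystone/fernet-keys'):
        """Atomically point the live key repository at a freshly synced key set.

        Assumes live_link is a symlink (hypothetical layout); rename(2) of a
        symlink over an existing symlink is atomic on POSIX filesystems, so
        readers see either the old key set or the new one, never a mix.
        """
        tmp_link = live_link + '.tmp'
        if os.path.lexists(tmp_link):
            os.unlink(tmp_link)
        os.symlink(new_keys_dir, tmp_link)
        os.rename(tmp_link, live_link)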


>
> A database will at least ensure that it is updated in one place,
> atomically, assuming each server issues a query to find the latest
> key at every key validation request. That would be a very cheap query,
> but not free. A cache would be fine, with the cache being invalidated
> on any failed validation, but then that opens the service up to DoS'ing
> the database simply by throwing tons of invalid tokens at it.
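
In code terms, the lookup described above would be roughly this (a sketch
only; the fernet_keys table, its columns, and the connection string are
hypothetical, not anything keystone actually ships):

    import sqlalchemy

    # Hypothetical schema: one row per key, newest first.
    engine = sqlalchemy.create_engine('mysql+pymysql://keystone:secret@db/keystone')
    _cached_keys = None

    def current_keys(invalidate=False):
        """Return the key set, hitting the database only on a cache miss
        or after a failed validation invalidates the cache."""
        global _cached_keys
        if invalidate or _cached_keys is None:
            with engine.connect() as conn:
                rows = conn.execute(sqlalchemy.text(
                    'SELECT key FROM fernet_keys ORDER BY created_at DESC'))
                _cached_keys = [row[0] for row in rows]
        return _cached_keys

Every failed validation that calls current_keys(invalidate=True) is exactly
the surface that lets a flood of invalid tokens hammer the database.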
>
> So an alternative approach is to try to reload the filesystem based key
> repository whenever a validation fails. This is quite a bit cheaper than a
> SQL query, so the DoS would have to be a full-capacity DoS (overwhelming
> all the nodes, not just the database) which you can never prevent. And
> with that, you can simply sync out new keys at will, and restart just
> one of the keystones, whenever you are confident the whole repository is
> synchronized. This is also quite a bit simpler, as one basically needs
> only to add a single piece of code that issues load_keys and retries
> inside validation.
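
And roughly what that single piece of code might look like (a sketch; this
load_keys is a stand-in for keystone's own key loader, and none of it is the
actual keystone implementation):

    import os
    from cryptography.fernet import Fernet, MultiFernet, InvalidToken

    KEY_REPOSITORY = '/etc/keystone/fernet-keys'

    def load_keys():
        """Re-read every key currently on disk, newest index first."""
        names = sorted(os.listdir(KEY_REPOSITORY), key=int, reverse=True)
        return MultiFernet([
            Fernet(open(os.path.join(KEY_REPOSITORY, n)).read().strip())
            for n in names])

    _crypto = load_keys()

    def validate_token(token):
        global _crypto
        try:
            return _crypto.decrypt(token)
        except InvalidToken:
            _crypto = load_keys()          # a new key set may have landed on disk
            return _crypto.decrypt(token)  # retry once; still-bad tokens raise again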

