[dev][keystone] Key loading performance during token validation
Lance Bragstad
lbragstad at gmail.com
Wed Feb 13 21:00:41 UTC 2019
Both fernet tokens and JSON Web Tokens (still under review [0]) require
keys to either encrypt or sign the token payload. These keys are kept on
disk by default and are treated like configuration. During the
validation process, they're loaded from disk, stored as a list, and
iterated over until the correct key decrypts the ciphertext or validates
the signature, or the list is exhausted [1][2].
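For anyone who wants a mental model of that loop, it's roughly the same
pattern MultiFernet uses in [2]. Below is a minimal sketch; the function
and variable names are illustrative, not keystone's actual internals:

    from cryptography.fernet import Fernet, InvalidToken

    def validate(token, keys):
        # keys: every key loaded from the key repository on disk, in the
        # same order keystone would load them
        for key in keys:
            try:
                # the first key that successfully decrypts the payload wins
                return Fernet(key).decrypt(token)
            except InvalidToken:
                continue
        # no key in the repository could validate the token
        raise InvalidToken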
Last week XiYuan raised some concerns about loading all public keys from
disk every time we validate a token [3]. To clarify, this would be
applicable to both fernet keys and asymmetric key pairs used for JWT. I
spent a couple of days late last week trying to recreate the performance
bottleneck.
There were two obvious approaches.
1. Watch for writable changes to key repositories on disk
I used a third-party library for this called watchdog [4], but the
inherent downside is that it is specific to file-based key repositories.
For example, this might not work with a proper key manager, which has
been proposed in the past [5].
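If it helps to picture it, the prototype was built around something like
the following (the handler and repository names here are illustrative,
not exactly what the patch does):

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    class KeyRepositoryHandler(FileSystemEventHandler):
        def __init__(self, reload_keys):
            self.reload_keys = reload_keys

        def on_any_event(self, event):
            # any create/modify/delete in the repository invalidates the
            # in-memory key list, so reload it
            self.reload_keys()

    def watch_key_repository(path, reload_keys):
        observer = Observer()
        observer.schedule(KeyRepositoryHandler(reload_keys), path,
                          recursive=False)
        observer.start()
        return observer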
2. Encode a hash of the key used to create the token inside the token
payload
This would give any keystone server validating the token the ability to
preemptively load the correct key for validation instead of iterating
over a list of all possible keys. There was some hesitation around this
approach because the hash would be visible to anyone with access to the
token (it would be outside of ciphertext in the case of fernet). While
hashes are one-way, it would allow someone to "collect" tokens they know
were encrypted with the same symmetric key or signed with the same
asymmetric private key.
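To make that concrete, the idea is to derive a short identifier from the
key at issuance time and use it as a direct lookup at validation time.
A rough sketch follows; the hash length and names are illustrative, and
the real change is in [6]:

    import hashlib

    def key_id(key_bytes):
        # short, stable identifier embedded in the token payload; note it
        # sits outside the ciphertext/signature, which is the concern above
        return hashlib.sha256(key_bytes).hexdigest()[:16]

    def key_for_token(token_key_id, key_repository):
        # key_repository: dict of key_id -> loaded key, built once when the
        # repository is read, instead of iterating every key per validation
        return key_repository.get(token_key_id)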
Security concerns aside, I did attempt to write up the second approach
[6]. I was able to get preemptive loading to work, but I didn't notice a
significant performance improvement. For clarification, I was using a single
keystone server with one private key for signing and 100 public keys to
simulate a deployment of 100 API servers that all need to validate
tokens issued by each other. At a certain point, I wondered if the
loading of keys was really the issue, or if it was because we were
iterating over an entire set every time we validate a token. It's also
important to note that I had to turn off caching to fully test this.
Keystone enables caching by default, which short-circuits the entire key
loading/token provider code path after the token is issued.
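For reference, turning caching off was roughly a matter of flipping the
relevant keystone.conf options, along these lines (the exact cache
backend settings in a real deployment will differ):

    [cache]
    enabled = false

    [token]
    caching = false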
My question is whether anyone has additional feedback on how to recreate
performance issues specifically tied to loading key files from disk,
since I wasn't particularly successful in measuring a difference between
the preemptive approach and what we already do (loading and iterating
over every key on each validation).
[0] https://review.openstack.org/#/c/614549/
[1]
https://git.openstack.org/cgit/openstack/keystone/tree/keystone/token/token_formatters.py?id=053f908853481c00ab28f562a7623f121a7703af#n69
[2]
https://github.com/pyca/cryptography/blob/master/src/cryptography/fernet.py#L165-L171
[3]
https://review.openstack.org/#/c/614549/13/keystone/token/providers/jws/core.py,unified@103
[4] https://pythonhosted.org/watchdog/
[5] https://review.openstack.org/#/c/363065/
[6] https://review.openstack.org/#/c/636151/