[Openstack-operators] Keystone cache strategies
Matt Fischer
matt at mattfischer.com
Wed Jun 22 00:58:22 UTC 2016
Have you set up token caching at the service level, meaning a memcached
cluster that Glance, Nova, etc. talk to directly? That will really cut
down the traffic.
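A minimal sketch of what service-level token caching looks like in each
service's config file (nova.conf, glance-api.conf, and so on), assuming
keystonemiddleware's `[keystone_authtoken]` options; the hostnames and the
cache TTL value here are illustrative, not from the thread:

```ini
[keystone_authtoken]
# Cache validated tokens in memcached so each service does not have to
# round-trip to Keystone on every API request.
memcached_servers = memcache1:11211,memcache2:11211
# Seconds to keep a validated token in the cache (example value).
token_cache_time = 300
```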
On Jun 21, 2016 5:55 PM, "Sam Morrison" <sorrison at gmail.com> wrote:
>
> On 22 Jun 2016, at 9:42 AM, Matt Fischer <matt at mattfischer.com> wrote:
>
> On Tue, Jun 21, 2016 at 4:21 PM, Sam Morrison <sorrison at gmail.com> wrote:
>
>>
>> On 22 Jun 2016, at 1:45 AM, Matt Fischer <matt at mattfischer.com> wrote:
>>
>> I don't have a solution for you, but I will concur that adding
>> revocations kills performance especially as that tree grows. I'm curious
>> what you guys are doing revocations on, anything other than logging out of
>> Horizon?
>>
>>
>> Is there a way to disable revocations?
>>
>> Sam
>>
>
>
> I don't think so. There is no no-op driver for it that I can see. I've not
> tried it, but maybe setting the expiration_buffer to a negative value would
> cause them not to be retained?
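>
> For reference, a sketch of where that option lives in keystone.conf
> (section name assumed from recent releases; verify against your deployed
> version, and note that the negative-value idea above is untested
> speculation):
>
> ```ini
> [revoke]
> # Seconds to retain expired revocation events beyond token expiry.
> # The default is 1800 (the 30-minute buffer mentioned above).
> expiration_buffer = 1800
> ```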
>
> They expire at the rate your tokens expire (plus a buffer of 30 minutes by
> default), and under typical operation they are not generated very often, so
> with, say, 10-20 of them in the tree it's not too bad. It gets much worse
> when you have, say, 1000 of them. In our cloud, anyway, we just don't
> generate many. The only things that generate them are Horizon log-outs and
> test suites that add and delete users and groups. If I knew we were
> generating any more, I'd probably set up an Icinga alarm for them. When the
> table gets large after multiple test runs, or when we want to do perf
> tests, we end up truncating the table in the DB. That is clearly not a
> security best practice, though.
>
>
>
> Our token TTLs are very low, so I'd be willing to remove revocation. If
> you take Glance images out, the bulk of the data going through our load
> balancers on API requests is requests to the revocation URL.
>
>
>