[Openstack-operators] [Openstack-Operators] Keystone cache strategies

Jose Castro Leon jose.castro.leon at cern.ch
Thu Jun 23 09:36:08 UTC 2016


We only have the cache configured at the Keystone level; I am not sure that will help, since validation will still require retrieving the revocation tree from the cache to walk it.

Modifying the cache_time values does not reduce the number of requests hitting the cache; it just increases the load on the database behind it.

I am looking into other backend possibilities to reduce the "hot key" issue, and was wondering what you are all using. From the replies, it seems everyone is using memcache.
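For reference, Keystone-level caching of this kind is typically wired up through oslo.cache in keystone.conf. The fragment below is an illustrative sketch, not our actual configuration; the memcached hostnames are placeholders:

```ini
[cache]
# enable the dogpile.cache-based caching layer in Keystone
enabled = true
# pooled memcached backend from oslo.cache
backend = oslo_cache.memcache_pool
# placeholder hosts; replace with your memcached cluster
memcache_servers = memcached1:11211,memcached2:11211
```

With a single shared memcached cluster, frequently validated entries such as the revocation tree can become a "hot key" concentrated on one server, which is the issue described above.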

Cheers,
Jose

Jose Castro Leon
CERN IT-CM-RPS
tel: +41.22.76.74272
mob: +41.75.41.19222
fax: +41.22.76.67955
Office: 31-1-026, CH-1211 Geneve 23
email: jose.castro.leon at cern.ch

From: tadowguy at gmail.com [mailto:tadowguy at gmail.com] On Behalf Of Matt Fischer
Sent: Wednesday, June 22, 2016 5:07 AM
To: Sam Morrison <sorrison at gmail.com>
Cc: Jose Castro Leon <jose.castro.leon at cern.ch>; openstack-operators at lists.openstack.org
Subject: Re: [Openstack-operators] [Openstack-Operators] Keystone cache strategies

On Tue, Jun 21, 2016 at 7:04 PM, Sam Morrison <sorrison at gmail.com> wrote:

On 22 Jun 2016, at 10:58 AM, Matt Fischer <matt at mattfischer.com> wrote:


Have you set up token caching at the service level? Meaning a memcache cluster that Glance, Nova, etc. would talk to directly? That will really cut down the traffic.
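Service-level token caching of the sort suggested here is configured per service via keystonemiddleware's [keystone_authtoken] section. A sketch, with placeholder hosts and an illustrative cache time (not values from this thread):

```ini
[keystone_authtoken]
# cache validated tokens in memcached instead of re-validating
# against Keystone on every API request
memcached_servers = memcached1:11211,memcached2:11211
# seconds a validated token may be served from the cache;
# 300 is illustrative, not a recommendation
token_cache_time = 300
```

Each service (Nova, Glance, etc.) carries this section in its own config file, so all of them can share one memcached cluster.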
Yeah, we have that, although the default cache time is 10 seconds for revocation lists. I might just set it to some large number to limit this traffic a bit.

Sam

We have ours set to 60 seconds. I've fiddled around with it some, but I've found that revocation events are damaging to performance no matter how much magic you try to apply.
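For context, the knob being discussed here is keystonemiddleware's revocation list cache time; a sketch matching the values mentioned in the thread (default 10, raised to 60):

```ini
[keystone_authtoken]
# seconds to cache the fetched revocation list before re-fetching;
# raising it trades revocation propagation delay for fewer fetches
revocation_cache_time = 60
```

A larger value cuts traffic to Keystone but means a revoked token can keep validating until the cached list expires.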

