[Openstack] [keystone] memcache token backend performance
Clint Byrum
clint at fewbar.com
Mon Jan 6 18:49:47 UTC 2014
Excerpts from Jay Pipes's message of 2014-01-06 08:32:13 -0800:
> On Mon, 2014-01-06 at 10:10 -0500, Adam Young wrote:
> >
> > On 01/03/2014 11:38 PM, Xu (Simon) Chen wrote:
> >
> > > Hi folks,
> > >
> > >
> > > I am having trouble with using memcache as the keystone token
> > > backend. I have three keystone nodes running active/active. Each is
> > > running keystone on apache (for kerberos auth). I recently switched
> > > from the sql backend to memcache, with memcached running on all
> > > three of the keystone nodes.
> > >
> >
> > This triggers a memory of there being something wonky with
> > greenthreads, the threading override in Eventlet, and memcached. But
> > you said Apache, so I think you are not running with greenthreads?
> >
> > There are numerous writeups out there about apache and memcached
> > performance issues; one article, for example, talks about memcached
> > partitions filling up.
> >
> >
> > >
> > >
> > > This setup would run well for a while, but then apache would start
> > > to hog CPUs, and memcached CPU usage would climb to 30% or so. I
> > > tried to increase the memcached cluster from 3 to 6 nodes, but in
> > > general the performance is much worse than with the sql backend.
> > Probably due to the need for replication. In order to keep the
> > caches anywhere close to in sync, it is going to require something
> > approaching full connectivity between the nodes.
>
> Instead of replicating memcache, just tell your loadbalancer to use
> sticky sessions, and give each keystone server its own dedicated
> memcache instance.
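For illustration, that arrangement boils down to each keystone node
talking only to a memcached on localhost. A minimal sketch using
python-memcached; the key and token value are hypothetical:

    import memcache

    # Each keystone node points only at its own local memcached; the
    # loadbalancer's sticky sessions keep a given client on the same
    # node, so tokens written here are read back from the same local
    # cache. No cross-node replication is involved.
    local_cache = memcache.Client(['127.0.0.1:11211'])
    local_cache.set('token-abc123', '<serialized token>', time=3600)
    print(local_cache.get('token-abc123'))
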
What's wrong with the usual memcached hyper-scale-out method of a list
of servers and hashing the key to choose one?
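
For the record, that method is plain deterministic server selection by
key hash. A sketch of the idea (the crc32 choice and host list are
illustrative; python-memcached's default client behaves along these
lines):

    import zlib

    SERVERS = ['ks1:11211', 'ks2:11211', 'ks3:11211']  # hypothetical hosts

    def pick_server(key, servers=SERVERS):
        # Hash the key and take it modulo the server count: every
        # client configured with the same list maps a given token to
        # the same memcached, so no replication is needed.
        return servers[zlib.crc32(key.encode('utf-8')) % len(servers)]

    print(pick_server('token-abc123'))

The tradeoff is that resizing the server list remaps most keys; for a
token store that mostly means affected clients have to re-authenticate.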