[Openstack-operators] Keystone performance issue
ybrodskiy at gmail.com
Wed Nov 11 02:30:24 UTC 2015
There was a good presentation on the different token formats at the Tokyo
summit. It may help answer some of these questions.
On Sat, Nov 7, 2015 at 10:24 AM, Reza Bakhshayeshi <reza.b2008 at gmail.com> wrote:
> Thanks all for your tips,
> I switched to Fernet tokens and the average response time dropped to 6.8
> seconds.
> I think, as Clint said, I have to balance the load between multiple tinier
> keystone servers.
> What's your opinion?
> No, I just used Apache JMeter.
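For the "multiple tinier keystone servers" approach, a minimal HAProxy sketch could look like the following (the backend addresses, node names, and two-node count are hypothetical; a real config also needs global/defaults sections and should front the admin port 35357 the same way):

```cfg
# Sketch: round-robin Keystone public API across two hypothetical nodes
frontend keystone_public
    bind *:5000
    mode http
    default_backend keystone_api

backend keystone_api
    mode http
    balance roundrobin
    server keystone1 192.0.2.11:5000 check
    server keystone2 192.0.2.12:5000 check
```

With Fernet tokens this works without shared token state, since any node holding the same key repository can validate a token issued by another.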
> On 27 October 2015 at 04:33, Dina Belova <dbelova at mirantis.com> wrote:
>> afair the number of tokens that can be processed simultaneously by
>> Keystone is, in practice, equal to the number of Keystone workers (either
>> admin workers or public workers, depending on the request's nature), and
>> this number defaults to the number of CPUs. So that is a kind of default
>> limitation that may influence your testing.
>> Btw did you evaluate Rally for the Keystone CRUD benchmarking?
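A minimal Rally task for the token-creation case might look like this (scenario name and layout as in Rally's built-in Keystone plugins; the concurrency and context values are illustrative, chosen to mirror the 1000-request JMeter run):

```json
{
  "Authenticate.keystone": [
    {
      "runner": {"type": "constant", "times": 1000, "concurrency": 48},
      "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
    }
  ]
}
```

Rally then reports per-request latency percentiles rather than just a total wall-clock time, which makes before/after comparisons easier.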
>> On Tue, Oct 27, 2015 at 12:39 AM, Clint Byrum <clint at fewbar.com> wrote:
>>> Excerpts from Reza Bakhshayeshi's message of 2015-10-27 05:11:28 +0900:
>>> > Hi all,
>>> > I've installed OpenStack Kilo (with the help of the official document)
>>> > on a physical HP server with the following specs:
>>> > 2x Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz (12 physical cores each,
>>> > 48 threads total)
>>> > and 128 GB of RAM
>>> > I'm going to benchmark Keystone performance (with Apache JMeter) in
>>> > order to deploy OpenStack in production, but unfortunately I'm facing
>>> > low performance.
>>> > 1000 simultaneous token creation requests took around 45 seconds.
>>> > By enabling memcached in keystone.conf (configuration below) and
>>> > increasing Keystone processes to 48, response time decreased to 18
>>> > seconds, which is still too high.
>>> I'd agree that 56 tokens per second isn't very high. However, it
>>> also isn't all that terrible given that keystone is meant to be load
>>> balanced, and so you can at least just throw more boxes at it without
>>> any complicated solution at all.
>>> Of course, that's assuming you're running with Fernet tokens. With UUID,
>>> which is the default if you haven't changed it, then you're pounding
>>> tokens into the database, and that means you need to tune your database
>>> service quite a bit and provide high performance I/O (you didn't mention
>>> the I/O system).
>>> So, first thing I'd recommend is to switch to Liberty, as it has had some
>>> performance fixes for sure. But I'd also recommend evaluating the Fernet
>>> token provider. You will see much higher CPU usage on token validations,
>>> because the caching bonuses you get with UUID tokens aren't as mature in
>>> Fernet even in Liberty, but you should still see an overall scalability
>>> win by not needing to scale out your database server for heavy writes.
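Switching the provider is a small config change, but Fernet also needs a key repository created first; a sketch for Kilo/Liberty (class path, section names, and default repository path as I recall them from those releases; verify against your version, where the short alias `fernet` may also be accepted):

```ini
# keystone.conf -- enable Fernet tokens
# First create and secure the key repository, e.g.:
#   keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[token]
provider = keystone.token.providers.fernet.Provider

[fernet_tokens]
key_repository = /etc/keystone/fernet-keys/
```

When load balancing across nodes, the key repository must be distributed to every Keystone server, and key rotation (`keystone-manage fernet_rotate`) coordinated so all nodes can validate current tokens.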
>>> > [cache]
>>> > enabled = True
>>> > config_prefix = cache.keystone
>>> > expiration_time = 300
>>> > backend = dogpile.cache.memcached
>>> > backend_argument = url:localhost:11211
>>> > use_key_mangler = True
>>> > debug_cache_backend = False
>>> > I also increased MariaDB's "max_connections" and Apache's allowed open
>>> > connections to 4096, but they didn't help much (2 seconds!)
>>> > Is this expected behavior, or can Keystone's performance be optimized
>>> > further? What are your suggestions?
>>> I'm pretty focused on doing exactly that right now, but we will need to
>>> establish some baselines and try to make sure we have tools to maintain
>>> the performance long-term.
>>> OpenStack-operators mailing list
>>> OpenStack-operators at lists.openstack.org
>> Best regards,
>> Dina Belova
>> Senior Software Engineer
>> Mirantis Inc.