[Openstack-operators] Keystone performance issue

Reza Bakhshayeshi reza.b2008 at gmail.com
Sat Nov 7 18:24:05 UTC 2015


Thanks all for your tips,
I switched to Fernet tokens and the average response time dropped to 6.8 seconds.
I think, as Clint said, I'll have to balance the load across multiple smaller
Keystone servers.
What's your opinion?
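
What I have in mind is something along these lines (a rough HAProxy sketch;
addresses and server names are placeholders):

# /etc/haproxy/haproxy.cfg
listen keystone-public
    bind 192.168.0.10:5000
    mode http
    balance roundrobin
    option httpchk GET /v3
    server keystone-1 192.168.0.11:5000 check
    server keystone-2 192.168.0.12:5000 check

listen keystone-admin
    bind 192.168.0.10:35357
    mode http
    balance roundrobin
    server keystone-1 192.168.0.11:35357 check
    server keystone-2 192.168.0.12:35357 check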

Dina,
No, I just used Apache JMeter.

Regards,
Reza

On 27 October 2015 at 04:33, Dina Belova <dbelova at mirantis.com> wrote:

> Reza,
>
> AFAIR the number of tokens that can be processed simultaneously by
> Keystone is, in practice, equal to the number of Keystone workers (either
> admin workers or public workers, depending on which endpoint the client
> hits), and this number defaults to the number of CPUs. So that is a
> built-in default limit that may influence your testing.
>
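> For reference, these are roughly the knobs involved (option names taken from
> the Kilo sample config and standard mod_wsgi directives, so worth verifying
> against your deployment):
>
> # keystone.conf, when Keystone runs under eventlet
> [eventlet_server]
> public_workers = 48
> admin_workers = 48
>
> # or, when Keystone runs under Apache mod_wsgi, the process count is set
> # in the vhost instead, e.g.:
> # WSGIDaemonProcess keystone-public processes=48 threads=1 user=keystone group=keystone
>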
> Btw did you evaluate Rally for the Keystone CRUD benchmarking?
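> A minimal Rally task for token creation could look roughly like this (the
> runner numbers are just placeholders):
>
> {
>     "Authenticate.keystone": [
>         {
>             "runner": {"type": "constant", "times": 1000, "concurrency": 48},
>             "context": {"users": {"tenants": 1, "users_per_tenant": 1}}
>         }
>     ]
> }
>
> and then run it with: rally task start keystone-auth.json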
>
> Cheers,
> Dina
>
> On Tue, Oct 27, 2015 at 12:39 AM, Clint Byrum <clint at fewbar.com> wrote:
>
>> Excerpts from Reza Bakhshayeshi's message of 2015-10-27 05:11:28 +0900:
>> > Hi all,
>> >
>> > I've installed OpenStack Kilo (following the official documentation) on a
>> > physical HP server with the following specs:
>> >
>> > 2x Intel(R) Xeon(R) CPU E5-2695 v2 @ 2.40GHz, 12 physical cores each
>> > (48 threads in total), and 128 GB of RAM
>> >
>> > I'm going to benchmark Keystone performance (with Apache JMeter) in order
>> > to deploy OpenStack in production, but unfortunately I'm seeing extremely
>> > low performance.
>> >
>> > 1000 simultaneous token creation requests took around 45 seconds. (WOW!)
>> > By enabling memcached caching in keystone.conf (configuration below) and
>> > increasing the Keystone worker processes to 48, the response time decreased
>> > to 18 seconds, which is still too high.
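>> >
>> > For reference, each JMeter thread issues a single token-creation request,
>> > roughly equivalent to the following v3 call (host, credentials and API
>> > version here are placeholders):
>> >
>> > curl -si -X POST http://controller:5000/v3/auth/tokens \
>> >   -H "Content-Type: application/json" \
>> >   -d '{"auth": {"identity": {"methods": ["password"],
>> >         "password": {"user": {"name": "admin",
>> >         "domain": {"id": "default"}, "password": "ADMIN_PASS"}}}}}' \
>> >   | grep -i x-subject-token
>> >
>> > The issued token comes back in the X-Subject-Token response header.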
>> >
>>
>> I'd agree that 56 tokens per second isn't very high. However, it
>> also isn't all that terrible given that keystone is meant to be load
>> balanced, and so you can at least just throw more boxes at it without
>> any complicated solution at all.
>>
>> Of course, that's assuming you're running with Fernet tokens. With UUID,
>> which is the default if you haven't changed it, you're pounding those
>> tokens into the database, and that means you need to tune your database
>> service quite a bit and provide high-performance I/O (you didn't mention
>> the I/O system).
>>
>> So, first thing I'd recommend is to switch to Liberty, as it has had some
>> performance fixes for sure. But I'd also recommend evaluating the Fernet
>> token provider. You will see much higher CPU usage on token validations,
>> because the caching bonuses you get with UUID tokens aren't as mature in
>> Fernet even in Liberty, but you should still see an overall scalability
>> win by not needing to scale out your database server for heavy writes.
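>>
>> Roughly, the switch looks like this; the short provider name works in
>> Liberty, while Kilo may want the full class path
>> keystone.token.providers.fernet.Provider, so double-check against your
>> release:
>>
>> # keystone.conf
>> [token]
>> provider = fernet
>>
>> [fernet_tokens]
>> key_repository = /etc/keystone/fernet-keys/
>>
>> # initialize the key repository once (and distribute the keys to all nodes):
>> # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone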
>>
>> > [cache]
>> > enabled = True
>> > config_prefix = cache.keystone
>> > expiration_time = 300
>> > backend = dogpile.cache.memcached
>> > backend_argument = url:localhost:11211
>> > use_key_mangler = True
>> > debug_cache_backend = False
>> >
>> > I also increased MariaDB's "max_connections" and Apache's allowed open
>> > files to 4096, but they didn't help much (about 2 seconds).
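>> >
>> > For reference, that tuning was roughly the following (file paths and the
>> > Apache user name depend on the distribution):
>> >
>> > # MariaDB, e.g. /etc/mysql/my.cnf
>> > [mysqld]
>> > max_connections = 4096
>> >
>> > # open-file limit for the Apache user, e.g. in /etc/security/limits.conf
>> > apache  soft  nofile  4096
>> > apache  hard  nofile  4096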
>> >
>> > Is this expected behavior, or can Keystone's performance be optimized
>> > further? What are your suggestions?
>>
>> I'm pretty focused on doing exactly that right now, but we will need to
>> establish some baselines and try to make sure we have tools to maintain
>> the performance long-term.
>>
>> _______________________________________________
>> OpenStack-operators mailing list
>> OpenStack-operators at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
>
> --
>
> Best regards,
>
> Dina Belova
>
> Senior Software Engineer
>
> Mirantis Inc.
>