[openstack-dev] [keystone][performance][profiling] Profiling Mitaka Keystone: some results and asking for a help

Matt Fischer matt at mattfischer.com
Mon Apr 11 19:57:04 UTC 2016

On Mon, Apr 11, 2016 at 8:11 AM, Dina Belova <dbelova at mirantis.com> wrote:

> Hey, openstackers!
> Recently I was trying to profile Keystone (OpenStack Liberty vs Mitaka)
> using this set of changes
> <https://review.openstack.org/#/q/topic:osprofiler-support-in-keystone+status:open> (currently
> under review - some final steps are needed there to finish the
> work) and OSprofiler.
> Some preliminary results (all on one OpenStack node) can be found here
> <http://docs.openstack.org/developer/performance-docs/test_results/keystone/all-in-one/index.html> (the raw
> OSprofiler reports are not yet merged anywhere permanent; for now they can be found here
> <https://review.openstack.org/#/c/299514/>). The full plan
> <http://docs.openstack.org/developer/performance-docs/test_plans/keystone/plan.html#keystone-performance> of
> what is going to be tested can be found in the docs as well. In short, I
> wanted to look at how Keystone's DB/cache usage changed from Liberty
> to Mitaka, keeping in mind that several changes were
> introduced:
>    - federation support was added (and made DB scheme a bit more complex)
>    - Keystone moved to oslo.cache usage
>    - local context cache was introduced during Mitaka
> First of all - *good job on making Keystone less DB-intensive when the
> cache is turned on*! With Keystone caching enabled, the number of queries
> made to the Keystone DB in Mitaka is on average half of what it was in Liberty,
> comparing the same requests and topologies. Thanks to the Keystone community
> for making it happen :)
> However, I ran into *two strange issues* during my experiments, and I'm
> kindly asking you, folks, to help me here:
>    - I've created bug #1567403
>    <https://bugs.launchpad.net/keystone/+bug/1567403> to share the
>    information - with caching turned on, the local context cache should cache
>    identical function calls within a single API request so as not to ping Memcache too often.
>    Although I observed such repeated calls, Keystone still went to Memcache to fetch this
>    information. Could someone take a look at this and help me figure out what I
>    am observing? At first sight the local context cache should work fine, but for
>    some reason I do not see it being used.
>    - One more filed bug - #1567413
>    <https://bugs.launchpad.net/keystone/+bug/1567413> - is about the
>    opposite situation :) When I turned caching off explicitly in the
>    keystone.conf file, I still observed some values being fetched from
>    Memcache... Your help is very much appreciated!
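For context, caching in Mitaka Keystone is driven by the oslo.cache `[cache]` section of keystone.conf, so "turning the cache off" as in bug #1567413 amounts to something like the fragment below. This is a rough sketch using standard oslo.cache option names; the exact defaults and the memcache server address are illustrative:

```ini
[cache]
# Disable caching entirely; with this set, Keystone should not be
# talking to Memcache at all (which is what bug #1567413 questions).
enabled = false

# When caching is on, a typical memcache-backed setup looks like:
# enabled = true
# backend = dogpile.cache.memcached
# memcache_servers = localhost:11211
```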
> Thanks in advance and sorry for a long email :)
> Cheers,
> Dina
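The expected behavior of the local context cache described in bug #1567403 can be sketched as follows. This is a hypothetical illustration of the technique (per-request memoization in front of memcache), not Keystone's actual implementation; all class and function names here are made up:

```python
# Hypothetical sketch (not Keystone's actual code) of a request-local
# cache layered in front of memcache: within one API request, repeated
# lookups of the same key should hit a local dict, not memcache.

class FakeMemcache:
    """Stands in for a memcache client; counts round trips."""
    def __init__(self):
        self.store = {}
        self.calls = 0

    def get(self, key):
        self.calls += 1
        return self.store.get(key)


class RequestContext:
    """Carries a per-request dict that lives only as long as the request."""
    def __init__(self):
        self.cache = {}


def cached_get(context, memcache, key):
    # First consult the request-local cache ...
    if key in context.cache:
        return context.cache[key]
    # ... and only fall through to memcache on a miss.
    value = memcache.get(key)
    context.cache[key] = value
    return value


mc = FakeMemcache()
mc.store['token-123'] = {'user': 'demo'}

ctx = RequestContext()
first = cached_get(ctx, mc, 'token-123')
second = cached_get(ctx, mc, 'token-123')
# Two identical lookups in one request, but only one memcache round trip.
```

Bug #1567403 reports the opposite of this sketch: identical per-request calls still reached Memcache, i.e. the second lookup behaved like the first.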

Thanks for starting this conversation. I had some weird perf results
comparing Liberty to an RC release of Mitaka, but I was holding them until
someone else confirmed what I saw. I'm testing token creation and
validation. From what I saw, token validation slowed down in Mitaka. After
my benchmark runs, traffic to memcache in Mitaka was 8x what it was in
Liberty. That implies more caching, but 8x is a lot, and even memcache
references are not free.

I know some of the Keystone folks are looking into this, so it will be good
to follow up on it. Maybe we could talk about this at the summit?