[openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

Morgan Fainberg morgan.fainberg at gmail.com
Wed Jan 20 19:59:53 UTC 2016


So this was due to a change in keystonemiddleware: we stopped doing
in-memory caching of tokens per process, per worker by default [1]. There
were a couple of reasons for that:

1) in-memory caching produced unreliable validation, because some processes
may have had a given token cached while others did not
2) in-memory caching was unbounded in memory use per worker.

I'll spin up a devstack change today to enable memcache and use
memcache-backed caching for keystonemiddleware. This will benefit things in
a couple of ways (a rough config sketch is below the list):

* All services and all of their workers will share the validation cache,
likely producing a real speedup even over the old in-memory caching.
* There will no longer be inconsistent validation offload/responses based
upon which worker you happen to hit for a given service.
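
For reference, here is a minimal sketch of what memcache-backed token
caching looks like in a service's [keystone_authtoken] section (the server
address and cache time below are placeholders, not necessarily what the
devstack change will set):

    [keystone_authtoken]
    memcached_servers = 127.0.0.1:11211
    token_cache_time = 300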

I'll post to the ML here with the proposed change later today.

[1]
https://github.com/openstack/keystonemiddleware/commit/f27d7f776e8556d976f75d07c99373455106de52

Cheers,
--Morgan

On Tue, Jan 19, 2016 at 10:57 PM, Armando M. <armamig at gmail.com> wrote:

>
>
> On 19 January 2016 at 22:46, Kevin Benton <blak111 at gmail.com> wrote:
>
>> Hi all,
>>
>> We noticed a major jump in the Neutron tempest and API test run times
>> recently. After digging through logstash, I found that it first occurred
>> with this requirements bump:
>> https://review.openstack.org/#/c/265697/
>>
>> After locally testing each requirements change individually, I found that
>> the keystonemiddleware change seems to be the culprit. It almost doubles
>> the time it takes to fulfill simple port-list requests in Neutron.
>>
>> Armando pushed up a patch here to confirm:
>> https://review.openstack.org/#/c/270024/
>> Once that's verified, we should probably put a cap on the middleware
>> version, because it's pushing the test runs close to their time limits.
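(Illustration only: such a cap would look something like

    keystonemiddleware>=4.0.0,!=4.1.0   # or an upper bound like <4.1.0

in the requirements bounds; the exact pin is whatever the review settles on.)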
>>
>
> Kevin,
>
> As usual, your analytical skills are to be praised.
>
> I wonder if anyone else is aware of the issue(s); during the usual
> hunting I could see other projects being affected too, showing abnormally
> high run times in their dsvm jobs.
>
> I am not sure that [1] is the right approach, but it should give us some
> data points if executed successfully.
>
> Cheers,
> Armando
>
> [1]  https://review.openstack.org/#/c/270024/
>
>
>> --
>> Kevin Benton
>>
>