[openstack-dev] [keystone][neutron][requirements] - keystonemiddleware-4.1.0 performance regression

Boris Pavlovic boris at pavlovic.me
Thu Jan 21 09:36:03 UTC 2016


Hi,


By the way, an OSprofiler trace shows how this regression impacts the number
of DB queries made by Keystone during the boot of a VM:
http://boris-42.github.io/b2.html


Best regards,
Boris Pavlovic

On Wed, Jan 20, 2016 at 3:30 PM, Morgan Fainberg <morgan.fainberg at gmail.com>
wrote:

> As promised here are the fixes:
>
>
> https://review.openstack.org/#/q/Ifc17c27744dac5ad55e84752ca6f68169c2f5a86,n,z
>
> Proposed to both master and liberty.
>
> On Wed, Jan 20, 2016 at 12:15 PM, Sean Dague <sean at dague.net> wrote:
>
>> On 01/20/2016 02:59 PM, Morgan Fainberg wrote:
>> > So this was due to a change in keystonemiddleware. We stopped doing
>> > in-memory caching of tokens per process, per worker by default [1].
>> > There are a couple of reasons:
>> >
>> > 1) in-memory caching produced unreliable validation because some
>> > processes may have a cache and some may not
>> > 2) in-memory caching was unbounded memory-wise per worker.
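
A minimal sketch of the two problems Morgan lists above (illustrative only,
not keystonemiddleware's actual implementation): each worker process holding
its own dict-based cache means caches diverge between workers and nothing is
ever evicted, whereas a shared backend such as memcached gives every worker
the same view.

```python
# Hypothetical sketch: per-worker in-memory token caches vs. a shared cache.
# Class and method names here are illustrative, not from keystonemiddleware.

class InMemoryTokenCache:
    """One instance per worker process: caches diverge and grow unbounded."""
    def __init__(self):
        self._tokens = {}

    def get(self, token_id):
        return self._tokens.get(token_id)

    def set(self, token_id, data):
        self._tokens[token_id] = data  # never evicted: unbounded memory growth


class SharedTokenCache:
    """Stand-in for memcached: one store shared by every worker."""
    _tokens = {}  # class-level dict plays the role of the shared backend

    def get(self, token_id):
        return SharedTokenCache._tokens.get(token_id)

    def set(self, token_id, data):
        SharedTokenCache._tokens[token_id] = data


# Two workers validate the same token.
w1, w2 = InMemoryTokenCache(), InMemoryTokenCache()
w1.set("tok", {"valid": True})
assert w2.get("tok") is None          # worker 2 must re-validate: inconsistent

s1, s2 = SharedTokenCache(), SharedTokenCache()
s1.set("tok", {"valid": True})
assert s2.get("tok") == {"valid": True}  # one validation serves all workers
```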
>> >
>> > I'll spin up a devstack change today to enable memcache and use
>> > memcache-based caching for keystonemiddleware. This will benefit things
>> > in a couple of ways:
>> >
>> > * All services and all of their workers will share the offload of the
>> > validation, likely producing a real speedup even over the old in-memory
>> > caching
>> > * There will no longer be inconsistent validation offload/responses
>> > based upon which worker you happen to hit for a given service.
>> >
>> > I'll post to the ML here with the proposed change later today.
>> >
>> > [1]
>> >
>> https://github.com/openstack/keystonemiddleware/commit/f27d7f776e8556d976f75d07c99373455106de52
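
A sketch of the kind of service configuration the fix implies, assuming the
standard `[keystone_authtoken]` options consumed by keystonemiddleware's
auth_token middleware (e.g. in neutron.conf; values shown are examples, not
recommendations from this thread):

```ini
# Enable memcached-backed token caching for the auth_token middleware.
# With the 4.1.0 default, leaving memcached_servers unset means no
# token caching at all, hence the repeated validation round-trips.
[keystone_authtoken]
memcached_servers = 127.0.0.1:11211
# How long (seconds) a validated token may be reused from the cache.
token_cache_time = 300
```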
>>
>> This seems like a pretty substantial performance impact. Was there a
>> reno associated with this?
>>
>> I think that we should still probably:
>>
>> * exclude (!=) this keystonemiddleware version in requirements, since it
>> is impacting our ability to land fixes in the gate
>> * add the devstack memcache code
>> * find some way to WARN if we are running without a memcache config, so
>> people realize they are in a regressed state
>> * add keystonemiddleware back at that version once the above is in place
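
The exclusion Sean describes would be a pip-style version specifier in the
requirements file, along these lines (the `>=` floor shown is illustrative;
`!=4.1.0` pins out the release discussed in this thread):

```
keystonemiddleware>=4.0.0,!=4.1.0
```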
>>
>>         -Sean
>>
>> >
>> > Cheers,
>> > --Morgan
>> >
>> > On Tue, Jan 19, 2016 at 10:57 PM, Armando M. <armamig at gmail.com
>> > <mailto:armamig at gmail.com>> wrote:
>> >
>> >
>> >
>> >     On 19 January 2016 at 22:46, Kevin Benton <blak111 at gmail.com
>> >     <mailto:blak111 at gmail.com>> wrote:
>> >
>> >         Hi all,
>> >
>> >         We recently noticed a major jump in the tempest and API test
>> >         run times in Neutron. After digging through logstash, I found
>> >         that the jump first occurred with this requirements bump:
>> >         https://review.openstack.org/#/c/265697/
>> >
>> >         After locally testing each requirements change individually, I
>> >         found that the keystonemiddleware change seems to be the
>> >         culprit. It almost doubles the time it takes to fulfill simple
>> >         port-list requests in Neutron.
>> >
>> >         Armando pushed up a patch here to
>> >         confirm: https://review.openstack.org/#/c/270024/
>> >         Once that's verified, we should probably put a cap on the
>> >         middleware because it's causing the tests to run up close to
>> >         their time limits.
>> >
>> >
>> >     Kevin,
>> >
>> >     As usual your analytical skills are to be praised.
>> >
>> >     I wonder if anyone else is aware of this issue (or related ones),
>> >     because during the usual hunting I could see other projects being
>> >     affected and showing abnormally high run times on the dsvm jobs.
>> >
>> >     I am not sure that [1] is the right approach, but it should give us
>> >     some data points if executed successfully.
>> >
>> >     Cheers,
>> >     Armando
>> >
>> >     [1]  https://review.openstack.org/#/c/270024/
>> >
>> >
>> >         --
>> >         Kevin Benton
>> >
>> >
>> >         __________________________________________________________________________
>> >         OpenStack Development Mailing List (not for usage questions)
>> >         Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
>> >         http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>> >
>>
>>
>> --
>> Sean Dague
>> http://dague.net
>>
>>
>
>
>
>

