[openstack-dev] [devstack] openstack client slowness / client-as-a-service

Perry, Sean sean.perry at hpe.com
Tue Apr 19 18:24:29 UTC 2016

From: Ian Cordasco [sigmavirus24 at gmail.com]
Sent: Tuesday, April 19, 2016 11:11 AM
To: Perry, Sean; OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

-----Original Message-----
From: Perry, Sean <sean.perry at hpe.com>
Reply: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Date: April 19, 2016 at 12:41:02
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject:  Re: [openstack-dev] [devstack] openstack client slowness /    client-as-a-service

> What Dan sees WRT a persistent client process, though, is a combination of those two things:
> saving the Python loading and the Keystone round trips.
> If I replace the import of openstack.shell with a main that just returns 0 in the OSC entry
> point and time it, the result floats around 0.030s on my VMs. Python itself is not the issue.
> The cost is in the imports.
> openstack --help >/dev/null takes over 4 seconds.
> Dean, have you tried replacing all of the stevedore/cliff magic with explicit imports
> of the common OpenStack clients? I am curious if the issue is disk access, Python's egg
> loading, etc.
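One way to confirm that the cost is in the imports rather than interpreter startup is a micro-benchmark along these lines. This is only a sketch: "json" is a runnable stand-in, and on a devstack box you would substitute openstackclient.shell (or any client module you suspect).

```python
import subprocess
import sys
import time

def time_startup(code):
    """Time a fresh interpreter process running the given snippet."""
    start = time.monotonic()
    subprocess.run([sys.executable, "-c", code], check=True)
    return time.monotonic() - start

# Bare interpreter startup vs. startup plus an import. "json" is a
# stand-in; swap in openstackclient.shell where it is installed.
bare = time_startup("pass")
with_import = time_startup("import json")
print(f"bare: {bare:.3f}s  with import: {with_import:.3f}s")
```

The difference between the two numbers isolates the import cost from process startup, which is the 0.030s baseline mentioned above.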

Define "common". Frankly, I don't think you'll find a common definition of that. Further, if you're using an even remotely recent version of pip (probably 1.4 or newer), eggs are no longer a thing. If you're using vanilla setup.py to install everything, though, then yes, maybe you're having problems related to that.

While it does not make (every|any)one happy, for benchmarking the list could be keystone, nova, glance, cinder, and neutron. This is NOT about choosing projects but about setting up a reasonable baseline number.

By "egg" I mean all of the bits involved in Python loading our code. While zipped-up eggs may not be involved, there is still the egg-info to parse for the entry points, for instance.
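That metadata parsing is easy to measure on its own. A minimal sketch using the modern stdlib equivalent (stevedore ultimately does comparable work through pkg_resources on the versions discussed here, so treat this as an approximation, not the exact code path):

```python
import time
from importlib.metadata import entry_points

# Time just the entry-point scan: parsing every installed
# distribution's metadata, before any plugin code is imported.
start = time.monotonic()
eps = entry_points()
elapsed = time.monotonic() - start
print(f"entry-point scan took {elapsed:.3f}s")
```

On a host with many installed packages this scan alone can be a measurable slice of the startup numbers below.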

A quick test this morning of replacing clientmanager and commandmanager with Mock, and having OSC exit before it invokes run(), shows that loading the potential clients in clientmanager adds half a second. So right now I have:

* 0.03s just to start the process
* 0.5s to start and exit
* 1.1s to start and exit if clientmanager rounds up the potential client modules

Then we start hitting token issuance, chatting with services, etc. In my devstack, 'token issue' for admin takes 7.5 seconds, of which 6.6s is network according to --timing, split evenly between POST and GET.

I tried commenting out / mocking the other modules in openstackclient.shell to little effect.

This tells me that we can potentially save time by:

* optimizing the shell initialization
* optimizing the plugin loading / entry point handling
* reducing the round trips to the services

Whether our time is better spent optimizing / refactoring what we have, or rewriting it and discovering the bugs, performance issues, plugin loading issues, etc. of a new language, is a good discussion. But just saying "yeah, Python is slow" is not selling it, based on the numbers I am seeing.
