[openstack-dev] [devstack] openstack client slowness / client-as-a-service
Perry, Sean
sean.perry at hpe.com
Tue Apr 19 17:10:48 UTC 2016
________________________________
From: Adam Young [ayoung at redhat.com]
Sent: Tuesday, April 19, 2016 7:06 AM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service
On 04/18/2016 09:19 AM, Daniel P. Berrange wrote:
There have been threads in the past about the slowness of the "openstack"
client tool such as this one by Sean last year:
http://lists.openstack.org/pipermail/openstack-dev/2015-April/061317.html
Sean mentioned a 1.5s fixed overhead on openstack client, and mentions it
is significantly slower than the equivalent nova command. In my testing
I don't see any real speed difference between openstack & nova client
programs, so maybe that differential has been addressed since Sean's
original thread, or maybe nova has got slower.
Overall though, I find it is way too sluggish considering it is running
on a local machine with 12 CPUs and 30 GB of RAM.
I had a quick go at profiling the tools with cProfile and analysing
the results with KCacheGrind, as per this blog:
https://julien.danjou.info/blog/2015/guide-to-python-profiling-cprofile-concrete-case-carbonara
And noticed that in profiling 'nova help', for example, the big sink appears
to be the 'pkg_resources' module and its use of pyparsing. I didn't
spend any real time to dig into this in detail, because it got me wondering
whether we can easily just avoid the big startup penalty by not having to
start up a new Python interpreter for each command we run.
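For anyone who wants to reproduce that kind of measurement in-process, here is a minimal cProfile sketch. The function name 'expensive_startup' is invented for illustration; Daniel profiled 'nova help' itself, but importing pkg_resources shows the same shape of cost:

```python
import cProfile
import io
import pstats

def expensive_startup():
    # Stand-in for CLI startup cost: on the 'nova help' profile the time
    # went to pkg_resources' entry-point scanning and pyparsing.
    try:
        import pkg_resources  # noqa: F401
    except ImportError:
        pass  # environment without setuptools; the profile still works

prof = cProfile.Profile()
prof.enable()
expensive_startup()
prof.disable()

# Dump the top entries sorted by cumulative time, as KCacheGrind would.
out = io.StringIO()
pstats.Stats(prof, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())
```

The same data can be dumped with 'python -m cProfile -o prof.out ...' and fed to a KCacheGrind converter for the graphical view.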
I traced devstack and saw it run 'openstack' and 'neutron' commands approx
140 times in my particular configuration. If each one of those has a 1.5s
overhead, we could potentially save 3 & 1/2 minutes off devstack execution
time.
So as a proof of concept I have created an 'openstack-server' command
which listens on a unix socket for requests and then invokes the
OpenStackShell.run / OpenStackComputeShell.main / NeutronShell.run
methods as appropriate.
I then replaced the 'openstack', 'nova' and 'neutron' commands with
versions that simply call to the 'openstack-server' service over the
UNIX socket. Since devstack will always recreate these commands in
/usr/bin, I simply put my replacements in $HOME/bin and then made
sure $HOME/bin was first in $PATH.
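The scheme above can be sketched in a single process like this. This is a hypothetical illustration, not the attached scripts: the socket path, the JSON-line framing, and the dispatch() stub are all invented, and the real server invoked OpenStackShell.run / OpenStackComputeShell.main / NeutronShell.run:

```python
import json
import os
import socket
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), "openstack-server.sock")
ready = threading.Event()

def dispatch(argv):
    # The real prototype routed to OpenStackShell.run,
    # OpenStackComputeShell.main or NeutronShell.run based on argv[0];
    # here we just echo so the round trip is visible.
    return "dispatched: " + " ".join(argv)

def serve_once():
    # The 'openstack-server' side: accept one request, run it, reply.
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK)
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    argv = json.loads(conn.makefile("r").readline())
    conn.sendall(dispatch(argv).encode())
    conn.close()
    srv.close()

def run_client(argv):
    # The replacement 'openstack'/'nova'/'neutron' wrapper: ship argv
    # over the socket instead of paying interpreter startup again.
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK)
    c.sendall((json.dumps(argv) + "\n").encode())
    reply = c.makefile("r").read()  # server closes after answering
    c.close()
    return reply

t = threading.Thread(target=serve_once)
t.start()
ready.wait()
reply = run_client(["openstack", "server", "list"])
t.join()
print(reply)
```

The win comes purely from paying Python interpreter and pkg_resources startup once, in the long-lived server, instead of ~140 times.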
You might call this 'command line as a service' :-)
Anyhow, with my devstack setup a traditional install takes
real 21m34.050s
user 7m8.649s
sys 1m57.865s
And when using openstack-server it only takes
real 17m47.059s
user 3m51.087s
sys 1m42.428s
So that has cut 18% off the total running time for devstack, which
is quite considerable really.
I'm attaching the openstack-server & replacement openstack commands
so you can see what I did. You have to manually run the openstack-server
command ahead of time and it'll print out details of every command run
on stdout.
Anyway, I'm not personally planning to take this experiment any further.
I'll probably keep using this wrapper in my own local dev env since it
does cut down on devstack time significantly. This mail is just to see
if it'll stimulate any interesting discussion or motivate someone to
explore things further.
I wonder how much of that is token caching. In a typical CLI use pattern, a new token is created each time a client is called, with no token passed between invocations. Using a session can greatly decrease the number of round trips to Keystone.
In combination with a Session cache in ~/, perhaps a modules-found cache as well, so stevedore et al. do not need to hunt through entry points during initialization?
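The token half of that can be illustrated with a toy cache. Every name here (TokenCache, fetch_token) is invented for illustration; in the real clients this role is played by a keystoneauth Session holding the token for reuse:

```python
import time

class TokenCache:
    """Toy illustration: fetch a token once, reuse it until it expires."""

    def __init__(self, fetch, ttl=3600):
        self._fetch = fetch      # callable doing the Keystone round trip
        self._ttl = ttl          # token lifetime in seconds
        self._token = None
        self._expires = 0.0
        self.round_trips = 0     # how many real auth calls we made

    def get(self):
        now = time.time()
        if self._token is None or now >= self._expires:
            self._token = self._fetch()
            self._expires = now + self._ttl
            self.round_trips += 1
        return self._token

def fetch_token():
    # Stand-in for a POST /v3/auth/tokens round trip to Keystone.
    return "gAAAAA-example-token"

cache = TokenCache(fetch_token, ttl=3600)
tokens = [cache.get() for _ in range(5)]
print("round trips:", cache.round_trips)
```

Five client invocations sharing such a cache cost one auth round trip instead of five, which is exactly the saving a shared Session (or the long-lived server above) would capture.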