[openstack-dev] [devstack] openstack client slowness / client-as-a-service

Dean Troyer dtroyer at gmail.com
Wed Apr 20 17:28:04 UTC 2016

On Wed, Apr 20, 2016 at 9:43 AM, Doug Hellmann <doug at doughellmann.com> wrote:

> Cliff looks for commands on demand. If we modify its command loader to
> support some "built in" commands, and then implement the commands in OSC
> that way, we can avoid scanning the real plugin system until we hit a
> command that isn't built in.

Maybe down the road, once this becomes a bigger percentage of the startup
time. For now I do not (yet) believe the plugins are the problem that
others do.  See below...

> The last time I proposed that, though, someone (Dean?) pointed out that
> loading the plugins wasn't actually where OSC was spending its time. So,
> maybe we should profile the thing before proposing code changes.

It has been a while since I looked into this in detail; we made a couple of
changes then that helped, but since then the world kept moving and we're
behind again. Right now OSC is a mess WRT keystoneclient/keystoneauth and
their interactions with os-client-config.  We have lost sight of who is
supposed to be doing what here.  I know for a fact that there is
duplication in auth setup; we often make duplicate REST auth calls.

OSC 2.4.0 should be released today/tomorrow, following which we begin
merging the cleanup starting with the ksc/ksa bits.

Here are a couple of things to consider for those who want to investigate:
* OSC does not load _any_ project client (modulo the ksc bit mentioned
above) unless/until it is actually needed to make a REST call.
* Timing on a help command includes a complete scan of all entry points to
generate the list of commands.
* The --timing option lists all REST calls that properly go through our
TimingSession object.  That should be all of them unless a library doesn't
use the session it is given (the ones used by the commands in the OSC repo
all do this correctly).
* Interactive mode can be useful to get timing on just the setup/teardown
process without actually running a command:

  time openstack </dev/null
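To get a feel for why the help command is so much slower than the others, here is a minimal sketch of the entry-point scan it has to perform. The group name 'openstack.cli' is illustrative only; OSC actually registers commands under several openstack.* entry-point groups.

```python
# Rough sketch: time a full scan of a setuptools entry-point group,
# the kind of work `openstack help` must do to build the command list.
# NOTE: 'openstack.cli' is a placeholder group name for illustration.
import time
import pkg_resources

start = time.time()
commands = [ep.name for ep in pkg_resources.iter_entry_points('openstack.cli')]
elapsed = time.time() - start
print("scanned %d entry points in %.3fs" % (len(commands), elapsed))
```

On a machine with many installed packages the scan cost comes mostly from setuptools walking every distribution's metadata, which is why it dwarfs the cost of any single command.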

So time for a couple of fun baselines, using the OSC SHA proposed for 2.4.0
(4639148b1d) on a Lenovo T420s with Ubuntu 14.04.4 against a DevStack on an
Ubuntu 14.04.3 VM on a moderately-sized local NUC:

* time openstack --timing </dev/null
  * py2: 0m0.307s
  * py3: 0m0.376s

* time openstack --timing help
  * py2: 0m1.939s
  * py3: 0m1.803s

* time openstack --timing catalog list
  * py2: 0m0.675s - 0.360 REST = 0.315s
  * py3: 0m0.704s - 0.312 REST = 0.392s

* time openstack --timing flavor list
  * py2: 0m0.772s - 0.447 REST = 0.325s
  * py3: 0m2.563s - 2.146 REST = 0.417s

* time openstack --timing image list
  * py2: 0m0.860s - 0.517 REST = 0.343s
  * py3: 0m0.952s - 0.539 REST = 0.423s
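The arithmetic behind the numbers above is just total wall time minus the REST time reported by --timing, leaving the Python-side overhead. A quick sketch reproducing the py2 rows:

```python
# Overhead = total wall time - REST time (both in seconds).
# Values are the py2 measurements quoted above.
timings = {
    'catalog list': (0.675, 0.360),
    'flavor list':  (0.772, 0.447),
    'image list':   (0.860, 0.517),
}
for cmd, (total, rest) in sorted(timings.items()):
    print("%-13s overhead: %.3fs" % (cmd, total - rest))
```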

Are there any conclusions to draw from this seat-of-the-pants look?

* The differences between py2 and py3 are small, and not consistent.
* The time for OSC to load and then exit immediately is within 0.1s of the
time to execute a near-trivial command once the REST round trip times are
subtracted.
* Two auth calls are consistently being made; this is one of the things
actively being cleaned up with the ksc/ksa transition bits.  The additional
REST round trip in these tests is consistently between 0.14s and 0.2s, so
that gain will come soon.

I also have buried in my notes some out-of-date results from using boris-42's
profimp that led me to keystoneclient being the largest single static import,
accounting for nearly half of the total static load time.  The transition to
using ksa should help here; I do not have profimp numbers for that yet.
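For anyone who wants to repeat a profimp-style check without installing it, here is a minimal stand-in: time a single static import in a fresh interpreter. The module names passed in are up to you; run it against keystoneclient and keystoneauth1 (where installed) to compare their static-import costs.

```python
# Minimal stand-in for a profimp-style measurement: wall-clock the
# import of a single module.  Meaningful only in a fresh interpreter,
# since an already-imported module returns from sys.modules instantly.
import sys
import time


def time_import(module_name):
    """Import module_name and return the wall time the import took."""
    start = time.time()
    __import__(module_name)
    return time.time() - start


if __name__ == '__main__':
    # Default to a stdlib module so the script runs anywhere.
    name = sys.argv[1] if len(sys.argv) > 1 else 'json'
    print("import %s: %.3fs" % (name, time_import(name)))
```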



Dean Troyer
dtroyer at gmail.com
