[openstack-dev] [devstack] openstack client slowness / client-as-a-service

Ian Cordasco sigmavirus24 at gmail.com
Tue Apr 19 18:11:00 UTC 2016


-----Original Message-----
From: Perry, Sean <sean.perry at hpe.com>
Reply: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Date: April 19, 2016 at 12:41:02
To: OpenStack Development Mailing List (not for usage questions) <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [devstack] openstack client slowness / client-as-a-service

> What Dan sees WRT a persistent client process, though, is a combination of those two things:  
> saving the Python loading and the Keystone round trips.
> If I replace the import of openstack.shell with a main that just returns 0 in the OSC entry  
> point and time it the result floats around 0.030s on my VMs. Python itself is not the issue.  
> The cost is in the imports.
> openstack --help >/dev/null takes over 4 seconds.
> Dean, have you tried replacing all of the stevedore/cliff magic with explicit imports  
> of the common OpenStack clients? I am curious if the issue is disk access, Python's egg  
> loading, etc.
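The claim above, that startup cost lives in the imports rather than in the interpreter, is easy to check directly. Below is a minimal sketch of per-module import timing; the module names in the loop are stand-in examples from the standard library, not the actual client libraries OSC pulls in (swap in e.g. the installed client packages to profile a real environment).

```python
import importlib
import sys
import time

def time_import(name):
    """Return the wall-clock cost of a cold import of `name` in seconds."""
    # Evict any cached copy (and submodules) so we measure a real import,
    # not a dictionary lookup in sys.modules.
    for mod in list(sys.modules):
        if mod == name or mod.startswith(name + "."):
            del sys.modules[mod]
    start = time.perf_counter()
    importlib.import_module(name)
    return time.perf_counter() - start

# Stand-in modules; replace with the clients you actually load.
for name in ("json", "argparse", "http.client"):
    print("%-12s %6.1f ms" % (name, time_import(name) * 1000))
```

On CPython 3.7 and later, `python -X importtime -c "import openstackclient.shell"` gives the same breakdown without any harness code.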

Define "common". Frankly, I don't think you'll find a common definition of that. Further, if you're using an even remotely recent version of pip (probably 1.4 or newer), eggs are no longer a thing. If you're installing everything with vanilla setup.py, though, then yes, maybe you're having problems related to that.

> Yeah Rust and friends are fun. But it presents a significant barrier to entry. If nothing  
> else, Rust is not Enterprise ready. Go is not much better. We need to remember that not 

Define "Enterprise ready". Considering the number and size of users/backers of each, I'm sure you have a different definition of Enterprise. If we're going to compare the two, Rust wins by simple virtue of actually having a packaging story.

> everyone will have their systems connected to the full Internet during system install  
> time. Pypi mirrors with selected packages are possible today. Sure we could implement  
> a cargo mirror too (or whatever Go calls theirs). But is that really better than improving  
> the current situation?

Given the current status of CPython 2.7 (which I suspect most people are going to be using), what kind of speed improvements do you think you'll be able to add there? You could certainly work on fixing up Python 3.6 onwards (if it hasn't already been improved), but Enterprises aren't really looking to migrate to Python 3 anytime soon. If they were eager or willing to do that, the CPython developers wouldn't have extended the support lifetime of Python 2.7. You can try to optimize osc a whole lot, but as I understand it, a lot of effort has already gone into that.

I like OSC and its goals and everything, but sometimes you have to wonder exactly how much more time and effort you're willing to throw at a problem whose limiting factors are potentially out of scope for the project itself.

Ian Cordasco
