[openstack-dev] [devstack] openstack client slowness / client-as-a-service

John Griffith john.griffith8 at gmail.com
Tue Apr 19 21:06:04 UTC 2016


On Tue, Apr 19, 2016 at 12:17 PM, Monty Taylor <mordred at inaugust.com> wrote:

> On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:
>
>> On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:
>>
>>> On Tue, Apr 19, 2016 at 9:06 AM, Adam Young <ayoung at redhat.com> wrote:
>>>
>>> I wonder how much of that is Token caching.  In a typical CLI use pattern,
>>>> a new token is created each time a client is called, with no passing of a
>>>> token between services.  Using a session can greatly decrease the number
>>>> of round trips to Keystone.
>>>>
>>>>
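(For reference, a minimal keystoneauth1 sketch of the session reuse described
above; the endpoint and credentials are illustrative, not taken from this
thread.  One Session authenticates once, and later calls reuse the cached
token instead of making another Keystone round trip.)

    # Authenticate once; the Session caches the token and reuses it.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(auth_url='http://192.168.122.156:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Both requests ride on the same token; separate CLI invocations would
    # each have re-authenticated against Keystone first.
    servers = sess.get('/servers', endpoint_filter={'service_type': 'compute'})
    flavors = sess.get('/flavors', endpoint_filter={'service_type': 'compute'})
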
>>> Not as much as you think (or hope?).  Persistent token caching to disk
>>> will help some, at other expenses though.  Using --timing on OSC will show
>>> how much time the Identity auth round trip costs.
>>>
>>> I don't have current numbers; the last time I instrumented OSC there were
>>> significant load times for some modules, so we went a good distance to
>>> lazy-load as much as possible.
>>>
>>> What Dan sees WRT a persistent client process, though, is a combination of
>>> those two things: saving the Python loading and the Keystone round trips.
>>>
>>
>> The 1.5 sec overhead I eliminated doesn't actually have anything to do
>> with network round trips at all. Even if you turn off all network
>> services and just run 'openstack <somecommand>' and let it fail due
>> to inability to connect, it'll still have that 1.5 sec overhead. It
>> is all related to Python runtime loading and work done during module
>> importing.
>>
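(A quick way to confirm that module import dominates, assuming
python-openstackclient is installed locally: time nothing but the import of
its shell module, with no command dispatch and no network involved.)

    import time

    t0 = time.time()
    import openstackclient.shell  # drags in cliff, stevedore and client libs
    print("import took %.2fs" % (time.time() - t0))
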
>> e.g. run 'unstack.sh' and then compare the main openstack client:
>>
>> $ time /usr/bin/openstack server list
>> Discovering versions from the identity service failed when creating the
>> password plugin. Attempting to determine version from URL.
>> Unable to establish connection to http://192.168.122.156:5000/v2.0/tokens
>>
>> real    0m1.555s
>> user    0m1.407s
>> sys     0m0.147s
>>
>> Against my client-as-a-service version:
>>
>> $ time $HOME/bin/openstack server list
>> [Errno 111] Connection refused
>>
>> real    0m0.045s
>> user    0m0.029s
>> sys     0m0.016s
>>
>>
>> I'm sure there is scope for also optimizing network traffic / round
>> trips, but I didn't investigate that at all.
>>
>> I have (had!) a version of DevStack that put OSC into a subprocess and
>>> called it via pipes to do essentially what Dan suggests.  It saves some
>>> time, at the expense of complexity that may or may not be worth the
>>> effort.
>>>
>>
>> devstack doesn't really need any significant changes beyond making sure
>> $PATH points to the replacement client programs and that the server is
>> running - the latter could be automated as a launch-on-demand thing, which
>> would limit devstack changes.
>>
>> It doesn't technically need any devstack change at all - these replacement
>> clients could live in some 3rd party git repo, letting developers who want
>> the speed benefit put them in their $PATH before running devstack.
>>
>> One thing missing is any sort of transactional control in the I/O with the
>>> subprocess, ie, an EOT marker.  I planned to add a -0 option (think xargs)
>>> to handle that but it's still down a few slots on my priority list.  Error
>>> handling is another problem, and at this point (for DevStack purposes
>>> anyway) I stopped the investigation, concluding that reliability trumped a
>>> few seconds saved here.
>>>
>>
>> For I/O I simply replaced stdout + stderr with a new StringIO handle to
>> capture the data when running each command, and for error handling I
>> ensured the exit status was fed back and stderr was likewise printed.
>>
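(A rough Python 3 sketch of that capture loop, assuming openstackclient.shell
exposes a main() that accepts an argv list and returns the exit status;
contextlib.redirect_stdout/redirect_stderr stand in here for swapping
sys.stdout and sys.stderr by hand.)

    import contextlib
    import io

    from openstackclient import shell

    def run_osc(argv):
        """Run one OSC command in-process; return (exit_code, stdout, stderr)."""
        out, err = io.StringIO(), io.StringIO()
        with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
            try:
                rc = shell.main(argv)
            except SystemExit as exc:
                rc = exc.code or 0
        return rc, out.getvalue(), err.getvalue()

    # A long-lived server would loop over requests like this, feeding the
    # exit status and captured stderr back to the caller.
    rc, out_text, err_text = run_osc(['server', 'list'])
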
>> It is more than just a few seconds saved - almost 4 minutes, or
>> nearly 20% of the entire time to run stack.sh on my machine.
>>
>>
>> Ultimately, this is one of the two giant nails in the coffin of continuing
>>> to pursue CLIs in Python.  The other is co-installability (see the current
>>> thread on the ML for pain points).  Both are easily solved with
>>> native-code-generating languages.  Go and Rust are at the top of my
>>> personal list here...
>>>
>>
> Using entrypoints and plugins in Python is slow, so loading them is slow,
> as is loading all of the dependent libraries. Those were choices made for
> good reason back in the day, but I'm not convinced either is great anymore.
>
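(The plugin discovery cost is easy to see directly; a small sketch using
pkg_resources and the 'openstack.cli.extension' namespace that OSC plugins
register under.  This only enumerates the entrypoints - importing each plugin
and its dependencies costs more on top.)

    import time

    import pkg_resources

    t0 = time.time()
    eps = list(pkg_resources.iter_entry_points('openstack.cli.extension'))
    print("found %d plugin entrypoints in %.3fs" % (len(eps), time.time() - t0))
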
> A pluginless CLI that simply used REST calls rather than the
> python-clientlibs should be able to launch and get to the business of doing
> work in 0.2 seconds - counting time to load and parse clouds.yaml. That
> time could be reduced - the time spent in occ parsing vendor json files is
> not strictly necessary and certainly could go faster. It's not as fast as
> 0.004 seconds, but with very little effort it's 6x faster.
>
> Rather than ditching python for something like go, I'd rather put together
> a CLI with no plugins and that only depended on keystoneauth and
> os-client-config as libraries. No?
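
(A sketch of what such a pluginless "server list" could look like, assuming a
cloud named 'devstack' in clouds.yaml: os-client-config resolves the cloud
and hands back a keystoneauth session, and a plain REST GET does the rest.)

    import os_client_config

    # Load clouds.yaml, pick one cloud, and build an authenticated session.
    cloud = os_client_config.OpenStackConfig().get_one_cloud('devstack')
    sess = cloud.get_session()

    # One REST call against the compute endpoint from the service catalog.
    resp = sess.get('/servers', endpoint_filter={'service_type': 'compute'})
    for server in resp.json()['servers']:
        print(server['id'], server['name'])

(Roughly the shape of the 0.2-second path: keystoneauth for auth,
os-client-config for clouds.yaml, and nothing else imported.)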

Yes, it would certainly seem more pragmatic than just dumping everything we
have and chasing the shiny object.



