<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On 20 April 2016 at 04:17, Monty Taylor <span dir="ltr"><<a href="mailto:mordred@inaugust.com" target="_blank">mordred@inaugust.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">On 04/19/2016 10:16 AM, Daniel P. Berrange wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On Tue, Apr 19, 2016 at 09:57:56AM -0500, Dean Troyer wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
On Tue, Apr 19, 2016 at 9:06 AM, Adam Young <<a href="mailto:ayoung@redhat.com" target="_blank">ayoung@redhat.com</a>> wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
I wonder how much of that is Token caching. In a typical CLI use pattern,<br>
a new token is created each time a client is called, with no passing of a<br>
token between services. Using a session can greatly decrease the number of<br>
round trips to Keystone.<br>
<br>
</blockquote>
<br>
Not as much as you think (or hope?). Persistent token caching to disk will<br>
help some, though at other expenses. Using --timing on OSC will show how<br>
much time the Identity auth round trip costs.<br>
<br>
I don't have current numbers, but the last time I instrumented OSC there were<br>
significant load times for some modules, so we went a good distance to<br>
lazy-load as much as possible.<br>
<br>
What Dan sees WRT a persistent client process, though, is a combination of<br>
those two things: saving the Python loading and the Keystone round trips.<br>
</blockquote>
<br>
The 1.5sec overhead I eliminated doesn't actually have anything to do<br>
with network round trips at all. Even if you turn off all network<br>
services and just run 'openstack <somecommand>' and let it fail due<br>
to inability to connect it'll still have that 1.5 sec overhead. It<br>
is all related to python runtime loading and work done during module<br>
importing.<br>
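To see this kind of import-dominated startup cost for yourself, a minimal sketch (stdlib only; the module names below are just stand-ins for whichever heavy client package you want to measure, and `python -X importtime` gives a more detailed per-module breakdown):<br>

```python
# Rough sketch: measure wall-clock time spent importing a module, to show
# startup cost that has nothing to do with network round trips.
import importlib
import sys
import time

def time_import(module_name):
    """Return seconds spent importing module_name (roughly cold)."""
    # Drop any cached top-level copy so the import actually does work.
    # (Submodules may stay cached, so this is an approximation.)
    sys.modules.pop(module_name, None)
    start = time.perf_counter()
    importlib.import_module(module_name)
    return time.perf_counter() - start

if __name__ == "__main__":
    for name in ("json", "argparse", "decimal"):
        print("%-10s %.4fs" % (name, time_import(name)))
```

For a real client the sum over its whole dependency tree is what shows up as the ~1.5s before any request is sent.<br>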
<br>
e.g. run 'unstack.sh' and then compare the main openstack client:<br>
<br>
$ time /usr/bin/openstack server list<br>
Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL.<br>
Unable to establish connection to <a href="http://192.168.122.156:5000/v2.0/tokens" rel="noreferrer" target="_blank">http://192.168.122.156:5000/v2.0/tokens</a><br>
<br>
real 0m1.555s<br>
user 0m1.407s<br>
sys 0m0.147s<br>
<br>
Against my client-as-a-service version:<br>
<br>
$ time $HOME/bin/openstack server list<br>
[Errno 111] Connection refused<br>
<br>
real 0m0.045s<br>
user 0m0.029s<br>
sys 0m0.016s<br>
<br>
<br>
I'm sure there is scope for also optimizing network traffic / round<br>
trips, but I didn't investigate that at all.<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
I have (had!) a version of DevStack that put OSC into a subprocess and<br>
called it via pipes to do essentially what Dan suggests. It saves some<br>
time, at the expense of complexity that may or may not be worth the effort.<br>
</blockquote>
<br>
devstack doesn't really need any significant changes beyond making<br>
sure $PATH points to the replacement client programs and that the<br>
server is running - the latter could be automated as a launch-on-demand<br>
thing, which would limit devstack changes.<br>
<br>
It doesn't technically need any devstack change at all - these<br>
replacement clients could simply be put in some third-party git repo<br>
and let developers who want the speed benefit simply put them in<br>
their $PATH before running devstack.<br>
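The shape of such a replacement client is simple: a persistent server that has already paid the import cost, plus a thin forwarder. A minimal sketch (the socket path, JSON framing, and handler are all illustrative, not taken from the actual replacement clients):<br>

```python
# Client-as-a-service sketch: a long-lived server process accepts argv
# over a Unix socket and a thin client forwards one command per call.
import json
import os
import socket

SOCKET_PATH = "/tmp/osc-service.sock"  # hypothetical location

def serve_one(handler):
    """Accept a single request, run handler(argv), send back the result."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        os.unlink(SOCKET_PATH)  # clear a stale socket from a prior run
    except FileNotFoundError:
        pass
    srv.bind(SOCKET_PATH)
    srv.listen(1)
    conn, _ = srv.accept()
    argv = json.loads(conn.recv(65536).decode())
    conn.sendall(json.dumps(handler(argv)).encode())
    conn.close()
    srv.close()

def call(argv):
    """Thin client: connect, send argv, return the decoded reply."""
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCKET_PATH)
    c.sendall(json.dumps(argv).encode())
    reply = json.loads(c.recv(65536).decode())
    c.close()
    return reply
```

The thin client imports almost nothing, which is where the 0.045s-vs-1.5s difference comes from.<br>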
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
One thing missing is any sort of transactional control in the I/O with the<br>
subprocess, i.e., an EOT marker. I planned to add a -0 option (think xargs)<br>
to handle that but it's still down a few slots on my priority list. Error<br>
handling is another problem, and at this point (for DevStack purposes<br>
anyway) I stopped the investigation, concluding that reliability trumped a<br>
few seconds saved here.<br>
</blockquote>
<br>
For I/O I simply replaced stdout + stderr with a new StringIO handle to<br>
capture the data when running each command, and for error handling I<br>
ensured the exit status was fed back and stderr likewise printed.<br>
<br>
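That capture pattern can be sketched in a few lines (here `run_command` is a stand-in for whatever callable executes the client command, not a real OSC entry point):<br>

```python
# Run a command callable with stdout/stderr swapped for StringIO handles,
# returning (exit_status, stdout_text, stderr_text).
import contextlib
import io

def run_captured(run_command, argv):
    out, err = io.StringIO(), io.StringIO()
    with contextlib.redirect_stdout(out), contextlib.redirect_stderr(err):
        try:
            status = run_command(argv) or 0
        except SystemExit as e:  # CLIs often finish via sys.exit()
            status = e.code or 0
    return status, out.getvalue(), err.getvalue()
```

The server can then print the captured stderr and exit with the fed-back status so the caller sees exactly what a standalone client would have produced.<br>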
It is more than just a few seconds saved - almost 4 minutes, or<br>
nearly 20% of the entire time to run stack.sh on my machine.<br>
<br>
<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Ultimately, this is one of the two giant nails in the coffin of continuing<br>
to pursue CLIs in Python. The other is co-installability. (See that<br>
current thread on the ML for pain points). Both are easily solved with<br>
native-code-generating languages. Go and Rust are at the top of my<br>
personal list here...<br>
</blockquote></blockquote>
<br></div></div>
Using entrypoints and plugins in Python is slow: discovering them is slow, as is loading all of the dependent libraries. Those were choices made for good reason back in the day, but I'm not convinced either is great anymore.<br>
<br>
A pluginless CLI that simply used REST calls rather than the python-clientlibs should be able to launch and get to the business of doing work in 0.2 seconds - counting time to load and parse clouds.yaml. That time could be reduced - the time spent in occ parsing vendor json files is not strictly necessary and certainly could go faster. It's not as fast as 0.004 seconds, but with very little effort it's 6x faster.<br>
<br>
Rather than ditching Python for something like Go, I'd rather put together a CLI with no plugins that depends only on keystoneauth and os-client-config as libraries. No?<div class=""><div class="h5"><br></div></div></blockquote><div><br>I can feel Dean banging his head from here :) <br><br></div><div>If you extend this because all the APIs are subtly different, you end up with the SDK, which there has been talk of adopting in OSC since its inception. If you are happy dealing with the APIs directly yourself, have a look at os-http[1], which depends only on os-client-config and keystoneauth. <br><br></div><div><br>[1] <a href="http://www.jamielennox.net/blog/2016/04/12/os-http/">http://www.jamielennox.net/blog/2016/04/12/os-http/</a><br></div><div> <br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div class=""><div class="h5">
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>