[openstack-dev] [neutron][rootwrap] Performance considerations, sudo?

Joe Gordon joe.gordon0 at gmail.com
Mon Mar 17 23:53:23 UTC 2014


On Mon, Mar 17, 2014 at 4:48 PM, Joe Gordon <joe.gordon0 at gmail.com> wrote:

>
>
>
> On Tue, Mar 11, 2014 at 1:46 AM, Miguel Angel Ajo Pelayo <
> mangelajo at redhat.com> wrote:
>
>>
>> I have included on the etherpad, the option to write a sudo
>> plugin (or several), specific for openstack.
>>
>>
>> And this is a test with Shed Skin; I suppose that in more complicated
>> dependency scenarios it should perform better.
>>
>> [majopela at redcylon tmp]$ cat <<EOF >test.py
>> > import sys
>> > print "hello world"
>> > sys.exit(0)
>> > EOF
>>
>> [majopela at redcylon tmp]$ time python test.py
>> hello world
>>
>> real    0m0.016s
>> user    0m0.015s
>> sys     0m0.001s
>>
>
>
> This looks very promising!
>
> A few gotchas:
>
> * Very limited library support:
>   https://code.google.com/p/shedskin/wiki/docs#Library_Limitations
>   * no logging
>   * no six
>   * no subprocess
>
> * no *args support:
>   https://code.google.com/p/shedskin/wiki/docs#Python_Subset_Restrictions
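The *args restriction matters because helpers like this show up all over wrapper code; a hypothetical sketch of the rewrite Shed Skin would force (names are illustrative, not from rootwrap):

```python
# Shed Skin compiles only a subset of Python: *args is rejected, so a
# variadic helper has to take an explicit list of arguments instead.

def join_cmd(args):
    # Shed Skin-friendly: plain list parameter, no *args unpacking.
    return ' '.join(args)

# The *args form -- def join_cmd(*args): ... -- would not compile.
print(join_cmd(['ip', 'netns', 'exec', 'qrouter-1']))
```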
>
> That being said, I did a quick comparison with great results:
>
> $ cat tmp.sh
> #!/usr/bin/env bash
> echo $0 $@
> ip a
>
> $ time ./tmp.sh foo bar > /dev/null
>
> real    0m0.009s
> user    0m0.003s
> sys     0m0.006s
>
>
>
> $ cat tmp.py
> #!/usr/bin/env python
> import os
> import sys
>
> print sys.argv
> print os.system("ip a")
>
> $ time ./tmp.py  foo bar > /dev/null
>
> min:
> real    0m0.016s
> user    0m0.004s
> sys     0m0.012s
>
> max:
> real    0m0.038s
> user    0m0.016s
> sys     0m0.020s
>
>
>
> $ shedskin tmp.py && make
>
>
> $ time ./tmp  foo bar > /dev/null
>
> real    0m0.010s
> user    0m0.007s
> sys     0m0.002s
>
>
For completeness, here is the auto-generated C++ code:
http://paste.openstack.org/show/73711/

>
>
> Based on these results I think a deeper dive into making rootwrap
> support shedskin is worthwhile.
>
>>
>>
>> [majopela at redcylon tmp]$ shedskin test.py
>> *** SHED SKIN Python-to-C++ Compiler 0.9.4 ***
>> Copyright 2005-2011 Mark Dufour; License GNU GPL version 3 (See LICENSE)
>>
>> [analyzing types..]
>> ********************************100%
>> [generating c++ code..]
>> [elapsed time: 1.59 seconds]
>> [majopela at redcylon tmp]$ make
>> g++  -O2 -march=native -Wno-deprecated  -I.
>> -I/usr/lib/python2.7/site-packages/shedskin/lib /tmp/test.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/sys.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/re.cpp
>> /usr/lib/python2.7/site-packages/shedskin/lib/builtin.cpp -lgc -lpcre  -o
>> test
>> [majopela at redcylon tmp]$ time ./test
>> hello world
>>
>> real    0m0.003s
>> user    0m0.000s
>> sys     0m0.002s
>>
>>
>> ----- Original Message -----
>> > We had this same issue with the dhcp-agent. Code was added that
>> > parallelized the initial sync here: https://review.openstack.org/#/c/28914/
>> > which made things a good bit faster if I remember correctly. Might be
>> > worth doing something similar for the l3-agent.
>> >
>> > Best,
>> >
>> > Aaron
>> >
>> >
>> > On Mon, Mar 10, 2014 at 5:07 PM, Joe Gordon < joe.gordon0 at gmail.com >
>> wrote:
>> >
>> > On Mon, Mar 10, 2014 at 3:57 PM, Joe Gordon < joe.gordon0 at gmail.com >
>> wrote:
>> >
>> >
>> >
>> > I looked into the python to C options and haven't found anything
>> > promising yet.
>> >
>> >
>> > I tried Cython and RPython on a trivial hello world app, but got
>> > similar startup times to standard python.
>> >
>> > The one thing that did work was adding a '-S' when starting python.
>> >
>> > -S Disable the import of the module site and the site-dependent
>> manipulations
>> > of sys.path that it entails.
>> >
>> > Using 'python -S' didn't appear to help in devstack
>> >
>> > #!/usr/bin/python -S
>> > # PBR Generated from u'console_scripts'
>> >
>> > import sys
>> > import site
>> > site.addsitedir('/mnt/stack/oslo.rootwrap/oslo/rootwrap')
>> >
>> > I am not sure if we can do that for rootwrap.
>> >
>> >
>> > jogo at dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> > hello world
>> >
>> > real 0m0.021s
>> > user 0m0.000s
>> > sys 0m0.020s
>> > jogo at dev:~/tmp/pypy-2.2.1-src$ time ./tmp-c
>> > hello world
>> >
>> > real 0m0.021s
>> > user 0m0.000s
>> > sys 0m0.020s
>> > jogo at dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> > hello world
>> >
>> > real 0m0.010s
>> > user 0m0.000s
>> > sys 0m0.008s
>> >
>> > jogo at dev:~/tmp/pypy-2.2.1-src$ time python -S ./tmp.py
>> > hello world
>> >
>> > real 0m0.010s
>> > user 0m0.000s
>> > sys 0m0.008s
>> >
>> >
>> >
>> > On Mon, Mar 10, 2014 at 3:26 PM, Miguel Angel Ajo Pelayo <
>> > mangelajo at redhat.com > wrote:
>> >
>> >
>> > Hi Carl, thank you, good idea.
>> >
>> > I started reviewing it, but I will do it more carefully tomorrow
>> morning.
>> >
>> >
>> >
>> > ----- Original Message -----
>> > > All,
>> > >
>> > > I was writing down a summary of all of this and decided to just do it
>> > > on an etherpad. Will you help me capture the big picture there? I'd
>> > > like to come up with some actions this week to try to address at least
>> > > part of the problem before Icehouse releases.
>> > >
>> > > https://etherpad.openstack.org/p/neutron-agent-exec-performance
>> > >
>> > > Carl
>> > >
>> > > On Mon, Mar 10, 2014 at 5:26 AM, Miguel Angel Ajo <
>> majopela at redhat.com >
>> > > wrote:
>> > > > Hi Yuri & Stephen, thanks a lot for the clarification.
>> > > >
>> > > > I'm not familiar with unix domain sockets at a low level, but I
>> > > > wonder if authentication could be achieved just with permissions
>> > > > (only users in group "neutron" or group "rootwrap" could access
>> > > > this service).
>> > > >
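For what it's worth, on Linux that check could be done with plain filesystem permissions on the socket path (chown/chmod to the rootwrap group), or by asking the kernel for the connecting peer's credentials via SO_PEERCRED; a sketch of the latter (Linux-specific, names illustrative):

```python
# Sketch: read the (pid, uid, gid) of the process on the other end of a
# connected AF_UNIX socket using the Linux SO_PEERCRED socket option,
# then decide whether that uid/gid is allowed to use the service.
import os
import socket
import struct

def peer_creds(conn):
    """Return (pid, uid, gid) of the peer of a connected AF_UNIX socket."""
    fmt = '3i'  # struct ucred: three native ints
    data = conn.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                           struct.calcsize(fmt))
    return struct.unpack(fmt, data)

# Demo with a socketpair: both ends belong to this very process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_creds(a)
print(pid, uid, gid)
a.close()
b.close()
```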
>> > > > I find it an interesting alternative to the other proposed
>> > > > solutions, but there are some challenges associated with it which
>> > > > could make it more complicated:
>> > > >
>> > > > 1) Access control, file system permission based or token based,
>> > > >
>> > > > 2) stdout/stderr/return encapsulation/forwarding to the caller,
>> > > > if we have a simple/fast RPC mechanism we can use, it's a matter
>> > > > of serializing a dictionary.
>> > > >
>> > > > 3) client side implementation for 1 + 2.
>> > > >
>> > > > 4) It would need to accept new domain socket connections in green
>> threads
>> > > > to
>> > > > avoid spawning a new process to handle a new connection.
>> > > >
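A minimal sketch of points 2) and 4) together, using threads as a stand-in for green threads and JSON for the dictionary serialization (all names hypothetical, no privilege handling shown):

```python
# Sketch: a daemon that accepts UNIX-socket connections concurrently
# (threads standing in for green threads), runs the requested command,
# and forwards stdout/stderr/returncode back as a serialized dict.
import json
import os
import socket
import subprocess
import tempfile
import threading

SOCK = os.path.join(tempfile.mkdtemp(), 'rootwrap.sock')

def handle(conn):
    cmd = json.loads(conn.recv(4096).decode())
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    reply = {'stdout': out.decode(), 'stderr': err.decode(),
             'returncode': p.returncode}
    conn.sendall(json.dumps(reply).encode())
    conn.close()

def serve(server):
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,)).start()

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK)
server.listen(5)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: one connect, one message out, one dict back.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCK)
client.sendall(json.dumps(['echo', 'hello']).encode())
result = json.loads(client.recv(65536).decode())
client.close()
print(result['returncode'], result['stdout'])
```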
>> > > > The advantages:
>> > > > * we wouldn't need to break the only-python-rule.
>> > > > * we don't need to rewrite/translate rootwrap.
>> > > >
>> > > > The disadvantages:
>> > > > * it needs changes on the client side (neutron + other projects).
>> > > >
>> > > >
>> > > > Cheers,
>> > > > Miguel Ángel.
>> > > >
>> > > >
>> > > >
>> > > > On 03/08/2014 07:09 AM, Yuriy Taraday wrote:
>> > > >>
>> > > >> On Fri, Mar 7, 2014 at 5:41 PM, Stephen Gran
>> > > >> < stephen.gran at theguardian.com <mailto:
>> stephen.gran at theguardian.com >>
>> > > >> wrote:
>> > > >>
>> > > >> Hi,
>> > > >>
>> > > >> Given that Yuriy says explicitly 'unix socket', I don't think he
>> > > >> means 'MQ' when he says 'RPC'. I think he just means a daemon
>> > > >> listening on a unix socket for execution requests. This seems like
>> > > >> a reasonably sensible idea to me.
>> > > >>
>> > > >>
>> > > >> Yes, you're right.
>> > > >>
>> > > >> On 07/03/14 12:52, Miguel Angel Ajo wrote:
>> > > >>
>> > > >>
>> > > >> I thought of this option, but didn't consider it, as it's somewhat
>> > > >> risky to expose an RPC endpoint executing privileged (even
>> > > >> filtered) commands.
>> > > >>
>> > > >>
>> > > >> The multiprocessing module has some means to do RPC securely over
>> > > >> UNIX sockets. It does this by passing a token along with messages.
>> > > >> It should be secure because with UNIX sockets we don't need
>> > > >> anything stronger, since MITM attacks are not possible.
>> > > >>
>> > > >> If I'm not wrong, once you have credentials for messaging, you can
>> > > >> send messages to any endpoint, even filtered ones; I somehow see
>> > > >> this as a higher-risk option.
>> > > >>
>> > > >>
>> > > >> As Stephen noted, I'm not talking about using MQ for RPC. Just some
>> > > >> local UNIX socket with very simple RPC over it.
>> > > >>
>> > > >> And btw, if we add RPC in the middle, it's possible that all those
>> > > >> system call delays increase, or don't decrease as much as would be
>> > > >> desirable.
>> > > >>
>> > > >>
>> > > >> Every call to rootwrap would require the following.
>> > > >>
>> > > >> Client side:
>> > > >> - new client socket;
>> > > >> - one message sent;
>> > > >> - one message received.
>> > > >>
>> > > >> Server side:
>> > > >> - accepting new connection;
>> > > >> - one message received;
>> > > >> - one fork-exec;
>> > > >> - one message sent.
>> > > >>
>> > > >> This looks way simpler than passing through sudo and rootwrap,
>> > > >> which requires three execs and a whole lot of configuration files
>> > > >> opened and parsed.
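One concrete way to get both the simple RPC and the token-based authentication described above is Python's multiprocessing.connection, which performs an HMAC handshake keyed on a shared authkey; a minimal sketch of the per-call sequence (server in a thread only for the demo):

```python
# Sketch: token-authenticated RPC over a local UNIX socket using
# multiprocessing.connection. Comments map each step to the call
# sequence described above.
import os
import tempfile
import threading
from multiprocessing.connection import Client, Listener

ADDR = os.path.join(tempfile.mkdtemp(), 'rpc.sock')
KEY = b'shared-secret'  # token both sides must know (HMAC handshake)

listener = Listener(ADDR, 'AF_UNIX', authkey=KEY)

def serve_one():
    conn = listener.accept()      # server: accepting new connection
    request = conn.recv()         # server: one message received
    conn.send({'echo': request})  # server: one message sent
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

client = Client(ADDR, 'AF_UNIX', authkey=KEY)  # client: new socket
client.send('ip a')                            # client: one message sent
reply = client.recv()                          # client: one message received
client.close()
t.join()
listener.close()
print(reply)
```

A wrong authkey on either side makes the handshake fail with AuthenticationError, which is the filtering hook for untrusted callers.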
>> > > >>
>> > > >> --
>> > > >>
>> > > >> Kind regards, Yuriy.
>> > > >>
>> > > >>
>> > > >> _______________________________________________
>> > > >> OpenStack-dev mailing list
>> > > >> OpenStack-dev at lists.openstack.org
>> > > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>> > > >>
>> > > >
>> > >
>> >
>>
>>
>
>