[openstack-dev] [oslo][neutron][rootwrap] Performance considerations, sudo?
Miguel Angel Ajo
majopela at redhat.com
Fri Mar 7 08:27:48 UTC 2014
I'm really happy to see that I'm not the only one concerned about
rootwrap performance. I'm reviewing the thread, and summarizing /
replying to multiple people on it:
* Thanks for pointing us to the previous thread about this topic:
* iproute commit f0124b0f0aa0e5b9288114eb8e6ff9b4f8c33ec8 upstream,
I have to check if it's on my system.
* Very interesting investigation about sudo:
http://www.sudo.ws/repos/sudo/rev/e9dc28c7db60 this is as important
as the bottleneck in rootwrap when you start having lots of interfaces.
* To your question: my times are only from neutron-dhcp-agent &
neutron-l3-agent start to completion; system boot time is excluded
from the measurement (that's <1 min).
* About the Linux networking folks not exposing API interfaces to avoid
lock-in: in the end they're already locked in with the cmd API
interface; if they made an API at the same level, it shouldn't be that
bad... but of course, it's not free...
* Yes, pypy start time is too slow, and I must definitely investigate
the RPython toolchain.
* Ideally, I agree that an automated py->C solution would be
the best from the OpenStack project point of view. Have you had any
experience using such a toolchain? Could you point me to some examples?
* shedskin seems to do this kind of translation, for a limited Python
subset, which would mean rewriting rootwrap's Python to accommodate it.
If no tool offers the translation we need, or if the result is slow:
I'm not against a rewrite of rootwrap in C/C++, if we have developers
on the project with C/C++ experience, especially related to security.
I have such experience, and I'm sure there are more around (even if
not all OpenStack developers talk C). But that doesn't exclude
maintaining a rootwrap in Python to foster innovation around the tool.
(here I agree with Vishvananda Ishaya)
I haven't tried cython, but I will check it in a few minutes.
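For reference, the usual Cython workflow is driven from a setup.py; a
minimal sketch would look like the following (the module name
rootwrap_core.py is hypothetical, just a stand-in for whatever hot path
we'd try compiling first):

```python
# setup.py -- minimal Cython build sketch; "rootwrap_core.py" is a
# hypothetical module name, not an actual rootwrap file.
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="rootwrap-cython-experiment",
    # cythonize() translates the Python source to C and builds it
    # into a native extension module (.so) with the same import name.
    ext_modules=cythonize("rootwrap_core.py"),
)
```

Built with `python setup.py build_ext --inplace`; note this speeds up
module execution but does nothing for interpreter startup time, which
is the cost we keep paying per rootwrap invocation.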
Thanks for pointing us to "ip netns exec" too; I wonder if that's
related to the iproute upstream change Rick Jones was talking about.
On 03/06/2014 09:31 AM, Miguel Angel Ajo wrote:
> On 03/06/2014 07:57 AM, IWAMOTO Toshihiro wrote:
>> At Wed, 05 Mar 2014 15:42:54 +0100,
>> Miguel Angel Ajo wrote:
>>> 3) I also find 10 minutes a long time to setup 192 networks/basic tenant
>>> structures, I wonder if that time could be reduced by conversion
>>> of system process calls into system library calls (I know we don't have
>>> libraries for iproute, iptables?, and many other things... but it's a
>>> problem that's probably worth looking at.)
>> Try benchmarking
>> $ sudo ip netns exec qfoobar /bin/echo
> You're totally right, that takes the same time as rootwrap itself. It's
> another point to think about from the performance point of view.
> An interesting read:
> ip netns does a lot of mounts around to simulate a normal environment,
> where a netns-aware application could avoid all this.
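As a sketch of what such a netns-aware application could do: enter the
namespace directly with the setns(2) syscall, skipping the fork/exec
and the mount setup that "ip netns exec" performs per command. The
helper below is illustrative only (requires root, and a namespace
created with "ip netns add"):

```python
# Sketch: switching into a named network namespace via setns(2),
# instead of spawning "ip netns exec" for every command.
import ctypes
import os

CLONE_NEWNET = 0x40000000  # nsflag for network namespaces, from <sched.h>

def enter_netns(name):
    """Move the calling process into the network namespace `name`.

    Requires root. "ip netns add" bind-mounts namespaces under
    /var/run/netns, which is where we pick up the fd to pass to setns().
    """
    fd = os.open("/var/run/netns/%s" % name, os.O_RDONLY)
    try:
        libc = ctypes.CDLL("libc.so.6", use_errno=True)
        if libc.setns(fd, CLONE_NEWNET) != 0:
            err = ctypes.get_errno()
            raise OSError(err, os.strerror(err))
    finally:
        os.close(fd)
```

Once inside, ordinary socket/netlink calls operate in that namespace,
so a long-lived agent could hop between namespaces without paying the
exec cost each time.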
>> Network namespace switching costs almost as much as a rootwrap
>> execution, IIRC.
>> Execution coalesceing is not enough in this case and we would need to
>> change how Neutron issues commands, IMO.
> Yes, one option could be to coalesce all calls that go into
> a namespace into a shell script and run this in the
> rootwrap > ip netns exec
> But we might find a mechanism to determine if some of the steps failed,
> and what was the result / output, something like failing line + result
> code. I'm not sure if we rely on stdout/stderr results at any time.
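To make the coalescing idea concrete, here is a hedged sketch (the
function names are mine, not Neutron's): build one shell script with a
marker echoed before each step, so a single rootwrap + "ip netns exec"
invocation runs the whole batch, and the last stderr marker plus the
exit code identify the failing line:

```python
# Sketch: batching several commands into one script so one namespace
# switch (and one rootwrap invocation) covers them all.
# Function names are illustrative, not Neutron's.
import subprocess

def build_batch_script(commands):
    """Build a shell script running the commands in order.

    "set -e" stops at the first failure; the marker echoed to stderr
    before each command identifies which step was running.
    """
    lines = ["set -e"]
    for i, cmd in enumerate(commands):
        lines.append("echo 'step %d: %s' >&2" % (i, cmd))
        lines.append(cmd)
    return "\n".join(lines) + "\n"

def run_batch_in_netns(netns, commands):
    """One ip-netns-exec invocation for the whole batch (needs root)."""
    script = build_batch_script(commands)
    return subprocess.call(
        ["ip", "netns", "exec", netns, "/bin/sh", "-c", script])
```

This only reports the first failure and assumes the commands don't
depend on each other's stdout; anything needing richer result handling
would still have to go back to per-command execution.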
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org