[openstack-dev] Python overhead for rootwrap

John Garbutt john at johngarbutt.com
Mon Jul 29 09:38:24 UTC 2013


> Joe Gordon wrote:
> time python -c "print 'test'"

Is this a fair test? I assume we don't need to compile rootwrap on
every invocation, since the bytecode should be cached after the first run.

Having said that, I believe you that there is real overhead in starting Python.
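
For what it is worth, here is a throwaway way to get a feel for that
overhead (nothing to do with the rootwrap code itself; the module list is
just a guess at the sort of thing rootwrap pulls in):

import subprocess
import time

def time_exec(args, n=20):
    # Average wall-clock time to exec a fresh interpreter, in milliseconds.
    start = time.time()
    for _ in range(n):
        subprocess.check_call(args)
    return (time.time() - start) / n * 1000

# Bare interpreter vs. one importing a few modules of the kind rootwrap needs.
print("bare:    %.1f ms" % time_exec(["python", "-c", "pass"]))
print("imports: %.1f ms" % time_exec(
    ["python", "-c", "import ConfigParser, logging, subprocess"]))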

>>> Mike Wilson wrote:
>>>> In my opinion:
>>>>
>>>> 1. Stop using rootwrap completely and get strong argument checking
>>>> support into sudo (regex).

It seems like this is hard, and it might pull us away from the
finer-grained control we would like.

Rewriting rootwrap in C (option 4?) might work, but it would be a
minefield of string handling errors in the filters.

I tend to agree that aggregating all of the calls to rootwrap
(option 3) may be impractical:
> Sean Dague wrote:
> The reason there are 20 different call outs is that they aren't all in the
> same place. There are phases that happen here, and different kind of errors
> needed. I'm skeptical that you could push it all into one place.

However, it seems like the quickest way to reduce _some_ of the impact.

Maybe just have Python "commandlets", like the filters (Python code
that runs as root), that chain a set of shell requests, with the input
restricted by the filters in the usual way. I do worry that this
encourages larger chunks of code running as root, but that is
something we should be able to avoid.
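
To make that concrete, I am imagining something like the following
(purely illustrative, none of these names exist today): one rootwrap
exec runs a small commandlet that sequences the shell calls nova would
otherwise make via several separate rootwrap invocations, with the
arguments still constrained by whichever filter matched.

import subprocess
import sys

def plug_vif(device, bridge):
    # The commandlet only sequences commands the filters would already
    # allow; argument checking still happens before we get this far.
    subprocess.check_call(["ip", "link", "set", device, "up"])
    subprocess.check_call(["brctl", "addif", bridge, device])
    subprocess.check_call(["ip", "link", "set", bridge, "up"])

if __name__ == "__main__":
    plug_vif(sys.argv[1], sys.argv[2])

That way we pay the interpreter start-up cost once per logical
operation rather than once per shell command.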

>>>> 2. Some sort of long lived rootwrap process, either forked by the
>>>> service that want's to shell out or a general purpose rootwrapd type
>>>> thing.

I am not sure we have explored this sort of option fully.

In general, it creates interesting permissions issues around the IPC,
and obviously increases the amount of memory we consume.

I am assuming that running rootwrap as another RPC-consuming service,
running as root on each host, would perform worse than starting Python
lots of times. I am also assuming that we could get the messaging
security tight enough for this to be acceptable.

To improve performance, what about some kind of named-pipe-based "same
host" RPC driver to reduce the overhead? The service.hostname queue
could have a "local" equivalent that could be used as a quick path,
where it is enabled. I am assuming we could get such named pipes
secured by appropriate groups. As an example, a rootwrap process (or
something specific like "nova-compute-rootwrap") might only listen on a
local route, but you could imagine things like nova-network and
nova-compute using the local fast path as well, while still listening
on the existing remote queues.
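
Something like this is what I have in mind for the local end (a very
rough sketch: a Unix domain socket standing in for the named pipe, the
path and group name made up, and the filter matching elided):

import grp
import json
import os
import socket
import subprocess

SOCKET_PATH = "/var/run/nova/rootwrap.sock"  # hypothetical path

def serve():
    # Long-lived process running as root; the socket is root:nova 0660,
    # so only the local nova services can talk to it.
    if os.path.exists(SOCKET_PATH):
        os.unlink(SOCKET_PATH)
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(SOCKET_PATH)
    os.chown(SOCKET_PATH, 0, grp.getgrnam("nova").gr_gid)
    os.chmod(SOCKET_PATH, 0o660)
    server.listen(5)
    while True:
        conn, _ = server.accept()
        request = json.loads(conn.recv(65536).decode())
        # The usual rootwrap filter matching would go here, so we only
        # run commands an existing filter explicitly allows.
        proc = subprocess.Popen(request["cmd"], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        out, err = proc.communicate()
        conn.sendall(json.dumps({"rc": proc.returncode,
                                 "stdout": out.decode(),
                                 "stderr": err.decode()}).encode())
        conn.close()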

It's not clear to me that named pipes would be any more performant than
starting lots of Python processes. Has anyone tried that? My gut
feeling is that we would need to test it first.
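
A quick and dirty comparison could look like this (assuming a local
socket round trip is a fair proxy for the named-pipe path):

import os
import socket
import subprocess
import time

N = 50
parent_sock, child_sock = socket.socketpair()

if os.fork() == 0:
    # Child: stands in for a long-lived privileged helper, just echoes.
    parent_sock.close()
    while True:
        data = child_sock.recv(1024)
        if not data:
            os._exit(0)
        child_sock.sendall(data)

child_sock.close()

start = time.time()
for _ in range(N):
    parent_sock.sendall(b"ping")
    parent_sock.recv(1024)
socket_time = time.time() - start

start = time.time()
for _ in range(N):
    subprocess.check_call(["python", "-c", "pass"])
exec_time = time.time() - start

parent_sock.close()
print("round trip: %.2f ms   exec: %.2f ms"
      % (socket_time / N * 1000, exec_time / N * 1000))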

John


