[openstack-dev] [neutron] proposal to resolve a rootwrap problem for XenServer
Ihar Hrachyshka
ihrachys at redhat.com
Tue Nov 1 11:45:43 UTC 2016
Jianghua Wang <jianghua.wang at citrix.com> wrote:
> Hi Neutron guys,
>
> I’m trying to explain a problem with the XenServer rootwrap and give a
> proposal to resolve it. I need some input on how to proceed with this
> proposal: e.g. does it require a spec? Are there any concerns that need
> further discussion or clarification?
>
> Problem description:
> As we know, some neutron services need to run commands with root
> privileges, which is achieved by running the commands via rootwrap. To
> resolve the resulting performance issue, rootwrap was improved to support
> a daemon mode [1]. Either way, the commands run on the same node/VM that
> hosts the relevant neutron services.
>
> But since XenServer is a type-1 hypervisor, XenServer OpenStack behaves
> differently. Neutron’s compute agent, neutron-openvswitch-agent, needs to
> run commands in dom0, because the tenants’ interfaces are plugged into an
> integration OVS that lives in dom0. Currently the script
> https://github.com/openstack/neutron/blob/master/bin/neutron-rootwrap-xen-dom0
> is used as XenServer OpenStack’s rootwrap. This script creates a XenAPI
> session with dom0 and passes the commands to dom0 for the actual
> execution. Each command execution runs this script once, so it has a
> performance issue similar to the non-daemon mode of rootwrap on other
> hypervisors: for each command, the neutron-rootwrap-xen-dom0 script and
> the rootwrap configuration file have to be parsed. Furthermore, this
> rootwrap script creates a new XenAPI session for each command execution,
> and XenServer by default logs XenAPI session creation events. That causes
> frequent log file rotation, so other genuinely useful logs are lost.
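>
> To make the per-command overhead concrete, here is a rough, simplified
> sketch of that flow; the 'netwrap' plugin name, the [xenapi] config keys
> and the helper names are illustrative assumptions, not a verbatim copy of
> neutron-rootwrap-xen-dom0:
>
> #!/usr/bin/env python
> import json
> import sys
>
> from six.moves import configparser
> import XenAPI
>
>
> def run_in_dom0(config_file, user_args):
>     # The configuration file is re-parsed on every single invocation.
>     conf = configparser.ConfigParser()
>     conf.read(config_file)
>     url = conf.get('xenapi', 'xenapi_connection_url')
>     user = conf.get('xenapi', 'xenapi_connection_username')
>     password = conf.get('xenapi', 'xenapi_connection_password')
>
>     # A brand-new XenAPI session per command; this is what generates the
>     # session-creation log entries that rotate the XenServer logs.
>     session = XenAPI.Session(url)
>     session.login_with_password(user, password)
>     try:
>         host = session.xenapi.session.get_this_host(session.handle)
>         result = session.xenapi.host.call_plugin(
>             host, 'netwrap', 'run_command', {'cmd': json.dumps(user_args)})
>         return json.loads(result)
>     finally:
>         session.logout()
>
>
> if __name__ == '__main__':
>     print(run_in_dom0(sys.argv[1], sys.argv[2:]))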
>
> Proposal:
> oslo.rootwrap supports daemon mode for other hypervisors, but XenServer’s
> compute agent can’t use it because, again, it needs to run commands in
> dom0. However, we can follow that design and implement a daemon mode for
> XenServer. After a XenAPI session is created, dom0’s XAPI will accept
> command execution requests over that session and reply with the results,
> so logically we already have a daemon in dom0. We can therefore support
> daemon-mode rootwrap with the following design (see the sketch after this
> list):
> 1. Develop a daemon client module for XenServer: the agent service uses
> this client module to create a XenAPI session and keeps the session for
> the service’s whole lifetime.
> 2. Whenever a command needs to run in dom0, use that client to execute it
> there.
> This should resolve the issues mentioned above, as the client module is
> imported only once per agent service and a single session is reused for
> all commands. The prototype code [3] works well.
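>
> A minimal sketch of what such a client module might look like; the class
> and plugin names (XenAPIClient, 'netwrap') are illustrative assumptions
> and not the actual prototype code in [3]:
>
> import json
>
> import XenAPI
>
>
> class XenAPIClient(object):
>     """Holds a single XenAPI session for the agent's whole lifetime."""
>
>     def __init__(self, url, username, password):
>         self._session = XenAPI.Session(url)
>         self._session.login_with_password(username, password)
>         self._host = self._session.xenapi.session.get_this_host(
>             self._session.handle)
>
>     def run_command(self, cmd):
>         # Reuses the long-lived session, so no per-command session
>         # creation (and no session-creation log spam) happens.
>         result = self._session.xenapi.host.call_plugin(
>             self._host, 'netwrap', 'run_command',
>             {'cmd': json.dumps(cmd)})
>         return json.loads(result)
>
>     def close(self):
>         self._session.logout()
>
> For example, the agent could create one XenAPIClient at startup and call
> client.run_command(['ovs-vsctl', 'list-ports', 'br-int']) for each dom0
> command.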
>
> Any concerns or comments on the above proposal? And how can I proceed
> with the solution? We’ve filed an RFE bug [2], which is in
> wishlist/incomplete status. Per the neutron policy [4], it seems the
> neutron-drivers team needs to evaluate the RFE and determine whether a
> spec is required. Could anyone help evaluate this proposal and tell me
> how I should proceed? I’m also open and happy to receive any comments.
> Thanks very much.
>
> [1]
> https://specs.openstack.org/openstack/oslo-specs/specs/juno/rootwrap-daemon-mode.html
> [2] https://bugs.launchpad.net/neutron/+bug/1585510
> [3] prototype code: https://review.openstack.org/#/c/390931/
> [4] http://docs.openstack.org/developer/neutron/policies/blueprints.html
>
I suggested in the bug and the PoC review that neutron is not the right
project to solve this issue. oslo.rootwrap seems like a better place to
maintain privilege management code for OpenStack. Ideally, a solution would
be found within the scope of the library that would not require any
per-project changes.
I moved the bug to Opinion since I don’t believe it’s in scope for neutron;
I also added oslo.rootwrap to the list of affected projects to collect
feedback from oslo folks. Finally, I blocked the PoC patch with -2 until we
agree on how to scope the feature for neutron.
I hope it helps,
Ihar