[openstack-dev] [oslo][neutron] proxying oslo.messaging from management network into tenant network/VMs

Dmitry Mescheryakov dmescheryakov at mirantis.com
Wed Apr 9 11:19:10 UTC 2014


Hello Isaku,

Thanks for sharing this! Right now in the Sahara project we are thinking
of using Marconi as a means to communicate with VMs. It seems you are
familiar with the discussions that have happened so far; if not, please
see the links at the bottom of the UnifiedGuestAgent [1] wiki page. In
short, we see Marconi's support for multi-tenancy as a huge advantage
over other MQ solutions. Our agent is network-based, so tenant isolation
is a real issue here. For clarity, here is the overview scheme of a
network-based agent:

server <-> MQ (Marconi) <-> agent

All communication goes over the network. I've made a PoC of the Marconi
driver for oslo.messaging; you can find it at [2].
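To illustrate why multi-tenancy matters for this scheme, here is a toy stand-in for the MQ hop in the diagram above. It is not Marconi's real API; the broker class, queue naming, and methods are all hypothetical, but they show the property we care about: every queue is scoped to a tenant, so one tenant's agent can never claim another tenant's messages.

```python
import queue

class TenantScopedBroker:
    """Toy stand-in for a multi-tenant MQ such as Marconi (API hypothetical)."""

    def __init__(self):
        self._queues = {}

    def _queue(self, tenant_id, name):
        # Each (tenant, queue-name) pair gets its own isolated queue.
        return self._queues.setdefault((tenant_id, name), queue.Queue())

    def post(self, tenant_id, name, message):
        self._queue(tenant_id, name).put(message)

    def claim(self, tenant_id, name):
        # Raises queue.Empty if this tenant's queue has no messages --
        # even if another tenant posted to a queue with the same name.
        return self._queue(tenant_id, name).get_nowait()

broker = TenantScopedBroker()
# The server posts a command for tenant "a"; only tenant "a"'s agent sees it.
broker.post("tenant-a", "agent", {"method": "restart_service"})
print(broker.claim("tenant-a", "agent")["method"])  # restart_service
```

A tenant "b" agent calling `broker.claim("tenant-b", "agent")` here would get `queue.Empty`, which is the isolation property that network-based agents need from the MQ layer.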


We also considered 'hypervisor-dependent' agents (as I called them in
the initial thread) like the one you propose. They also provide tenant
isolation, but the drawback is a _much_ bigger development cost and a
more fragile, complex deployment.

In the case of a network-based agent, all the code needed is:
 * a Marconi driver for the RPC library (oslo.messaging)
 * a thin client for the server to make calls
 * a guest agent with a thin server side
If you write your agent in Python, it will work on any OS with any
host hypervisor.
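The thin-client/thin-server split above can be sketched as follows. This is a minimal illustration, not oslo.messaging's actual API: the class names and the `write_file` handler are hypothetical, and the transport (Marconi in Sahara's plan) is abstracted down to a send callable and an in-memory outbox.

```python
import json

class ThinClient:
    """Server side: serializes a method call and ships it over the MQ."""

    def __init__(self, send):
        self._send = send

    def call(self, method, **kwargs):
        self._send(json.dumps({"method": method, "args": kwargs}))

class GuestAgent:
    """VM side: dispatches incoming messages to registered handlers."""

    def __init__(self):
        self._handlers = {}

    def register(self, name, fn):
        self._handlers[name] = fn

    def handle(self, raw):
        msg = json.loads(raw)
        return self._handlers[msg["method"]](**msg["args"])

# Wire the two together with an in-memory "queue" for demonstration.
outbox = []
client = ThinClient(outbox.append)
agent = GuestAgent()
agent.register("write_file",
               lambda path, data: "wrote %d bytes to %s" % (len(data), path))

client.call("write_file", path="/etc/hosts", data="127.0.0.1 localhost")
print(agent.handle(outbox.pop(0)))
```

Because everything is plain messages over the network, the same agent code runs unchanged on any guest OS and under any hypervisor.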


For a hypervisor-dependent agent it becomes much more complex. You need
one additional component - a proxy-agent running on the Compute host -
which makes deployment harder. You also need to support various
transports for various hypervisors: virtio-serial for KVM, XenStore
for Xen, something for Hyper-V, etc. Moreover, the guest OS must have
drivers for these transports, and you will probably need to write
different implementations for different OSes.

You also mention that in some cases a second proxy-agent is needed, and
in some cases only cast operations can be used. Cast-only is not an
option for Sahara: we do need feedback from the agent, and sometimes
getting the return value is the main reason to make an RPC call.
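For readers less familiar with the distinction: in oslo.messaging terms, cast is fire-and-forget while call blocks for the method's return value (RPCClient.cast() vs. RPCClient.call()). The toy pair below, with hypothetical names and in-memory queues standing in for the transport, shows why a cast-only channel rules out getting results back.

```python
import queue

# In-memory stand-ins for the request and reply channels.
request_q, reply_q = queue.Queue(), queue.Queue()
handlers = {"get_status": lambda: "ACTIVE"}

def agent_step():
    """Guest agent: handle one request, replying only when asked."""
    msg = request_q.get()
    result = handlers[msg["method"]]()
    if msg["reply_expected"]:
        reply_q.put(result)

def cast(method):
    # Fire-and-forget: the server never learns the outcome.
    request_q.put({"method": method, "reply_expected": False})

def call(method):
    # Request/response: the agent sends the return value back.
    request_q.put({"method": method, "reply_expected": True})

# cast: nothing comes back.
cast("get_status"); agent_step()
assert reply_q.empty()

# call: the server receives the agent's return value --
# the feedback Sahara needs.
call("get_status"); agent_step()
print(reply_q.get())  # ACTIVE
```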

I didn't see a discussion in Neutron on which approach to use (if there
was one, I missed it). I see the simplicity of the network-based agent
as a huge advantage. Could you please clarify why you've picked a
design that depends on the hypervisor?

Thanks,

Dmitry


[1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
[2] https://github.com/dmitrymex/oslo.messaging

2014-04-09 12:33 GMT+04:00 Isaku Yamahata <isaku.yamahata at gmail.com>:
> Hello developers.
>
>
> As discussed many times so far[1], there are many projects that need
> to propagate RPC messages into VMs running on OpenStack. Neutron in my case.
>
> My idea is to relay RPC messages from the management network into the
> tenant network over a file-like object. By file-like object, I mean
> virtio-serial, unix domain socket, unix pipe and so on.
> I've written some code based on oslo.messaging[2][3] and documentation
> on use cases.[4][5]
> Only the file-like transport and message proxying would be in oslo.messaging;
> the agent-side code wouldn't be a part of oslo.messaging.
>
>
> use cases:([5] for more figures)
> file-like object: virtio-serial, unix domain socket, unix pipe
>
>   server <-> AMQP <-> agent in host <-virtio serial-> guest agent in VM
>                       per VM
>
>   server <-> AMQP <-> agent in host <-unix socket/pipe->
>              agent in tenant network <-> guest agent in VM
>
>
> So far there are security concerns about forwarding oslo.messaging from
> the management network into the tenant network. One approach is to allow
> only cast RPCs from the server to the guest agent in the VM, so that the
> guest agent only receives messages and can't send anything to servers.
> With a unix pipe, it's write-only for the server and read-only for the
> guest agent.
>
>
> Thoughts? comments?
>
>
> Details of the Neutron NFV use case[6]:
> Neutron services so far typically run agents in the host; the host agent
> receives RPCs from the neutron server and then executes the necessary
> operations. Sometimes the agent in the host issues RPCs to the neutron
> server periodically (e.g. status reports).
> It's desirable to virtualize such services as Network Function
> Virtualization (NFV), i.e. make those features run in VMs. So it's quite
> a natural approach to propagate those RPC messages into agents in VMs.
>
>
> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> [2] https://review.openstack.org/#/c/77862/
> [3] https://review.openstack.org/#/c/77863/
> [4] https://blueprints.launchpad.net/oslo.messaging/+spec/message-proxy-server
> [5] https://wiki.openstack.org/wiki/Oslo/blueprints/message-proxy-server
> [6] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> --
> Isaku Yamahata <isaku.yamahata at gmail.com>
