[openstack-dev] [oslo][neutron] proxying oslo.messaging from management network into tenant network/VMs

Isaku Yamahata isaku.yamahata at gmail.com
Thu Apr 10 03:06:35 UTC 2014


Thanks for the explanation. Now I see how it works.
So the assumption is:
- VMs are required to be on a tenant network connected to the public
network so that they can reach the OpenStack public REST API

Is this a widely acceptable assumption? Is it acceptable for the NFV use case?
I'm not sure for now; I'd like to hear from others.
So far I've assumed that people may want VMs on a tenant network that is
not connected to the public network (an isolated tenant network).

Thanks,
Isaku Yamahata


On Thu, Apr 10, 2014 at 1:45 AM, Dmitry Mescheryakov <
dmescheryakov at mirantis.com> wrote:

> > I agree with those arguments.
> > But I don't see how the network-based agent approach works with Neutron
> > networking for now. Can you please elaborate on it?
>
> Here is the scheme of the network-based agent:
>
> server <-> MQ (Marconi) <-> agent
>
> As Doug said, Marconi exposes a REST API, just like any other OpenStack
> service. The service it provides is similar to that of the MQ solutions
> (RabbitMQ, Qpid, etc.). I.e., put very simply, there are two methods:
>  * put_message(queue_name, message_payload)
>  * get_message(queue_name)
>
> Multi-tenancy is provided by the same means as in the other OpenStack
> projects - the user supplies a Keystone token in the request, and it
> determines the tenant used.
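>
> To illustrate, here is roughly how the two methods above map onto
> Marconi's v1 REST API (a quick sketch, not production code; the endpoint
> URL, queue name and TTL are made up, and details may differ per
> deployment):
>
>     import json
>     import uuid
>
>     import requests
>
>     MARCONI = 'http://marconi.example.com:8888/v1'   # made-up endpoint
>     HEADERS = {
>         'X-Auth-Token': '<keystone token>',   # determines the tenant
>         'Client-ID': str(uuid.uuid4()),       # identifies this client
>         'Content-Type': 'application/json',
>     }
>
>     def put_message(queue_name, message_payload):
>         # POST one message with a TTL into the tenant's queue
>         body = json.dumps([{'ttl': 300, 'body': message_payload}])
>         requests.post('%s/queues/%s/messages' % (MARCONI, queue_name),
>                       headers=HEADERS, data=body)
>
>     def get_message(queue_name):
>         # Fetch pending messages from the queue, if any
>         resp = requests.get('%s/queues/%s/messages' % (MARCONI, queue_name),
>                             headers=HEADERS)
>         return resp.json() if resp.status_code == 200 else None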
>
> As for the network, a network-based agent requires a TCP connection
> to Marconi. I.e. you need the agent running on the VM to be able to
> connect to Marconi, but not vice versa. That does not sound like a
> harsh requirement.
>
> The standard MQ solutions like RabbitMQ and Qpid could actually be used
> here instead of Marconi, with one drawback - it is really hard to
> reliably implement tenant isolation with them.
>
> Thanks,
>
> Dmitry
>
> 2014-04-09 17:38 GMT+04:00 Isaku Yamahata <isaku.yamahata at gmail.com>:
> > Hello Dmitry. Thank you for reply.
> >
> > On Wed, Apr 09, 2014 at 03:19:10PM +0400,
> > Dmitry Mescheryakov <dmescheryakov at mirantis.com> wrote:
> >
> >> Hello Isaku,
> >>
> >> Thanks for sharing this! Right now in the Sahara project we are
> >> thinking of using Marconi as a means to communicate with VMs. It seems
> >> you are familiar with the discussions that have happened so far. If
> >> not, please see the links at the bottom of the UnifiedGuestAgent [1]
> >> wiki page. In short, we see Marconi's support for multi-tenancy as a
> >> huge advantage over other MQ solutions. Our agent is network-based, so
> >> tenant isolation is a real issue here. For clarity, here is the
> >> overview scheme of the network-based agent:
> >>
> >> server <-> MQ (Marconi) <-> agent
> >>
> >> All communication goes over the network. I've made a PoC of a Marconi
> >> driver for oslo.messaging; you can find it at [2].
> >
> > I'm not familiar with Marconi, so please enlighten me first.
> > How does MQ (Marconi) communicate with both the management network and
> > the tenant network?
> > Does it work with Neutron networking, not nova-network?
> >
> > Neutron networking isolates not only tenant networks from each other,
> > but also the management network, at L2. So OpenStack servers can't send
> > any packets to VMs, and VMs can't send any to OpenStack servers.
> > This is the reason why Neutron introduced the HTTP proxy for instance
> > metadata.
> > It is also the reason why I chose to introduce a new agent on the host.
> > If Marconi (or other projects like Sahara) have already solved those
> > issues, that's great.
> >
> >
> >> We also considered 'hypervisor-dependent' agents (as I called them in
> >> the initial thread) like the one you propose. They also provide tenant
> >> isolation. But the drawback is a _much_ bigger development cost and a
> >> more fragile and complex deployment.
> >>
> >> In the case of a network-based agent, all the code is:
> >>  * a Marconi driver for the RPC library (oslo.messaging)
> >>  * a thin client for the server to make calls
> >>  * a guest agent with a thin server side
> >> If you write your agent in Python, it will work on any OS with any
> >> host hypervisor.
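> >>
> >> To make that concrete, here is a minimal sketch of such a guest
> >> agent's main loop (the dispatch() handler and queue names are made up;
> >> put_message/get_message are the Marconi primitives described earlier):
> >>
> >>     import time
> >>
> >>     def agent_loop(requests_queue, results_queue):
> >>         # Poll the inbound queue, execute each request, and post the
> >>         # result back so the server gets its return value
> >>         while True:
> >>             msg = get_message(requests_queue)
> >>             if msg is None:
> >>                 time.sleep(1)
> >>                 continue
> >>             result = dispatch(msg)  # hypothetical request handler
> >>             put_message(results_queue, result)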
> >>
> >>
> >> For a hypervisor-dependent agent it becomes much more complex. You
> >> need one more component - a proxy-agent running on the Compute host -
> >> which makes deployment harder. You also need to support various
> >> transports for various hypervisors: virtio-serial for KVM, XenStore
> >> for Xen, something for Hyper-V, etc. Moreover, the guest OS must have
> >> drivers for these transports, and you will probably need to write
> >> different implementations for different OSes.
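> >>
> >> For example, with KVM the virtio-serial port typically appears in the
> >> guest as a character device; a guest agent might serve it roughly like
> >> this (the device name and the newline-framed JSON protocol are
> >> assumptions for illustration, and dispatch() is again hypothetical):
> >>
> >>     import json
> >>
> >>     # Path follows the usual /dev/virtio-ports/<name> convention,
> >>     # where <name> comes from the libvirt domain XML
> >>     DEV = '/dev/virtio-ports/org.openstack.guest_agent.0'
> >>
> >>     def serve():
> >>         with open(DEV, 'r+b', 0) as port:       # unbuffered
> >>             for line in port:                   # one JSON doc per line
> >>                 request = json.loads(line)
> >>                 reply = dispatch(request)       # hypothetical handler
> >>                 port.write(json.dumps(reply).encode() + b'\n')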
> >>
> >> Also you mention that in some cases a second proxy-agent is needed,
> >> and again in some cases only cast operations can be used. Using casts
> >> only is not an option for Sahara, as we do need feedback from the
> >> agent, and sometimes getting the return value is the main reason to
> >> make an RPC call.
> >>
> >> I didn't see a discussion in Neutron on which approach to use (if
> >> there was one, I missed it). I see the simplicity of the network-based
> >> agent as a huge advantage. Could you please clarify why you've picked
> >> a design that depends on the hypervisor?
> >
> > I agree with those arguments.
> > But I don't see how the network-based agent approach works with Neutron
> > networking for now. Can you please elaborate on it?
> >
> >
> > thanks,
> >
> >
> >> Thanks,
> >>
> >> Dmitry
> >>
> >>
> >> [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> >> [2] https://github.com/dmitrymex/oslo.messaging
> >>
> >> 2014-04-09 12:33 GMT+04:00 Isaku Yamahata <isaku.yamahata at gmail.com>:
> >> > Hello developers.
> >> >
> >> >
> >> > As discussed many times so far[1], there are many projects that need
> >> > to propagate RPC messages into VMs running on OpenStack - Neutron in
> >> > my case.
> >> >
> >> > My idea is to relay RPC messages from the management network into the
> >> > tenant network over a file-like object. By file-like object, I mean
> >> > virtio-serial, a unix domain socket, a unix pipe and so on.
> >> > I've written some code based on oslo.messaging[2][3] and documentation
> >> > on use cases.[4][5]
> >> > Only the file-like transport and message proxying would be in
> >> > oslo.messaging; the agent-side code wouldn't be a part of
> >> > oslo.messaging.
> >> >
> >> >
> >> > Use cases ([5] for more figures):
> >> > file-like object: virtio-serial, unix domain socket, unix pipe
> >> >
> >> >   server <-> AMQP <-> agent in host <-virtio-serial-> guest agent in VM
> >> >                       (one channel per VM)
> >> >
> >> >   server <-> AMQP <-> agent in host <-unix socket/pipe->
> >> >              agent in tenant network <-> guest agent in VM
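> >> >
> >> > A rough sketch of the proxying in the host agent for the second
> >> > scheme above (the socket path is made up, and amqp_messages stands
> >> > for an iterable of already-serialized messages from the AMQP side;
> >> > the real code is in [2][3]):
> >> >
> >> >     import socket
> >> >
> >> >     SOCK_PATH = '/var/run/rpc-proxy.sock'   # made-up path
> >> >
> >> >     def relay(amqp_messages):
> >> >         # Wait for the agent in the tenant network to connect
> >> >         listener = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
> >> >         listener.bind(SOCK_PATH)
> >> >         listener.listen(1)
> >> >         conn, _ = listener.accept()
> >> >         # Forward every RPC message from the management-side AMQP
> >> >         # bus into the tenant side, newline-framed
> >> >         for message in amqp_messages:
> >> >             conn.sendall(message + b'\n')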
> >> >
> >> >
> >> > So far there are security concerns about forwarding oslo.messaging
> >> > from the management network into the tenant network. One approach is
> >> > to allow only cast RPCs from the server to the guest agent in the VM,
> >> > so that the guest agent only receives messages and can't send
> >> > anything to the servers. With a unix pipe, it's write-only for the
> >> > server and read-only for the guest agent.
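> >> >
> >> > A minimal sketch of that unidirectional pipe idea (illustrative only;
> >> > in the real setup the two ends would live in the host agent and the
> >> > guest agent rather than in one forked process):
> >> >
> >> >     import os
> >> >
> >> >     read_fd, write_fd = os.pipe()
> >> >     if os.fork() == 0:
> >> >         # receiving side: close the write end, so it can only read
> >> >         os.close(write_fd)
> >> >         request = os.read(read_fd, 4096)   # casts arrive here
> >> >     else:
> >> >         # server side: close the read end, so it can only write;
> >> >         # there is no reply channel, matching cast-only semantics
> >> >         os.close(read_fd)
> >> >         os.write(write_fd, b'{"method": "update_status"}')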
> >> >
> >> >
> >> > Thoughts? Comments?
> >> >
> >> >
> >> > Details of the Neutron NFV use case[6]:
> >> > Neutron services so far typically run agents on the host; the agent
> >> > on the host receives RPCs from the neutron server and then executes
> >> > the necessary operations. Sometimes the agent on the host issues RPCs
> >> > to the neutron server periodically (e.g. status reports etc.).
> >> > It's desirable to virtualize such services as Network Function
> >> > Virtualization (NFV), i.e. make those features run in VMs. So it's
> >> > quite a natural approach to propagate those RPC messages into agents
> >> > in VMs.
> >> >
> >> >
> >> > [1] https://wiki.openstack.org/wiki/UnifiedGuestAgent
> >> > [2] https://review.openstack.org/#/c/77862/
> >> > [3] https://review.openstack.org/#/c/77863/
> >> > [4] https://blueprints.launchpad.net/oslo.messaging/+spec/message-proxy-server
> >> > [5] https://wiki.openstack.org/wiki/Oslo/blueprints/message-proxy-server
> >> > [6] https://blueprints.launchpad.net/neutron/+spec/adv-services-in-vms
> >> > --
> >> > Isaku Yamahata <isaku.yamahata at gmail.com>
> >>
> >
> > --
> > Isaku Yamahata <isaku.yamahata at gmail.com>
> >
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

