[openstack-dev] [Oslo] [Marconi] oslo.messaging on VMs

Georgy Okrokvertskhov gokrokvertskhov at mirantis.com
Thu Mar 6 20:16:57 UTC 2014


As a result of this discussion, I think we also need to involve the Marconi
team. (I am sorry for changing the Subject.)

I am not very familiar with the Marconi project details, but at first glance it
looks like it could help set up a separate MQ infrastructure for agent <->
service communication.

I don't have any specific design suggestions, and I hope the Marconi team will
help us find the right approach.

It looks like the option based on the oslo.messaging framework now has a lower
priority due to security concerns.

Thanks
Georgy


On Thu, Mar 6, 2014 at 11:33 AM, Steven Dake <sdake at redhat.com> wrote:

> On 03/06/2014 10:24 AM, Daniel P. Berrange wrote:
>
>> On Thu, Mar 06, 2014 at 07:25:37PM +0400, Dmitry Mescheryakov wrote:
>>
>>> Hello folks,
>>>
>>> A number of OpenStack and related projects have a need to perform
>>> operations inside VMs running on OpenStack. A natural solution would
>>> be an agent running inside the VM and performing tasks.
>>>
>>> One of the key questions here is how to communicate with the agent. An
>>> idea which was discussed some time ago is to use oslo.messaging for
>>> that. It is an RPC framework, which is exactly what is needed. You can use
>>> different transports (RabbitMQ, Qpid, ZeroMQ) depending on your preference
>>> or the connectivity your OpenStack networking can provide. At the same time
>>> there are a number of things to consider, like networking, security,
>>> packaging, etc.
>>>
>>> So, messaging people, what is your opinion on that idea? I've already
>>> raised that question on the list [1], but it seems like not everybody who
>>> had something to say participated. So I am resending with a different
>>> subject. For example, yesterday we started discussing the security
>>> of the solution in the openstack-oslo channel. Doug Hellmann at the
>>> start raised two questions. The first: is it possible to separate different
>>> tenants or applications with credentials and ACLs so that they use
>>> different queues? My opinion is that it is possible using the RabbitMQ/Qpid
>>> management interface: for each application we can automatically create
>>> a new user with permission to access only its own queues. Another question
>>> raised by Doug is how to mitigate a DoS attack coming from one tenant
>>> so that it does not affect another tenant. The thing is, even though
>>> different applications will use different queues, they are all going to
>>> share a single broker.
>>>
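A minimal sketch of how that per-application provisioning might be scripted
against the RabbitMQ management HTTP API; the broker host, admin credentials
and the "app-<id>" queue-naming scheme below are illustrative assumptions,
not something proposed in the thread:

    # Sketch only: create a per-application RabbitMQ user whose permissions
    # match nothing but that application's own queues.
    import json
    import requests

    RABBIT_MGMT = "http://rabbit.example.net:15672/api"  # assumed broker host
    ADMIN_AUTH = ("admin", "admin-password")             # assumed admin creds
    HEADERS = {"content-type": "application/json"}

    def provision_app_user(app_id, password, vhost="%2F"):
        # The default vhost "/" must be URL-encoded as %2F in API paths.
        user = "app-%s" % app_id
        queue_regex = r"^app\.%s\..*" % app_id  # only this app's queue prefix

        # Create (or update) the user itself.
        requests.put("%s/users/%s" % (RABBIT_MGMT, user),
                     data=json.dumps({"password": password, "tags": ""}),
                     headers=HEADERS, auth=ADMIN_AUTH).raise_for_status()

        # Limit configure/write/read to queues matching the app's prefix.
        requests.put("%s/permissions/%s/%s" % (RABBIT_MGMT, vhost, user),
                     data=json.dumps({"configure": queue_regex,
                                      "write": queue_regex,
                                      "read": queue_regex}),
                     headers=HEADERS, auth=ADMIN_AUTH).raise_for_status()
        return user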
>> Looking at it from the security POV, I'd absolutely not want to
>> have any tenant VMs connected to the message bus that openstack
>> is using between its hosts. Even if you have security policies
>> in place, the inherent architectural risk of such a design is
>> just far too great. One small bug or misconfiguration and it
>> opens the door to a guest owning the entire cloud infrastructure.
>> Any channel between a guest and host should be isolated per guest,
>> so there's no possibility of guest messages finding their way out
>> to either the host or to other guests.
>>
>> If there was still a desire to use oslo.messaging, then at the
>> very least you'd want a completely isolated message bus for guest
>> comms, with no connection to the message bus used between hosts.
>> Ideally the message bus would be separate per guest too, which
>> means it ceases to be a bus really - just a point-to-point link
>> between the virt host + guest OS that happens to use the oslo.messaging
>> wire format.
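If someone still wanted to prototype that per-guest, point-to-point variant
while keeping the oslo.messaging wire format, the guest-side RPC server might
look roughly like the sketch below; the transport URL, topic scheme and
endpoint method are assumptions made up for illustration:

    # Sketch only: an RPC server serving exactly one guest, bound to a broker
    # (or at least a vhost) that carries no other guest's traffic and none of
    # the infrastructure traffic.
    from oslo.config import cfg
    from oslo import messaging

    GUEST_UUID = "3c1f8e52-0000-0000-0000-000000000000"  # illustrative

    class GuestEndpoint(object):
        def run_command(self, ctxt, command):
            # Whatever in-guest operation the agent actually supports.
            return {"status": "ok", "command": command}

    # A dedicated URL per guest: messages can never cross over to other
    # guests or to the bus OpenStack services use between hosts.
    transport = messaging.get_transport(
        cfg.CONF,
        url="rabbit://agent:secret@10.0.0.5:5672/guest-%s" % GUEST_UUID)

    target = messaging.Target(topic="agent.%s" % GUEST_UUID, server=GUEST_UUID)
    server = messaging.get_rpc_server(transport, target, [GuestEndpoint()],
                                      executor="blocking")
    server.start()
    server.wait()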
>>
>> Regards,
>> Daniel
>>
> I agree and have raised this in the past.
>
> IMO oslo.messaging is a complete nonstarter for guest communication
> because of security concerns.
>
> We do not want guests communicating on the same message bus as
> infrastructure.  The response to that was "well just have all the guests
> communicate on their own unique messaging server infrastructure".  The
> downside of this is that one guest's activity could damage a different guest
> because of the lack of isolation and the way message buses work.
>  The only workable solution which ensures security is a unique message bus
> per guest - which means a unique daemon per guest.  Surely there has to be
> a better way.
>
> The idea of isolating guests on a user basis, but allowing them to all
> exchange messages on one topic doesn't make logical sense to me.  I just
> don't think it's possible, unless somehow RPC delivery were changed to
> deliver credentials enforced by the RPC server in addition to calling
> messages.  Then some type of credential management would need to be done
> for each guest in the infrastructure wishing to use the shared message bus.
>
> The requirement of an oslo.messaging solution for a shared agent is that the
> agent would only be able to listen and send messages directed towards it
> (point to point) but would be able to publish messages to a topic for
> server consumption (the agent service, which may be integrated into other
> projects).  This way any number of shared agents could communicate to one
> agent service, but those agents would be isolated from one another.
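A sketch of that access pattern, under the (so far unsolved) assumption that
broker ACLs could actually enforce it; the topic names and agent identity are
invented for illustration:

    # Sketch only: the agent may publish to one shared topic owned by the
    # agent service, but may consume only from a topic scoped to itself.
    from oslo.config import cfg
    from oslo import messaging

    AGENT_ID = "agent-42"  # illustrative per-guest identity

    transport = messaging.get_transport(cfg.CONF)

    # Outbound: every agent casts to the same service-side topic.
    service_target = messaging.Target(topic="guest-agent-service")
    client = messaging.RPCClient(transport, service_target)
    client.cast({}, "report_status", agent_id=AGENT_ID, status="ready")

    # Inbound: this agent listens only on a topic scoped to itself; the open
    # question above is whether the broker can be made to forbid it from
    # listening anywhere else.
    my_target = messaging.Target(topic="guest-agent.%s" % AGENT_ID,
                                 server=AGENT_ID)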
>
> Perhaps user credentials could be passed as well in the delivery of each
> RPC message, but that means putting user credentials in the VM to start the
> communication.  Bootstrapping seems like a second obvious problem with this
> model.
>
> I prefer a point-to-point model, much as the metadata service works today.
>  Although oslo.messaging is a really nice framework (I know, I just ported
> heat to oslo.messaging!) it doesn't fit this problem well because of the
> security implications.
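For comparison, a sketch of what that metadata-service-style, point-to-point
alternative could look like from the guest side; the endpoint URL and task
payload are purely hypothetical, nothing like this exists today:

    # Sketch only: the agent polls an HTTPS endpoint that answers only for
    # the instance making the request, so there is no shared bus at all.
    import json
    import time
    import requests

    AGENT_API = "https://169.254.169.254/agent/v1"  # hypothetical endpoint

    def handle(task):
        # Placeholder for whatever operations the agent supports.
        return {"status": "done", "task": task.get("name")}

    def poll_for_tasks(interval=10):
        while True:
            resp = requests.get("%s/tasks" % AGENT_API, timeout=5)
            if resp.status_code == 200:
                for task in resp.json().get("tasks", []):
                    requests.post("%s/tasks/%s/result" % (AGENT_API, task["id"]),
                                  data=json.dumps(handle(task)),
                                  headers={"content-type": "application/json"},
                                  timeout=5)
            time.sleep(interval)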
>
> Regards
> -steve
>
>
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Georgy Okrokvertskhov
Architect,
OpenStack Platform Products,
Mirantis
http://www.mirantis.com
Tel. +1 650 963 9828
Mob. +1 650 996 3284