[openstack-dev] [Oslo] oslo.messaging on VMs
Julien Danjou
julien at danjou.info
Thu Mar 6 16:59:26 UTC 2014
On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote:
> I think there are valid reasons why we can consider an MQ approach for
> communicating with VM agents. The first obvious reason is scalability and
> performance. A user can ask the infrastructure to create 1000 VMs and
> configure them. With an HTTP approach this leads to a corresponding number
> of connections to a REST API service. Taking into account that a cloud has
> multiple clients, the load on the infrastructure will be pretty
> significant. You can address this by introducing load balancing for each
> service, but that significantly increases the management overhead and
> complexity of the OpenStack infrastructure.
Uh? I'm having trouble imagining any large OpenStack deployment without
load-balancing services. I don't think we ever designed OpenStack to run
without load-balancers at large scale.
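(Just to illustrate, a minimal round-robin HAProxy front-end for one API
service is only a few lines; the names, addresses and port below are
placeholders, not a recommended layout:

    frontend api_front
        mode http
        bind *:8774
        default_backend api_servers

    backend api_servers
        mode http
        balance roundrobin
        server api1 10.0.0.11:8774 check
        server api2 10.0.0.12:8774 check

so I don't buy the "significant management overhead" part.)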
> The second issue is connectivity and security. I think that in a typical
> production deployment VMs will not have access to OpenStack infrastructure
> services.
Why? Should they be different from other VMs? Are you running another
OpenStack cloud to run your OpenStack cloud?
> It is fine for core infrastructure services like Nova and Cinder, as they
> do not work directly with VMs. But it is a huge problem for VM-level
> services like Savanna, Heat, Trove and Murano, which have to be able to
> communicate with VMs. The solution here is to put an intermediary in place
> to create a controllable way of communicating. In the case of HTTP you
> will need a proxy with QoS and firewalls or policies to be able to
> restrict access to some specific URLs or services, and to throttle the
> number of connections and the bandwidth to protect services from DDoS
> attacks from the VM side.
These really sound like weak arguments. You probably already need
firewalls, QoS, and throttling for your users if you're deploying a cloud
and want to mitigate any kind of attack.
> In the case of MQ you can have a separate MQ broker for communication
> between the service and the VMs. Typical brokers have throttling
> mechanisms, so you can protect the service from DDoS attacks via MQ.
Yeah, and I'm pretty sure a lot of HTTP servers have throttling for
connection rate and/or bandwidth limitation. I'm not really convinced.
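As a rough illustration (assuming nginx in front of the agent-facing API;
the zone names, rate, path and backend below are placeholders I made up),
request and connection limiting is just a couple of directives:

    http {
        limit_req_zone  $binary_remote_addr zone=vm_agents:10m rate=10r/s;
        limit_conn_zone $binary_remote_addr zone=vm_conns:10m;

        server {
            listen 8080;
            location /agent-api/ {
                limit_req  zone=vm_agents burst=20 nodelay;
                limit_conn vm_conns 5;
                proxy_pass http://127.0.0.1:8774;
            }
        }
    }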
> Using different queues and even vhosts you can effectively segregate
> different tenants.
Sounds like you could do the same thing with the HTTP protocol.
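To be fair, the vhost-based segregation you describe is simple enough to
set up; roughly something like this (tenant, user and queue-prefix names
are made up), which restricts the agent user to resources matching its own
prefix inside a dedicated vhost:

    rabbitmqctl add_vhost /tenant-42
    rabbitmqctl add_user tenant-42-agent s3cret
    rabbitmqctl set_permissions -p /tenant-42 tenant-42-agent \
        '^agent\..*' '^agent\..*' '^agent\..*'

But per-tenant credentials and access policies give you an equivalent
separation over HTTP too.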
> For example, we use this approach in the Murano service when it is
> installed by Fuel. The default deployment configuration for Murano
> produced by Fuel has a separate RabbitMQ instance for Murano<->VM
> communications. This configuration does not expose OpenStack internals
> to the VMs, so even if someone broke into the Murano RabbitMQ instance,
> OpenStack itself would be unaffected and only the Murano part would be
> broken.
It really sounds like you have already settled on RabbitMQ as the solution,
so I'm not sure what or why you're asking in the first place. :)
Is there any problem with starting VMs on a network that is connected to
your internal network? You just have to do that, connect your application
to the/an internal message bus, and that's it.
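For what it's worth, a minimal sketch of an in-VM agent listening on such a
bus with oslo.messaging could look like this (the transport URL, topic,
server name and method are placeholders, not any project's actual
protocol):

    from oslo.config import cfg
    from oslo import messaging

    class AgentEndpoint(object):
        # Method exposed over RPC; the service side calls it by name.
        def execute(self, ctxt, command):
            # ... run the command on the VM ...
            return {'status': 'ok', 'command': command}

    # Dedicated broker/vhost for service<->VM traffic (placeholder URL).
    transport = messaging.get_transport(
        cfg.CONF,
        url='rabbit://agent:secret@broker.example.com:5672/agents')
    target = messaging.Target(topic='agent-tasks', server='vm-1234')
    server = messaging.get_rpc_server(transport, target, [AgentEndpoint()],
                                      executor='blocking')
    server.start()
    server.wait()

The service side would then use a messaging.RPCClient on the same transport
and topic and simply call() 'execute'.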
--
Julien Danjou
-- Free Software hacker
-- http://julien.danjou.info