[openstack-dev] [Oslo] oslo.messaging on VMs
gokrokvertskhov at mirantis.com
Thu Mar 6 18:00:59 UTC 2014
On Thu, Mar 6, 2014 at 8:59 AM, Julien Danjou <julien at danjou.info> wrote:
> On Thu, Mar 06 2014, Georgy Okrokvertskhov wrote:
> > I think there are valid reasons why we can consider an MQ approach for
> > communicating with VM agents. The first obvious reason is scalability and performance.
> > User can ask infrastructure to create 1000 VMs and configure them. With
> > HTTP approach it will lead to a corresponding number of connections to a
> > REST API service. Taking into account that cloud has multiple clients the
> > load on infrastructure will be pretty significant. You can address this
> > with introducing Load Balancing for each service, but it will
> > increase management overhead and complexity of OpenStack infrastructure.
> Uh? I'm having trouble imagining any large OpenStack deployment without
> load-balancing services. I don't think we ever designed OpenStack to run
> without load-balancers at large scale.
Not all services require load balancer instances. It makes sense to use a LB
for API services, but even in Nova there are components which use MQ RPC for
communication, and one doesn't need to put them behind a LB as they scale
naturally just by consuming from MQ concurrently. I believe the change to MQ
RPC was done exactly to address the scalability problems of internal services.
I agree that LBs belong in a production-grade deployment, but this solution is
not a silver bullet and has a lot of limitations and adds overall complexity.
> > The second issue is connectivity and security. I think that in a typical
> > production deployment VMs will not have access to OpenStack
> > infrastructure services.
> Why? Should they be different than other VM? Are you running another
> OpenStack cloud to run your OpenStack cloud?
There are use cases and security requirements that usually enforce very
limited access to OpenStack infrastructure components. As cloud admins do not
control the workloads on VMs, there is a significant security risk of being
attacked from a VM. The common requirement we see in production deployments is
to enable SSL for everything, including MySQL, MQ and nova.
I also would like to highlight that even Nova/Neutron, when working with
cloud-init, enable access to the metadata service only temporarily, by
managing routes on the VM. So for design purposes it is better to assume that
there will be no access to OpenStack services from the VM side, and if you
need it, you will have to configure it explicitly.
> > It is fine for core infrastructure services like
> > Nova and Cinder as they do not work directly with VM. But it makes a huge
> > problem for VM level services like Savanna, Heat, Trove and Murano which
> > have to be able to communicate with VMs. The solution here is to put an
> > intermediary to create a controllable communication channel. In the case
> > of HTTP you will need a proxy with QoS and firewall policies, to be
> > able to restrict access to specific URLs or services, and to throttle
> > the number of connections and the bandwidth to protect services from DDoS
> > attacks from the VM side.
> This really sounds like weak arguments. You probably already do need
> firewall, QoS, and throttling for your users if you're deploying a cloud
> and want to mitigate any kind of attack.
I don't dispute the existence of such components in an OpenStack deployment.
I am just pointing out that with an increasing number of services one will
have to manage the complexity of such a configuration. Taking into account the
number of possible Neutron configurations, the possibility of overlapping
subnets in virtual networks, and the existence of fully private networks which
are not attached through a router to an external network, connectivity and
access control look like a genuinely complex task which will be a headache for
cloud admins and devops.
> > In case of MQ usage you can have a separate MQ broker for communication
> > between the service and VMs. Typical brokers have throttling mechanisms,
> > so one can protect the service from DDoS attacks via MQ.
> Yeah and I'm pretty sure a lot of HTTP servers have throttling for
> connection rate and/or bandwidth limitation. I'm not really convinced.
Yes, some of them do, and you will need to configure them properly.
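Whichever transport wins, the throttling itself is usually some variant of a token bucket. A toy stdlib sketch (the class name, rates and numbers are all made up for illustration, not taken from any broker's implementation):

```python
import time

# Toy token-bucket limiter: the kind of throttling a broker or HTTP
# front end applies per client/connection.
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens refilled per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self, cost=1.0):
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate=10, capacity=5)   # burst of 5, then 10 msg/s
# A misbehaving VM firing 100 messages back-to-back only gets the
# initial burst through; the rest are rejected (or would be queued).
allowed = sum(bucket.allow() for _ in range(100))
print(allowed)
```

The point is only that this knob exists on both sides of the MQ-vs-HTTP debate and has to be sized and configured either way.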
> > Using different queues and even vhosts you can effectively segregate
> > different tenants.
> Sounds like you could do the same thing with the HTTP protocol.
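Perhaps, but with AMQP the segregation falls out of the transport URL itself. A sketch of what I mean (the host, credentials and the "tenant_" vhost prefix are invented for the example; only the URL shape, with the vhost as the path component, is the standard rabbit transport URL format):

```python
# Sketch only: one RabbitMQ vhost per tenant on the dedicated broker.
# Host, user, password and the vhost naming scheme are hypothetical.
def tenant_transport_url(tenant_id, host="murano-rabbit.example.com",
                         user="murano", password="secret"):
    # vhost is the URL path; broker-side permissions are then granted
    # per vhost, so tenants cannot see each other's queues at all.
    return "rabbit://{0}:{1}@{2}:5672/tenant_{3}".format(
        user, password, host, tenant_id)

print(tenant_transport_url("a1b2c3"))
# -> rabbit://murano:secret@murano-rabbit.example.com:5672/tenant_a1b2c3
```

With HTTP you can of course build equivalent per-tenant isolation, but it lives in your proxy/auth layer rather than in the broker's own access-control model.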
> > For example we use this approach in Murano service when it is
> > installed by Fuel. The default deployment configuration for Murano
> > produced by Fuel is to have separate RabbitMQ instance for Murano<->VM
> > communications. This configuration will not expose the OpenStack
> > internals to VM, so even if someone broke the Murano rabbitmq
> > instance, OpenStack itself will be unaffected and only the Murano
> > part will be broken.
> It really sounds like you already settled on the solution being
> RabbitMQ, so I'm not sure what/why you ask in the first place. :)
> Is there any problem with starting VMs on a network that is connected to
> your internal network? You just have to do that and connect your
> application to the/one internal message bus and that's it.
Murano currently has a specific design for agent communication, that is true,
but the same applies to other services too. As you probably noticed, Dmitry
raised this question within the Unified agent initiative, which is important
for Murano too, as it will eventually move from its own agent infrastructure
to the unified one once it is completed. The reason I share this is to pass on
our experience with deploying services which have to work with VMs. I don't
want to say that MQ is the one blessed way to communicate, but it definitely
deserves to be discussed. I don't want to make a blind decision on the
communication mechanism for agents by picking HTTP and REST just because other
OpenStack services use them. Different services have different requirements. I
also want to point out that, so far, the core OpenStack services have not
needed to communicate with VMs.
OpenStack Platform Products,
Tel. +1 650 963 9828
Mob. +1 650 996 3284