[Openstack-operators] Murano in Production
Kris G. Lindgren
klindgren at godaddy.com
Fri Sep 23 07:11:46 UTC 2016
How are you having HAProxy point to the current primary controller? Is this done automatically, or are you manually setting a server as the master?
Sent from my iPad
> On Sep 23, 2016, at 5:17 AM, Serg Melikyan <smelikyan at mirantis.com> wrote:
> Hi Joe,
> I can share some details on how Murano is configured as part of the
> default Mirantis OpenStack configuration and try to explain why it's
> done that way; I hope it helps in your case.
> As part of Mirantis OpenStack a second RabbitMQ instance is deployed
> specifically for Murano, but its configuration is different from that
> of the RabbitMQ instance used by the other OpenStack services.
> Why use a separate RabbitMQ instance?
> 1. Prevent access to the RabbitMQ instance supporting the whole cloud
> infrastructure by limiting access at the network level rather than
> relying on authentication/authorization alone
> 2. Prevent DDoS against the infrastructure RabbitMQ by limiting
> access at the network level
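> For illustration only (made-up addresses and port, not the actual
> rules our deployment generates), the idea is that the infrastructure
> broker accepts connections from the management network only, while
> the Murano broker port stays reachable for the agents on the VMs:
>
>     # infrastructure RabbitMQ: management network only
>     iptables -A INPUT -p tcp --dport 5672 -s 192.168.0.0/24 -j ACCEPT
>     iptables -A INPUT -p tcp --dport 5672 -j DROP
>     # murano RabbitMQ on its own (illustrative) port stays open
>     iptables -A INPUT -p tcp --dport 55572 -j ACCEPT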
> Given that the second RabbitMQ instance is used only for murano-agent
> <-> murano-engine communications and murano-agent is running on the
> VMs, we had to make a couple of changes in the deployment of RabbitMQ
> (below, "RabbitMQ" refers to the RabbitMQ instance used by Murano for
> m-agent <-> m-engine communications):
> 1. RabbitMQ is not clustered; a separate instance runs on each
> controller node
> 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are exposed
> 3. It has a different port number than the default
> 4. HAProxy is used; RabbitMQ is hidden behind it, and HAProxy always
> points to the RabbitMQ on the current primary controller
> Note: how does murano-agent work? murano-engine creates a queue with
> a unique name and puts configuration tasks into that queue; they are
> picked up by murano-agent once the VM is booted, and murano-agent is
> configured to use the created queue through cloud-init (see the
> sketch below).
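> To make the flow concrete, here is a rough sketch in Python with the
> pika client (host, port, queue naming and payload are illustrative,
> this is not the actual murano code):
>
>     import json
>     import uuid
>
>     import pika
>
>     # connection to the murano RabbitMQ exposed on the Public VIP
>     # (host and port are illustrative)
>     params = pika.ConnectionParameters(host='public-vip.example.com',
>                                        port=55572)
>
>     # murano-engine side: declare a uniquely named queue and
>     # publish a configuration task into it
>     connection = pika.BlockingConnection(params)
>     channel = connection.channel()
>     queue_name = 'murano-agent-%s' % uuid.uuid4().hex  # unique per VM
>     channel.queue_declare(queue=queue_name, durable=True)
>     channel.basic_publish(exchange='',
>                           routing_key=queue_name,
>                           body=json.dumps({'task': 'configure app'}))
>
>     # murano-agent side (runs on the VM; the queue name is injected
>     # into the agent's configuration through cloud-init). In reality
>     # the agent opens its own connection from the VM; one channel is
>     # reused here only for brevity.
>     def on_task(ch, method, properties, body):
>         print('got configuration task: %s' % body)
>         ch.basic_ack(delivery_tag=method.delivery_tag)
>
>     channel.basic_consume(queue=queue_name, on_message_callback=on_task)
>     channel.start_consuming()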
> #1 Clustering
> * Given that per one app deployment we create 1-N VMs and send 1-M
> configuration tasks, where in most cases N and M are small, the
> traffic on this broker is light
> * Even if an app deployment fails due to a failover, it can always be
> re-deployed by the user
> * A controller-node failover will most probably lead to limited
> availability of the Heat, Nova & Neutron APIs, and application
> deployment will fail regardless of whether the configuration task
> executes on the VM
> #2 Exposure on the Public VIP
> One of the reasons behind choosing RabbitMQ as the transport for
> murano-agent communications was connectivity from the VM - it's much
> easier to implement connectivity *from* the VM than *to* the VM.
> But even in the case where you are connecting to the broker from the
> VM you need connectivity, and the public interface where all the
> other OpenStack APIs are exposed is the most natural way to provide
> it.
> #3 A port number different from the default
> Just to avoid confusion with the RabbitMQ used for the
> infrastructure, even though they are on different networks.
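> On the murano side this boils down to a few options in murano.conf
> pointing the engine (and, through it, the agents) at that broker.
> A sketch with illustrative values (option names from memory of the
> [rabbitmq] section, double-check against your murano version):
>
>     [rabbitmq]
>     # broker used only for murano-agent <-> murano-engine traffic
>     host = 203.0.113.10   # Public VIP, illustrative
>     port = 55572          # illustrative non-default port
>     login = murano
>     password = SECRET
>     virtual_host = /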
> #4 HAProxy
> In the default Mirantis OpenStack configuration it is used mostly to
> support the non-clustered RabbitMQ setup and the exposure on the
> Public VIP, but it is also helpful in more complicated setups.
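> Roughly, the backend looks like the sketch below (addresses and port
> are illustrative). Marking every server except the primary as a
> backup means HAProxy keeps sending traffic to the primary while its
> health check passes, and fails over to a backup automatically
> otherwise - no manual switching of the master is needed:
>
>     listen murano_rabbitmq
>       bind 203.0.113.10:55572          # Public VIP, illustrative port
>       mode tcp
>       option tcpka
>       server controller-1 10.0.0.2:55572 check
>       server controller-2 10.0.0.3:55572 check backup
>       server controller-3 10.0.0.4:55572 check backup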
> P.S. I hope my answers helped; let me know if I can cover something
> in more detail.
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelikyan at mirantis.com
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org