[Openstack-operators] Murano in Production

Joe Topjian joe at topjian.net
Tue Sep 27 03:09:47 UTC 2016


Hi Serg,

We were indeed hitting that bug, but the cert wasn't self-signed. It was
easier for us to manually patch the Ubuntu Cloud package of Murano with the
stable/mitaka fix linked in that bug report than trying to debug where
OpenSSL/python/requests/etc was going awry.
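
For anyone else following along: the settings in question are the engine-side
agent connection settings in murano.conf's [rabbitmq] section (separate from
the oslo.messaging transport used by the rest of the service). A rough sketch
with placeholder host and credentials, not our real values:

[rabbitmq]
host = <public VIP or hostname reachable from the guests>
port = 55572
login = murano
password = <secret>
virtual_host = /
ssl = True
# depending on the release there may also be options for pointing at a CA
# bundle; check the sample config shipped with your version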

We might redeploy Murano strictly using virtualenvs and pip so we stay on
the latest stable patches.

Thanks,
Joe

On Mon, Sep 26, 2016 at 11:03 PM, Serg Melikyan <smelikyan at mirantis.com>
wrote:

> Hi Joe,
>
> >Also, is it safe to say that communication between agent/engine only
> happens, and will only happen, during app deployment?
>
> murano-agent & murano-engine keep an active connection to the RabbitMQ
> broker, but message exchange happens only during deployment of the app.
>
> >One thing we just ran into, though, was getting the agent/engine rmq
> config to work with SSL
>
> We had a related bug fixed in Newton; can you confirm that you are *not*
> hitting bug #1578421 [0]?
>
> References:
> [0] https://bugs.launchpad.net/murano/+bug/1578421
>
>
>
>
> On Mon, Sep 26, 2016 at 1:43 PM, Andrew Woodward <xarses at gmail.com> wrote:
> > In Fuel we deploy haproxy to all of the nodes that are part of the
> > VIP/endpoint service (this is usually part of the controller role). The
> > VIPs (internal or public) can then be active on any member of the group.
> > Corosync/Pacemaker is used to move the VIP address (as opposed to
> > keepalived). In our case both haproxy and the VIP live in a namespace, and
> > haproxy is always running on all of these nodes, bound to 0/0.
> >
> > In the case of murano-rabbit we take the same approach as we do for
> > galera: all of the members are listed in the balancer, but with the others
> > as backups, which makes them inactive until the first node is down. This
> > allows the VIP to move to any of the proxies in the cluster and continue
> > to direct traffic to the same node until that rabbit instance is also
> > unavailable.
> >
> > listen mysqld
> >   bind 192.168.0.2:3306
> >   mode  tcp
> >   option  httpchk
> >   option  tcplog
> >   option  clitcpka
> >   option  srvtcpka
> >   stick on  dst
> >   stick-table  type ip size 1
> >   timeout client  28801s
> >   timeout server  28801s
> >   server node-1 192.168.0.4:3307  check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >   server node-3 192.168.0.6:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >   server node-4 192.168.0.5:3307 backup check port 49000 inter 20s fastinter 2s downinter 2s rise 3 fall 3
> >
> > listen murano_rabbitmq
> >   bind 10.110.3.3:55572
> >   balance  roundrobin
> >   mode  tcp
> >   option  tcpka
> >   timeout client  48h
> >   timeout server  48h
> >   server node-1 192.168.0.4:55572  check inter 5000 rise 2 fall 3
> >   server node-3 192.168.0.6:55572 backup check inter 5000 rise 2 fall 3
> >   server node-4 192.168.0.5:55572 backup check inter 5000 rise 2 fall 3
> >
> >
> > On Fri, Sep 23, 2016 at 7:30 AM Mike Lowe <jomlowe at iu.edu> wrote:
> >>
> >> Would you mind sharing an example snippet of your HAProxy config? I had
> >> struggled in the past with getting this part to work.
> >>
> >>
> >> > On Sep 23, 2016, at 12:13 AM, Serg Melikyan <smelikyan at mirantis.com>
> >> > wrote:
> >> >
> >> > Hi Joe,
> >> >
> >> > I can share some details on how murano is configured as part of the
> >> > default Mirantis OpenStack configuration and try to explain why it's
> >> > done the way it is; I hope it helps in your case.
> >> >
> >> > As part of Mirantis OpenStack a second RabbitMQ instance is deployed
> >> > specifically for murano, but its configuration is different from that
> >> > of the RabbitMQ instance used by the other OpenStack components.
> >> >
> >> > Why use a separate RabbitMQ instance?
> >> >     1. Prevent access to the RabbitMQ supporting the whole cloud
> >> > infrastructure by limiting access at the networking level rather than
> >> > relying on authentication/authorization
> >> >     2. Prevent the possibility of DDoS against the infrastructure
> >> > RabbitMQ by limiting access at the networking level
> >> >
> >> > Given that the second RabbitMQ instance is used only for murano-agent
> >> > <-> murano-engine communication and murano-agent is running on the
> >> > VMs, we had to make a couple of changes to the deployment of RabbitMQ
> >> > (below, "RabbitMQ" refers to the instance used by Murano for m-agent
> >> > <-> m-engine communication):
> >> >
> >> > 1. RabbitMQ is not clustered, just a separate instance running on each
> >> > controller node
> >> > 2. RabbitMQ is exposed on the Public VIP where all OpenStack APIs are
> >> > exposed
> >> > 3. It has a different port number than the default (rough sketch below)
> >> > 4. HAProxy is used: RabbitMQ is hidden behind it, and HAProxy always
> >> > points to the RabbitMQ on the current primary controller
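> >> >
> >> > For #1 and #3, one common way to run a standalone instance on a
> >> > non-default port is a couple of lines in rabbitmq-env.conf; this is a
> >> > rough sketch with illustrative values, not necessarily how Fuel wires
> >> > it up:
> >> >
> >> > # /etc/rabbitmq/rabbitmq-env.conf for the murano-only broker
> >> > NODENAME=murano@node-1
> >> > NODE_PORT=55572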
> >> >
> >> > Note: how does murano-agent work? Murano-engine creates a queue with a
> >> > unique name and puts configuration tasks into that queue; they are
> >> > later picked up by murano-agent once the VM is booted, and murano-agent
> >> > is configured to use the created queue through cloud-init.
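> >> >
> >> > To make that concrete, the agent-side configuration written by
> >> > cloud-init points the agent at that broker and at the generated queue;
> >> > a rough sketch with placeholder values (the exact template differs
> >> > between releases):
> >> >
> >> > [DEFAULT]
> >> > storage = /var/murano/plans
> >> >
> >> > [rabbitmq]
> >> > host = <public VIP>
> >> > port = 55572
> >> > login = <generated user>
> >> > password = <generated password>
> >> > virtual_host = /
> >> > ssl = True
> >> > # the unique queue name created by murano-engine is also baked into
> >> > # this file so the agent knows where to pick up its execution plans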
> >> >
> >> > #1 Clustering
> >> >
> >> > * Per app deployment we create 1-N VMs and send 1-M configuration
> >> > tasks, where in most cases N and M are less than 3.
> >> > * Even if an app deployment fails due to cluster failover, it can
> >> > always be re-deployed by the user.
> >> > * A controller-node failover will most probably lead to limited
> >> > accessibility of the Heat, Nova & Neutron APIs, and the application
> >> > deployment will fail regardless of the configuration task not being
> >> > executed on the VM.
> >> >
> >> > #2 Exposure on the Public VIP
> >> >
> >> > One of the reasons behind choosing RabbitMQ as the transport for
> >> > murano-agent communications was connectivity from the VM - it's much
> >> > easier to implement connectivity *from* the VM than *to* the VM.
> >> >
> >> > But even when you are connecting to the broker from the VM you still
> >> > need connectivity, and the public interface where all the other
> >> > OpenStack APIs are exposed is the most natural place to provide it.
> >> >
> >> > #3 Different from the default port number
> >> >
> >> > Just to avoid confusion with the RabbitMQ used for the infrastructure,
> >> > even though they are on different networks.
> >> >
> >> > #4 HAProxy
> >> >
> >> > In the default Mirantis OpenStack configuration it is used mostly to
> >> > support the non-clustered RabbitMQ setup and the exposure on the Public
> >> > VIP, but it is also helpful in more complicated setups.
> >> >
> >> > P.S. I hope my answers helped, let me know if I can cover something in
> >> > more detail.
> >> > --
> >> > Serg Melikyan, Development Manager at Mirantis, Inc.
> >> > http://mirantis.com | smelikyan at mirantis.com
> >> >
> >
> > --
> > Andrew Woodward
> > Mirantis
>
>
>
> --
> Serg Melikyan, Development Manager at Mirantis, Inc.
> http://mirantis.com | smelikyan at mirantis.com
>