[openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

Fox, Kevin M Kevin.Fox at pnnl.gov
Thu Mar 31 17:54:31 UTC 2016


Ideally it can roll the services one instance at a time while doing the appropriate load balancer steps to make it seamless. Our experience has been that even though services should retry, they don't always do it right, so it's better to do it with the LB proper if you can. Between Ansible/container orchestration and containers, it should be pretty easy to do, while doing it with just packages would be very hard.
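A minimal sketch of that roll-one-at-a-time pattern. The drain/upgrade/enable/healthy callables are placeholders for whatever your LB and deploy tooling actually do (e.g. HAProxy admin socket commands, Ansible tasks, container restarts); nothing here is specific to any one tool:

```python
def rolling_upgrade(instances, drain, upgrade, enable, healthy,
                    max_health_checks=30):
    """Upgrade instances one at a time behind a load balancer."""
    for host in instances:
        drain(host)              # stop sending new requests to this host
        upgrade(host)            # e.g. replace the service's container
        for _ in range(max_health_checks):
            if healthy(host):    # wait until the new version answers
                break
        else:
            raise RuntimeError("%s failed health check; aborting roll" % host)
        enable(host)             # put the host back in rotation
```

The point is the ordering: each host is fully out of rotation before it changes, and back in rotation only after it passes a health check, so clients never have to rely on retries.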

Thanks,
Kevin

________________________________
From: Steven Dake (stdake)
Sent: Thursday, March 31, 2016 1:22:21 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

Kevin,

I am not directly answering your question, but from the perspective of Kolla, our upgrades are super simple because we don't make a big mess in the first place to upgrade from.  In my experience, this is the number one problem with upgrades: everyone makes a mess of the first deployment, so upgrading from there is a minefield.  Better to avoid that minefield altogether by not making a mess of the system in the first place, using my favorite deployment tool: Kolla ;-)

Kolla upgrades rock.  I have no doubt we will have some minor issues in the field, but we have tested one-month-old master to master upgrades, with database migrations of the services we deploy, and it takes approximately 10 minutes on a 64-node (3 control, the rest compute) cluster without VM downtime or loss of networking service to the virtual machines.  This is because our upgrades, while not totally atomic across the cluster, are pretty darn close: they upgrade the entire filesystem runtime in one atomic action per service while rolling the upgrade across the controller nodes.

During the upgrade process there may be some transient failures for API service calls, but they are typically retried by clients and no real harm is done.  Note we follow each project's best practices for handling upgrades, without the mess of dealing with packaging or configuration on the filesystem and the migration thereof.
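A hedged sketch of what "upgrading the entire filesystem runtime in one atomic action per service" looks like in container terms: instead of migrating packages and config in place, the old container is removed and a new one started from the new image. The image naming and registry here are illustrative, not Kolla's actual conventions:

```python
def upgrade_commands(service, new_tag, registry="kolla"):
    """Return the container commands that swap one service's runtime."""
    image = "%s/%s:%s" % (registry, service, new_tag)
    return [
        ["docker", "pull", image],                  # stage the new runtime first
        ["docker", "stop", service],                # brief stop of this one service
        ["docker", "rm", service],                  # discard the old filesystem
        ["docker", "run", "-d", "--name", service, image],  # new runtime in one step
    ]
```

Because the new image is pulled before the old container stops, the per-service downtime is just the stop/rm/run window, and rolling this host by host across controllers keeps the service as a whole available.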

Regards
-steve


From: "Fox, Kevin M" <Kevin.Fox at pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Wednesday, March 30, 2016 at 9:12 PM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

The main issue is one of upgradability, not stability. We all know TripleO is stable. TripleO can't do upgrades today. We're looking for ways to get there. So "upgrading" to Ansible isn't necessary for sure, since folks deploying TripleO today must assume they can't upgrade anyway.

Honestly, I have doubts that any config management system, from Puppet to Heat software deployments, can be coerced into doing a cloud upgrade without downtime and without a huge amount of workarounds. You really need either a workflow-oriented system with global knowledge like Ansible, or a container orchestration system like Kubernetes, to ensure you don't change too many things at once and break things. You need to be able to run some old things and some new, all at the same time, and in some cases different versions/configs of the same service on different machines.
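One way to frame the "global knowledge" a workflow system needs: while the fleet is mid-roll it mixes versions, and something has to check that the mix stays within the compatibility window services are actually tested against (commonly adjacent N/N+1 releases). A toy sketch of that invariant, with integer release numbers standing in for real version strings:

```python
def mixed_versions_ok(host_versions):
    """Check that a fleet's in-flight version mix stays within N/N+1.

    host_versions: dict mapping host name -> integer release number.
    Returns True when at most two adjacent releases are running at once.
    """
    if not host_versions:
        return True
    versions = set(host_versions.values())
    return len(versions) <= 2 and max(versions) - min(versions) <= 1
```

A per-host config tool can't evaluate this because it only sees one machine; a workflow engine that plans the whole roll can refuse a step that would violate it.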

Thoughts on how this may be made to work with puppet/heat?

Thanks,
Kevin

________________________________
From: Dan Prince
Sent: Monday, March 28, 2016 12:07:22 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

On Wed, 2016-03-23 at 07:54 -0400, Ryan Hallisey wrote:
> *Snip*
>
> >
> > Indeed, this has literally none of the benefits of the ideal Heat
> > deployment enumerated above save one: it may be entirely the wrong
> > tool
> > in every way for the job it's being asked to do, but at least it
> > is
> > still well-integrated with the rest of the infrastructure.
> >
> > Now, at the Mitaka summit we discussed the idea of a 'split
> > stack',
> > where we have one stack for the infrastructure and a separate one
> > for
> > the software deployments, so that there is no longer any tight
> > integration between infrastructure and software. Although it makes
> > me a
> > bit sad in some ways, I can certainly appreciate the merits of the
> > idea
> > as well. However, from the argument above we can deduce that if
> > this is
> > the *only* thing we do then we will end up in the very worst of
> > all
> > possible worlds: the wrong tool for the job, poorly integrated.
> > Every
> > single advantage of using Heat to deploy software will have
> > evaporated,
> > leaving only disadvantages.
> I think Heat is a very powerful tool; having done the container
> integration into the tripleo-heat-templates, I can see its appeal.
> Something I learned from that integration was that Heat is not the
> best tool for container deployment, at least right now.  We were
> able to leverage the work in Kolla, but what it came down to was
> that we're not using containers or Kolla to their max potential.
>
> I recently did an evaluation of TripleO and Kolla to see what we
> would gain if the two were to combine.  Let's look at some items on
> TripleO's roadmap.  Split stack, as mentioned above, would be gained
> if TripleO were to adopt Kolla: TripleO holds the undercloud and
> Ironic, while Kolla separates config and deployment, thereby
> allowing each piece of the stack to be decoupled.  Composable roles,
> the ability to land services onto separate hosts on demand, is
> something Kolla also already does [1].  Finally, container
> integration is just a given :).
>
> In the near term, if TripleO were to adopt Kolla as its overcloud
> deployer, it would gain these features and retire Heat to setting up
> the baremetal nodes and providing their IPs to Ansible.  This would
> be great for Kolla too, because it would gain baremetal
> provisioning.
>
> Ian Main and I are currently working on a POC for this as of last
> week [2].
> It's just a simple heat template :).
>
> I think further down the road we can evaluate using kubernetes [3].
> For now though, kolla-ansible is rock solid and is worth using for
> the overcloud.

Yeah, well TripleO Heat overclouds are rock solid too. They just aren't
using containers everywhere yet. So let's fix that.

I'm not a fan of replacing the TripleO overcloud configuration with
Kolla. I don't think there is feature parity, the architectures are
different (HA, etc.) and I don't think you could easily pull off an
upgrade from one deployment to the other (going from TripleO Heat
template deployed overcloud to Kolla deployed overcloud).

>
> Thanks!
> -Ryan
>
> [1] - https://github.com/openstack/kolla/blob/master/ansible/inventor
> y/multinode
> [2] - https://github.com/rthallisey/kolla-heat-templates
> [3] - https://review.openstack.org/#/c/255450/
>
>

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
