[openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

Dan Prince dprince at redhat.com
Mon Apr 4 12:53:10 UTC 2016


On Mon, 2016-04-04 at 02:31 +0000, Steven Dake (stdake) wrote:
> 
> On 4/3/16, 6:38 PM, "Dan Prince" <dprince at redhat.com> wrote:
> 
> > 
> > On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:
> > > 
> > > As of the Liberty release, Magnum now supports provisioning Mesos
> > > clusters, so TripleO wouldn't have to maintain the installer for
> > > that 
> > > either. (The choice of Mesos is somewhat unfortunate in our case,
> > > because Magnum's Kubernetes support is much more mature than its
> > > Mesos 
> > > support, and because the reasons for the decision are about to be
> > > or
> > > have already been overtaken by events - I've heard reports that
> > > the
> > > features that Kubernetes was missing to allow it to be used for
> > > controller nodes, and maybe even compute nodes, are now
> > > available.
> > > Nonetheless, I expect the level of Magnum support for Mesos is
> > > likely 
> > > workable.) This is where the TripleO strategy of using OpenStack
> > > to
> > > deploy OpenStack can really pay dividends: because we use Ironic
> > > all
> > > of 
> > > our servers are accessible through the Nova API, so in theory we
> > > can
> > > just run Magnum out of the box.
> > > 
> > > 
> > > The chances of me personally having time to prototype this are
> > > slim-to-zero, but I think this is a path worth investigating.
> > Looking at Magnum more closely... at a high level I like the idea
> > of Magnum. And interestingly, it could be a surprisingly good fit
> > for someone wanting containers on baremetal who is considering the
> > TripleO paving machine (instack-undercloud).
> Dan,
> 
> When I originally got involved in Magnum and submitted the first 100
> or so patches to the repository to kick off development, my thinking
> was to use Magnum as an integration point for Kubernetes for Kolla
> (which at the time had no Ansible, just Kubernetes pod files) running
> an Atomic distro.
> 
> It looked good on paper, but in practice all those layers and
> dependencies introduced unnecessary complexity, making the system I
> had envisioned unwieldy and more complex than the U.S. Space Shuttle.
> 
> When I finally took off my architecture astronaut helmet, I went back
> to basics and dismissed the idea of a Magnum and TripleO integration.
> 
> Remember, that was my idea - and I gave up on it - for a reason.
> 
> Magnum standalone, however, is still very viable, and I like where
> the core reviewer team has taken Magnum since I stopped participating
> in that project.
> 
> I keep telling people underlays for OpenStack deployment are much
> more complex than they look and are 5-10 years down the road.  Yet
> people keep trying - good for them ;)
> 
> Regards
> -steve


I don't think we can just leave it at "it is complex". For me,
understanding the services it uses, the architecture it sets up, and
how we would probably try to consume it is the required information.
Perhaps there are other docs out there on these things, but nobody
pointed me to them, so I just went and looked at Magnum myself.

The reason I listed these things is that Magnum continually comes up
as the go-to project for container integration within TripleO. I
wanted some details about why... not just the fact that it had been
tried before and didn't work out for Kolla.

Anyway, if you read all of my reply you'll see that I reach a similar
conclusion on where Magnum stands today. And if we added Magnum
support (which, again, I think would be a cool thing to do) it would
probably be targeted for use as a generic standalone baremetal
containers installer, not as an OpenStack deployment tool. At least
not until Magnum matures a bit...

Dan

> 
> 
> 
> > 
> > 
> > We would need to add a few services to instack, I think, to supply
> > the Magnum Heat templates with the required APIs. Specifically:
> > 
> > - Barbican
> > - Neutron L3 agent
> > - Neutron LBaaS
> > - Magnum (API and conductor)
> > 
> > This isn't hard and would be a cool thing to have supported within
> > instack (although I wouldn't enable these services by default, I
> > think... at least not for now).
> > 
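For the record, none of these are configurable in instack-undercloud today; if they were, I'd imagine opt-in flags along these lines (the option names below are invented purely for illustration):

```ini
# Hypothetical undercloud.conf fragment. instack-undercloud does not
# expose these switches today; the option names are made up.
[DEFAULT]
enable_barbican = true
enable_neutron_l3 = true
enable_neutron_lbaas = true
enable_magnum = true
```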
> > So again, at a high level things look good. But taking a closer
> > look at how Magnum architects its network, things start to fall
> > apart a bit, I think. From what I can tell, because the Magnum
> > network architecture relies on the L3 agent and LBaaS, the
> > undercloud itself would become much more important. Depending on
> > the networking vendor, we would possibly need to make the
> > undercloud itself HA in order to ensure anything built on top of it
> > was also HA. Contrast this with the fact that you can deploy an
> > overcloud today that will continue to function should the
> > undercloud (momentarily) go down.
> > 
> > Then there is the fact that Magnum would be calling Heat to create
> > our baremetal servers (Magnum creates the OS::Nova::Server
> > resources... not our own Heat templates). This is fine, but we have
> > a lot of value add in our own templates. We could actually write
> > our own Heat templates and plug them into magnum.conf via
> > k8s_atomic_template_path= or mesos_fedora_template_path= (doesn't
> > exist yet, but it could?). What this means for our workflow, and
> > how end users would configure underlying parameters, would need to
> > be discussed. Would we still have our own Heat templates that
> > created OS::Magnum::Bay resources? Or would we use totally separate
> > stacks to generate these things? The former causes a bit of a "Yo
> > Dawg: I hear you like Heat, so I'm calling Heat to call Magnum to
> > call Heat to spin up your cloud". Perhaps I'm off here, but we'd
> > still want to expose many of the service-level parameters to end
> > users via our workflows... and then use them to deploy containers
> > into the bays, so something like this would need to happen I think.
> > 
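To make the "former" option concrete, the nesting might look roughly like this -- a sketch only, assuming the OS::Magnum::Bay and OS::Magnum::BayModel resource types Heat ships today, with all names and property values made up for illustration:

```yaml
heat_template_version: 2015-04-30

description: >
  Sketch: a TripleO-side stack that delegates cluster creation to
  Magnum, which will in turn call Heat again. Values are illustrative.

parameters:
  node_count:
    type: number
    default: 3

resources:
  overcloud_baymodel:
    type: OS::Magnum::BayModel
    properties:
      name: overcloud-k8s
      image: fedora-atomic        # assumes a suitable image in Glance
      flavor: baremetal           # an Ironic-backed Nova flavor
      keypair: default
      external_network: ctlplane
      coe: kubernetes

  overcloud_bay:
    type: OS::Magnum::Bay
    properties:
      name: overcloud
      baymodel: {get_resource: overcloud_baymodel}
      node_count: {get_param: node_count}
```

The alternative (pointing k8s_atomic_template_path= at our own templates) inverts this: Magnum stays the entry point and our templates become the bay definition, so the nesting goes Magnum -> Heat -> our value-add instead.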
> > Aside from creating the bays, we likely wouldn't use the
> > /containers API to spin up containers, but would go directly at
> > Mesos or Kubernetes instead. The Magnum API just isn't leaky enough
> > yet for us to get access to all the container bits we'd need at the
> > moment. Over time it could get there... but I don't think it is
> > there yet.
> > 
> > So, all that to say: maybe we should integrate it into
> > instack-undercloud as a baremetal containers side project. This
> > would also make it easier to develop and evolve Magnum's baremetal
> > capabilities if we really want to pursue them. But I think we'd
> > have an easier go of implementing our containers architecture (with
> > all the network isolation, HA architecture, and underpinnings we
> > desire) by managing our own deployment of these things in the
> > immediate future.
> > 
> > Dan
> > 
> > __________________________________________________________________________
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
