[openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO
dprince at redhat.com
Mon Apr 4 01:38:12 UTC 2016
On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:
> As of the Liberty release, Magnum now supports provisioning Mesos
> clusters, so TripleO wouldn't have to maintain the installer for
> either. (The choice of Mesos is somewhat unfortunate in our case,
> because Magnum's Kubernetes support is much more mature than its
> Mesos support, and because the reasons for the decision are about to be or
> have already been overtaken by events - I've heard reports that the
> features that Kubernetes was missing to allow it to be used for
> controller nodes, and maybe even compute nodes, are now available.
> Nonetheless, I expect the level of Magnum support for Mesos is
> workable.) This is where the TripleO strategy of using OpenStack to
> deploy OpenStack can really pay dividends: because we use Ironic all
> our servers are accessible through the Nova API, so in theory we can
> just run Magnum out of the box.
> The chances of me personally having time to prototype this are
> slim-to-zero, but I think this is a path worth investigating.
Looking at Magnum more closely... At a high level I like the idea of
Magnum. And interestingly it could be a surprisingly good fit for
someone wanting containers on baremetal to consider using the TripleO
paving machine (instack-undercloud).
We would need to add a few services to instack, I think, to supply the
Magnum Heat templates with the required APIs. Specifically:
- neutron L3 agent
- Magnum (API and conductor)
This isn't hard, and would be a cool thing to have supported within
instack (although I wouldn't enable these services by default I
think... at least not for now).
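To make the opt-in concrete, here's a hypothetical sketch of what this
could look like in undercloud.conf. Neither of these flags exists in
instack-undercloud today; the option names are invented for
illustration:

```ini
# undercloud.conf -- hypothetical opt-in flags, neither of which
# exists in instack-undercloud today
[DEFAULT]
# deploy the neutron L3 agent on the undercloud (hypothetical option)
enable_neutron_l3 = true
# deploy magnum-api and magnum-conductor on the undercloud
# (hypothetical option)
enable_magnum = true
```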
So again, at a high level things look good. But taking a closer look at
how Magnum architects its network, things start to fall apart a bit, I
think. From what I can tell, given the Magnum network architecture's
use of the L3 agent and LBaaS, the undercloud itself would become much
more important. Depending on the networking vendor we would possibly need to
make the Undercloud itself HA in order to ensure anything built on top
was also HA. Contrast this with the fact that you can deploy an
Overcloud today that will continue to function should the undercloud
(momentarily) go down.
Then there is the fact that Magnum would be calling Heat to create our
baremetal servers (Magnum creates the OS::Nova::Server resources... not
our own Heat templates). This is fine but we have a lot of value add in
our own templates. We could actually write our own Heat templates and
plug them into magnum.conf via k8s_atomic_template_path= or
mesos_fedora_template_path= (doesn't exist yet but it could?). What
this means for our workflow and how end users would configure
underlying parameters would need to be discussed. Would we still have
our own Heat templates that created OS::Magnum::Bay resources? Or would
we use totally separate stacks to generate these things? The former
causes a bit of a "Yo Dawg: I hear you like Heat, so I'm calling Heat
to call Magnum to call Heat to spin up your cloud". Perhaps I'm off
here, but we'd still want to expose many of the service-level
parameters to end users via our workflows... and then use them to
deploy containers into the bays, so something like this would need to
happen, I think.
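As a sketch of the template-override idea above, the magnum.conf change
might look something like this. The option `k8s_atomic_template_path`
is real; the section name and file paths here are my guesses (check
the Magnum configuration reference for your release), and
`mesos_fedora_template_path` is the not-yet-existing option mentioned
above:

```ini
# magnum.conf -- point Magnum at our own Heat templates instead of its
# built-in ones (section name and paths are assumptions; verify
# against your Magnum release)
[bay]
# override the stock Kubernetes-on-Atomic template with a TripleO one
# (hypothetical path)
k8s_atomic_template_path = /usr/share/tripleo-heat-templates/magnum/kubecluster.yaml
# hypothetical -- this option doesn't exist yet, as noted above
#mesos_fedora_template_path = /usr/share/tripleo-heat-templates/magnum/mesoscluster.yaml
```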
Aside from creating the bays, we likely wouldn't use the /containers
API to spin up containers, but would go directly to Mesos or Kubernetes
instead.
instead. The Magnum API just isn't leaky enough yet for us to get
access to all the container bits we'd need at the moment. Over time it
could get there... but I don't think it is there yet.
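Going back to the "our own Heat templates creating OS::Magnum::Bay
resources" option above, a minimal sketch of such a wrapper template
might look like the following. The baymodel would still have to be
created out of band, and the property names are from memory (they
should be checked against the Heat resource reference for the release
in question):

```yaml
# Sketch of a TripleO-side template wrapping a Magnum bay -- the
# "Heat calling Magnum to call Heat" case. Property names are
# assumptions; verify against the OS::Magnum::Bay resource docs.
heat_template_version: 2015-04-30

parameters:
  baymodel:
    type: string
    description: Name or UUID of a pre-created Magnum baymodel
  node_count:
    type: number
    default: 3

resources:
  controller_bay:
    type: OS::Magnum::Bay
    properties:
      baymodel: {get_param: baymodel}
      node_count: {get_param: node_count}
```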
So all that to say maybe we should integrate it into instack-undercloud
as a baremetal containers side project. This would also make it easier
to develop and evolve Magnum baremetal capabilities if we really want
to pursue them. But I think we'd have an easier go of implementing our
containers architecture (with all the network isolation, HA
architecture, and underpinnings we desire) by managing our own
deployment of these things in the immediate future.