[openstack-dev] [TripleO][Heat][Kolla][Magnum] The zen of Heat, containers, and the future of TripleO

Zane Bitter zbitter at redhat.com
Wed Apr 6 15:34:20 UTC 2016


On 03/04/16 21:38, Dan Prince wrote:
>
>
>
> On Mon, 2016-03-21 at 16:14 -0400, Zane Bitter wrote:
>> As of the Liberty release, Magnum now supports provisioning Mesos
>> clusters, so TripleO wouldn't have to maintain the installer for
>> that
>> either. (The choice of Mesos is somewhat unfortunate in our case,
>> because Magnum's Kubernetes support is much more mature than its
>> Mesos
>> support, and because the reasons for the decision are about to be or
>> have already been overtaken by events - I've heard reports that the
>> features that Kubernetes was missing to allow it to be used for
>> controller nodes, and maybe even compute nodes, are now available.
>> Nonetheless, I expect the level of Magnum support for Mesos is
>> likely
>> workable.) This is where the TripleO strategy of using OpenStack to
>> deploy OpenStack can really pay dividends: because we use Ironic all
>> of
>> our servers are accessible through the Nova API, so in theory we can
>> just run Magnum out of the box.
>>
>>
>> The chances of me personally having time to prototype this are
>> slim-to-zero, but I think this is a path worth investigating.
>
> Looking at Magnum more closely... At a high level I like the idea of
> Magnum. And interestingly it could be a surprisingly good fit for
> someone wanting containers on baremetal to consider using the TripleO
> paving machine (instack-undercloud).
>
> I think we would need to add a few services to instack to supply the
> Magnum Heat templates with the required APIs. Specifically:
>
>   - barbican
>   - neutron L3 agent
>   - neutron LBaaS
>   - magnum (API and conductor)
>
> This isn't hard and would be a cool thing to have supported within
> instack (although I wouldn't enable these services by default, I
> think... at least not for now).
>
> So again, at a high level things look good. Taking a closer look at how
> Magnum architects its network, though, things start to fall apart a bit
> I think. From what I can tell, given the Magnum network architecture's
> use of the L3 agent and LBaaS, the undercloud itself would become much
> more important. Depending on the networking vendor, we would possibly
> need to make the undercloud itself HA in order to ensure anything built
> on top of it was also HA. Contrast this with the fact that you can
> deploy an overcloud today that will continue to function should the
> undercloud (momentarily) go down.

Yeah, we'd definitely need to be able to attach the controller cluster 
to the right networks in order for this to work, and an HA undercloud 
would need to be optional.

Can any Magnum folks reading comment on this?

> Then there is the fact that Magnum would be calling Heat to create our
> baremetal servers (Magnum creates the OS::Nova::Server resources... not
> our own Heat templates). This is fine, but we have a lot of value-add
> in our own templates.

Isn't that sorta the problem? ;)

> We could actually write our own Heat templates and
> plug them into magnum.conf via k8s_atomic_template_path= or
> mesos_fedora_template_path= (doesn't exist yet, but it could?). What
> this means for our workflow, and how end users would configure
> underlying parameters, would need to be discussed. Would we still have
> our own Heat templates that created OS::Magnum::Bay resources?

I assume so, yes.
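
For concreteness, here's a rough sketch of what one of our own templates 
delegating cluster provisioning to Magnum could look like. The resource 
type OS::Magnum::Bay is real; everything else (parameter names, counts, 
the assumption that the baymodel is created out of band) is hypothetical:

```yaml
heat_template_version: 2015-10-15

description: >
  Sketch only: a TripleO-owned template that hands cluster
  provisioning to Magnum via an OS::Magnum::Bay resource.

parameters:
  baymodel:
    type: string
    description: Name or UUID of a pre-created Magnum baymodel
  controller_count:
    type: number
    default: 3

resources:
  controller_bay:
    type: OS::Magnum::Bay
    properties:
      name: overcloud-controllers
      baymodel: {get_param: baymodel}
      master_count: 1
      node_count: {get_param: controller_count}

outputs:
  bay_id:
    value: {get_resource: controller_bay}
```

The service-level parameters Dan mentions below would then live alongside 
the bay parameters in the same stack.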

> Or would
> we use totally separate stacks to generate these things? The former
> causes a bit of a "Yo Dawg: I hear you like Heat, so I'm calling Heat
> to call Magnum to call Heat to spin up your cloud".

That's an implementation detail of Magnum... I don't see why it would be 
an issue. Actually, using the same tool at two different levels of 
abstraction seems strictly better than using two different tools.

Props for the Xzibit reference though :D
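
As an aside, the template-path override Dan mentions would be a one-line 
magnum.conf change. Something like this sketch (the path is hypothetical, 
and I'm assuming the option lives in the [bay] group):

```ini
[bay]
# Point Magnum at TripleO-owned templates instead of its bundled ones.
# k8s_atomic_template_path exists today; a mesos equivalent would need
# to be added, as Dan notes.
k8s_atomic_template_path = /usr/share/tripleo-heat-templates/magnum/kubecluster.yaml
```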

> Perhaps I'm off
> here, but we'd still want to expose many of the service-level
> parameters to end users via our workflows... and then use them to
> deploy containers into the bays, so something like this would need to
> happen I think.
>
> Aside from creating the bays we likely wouldn't use the /containers API
> to spin up containers but would go directly at Mesos or Kubernetes
> instead. The Magnum API just isn't leaky enough yet for us to get
> access to all the container bits we'd need at the moment. Over time it
> could get there... but I don't think it is there yet.

Yes, totally agree, this is what I was suggesting.
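
To illustrate "going directly at Kubernetes": once the bay exists, we'd 
feed pod definitions straight to the Kubernetes API in the bay rather 
than going through Magnum's /containers API. A sketch for a single 
controller service, where the image name and the hostNetwork choice are 
purely hypothetical:

```yaml
# Sketch: a pod definition submitted directly to the bay's Kubernetes
# API (e.g. with kubectl create -f), bypassing Magnum's /containers API.
apiVersion: v1
kind: Pod
metadata:
  name: keystone
  labels:
    tripleo-service: keystone
spec:
  hostNetwork: true   # assumption: bind on the host's isolated networks
  containers:
    - name: keystone-api
      image: tripleoupstream/centos-binary-keystone:latest  # hypothetical
      ports:
        - containerPort: 5000
        - containerPort: 35357
```

This is where the network-isolation and HA concerns above would actually 
get expressed, which is presumably why owning this layer matters to us.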

> So all that to say maybe we should integrate it into instack-undercloud
> as a baremetal containers side project. This would also make it easier
> to develop and evolve Magnum baremetal capabilities if we really want
> to pursue them. But I think we'd have an easier go of implementing our
> containers architecture (with all the network isolation, HA
> architecture, and underpinnings we desire) by managing our own
> deployment of these things in the immediate future.
>
> Dan
>
>
