[openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

Hongbin Lu hongbin.lu at huawei.com
Mon Jun 1 22:10:45 UTC 2015


Hi Jay,

For your question “what is the mesos object that we want to manage”, the short answer is it depends. There are two options I can think of:

1. Don’t manage any object from Marathon directly. Instead, we can focus on the existing Magnum objects (e.g. container) and implement them using Marathon APIs where possible. Take the ‘container’ abstraction as an example: for a swarm bay, a container is implemented by calling docker APIs; for a mesos bay, a container could be implemented using Marathon APIs (it looks like Marathon’s ‘app’ object can be leveraged to operate a docker container; see the sketch after this list). The effect is that Magnum will have a set of common abstractions that are implemented differently by each bay type.

2. Do manage a few Marathon objects (e.g. app) directly. The effect is that Magnum will have additional API objects that come from Marathon (like what we have for the existing k8s objects: pod/service/rc).
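
To make option 1 more concrete, here is a rough sketch (not actual Magnum code) of how the ‘container’ operations could be backed by Marathon’s /v2/apps REST endpoint. The endpoint URL, helper names, and payload values below are illustrative assumptions only:

    # Hypothetical sketch: back Magnum's 'container' abstraction with
    # Marathon 'app' objects via its /v2/apps REST endpoint.
    import requests

    MARATHON_URL = 'http://marathon.example.com:8080'  # assumed endpoint

    def container_create(name, image, cpus=0.5, mem=128.0):
        # A docker container maps to a single-instance Marathon app.
        app = {
            'id': '/' + name,
            'cpus': cpus,
            'mem': mem,
            'instances': 1,
            'container': {
                'type': 'DOCKER',
                'docker': {'image': image, 'network': 'BRIDGE'},
            },
        }
        resp = requests.post(MARATHON_URL + '/v2/apps', json=app)
        resp.raise_for_status()
        return resp.json()

    def container_delete(name):
        # Deleting the container means deleting the corresponding app.
        requests.delete(MARATHON_URL + '/v2/apps/' + name)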
Thoughts?

Thanks
Hongbin

From: Jay Lau [mailto:jay.lau.513 at gmail.com]
Sent: May-29-15 1:35 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] Discuss mesos-bay-type Blueprint

I want to mention that there is another mesos framework named Chronos: https://github.com/mesos/chronos . It is used for job orchestration.

For the others, please refer to my comments inline.

2015-05-29 7:45 GMT+08:00 Adrian Otto <adrian.otto at rackspace.com>:
I’m moving this whiteboard to the ML so we can have some discussion to refine it, and then go back and update the whiteboard.

Source: https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type

My comments in-line below.


Begin forwarded message:

From: hongbin <hongbin.lu at huawei.com>
Subject: COMMERCIAL:[Blueprint mesos-bay-type] Add support for mesos bay type
Date: May 28, 2015 at 2:11:29 PM PDT
To: <adrian.otto at rackspace.com>
Reply-To: hongbin <hongbin.lu at huawei.com>

Blueprint changed by hongbin:

Whiteboard set to:

I did some preliminary research on possible implementations. I think this BP can be implemented in two steps:
1. Develop a heat template for provisioning a mesos cluster.
2. Implement a magnum conductor for managing the mesos cluster (a rough sketch follows below).
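
As a rough sketch of how step 1 and step 2 could fit together (the template body and parameters below are placeholders, not part of this blueprint), the conductor would drive the bay through python-heatclient much like the existing bay types do:

    # Hypothetical sketch: create a mesos bay by launching a Heat stack
    # from a (yet to be written) mesos cluster template.
    from heatclient import client as heat_client

    def create_mesos_bay(heat_endpoint, token, bay_name, template_body):
        heat = heat_client.Client('1', endpoint=heat_endpoint, token=token)
        # The parameters here are placeholders; the real template would
        # define its own inputs (image, flavor, number of slaves, ...).
        stack = heat.stacks.create(
            stack_name=bay_name,
            template=template_body,
            parameters={'number_of_slaves': 2})
        return stack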

Agreed, thanks for filing this blueprint!
For 2, the conductor is mainly used to manage objects for the CoE; k8s has pod, service, and rc. What is the mesos object that we want to manage? IMHO, mesos is a resource manager, and it needs to work with some framework to provide services.


First, I want to emphasize that mesos is not a service (it is more like a library). Therefore, mesos doesn't have a web API, and most users don't use mesos directly. Instead, they use a mesos framework that runs on top of mesos. As a result, a mesos bay needs to have a mesos framework pre-configured so that magnum can talk to the framework to manage the bay. There are several framework choices. Below is a list of frameworks that look like a fit (in my opinion). An exhaustive list of frameworks can be found here [1].

1. Marathon [2]
This is a framework controlled by a company (Mesosphere [3]), though it is open source. It supports running apps on clusters of docker containers. It is probably the most widely used mesos framework for long-running applications.

Marathon offers a REST API, whereas Aurora does not (unless one has materialized in the last month). This was the one we discussed at our Vancouver design summit, and we agreed that those wanting to use Apache Mesos are probably expecting this framework.


2. Aurora [4]
This is a framework governed by the Apache Software Foundation. It looks very similar to Marathon, but may be more advanced in nature. It has been used by Twitter at scale. Here [5] is a detailed comparison between Marathon and Aurora.

We should have an alternate bay template for Aurora in our contrib directory. If users like Aurora better than Marathon, we can discuss making it the default template and putting the Marathon template in the contrib directory.


3. Kubernetes/Docker swarm
It looks like swarm-mesos is not ready yet; I cannot find anything about it (besides several videos on YouTube). kubernetes-mesos does exist [6]. In theory, magnum should be able to deploy a mesos bay and talk to the bay through the kubernetes API. An advantage is that we can reuse the kubernetes conductor. A disadvantage is that it is not the 'mesos' way to manage containers; users from the mesos community are probably more comfortable managing containers through Marathon/Aurora.

If you want Kubernetes, you should use the Kubernetes bay type. If you want Kubernetes controlling Mesos, make a custom Heat template for that, and we can put it into contrib.
Agreed. Even when using Mesos as the resource manager, end users can still use the magnum API to create pods, services, and rcs.

If you want Swarm controlling Mesos, then you want BOTH a Swarm bay *and* a Mesos bay, with the Swarm bay configured to use the Mesos bay via the (currently in development) integration hook for Mesos in Swarm.

Any opposing viewpoints to consider?

Thanks,

Adrian


--hongbin

[1] http://mesos.apache.org/documentation/latest/mesos-frameworks/
[2] https://github.com/mesosphere/marathon
[3] https://mesosphere.com/
[4] http://aurora.apache.org/
[5] http://stackoverflow.com/questions/28651922/marathon-vs-aurora-and-their-purposes
[6] https://github.com/mesosphere/kubernetes-mesos

--
Add support for mesos bay type
https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type





--
Thanks,
Jay Lau (Guangya Liu)