[openstack-dev] Mesos Conductor using container-create operations

Ton Ngo ton at us.ibm.com
Fri Dec 11 01:17:54 UTC 2015


I think extending the container object to Mesos via a command like
container-create is a fine idea.  Going into the details, however, we run into
some complications:
1. The user would still have to choose a DSL to express the container.
This would have to be a kube and/or swarm DSL since we don't want to invent
a new one.
2. For the Mesos bay in particular, kube or swarm may be running on top of
Mesos alongside Marathon, so somewhere along the line Magnum has to be able
to make the distinction and handle things appropriately (see the sketch below).
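
To make point 2 concrete, here is a minimal, purely hypothetical sketch of the
routing decision the conductor would face. None of these names exist in Magnum;
they only illustrate the problem.

    # Hypothetical sketch: how a mesos-conductor might decide which framework
    # on a Mesos bay should receive a container-create request. The detection
    # logic and names are placeholders, not Magnum code.

    def route_container_create(bay, container_spec):
        """Decide which framework on a Mesos bay handles the request."""
        # A Mesos bay may run Marathon natively, or kube/swarm layered on top
        # of Mesos, so the conductor must identify the target framework before
        # it can translate the container spec into that framework's format.
        framework = bay.get('framework', 'marathon')  # placeholder detection
        if framework == 'marathon':
            return ('marathon', container_spec)   # translate to a Marathon app
        if framework in ('kubernetes', 'swarm'):
            return (framework, container_spec)    # hand off to that COE's DSL
        raise ValueError('Unknown framework on Mesos bay: %s' % framework)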

We should think through the scenarios carefully to come to agreement on how
this would work.

Ton Ngo,




From:	Hongbin Lu <hongbin.lu at huawei.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	12/09/2015 03:09 PM
Subject:	Re: [openstack-dev] Mesos Conductor using container-create
            operations



As Bharath mentioned, I am +1 to extending the “container” object to the Mesos
bay. In addition, I propose to extend “container” to k8s as well (the details
are described in this BP [1]). The goal is to promote this API resource to be
technology-agnostic and make it portable across all COEs. I will justify this
proposal with a use case.

Use case:
I have an app that I used to deploy to a VM in OpenStack. Now I want to deploy
it to a container. I have basic knowledge of containers but am not familiar
with any specific container technology. I want a simple and intuitive API to
operate a container (i.e. CRUD), like the one I used to operate a VM. I find it
hard to learn the DSL introduced by a specific COE (k8s/marathon). Most
importantly, I want my deployment to be portable regardless of the choice of
cluster management system and/or container runtime. I want OpenStack to be the
only integration point, because I don’t want to be locked in to a specific
container technology, and I want to avoid the risk of that technology being
replaced by another in the future. Optimally, I want Keystone to be the only
authentication system I need to deal with; I don't want the extra complexity of
an additional authentication system introduced by a specific COE.

Solution:
Implement "container" object for k8s and mesos bay (and all the COEs
introduced in the future).
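
As a rough illustration of what a technology-agnostic "container" resource
could look like on the conductor side, here is a hypothetical sketch. The class
and method names are assumptions, not existing Magnum code.

    # Hypothetical sketch of a COE-agnostic conductor interface: one driver per
    # COE (swarm, k8s, mesos), each exposing the same CRUD operations.
    import abc


    class ContainerConductor(abc.ABC):

        @abc.abstractmethod
        def create(self, bay, name, image, command=None):
            """Create a container in the given bay."""

        @abc.abstractmethod
        def show(self, bay, container_id):
            """Return the container's current state."""

        @abc.abstractmethod
        def delete(self, bay, container_id):
            """Remove the container from the bay."""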

That's it. I would appreciate it if you could share your thoughts on this
proposal.

[1] https://blueprints.launchpad.net/magnum/+spec/unified-containers

Best regards,
Hongbin

From: bharath thiruveedula [mailto:bharath_ves at hotmail.com]
Sent: December-08-15 11:40 PM
To: openstack-dev at lists.openstack.org
Subject: [openstack-dev] Mesos Conductor using container-create operations

Hi,

As we discussed in the last meeting, we cannot continue with the changes in
container-create [1] unless we have a suitable use case. But I honestly feel we
should have some kind of support for Mesos + Marathon apps, because Magnum
supports COE-specific functionality for Docker Swarm (container-create) and k8s
(pod-create, rc-create, ...) but not for Mesos bays.

As Hongbin suggested, we could use the existing container-create functionality
and support it in the mesos-conductor. Currently we have container-create only
for the Docker Swarm bay. Let's support the same command for the Mesos bay
without any changes on the client side.
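
To illustrate how the mesos-conductor could satisfy container-create without
any client-side change, here is a hypothetical sketch that maps a generic
container spec onto a Marathon app definition. Only Marathon's POST /v2/apps
endpoint and app JSON schema are real; the field mapping and the function name
are assumptions.

    # Hypothetical sketch: translate a generic Magnum container spec into a
    # Marathon app and launch it via Marathon's REST API.
    import requests


    def create_container_on_mesos_bay(marathon_url, container):
        """Create a Marathon app that runs the requested Docker container."""
        app = {
            "id": "/" + container["name"],
            "instances": 1,
            "cpus": container.get("cpus", 0.5),
            "mem": container.get("memory_mb", 128),
            "container": {
                "type": "DOCKER",
                "docker": {"image": container["image"], "network": "BRIDGE"},
            },
        }
        if container.get("command"):
            app["cmd"] = container["command"]
        resp = requests.post(marathon_url + "/v2/apps", json=app)
        resp.raise_for_status()
        return resp.json()

Called with something like {"name": "ping", "image": "cirros", "command":
"ping -c 4 8.8.8.8"}, this would give the same user experience as
container-create on a swarm bay.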

Let me know your suggestions.

Regards
Bharath T
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


