[openstack-dev] [magnum]swarm + compose = k8s?

Kai Qiang Wu wkqwu at cn.ibm.com
Mon Feb 15 00:17:14 UTC 2016


HongBin,

See my replies and questions inline. >>


Thanks

Best Wishes,
--------------------------------------------------------------------------------
Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wkqwu at cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
         No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China
100193
--------------------------------------------------------------------------------
Follow your heart. You are miracle!



From:	Hongbin Lu <hongbin.lu at huawei.com>
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	15/02/2016 01:26 am
Subject:	Re: [openstack-dev] [magnum]swarm + compose = k8s?



Kai Qiang,

A major benefit is to have Magnum manage the COEs for end-users. Currently,
Magnum basically has its end-users manage the COEs by themselves after a
successful deployment. This might work well for domain users, but it is a
pain for non-domain users to manage their COEs. By moving master nodes out
of users' tenants, Magnum could offer users a COE management service. For
example, Magnum could offer to monitor the etcd/swarm-manage clusters and
recover them on failure. Again, the pattern of managing COEs for end-users
is what the Google container service and the AWS container service offer. I
guess it is fair to conclude that there are use cases out there?

>> I am not sure what you mean by domain here. Is it a Keystone domain, or
something else? And what is the non-domain users' case for managing the COEs?

If we decide to offer a COE management service, we could discuss further how
to consolidate the IaaS resources to improve utilization. Solutions could be
(i) introducing centralized control services for all tenants/clusters, or
(ii) keeping the control services separate but isolating them with containers
(instead of VMs). A typical use case is what Kris mentioned below.

>> Option (i) is more complicated than (ii), and I do not see much
utilization benefit from (i) here; instead, it could introduce a significant
upgrade burden and service interference across all tenants/clusters.
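
(For illustration only: a minimal sketch of what option (ii) could look like,
with each tenant's control service running as a container on a shared host
instead of on a dedicated master VM. It assumes the Docker SDK for Python;
the etcd image, binary path, ports, and tenant labels are placeholders rather
than anything Magnum ships today.)

import docker

client = docker.from_env()

def start_tenant_etcd(tenant_id, host_port):
    # Run an isolated etcd instance for one tenant's bay as a container,
    # so upgrades and failures stay scoped to that tenant/cluster
    # (the interference concern raised against option (i)).
    return client.containers.run(
        "quay.io/coreos/etcd",                     # placeholder image
        command=[
            "/usr/local/bin/etcd",                 # placeholder binary path
            "--name", "etcd-%s" % tenant_id,
            "--listen-client-urls", "http://0.0.0.0:2379",
            "--advertise-client-urls", "http://0.0.0.0:2379",
        ],
        name="etcd-%s" % tenant_id,
        labels={"magnum.tenant": tenant_id},       # keep tenants distinguishable
        ports={"2379/tcp": host_port},             # one host port per tenant
        detach=True,
    )

# One control-service container per tenant on the shared host:
start_tenant_etcd("tenant-a", 32379)
start_tenant_etcd("tenant-b", 32380)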


Best regards,
Hongbin

From: Kai Qiang Wu [mailto:wkqwu at cn.ibm.com]
Sent: February-13-16 11:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?



Hi HongBin and Egor,
I went through what you both discussed, and I am thinking about what the real
utilisation benefits are here.
The user cases look like the following:

User A wants to have a COE provisioned.
User B wants a separate COE (different tenant, not shared).
User C wants to use an existing COE (same tenant as User A, shared).

When you talk about the utilisation case, it seems you mean that users in
different tenants want to use the same control nodes to manage different
worker nodes. That amounts to making the COE OpenStack-tenant aware, and it
also means introducing another control/scheduling layer above the COEs. We
need to think about whether this is a typical user case, and what the benefit
is compared with containerisation.


And finally, it is a topic that can be discussed at the midcycle meeting.


Thanks

Best Wishes,
--------------------------------------------------------------------------------

Kai Qiang Wu (吴开强 Kennan)
IBM China System and Technology Lab, Beijing

E-mail: wkqwu at cn.ibm.com
Tel: 86-10-82451647
Address: Building 28(Ring Building), ZhongGuanCun Software Park,
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
--------------------------------------------------------------------------------

Follow your heart. You are miracle!


From: Hongbin Lu <hongbin.lu at huawei.com>
To: Guz Egor <guz_egor at yahoo.com>, "OpenStack Development Mailing List (not
for usage questions)" <openstack-dev at lists.openstack.org>
Date: 13/02/2016 11:02 am
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?




Egor,

Thanks for sharing your insights. I gave it more thought. Maybe the goal can
be achieved without implementing a shared COE. We could move all the master
nodes out of user tenants, containerize them, and consolidate them into a set
of VMs/physical servers.

I think we could separate the discussion into two questions:
  1. Should Magnum introduce a new bay type in which master nodes are
     managed by Magnum (not by users themselves), like what GCE [1] or
     ECS [2] does?
  2. How do we consolidate the control services that originally run on
     the master nodes of each cluster?

Note that the proposal is for adding a new COE (not for changing the existing
COEs). That means users can continue to provision the existing self-managed
COEs (k8s/swarm/mesos) if they choose to.

[1] https://cloud.google.com/container-engine/
[2] http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html

Best regards,
Hongbin

From: Guz Egor [mailto:guz_egor at yahoo.com]
Sent: February-12-16 2:34 PM
To: OpenStack Development Mailing List (not for usage questions)
Cc: Hongbin Lu
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hongbin,

I am not sure that it's a good idea; it looks like you are proposing that
Magnum enter the "schedulers war" (personally, I am tired of the Mesos vs
Kubernetes vs Swarm debates).
If your concern is just utilization, you can always run the control plane on
the "agent/slave" nodes. The main reason operators (at least in our case) keep
them separate is that they need different attention (e.g. I almost don't care
why/when an "agent/slave" node died, but I always double-check that a master
node was repaired or replaced).

One use case I see for a shared COE (at least in our environment) is when
developers want to run just a Docker container without installing anything
locally (e.g. docker-machine). But in most cases it's just examples from the
internet or their own experiments ):

But we definitely should discuss it during midcycle next week.

---
Egor

From: Hongbin Lu <hongbin.lu at huawei.com>
To: OpenStack Development Mailing List (not for usage questions) <
openstack-dev at lists.openstack.org>
Sent: Thursday, February 11, 2016 8:50 PM
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

Hi team,

Sorry for bringing up this old thread, but a recent debate on the container
resource [1] reminded me of the use case Kris mentioned below. I am going to
propose a preliminary idea to address the use case. Of course, we could
continue the discussion in the team meeting or at the midcycle.

Idea: Introduce a docker-native COE that consists of only
minion/worker/slave nodes (no master nodes).
Goal: Eliminate duplicated IaaS resources (master node VMs, LBaaS VIPs,
floating IPs, etc.).
Details: A traditional COE (k8s/swarm/mesos) consists of master nodes and
worker nodes. In these COEs, control services (e.g. the scheduler) run on
master nodes, and containers run on worker nodes. If we can port the COE
control services to the Magnum control plane and share them with all tenants,
we eliminate the need for master nodes, thus improving resource utilization.
In the new COE, users create/manage containers through Magnum API endpoints.
Magnum is responsible for spinning up tenant VMs, scheduling containers onto
the VMs, and managing the life cycle of those containers. Unlike other COEs,
containers created by this COE are considered OpenStack-managed resources.
That means they will be tracked in the Magnum DB and accessible by other
OpenStack services (e.g. Horizon, Heat).
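
(A hypothetical illustration of the workflow described above, in which users
create containers through a Magnum API endpoint and never see master nodes.
The /v1/containers path, request/response fields, endpoint URL, token, and
bay UUID are assumptions made for this sketch, not a documented Magnum API.)

import requests

MAGNUM_URL = "http://magnum.example.com:9511"    # placeholder endpoint
HEADERS = {
    "X-Auth-Token": "<keystone-token>",          # placeholder token
    "Content-Type": "application/json",
}

# The user simply asks Magnum to run a container; Magnum schedules it onto a
# tenant worker VM that Magnum itself manages, and records it in its DB.
resp = requests.post(
    MAGNUM_URL + "/v1/containers",
    headers=HEADERS,
    json={
        "name": "web-1",
        "image": "nginx:latest",
        "bay_uuid": "<docker-native-bay-uuid>",  # placeholder bay reference
        "command": "nginx -g 'daemon off;'",
    },
)
resp.raise_for_status()
container = resp.json()
print(container)    # tracked as an OpenStack-managed resource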

What do you think about this proposal? Let's discuss.

[1] https://etherpad.openstack.org/p/magnum-native-api

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindgren at godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying Magnum as an answer for how we do containers
company-wide at GoDaddy. I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past
experience tells me this won't be practical or scale; however, from experience
I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal OpenStack cloud, and about 1/4
of the projects are currently doing some form of containers on their own, with
more joining every day. If all of these projects were to convert over to the
current Magnum configuration, we would suddenly be attempting to
support/configure ~1k Magnum clusters. Considering that everyone will want it
HA, we are looking at a minimum of 2 kube nodes per cluster, plus LBaaS VIPs
and floating IPs. From a capacity standpoint this is an excessive amount of
duplicated infrastructure to spin up for projects where people may be running
10-20 containers per project. From an operator support perspective this is a
special level of hell that I do not want to get into. Even if I am off by 75%,
250 still sucks.
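
(A quick back-of-the-envelope check of the duplication described above, using
only the numbers from this paragraph.)

projects = 4000                       # ~4k projects in the internal cloud
container_adoption = 0.25             # ~1/4 already doing containers

clusters = int(projects * container_adoption)       # ~1000 Magnum clusters
min_kube_nodes = 2                    # HA minimum of kube nodes per cluster

print("clusters:", clusters)                                    # 1000
print("duplicated kube node VMs:", clusters * min_kube_nodes)   # 2000
print("duplicated LBaaS VIPs:", clusters)                       # 1000
print("duplicated floating IPs:", clusters)                     # 1000

# Even if the estimate is off by 75%, that still leaves ~250 clusters.
print("off-by-75% case:", clusters * 25 // 100, "clusters")     # 250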

From my point of view, an ideal outcome for companies like ours
(Yahoo/GoDaddy) would be for Magnum to support hierarchical projects. That way
we could create a project for each department, and then the subteams of those
departments could have their own projects. We create a bay per department.
Sub-projects, if they want to, can create their own bays (but support of that
kube cluster would then fall to that team). When a sub-project spins up a pod
on a bay, minions get created inside that team's sub-project, and the
containers in that pod run on the capacity that was spun up under that
project; the minions for each pod would be in a scaling group and as such
grow/shrink as dictated by load.
The above would make it so that we support a minimal, yet IMHO reasonable,
number of kube clusters, give people who can't or don't want to fall in line
with the provided resources a way to make their own, and still offer a "good
enough for a single company" level of multi-tenancy.
>Joshua,
>
>If you share resources, you give up multi-tenancy. No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation, but it
>is totally insecure). Not only does multi-tenancy have to "look like" it
>offers isolation to multiple tenants, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead. I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications. I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today. Maybe in the future they will. Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road. It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs. a full-on integration with OpenStack within the COE itself. However,
>that model, which is what I believe you proposed, is a huge design change to
>each COE, and it would overly complicate the COE for the gain of increased
>density. I personally don't feel that pain is worth the gain.


___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy


__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

