[openstack-dev] [magnum]swarm + compose = k8s?

Hongbin Lu hongbin.lu at huawei.com
Thu Oct 1 18:50:59 UTC 2015


Do you mean this proposal: http://specs.openstack.org/openstack/keystone-specs/specs/juno/hierarchical_multitenancy.html ? It looks like support for hierarchical roles/privileges, and I couldn't find anything related to resource sharing. I am not sure whether it can address the use cases Kris mentioned.

Best regards,
Hongbin

From: Fox, Kevin M [mailto:Kevin.Fox at pnnl.gov]
Sent: October-01-15 11:58 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

I believe keystone already supports hierarchical projects....

Thanks,
Kevin
________________________________
From: Hongbin Lu [hongbin.lu at huawei.com]
Sent: Thursday, October 01, 2015 7:39 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
Kris,

I think the proposal of hierarchical projects is out of scope for Magnum, and you might need to bring it up at the Keystone or cross-project meeting. I am going to propose a workaround that might work for you within the existing tenancy model.

Suppose there is a department (department A) with two subteams (team 1 and team 2). You can create three projects:

*         Project A

*         Project A-1

*         Project A-2

Then you can assign users to projects in the following ways:

*         Assign team 1 members to both Project A and Project A-1

*         Assign team 2 members to both Project A and Project A-2

Then you can create a bay in project A, which is shared by the whole department. In addition, each subteam can create their own bays in project A-1 or A-2 if they want. Does that address your use cases?
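
For illustration, here is a rough sketch of that setup with python-keystoneclient (a sketch only; the project names, user names, member role and baymodel name are illustrative placeholders, not real resources):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default', project_domain_id='default')
    keystone = ks_client.Client(session=session.Session(auth=auth))

    # One project for the department, one per subteam
    proj_a  = keystone.projects.create(name='project-a',   domain='default')
    proj_a1 = keystone.projects.create(name='project-a-1', domain='default')
    proj_a2 = keystone.projects.create(name='project-a-2', domain='default')

    member = keystone.roles.find(name='_member_')  # or whatever member role your cloud uses

    # Look up existing users per team (names are placeholders)
    team1 = [keystone.users.find(name=n) for n in ('alice', 'bob')]
    team2 = [keystone.users.find(name=n) for n in ('carol', 'dave')]

    # Team 1 gets both project A and A-1; team 2 gets both project A and A-2
    for user in team1:
        keystone.roles.grant(member, user=user, project=proj_a)
        keystone.roles.grant(member, user=user, project=proj_a1)
    for user in team2:
        keystone.roles.grant(member, user=user, project=proj_a)
        keystone.roles.grant(member, user=user, project=proj_a2)

The shared bay is then created while scoped to project A (and any per-team bays while scoped to A-1/A-2), e.g. "magnum bay-create --name dept-a-bay --baymodel k8sbaymodel --node-count 2", assuming a baymodel already exists.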

Best regards,
Hongbin

From: Kris G. Lindgren [mailto:klindgren at godaddy.com]
Sent: September-30-15 7:26 PM
To: openstack-dev at lists.openstack.org
Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?

We are looking at deploying Magnum as an answer for how we do containers company-wide at GoDaddy.  I am going to agree with both you and Josh.

I agree that managing one large system is going to be a pain, and past experience tells me this won't be practical or scale; however, from experience I also know exactly the pain Josh is talking about.

We currently have ~4k projects in our internal OpenStack cloud, and about 1/4 of those projects are currently doing some form of containers on their own, with more joining every day.  If all of these projects were to convert over to the current Magnum configuration, we would suddenly be attempting to support/configure ~1k Magnum clusters.  Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + LBaaS VIPs + floating IPs.  From a capacity standpoint this is an excessive amount of duplicated infrastructure to spin up in projects where people may be running 10-20 containers per project.  From an operator support perspective this is a special level of hell that I do not want to get into.  Even if I am off by 75%, 250 clusters still sucks.
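
To put numbers on that (a back-of-envelope sketch only; the per-cluster footprint is my assumption, not a measurement):

    # Rough back-of-envelope, per the paragraph above
    total_projects     = 4000
    container_projects = total_projects // 4           # ~1000 projects already doing containers
    clusters           = container_projects            # one bay per project under the current model
    nodes_per_cluster  = 2                             # minimum kube nodes for "HA"
    kube_nodes         = clusters * nodes_per_cluster  # ~2000 VMs
    lbaas_vips         = clusters                      # plus one LBaaS VIP per cluster
    floating_ips       = clusters                      # plus one floating IP per cluster
    # Even if the adoption estimate is off by 75%:
    clusters_off_by_75 = clusters // 4                 # ~250 clusters still to operate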

From my point of view, an ideal setup for companies like ours (Yahoo/GoDaddy) would be the ability to support hierarchical projects in Magnum.  That way we could create a project for each department, and then the subteams of those departments can have their own projects.  We create a bay per department.  Sub-projects, if they want to, can create their own bays (but support of the kube cluster would then fall to that team).  When a sub-project spins up a pod on a bay, minions get created inside that team's sub-project, and the containers in that pod run on the capacity that was spun up under that project; the minions for each pod would be in a scaling group and as such grow/shrink as dictated by load.

The above would make it so we support a minimal, yet IMHO reasonable, number of kube clusters, give people who can't or don't want to fall in line with the provided resources a way to make their own, and still offer a "good enough for a single company" level of multi-tenancy.
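
For what it's worth, the project-hierarchy half of this already exists on the Keystone side (hierarchical multitenancy); a rough sketch of what the structure could look like, keeping in mind Magnum itself has no awareness of the hierarchy today (names and credentials are illustrative placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client as ks_client

    keystone = ks_client.Client(session=session.Session(auth=v3.Password(
        auth_url='http://keystone:5000/v3', username='admin', password='secret',
        project_name='admin', user_domain_id='default', project_domain_id='default')))

    # Department project with per-subteam child projects
    dept   = keystone.projects.create(name='dept-a', domain='default')
    team1p = keystone.projects.create(name='dept-a-team1', domain='default', parent=dept)
    team2p = keystone.projects.create(name='dept-a-team2', domain='default', parent=dept)

    # The department bay would live in dept-a; the missing piece today is any
    # way for pods created from the child projects to land minions/capacity
    # that is owned and accounted for by those child projects.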

>Joshua,
>
>If you share resources, you give up multi-tenancy.  No COE system has the
>concept of multi-tenancy (kubernetes has some basic implementation but it
>is totally insecure).  Not only does multi-tenancy have to "look like" it
>offers multiple tenants isolation, but it actually has to deliver the
>goods.
>
>I understand that at first glance a company like Yahoo may not want
>separate bays for their various applications because of the perceived
>administrative overhead.  I would then challenge Yahoo to go deploy a COE
>like kubernetes (which has no multi-tenancy or a very basic implementation
>of such) and get it to work with hundreds of different competing
>applications.  I would speculate the administrative overhead of getting
>all that to work would be greater than the administrative overhead of
>simply doing a bay create for the various tenants.
>
>Placing tenancy inside a COE seems interesting, but no COE does that
>today.  Maybe in the future they will.  Magnum was designed to present an
>integration point between COEs and OpenStack today, not five years down
>the road.  It's not as if we took shortcuts to get to where we are.
>
>I will grant you that density is lower with the current design of Magnum
>vs a full-on integration with OpenStack within the COE itself.  However,
>that model, which is what I believe you proposed, is a huge design change to
>each COE which would overly complicate the COE at the gain of increased
>density.  I personally don't feel that pain is worth the gain.



___________________________________________________________________
Kris Lindgren
Senior Linux Systems Engineer
GoDaddy