[User-committee] [AppEco-WG] Kubernetes-OpenStack-SIG > Re: K8s multi-tenancy

David F Flanders dff.openstack at gmail.com
Mon Jul 4 02:41:23 UTC 2016


I'm also forwarding this conversation to the AppEco-WG mailing list so some of
the developers there are aware of the discussion below...

On Sun, Jul 3, 2016 at 4:30 AM, Clayton Coleman <ccoleman at redhat.com> wrote:

>
>
> On Jul 1, 2016, at 7:46 PM, Hongbin Lu <hongbin034 at gmail.com> wrote:
>
>
>
> On Fri, Jul 1, 2016 at 6:14 AM, Salvatore Orlando <salv.orlando at gmail.com>
> wrote:
>
>> Hello,
>>
>> some comments from me as well.
>>
>> Salvatore
>>
>> On 1 July 2016 at 00:48, Chris Marino <chris at romana.io> wrote:
>>
>>> Hi Hongbin, as a first step, I like the idea of tenants using
>>> OpenStack to provision VMs for their own private k8s pods.  And with
>>> RBAC [1] and network policy [2] support in k8s 1.3, I think it can
>>> probably be built.
>>>
>>
>> I think this also requires that each node is exclusively assigned to a
>> tenant. The Kubernetes scheduler supports node selectors and node
>> affinity [3]; that is something this solution should leverage.
>> The flip side is that you must ensure a tenant-specific node only hosts
>> pods belonging to that tenant, so every pod must specify a node
>> selector. If you are wrapping pod creation requests via Magnum (or
>> Higgins???) APIs, you can surely do that.
>>
>> [3] http://kubernetes.io/docs/user-guide/node-selection/
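>>
>> As a concrete illustration, a minimal sketch using the Kubernetes Python
>> client; the node name and the tenant label key are hypothetical, not an
>> established convention:
>>
>>   # pip install kubernetes
>>   from kubernetes import client, config
>>
>>   config.load_kube_config()
>>   v1 = client.CoreV1Api()
>>
>>   # Label a VM that the tenant provisioned, so the scheduler can target it.
>>   v1.patch_node("tenant-a-minion-1",
>>                 {"metadata": {"labels": {"example.org/tenant": "tenant-a"}}})
>>
>>   # Every pod for this tenant must carry a matching node selector.
>>   pod = client.V1Pod(
>>       metadata=client.V1ObjectMeta(name="demo", labels={"tenant": "tenant-a"}),
>>       spec=client.V1PodSpec(
>>           containers=[client.V1Container(name="app", image="nginx")],
>>           node_selector={"example.org/tenant": "tenant-a"},
>>       ),
>>   )
>>   v1.create_namespaced_pod(namespace="tenant-a", body=pod)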
>>
>>
>>>
>>> As for what might go upstream, that's hard to tell. But having this as
>>> an example could provide lots of use cases that need to be addressed.
>>>
>>> [1] https://coreos.com/blog/kubernetes-v1.3-preview.html
>>> [2]
>>> https://github.com/caseydavenport/kubernetes.github.io/blob/e232dae842529e17a17f5490a5f8db9e2f900408/docs/user-guide/networkpolicies.md
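>>>
>>> To sketch how the network-policy piece referenced in [2] could isolate a
>>> tenant, here is a rough example with the Kubernetes Python client that
>>> only admits ingress from pods in the same per-tenant namespace (it uses
>>> the current networking/v1 API; in 1.3 the resource was still beta, and
>>> the namespace name is made up):
>>>
>>>   from kubernetes import client, config
>>>
>>>   config.load_kube_config()
>>>   net = client.NetworkingV1Api()
>>>
>>>   policy = client.V1NetworkPolicy(
>>>       metadata=client.V1ObjectMeta(name="tenant-a-isolation"),
>>>       spec=client.V1NetworkPolicySpec(
>>>           pod_selector=client.V1LabelSelector(),  # every pod in the namespace
>>>           ingress=[client.V1NetworkPolicyIngressRule(
>>>               # an empty pod selector means "peers from this namespace only"
>>>               _from=[client.V1NetworkPolicyPeer(
>>>                   pod_selector=client.V1LabelSelector())],
>>>           )],
>>>       ),
>>>   )
>>>   net.create_namespaced_network_policy(namespace="tenant-a", body=policy)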
>>>
>>> CM
>>>
>>> On Thu, Jun 30, 2016 at 12:47 PM, Hongbin Lu <hongbin034 at gmail.com>
>>> wrote:
>>> > Hi folks,
>>> >
>>> > This is a continued discussion about multi-tenancy. In short, I am
>>> > looking for a solution (ideally an upstream one) to make k8s
>>> > compatible with the multi-tenancy model of cloud providers, in
>>> > particular OpenStack.
>>> >
>>> > Currently, there is no perfect solution for that. The most common
>>> > approach is to place the entire k8s cluster on a set of VMs within a
>>> > single OpenStack tenant. This works well in some cases, but it is not
>>> > perfect. First, it requires end-users to manage their own clusters,
>>> > and they generally don't have the expertise to do that. Second,
>>> > resource utilization is low if there are many OpenStack tenants and
>>> > each tenant/user wants to create its own k8s cluster. That problem
>>> > was well described in this email [1].
>>>
>>
>> The beauty of this scenario is that we don't have to worry about it. It
>> has plenty of problems, but it is surely the easiest way to run
>> Kubernetes on OpenStack!
>>
>>
>>> >
>>> > A potential solution is to have a big k8s cluster hosted in one
>>> > OpenStack tenant and re-distribute it to other tenants. This solution
>>> > is not perfect either. One obstacle is that it is difficult for IaaS
>>> > providers to bill users because all the resources are mixed in a
>>> > single tenant. There are other challenges as well, for example how to
>>> > isolate containers from different tenants on the same host.
>>>
>>
>> It seems to me that you are implicitly assuming that a k8s cluster must
>> run in OpenStack instances, which is not necessary in many cases.
>> This approach frees tenants from the burden of managing the cluster by
>> themselves but introduces, in my opinion, a new set of issues that we
>> probably don't want to deal with - mostly associated with redistribution of
>> the cluster to other tenants.
>>
>>
>>> >
>>> > IMHO, an ideal solution is to have a centralized k8s control plane
>>> > and allow end-users to provision minion nodes from their own tenants,
>>> > like AWS ECS. This requires k8s to understand where each node is
>>> > coming from. For example, if OpenStack users want to create pods,
>>> > they need to pre-create a set of VMs in their tenants. K8s would use
>>> > the Keystone plugin to authenticate the users and schedule their pods
>>> > onto their VMs. Is this a reasonable requirement for k8s upstream?
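>>> >
>>> > To make the intended user-facing flow concrete, here is a sketch of
>>> > how a tenant user could obtain a Keystone token with keystoneauth1
>>> > and then talk to the shared k8s API with it. This assumes the API
>>> > server is fronted by an authenticator that accepts Keystone tokens as
>>> > bearer tokens, which is exactly the kind of plugin being discussed
>>> > here (it is not available out of the box in 1.3); all endpoints and
>>> > names are made up:
>>> >
>>> >   from keystoneauth1 import session
>>> >   from keystoneauth1.identity import v3
>>> >   from kubernetes import client
>>> >
>>> >   # Authenticate against Keystone and get a project-scoped token.
>>> >   auth = v3.Password(auth_url="https://keystone.example.org:5000/v3",
>>> >                      username="alice", password="secret",
>>> >                      project_name="tenant-a",
>>> >                      user_domain_name="Default",
>>> >                      project_domain_name="Default")
>>> >   token = session.Session(auth=auth).get_token()
>>> >
>>> >   # Present the Keystone token to the shared k8s API server.
>>> >   cfg = client.Configuration()
>>> >   cfg.host = "https://k8s.example.org:6443"
>>> >   cfg.api_key = {"authorization": "Bearer " + token}
>>> >   api = client.CoreV1Api(client.ApiClient(cfg))
>>> >
>>> >   # The user only ever sees (and schedules onto) their own tenant's VMs.
>>> >   for pod in api.list_namespaced_pod(namespace="tenant-a").items:
>>> >       print(pod.metadata.name)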
>>>
>>
>> That sounds like treating a Kubernetes cluster conceptually as a compute
>> service, with the cluster nodes as the hypervisors it manages?
>> That would seem a decent thing to do IMHO, and from my 30k ft perspective
>> it surely appears feasible, especially if you're using a service like
>> Higgins (Magnum???) to wrap the k8s APIs.
>>
>
> I am afraid wrapping the k8s APIs is out of Magnum's scope. Higgins
> (renamed to Zun) might do it, but that doesn't address the use case where
> end-users access the k8s cluster with the native tooling (i.e. kubectl).
> Frankly, the approach of building an external API wrapper looks like a
> hack. It is better to have a built-in multi-tenancy solution than to rely
> on an external system to tell k8s how to enforce multi-tenancy policy.
>
>
> TL;DR - yes.
>
> It has always been a goal to enable Kube for multi-tenancy, but to do so
> in a way that remains flexible enough so that many different use cases are
> possible.  For example, OpenShift and (probably) Tectonic are interested in
> "innate multi-tenancy" - the cluster provides tenancy and policy in an
> integrated fashion, without requiring an external authorization store.
> Others such as GKE and OpenStack have investigated "1:1 tenancy" - each
> cluster is a tenant.  Still others want something in the middle (what I
> call "programmable multi-tenancy"), where K8s has enough tools that a
> higher-level program can apply policy to the cluster but Kube itself
> isn't aware of the tenancy.
>
> Depending on which of these you are targeting, there are varying levels
> of readiness and varying amounts of work to be done.
>
> 1:1 tenancy is done (as noted).
>
> OpenShift is fully MT today (down to a fine-grained level, including a
> user and OAuth model that correlates to security policy on the cluster),
> but it will likely take some time to introduce those features into Kube.
> PodSecurityPolicy, Quota/Limits, the Authenticator/Authorizer interfaces,
> and the work CoreOS did to port the OpenShift authz engine into Kube are
> mostly done, but a lot remains.
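>
> To make the quota piece concrete, a minimal sketch with the Kubernetes
> Python client that caps what a single tenant namespace can consume (the
> namespace name and the limits are invented for illustration):
>
>   from kubernetes import client, config
>
>   config.load_kube_config()
>   v1 = client.CoreV1Api()
>
>   quota = client.V1ResourceQuota(
>       metadata=client.V1ObjectMeta(name="tenant-a-quota"),
>       spec=client.V1ResourceQuotaSpec(
>           hard={"pods": "50",
>                 "requests.cpu": "20",
>                 "requests.memory": "64Gi"}),
>   )
>   v1.create_namespaced_resource_quota(namespace="tenant-a", body=quota)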
>
> Those features should be usable for "programmable multi-tenancy" today,
> but in the programmable model an external party (what we in Kube call a
> controller, but it could be a management head or manual scripting) needs
> to manage that work.  Instead of trying to mutate API calls, you'll want
> to set up core policy on Kube to enforce your wishes.  That could be
> Magnum or another custom component.
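>
> For instance, a hypothetical external controller (Magnum, or something
> custom) could map each OpenStack project onto a namespace and stamp the
> policy objects into it. A rough sketch, with the label key invented for
> illustration:
>
>   from kubernetes import client, config
>   from kubernetes.client.rest import ApiException
>
>   config.load_kube_config()
>   v1 = client.CoreV1Api()
>
>   def ensure_tenant_namespace(project_id):
>       """Create (or tolerate an existing) namespace for an OpenStack project."""
>       ns = client.V1Namespace(metadata=client.V1ObjectMeta(
>           name="os-" + project_id,
>           labels={"example.org/openstack-project": project_id}))
>       try:
>           v1.create_namespace(body=ns)
>       except ApiException as e:
>           if e.status != 409:  # 409 = namespace already exists
>               raise
>       # The controller would also apply the quota, network policy, and
>       # RBAC bindings for the project here.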
>
> It would probably be best to start from what works today and help identify
> gaps - policy that cannot be enforced from outside the cluster without API
> mutation - and target those.  The policy proposals from Apcera and others,
> field level access control, enhanced defaulting and quota controls (we'd
> like to be able to quota arbitrary fields eventually), and allowing
> external admission hooks are all examples of ongoing work that can benefit
> from participation.
>
> --
> =================
> Twitter: @DFFlanders <https://twitter.com/dfflanders>
> Skype: david.flanders
> Based in Melbourne, Australia
>
>
>