[openstack-dev] [magnum]swarm + compose = k8s?
王华
wanghua.humble at gmail.com
Tue Feb 16 02:51:34 UTC 2016
I think master nodes should be controlled by Magnum, so that we can do the
operational work for users. AWS and GCE use this model, and master nodes are
resource-consuming. If master nodes are not controlled by users, we can do
some optimization, invisible to users, to reduce the cost. For
example, we can combine several master nodes into one with proper isolation.
Regards,
Wanghua
On Tue, Feb 16, 2016 at 1:52 AM, Hongbin Lu <hongbin.lu at huawei.com> wrote:
> Regarding the COE mode, it seems there are three options:
>
> 1. Place both master nodes and worker nodes in the user’s tenant
> (current implementation).
>
> 2. Place only worker nodes in the user’s tenant.
>
> 3. Hide both master nodes and worker nodes from the user’s tenant.
>
>
>
> Frankly, I don’t know which one will succeed/fail in the future. Each mode
> seems to have use cases. Maybe magnum could support multiple modes?
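>
> Just to illustrate those three options side by side, here is a tiny,
> purely hypothetical sketch of how a per-baymodel placement mode could be
> modelled; the enum name, values, and "master_placement" field are invented
> for discussion only and are not part of the Magnum API.
>
>     from enum import Enum
>
>     class MasterPlacement(Enum):          # hypothetical, for discussion only
>         IN_TENANT = "in-tenant"           # option 1: masters + workers in the user's tenant
>         WORKERS_ONLY = "workers-only"     # option 2: only workers in the user's tenant
>         HIDDEN = "hidden"                 # option 3: masters and workers hidden from the tenant
>
>     # A baymodel-like record could then carry the chosen mode, e.g.:
>     baymodel = {"name": "k8s-prod", "coe": "kubernetes",
>                 "master_placement": MasterPlacement.WORKERS_ONLY.value}
>     print(baymodel)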
>
>
>
> Best regards,
>
> Hongbin
>
>
>
> *From:* Corey O'Brien [mailto:coreypobrien at gmail.com]
> *Sent:* February-15-16 8:43 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>
>
> Hi all,
>
>
>
> A few thoughts to add:
>
>
>
> I like the idea of isolating the masters so that they are not
> tenant-controllable, but I don't think the Magnum control plane is the
> right place for them. They still need to be running on tenant-owned
> resources so that they have access to things like isolated tenant networks
> or so that any bandwidth they consume can still be attributed and billed to
> tenants.
>
>
>
> I think we should extend that concept a little to include worker nodes as
> well. While they should live in the tenant like the masters, they shouldn't
> be controllable by the tenant through anything other than the COE API. The
> main use case that Magnum should be addressing is providing a managed COE
> environment. Like Hongbin mentioned, Magnum users won't have the domain
> knowledge to properly maintain the swarm/k8s/mesos infrastructure, just as
> Nova users aren't expected to know how to manage a hypervisor.
>
>
>
> I agree with Egor that trying to have Magnum schedule containers is going
> to be a losing battle. Swarm/K8s/Mesos are always going to have better
> scheduling for their containers. We don't have the resources to try to be
> yet another container orchestration engine. Besides that, as a developer, I
> don't want to learn another set of orchestration semantics when I already
> know swarm or k8s or mesos.
>
>
>
> @Kris, I appreciate the real use case you outlined. In your idea of having
> multiple projects use the same masters, how would you intend to isolate
> them? As far as I can tell none of the COEs would have any way to isolate
> those teams from each other if they share a master. I think this is a big
> problem with the idea of sharing masters even within a single tenant. As an
> operator, I definitely want to know that users can isolate their resources
> from other users and tenants can isolate their resources from other tenants.
>
>
>
> Corey
>
>
>
> On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <peng at hyper.sh> wrote:
>
> Hi,
>
>
>
> I wanted to give some thoughts to the thread.
>
>
>
> There are various perspectives around “Hosted vs Self-managed COE”, but if
> you stand in the developer's position, it basically comes down to “Ops vs
> Flexibility”.
>
>
>
> For those who want more control of the stack, so as to customize it in any way
> they see fit, self-managed is a more appealing option. However, one may
> argue that the same job can be done with a heat template plus some patchwork of
> cinder/neutron, and that the heat template is more customizable than magnum,
> which probably imposes some requirements on the COE configuration.
>
>
>
> For people who don't want to manage the COE, hosted is a no-brainer. The
> question here is which one is the core compute engine in the stack,
> nova or the COE? Unless you are running a public, multi-tenant OpenStack
> deployment, it is highly likely that you are sticking with only one COE.
> Supposing k8s is what your team deals with every day, why do you need
> nova sitting under k8s, when its job is just launching some VMs? After all, it
> is the COE that orchestrates cinder/neutron.
>
>
>
> One idea is to put the COE at the same layer as nova. Instead of
> running atop nova, the two run side by side. So you get two compute
> engines: nova for IaaS workloads, k8s for CaaS workloads. If you go this way, hypernetes
> <https://github.com/hyperhq/hypernetes> is probably what you are looking
> for.
>
>
>
> Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with a
> Docker registry, and use nova to launch Docker images. But this is not done
> with nova-docker, simply because it is hard to integrate things like
> cinder/neutron with lxc. The idea is a nova-hyper driver
> <https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver>.
> Since Hyper is hypervisor-based, it is much easier to make it work with the
> others. SHAMELESS PROMOTION: if you are interested in this idea, we've
> submitted a proposal for the Austin summit:
> https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211
>
>
>
> Peng
>
>
>
> Disclaimer: I maintain Hyper.
>
>
>
> -----------------------------------------------------
>
> Hyper - Make VM run like Container
>
>
>
>
>
> On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu <hongbin.lu at huawei.com> wrote:
>
> My replies are inline.
>
>
>
> *From:* Kai Qiang Wu [mailto:wkqwu at cn.ibm.com]
> *Sent:* February-14-16 7:17 PM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
>
>
> HongBin,
>
> See my replies and questions in line. >>
>
>
> Thanks
>
> Best Wishes,
>
> --------------------------------------------------------------------------------
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wkqwu at cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> --------------------------------------------------------------------------------
> Follow your heart. You are miracle!
>
>
> From: Hongbin Lu <hongbin.lu at huawei.com>
> To: “OpenStack Development Mailing List (not for usage questions)“ <
> openstack-dev at lists.openstack.org>
> Date: 15/02/2016 01:26 am
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> ------------------------------
>
>
>
>
> Kai Qiang,
>
> A major benefit is to have Magnum manage the COEs for end-users.
> Currently, Magnum basically has its end-users manage the COEs by
> themselves after a successful deployment. This might work well for domain
> users, but it is a pain for non-domain users to manage their COEs. By
> moving master nodes out of users’ tenants, Magnum could offer users a COE
> management service. For example, Magnum could offer to monitor the
> etcd/swarm-manage clusters and recover them on failure. Again, the pattern
> of managing COEs for end-users is what the Google container service and the AWS
> container service offer. I guess it is fair to conclude that there are use
> cases out there?
>
> >> I am not sure what you mean by domain here: is it a Keystone domain
> or something else? What is the non-domain users' case for managing the COEs?
>
> Reply: I mean domain experts, i.e. people who are experts in
> kubernetes/swarm/mesos.
>
>
>
> If we decide to offer a COE management service, we could discuss further
> how to consolidate the IaaS resources to improve utilization.
> Solutions could be (i) introducing centralized control services for all
> tenants/clusters, or (ii) keeping the control services separated but
> isolating them with containers (instead of VMs). A typical use case is what
> Kris mentioned below.
>
> >> (i) is more complicated than (ii), and I do not see much
> utilization benefit here from (i); instead it could introduce
> a significant burden for upgrades and risk service interference across all
> tenants/clusters.
>
> Reply: Definitely we could discuss it further. I don’t have a preference in
> mind right now.
>
>
>
>
> Best regards,
> Hongbin
>
> *From:* Kai Qiang Wu [mailto:wkqwu at cn.ibm.com <wkqwu at cn.ibm.com>]
> * Sent:* February-13-16 11:32 PM
> * To:* OpenStack Development Mailing List (not for usage questions)
> * Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Hi HongBin and Egor,
> I went through what you talked about, and I am thinking about what the
> great benefit for utilisation is here.
> The user cases look like the following:
>
> User A wants to have a COE provisioned.
> User B wants to have a separate COE (different tenant, non-shared).
> User C wants to use an existing COE (same tenant as User A, shared).
>
> When you talked about the utilisation case, it seems you mentioned that
> users from different tenants want to use the same control node to manage
> different worker nodes. That would try to make the COE OpenStack-tenant
> aware, and it also means introducing another control/scheduling layer above
> the COEs. We need to think about whether this is a typical user case, and
> what the benefit is compared with containerisation.
>
>
> And finally, it is a topic that can be discussed at the midcycle meeting.
>
>
> Thanks
>
> Best Wishes,
>
> --------------------------------------------------------------------------------
> Kai Qiang Wu (吴开强 Kennan)
> IBM China System and Technology Lab, Beijing
>
> E-mail: wkqwu at cn.ibm.com
> Tel: 86-10-82451647
> Address: Building 28(Ring Building), ZhongGuanCun Software Park,
> No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193
>
> --------------------------------------------------------------------------------
> Follow your heart. You are miracle!
>
>
> From: Hongbin Lu <hongbin.lu at huawei.com>
> To: Guz Egor <guz_egor at yahoo.com>, “OpenStack Development Mailing List
> (not for usage questions)“ <openstack-dev at lists.openstack.org>
> Date: 13/02/2016 11:02 am
> Subject: Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> ------------------------------
>
>
>
>
>
> Egor,
>
> Thanks for sharing your insights. I gave it more thought. Maybe the goal
> can be achieved without implementing a shared COE. We could move all the
> master nodes out of user tenants, containerize them, and consolidate them
> into a set of VMs/physical servers.
>
> I think we could separate the discussion into two questions:
>
> 1. Should Magnum introduce a new bay type, in which master nodes are
> managed by Magnum (not users themselves)? Like what GCE [1] or ECS [2] does.
> 2. How to consolidate the control services that originally run on the master
> nodes of each cluster?
>
>
> Note that the proposal is for adding a new COE (not for changing the
> existing COEs). That means users will continue to provision existing
> self-managed COEs (k8s/swarm/mesos) if they choose to.
>
> [1] https://cloud.google.com/container-engine/
> [2]
> http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html
>
> Best regards,
> Hongbin
>
> * From:* Guz Egor [mailto:guz_egor at yahoo.com <guz_egor at yahoo.com>]
> * Sent:* February-12-16 2:34 PM
> * To:* OpenStack Development Mailing List (not for usage questions)
> * Cc:* Hongbin Lu
> * Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Hongbin,
>
> I am not sure that it's a good idea; it looks like you propose that Magnum
> enter the “schedulers war” (personally I am tired of these Mesos vs Kube vs
> Swarm debates).
> If your concern is just utilization, you can always run the control plane on
> “agent/slave” nodes. The main reason why operators (at least in our case)
> keep them separate is that they need different attention (e.g. I almost
> don't care why/when an “agent/slave” node died, but I always double-check
> that a master node was repaired or replaced).
>
> One use case I see for a shared COE (at least in our environment) is when
> developers want to run just a docker container without installing anything
> locally (e.g. docker-machine). But in most cases it's just examples from the
> internet or their own experiments ):
>
> But we definitely should discuss it during the midcycle next week.
>
> ---
> Egor
> ------------------------------
>
> *From:* Hongbin Lu <hongbin.lu at huawei.com>
> * To:* OpenStack Development Mailing List (not for usage questions) <
> openstack-dev at lists.openstack.org>
> * Sent:* Thursday, February 11, 2016 8:50 PM
> * Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> Hi team,
>
> Sorry for bringing up this old thread, but a recent debate on container
> resource [1] reminded me of the use case Kris mentioned below. I am going to
> propose a preliminary idea to address the use case. Of course, we could
> continue the discussion in the team meeting or at the midcycle.
>
> * Idea*: Introduce a docker-native COE, which consists of only
> minion/worker/slave nodes (no master nodes).
> * Goal*: Eliminate duplicated IaaS resources (master node VMs, lbaas
> vips, floating ips, etc.)
> * Details*: Traditional COEs (k8s/swarm/mesos) consist of master nodes
> and worker nodes. In these COEs, control services (i.e. schedulers) run on
> master nodes, and containers run on worker nodes. If we can port the COE
> control services to the Magnum control plane and share them with all tenants,
> we eliminate the need for master nodes, thus improving resource utilization.
> In the new COE, users create/manage containers through Magnum API
> endpoints. Magnum is responsible for spinning up tenant VMs, scheduling
> containers onto the VMs, and managing the life-cycle of those containers.
> Unlike other COEs, containers created by this COE are considered
> OpenStack-managed resources. That means they will be tracked in the Magnum DB
> and accessible by other OpenStack services (e.g. Horizon, Heat, etc.).
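>
> To make the proposed flow concrete, here is a rough, purely illustrative
> sketch of what creating a container through such a docker-native COE might
> look like from a client. The endpoint path, payload fields, and bay UUID
> are hypothetical placeholders, not an existing Magnum API contract.
>
>     import requests  # client-side sketch only, not a real Magnum client
>
>     MAGNUM_ENDPOINT = "http://magnum.example.com:9511/v1"  # placeholder URL
>     TOKEN = "<keystone-token>"                             # obtained from Keystone
>
>     # Ask the (hypothetical) docker-native COE to run a container; Magnum,
>     # not a tenant-visible master node, would schedule it onto a tenant VM.
>     resp = requests.post(
>         MAGNUM_ENDPOINT + "/containers",
>         headers={"X-Auth-Token": TOKEN},
>         json={
>             "name": "web-1",
>             "image": "nginx:latest",
>             "bay_uuid": "<bay-uuid>",  # the worker-only bay owned by the tenant
>             "memory": "512m",
>         },
>     )
>     resp.raise_for_status()
>     # The returned container would be tracked in the Magnum DB as an
>     # OpenStack-managed resource, visible to Horizon, Heat, etc.
>     print(resp.json())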
>
> What do you feel about this proposal? Let’s discuss.
>
> [1] https://etherpad.openstack.org/p/magnum-native-api
>
> Best regards,
> Hongbin
>
> * From:* Kris G. Lindgren [mailto:klindgren at godaddy.com
> <klindgren at godaddy.com>]
> * Sent:* September-30-15 7:26 PM
> * To:* openstack-dev at lists.openstack.org
> * Subject:* Re: [openstack-dev] [magnum]swarm + compose = k8s?
>
> We are looking at deploying Magnum as an answer for how we do
> containers company-wide at GoDaddy. I am going to agree with both you and
> Josh.
>
> I agree that managing one large system is going to be a pain, and past
> experience tells me this won't be practical or scale; however, from experience I
> also know exactly the pain Josh is talking about.
>
> We currently have ~4k projects in our internal OpenStack cloud; about 1/4
> of the projects are currently doing some form of containers on their own,
> with more joining every day. If all of these projects were to convert over to
> the current magnum configuration, we would suddenly be attempting to
> support/configure ~1k magnum clusters. Considering that everyone will want
> it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips
> + floating ips. From a capacity standpoint this is an excessive amount of
> duplicated infrastructure to spin up in projects where people may be running
> 10–20 containers per project. From an operator support perspective this is
> a special level of hell that I do not want to get into. Even if I am off by
> 75%, 250 clusters still sucks.
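>
> To put rough numbers on that, here is a quick back-of-the-envelope
> calculation from the figures above; the one-vip/one-floating-ip counts per
> cluster are my own illustrative assumptions.
>
>     projects = 4000                 # ~4k projects in the internal cloud
>     clusters = projects // 4        # about 1/4 doing containers -> ~1k clusters
>
>     # Assumed duplicated IaaS footprint per HA bay (illustrative numbers):
>     nodes_per_cluster = 2           # minimum 2 kube nodes per cluster
>     vips_per_cluster = 1            # one lbaas vip per cluster (assumed)
>     fips_per_cluster = 1            # one floating ip per cluster (assumed)
>
>     print("clusters to support:", clusters)                        # 1000
>     print("kube nodes:", clusters * nodes_per_cluster)             # 2000
>     print("lbaas vips:", clusters * vips_per_cluster)              # 1000
>     print("floating ips:", clusters * fips_per_cluster)            # 1000
>     print("even if off by 75%:", int(clusters * 0.25), "clusters")  # 250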
>
> From my point of view, an ideal use case for companies like ours
> (Yahoo/GoDaddy) would be the ability to support hierarchical projects in Magnum.
> That way we could create a project for each department, and then the
> subteams of those departments can have their own projects. We create a
> bay per department. Sub-projects, if they want to, can support creating
> their own bays (but support of the kube cluster would then fall to that
> team). When a sub-project spins up a pod on a bay, minions get created
> inside that team's sub-project, and the containers in that pod run on the
> capacity that was spun up under that project; the minions for each pod
> would be in a scaling group and as such grow/shrink as dictated by load.
>
> The above would make it so that we support a minimal, yet IMHO
> reasonable, number of kube clusters, give people who can't/don’t want to
> fall in line with the provided resources a way to make their own, and still
> offer a “good enough for a single company” level of multi-tenancy.
> >Joshua,
> >
> >If you share resources, you give up multi-tenancy. No COE system has the
> >concept of multi-tenancy (kubernetes has some basic implementation but it
> >is totally insecure). Not only does multi-tenancy have to “look like” it
> >offers isolation to multiple tenants, but it actually has to deliver the
> >goods.
> >
> >I understand that at first glance a company like Yahoo may not want
> >separate bays for their various applications because of the perceived
> >administrative overhead. I would then challenge Yahoo to go deploy a COE
> >like kubernetes (which has no multi-tenancy or a very basic implementation
> >of such) and get it to work with hundreds of different competing
> >applications. I would speculate the administrative overhead of getting
> >all that to work would be greater than the administrative overhead of
> >simply doing a bay create for the various tenants.
> >
> >Placing tenancy inside a COE seems interesting, but no COE does that
> >today. Maybe in the future they will. Magnum was designed to present an
> >integration point between COEs and OpenStack today, not five years down
> >the road. It's not as if we took shortcuts to get to where we are.
> >
> >I will grant you that density is lower with the current design of Magnum
> >vs a full-on integration with OpenStack within the COE itself. However,
> >that model, which is what I believe you proposed, is a huge design change to
> >each COE, which would overly complicate the COE for the gain of increased
> >density. I personally don’t feel that the pain is worth the gain.
>
>
> ___________________________________________________________________
> Kris Lindgren
> Senior Linux Systems Engineer
> GoDaddy
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev