<div dir="ltr">Hi all,<div><br></div><div>A few thoughts to add:</div><div><br></div><div>I like the idea of isolating the masters so that they are not tenant-controllable, but I don't think the Magnum control plane is the right place for them. They still need to be running on tenant-owned resources so that they have access to things like isolated tenant networks, and so that any bandwidth they consume can still be attributed and billed to tenants.</div><div><br></div><div>I think we should extend that concept a little to include worker nodes as well. While they should live in the tenant like the masters, they shouldn't be controllable by the tenant through anything other than the COE API. The main use case that Magnum should be addressing is providing a managed COE environment. As Hongbin mentioned, Magnum users won't have the domain knowledge to properly maintain the swarm/k8s/mesos infrastructure, just as Nova users aren't expected to know how to manage a hypervisor.</div><div><br></div><div>I agree with <span style="color:rgb(0,0,0);font-family:'Helvetica Neue',Helvetica,Arial,'Lucida Grande',sans-serif;line-height:1.5">Egor that trying to have Magnum schedule containers is going to be a losing battle. Swarm/K8s/Mesos are always going to have better scheduling for their containers. We don't have the resources to try to be yet another container orchestration engine. Besides that, as a developer, I don't want to learn another set of orchestration semantics when I already know swarm or k8s or mesos.</span></div><div><span style="color:rgb(0,0,0);font-family:'Helvetica Neue',Helvetica,Arial,'Lucida Grande',sans-serif;line-height:1.5"><br></span></div><div>@Kris, I appreciate the real use case you outlined. In your idea of having multiple projects use the same masters, how would you intend to isolate them? As far as I can tell, none of the COEs would have any way to isolate those teams from each other if they share a master. 
I think this is a big problem with the idea of sharing masters even within a single tenant. As an operator, I definitely want to know that users can isolate their resources from other users and tenants can isolate their resources from other tenants.</div><div><br></div><div>Corey</div></div><br><div class="gmail_quote"><div dir="ltr">On Mon, Feb 15, 2016 at 1:24 AM Peng Zhao <peng@hyper.sh> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:normal;word-break:break-word">
<table lang="container" border="0" cellpadding="0" cellspacing="0" valign="top" style="width:100%;margin-top:6px">
<tr>
<td valign="top" style="line-height:1.31;color:#222;font-family:arial,sans-serif">
<div style="max-width:590px">Hi,</div><div style="max-width:590px"><br></div><div style="max-width:590px">I wanted to give some thoughts to the thread.</div><div style="max-width:590px"><br></div><div style="max-width:590px">There are various perspectives around “Hosted vs Self-managed COE”, but if you stand at the developer's position, it basically comes down to “Ops vs Flexibility”.</div><div style="max-width:590px"><br></div><div style="max-width:590px">For those who want more control of the stack, so as to customize in any way they see fit, self-managed is a more appealing option. However, one may argue that the same job can be done with a heat template + some patchwork of cinder/neutron. And the heat template is more customizable than Magnum, which probably introduces some requirements on the COE configuration.</div><div style="max-width:590px"><br></div><div style="max-width:590px">For people who don't want to manage the COE, hosted is a no-brainer. The question here is which one is the core compute engine in the stack: nova or the COE? Unless you are running a public, multi-tenant OpenStack deployment, it is highly likely that you are sticking with only one COE. Supposing k8s is what your team is dealing with every day, then why do you need nova sitting under k8s, when its job is just launching some VMs? After all, it is the COE that orchestrates cinder/neutron.</div><div style="max-width:590px"><br></div><div style="max-width:590px">One idea is to put the COE at the same layer as nova. Instead of running atop nova, these two run side by side. So you get two compute engines: nova for IaaS workloads, k8s for CaaS workloads. If you go this way, <a href="https://github.com/hyperhq/hypernetes" target="_blank">hypernetes </a>is probably what you are looking for.</div><div style="max-width:590px"><br></div><div style="max-width:590px">Another idea is “Dockerized (Immutable) IaaS”, e.g. replace Glance with a Docker registry, and use nova to launch Docker images. 
But this is not done by nova-docker, simply because it is hard to integrate things like cinder/neutron with lxc. The idea is a <a href="https://openstack.nimeyo.com/49570/openstack-dev-proposal-of-nova-hyper-driver" target="_blank">nova-hyper driver</a>. Since Hyper is hypervisor-based, it is much easier to make it work with others. SHAMELESS PROMOTION: if you are interested in this idea, we've submitted a proposal for the Austin summit: <span style="line-height:1.31"><a href="https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211" target="_blank">https://www.openstack.org/summit/austin-2016/vote-for-speakers/presentation/8211</a>.</span></div><div style="max-width:590px"><span style="line-height:1.31"><br></span></div><div style="max-width:590px">Peng</div><div style="max-width:590px"><br></div><div style="max-width:590px">Disclaimer: I maintain Hyper.</div><div style="max-width:590px"><br></div><div style="max-width:590px"><div style="font-size:13px;line-height:1.25;max-width:590px"><div style="font-size:14px;max-width:590px">-----------------------------------------------------</div><div style="font-size:14px;max-width:590px">Hyper - Make VM run like Container</div><div style="font-size:14px;max-width:590px"><br></div></div></div></td></tr></table></div><div style="word-wrap:normal;word-break:break-word"><table lang="container" border="0" cellpadding="0" cellspacing="0" valign="top" style="width:100%;margin-top:6px"><tr><td valign="top" style="line-height:1.31;color:#222;font-family:arial,sans-serif"><div style="max-width:590px"><br></div><div class="gmail_extra" style="max-width:590px"><br><div class="gmail_quote">On Mon, Feb 15, 2016 at 9:53 AM, Hongbin Lu <span dir="ltr"><<a href="mailto:hongbin.lu@huawei.com" target="_blank">hongbin.lu@huawei.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div lang="EN-CA" link="blue" vlink="purple">
<div>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">My replies are inline.<u></u><u></u></span></p>
<p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d"><u></u> <u></u></span></p>
<div>
<div style="border:none;border-top:solid #b5c4df 1.0pt;padding:3.0pt 0cm 0cm 0cm">
<p class="MsoNormal"><b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span lang="EN-US" style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Kai Qiang Wu [mailto:<a href="mailto:wkqwu@cn.ibm.com" target="_blank">wkqwu@cn.ibm.com</a>]
<br>
<b>Sent:</b> February-14-16 7:17 PM<span><br>
<b>To:</b> OpenStack Development Mailing List (not for usage questions)<br>
<b>Subject:</b> Re: [openstack-dev] [magnum]swarm + compose = k8s?<u></u><u></u></span></span></p>
</div>
</div>
<p class="MsoNormal"><u></u> <u></u></p>
<p>HongBin,<span><br>
<br>
See my replies and questions in line. >><br>
<br>
<br>
Thanks<br>
<br>
Best Wishes,<br>
--------------------------------------------------------------------------------<br>
Kai Qiang Wu (<span lang="ZH-CN">吴开强</span> Kennan<span lang="ZH-CN">)</span><br>
IBM China System and Technology Lab, Beijing<br>
<br>
E-mail: <a href="mailto:wkqwu@cn.ibm.com" target="_blank">wkqwu@cn.ibm.com</a><br>
Tel: <a href="tel:86-10-82451647" value="+861082451647" target="_blank">86-10-82451647</a><br>
Address: Building 28(Ring Building), ZhongGuanCun Software Park, <br>
No.8 Dong Bei Wang West Road, Haidian District Beijing P.R.China 100193<br>
--------------------------------------------------------------------------------<br>
Follow your heart. You are miracle! <br>
<br>
<br>
<span style="font-size:10.0pt;color:#5f5f5f">From: </span><span style="font-size:10.0pt">Hongbin Lu <<a href="mailto:hongbin.lu@huawei.com" target="_blank">hongbin.lu@huawei.com</a>></span><br>
<span style="font-size:10.0pt;color:#5f5f5f">To: </span><span style="font-size:10.0pt">“OpenStack Development Mailing List (not for usage questions)” <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>></span><br>
<span style="font-size:10.0pt;color:#5f5f5f">Date: </span><span style="font-size:10.0pt">15/02/2016 01:26 am</span><br>
<span style="font-size:10.0pt;color:#5f5f5f">Subject: </span><span style="font-size:10.0pt">Re: [openstack-dev] [magnum]swarm + compose = k8s?</span><u></u><u></u></span></p><span>
<div class="MsoNormal">
<hr size="2" width="100%" noshade style="color:#8091a5" align="left">
</div>
<p class="MsoNormal"><br>
<br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">Kai Qiang,</font><br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">A major benefit is to have Magnum manage the COEs for end-users. Currently, Magnum basically has its end-users manage the COEs by themselves after a successful deployment. This might work well
for domain users, but it is a pain for non-domain users to manage their COEs. By moving master nodes out of users’ tenants, Magnum could offer users a COE management service. For example, Magnum could offer to monitor the etcd/swarm-manager clusters and recover
them on failure. Again, the pattern of managing COEs for end-users is what the Google container service and the AWS container service offer. I guess it is fair to conclude that there are use cases out there?</font><br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">>></font><font face="Calibri, sans-serif" color="#0020c2"> I am not sure what you mean by domain here; is it a keystone domain or something else? What is the non-domain users' case for
managing the COEs?</font><font face="Calibri, sans-serif" color="#1f497d"><u></u><u></u></font></p>
</span><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Reply: I mean domain experts, someone who is an expert in kubernetes/swarm/mesos.<u></u><u></u></span></p><span>
<p class="MsoNormal"><br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">If we decide to offer a COE management service, we could discuss further how to consolidate the IaaS resources to improve utilization. Solutions could be (i) introducing centralized control
services for all tenants/clusters, or (ii) keeping the control services separated but isolating them by containers (instead of VMs). A typical use case is what Kris mentioned below.</font><br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">>> </font><font face="Calibri, sans-serif" color="blue">(i) is more complicated than (ii), and I do not see much utilization benefit in (i); instead, it
could introduce a heavy burden for the upgrade case, and service interference across all tenants/clusters.</font><font face="Calibri, sans-serif" color="#1f497d"><u></u><u></u></font></p>
</span><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1f497d">Reply: Definitely we could discuss it further. I don’t have a preference in mind right now.<u></u><u></u></span></p><div><div>
<p class="MsoNormal"><br>
<br>
<br>
<font face="Calibri, sans-serif" color="#1f497d">Best regards,</font><br>
<font face="Calibri, sans-serif" color="#1f497d">Hongbin</font><br>
<br>
<b><font face="Tahoma, sans-serif">From:</font></b><font face="Tahoma, sans-serif"> Kai Qiang Wu [<a href="mailto:wkqwu@cn.ibm.com" target="_blank">mailto:wkqwu@cn.ibm.com</a>]
<b><br>
Sent:</b> February-13-16 11:32 PM<b><br>
To:</b> OpenStack Development Mailing List (not for usage questions)<b><br>
Subject:</b> Re: [openstack-dev] [magnum]swarm + compose = k8s?</font><u></u><u></u></p>
<p><span style="font-size:13.5pt">Hi HongBin and Egor,<br>
I went through what you talked about, and was thinking about what the great benefit for utilisation is here.<br>
The user cases look like the following:<br>
<br>
user A wants to have a COE provisioned.<br>
user B wants to have a separate COE. (different tenant, non-shared)<br>
user C wants to use an existing COE (same tenant as user A, shared)<br>
<br>
When you talked about the utilisation case, it seems you mentioned that different tenant users want to use the same control node to manage different nodes. That would make the COE OpenStack tenant-aware, and it also means you want to introduce another control/schedule layer above the COEs. We need to think about whether it is a typical
user case, and what the benefit is compared with containerisation. <br>
<br>
<br>
And finally, it is a topic that can be discussed at the midcycle meeting. <br>
<br>
<br>
Thanks<br>
<br>
</span><span style="font-size:13.5pt"><br>
</span><font color="#5f5f5f"><br>
From: </font>Hongbin Lu <<a href="mailto:hongbin.lu@huawei.com" target="_blank">hongbin.lu@huawei.com</a>><font color="#5f5f5f"><br>
To: </font>Guz Egor <<a href="mailto:guz_egor@yahoo.com" target="_blank">guz_egor@yahoo.com</a>>, “OpenStack Development Mailing List (not for usage questions)” <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><font color="#5f5f5f"><br>
Date: </font>13/02/2016 11:02 am<font color="#5f5f5f"><br>
Subject: </font>Re: [openstack-dev] [magnum]swarm + compose = k8s?<u></u><u></u></p>
<div class="MsoNormal">
<hr size="2" width="100%" noshade style="color:#a0a0a0" align="left">
</div>
<p class="MsoNormal"><br>
<span style="font-size:13.5pt"><br>
<br>
</span><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
Egor,</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
Thanks for sharing your insights. I gave it more thought. Maybe the goal can be achieved without implementing a shared COE. We could move all the master nodes out of user tenants, containerize them, and consolidate them into a set of VMs/physical servers.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
I think we could separate the discussion into two:</span><u></u><u></u></p>
<p class="MsoNormal" style="margin-left:144.0pt"><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d">1. Should Magnum introduce a new bay type, in which master nodes are managed by Magnum (not users themselves)? Like what GCE [1]
or ECS [2] does.<br>
2. How to consolidate the control services that originally run on the master nodes of each cluster?</span><u></u><u></u></p>
<p class="MsoNormal"><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
Note that the proposal is for adding a new COE (not for changing the existing COEs). That means users will continue to provision the existing self-managed COEs (k8s/swarm/mesos) if they choose to.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
[1] </span><a href="https://cloud.google.com/container-engine/" target="_blank"><span style="font-size:13.5pt;font-family:"Calibri","sans-serif"">https://cloud.google.com/container-engine/</span></a><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
[2] </span><a href="http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html" target="_blank"><span style="font-size:13.5pt;font-family:"Calibri","sans-serif"">http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html</span></a><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Calibri","sans-serif";color:#1f497d"><br>
Best regards,<br>
Hongbin</span><span style="font-size:13.5pt"><br>
</span><b><span style="font-size:13.5pt;font-family:"Tahoma","sans-serif""><br>
From:</span></b><span style="font-size:13.5pt;font-family:"Tahoma","sans-serif""> Guz Egor [</span><a href="mailto:guz_egor@yahoo.com" target="_blank"><span style="font-size:13.5pt;font-family:"Tahoma","sans-serif"">mailto:guz_egor@yahoo.com</span></a><span style="font-size:13.5pt;font-family:"Tahoma","sans-serif"">]
<b><br>
Sent:</b> February-12-16 2:34 PM<b><br>
To:</b> OpenStack Development Mailing List (not for usage questions)<b><br>
Cc:</b> Hongbin Lu<b><br>
Subject:</b> Re: [openstack-dev] [magnum]swarm + compose = k8s?</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:18.0pt;font-family:"Arial","sans-serif""><br>
Hongbin,</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
I am not sure that it's a good idea; it looks like you propose that Magnum enter the “schedulers war” (personally, I am tired of these Mesos vs Kub vs Swarm debates).<br>
If your concern is just utilization, you can always run the control plane at the “agent/slave” nodes. The main reason why operators (at least in our case) keep them<br>
separate is that they need different attention (e.g. I almost don't care why/when an “agent/slave” node died, but I always double check that a master node was
<br>
repaired or replaced). </span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
One use case I see for a shared COE (at least in our environment) is when developers want to run just a docker container without installing anything locally
<br>
(e.g. docker-machine). But in most cases it's just examples from the internet or their own experiments.
</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
But we definitely should discuss it during the midcycle next week. </span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
--- <br>
Egor</span><u></u><u></u></p>
<div class="MsoNormal" align="center" style="text-align:center">
<hr size="2" width="100%" align="center">
</div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><b><span style="font-size:13.5pt;font-family:"Arial","sans-serif"">From:</span></b><span style="font-size:13.5pt;font-family:"Arial","sans-serif""> Hongbin Lu <</span><a href="mailto:hongbin.lu@huawei.com" target="_blank"><span style="font-size:13.5pt;font-family:"Arial","sans-serif"">hongbin.lu@huawei.com</span></a><span style="font-size:13.5pt;font-family:"Arial","sans-serif"">><b><br>
To:</b> OpenStack Development Mailing List (not for usage questions) <</span><a href="mailto:openstack-dev@lists.openstack.org" target="_blank"><span style="font-size:13.5pt;font-family:"Arial","sans-serif"">openstack-dev@lists.openstack.org</span></a><span style="font-size:13.5pt;font-family:"Arial","sans-serif"">>
<b><br>
Sent:</b> Thursday, February 11, 2016 8:50 PM<b><br>
Subject:</b> Re: [openstack-dev] [magnum]swarm + compose = k8s?</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
Hi team,</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
Sorry for bringing up this old thread, but a recent debate on the container resource [1] reminded me of the use case Kris mentioned below. I am going to propose a preliminary idea to address the use case. Of course, we could continue the discussion in the team meeting
or midcycle.</span><span style="font-size:13.5pt"><br>
</span><b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
Idea</span></b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d">: Introduce a docker-native COE, which consists of only minion/worker/slave nodes (no master nodes).<b><br>
Goal</b>: Eliminate duplicated IaaS resources (master node VMs, lbaas vips, floating ips, etc.)<b><br>
Details</b>: A traditional COE (k8s/swarm/mesos) consists of master nodes and worker nodes. In these COEs, control services (e.g. the scheduler) run on master nodes, and containers run on worker nodes. If we can port the COE control services to the Magnum control plane
and share them with all tenants, we eliminate the need for master nodes, thus improving resource utilization. In the new COE, users create/manage containers through Magnum API endpoints. Magnum is responsible for spinning up tenant VMs, scheduling containers to the VMs,
and managing the life-cycle of those containers. Unlike other COEs, containers created by this COE are considered OpenStack-managed resources. That means they will be tracked in the Magnum DB, and accessible by other OpenStack services (e.g. Horizon, Heat, etc.).</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
What do you feel about this proposal? Let’s discuss.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
[1] </span><a href="https://etherpad.openstack.org/p/magnum-native-api" target="_blank"><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif"">https://etherpad.openstack.org/p/magnum-native-api</span></a><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif";color:#1f497d"><br>
Best regards,<br>
Hongbin</span><span style="font-size:13.5pt"><br>
</span><b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
From:</span></b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""> Kris G. Lindgren [</span><a href="mailto:klindgren@godaddy.com" target="_blank"><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif"">mailto:klindgren@godaddy.com</span></a><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif"">]
<b><br>
Sent:</b> September-30-15 7:26 PM<b><br>
To:</b> </span><a href="mailto:openstack-dev@lists.openstack.org" target="_blank"><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif"">openstack-dev@lists.openstack.org</span></a><b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
Subject:</span></b><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""> Re: [openstack-dev] [magnum]swarm + compose = k8s?</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
We are looking at deploying magnum as an answer for how we do containers company-wide at GoDaddy. I am going to agree with both you and Josh.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
I agree that managing one large system is going to be a pain, and past experience tells me this won't be practical/scale; however, from experience I also know exactly the pain Josh is talking about.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
We currently have ~4k projects in our internal openstack cloud; about 1/4 of the projects are currently doing some form of containers on their own, with more joining every day. If all of these projects were to convert over to the current magnum configuration,
we would suddenly be attempting to support/configure ~1k magnum clusters. Considering that everyone will want it HA, we are looking at a minimum of 2 kube nodes per cluster + lbaas vips + floating ips. From a capacity standpoint this is an excessive amount
of duplicated infrastructure to spin up in projects where people may be running 10–20 containers per project. From an operator support perspective this is a special level of hell that I do not want to get into. Even if I am off by 75%, 250 still sucks.</span><span style="font-size:13.5pt"><br>
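Written out as a quick back-of-envelope sketch (the per-cluster counts are the stated HA minimums; the figures are rough estimates, not measurements):

```python
# Capacity math from the numbers above: ~4k projects, ~1/4 of them
# doing containers, and a minimum of 2 kube nodes plus one lbaas vip
# and one floating ip per HA bay.
projects = 4000
clusters = projects // 4          # ~1k magnum clusters to support/configure

node_vms = clusters * 2           # duplicated HA node VMs across all bays
lbaas_vips = clusters             # one lbaas vip per bay
floating_ips = clusters           # one floating ip per bay

low_end = clusters // 4           # "even if I am off by 75%"

print(clusters, node_vms, lbaas_vips, floating_ips, low_end)
# 1000 2000 1000 1000 250
```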
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
From my point of view, an ideal use case for companies like ours (yahoo/godaddy) would be the ability to support hierarchical projects in magnum. That way we could create a project for each department, and then the subteams of those departments can have their own projects.
We create a bay per department. Sub-projects, if they want to, can create their own bays (but support of the kube cluster would then fall to that team). When a sub-project spins up a pod on a bay, minions get created inside that team's sub-project,
and the containers in that pod run on the capacity that was spun up under that project; the minions for each pod would be in a scaling group and as such grow/shrink as dictated by load.</span><span style="font-size:13.5pt"><br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
The above would make it so we support a minimal, yet imho reasonable, number of kube clusters, give people who can't/don’t want to fall in line with the provided resources a way to make their own, and still offer a “good enough for a single company” level
of multi-tenancy.</span><font face="'Courier New'" color="#535353"><br>
>Joshua,<br>
> <br>
>If you share resources, you give up multi-tenancy. No COE system has the<br>
>concept of multi-tenancy (kubernetes has some basic implementation but it<br>
>is totally insecure). Not only does multi-tenancy have to “look like” it<br>
>offers multiple tenants isolation, but it actually has to deliver the<br>
>goods.<br>
> <br>
>I understand that at first glance a company like Yahoo may not want<br>
>separate bays for their various applications because of the perceived<br>
>administrative overhead. I would then challenge Yahoo to go deploy a COE<br>
>like kubernetes (which has no multi-tenancy or a very basic implementation<br>
>of such) and get it to work with hundreds of different competing<br>
>applications. I would speculate the administrative overhead of getting<br>
>all that to work would be greater than the administrative overhead of<br>
>simply doing a bay create for the various tenants.<br>
> <br>
>Placing tenancy inside a COE seems interesting, but no COE does that<br>
>today. Maybe in the future they will. Magnum was designed to present an<br>
>integration point between COEs and OpenStack today, not five years down<br>
>the road. It's not as if we took shortcuts to get to where we are.<br>
> <br>
>I will grant you that density is lower with the current design of Magnum<br>
>vs a full on integration with OpenStack within the COE itself. However,<br>
>that model which is what I believe you proposed is a huge design change to<br>
>each COE which would overly complicate the COE at the gain of increased<br>
>density. I personally don’t feel that pain is worth the gain.</font><span style="font-size:13.5pt"><br>
<br>
</span><span style="font-size:13.5pt;font-family:"Helvetica","sans-serif""><br>
___________________________________________________________________<br>
Kris Lindgren<br>
Senior Linux Systems Engineer<br>
GoDaddy</span><span style="font-size:18.0pt;font-family:"Helvetica","sans-serif""><br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: </span><a href="mailto:OpenStack-dev-request@lists.openstack.org" target="_blank"><span style="font-size:18.0pt;font-family:"Helvetica","sans-serif"">OpenStack-dev-request@lists.openstack.org</span></a><span style="font-size:18.0pt;font-family:"Helvetica","sans-serif"">?subject:unsubscribe</span><u><span style="font-size:13.5pt;color:blue"><br>
</span></u><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><span style="font-size:18.0pt;font-family:"Helvetica","sans-serif"">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</span></a><span style="font-size:13.5pt"><br>
<br>
</span>
<br>
<br>
<u></u><u></u></p>
</div></div></div>
</div>
<br></blockquote></div><br></div></td></tr></table></div><div style="word-wrap:normal;word-break:break-word"><table lang="container" border="0" cellpadding="0" cellspacing="0" valign="top" style="width:100%;margin-top:6px"><tr><td valign="top" style="line-height:1.31;color:#222;font-family:arial,sans-serif">
</td>
</tr>
</table>
</div>
</blockquote></div>