<div dir="ltr">+1 Kevin<div><br></div><div><font color="#212121" face="-apple-system, Helvetica, sans-serif" style="font-size:13px">“heterogeneous cluster is more advanced and harder to control”</font><br style="font-size:13px"><div style="font-size:13px"><font color="#212121" face="-apple-system, Helvetica, sans-serif">So, I believe that Magnum should control and overcome this problem.</font></div><div style="font-size:13px"><font color="#212121" face="-apple-system, Helvetica, sans-serif">Magnum is a container infrastructure as a service.</font></div><div style="font-size:13px"><font color="#212121" face="-apple-system, Helvetica, sans-serif">Managing heterogeneous environment seems scope of Magnum’s mission.</font></div><br><div class="gmail_quote"><div dir="ltr">2016年6月3日(金) 8:55 Fox, Kevin M <<a href="mailto:Kevin.Fox@pnnl.gov">Kevin.Fox@pnnl.gov</a>>:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">As an operator that has clouds that are partitioned into different host aggregates with different flavors targeting them, I totally believe we will have users that want to have a single k8s cluster span multiple different flavor types. I'm sure once I deploy magnum, I will want it too. You could have some special hardware on some nodes, not on others. but you can still have cattle, if you have enough of them and the labels are set appropriately. Labels allow you to continue to partition things when you need to, and ignore it when you dont, making administration significantly easier.<br>
<br>
Say I have a tenant with 5 GPU nodes and 10 regular nodes allocated to a k8s cluster. I may want 30 instances of container X that don't care where they land, plus 5 instances that need CUDA. The former can be deployed with a k8s Deployment; the latter can be deployed with a DaemonSet. All of it works well and nothing about it is pet-ish. The whole tenant can be viewed through a single pane of glass, making it easy to manage.<br>
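<br>
A sketch of how that could map onto k8s primitives (the image name, file name, and "hardware=gpu" label are illustrative assumptions; the DaemonSet's pod spec would carry a nodeSelector matching that label so its pods land only on the 5 GPU nodes):<br>
$ kubectl run container-x --image=example/container-x --replicas=30   # Deployment: 30 replicas, any node<br>
$ kubectl create -f cuda-daemonset.yaml                               # DaemonSet restricted to nodes labeled hardware=gpu<br>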
<br>
Thanks,<br>
Kevin<br>
________________________________________<br>
From: Adrian Otto [<a href="mailto:adrian.otto@rackspace.com" target="_blank">adrian.otto@rackspace.com</a>]<br>
Sent: Thursday, June 02, 2016 4:24 PM<br>
To: OpenStack Development Mailing List (not for usage questions)<br>
Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually managing the bay nodes<br>
<br>
I am really struggling to accept the idea of heterogeneous clusters. My experience causes me to question whether a heterogeneous cluster makes sense for Magnum. I will try to explain why I have this hesitation:<br>
<br>
1) If you have a heterogeneous cluster, it suggests that you are using external intelligence to manage the cluster rather than relying on it to be self-managing. This is an anti-pattern that I refer to as “pets” rather than “cattle”. The anti-pattern results in brittle deployments that rely on external intelligence to manage (upgrade, diagnose, and repair) the cluster. Automating that management is much harder when a cluster is heterogeneous.<br>
<br>
2) If you have a heterogeneous cluster, it can fall out of balance. This means that if one of your “important” or “large” members fails, there may not be adequate remaining members in the cluster to continue operating properly in the degraded state. The logic of how to track and deal with this needs to be handled. It’s much simpler in the homogeneous case.<br>
<br>
3) Heterogeneous clusters are complex compared to homogeneous clusters. They are harder to work with, and that usually means that unplanned outages are more frequent and last longer than they would with a homogeneous cluster.<br>
<br>
Summary:<br>
<br>
Heterogeneous:<br>
- Complex<br>
- Prone to imbalance upon node failure<br>
- Less reliable<br>
<br>
Homogeneous:<br>
- Simple<br>
- Don’t get imbalanced when a min_members concept is supported by the cluster controller<br>
- More reliable<br>
<br>
My bias is to assert that applications that want a heterogeneous mix of system capacities at a node level should be deployed on multiple homogeneous bays, not a single heterogeneous one. That way you end up with a composition of simple systems rather than a larger complex one.<br>
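<br>
As a sketch of that composition using the current CLI (the baymodel/bay names, flavors, and node counts are illustrative, and exact flags may vary by release):<br>
$ magnum baymodel-create --name k8s-small --coe kubernetes --flavor-id m1.small --image-id fedora-atomic --keypair-id default --external-network-id public<br>
$ magnum baymodel-create --name k8s-gpu --coe kubernetes --flavor-id g1.large --image-id fedora-atomic --keypair-id default --external-network-id public<br>
$ magnum bay-create --name general-bay --baymodel k8s-small --node-count 10<br>
$ magnum bay-create --name gpu-bay --baymodel k8s-gpu --node-count 5<br>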
<br>
Adrian<br>
<br>
<br>
> On Jun 1, 2016, at 3:02 PM, Hongbin Lu <<a href="mailto:hongbin.lu@huawei.com" target="_blank">hongbin.lu@huawei.com</a>> wrote:<br>
><br>
> Personally, I think this is a good idea, since it can address a set of similar use cases like the ones below:<br>
> * I want to deploy a k8s cluster across 2 availability zones (and, in the future, 2 regions/clouds).<br>
> * I want to spin up N nodes in AZ1 and M nodes in AZ2.<br>
> * I want to scale the number of nodes in a specific AZ/region/cloud. For example, add/remove K nodes in AZ1 (with AZ2 untouched).<br>
><br>
> The use cases above should be very common. To address them, Magnum needs to support provisioning a heterogeneous set of nodes at deploy time and managing them at runtime. It looks like the proposed idea (manually managing individual nodes or individual groups of nodes) can address this requirement very well. Besides the proposed idea, I cannot think of an alternative solution.<br>
><br>
> Therefore, I vote to support the proposed idea.<br>
><br>
> Best regards,<br>
> Hongbin<br>
><br>
>> -----Original Message-----<br>
>> From: Hongbin Lu<br>
>> Sent: June-01-16 11:44 AM<br>
>> To: OpenStack Development Mailing List (not for usage questions)<br>
>> Subject: RE: [openstack-dev] [magnum] Discuss the idea of manually<br>
>> managing the bay nodes<br>
>><br>
>> Hi team,<br>
>><br>
>> A blueprint was created for tracking this idea:<br>
>> <a href="https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-" rel="noreferrer" target="_blank">https://blueprints.launchpad.net/magnum/+spec/manually-manage-bay-</a><br>
>> nodes . I won't approve the BP until there is a team decision on<br>
>> accepting/rejecting the idea.<br>
>><br>
>> From the discussion at the design summit, it looks like everyone is OK with<br>
>> the idea in general (with some disagreement on the API style). However,<br>
>> from the last team meeting, it looks like some people disagree with the idea<br>
>> fundamentally, so I am re-raising it on the ML to re-discuss.<br>
>><br>
>> If you agree or disagree with the idea of manually managing the Heat<br>
>> stacks (that contain individual bay nodes), please write down your<br>
>> arguments here. Then we can start the debate.<br>
>><br>
>> Best regards,<br>
>> Hongbin<br>
>><br>
>>> -----Original Message-----<br>
>>> From: Cammann, Tom [mailto:<a href="mailto:tom.cammann@hpe.com" target="_blank">tom.cammann@hpe.com</a>]<br>
>>> Sent: May-16-16 5:28 AM<br>
>>> To: OpenStack Development Mailing List (not for usage questions)<br>
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually<br>
>>> managing the bay nodes<br>
>>><br>
>>> The discussion at the summit was very positive around this requirement,<br>
>>> but as this change will have a large impact on Magnum it will need a<br>
>>> spec.<br>
>>><br>
>>> On the API side of things, I was thinking of a slightly more generic<br>
>>> approach that incorporates other lifecycle operations into the same API.<br>
>>> E.g.:<br>
>>> magnum bay-manage <bay> <life-cycle-op><br>
>>><br>
>>> magnum bay-manage <bay> reset --hard<br>
>>> magnum bay-manage <bay> rebuild<br>
>>> magnum bay-manage <bay> node-delete <name/uuid><br>
>>> magnum bay-manage <bay> node-add --flavor <flavor><br>
>>> magnum bay-manage <bay> node-reset <name><br>
>>> magnum bay-manage <bay> node-list<br>
>>><br>
>>> Tom<br>
>>><br>
>>> From: Yuanying OTSUKA <<a href="mailto:yuanying@oeilvert.org" target="_blank">yuanying@oeilvert.org</a>><br>
>>> Reply-To: "OpenStack Development Mailing List (not for usage<br>
>>> questions)" <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
>>> Date: Monday, 16 May 2016 at 01:07<br>
>>> To: "OpenStack Development Mailing List (not for usage questions)"<br>
>>> <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>><br>
>>> Subject: Re: [openstack-dev] [magnum] Discuss the idea of manually<br>
>>> managing the bay nodes<br>
>>><br>
>>> Hi,<br>
>>><br>
>>> I think users will also want to specify which node to delete,<br>
>>> so we should manage each “node” individually.<br>
>>><br>
>>> For example:<br>
>>> $ magnum node-create --bay …<br>
>>> $ magnum node-list --bay<br>
>>> $ magnum node-delete $NODE_UUID<br>
>>><br>
>>> Anyway, if Magnum wants to manage the lifecycle of container<br>
>>> infrastructure, this feature is necessary.<br>
>>><br>
>>> Thanks<br>
>>> -yuanying<br>
>>><br>
>>><br>
>>> On Mon, May 16, 2016 at 7:50, Hongbin Lu<br>
>>> <<a href="mailto:hongbin.lu@huawei.com" target="_blank">hongbin.lu@huawei.com</a>> wrote:<br>
>>> Hi all,<br>
>>><br>
>>> This is a continued discussion from the design summit. For recap,<br>
>>> Magnum manages bay nodes by using ResourceGroup from Heat. This<br>
>>> approach works, but it is infeasible to manage heterogeneity across<br>
>>> bay nodes, which is a frequently demanded feature. As an example,<br>
>>> there is a request to provision bay nodes across availability zones [1].<br>
>>> There is another request to provision bay nodes with a different set of<br>
>>> flavors [2]. For the requested features above, ResourceGroup won’t work<br>
>>> very well.<br>
>>><br>
>>> The proposal is to remove the usage of ResourceGroup and manually<br>
>>> create a Heat stack for each bay node. For example, for creating a<br>
>>> cluster with 2 masters and 3 minions, Magnum is going to manage 6 Heat<br>
>>> stacks (instead of 1 big Heat stack as it does right now):<br>
>>> * A kube cluster stack that manages the global resources<br>
>>> * Two kube master stacks that manage the two master nodes<br>
>>> * Three kube minion stacks that manage the three minion nodes<br>
>>><br>
>>> The proposal might require an additional API endpoint to manage nodes<br>
>>> or a group of nodes. For example:<br>
>>> $ magnum nodegroup-create --bay XXX --flavor m1.small --count 2 --availability-zone us-east-1 …<br>
>>> $ magnum nodegroup-create --bay XXX --flavor m1.medium --count 3 --availability-zone us-east-2 …<br>
>>><br>
>>> Thoughts?<br>
>>><br>
>>> [1] <a href="https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones" rel="noreferrer" target="_blank">https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones</a><br>
>>> [2] <a href="https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor" rel="noreferrer" target="_blank">https://blueprints.launchpad.net/magnum/+spec/support-multiple-flavor</a><br>
>>><br>
>>> Best regards,<br>
>>> Hongbin<br>
>>><br>
<br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>