<div dir="ltr"><div>Someone else expressed this more gracefully than I:</div><div><br></div><div><i>'Because sans Ironic, compute-nodes still have physical characteristics</i></div><div><i>that make grouping on them attractive for things like anti-affinity. I</i></div><div><i>don't really want my HA instances "not on the same compute node", I want</i></div><div><i>them "not in the same failure domain". It becomes a way for all</i></div><div><i>OpenStack workloads to have more granularity than "availability zone".</i></div><div>(<a href="https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg14891.html">https://www.mail-archive.com/openstack-dev@lists.openstack.org/msg14891.html</a>)</div><div><br></div><div>^That guy definitely has a good head on his shoulders ;)</div><div><br></div><div>-James</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 16, 2015 at 12:40 PM, James Penick <span dir="ltr"><<a href="mailto:jpenick@gmail.com" target="_blank">jpenick@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span class=""><div><span style="font-size:12.8px">>Affinity is mostly meaningless with baremetal. It's entirely a</span><br style="font-size:12.8px"><span style="font-size:12.8px">>virtualization related thing. If you try and group things by TOR, or</span><br style="font-size:12.8px"><span style="font-size:12.8px">>chassis, or anything else, it's going to start meaning something entirely</span><br style="font-size:12.8px"><span style="font-size:12.8px">>different than it means in Nova, </span></div><div><br></div></span><div><span style="font-size:12.8px">I disagree, in fact, we need TOR and power affinity/anti-affinity for VMs as well as baremetal. 
As an example, there are cases where certain compute resources move significant amounts of data between one or two other instances</span><span style="font-size:12.8px">, but you want to ensure those instances are not on the same hypervisor. In that scenario it makes sense to have instances on different hypervisors, but on the same TOR to reduce unnecessary traffic across the fabric.</span></div><span class=""><div><br></div><div><span style="font-size:12.8px">>and it would probably be better to just</span><br></div><div><div><span style="font-size:12.8px">>make lots of AZ's and have users choose their AZ mix appropriately,</span><br style="font-size:12.8px"><span style="font-size:12.8px">>since that is the real meaning of AZ's.</span><br></div></div><div><br></div></span><div>Yes, at some level certain things should be expressed in the form of an AZ; power seems like a good candidate for that. But expressing something like a TOR as an AZ in an environment with hundreds of thousands of physical hosts would not scale. Further, it would require users to have a deeper understanding of datacenter topology, which is exactly the opposite of why IaaS exists.</div><div><br></div><div>The whole point of a service-oriented infrastructure is to be able to give the end user the ability to boot compute resources that match a variety of constraints, and have those resources selected and provisioned for them.
E.g., "Give me 12 instances of m1.blah, all running Linux, and make sure they're spread across 6 different TORs and 2 different power domains in network zone Blah."</div><div><br></div><br></div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Dec 16, 2015 at 10:38 AM, Clint Byrum <span dir="ltr"><<a href="mailto:clint@fewbar.com" target="_blank">clint@fewbar.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Excerpts from Jim Rollenhagen's message of 2015-12-16 08:03:22 -0800:<br>
<span>> Nobody is talking about running a compute per flavor or capability. All<br>
> compute hosts will be able to handle all ironic nodes. We *do* still<br>
> need to figure out how to handle availability zones or host aggregates,<br>
> but I expect we would pass along that data to be matched against. I<br>
> think it would just be metadata on a node. Something like<br>
> node.properties['availability_zone'] = 'rackspace-iad-az3' or what have<br>
> you. Ditto for host aggregates - add the metadata to ironic to match<br>
> what's in the host aggregate. I'm honestly not sure what to do about<br>
> (anti-)affinity filters; we'll need help figuring that out.<br>
><br>
<br>
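[Editor's note: the metadata matching proposed above could be sketched roughly as follows. The dict layout, property names, and helper function are illustrative assumptions, not the actual Ironic or Nova data model.]<br>

```python
# Hypothetical sketch: matching Ironic node metadata against a requested
# availability zone, per the proposal above. The node dicts and the
# 'availability_zone' property name are illustrative assumptions.

def nodes_in_az(nodes, requested_az):
    """Return the nodes whose metadata places them in the requested AZ."""
    return [
        node for node in nodes
        if node.get('properties', {}).get('availability_zone') == requested_az
    ]

nodes = [
    {'uuid': 'node-1', 'properties': {'availability_zone': 'rackspace-iad-az3'}},
    {'uuid': 'node-2', 'properties': {'availability_zone': 'rackspace-iad-az1'}},
    {'uuid': 'node-3', 'properties': {'availability_zone': 'rackspace-iad-az3'}},
]

matched = nodes_in_az(nodes, 'rackspace-iad-az3')
# matched contains node-1 and node-3
```

Host-aggregate matching would presumably work the same way: compare aggregate metadata against the same node properties.<br>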
</span>Affinity is mostly meaningless with baremetal. It's entirely a<br>
virtualization related thing. If you try and group things by TOR, or<br>
chassis, or anything else, it's going to start meaning something entirely<br>
different than it means in Nova, and it would probably be better to just<br>
make lots of AZ's and have users choose their AZ mix appropriately,<br>
since that is the real meaning of AZ's.<br>
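[Editor's note: the "AZ mix" idea above can be sketched as domain-granular anti-affinity: round-robin placements across failure domains so no domain repeats until all have been used. The domain names are made up for illustration.]<br>

```python
# Illustrative sketch of "failure domain as AZ": spread N placements
# across domains so no domain is reused until every domain has been
# used once. Domain names here are assumptions, not real AZ names.
from itertools import cycle

def spread(count, domains):
    """Round-robin `count` placements across the given failure domains."""
    return [domain for _, domain in zip(range(count), cycle(sorted(domains)))]

azs = {'power-a', 'power-b', 'tor-row-3'}
print(spread(4, azs))  # → ['power-a', 'power-b', 'tor-row-3', 'power-a']
```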
<div><div><br>
__________________________________________________________________________<br>
OpenStack Development Mailing List (not for usage questions)<br>
Unsubscribe: <a href="http://OpenStack-dev-request@lists.openstack.org?subject:unsubscribe" rel="noreferrer" target="_blank">OpenStack-dev-request@lists.openstack.org?subject:unsubscribe</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
</div></div></blockquote></div><br></div>
</div></div></blockquote></div><br></div>