<div dir="ltr">We did some early affinity work and discovered some interesting problems with affinity and scheduling. =/ by default openstack used to ( may still ) deploy nodes across hosts evenly. <div><br></div><div>Personally, I think this is a bad approach. Most cloud providers stack across a couple racks at a time filling them then moving to the next. This allows older equipment to age out instances more easily for removal / replacement. </div><div><br></div><div>The problem then is, if you have super large capacity instances they can never be deployed once you've got enough tiny instances deployed across the environment. So now you are fighting with the scheduler to ensure you have deployment targets for specific instance types ( not very elastic / ephemeral ). goes back to the wave scheduling model being superior. </div><div><br></div><div>Anyways we had the braindead idea of locking whole physical nodes out from the scheduler for a super ( full node ) instance type. And I suppose you could do this with AZs or regions if you really needed to. But, it's not a great approach.</div><div><br></div><div>I would say that you almost need a wave style scheduler to do this sort of affinity work.</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, Mar 3, 2016 at 12:34 PM, Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">On 03/03/2016 08:57 AM, Robert Starmer wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
>> There was work done on enabling much more dynamic scheduling, including
>> cross-project scheduling (e.g. getting additional placement hints from
>> Neutron or Cinder), and I believe the framework is even in place to
>> make use of this, but I don't believe anyone has written a scheduling
>> component that takes advantage of it. I think your best bet would be to
>> build a custom weighted scheduler, which could be as simple as a
>> linearly decreasing weight for one group and the inverse for the other
>> group. Certainly this wouldn't be perfect, but it might address your
>> needs.
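A weigher along the lines Robert describes really can be that small. Here's a minimal sketch against Nova's BaseHostWeigher plugin interface; the class name and the "placement_group" aggregate-metadata key are illustrative:

    from nova.scheduler import weights


    class GroupSpreadWeigher(weights.BaseHostWeigher):
        """Pack one group of hosts, spread the other."""

        def _host_group(self, host_state):
            # Illustrative: read a "placement_group" key off any host
            # aggregate this host belongs to; default to group "a".
            for aggregate in host_state.aggregates:
                group = aggregate.metadata.get('placement_group')
                if group:
                    return group
            return 'a'

        def _weigh_object(self, host_state, weight_properties):
            # Weight rises linearly with instance count for group "a"
            # (stack/fill) and is the inverse for group "b" (spread).
            if self._host_group(host_state) == 'a':
                return host_state.num_instances
            return -host_state.num_instances

Point scheduler_weight_classes in nova.conf at the class and the filter scheduler will fold its score in alongside the default weighers.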
>
> My ideas for replacing server groups with generic placement policies are
> written down here, with some excellent feedback from a number of
> reviewers:
>
> https://review.openstack.org/#/c/183837/
>
> Would be great to get additional eyeballs on it. I was planning on
> reviving these ideas in the Newton and Ocata releases once the scheduler
> is split out from Nova.
>
> Best,
> -jay