<div dir="ltr">Hi Chris,<br><br><br><div class="gmail_extra"><br><br><div class="gmail_quote">2014-03-17 23:08 GMT+01:00 Chris Friesen <span dir="ltr"><<a href="mailto:chris.friesen@windriver.com" target="_blank">chris.friesen@windriver.com</a>></span>:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On 03/17/2014 02:30 PM, Sylvain Bauza wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
There is a global concern here about how a holistic scheduler can<br>
make decisions, and from which key metrics.<br>
The current effort is leading to having the Gantt DB updated by the<br>
resource tracker so that hosts can be scheduled appropriately.<br>
<br>
If we consider these metrics insufficient, i.e. that Gantt should<br>
perform an active check against another project, that's something which<br>
needs to be considered carefully. IMHO, in that case, Gantt should only<br>
access metrics through the project's REST API (and Python client) in<br>
order to make sure that rolling upgrades can happen.<br>
tl;dr: If Gantt requires access to Nova data, it should call the Nova<br>
REST API, and not perform database access directly (even through the conductor)<br>
</blockquote>
<br></div>
Consider the case in point.<br>
<br>
1) We create a server group with anti-affinity policy. (So no two instances in the group should run on the same compute node.)<br>
2) We boot a server in this group.<br>
3) Either simultaneously (on a different scheduler) or immediately after (on the same scheduler) we boot another server in the same group.<br>
<br>
Ideally the scheduler should enforce the policy without any races. However, in the current code we don't update the instance entry in the database with the chosen host until we actually try and create it on the host. Because of this we can end up putting both of them on the same compute node.<br>
<br></blockquote><div><br></div><div>There are two distinct cases:<br></div><div>1. there are multiple schedulers involved in the decision<br></div><div>2. there is a single scheduler but a race condition within it<br>
<br></div><div>About 1., I agree we need to see how the scheduler (and later on Gantt) could address decision-making based on distributed engines. At least, I consider that the no-db scheduler blueprint, which replaces the relational DB with memcached, could help with some of these issues, since memcached can be distributed efficiently.<br>
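To illustrate why shared state helps here, a minimal sketch (purely illustrative, not the actual no-db scheduler blueprint API; `FakeMemcache` and `claim_host` are hypothetical names) of two schedulers coordinating through a memcached-like store with compare-and-set, so a claim only succeeds if nobody else updated the host's state in between:

```python
import threading

class FakeMemcache:
    """In-memory stand-in for a distributed memcached with gets/cas."""
    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}   # key -> (value, version)

    def gets(self, key):
        # Return the value together with its version, as memcached's
        # gets/cas protocol does.
        with self._lock:
            return self._data.get(key, (None, 0))

    def cas(self, key, value, version):
        # Succeeds only if nobody updated the key since we read it.
        with self._lock:
            _, current = self._data.get(key, (None, 0))
            if current != version:
                return False
            self._data[key] = (value, version + 1)
            return True

def claim_host(cache, host, instance_id):
    """Atomically record that `host` now runs `instance_id`.

    If another scheduler claimed the host concurrently, the CAS fails
    and we retry against the fresh state instead of overwriting it.
    """
    while True:
        instances, version = cache.gets(host)
        instances = list(instances or [])
        instances.append(instance_id)
        if cache.cas(host, instances, version):
            return instances

cache = FakeMemcache()
claim_host(cache, 'compute1', 'inst-a')
claim_host(cache, 'compute1', 'inst-b')
print(cache.gets('compute1')[0])  # ['inst-a', 'inst-b']
```

The point is only the coordination pattern: each scheduler sees the other's claims immediately, rather than working from a possibly stale DB snapshot.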
<br></div><div>About 2., that's a concurrency issue which can be addressed with common synchronization practices. IMHO, a local lock can be enough to ensure isolation.<br></div><div><br> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Currently we only detect the problem when we go to actually boot the instance on the compute node because we have a special-case check to validate the policy. Personally I think this is sort of a hack and it would be better to detect the problem within the scheduler itself.<br>
<br>
This is something that the scheduler should reasonably consider. I see it as effectively consuming resources, except that in this case the resource is "the set of compute nodes not used by servers in the server group".<div class="HOEnZb">
<div class="h5"><br>
<br></div></div></blockquote><div><br></div><div>Agreed. IMHO, the scheduler should make decisions based on its inputs and should guarantee the result. That said, at the moment, we need to address the issue at the compute manager level, because of the point above.<br>
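For the single-scheduler race (case 2. above), a minimal sketch (illustrative only, not Nova code; all names are hypothetical) of serializing the "pick a host and record the choice" step with a local lock, so two concurrent boots in the same anti-affinity group cannot choose the same compute node:

```python
import threading

hosts = ['compute1', 'compute2', 'compute3']
group_hosts = {}          # group -> set of hosts already used by the group
sched_lock = threading.Lock()

def schedule(group, instance_id):
    # Without the lock, both requests could read the same (empty)
    # group_hosts snapshot and both choose compute1, violating the
    # anti-affinity policy.
    with sched_lock:
        used = group_hosts.setdefault(group, set())
        for host in hosts:
            if host not in used:
                used.add(host)   # record the choice before releasing the lock
                return host
        raise RuntimeError('anti-affinity policy cannot be satisfied')

results = {}
def boot(name):
    results[name] = schedule('grp1', name)

t1 = threading.Thread(target=boot, args=('inst-a',))
t2 = threading.Thread(target=boot, args=('inst-b',))
t1.start(); t2.start()
t1.join(); t2.join()
assert results['inst-a'] != results['inst-b']
```

The key design point is that the decision and its side effect (recording the consumed host) happen under one critical section, which is exactly what the current code defers until the compute node actually creates the instance.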
<br></div><div>-Sylvain<br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">
Chris<br>
<br>
______________________________<u></u>_________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.<u></u>org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/<u></u>cgi-bin/mailman/listinfo/<u></u>openstack-dev</a><br>
</div></div></blockquote></div><br></div></div>