<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">2015-01-09 17:17 GMT+08:00 Sylvain Bauza <span dir="ltr"><<a href="mailto:sbauza@redhat.com" target="_blank">sbauza@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div bgcolor="#FFFFFF" text="#000000">
<br>
<div>Le 09/01/2015 09:01, Alex Xu a écrit :<br>
</div><div><div class="h5">
<blockquote type="cite">
<div dir="ltr">Hi, All
<div><br>
</div>
<div>There is bug when running nova with ironic <a href="https://bugs.launchpad.net/nova/+bug/1402658" target="_blank">https://bugs.launchpad.net/nova/+bug/1402658</a></div>
<div><br>
</div>
<div>The case is simple: one baremetal node with 1024MB ram,
then boot two instances with 512MB ram flavor.</div>
<div>Those two instances will be scheduling to same baremetal
node.</div>
<div><br>
</div>
>> The problem is that on the scheduler side the IronicHostManager
>> consumes all of the node's resources, no matter how much the instance
>> actually uses. But on the compute node side, the ResourceTracker
>> doesn't consume resources that way; it consumes them as it would for
>> a normal virtual instance. The ResourceTracker then updates the
>> resource usage once the instance's resources are claimed, so the
>> scheduler sees free resources on that node and tries to schedule
>> another new instance to it.
>>
>> Looking into this, there is the NumInstancesFilter, which limits how
>> many instances can be scheduled to one host. So can we just use this
>> filter to reach the goal? The maximum is configured by the option
>> 'max_instances_per_host'; we could make the virt driver report how
>> many instances it supports. The ironic driver would just report
>> max_instances_per_host=1, and the libvirt driver would report
>> max_instances_per_host=-1, meaning no limit. Then we could remove the
>> IronicHostManager and make the scheduler side simpler. Does that make
>> sense, or are there more traps?
>>
>> Thanks in advance for any feedback and suggestions.
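
For reference, the mismatch described above reduces to a toy example
(illustrative names, not the real Nova classes; the "update" stands for
the compute node's periodic stats report overwriting the scheduler's
view):

    node = {"total_ram_mb": 1024, "free_ram_mb": 1024}

    def scheduler_consume(node):
        # IronicHostManager-style accounting: baremetal is all-or-nothing.
        node["free_ram_mb"] = 0

    def resource_tracker_consume(node, flavor_ram_mb):
        # ResourceTracker-style accounting: subtract only the flavor's share.
        node["free_ram_mb"] -= flavor_ram_mb

    scheduler_consume(node)                      # scheduler: node is full
    node["free_ram_mb"] = node["total_ram_mb"]   # periodic update resets it
    resource_tracker_consume(node, 512)          # compute claim: 512MB used
    print(node["free_ram_mb"])                   # 512 -> a second 512MB
                                                 # instance fits again
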
> Mmm, I think I disagree with your proposal. Let me explain as best I
> can why:
>
> tl;dr: any proposal other than claiming at the scheduler level tends
> to be wrong.
>
> The ResourceTracker should only be a module that provides stats about
> compute nodes to the Scheduler. How the Scheduler consumes these
> resources to make a decision should only be a Scheduler thing.

Agreed, but we can't implement this for now, for the reason you
describe below.
> Here, the problem is that the decision making is also shared with the
> ResourceTracker because of the claiming system managed by the context
> manager when booting an instance. It means that we have two distinct
> decision makers for validating a resource.

Totally agreed! This is the root cause.
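
To spell out the two decision points (an illustrative paraphrase in
Python, not the real Nova call chain):

    class Host:
        def __init__(self, free_ram_mb):
            self.free_ram_mb = free_ram_mb

    def scheduler_pick(hosts, ram_mb):
        # Decision #1: made against the scheduler's own accounting.
        return next(h for h in hosts if h.free_ram_mb >= ram_mb)

    class Claim:
        # Decision #2: re-validated on the compute host inside a context
        # manager, possibly with different rules (the ironic case above).
        # Anything that changes between the two decisions is a race window.
        def __init__(self, host, ram_mb):
            if host.free_ram_mb < ram_mb:
                raise RuntimeError("ComputeResourcesUnavailable")
            host.free_ram_mb -= ram_mb

        def __enter__(self):
            return self

        def __exit__(self, *exc):
            return False

    host = scheduler_pick([Host(1024)], 512)  # decision #1 says yes
    with Claim(host, 512):                    # decision #2 says yes again
        pass
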
> Let's set realism aside for a moment and discuss what a decision
> could mean for something other than a compute node. OK, let's say a
> volume. Provided that *something* reported the volume statistics to
> the Scheduler, it would be the Scheduler that decided whether a
> volume manager could accept a volume request. There is no sense in
> validating the Scheduler's decision on the volume manager, beyond
> maybe some error handling.
>
> We know that the current model is kind of racy with Ironic because
> there is a 2-stage validation (see [1]). I'm not in favor of making
> the model more complex, but rather of putting all the claiming logic
> in the scheduler, which is a longer path to win, but a safer one.

Yeah, I have thought about adding the same resource consumption on the
compute manager side, but it's ugly because we would implement ironic's
resource consumption method in two places. If we move the claiming into
the scheduler, things become easy: we can just provide an extension
point for different consumption methods (if I understood the IRC
discussion correctly). And as gantt will be a standalone service,
validating a resource shouldn't be spread across different services. So
I agree with you.

But for now, as you said, that is a long-term plan. We can't provide a
different resource consumption method on the compute manager side
today, and we can't move the claiming into the scheduler yet either. So
the method I proposed is easier for now; at least we won't have
different resource accounting between the scheduler (IronicHostManager)
and the compute node (ResourceTracker) for ironic, and ironic can work
fine.

The method I propose has one small problem: when all the nodes are
allocated, we can still see some free resources whenever the flavor's
resources are less than the baremetal node's. But that can be addressed
by exposing the max instance count through the hypervisor API (running
instances are already exposed), so users will know why no more
instances can be allocated. And being able to configure the maximum per
node sounds useful for operators too :)
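
Something like this is what I have in mind (a sketch only: the
'max_instances' key, the driver classes, and host_passes() here are
hypothetical, not today's Nova driver API):

    class IronicNodeDriver:
        def get_available_resource(self):
            # One baremetal node hosts exactly one instance.
            return {"memory_mb": 1024, "vcpus": 4, "max_instances": 1}

    class LibvirtHostDriver:
        def get_available_resource(self):
            # -1 means the driver imposes no limit.
            return {"memory_mb": 65536, "vcpus": 32, "max_instances": -1}

    def host_passes(num_instances_on_host, driver_resources):
        # A NumInstancesFilter-style check, using the driver-reported
        # limit instead of the global max_instances_per_host option.
        limit = driver_resources.get("max_instances", -1)
        return limit < 0 or num_instances_on_host < limit

    ironic = IronicNodeDriver().get_available_resource()
    print(host_passes(0, ironic))  # True  -> first instance lands
    print(host_passes(1, ironic))  # False -> second one is filtered out

With the ironic driver, the second 512MB instance is filtered out even
though half of the node's RAM still looks free, which is exactly the
behaviour we want.
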
> -Sylvain
>
> [1] https://bugs.launchpad.net/nova/+bug/1341420
<blockquote type="cite">
<div dir="ltr">
<div>Thanks</div>
<div>Alex</div>
</div>
<br>
_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev