<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><span style="font-family:arial,sans-serif;font-size:13px">Like a restaurant reservation, it would "claim" the resources for use by someone at a later date. That way nobody else can use them.</span><br style="font-family:arial,sans-serif;font-size:13px">
<span style="font-family:arial,sans-serif;font-size:13px">That way the scheduler would be responsible for determining where the resource should be allocated from, and getting a reservation for that resource. It would not have anything to do with actually instantiating the instance/volume/etc.</span></blockquote>
Although I'm quite new to the topic of the Solver Scheduler, it seems to me
that in that case you need to look at the Climate project. It aims to provide
resource reservation for OpenStack clouds (and by "resource" I mean an
instance, a compute host, a volume, etc.).

Climate's logic is roughly: create a lease, take the resources from the
common pool, and then do something with them when the lease's start time
comes.

I'll say it once more: I'm not really familiar with this discussion, but it
looks like Climate might help here.

Thanks,
Dina

On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen
<chris.friesen@windriver.com> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="">On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Second, there is nothing wrong with booting the instances (or<br>
</blockquote>
instantiating other<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
resources) as separate commands as long as we support some kind of<br>
reservation token.<br>
</blockquote>
<br>
I'm not sure what reservation token would do, is it some kind of informing<br>
the scheduler that the resources would not be initiated until later ?<br>
</blockquote>
<br></div>
> Like a restaurant reservation, it would "claim" the resources for use by
> someone at a later date. That way nobody else can use them.
>
> That way the scheduler would be responsible for determining where the
> resource should be allocated from, and getting a reservation for that
> resource. It would not have anything to do with actually instantiating
> the instance/volume/etc.
>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Let's consider a following example:<br>
<br>
A user wants to create 2 VMs, a small one with 20 GB RAM, and a big one<br>
with 40 GB RAM in a datacenter consisted of 2 hosts: one with 50 GB RAM<br>
left, and another with 30 GB RAM left, using Filter Scheduler's default<br>
RamWeigher.<br>
<br>
If we pass the demand as two commands, there is a chance that the small VM<br>
arrives first. RamWeigher will put it in the 50 GB RAM host, which will be<br>
reduced to 30 GB RAM. Then, when the big VM request arrives, there will be<br>
no space left to host it. As a result, the whole demand is failed.<br>
<br>
Now if we can pass the two VMs in a command, SolverScheduler can put their<br>
constraints all together into one big LP as follow (x_uv = 1 if VM u is<br>
hosted in host v, 0 if not):<br>
</blockquote>
<br></div>
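> Yes -- and for concreteness, I'd expect that LP to come out roughly like
> the sketch below. This is just my illustration (written with the PuLP
> library and made-up names like 'small', 'big', 'host1', 'host2'), not the
> actual SolverScheduler code:
>
>     import pulp
>
>     vms = {'small': 20, 'big': 40}      # RAM demand in GB
>     hosts = {'host1': 50, 'host2': 30}  # free RAM in GB
>
>     # x[u, v] = 1 if VM u is hosted on host v, 0 if not
>     x = {(u, v): pulp.LpVariable('x_%s_%s' % (u, v), cat='Binary')
>          for u in vms for v in hosts}
>
>     prob = pulp.LpProblem('place_both_vms', pulp.LpMinimize)
>     prob += pulp.lpSum(x.values())  # dummy objective; feasibility is the point
>
>     # each VM must land on exactly one host
>     for u in vms:
>         prob += pulp.lpSum(x[u, v] for v in hosts) == 1
>
>     # no host may be overcommitted
>     for v in hosts:
>         prob += pulp.lpSum(vms[u] * x[u, v] for u in vms) <= hosts[v]
>
>     prob.solve()
>     for (u, v), var in x.items():
>         if var.value() == 1:
>             print('%s -> %s' % (u, v))
>
> Solved together, the only feasible assignment is the big VM on the 50 GB
> host and the small VM on the 30 GB host, which is exactly the placement
> the one-at-a-time RamWeigher pass can miss.
>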
> So what I'm suggesting is that we schedule the two VMs as one call to the
> SolverScheduler. The scheduler then gets reservations for the necessary
> resources and returns them to the caller. This would be sort of like the
> existing Claim object in nova/compute/claims.py, but generalized somewhat
> to other resources as well.
>
> The caller could then boot each instance separately (passing the
> appropriate reservation/claim along with the boot request). Because the
> caller has a reservation, the core code would know it doesn't need to
> schedule or allocate resources; that's already been done.
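>
> As a strawman, the flow I have in mind looks something like this. None of
> these names exist in Nova today -- they're all made up to show the shape
> of the proposal, not a real interface:
>
>     class Claim(object):
>         """Generalization of the Claim in nova/compute/claims.py:
>         a reservation held against a specific host's resources."""
>         def __init__(self, host, resources):
>             self.host = host            # where the resources were claimed
>             self.resources = resources  # e.g. {'ram_mb': 40960}
>
>     def schedule_and_claim(request_specs):
>         """One scheduler call for the whole request: pick hosts, claim the
>         resources, and return the claims without booting anything."""
>         raise NotImplementedError("proposal only")
>
>     def boot(request_spec, claim=None):
>         """The boot path as it is today, plus an optional reservation/claim
>         token so compute knows scheduling/allocation is already done."""
>         raise NotImplementedError("proposal only")
>
>     # intended usage: schedule both VMs together, then boot them one by one
>     # claims = schedule_and_claim([small_vm_spec, big_vm_spec])
>     # for spec, claim in zip([small_vm_spec, big_vm_spec], claims):
>     #     boot(spec, claim=claim)
>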
> The advantage of this is that the scheduling and resource allocation are
> done separately from the instantiation. The instantiation API could remain
> basically as-is except for supporting an optional reservation token.
>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
That responses to your first point, too. If we don't mind that some VMs<br>
are placed and some are not (e.g. they belong to different apps), then<br>
it's OK to pass them to the scheduler without Instance Group. However, if<br>
the VMs are together (belong to an app), then we have to put them into an<br>
Instance Group.<br>
</blockquote>
<br></div>
When I think of an "Instance Group", I think of "<a href="https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension" target="_blank">https://blueprints.launchpad.<u></u>net/nova/+spec/instance-group-<u></u>api-extension</a>". Fundamentally Instance Groups" describes a runtime relationship between different instances.<br>
<br>
The scheduler doesn't necessarily care about a runtime relationship, it's just trying to allocate resources efficiently.<br>
<br>
In the above example, there is no need for those two instances to necessarily be part of an Instance Group--we just want to schedule them both at the same time to give the scheduler a better chance of fitting them both.<br>
<br>
More generally, the more instances I want to start up the more beneficial it can be to pass them all to the scheduler at once in order to give the scheduler more information. Those instances could be parts of completely independent Instance Groups, or not part of an Instance Group at all...the scheduler can still do a better job if it has more information to work with.<div class="HOEnZb">
<div class="h5"><br>
<br>
Chris<br>
<br>
--
Best regards,
Dina Belova
Software Engineer
Mirantis Inc.