[openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

Khanh-Toan Tran khanh-toan.tran at cloudwatt.com
Tue Feb 11 09:21:21 UTC 2014


> Second, there is nothing wrong with booting the instances (or
> instantiating other resources) as separate commands as long as we
> support some kind of reservation token.

I'm not sure what a reservation token would do. Is it some way of informing
the scheduler that the resources will not be instantiated until later?
Let's consider the following example:

A user wants to create 2 VMs, a small one with 20 GB RAM and a big one
with 40 GB RAM, in a datacenter consisting of 2 hosts: one with 50 GB RAM
left and another with 30 GB RAM left, using the Filter Scheduler's default
RamWeigher.

If we pass the demand as two separate commands, there is a chance that the
small VM arrives first. RamWeigher will put it on the 50 GB RAM host,
reducing that host to 30 GB RAM. Then, when the big VM request arrives,
there is no host left with enough RAM for it. As a result, the whole
request fails.
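
To illustrate, here is a rough Python sketch of that sequential placement
(the host names and the pick-the-emptiest-host rule are my own
simplification of RamWeigher's spreading behavior, not actual Nova code):

  # Hypothetical simplification: schedule the two VMs one by one,
  # always spreading to the host with the most free RAM.
  hosts = {'host1': 50, 'host2': 30}   # free RAM in GB

  def place(vm_ram):
      # Pick the host with the most free RAM left.
      host = max(hosts, key=hosts.get)
      if hosts[host] < vm_ram:
          raise RuntimeError('no host has %d GB RAM free' % vm_ram)
      hosts[host] -= vm_ram
      return host

  print(place(20))   # small VM -> host1; both hosts now have 30 GB free
  print(place(40))   # big VM -> RuntimeError, the demand fails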

Now if we can pass the two VMs in one command, the Solver Scheduler can put
all their constraints together into one big LP as follows (x_uv = 1 if VM u
is hosted on host v, 0 otherwise):

  50 GB RAM host constraint:       20*x_11 + 40*x_21 <= 50
  30 GB RAM host constraint:       20*x_12 + 40*x_22 <= 30
  Small VM presence constraint:    x_11 + x_12 = 1
  Big VM presence constraint:      x_21 + x_22 = 1

From these constraints there is only one feasible solution: x_11 = 0,
x_12 = 1, x_21 = 1, x_22 = 0; i.e., the small VM is hosted on the 30 GB RAM
host and the big VM on the 50 GB RAM host.
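
As a rough sketch, the same LP can be written with the PuLP library (PuLP,
the names and the dummy objective here are my own choices for illustration,
not the actual Solver Scheduler code):

  import pulp

  vms = {'small': 20, 'big': 40}       # required RAM in GB
  hosts = {'host1': 50, 'host2': 30}   # free RAM in GB

  prob = pulp.LpProblem('vm_placement', pulp.LpMinimize)

  # x[(u, v)] = 1 if VM u is hosted on host v, 0 otherwise
  x = pulp.LpVariable.dicts('x', [(u, v) for u in vms for v in hosts],
                            cat='Binary')

  # Host RAM constraints
  for v, free in hosts.items():
      prob += pulp.lpSum(vms[u] * x[(u, v)] for u in vms) <= free

  # Presence constraints: each VM is placed on exactly one host
  for u in vms:
      prob += pulp.lpSum(x[(u, v)] for v in hosts) == 1

  # Dummy objective: only feasibility matters in this example
  prob += pulp.lpSum(x.values())

  prob.solve()
  for (u, v), var in x.items():
      if var.value() == 1:
          print('%s -> %s' % (u, v))   # small -> host2, big -> host1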

In conclusion, if we have VMs of multiple flavors to deal with, we cannot
give the correct answer unless we have all the information. Therefore, if
by reservation you mean that the scheduler would hold off the scheduling
process and save the information until it receives everything it needs,
then I agree. But that is just a workaround for passing the demand as a
whole, which would be better handled by an API.

That responds to your first point, too. If we don't mind that some VMs are
placed and some are not (e.g. they belong to different apps), then it's OK
to pass them to the scheduler without an Instance Group. However, if the
VMs belong together (i.e. to the same app), then we have to put them into
an Instance Group.

> -----Original Message-----
> From: Chris Friesen [mailto:chris.friesen at windriver.com]
> Sent: Monday, February 10, 2014 18:45
> To: openstack-dev at lists.openstack.org
> Subject: Re: [openstack-dev] [Nova][Scheduler] Policy Based Scheduler and
> Solver Scheduler
>
> On 02/10/2014 10:54 AM, Khanh-Toan Tran wrote:
>
> > Heat
> > may orchestrate the provisioning process, but eventually the instances
> > will be passed to Nova-scheduler (Gantt) as separated commands, which
> > is exactly the problem Solver Scheduler wants to correct. Therefore
> > the Instance Group API is needed, wherever it is used
> > (nova-scheduler/Gantt).
>
> I'm not sure that this follows.
>
> First, the instance groups API is totally separate since we may want to
> schedule a number of instances simultaneously without them being part of
> an instance group.  Certainly in the case of using instance groups that
> would be one input into the scheduler, but it's an optional input.
>
> Second, there is nothing wrong with booting the instances (or
> instantiating other resources) as separate commands as long as we
> support some kind of reservation token.
>
> In that model, we would pass a bunch of information about multiple
> resources to the solver scheduler, have it perform scheduling *and
> reserve the resources*, then return some kind of resource reservation
> tokens back to the caller for each resource.  The caller could then
> allocate each resource, pass in the reservation token indicating both
> that the resources had already been reserved as well as what the
> specific resource that had been reserved (the compute-host in the case
> of an instance, for example).
>
> Chris


