[openstack-dev] [Nova][Scheduler] Policy Based Scheduler and Solver Scheduler

Dina Belova dbelova at mirantis.com
Tue Feb 11 16:48:11 UTC 2014


>
> This is something to explore adding in Nova, using
> a local service or an external service (need to explore Climate).
>
> I need to find out more about Climate.


Here is the Climate Launchpad page: https://launchpad.net/climate

That's still a really young project, but I believe it has a great future
in resource reservation. So if you need some kind of reservation logic, I
also believe it should be implemented in Climate (as proposed, it is
currently implemented as Reservation-as-a-Service).

Thanks


On Tue, Feb 11, 2014 at 8:23 PM, Yathiraj Udupi (yudupi)
<yudupi at cisco.com> wrote:

>  Hi Dina,
>
>  Thanks for the note about the Climate logic.  This is something that will
> be very useful when we have to schedule multiple instances (of potentially
> different flavors) from Nova as a single request.  If the Solver Scheduler
> can make a request to the Climate service to reserve the resources soon
> after the placement decision has been made, then the Nova provisioning
> logic can handle the resource provisioning using the Climate-reserved
> leases (a rough sketch of that interaction follows below).  Regarding the
> Solver Scheduler, for your reference I just sent another email with some
> pointers about it.  Otherwise, this is the blueprint -
> https://blueprints.launchpad.net/nova/+spec/solver-scheduler
> I guess this is something to explore further, to see how the Nova
> provisioning logic would work with Climate leases - or perhaps this
> already works.  I need to find out more about Climate.
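>
>  A rough sketch of that interaction (purely illustrative - none of these
> interfaces exist yet, so every name below is hypothetical):
>
>     # Hypothetical glue between the Solver Scheduler and Climate.
>     def schedule_and_reserve(solver, climate, instance_specs):
>         # 1. Solve the whole placement problem in one call.
>         placement = solver.place(instance_specs)   # {spec: host}
>
>         # 2. Ask Climate to reserve the chosen resources via a lease.
>         lease = climate.create_lease(
>             name='solver-placement',
>             reservations=[{'host': host, 'flavor': spec.flavor}
>                           for spec, host in placement.items()])
>
>         # 3. Return the lease so the Nova provisioning logic can later
>         #    boot the instances against the reserved capacity.
>         return placement, lease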
>
>  Thanks,
> Yathi.
>
>
>   On 2/11/14, 7:44 AM, "Dina Belova" <dbelova at mirantis.com> wrote:
>
>>  Like a restaurant reservation, it would "claim" the resources for use
>> by someone at a later date.  That way nobody else can use them.
>> That way the scheduler would be responsible for determining where the
>> resource should be allocated from, and getting a reservation for that
>> resource.  It would not have anything to do with actually instantiating the
>> instance/volume/etc.
>
>
>  Although I'm quite new to the topic of the Solver Scheduler, it seems to
> me that in that case you should look at the Climate project. It aims to
> provide resource reservation to OpenStack clouds (and by resource I mean
> here an instance/compute host/volume/etc.)
>
>  And the Climate logic is: create a lease - get resources from the common
> pool - do something with them when the lease start time comes.
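>
>  From a consumer's point of view that could look roughly like this (purely
> illustrative - this is not the actual client API, just the shape of the
> idea):
>
>     # Hypothetical client calls: create a lease now, use the reserved
>     # resources once the lease start time comes.
>     lease = climate.leases.create(
>         name='my-lease',
>         start='2014-02-12 10:00',
>         end='2014-02-12 18:00',
>         reservations=[{'resource_type': 'virtual:instance', 'amount': 2}])
>     # When the lease starts, the resources are taken from the common pool
>     # and handed to the lease owner.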
>
>  I'll say it one more time - I'm not all that familiar with this
> discussion, but it looks like Climate might help here.
>
>  Thanks
> Dina
>
>
> On Tue, Feb 11, 2014 at 7:09 PM, Chris Friesen <
> chris.friesen at windriver.com> wrote:
>
>> On 02/11/2014 03:21 AM, Khanh-Toan Tran wrote:
>>
>>>  Second, there is nothing wrong with booting the instances (or
>>>>
>>> instantiating other
>>>
>>>> resources) as separate commands as long as we support some kind of
>>>> reservation token.
>>>>
>>>
>>> I'm not sure what a reservation token would do. Is it some kind of way
>>> of informing the scheduler that the resources will not be initiated
>>> until later?
>>>
>>
>>  Like a restaurant reservation, it would "claim" the resources for use by
>> someone at a later date.  That way nobody else can use them.
>>
>> That way the scheduler would be responsible for determining where the
>> resource should be allocated from, and getting a reservation for that
>> resource.  It would not have anything to do with actually instantiating the
>> instance/volume/etc.
>>
>>
>>>  Let's consider the following example:
>>>
>>> A user wants to create 2 VMs, a small one with 20 GB RAM and a big one
>>> with 40 GB RAM, in a datacenter consisting of 2 hosts: one with 50 GB RAM
>>> left, and another with 30 GB RAM left, using the Filter Scheduler's
>>> default RamWeigher.
>>>
>>> If we pass the demand as two commands, there is a chance that the small
>>> VM arrives first. RamWeigher will put it on the 50 GB host, which will
>>> then be reduced to 30 GB RAM. When the big VM request arrives, there will
>>> be no space left to host it. As a result, the whole demand fails.
>>>
>>> Now if we can pass the two VMs in one command, the SolverScheduler can
>>> put their constraints all together into one big LP as follows (x_uv = 1
>>> if VM u is hosted on host v, 0 if not; sketching just the feasibility
>>> constraints, with the objective encoding the weigher):
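>>>
>>>     x_11 + x_12 = 1          (small VM placed on exactly one host)
>>>     x_21 + x_22 = 1          (big VM placed on exactly one host)
>>>     20*x_11 + 40*x_21 <= 50  (host 1 has 50 GB RAM left)
>>>     20*x_12 + 40*x_22 <= 30  (host 2 has 30 GB RAM left)
>>>     x_uv in {0, 1}
>>>
>>> The only feasible assignment puts the big VM on the 50 GB host and the
>>> small VM on the 30 GB host - exactly the solution that two separate
>>> requests can miss.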
>>>
>>
>>  Yes.  So what I'm suggesting is that we schedule the two VMs as one call
>> to the SolverScheduler.  The scheduler then gets reservations for the
>> necessary resources and returns them to the caller.  This would be sort of
>> like the existing Claim object in nova/compute/claims.py but generalized
>> somewhat to other resources as well.
>>
>> The caller could then boot each instance separately (passing the
>> appropriate reservation/claim along with the boot request).  Because the
>> caller has a reservation, the core code would know it doesn't need to
>> schedule or allocate resources; that's already been done.
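>>
>> As a very rough sketch of that flow (hypothetical names, loosely inspired
>> by the Claim object in nova/compute/claims.py):
>>
>>     # One scheduling call for the whole set of instances; the scheduler
>>     # returns a reservation (claim) per instance.
>>     reservations = scheduler.schedule_and_reserve(instance_specs)
>>
>>     # Boot each instance separately, passing its reservation token.
>>     # Seeing a token, the core code skips scheduling and allocation.
>>     for spec, reservation in reservations.items():
>>         compute_api.create(spec, reservation=reservation)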
>>
>> The advantage of this is that the scheduling and resource allocation are
>> done separately from the instantiation.  The instantiation API could
>> remain basically as-is, except for supporting an optional reservation
>> token.
>>
>>
>>>  That responds to your first point, too. If we don't mind that some VMs
>>> are placed and some are not (e.g. they belong to different apps), then
>>> it's OK to pass them to the scheduler without an Instance Group. However,
>>> if the VMs belong together (e.g. to the same app), then we have to put
>>> them into an Instance Group.
>>>
>>
>>  When I think of an "Instance Group", I think of
>> https://blueprints.launchpad.net/nova/+spec/instance-group-api-extension.
>> Fundamentally, "Instance Groups" describes a runtime relationship between
>> different instances.
>>
>> The scheduler doesn't necessarily care about a runtime relationship; it's
>> just trying to allocate resources efficiently.
>>
>> In the above example, there is no need for those two instances to
>> necessarily be part of an Instance Group--we just want to schedule them
>> both at the same time to give the scheduler a better chance of fitting them
>> both.
>>
>> More generally, the more instances I want to start up, the more beneficial
>> it can be to pass them all to the scheduler at once in order to give the
>> scheduler more information.  Those instances could be part of completely
>> independent Instance Groups, or not part of an Instance Group at all... the
>> scheduler can still do a better job if it has more information to work
>> with.
>>
>>
>> Chris
>>
>
>
>
>  --
>
> Best regards,
>
> Dina Belova
>
> Software Engineer
>
> Mirantis Inc.
>
>
>
>


-- 

Best regards,

Dina Belova

Software Engineer

Mirantis Inc.