[openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

Jay Pipes jaypipes at gmail.com
Wed Jun 4 18:03:18 UTC 2014


On 06/04/2014 03:08 AM, Yingjun Li wrote:
> +1, if we do so, a related bug may be solved as well:
> https://bugs.launchpad.net/nova/+bug/1323538

Yep, I agree that the above bug would be addressed.

Best,
-jay

> On Jun 3, 2014, at 21:29, Jay Pipes <jaypipes at gmail.com
> <mailto:jaypipes at gmail.com>> wrote:
>
>> Hi Stackers,
>>
>> tl;dr
>> =====
>>
>> Move the CPU and RAM allocation ratio definitions out of the Nova
>> scheduler and into the resource tracker, and remove the overcommit
>> calculations from the core_filter and ram_filter scheduler pieces.
>>
>> Details
>> =======
>>
>> Currently, in the Nova code base, the thing that controls whether or
>> not the scheduler places an instance on a compute host that is already
>> "full" (in terms of memory or vCPU usage) is a pair of configuration
>> options* called cpu_allocation_ratio and ram_allocation_ratio.
>>
>> These configuration options are defined in, respectively,
>> nova/scheduler/filters/core_filter.py and
>> nova/scheduler/filters/ram_filter.py.
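>>
>> For concreteness, the declarations look roughly like this (paraphrased,
>> not copied verbatim from the filter modules; 16.0 and 1.5 are the stock
>> defaults):
>>
>>     from oslo.config import cfg
>>
>>     CONF = cfg.CONF
>>
>>     # nova/scheduler/filters/core_filter.py (approximate)
>>     CONF.register_opt(cfg.FloatOpt('cpu_allocation_ratio', default=16.0,
>>         help='Virtual CPU to physical CPU allocation ratio'))
>>
>>     # nova/scheduler/filters/ram_filter.py (approximate)
>>     CONF.register_opt(cfg.FloatOpt('ram_allocation_ratio', default=1.5,
>>         help='Virtual RAM to physical RAM allocation ratio'))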
>>
>> Every time an instance is launched, the scheduler loops through a
>> collection of host state structures that contain resource consumption
>> figures for each compute node. For each compute host, the core_filter
>> and ram_filter's host_passes() method is called. In host_passes(), the
>> host's reported total amount of CPU or RAM is multiplied by the
>> relevant allocation ratio, and the reported used amount of CPU or RAM
>> is then subtracted from that product. If the remainder is greater than
>> or equal to the amount of CPU or RAM requested by the instance being
>> launched, True is returned and the host continues to be considered
>> during scheduling decisions.
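>>
>> In other words, each filter does roughly the following (paraphrased,
>> using the core_filter as the example; the host state attribute names
>> are approximate):
>>
>>     def host_passes(host_state, requested_vcpus):
>>         # Scale raw capacity by the overcommit ratio, subtract what
>>         # is already consumed, and check whether enough headroom
>>         # remains for the instance being scheduled.
>>         limit = host_state.vcpus_total * CONF.cpu_allocation_ratio
>>         return (limit - host_state.vcpus_used) >= requested_vcpus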
>>
>> I propose we move the definition of the allocation ratios out of the
>> scheduler entirely, as well as the calculation of the total amount of
>> resources each compute node contains. The resource tracker is the most
>> appropriate place to define these configuration options, as the
>> resource tracker is what is responsible for keeping track of total and
>> used resource amounts for all compute nodes.
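>>
>> To make that concrete, here is a purely hypothetical sketch (method and
>> field names invented for illustration) of what applying the ratios in
>> the resource tracker could look like: the tracker advertises
>> already-scaled limits, and the filters only compare those limits
>> against current usage.
>>
>>     class ResourceTracker(object):
>>         def _advertised_limits(self, total_vcpus, total_ram_mb):
>>             # Apply the allocation ratios once, at reporting time,
>>             # rather than on every scheduling request.
>>             return {
>>                 'vcpus': total_vcpus * CONF.cpu_allocation_ratio,
>>                 'memory_mb': total_ram_mb * CONF.ram_allocation_ratio,
>>             }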
>>
>> Benefits:
>>
>> * Allocation ratios determine the amount of resources that a compute
>> node advertises. The resource tracker is what determines the amount of
>> resources that each compute node has, and how much of a particular
>> type of resource has been used on a compute node. It therefore makes
>> sense to put the calculation and definition of allocation ratios where
>> they naturally belong.
>> * The scheduler currently re-calculates total resource amounts on
>> every call, which isn't necessary. The total resource amounts don't
>> change unless a configuration option is changed on a compute node (or
>> host aggregate), and this calculation can be done more efficiently
>> once in the resource tracker.
>> * Move more logic out of the scheduler
>> * With the move to an extensible resource tracker, we can more easily
>> evolve toward defining all resource-related options in the same place
>> (instead of in separate filter files in the scheduler...)
>>
>> Thoughts?
>>
>> Best,
>> -jay
>>
>> * Host aggregates may also have a separate allocation ratio that
>> overrides any configuration setting that a particular host may have
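>>
>> (Illustratively, that override amounts to something like the sketch
>> below; the aggregate metadata key name is approximate:)
>>
>>     def _cpu_ratio_for(aggregate_metadata):
>>         # A per-aggregate 'cpu_allocation_ratio' value, when present,
>>         # takes precedence over the configured default.
>>         value = aggregate_metadata.get('cpu_allocation_ratio')
>>         if value is not None:
>>             return float(value)
>>         return CONF.cpu_allocation_ratio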
>>