[openstack-dev] [nova] Proposal: Move CPU and memory allocation ratio out of scheduler

Jay Pipes jaypipes at gmail.com
Wed Jun 4 18:16:31 UTC 2014


On 06/04/2014 06:10 AM, Murray, Paul (HP Cloud) wrote:
> Hi Jay,
>
> This sounds good to me. You left out the part about limits from the
> discussion – these filters set the limits used at the resource tracker.

Yes, and that is, IMO, bad design. Allocation ratios are the domain of 
the compute node and the resource tracker. Not the scheduler. The 
allocation ratios simply adjust the amount of resources that the compute 
node advertises to others. Allocation ratios are *not* scheduler policy, 
and they aren't related to flavours.
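
To make that concrete: the ratio is just a multiplier applied to the raw
capacity the node reports. A minimal sketch (attribute names are
illustrative, not the actual resource tracker fields):

    # Sketch only: how an allocation ratio adjusts what a compute node
    # advertises. Names are illustrative, not actual Nova code.
    def advertised_resources(physical_vcpus, physical_ram_mb,
                             cpu_allocation_ratio, ram_allocation_ratio):
        return {
            'vcpus': physical_vcpus * cpu_allocation_ratio,
            'memory_mb': physical_ram_mb * ram_allocation_ratio,
        }

    # e.g. 8 physical cores with a 16.0 ratio advertise 128 schedulable vCPUs
    advertised_resources(8, 32768, 16.0, 1.5)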

> You also left out the force-to-host and its effect on limits.

force-to-host is definitely non-cloudy. It was a bad idea that should 
never have been added to Nova in the first place.

That said, I don't see how force-to-host has any effect on limits. 
Limits should not be output from the scheduler. In fact, they shouldn't 
be anything other than an *input* to the scheduler, provided in each 
host state struct that gets built from records updated in the resource 
tracker and the Nova database.
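
Put differently: if limits exist at all, they should arrive at the
scheduler already baked into the host state. A hypothetical sketch (not
the actual HostState class):

    # Hypothetical sketch: limits travel *into* the scheduler as part of
    # the host state, instead of being computed and output by it.
    class HostState(object):
        def __init__(self, host, total_vcpus, used_vcpus, vcpu_limit):
            self.host = host
            self.total_vcpus = total_vcpus  # raw capacity from the RT
            self.used_vcpus = used_vcpus    # current consumption from the RT
            self.vcpu_limit = vcpu_limit    # overcommit-adjusted ceiling,
                                            # already calculated by the RT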

> Yes, I
> would agree with doing this at the resource tracker too.
>
> And of course the extensible resource tracker is the right way to do it :)

:) Yes, clearly this is something that I ran into while brainstorming 
around the extensible resource tracker patches.

Best,
-jay

> Paul.
>
> *From:* Jay Lau [mailto:jay.lau.513 at gmail.com]
> *Sent:* 04 June 2014 10:04
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [nova] Proposal: Move CPU and memory
> allocation ratio out of scheduler
>
> Is there any blueprint related to this? Thanks.
>
> 2014-06-03 21:29 GMT+08:00 Jay Pipes <jaypipes at gmail.com>:
>
> Hi Stackers,
>
> tl;dr
> =====
>
> Move the CPU and RAM allocation ratio definitions out of the Nova
> scheduler and into the resource tracker. Remove the overcommit
> calculations from the core_filter and ram_filter scheduler pieces.
>
> Details
> =======
>
> Currently, in the Nova code base, the thing that controls whether or not
> the scheduler places an instance on a compute host that is already
> "full" (in terms of memory or vCPU usage) is a pair of configuration
> options* called cpu_allocation_ratio and ram_allocation_ratio.
>
> These configuration options are defined in, respectively,
> nova/scheduler/filters/core_filter.py and
> nova/scheduler/filters/ram_filter.py.
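>
> They are registered with oslo.config along these lines (a sketch of the
> idea, not the verbatim filter code; 16.0 and 1.5 are the defaults):
>
>     from oslo.config import cfg
>
>     # Sketch: the knobs the two filters define today.
>     opts = [
>         cfg.FloatOpt('cpu_allocation_ratio', default=16.0,
>                      help='Virtual CPU to physical CPU allocation ratio'),
>         cfg.FloatOpt('ram_allocation_ratio', default=1.5,
>                      help='Virtual RAM to physical RAM allocation ratio'),
>     ]
>
>     CONF = cfg.CONF
>     CONF.register_opts(opts)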
>
> Every time an instance is launched, the scheduler loops through a
> collection of host state structures that contain resource consumption
> figures for each compute node. For each compute host, the core_filter
> and ram_filter's host_passes() method is called. In the host_passes()
> method, the host's reported total amount of CPU or RAM is multiplied by
> this configuration option, and the reported used amount of CPU or RAM is
> then subtracted from that product. If the result is greater than or equal
> to the amount of vCPU or RAM requested by the instance being launched,
> True is returned and the host continues to be considered during
> scheduling decisions.
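>
> Schematically, the check each filter performs boils down to this
> (simplified; the real filters also deal with limits and aggregate
> overrides):
>
>     # Simplified sketch of the overcommit check in core_filter and
>     # ram_filter (illustrative, not the verbatim filter code).
>     def host_passes(total, used, requested, allocation_ratio):
>         limit = total * allocation_ratio  # overcommitted capacity
>         free = limit - used               # what is still schedulable
>         return free >= requested
>
>     # CPU example: 8 physical cores, ratio 16.0, 100 vCPUs already used,
>     # instance asks for 4 vCPUs -> 8 * 16.0 - 100 = 28 >= 4, host passes.
>     host_passes(total=8, used=100, requested=4, allocation_ratio=16.0)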
>
> I propose we move the definition of the allocation ratios out of the
> scheduler entirely, as well as the calculation of the total amount of
> resources each compute node contains. The resource tracker is the most
> appropriate place to define these configuration options, as the resource
> tracker is what is responsible for keeping track of total and used
> resource amounts for all compute nodes.
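>
> Concretely, the resource tracker would apply the ratios once, when it
> updates the compute node record, and the scheduler would simply compare
> requests against the numbers it is handed. A hypothetical sketch, not a
> patch:
>
>     # Hypothetical sketch: the resource tracker applies the allocation
>     # ratios when recording the node's inventory, so the scheduler only
>     # ever sees already-adjusted totals.
>     def update_available_resource(node, cpu_ratio, ram_ratio):
>         node['vcpus'] = node['physical_vcpus'] * cpu_ratio
>         node['memory_mb'] = node['physical_memory_mb'] * ram_ratio
>         return node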
>
> Benefits:
>
>   * Allocation ratios determine the amount of resources that a compute
> node advertises. The resource tracker is what determines the amount of
> resources that each compute node has, and how much of a particular type
> of resource has been used on a compute node. It therefore makes sense
> to put calculations and definition of allocation ratios where they
> naturally belong.
>   * The scheduler currently re-calculates total resource amounts on
> every call, which isn't necessary. The total resource amounts don't
> change unless a configuration option is changed on a compute node (or
> host aggregate), so this calculation can be done once, and more
> efficiently, in the resource tracker.
>   * Move more logic out of the scheduler.
>   * With the move to an extensible resource tracker, we can more easily
> evolve to defining all resource-related options in the same place
> (instead of in different filter files in the scheduler...)
>
> Thoughts?
>
> Best,
> -jay
>
> * Host aggregates may also have a separate allocation ratio that
> overrides any configuration setting that a particular host may have.
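>
> The override is just a precedence rule: use the ratio set in the
> aggregate's metadata if there is one, otherwise fall back to the node's
> configured value (sketch, names illustrative):
>
>     # Sketch of the precedence rule: aggregate metadata wins over the
>     # per-node configuration option.
>     def effective_cpu_ratio(aggregate_metadata, configured_ratio):
>         value = aggregate_metadata.get('cpu_allocation_ratio')
>         return float(value) if value is not None else configured_ratio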
>
>
> --
>
> Thanks,
>
> Jay
>
>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
