<div dir="ltr"><span style="font-size:12.8px">Thanks for the info, so it seems we are not going to implement aggregate overcommit ratio in placement at least in the near future?</span><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 17, 2018 at 9:19 AM, Zhenyu Zheng <span dir="ltr"><<a href="mailto:zhengzhenyulixi@gmail.com" target="_blank">zhengzhenyulixi@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Thanks for the info, so it seems we are not going to implement aggregate overcommit ratio in placement at least in the near future?</div><div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><div class="gmail_quote">On Wed, Jan 17, 2018 at 5:24 AM, melanie witt <span dir="ltr"><<a href="mailto:melwittt@gmail.com" target="_blank">melwittt@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hello Stackers,<br>
<br>
This is a heads up to any of you using the AggregateCoreFilter, AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. These filters have effectively allowed operators to set overcommit ratios per aggregate rather than per compute node in <= Newton.<br>
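
For reference, the per aggregate ratios these filters read are set as aggregate metadata, roughly along these lines (the aggregate name and ratio values are only examples):

  $ openstack aggregate set --property cpu_allocation_ratio=16.0 agg1
  $ openstack aggregate set --property ram_allocation_ratio=1.5 agg1
  $ openstack aggregate set --property disk_allocation_ratio=1.0 agg1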
<br>
Beginning in Ocata, there is a behavior change where aggregate-based overcommit ratios will no longer be honored during scheduling. Instead, overcommit values must be set on a per compute node basis in nova.conf.<br>
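
For example (the ratio values here are illustrative only), the per compute node settings go in nova.conf on each compute host:

  [DEFAULT]
  cpu_allocation_ratio = 16.0
  ram_allocation_ratio = 1.5
  disk_allocation_ratio = 1.0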
<br>
Details: as of Ocata, instead of considering all compute nodes at the start of scheduler filtering, an optimization has been added to query resource capacity from placement and prune the compute node list with the result *before* any filters are applied. Placement tracks resource capacity and usage and does *not* track aggregate metadata [1]. Because of this, placement cannot consider aggregate-based overcommit and will exclude compute nodes that do not have capacity based on per compute node overcommit.<br>
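
As a rough sketch of what that pre-filtering looks like (the exact request depends on the flavor and on the placement microversion in use), the scheduler asks placement only for resource providers that can fit the requested resources, e.g.:

  GET /resource_providers?resources=VCPU:1,MEMORY_MB:2048,DISK_GB:20

and capacity on each provider is computed from its inventory as (total - reserved) * allocation_ratio, using the per compute node ratios reported by the resource tracker.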
<br>
How to prepare: if you have been relying on per aggregate overcommit, during your upgrade to Ocata, you must change to using per compute node overcommit ratios in order for your scheduling behavior to stay consistent. Otherwise, you may notice increased NoValidHost scheduling failures as the aggregate-based overcommit is no longer being considered. You can safely remove the AggregateCoreFilter, AggregateRamFilter, and AggregateDiskFilter from your enabled_filters and you do not need to replace them with any other core/ram/disk filters. The placement query takes care of the core/ram/disk filtering instead, so CoreFilter, RamFilter, and DiskFilter are redundant.<br>
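
As an illustration (the filter lists are abbreviated and purely an example, not a recommendation), the scheduler section of nova.conf might go from:

  [filter_scheduler]
  enabled_filters = RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,DiskFilter,AggregateCoreFilter,AggregateRamFilter,AggregateDiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter

to:

  [filter_scheduler]
  enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter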
<br>
Thanks,<br>
-melanie<br>
<br>
[1] Placement has been a clean slate for resource management. Prior to placement, there were conflicts between the different methods for setting overcommit ratios that were never addressed, such as: "which value takes precedence if a compute node has overcommit set AND the aggregate has it set?" and "if a compute node is in more than one aggregate, which overcommit value should be taken?" Those ambiguities were not something we wanted to bring forward into placement.
<br>