[nova][scheduler] - Stack VMs based on RAM

melanie witt melwittt at gmail.com
Tue Apr 16 22:37:01 UTC 2019


On Wed, 17 Apr 2019 01:15:49 +0300, Georgios Dimitrakakis 
<giorgis at acmac.uoc.gr> wrote:
>   Is there a way I can further debug this?
> 
>   Setting debug to "true" didn't reveal why the VMs are spreading, or
>   which filter is causing them to spread each time.
> 
>   In addition, I think I've read somewhere that placement does this
>   (stacks VMs) by default, but I have not seen that behavior at all...
> 
>   Any ideas appreciated...
> 
>   Regards,
> 
>   G.
> 
>   P.S.: Added the "scheduler" tag to catch the attention of the right
>   people.

I had a look through the code and config options and have not yet found 
anything that could have caused the default stacking behavior to change.

The related config options are:

* [placement]randomize_allocation_candidates which defaults to False
* [filter_scheduler]host_subset_size which defaults to 1
* [filter_scheduler]shuffle_best_same_weighed_hosts which defaults to False

If you have these set to the defaults, you should be getting stacking 
behavior; an example of the relevant nova.conf sections is below. Can 
you double-check whether those options are set (or left at their 
defaults) properly in your scheduler configs?
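
For reference, here is what those sections of nova.conf look like with 
everything at its default (these are the documented defaults, so 
omitting the options entirely has the same effect):

    [placement]
    # When True, placement returns allocation candidates in random
    # order, which tends to spread instances across hosts.
    randomize_allocation_candidates = False

    [filter_scheduler]
    # The scheduler picks randomly among the N best-weighed hosts;
    # 1 means it always takes the single top host.
    host_subset_size = 1
    # When True, hosts with identical weights are shuffled before
    # one is selected.
    shuffle_best_same_weighed_hosts = False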

-melanie

>> All,
>>
>> In the past, I used to stack VMs on hosts based on RAM by putting the
>> following settings in my nova.conf file:
>>
>> ->ram_weight_multiplier=-1.0
>> ->available_filters=nova.scheduler.filters.all_filters
>>
>> ->enabled_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
>>
>>
>> I now have a Rocky installation, and although I have set the above on
>> my controller node and restarted the corresponding services, spawning
>> two VMs (one after another) still places them on two different hosts.
>> It seems that something is missing in order to override the default
>> behavior.
>>
>> Any suggestions please?
>>
>> Best regards,
>>
>> G.
> 
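
To illustrate why ram_weight_multiplier=-1.0 gives stacking: the RAM 
weigher scores hosts by free RAM, the scheduler normalizes those scores 
and multiplies them by the configured multiplier, and the highest-weighed 
host wins. Here is a minimal standalone sketch of that idea (illustrative 
only, not nova's actual weigher code; the host names and numbers are 
made up):

    # Sketch of RAM-based host weighing; not nova's real implementation.

    def normalize(values):
        """Scale raw weights into [0, 1], as the scheduler does before
        applying each weigher's multiplier."""
        lo, hi = min(values), max(values)
        if hi == lo:
            return [1.0 for _ in values]
        return [(v - lo) / (hi - lo) for v in values]

    def weigh_hosts(free_ram_mb, multiplier):
        """Return (host, weight) pairs, best host first."""
        norm = normalize(list(free_ram_mb.values()))
        scored = ((host, multiplier * w)
                  for host, w in zip(free_ram_mb, norm))
        return sorted(scored, key=lambda hw: hw[1], reverse=True)

    hosts = {"node1": 4096, "node2": 16384}  # free RAM in MB

    # multiplier = 1.0: node2 (most free RAM) wins -> spreading
    print(weigh_hosts(hosts, 1.0))
    # multiplier = -1.0: node1 (least free RAM) wins -> stacking
    print(weigh_hosts(hosts, -1.0))

With the multiplier at -1.0, the host with the least free RAM comes out 
on top, so new VMs keep landing on the busiest host until a filter 
excludes it.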