[nova][scheduler] - Stack VMs based on RAM

melanie witt melwittt at gmail.com
Wed Apr 17 00:03:57 UTC 2019


On Wed, 17 Apr 2019 01:51:48 +0300, Georgios Dimitrakakis 
<giorgis at acmac.uoc.gr> wrote:
> Hi Melanie,
> 
> thx for looking into this!
> 
> The values you are referring to have not been set in "nova.conf", either at the controller or at the compute hosts, which as far as I understand means they have their default values, and the initial behavior was to spread VMs across hosts.
> 
> In order to change that behavior what I have changed are:
> 1)ram_weight_multiplier=-1.0
> 2)available_filters=nova.scheduler.filters.all_filters
> 3)enabled_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
> 
> but the behavior remained the same: when launching 2 VMs, the second one always goes to a different host than the one the first is running on.
> Launching more VMs has the same result.

OK, yeah, if you have not set those config options in nova.conf, then 
they should be defaulting to stacking behavior.

I see and understand now that the default value 1.0 for 
[filter_scheduler]ram_weight_multiplier will spread VMs, but you have it 
correctly set to a negative value to instead stack VMs [1].
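
For reference, that would look something like this in nova.conf (just a 
sketch; on Rocky the option lives in the [filter_scheduler] section, 
which is the one I referred to above):

   [filter_scheduler]
   # a negative value weighs hosts with less free RAM higher, i.e. stacking
   ram_weight_multiplier = -1.0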

So, based on everything you have set up, you should be seeing stacking 
VM behavior.

To debug further, you should set debug to True in the nova.conf on your 
scheduler host and look for which filter is removing the desired host 
for the second VM. You can find where to start by looking for a message 
like, "Starting with N host(s)". If you have two hosts with enough RAM, 
you should see "Starting with 2 host(s)" and then look for the log 
message where it says "Filter returned 1 host(s)" and that will be the 
filter that is removing the desired host. Once you know which filter is 
removing it, you can debug further.
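
In case it helps, a minimal sketch of the logging bit in nova.conf on 
the scheduler host (the debug option lives in [DEFAULT]):

   [DEFAULT]
   # enable DEBUG-level logging for the scheduler service
   debug = True

Then restart nova-scheduler, boot the second VM, and search the 
scheduler log for the "Starting with" / "Filter returned" messages 
mentioned above to find the filter that dropped the host you expected.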

-melanie

[1] 
https://docs.openstack.org/nova/latest/user/filter-scheduler.html#weights

>>> On Wed, 17 Apr 2019 01:15:49 +0300, Georgios Dimitrakakis <giorgis at acmac.uoc.gr> wrote:
>>> Is there a way I can further debug this?
>>> Setting debug to "true" didn't show a reason why the VMs are spreading,
>>> or which filter is causing them to spread each time.
>>> In addition, I think I've read somewhere that placement does this
>>> (stacks VMs) by default, but I have not seen that behavior at all...
>>> Any ideas appreciated...
>>> Regards,
>>> G.
>>> P.S.: Added the "scheduler" tag to get the attention of the right
>>> people.
>>
>> I had a look through the code and config options and have not yet found anything that could have caused the default stacking behavior to change.
>>
>> The related config options are:
>>
>> * [placement]randomize_allocation_candidates which defaults to False
>> * [filter_scheduler]host_subset_size which defaults to 1
>> * [filter_scheduler]shuffle_best_same_weighed_hosts which defaults to False
>>
>> If you have these set to the defaults, you should be getting stacking behavior. Can you double check whether those options are set (or left as default) properly in your scheduler configs?
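>>
>> For clarity, those defaults would look roughly like this in nova.conf 
>> (a sketch of the stock values, just for comparison against your files):
>>
>>    [placement]
>>    randomize_allocation_candidates = False
>>
>>    [filter_scheduler]
>>    host_subset_size = 1
>>    shuffle_best_same_weighed_hosts = False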
>>
>> -melanie
>>
>>>> All,
>>>>
>>>> in the past I used to stack VMs based on RAM on hosts by putting the
>>>> following settings in my nova.conf file:
>>>>
>>>> ->ram_weight_multiplier=-1.0
>>>> ->available_filters=nova.scheduler.filters.all_filters
>>>>
>>>> ->enabled_filters=RetryFilter,AvailabilityZoneFilter,CoreFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
>>>>
>>>>
>>>> I have now a Rocky installation and although I have set the above on
>>>> my controller node and restarted the corresponding services, the result
>>>> when spawning two VMs (one after another) is still that they get
>>>> distributed onto two different hosts. It seems that something is missing
>>>> in order to override the default behavior.
>>>>
>>>> Any suggestions please?
>>>>
>>>> Best regards,
>>>>
>>>> G.