All VMs fail when --max exceeds available resources
Matt Riedemann
mriedemos at gmail.com
Wed Nov 20 22:00:29 UTC 2019
On 11/20/2019 3:21 PM, Albert Braden wrote:
> I think the document is saying that we need to set them in nova.conf on each HV. I tried that and it seems to fix the allocation failure:
>
> root at us01odc-dev1-ctrl1:~# os resource provider inventory list f20fa03d-18f4-486b-9b40-ceaaf52dabf8
> +----------------+------------------+----------+----------+-----------+----------+--------+
> | resource_class | allocation_ratio | max_unit | reserved | step_size | min_unit | total |
> +----------------+------------------+----------+----------+-----------+----------+--------+
> | VCPU | 1.0 | 16 | 2 | 1 | 1 | 16 |
> | MEMORY_MB | 1.0 | 128888 | 8192 | 1 | 1 | 128888 |
> | DISK_GB | 1.0 | 1208 | 246 | 1 | 1 | 1208 |
> +----------------+------------------+----------+----------+-----------+----------+--------+
Yup, the config on the controller doesn't apply to the computes or to 
placement, because the computes are what report their inventory to 
placement. So you have to configure the allocation ratios on the 
computes themselves, or, starting in Stein, via a (resource provider) 
aggregate.
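For anyone following along, a minimal sketch of that per-compute config (these are the standard nova option names; the values here are just examples, not recommendations):

```ini
# /etc/nova/nova.conf on each compute node (example values)
[DEFAULT]
cpu_allocation_ratio = 1.0
ram_allocation_ratio = 1.0
disk_allocation_ratio = 1.0
```

Note that since Stein there are also initial_*_allocation_ratio options that only seed the inventory the first time a compute node record is created, so it's worth checking which set applies to your release.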
>
> This fixed the "allocation ratio" issue but I still see the --max issue. What could be causing that?
That's something else, yeah? I didn't really dig into that part of the 
email; the allocation ratio thing jumped out at me since it's been a 
long-standing, painful issue/behavior change since Ocata.
One question though: I read your original email as essentially "(1) I 
did x and got some failures, then (2) I changed something and now 
everything fails". Are you running from a clean environment in both 
test scenarios? If you have VMs on the computes when you're doing (2), 
that's going to change the scheduling results in (2), i.e. the computes 
will have less capacity since there are already resources allocated on 
them in placement.
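To make that concrete, here is a toy sketch (not actual nova/placement code) of placement-style capacity accounting: usable capacity per resource class is (total - reserved) * allocation_ratio, and existing allocations are counted against it before a new request is accepted.

```python
# Toy model of placement-style capacity accounting. The VCPU numbers
# (total=16, reserved=2, ratio=1.0) match the inventory table quoted
# above; everything else is illustrative.

def capacity(total: int, reserved: int, allocation_ratio: float) -> int:
    # Usable capacity for one resource class.
    return int((total - reserved) * allocation_ratio)

def fits(total: int, reserved: int, ratio: float,
         used: int, requested: int) -> bool:
    # A request fits only if used + requested stays within capacity.
    return used + requested <= capacity(total, reserved, ratio)

# On a clean host, a request for 14 VCPUs fits (capacity is 14)...
print(fits(16, 2, 1.0, used=0, requested=14))   # True
# ...but with 8 VCPUs already allocated to existing VMs, the same
# request no longer fits, even though the config didn't change.
print(fits(16, 2, 1.0, used=8, requested=14))   # False
```

You can see the real numbers for a host with `openstack resource provider usage show <uuid>` and compare them against the inventory you already listed.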
--
Thanks,
Matt