[nova]host cpu reserve

Dmitriy Rabotyagov noonedeadpunk at gmail.com
Thu Mar 23 13:51:14 UTC 2023


Just in case, you DO have options to control CPU and RAM reservation
for the hypervisor. It's just that they are not the best way to do it,
especially if you're overcommitting, as things in real life are more
complicated than just defining the number of reserved CPUs.
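For reference, those knobs live in the [DEFAULT] section of nova.conf;
the values below are only illustrative:

[DEFAULT]
# host CPUs and memory withheld from the inventory reported to placement
reserved_host_cpus = 2
reserved_host_memory_mb = 4096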

For example, if you have cpu_allocation_ratio set to 3, then you get
3 times more vCPUs to place VMs on than you actually have
(sockets*cores*threads*cpu_allocation_ratio). With that you really
can't pick any reserved CPU count that will 100% ensure that the
hypervisor will be able to get the resources it needs at any given
time. So with that approach the only option is to disable CPU
overcommit, but even then you might get the CPUs in socket 1 fully
utilized, which can have negative side effects for the hypervisor.
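To put numbers on it, take a hypothetical 2-socket, 32-core, HT-enabled host:

2 sockets * 32 cores * 2 threads = 128 logical CPUs
128 * cpu_allocation_ratio (3)   = 384 schedulable vCPUs

so reserving a handful of host CPUs barely changes what the scheduler
is still allowed to hand out.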

And because of that, as Sean has mentioned, you can instead tell nova to
explicitly exclude specific cores from being used by guests, which
effectively reserves them for the hypervisor.
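As an example (hypothetical values; check which logical CPUs actually
make up the first core of each socket with lscpu -e or virsh
capabilities), keeping CPUs 0, 1, 64 and 65 for the host on a
128-thread box would look like:

[compute]
cpu_shared_set = 2-63,66-127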

On Thu, Mar 23, 2023 at 14:35, Nguyễn Hữu Khôi <nguyenhuukhoinw at gmail.com> wrote:
>
> Ok. I will try to understand it. I will let you know when I get it.
> Many thanks for your help. :)
>
> On Thu, Mar 23, 2023, 8:14 PM Dmitriy Rabotyagov <noonedeadpunk at gmail.com> wrote:
>>
>> Just to double-check with you: given that you have
>> cpu_allocation_ratio > 1, 2 sockets and HT enabled, and each CPU has 32
>> physical cores, should it be defined like this:
>>
>> [compute]
>> cpu_shared_set="2-32,34-64,66-96,98-128"?
>>
>> > in general you should reserve the first core on each cpu socket for the host os.
>> > if you use hyperthreading then both hyperthreads of the first cpu core on each socket should be omitted
>> > from the cpu_shared_set and cpu_dedicated_set
>>
>> On Thu, Mar 23, 2023 at 13:12, Sean Mooney <smooney at redhat.com> wrote:
>> >
>> > generally you should not
>> > you can use it, but the preferred way to do this is to use
>> > cpu_shared_set and cpu_dedicated_set (in old releases you would have used vcpu_pin_set)
>> > https://docs.openstack.org/nova/latest/configuration/config.html#compute.cpu_shared_set
>> > https://docs.openstack.org/nova/latest/configuration/config.html#compute.cpu_dedicated_set
>> >
>> > if you don't need cpu pinning, just use cpu_shared_set to specify the cores that can be used for floating vms
>> > when you use cpu_shared_set and cpu_dedicated_set, any cpus not specified are reserved for host use.
>> >
>> > https://that.guru/blog/cpu-resources/ and https://that.guru/blog/cpu-resources-redux/
>> >
>> > have some useful info, but they are mostly looking at it from a cpu pinning angle, although the second one covers cpu_shared_set.
>> >
>> > the issue with using
>> > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.reserved_host_cpus
>> >
>> > is that you have to multiply the number of cores that are reserved by the
>> > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.cpu_allocation_ratio
>> >
>> > which means if you decide to manage that via the placement api by using
>> > https://docs.openstack.org/nova/latest/configuration/config.html#DEFAULT.initial_cpu_allocation_ratio instead
>> > then you need to update your nova.conf to modify the reservation if you change the allocation ratio.
>> >
>> > if instead you use cpu_shared_set and cpu_dedicated_set
>> > you are specifying exactly which cpus nova can use, and the allocation ratio no longer needs to be considered.
>> >
>> > in general you should reserve the first core on each cpu socket for the host os.
>> > if you use hyperthreading then both hyperthreads of the first cpu core on each socket should be omitted
>> > from the cpu_shared_set and cpu_dedicated_set
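>> >
>> > e.g. on a hypothetical host where the first core of socket 0 is logical CPUs 0 and 64 and the first core of socket 1 is CPUs 32 and 96 (verify with lscpu -e), that would be:
>> >
>> > [compute]
>> > cpu_shared_set = 1-31,33-63,65-95,97-127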
>> >
>> >
>> >
>> > On Thu, 2023-03-23 at 14:44 +0700, Nguyễn Hữu Khôi wrote:
>> > > Hello guys.
>> > > I am trying to google for nova host cpu reserve to prevent host overload, but I
>> > > cannot find any resources about it. Could you give me some information?
>> > > Thanks.
>> > > Nguyen Huu Khoi
>> >
>> >


