[nova] Can I modify the XML of an OpenStack instance with 'virsh edit'?
Hello,

I have an all-in-one OpenStack Victoria deployment, and I want to bind the vCPUs of an OpenStack instance (guest) to pCPUs of the host.

# virsh edit instance-00000107

Original:
```
...
<vcpu placement='static'>8</vcpu>
<cputune>
    <shares>8192</shares>
</cputune>
...
```

After my modification:
```
<vcpu placement='static' cpuset='104-111'>8</vcpu>
<cputune>
    <shares>8192</shares>
    <vcpupin vcpu='0' cpuset='104'/>
    <vcpupin vcpu='1' cpuset='105'/>
    <vcpupin vcpu='2' cpuset='106'/>
    <vcpupin vcpu='3' cpuset='107'/>
    <vcpupin vcpu='4' cpuset='108'/>
    <vcpupin vcpu='5' cpuset='109'/>
    <vcpupin vcpu='6' cpuset='110'/>
    <vcpupin vcpu='7' cpuset='111'/>
    <emulatorpin cpuset='104-111'/>
</cputune>
```

But if I stop and start the instance with the `openstack` command, the XML is automatically restored. And if I use `virsh shutdown instance-00000107 && virsh start instance-00000107`, the instance can't be started (I don't know whether an OpenStack instance must be started with the `openstack` command).

So:
1. Can I modify the XML of an OpenStack instance with 'virsh edit'?
2. If I can't modify the XML, how can I bind the vCPUs of a guest to the pCPUs of the host?

I would appreciate any kind of guidance or help.

HanGuangyu
HanGuangyu,

For testing, e.g. to see if your workloads are sensitive to pinning, you should be able to use 'virsh vcpupin ...' to pin vCPUs to pCPUs.

To configure pinning in your deployment, the CPU topology docs for Victoria should help:

https://docs.openstack.org/nova/victoria/admin/cpu-topologies.html

From our experience (and our workloads), pinning to a NUMA node gives the same performance as pinning to individual pCPUs and is therefore good enough.

Hope this helps,
cheers,
 Arne
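(For reference, a minimal sketch of the `virsh vcpupin` approach Arne mentions, reusing the domain name and pCPU range from the original post. These pins affect only the running domain and are not persisted by nova, so they are suitable for testing only:)

```
# Pin each vCPU of the running domain to one pCPU (live only, not persisted)
for i in $(seq 0 7); do
    virsh vcpupin instance-00000107 "$i" "$((104 + i))"
done

# Pin the emulator threads to the same pCPU range
virsh emulatorpin instance-00000107 104-111

# With no vCPU argument, vcpupin prints the current pinning for verification
virsh vcpupin instance-00000107
```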
On Wed, 2022-03-23 at 08:53 +0100, Arne Wiebalck wrote:
HanGuangyu,
For testing, e.g. to see if your workloads are sensitive to pinning, you should be able to use 'virsh vcpupin ...' to pin vCPUs to pCPUs.
To configure pinning in your deployment, the CPU topology docs for Victoria should help:
https://docs.openstack.org/nova/victoria/admin/cpu-topologies.html
OpenStack supports virtual CPU topologies and CPU pinning, but they are two different things; that doc covers both. https://docs.openstack.org/nova/victoria/admin/cpu-topologies.html#customizi... is the relevant section for pinning, which is normally enabled via the hw:cpu_policy=dedicated flavor extra spec but can also be enabled using the hw_cpu_policy=dedicated image property.
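(For illustration, the two ways of requesting pinning that Sean mentions; the flavor and image names here are placeholders:)

```
# Request pinning for every instance booted from this flavor
openstack flavor set pinned.large --property hw:cpu_policy=dedicated

# Or request it for every instance booted from this image
openstack image set my-image --property hw_cpu_policy=dedicated
```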
From our experience (and our workloads), pinning to a NUMA node gives the same performance as pinning to individual pCPUs and is therefore good enough.
For the most part yes, but you need to ensure you set hw:mem_page_size to enable NUMA-aware memory tracking; setting hw:numa_nodes alone is not sufficient for correct memory accounting.
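(A sketch combining both properties on a placeholder flavor; hw:mem_page_size=small keeps normal 4K pages while still enabling per-NUMA-node memory accounting:)

```
openstack flavor set pinned.large \
    --property hw:cpu_policy=dedicated \
    --property hw:mem_page_size=small
```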
Hope this helps, cheers, Arne
On 23.03.22 08:35, 韩光宇 wrote:
So:
1. Can I modify the XML of an OpenStack instance with 'virsh edit'?

No, that is not supported. The XMLs are regenerated by nova after many operations, so changes you make directly will not persist, and they can potentially break nova, so it's unsupported.

2. If I can't modify the XML, how can I bind the vCPUs of a guest to the pCPUs of the host?

As noted above, we support requesting CPU pinning using flavor extra specs and image properties. Depending on the release you are using, there are some other features, such as the ability to have some pinned and some unpinned cores in the same VM.
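(For completeness, the compute host also has to expose pCPUs for pinning before the flavor extra spec takes effect. A minimal nova.conf sketch for the host side, assuming Victoria's [compute] options and the pCPU range from the original post; the shared range is illustrative:)

```
[compute]
# pCPUs reservable by instances with hw:cpu_policy=dedicated
cpu_dedicated_set = 104-111
# pCPUs used for unpinned instance vCPUs (illustrative range)
cpu_shared_set = 0-103
```

The nova-compute service needs a restart to pick up this change.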
One thing to keep in mind is that we do not support mixing VMs with a NUMA topology and VMs without a NUMA topology on the same host. That means you have to enable NUMA-aware memory tracking for all VMs on a host if you are using CPU pinning. In practice that generally means grouping hosts for NUMA and non-NUMA workloads using host aggregates and configuring scheduler filters to isolate them, e.g. as sketched below.
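(A sketch of the aggregate-based grouping Sean describes, with hypothetical aggregate, host, and flavor names, and assuming AggregateInstanceExtraSpecsFilter is enabled in nova's scheduler configuration:)

```
# Create an aggregate for hosts dedicated to pinned (NUMA) workloads
openstack aggregate create pinned-hosts
openstack aggregate set --property pinned=true pinned-hosts
openstack aggregate add host pinned-hosts compute-01

# Tie the pinned flavor to that aggregate so the scheduler keeps
# pinned and unpinned workloads on separate hosts
openstack flavor set pinned.large \
    --property aggregate_instance_extra_specs:pinned=true
```

The non-NUMA hosts would get a matching aggregate with pinned=false, and unpinned flavors the corresponding extra spec, so each workload type lands only on its own group of hosts.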
participants (3)
- Arne Wiebalck
- Sean Mooney
- 韩光宇