[openstack-dev] vhost-{pid} takes up cpu

Nick Ma skywalker.nick at gmail.com
Fri Feb 14 13:25:52 UTC 2014


You can run the command "taskset -pc {pid}" for both the kvm guest
process and its vhost-{pid} thread. If their CPU affinities are not
identical, you can change the affinity so that the two share the same
NUMA node and cache.
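
For example, a rough sketch (pids 4070/4073 are taken from your ps/top
output below; the cpu list 2-3 is only an illustration, pick cores on
the same NUMA node as the guest's memory):

  # find the vhost kernel thread that belongs to the qemu process
  ps -ef | grep vhost-4070

  # show the current affinity of the guest and of its vhost thread
  taskset -pc 4070
  taskset -pc 4073

  # pin both to the same cores so they share a NUMA node / cache
  taskset -pc 2-3 4070
  taskset -pc 2-3 4073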

Not sure it will solve the problem.

-- 

cheers,
Li Ma


On 2/14/2014 7:42 PM, Yongsheng Gong wrote:
> Hi dear stackers,
>
> I am running devstack with two nodes:
> one is the controller (no nova-compute running) and the other is a compute node.
>
> I am using the neutron ML2 plugin and the OVS agent with GRE tunnels.
>
> I started a VM and tried to run iperf testing:
> 1. start iperf in server mode in the VM, which has the floating ip
> 192.168.10.10:
> iperf -s
>
> 2. start iperf in client mode on the controller node to test the VM via
> its floating ip:
> iperf -c 192.168.10.10 -t 120
>
> At the same time I ran top on the compute node and found that the
> vhost-{pid} thread was taking a lot of cpu (more than 75%):
>
> Tasks: 230 total,   2 running, 228 sleeping,   0 stopped,   0 zombie
> %Cpu0  : 23.7 us,  9.1 sy,  0.0 ni, 66.9 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
> %Cpu1  : 10.2 us,  2.7 sy,  0.0 ni, 86.3 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
> %Cpu2  :  0.0 us, 19.8 sy,  0.0 ni,  5.2 id,  0.0 wa,  2.2 hi, 72.8 si,  0.0 st
> %Cpu3  : 12.6 us,  4.2 sy,  0.0 ni, 83.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
> KiB Mem:  12264832 total,  1846428 used, 10418404 free,    43692 buffers
> KiB Swap:        0 total,        0 used,        0 free,   581572 cached
>
>   PID USER      PR  NI  VIRT  RES  SHR S  %CPU %MEM    TIME+  COMMAND
>  4073 root      20   0     0    0    0 R  79.8  0.0   3:26.97 vhost-4070
>
>
> gongysh at gongysh-p6535cn:~$ ps -ef | grep 4070
> 119       4070     1 31 19:24 ?        00:04:13 qemu-system-x86_64
> -machine accel=kvm:tcg -name instance-00000002 -S -M pc-i440fx-1.4 -m
> 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid
> 5cbfb914-d16c-4b51-a057-50f2da827830 -smbios
> type=1,manufacturer=OpenStack Foundation,product=OpenStack
> Nova,version=2014.1,serial=94b43180-e901-1015-b061-90cecbca80a3,uuid=5cbfb914-d16c-4b51-a057-50f2da827830
> -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000002.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=utc,driftfix=slew -no-hpet -no-shutdown -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -drive
> file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none
> -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1
> -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:da:ae,bus=pci.0,addr=0x3
> -chardev
> file,id=charserial0,path=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/console.log
> -device isa-serial,chardev=charserial0,id=serial0 -chardev
> pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1
> -vnc 0.0.0.0:0 -k en-us -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>
>
> This is a big performance problem: imagine that almost all of the cpu
> resources would be consumed by the vhost-{pid} threads if we had many VMs
> all running network traffic at full speed.
>
>
> any ideas?
>
> regards,
> yong sheng gong
>
>
>



