<div dir="ltr"><div>Hi deal stackers,</div><div><br></div>I am running a devstack with two nodes:<div>one is controller (no nova-compute running) and other is compute.</div><div><br></div><div>I am using neutron ml2 plugin and ovs agent with GRE tunnel.</div>
I started a VM and ran an iperf test:

1. Start iperf in the server role inside the VM, which has the floating
   IP 192.168.10.10:

       iperf -s

2. Start iperf in the client role on the controller node, testing the VM
   via its floating IP:

       iperf -c 192.168.10.10 -t 120

While the test was running, I ran top on the compute node and saw that
the vhost-{pid} kernel thread was taking far too much CPU (more than 75%):
Tasks: 230 total,   2 running, 228 sleeping,   0 stopped,   0 zombie
%Cpu0 : 23.7 us,  9.1 sy,  0.0 ni, 66.9 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
%Cpu1 : 10.2 us,  2.7 sy,  0.0 ni, 86.3 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
%Cpu2 :  0.0 us, 19.8 sy,  0.0 ni,  5.2 id,  0.0 wa,  2.2 hi, 72.8 si,  0.0 st
%Cpu3 : 12.6 us,  4.2 sy,  0.0 ni, 83.2 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:  12264832 total,  1846428 used, 10418404 free,    43692 buffers
KiB Swap:        0 total,        0 used,        0 free,   581572 cached

  PID USER PR NI VIRT RES SHR S %CPU %MEM   TIME+ COMMAND
 4073 root 20  0    0   0   0 R 79.8  0.0 3:26.97 vhost-4070
gongysh@gongysh-p6535cn:~$ ps -ef | grep 4070
119 4070 1 31 19:24 ? 00:04:13 qemu-system-x86_64 -machine accel=kvm:tcg -name instance-00000002 -S -M pc-i440fx-1.4 -m 2048 -smp 1,sockets=1,cores=1,threads=1 -uuid 5cbfb914-d16c-4b51-a057-50f2da827830 -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=2014.1,serial=94b43180-e901-1015-b061-90cecbca80a3,uuid=5cbfb914-d16c-4b51-a057-50f2da827830 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000002.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-hpet -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/disk.config,if=none,id=drive-ide0-1-1,readonly=on,format=raw,cache=none -device ide-cd,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ce:da:ae,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/data/nova/instances/5cbfb914-d16c-4b51-a057-50f2da827830/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -vnc 0.0.0.0:0 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
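Note the "-netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24" argument:
vhost-net is in use for the instance's virtio NIC, and the vhost-{pid}
suffix is simply the PID of the owning qemu process. So the busy kernel
threads can be mapped back to instances with something like this quick
sketch:

    # list each vhost kernel thread together with the instance name of
    # the qemu process that owns it
    for pid in $(ps -eo comm= | sed -n 's/^vhost-//p' | sort -u); do
        echo "qemu $pid -> $(ps -p $pid -o args= | grep -o 'instance-[0-9a-f]*' | head -1)"
    done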
This looks like a big performance problem: imagine a host running many
VMs, all pushing network traffic at full speed; almost all of the CPU
would be consumed by the vhost-{pid} threads.
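One thing I plan to check (just a guess on my part) is whether
segmentation offloads are still in effect on the GRE path, since
encapsulation can defeat TSO/GSO and push per-packet work into the
vhost thread:

    # inspect offload state on the VM's tap device and on the tunnel NIC
    # (the device names below are placeholders for my setup)
    ethtool -k <tap-device> | grep -i segmentation
    ethtool -k eth0 | grep -i segmentation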
Any ideas?

Regards,
Yong Sheng Gong