[Openstack] Poor instance network performance
Satish Patel
satish.txt at gmail.com
Thu Dec 17 19:48:38 UTC 2015
Regarding your email response: I do have the vhost_net driver loaded:
[root@compute-1 ~]# lsmod | grep vhost
vhost_net 33961 1
macvtap 22398 1 vhost_net
tun 27183 3 vhost_net
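
As a quick sanity check that a given guest is actually using vhost, you
can also look for the vhost worker thread on the compute node; vhost-net
spawns a kernel thread named after the owning qemu process's PID, so
something like the following should show a [vhost-<qemu-pid>] entry for
each vhost-backed guest NIC:

[root@compute-1 ~]# ps -ef | grep '\[vhost-'
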
On Thu, Dec 17, 2015 at 2:47 PM, Satish Patel <satish.txt at gmail.com> wrote:
> Thanks for the response.
>
> Yes, if I ping that VM from a remote machine, the pings time out at
> intervals.
>
> My application is CPU intensive and does almost no disk I/O; it is a
> VoIP application, so the traffic is pure RTP flowing over the network
> as small UDP packets.
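>
> (To put a number on the loss, I could run something like the following
> from the remote machine -- <vm-ip> here is just a placeholder -- and
> read the "packet loss" figure off the summary line:
>
> ping -i 0.2 -c 500 <vm-ip>
> )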
>
> On Thu, Dec 17, 2015 at 1:20 PM, Rick Jones <rick.jones2 at hpe.com> wrote:
>> On 12/17/2015 08:11 AM, Satish Patel wrote:
>>> The following is top command output on the guest machine at the point
>>> where I am getting ping breaks:
>>
>> Just to clarify: when you say "getting ping breaks," you mean that is
>> when you start seeing pings go unanswered, yes?
>>
>>> top - 16:10:30 up 20:46, 1 user, load average: 4.63, 4.41, 3.60
>>> Tasks: 165 total, 6 running, 159 sleeping, 0 stopped, 0 zombie
>>> %Cpu0 : 15.1 us, 12.3 sy, 0.0 ni, 60.9 id, 0.0 wa, 0.0 hi, 0.3 si, 11.4 st
>>> %Cpu1 : 22.9 us, 17.2 sy, 0.0 ni, 51.4 id, 0.0 wa, 0.0 hi, 0.3 si, 8.2 st
>>> %Cpu2 : 28.8 us, 22.4 sy, 0.0 ni, 47.5 id, 0.0 wa, 0.0 hi, 1.0 si, 0.3 st
>>> %Cpu3 : 16.6 us, 15.0 sy, 0.0 ni, 66.4 id, 0.0 wa, 0.0 hi, 0.3 si, 1.7 st
>>> %Cpu4 : 9.8 us, 11.8 sy, 0.0 ni, 0.0 id, 75.4 wa, 0.0 hi, 0.3 si, 2.6 st
>>> %Cpu5 : 7.6 us, 6.1 sy, 0.0 ni, 81.4 id, 0.0 wa, 0.0 hi, 4.2 si, 0.8 st
>>> %Cpu6 : 8.1 us, 7.4 sy, 0.0 ni, 83.0 id, 0.0 wa, 0.0 hi, 1.4 si, 0.0 st
>>> %Cpu7 : 17.8 us, 17.8 sy, 0.0 ni, 64.1 id, 0.0 wa, 0.0 hi, 0.3 si, 0.0 st
>>> KiB Mem : 8175332 total, 4630124 free, 653284 used, 2891924 buff/cache
>>> KiB Swap: 0 total, 0 free, 0 used. 7131540 avail Mem
>>
>> 75% wait time on a vCPU in the guest suggests the application(s) on that
>> guest are trying to do a lot of I/O and bottlenecking on it. I am not all
>> that well versed in libvirt/KVM, but if there is just the one I/O thread
>> for the VM, and it is saturated (perhaps waiting on disc), other I/O
>> processing such as network I/O could be held off. In that case either the
>> transmit queue of the interface in the guest fills as it goes to send the
>> ICMP Echo Replies (ping replies), and/or the queue of the instance's tap
>> device (the inbound path) fills as the ICMP Echo Requests arrive.
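>>
>> One way to check whether the tap queue really is overflowing is to look
>> at the drop counters on the instance's tap device on the compute node
>> (the tap name here is just a placeholder):
>>
>> ip -s link show dev tapXXXXXXXX
>>
>> and see whether the RX/TX "dropped" counters climb while the pings are
>> failing.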
>>
>> I would suggest looking further into the apparent I/O bottleneck.
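>>
>> Inside the guest, something like
>>
>> iostat -x 1
>>
>> (from the sysstat package) should show whether a disc is pegged while
>> the pings are being dropped -- keep an eye on the %util and await
>> columns.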
>>
>> Drifting a bit, perhaps...
>>
>> I'm not sure if it would happen automagically, but if the "vhost_net"
>> module isn't loaded into the compute node's kernel, you might consider
>> loading it. From that point on, newly launched instances/VMs on that
>> node will start using it for networking and should get a boost. I
>> cannot say, though, whether that would bypass the VM's I/O thread.
>> Existing instances should pick it up if you "nova reboot" them. (I
>> don't think a reboot initiated from within the instance/VM would do
>> it.)
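>>
>> Loading it, and making it stick across reboots, would be something
>> along these lines (the modules-load.d path assumes a systemd-based
>> distro; adjust for yours):
>>
>> modprobe vhost_net
>> echo vhost_net > /etc/modules-load.d/vhost_net.conf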
>>
>> Whether there is something similar for disc I/O I don't know - I've not had
>> to go looking for that yet.
>>
>> happy benchmarking,
>>
>> rick jones
>> http://www.netperf.org/
>>
>>