[Openstack] Poor instance network performance

Satish Patel satish.txt at gmail.com
Thu Dec 17 17:55:17 UTC 2015


I think I found a solution; it improves performance a little bit.

The default txqueuelen on the tapdd55b834-f8 interface was 500; I have
changed it to 10000.
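
For reference, a minimal sketch of how such a change is typically applied with
iproute2 (the interface name is the tap device from the output below; making the
setting persistent across reboots or instance restarts is not shown):

# raise the transmit queue length on the tap device feeding the guest
ip link set dev tapdd55b834-f8 txqueuelen 10000

# verify the new value
ip link show dev tapdd55b834-f8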

On Thu, Dec 17, 2015 at 12:07 PM, Satish Patel <satish.txt at gmail.com> wrote:
> On the host (compute) machine, the tap device is showing lots of TX packet drops.
>
> tapdd55b834-f8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
>         inet6 fe80::fc16:3eff:fecd:2aa4  prefixlen 64  scopeid 0x20<link>
>         ether fe:16:3e:cd:2a:a4  txqueuelen 500  (Ethernet)
>         RX packets 1060731031  bytes 298573599135 (278.0 GiB)
>         RX errors 0  dropped 0  overruns 0  frame 0
>         TX packets 503372157  bytes 81345389820 (75.7 GiB)
>         TX errors 0  dropped 171333348 overruns 0  carrier 0  collisions 0
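>
> For reference: the TX direction of a tap device is traffic headed into the guest,
> so TX drops here usually mean the guest/vhost side is not draining the queue fast
> enough, which is consistent with the larger txqueuelen above helping. A quick
> sketch for watching these counters (same interface name as above):
>
> ip -s link show dev tapdd55b834-f8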
>
> On Thu, Dec 17, 2015 at 11:45 AM, Satish Patel <satish.txt at gmail.com> wrote:
>> I just ran iperf; here are the results (bidirectional):
>>
>> [ ID] Interval       Transfer     Bandwidth
>> [  5]  0.0-10.0 sec  1.04 GBytes   895 Mbits/sec
>> [  4]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec
>>
>> It looks like when the guest is under load, it can't handle the traffic and
>> starts dropping packets.
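>>
>> For reference, two stream IDs like this are what a classic iperf2 dual test
>> prints; a minimal sketch of how such a run is typically invoked (the server
>> address is a placeholder):
>>
>> # on the far end (e.g. the compute host or another instance)
>> iperf -s
>>
>> # on the guest: 10-second bidirectional test
>> iperf -c <server-ip> -d -t 10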
>>
>>
>>
>> On Thu, Dec 17, 2015 at 11:36 AM, Vahric Muhtaryan <vahric at doruk.net.tr> wrote:
>>> Maybe this could also be a reference:
>>>
>>> http://42.62.73.30/wordpress/wp-content/uploads/2013/10/Neutron-performance-testing.pdf
>>>
>>>
>>> On 17/12/15 18:32, "Satish Patel" <satish.txt at gmail.com> wrote:
>>>
>>>>We have our own custom application running on the guest VM. I am just
>>>>checking the performance of the VM, but I found that if the load goes up a
>>>>little, the machine starts dropping pings.
>>>>
>>>>I haven't run an iperf test yet; I am going to run one in a minute.
>>>>
>>>>On Thu, Dec 17, 2015 at 11:28 AM, Vahric Muhtaryan <vahric at doruk.net.tr>
>>>>wrote:
>>>>> Okay Satish,
>>>>>
>>>>> I'm not an expert and do not want to steer you in the wrong direction.
>>>>> I would like to reproduce your test on my server to cross-check; can you
>>>>>please explain how, with what tool, and against what target you are running
>>>>>this test?
>>>>>
>>>>> Regards
>>>>> VM
>>>>>
>>>>> On 17/12/15 18:11, "Satish Patel" <satish.txt at gmail.com> wrote:
>>>>>
>>>>>>Thanks for the reply.
>>>>>>
>>>>>>Following is my guest XML (the interface definition).
>>>>>>
>>>>>>I am using OpenStack Juno, and it uses OVS.
>>>>>>
>>>>>><interface type='bridge'>
>>>>>>      <mac address='fa:16:3e:cd:2a:a4'/>
>>>>>>      <source bridge='qbrdd55b834-f8'/>
>>>>>>      <target dev='tapdd55b834-f8'/>
>>>>>>      <model type='virtio'/>
>>>>>>      <alias name='net0'/>
>>>>>>      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
>>>>>>    </interface>
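>>>>>>
>>>>>>For what it's worth, a quick way to double-check from inside the guest that
>>>>>>the virtio driver is actually in use (assuming eth0 is the virtio NIC):
>>>>>>
>>>>>>ethtool -i eth0   # should report driver: virtio_net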
>>>>>>
>>>>>>
>>>>>>Following is the top output on the guest machine; at this point I am
>>>>>>getting ping drops:
>>>>>>
>>>>>>top - 16:10:30 up 20:46,  1 user,  load average: 4.63, 4.41, 3.60
>>>>>>Tasks: 165 total,   6 running, 159 sleeping,   0 stopped,   0 zombie
>>>>>>%Cpu0  : 15.1 us, 12.3 sy,  0.0 ni, 60.9 id,  0.0 wa,  0.0 hi,  0.3 si, 11.4 st
>>>>>>%Cpu1  : 22.9 us, 17.2 sy,  0.0 ni, 51.4 id,  0.0 wa,  0.0 hi,  0.3 si,  8.2 st
>>>>>>%Cpu2  : 28.8 us, 22.4 sy,  0.0 ni, 47.5 id,  0.0 wa,  0.0 hi,  1.0 si,  0.3 st
>>>>>>%Cpu3  : 16.6 us, 15.0 sy,  0.0 ni, 66.4 id,  0.0 wa,  0.0 hi,  0.3 si,  1.7 st
>>>>>>%Cpu4  :  9.8 us, 11.8 sy,  0.0 ni,  0.0 id, 75.4 wa,  0.0 hi,  0.3 si,  2.6 st
>>>>>>%Cpu5  :  7.6 us,  6.1 sy,  0.0 ni, 81.4 id,  0.0 wa,  0.0 hi,  4.2 si,  0.8 st
>>>>>>%Cpu6  :  8.1 us,  7.4 sy,  0.0 ni, 83.0 id,  0.0 wa,  0.0 hi,  1.4 si,  0.0 st
>>>>>>%Cpu7  : 17.8 us, 17.8 sy,  0.0 ni, 64.1 id,  0.0 wa,  0.0 hi,  0.3 si,  0.0 st
>>>>>>KiB Mem :  8175332 total,  4630124 free,   653284 used,  2891924 buff/cache
>>>>>>KiB Swap:        0 total,        0 free,        0 used.  7131540 avail Mem
>>>>>>
>>>>>>
>>>>>>On Thu, Dec 17, 2015 at 11:06 AM, Vahric Muhtaryan <vahric at doruk.net.tr>
>>>>>>wrote:
>>>>>>> Hello ,
>>>>>>>
>>>>>>> Cool! Today I am testing too, but not in this setup, just plain KVM.
>>>>>>>
>>>>>>> I am wondering about two or three things:
>>>>>>>
>>>>>>> 1) Are you using virtio?
>>>>>>> <model type='virtio'/>
>>>>>>>
>>>>>>>
>>>>>>> 2) Is your VM saturating 100% of the CPU inside the guest?
>>>>>>>
>>>>>>> 3) Are you using OVS or a Linux bridge?
>>>>>>>
>>>>>>> Regards
>>>>>>> Vahric Muhtaryan
>>>>>>>
>>>>>>> On 17/12/15 17:41, "Satish Patel" <satish.txt at gmail.com> wrote:
>>>>>>>
>>>>>>>>I am doing some testing on OpenStack VM network performance. Here is
>>>>>>>>what I am doing:
>>>>>>>>
>>>>>>>>Compute node: 8 CPU / 32 GB mem
>>>>>>>>VM instance: 8 vCPU / 8 GB mem
>>>>>>>>
>>>>>>>>When I run a heavy-load application, pings to the VM immediately start
>>>>>>>>dropping, but at the same time if I ping the compute host, it works
>>>>>>>>fine without any packet loss.
>>>>>>>>
>>>>>>>>I have disabled the GSO/TSO settings on eth0, but the result is the same.
>>>>>>>>I have noticed that when traffic reaches 250 Mbps my pings start dropping.
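>>>>>>>>
>>>>>>>>For reference, a minimal sketch of how GSO/TSO is typically disabled and
>>>>>>>>verified with ethtool (assuming eth0 is the virtio NIC inside the guest):
>>>>>>>>
>>>>>>>># turn off generic and TCP segmentation offload
>>>>>>>>ethtool -K eth0 gso off tso off
>>>>>>>>
>>>>>>>># confirm the current offload state
>>>>>>>>ethtool -k eth0 | grep segmentation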
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>
>>>>>
>>>
>>>



