[Openstack] URGENT: packet loss on openstack instance
Satish Patel
satish.txt at gmail.com
Sun Sep 16 13:18:39 UTC 2018
Hi Liping,
Thank you for your reply.
We notice packet drops during high load. I did try increasing the txqueuelen and it didn't help, so I am going to try multiqueue.
For SR-IOV I have to check whether my NIC supports it.
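Something like this should show whether the NIC exposes VFs (assuming the
interface is eno1; the sysfs path depends on the driver):

    lspci | grep -i ethernet
    cat /sys/class/net/eno1/device/sriov_totalvfs

If sriov_totalvfs is missing or 0, the card has no SR-IOV support.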
We are using Queens, so I think the queue size option is not possible :(
We are running a VoIP application and the traffic is UDP, so our pps rate is 60k to 80k per VM instance.
I will share my results as soon as I try multiqueue.
Sent from my iPhone
> On Sep 16, 2018, at 2:27 AM, Liping Mao (limao) <limao at cisco.com> wrote:
>
> Hi Satish,
>
>
>
> Did your packet loss happen all the time, or only under heavy load?
>
> AFAIK, if you do not tune anything, a VM's tap device can process about
> 50 kpps before it starts to drop packets.
>
>
>
> If it only happens under heavy load, here are a couple of things you can try:
>
> 1) Increase the tap queue length; the default value is usually 500, and
> you can try a larger one. (It seems like you already tried this.)
>
> 2) Try the virtio multiqueue feature, see [1]. By default virtio gives
> the VM a single rx/tx queue pair; with this feature you get more queues.
> You can check the queue count inside the guest (see the sketch after
> this list).
>
> 3) In the Rocky release you can use [2] to increase the virtio queue
> size. The default queue size is 256/512; you can increase it to 1024,
> which helps raise the pps the tap device can handle (also covered in the
> sketch after this list).
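>
> A rough sketch of 2) and 3) (untested in my env; "centos7", eth0 and the
> queue counts are placeholders):
>
> # 2) enable virtio multiqueue via image metadata; new instances booted
> #    from the image get one queue per vCPU, then bring the extra queues
> #    up inside the guest:
> openstack image set --property hw_vif_multiqueue_enabled=true centos7
> ethtool -L eth0 combined 4
>
> # 3) Rocky and later: bigger virtio rings in nova.conf on the computes
> #    (note tx_queue_size may only take effect for vhost-user interfaces):
> [libvirt]
> rx_queue_size = 1024
> tx_queue_size = 1024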
>
>
>
> If none of these get you to the network performance you need, you may
> have to move to DPDK / SR-IOV to get more VM performance.
>
> I have not actually used them in our environment; you may refer to [3].
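>
> If you do end up going the SR-IOV route, once the VFs and the
> sriovnicswitch mechanism driver are configured per [3], attaching a VM
> looks roughly like this (network/flavor/image names are placeholders):
>
> port_id=$(openstack port create --network sriov-net --vnic-type direct \
>     sriov-port -f value -c id)
> openstack server create --flavor m1.voip --image centos7 \
>     --nic port-id=$port_id vm-with-sriov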
>
>
>
> [1] https://specs.openstack.org/openstack/nova-specs/specs/liberty/implemented/libvirt-virtiomq.html
>
> [2] https://specs.openstack.org/openstack/nova-specs/specs/rocky/implemented/libvirt-virtio-set-queue-sizes.html
>
> [3] https://docs.openstack.org/ocata/networking-guide/config-sriov.html
>
>
>
> Regards,
>
> Liping Mao
>
>
>
> On 2018/9/16 13:07, "Satish Patel" <satish.txt at gmail.com> wrote:
>
>
>
> [root at compute-33 ~]# ifconfig tap5af7f525-5f | grep -i drop
> RX errors 0  dropped 0  overruns 0  frame 0
> TX errors 0  dropped 2528788837  overruns 0  carrier 0  collisions 0
>
>
>
> I noticed the tap interface dropping TX packets, and even after
> increasing the txqueuelen from 1000 to 10000 nothing changed; I am still
> getting packet drops.
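>
> (The usual way to bump it is something like
>     ip link set dev tap5af7f525-5f txqueuelen 10000
> and then watch whether the TX dropped counter keeps climbing.)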
>
>
>
>> On Sat, Sep 15, 2018 at 4:22 PM Satish Patel <satish.txt at gmail.com> wrote:
>>
>>
>
>> Folks,
>
>>
>
>> I need some advice or suggestions to figure out what is going on with my
>> network. We have noticed high packet loss on an OpenStack instance and
>> are not sure why; at the same time, the compute host shows zero packet
>> loss. This is the test I ran:
>
>>
>
>> ping 8.8.8.8
>>
>> from instance: 50% packet loss
>> from compute host: 0% packet loss
>
>>
>
>> I have disabled the TSO/GSO/SG offloads on the physical compute node but
>> am still getting packet loss.
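>>
>> (For reference, disabling those offloads is typically done with
>> something like
>>     ethtool -K <nic> tso off gso off sg off
>> where <nic> is the 10G uplink interface.)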
>
>>
>
>> We have 10G NICs on our network; it looks like something related to the
>> tap interface settings.
>
>
>
> _______________________________________________
>
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
> Post to : openstack at lists.openstack.org
>
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>
>
>
>