<p dir="ltr">Have you tried disabling offloading on the network cards?</p>
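For anyone trying that suggestion, the usual suspects are GRO/GSO/TSO; a minimal sketch with ethtool (the interface name eth1 is an assumption — substitute the physical NIC carrying the tenant traffic; the commands are printed rather than executed, since the real ones need root on the host):

```shell
# Sketch: commands to inspect and disable NIC offloads with ethtool.
# IFACE=eth1 is an assumption; the commands are only printed, so this is safe to run.
IFACE=${IFACE:-eth1}
echo "ethtool -k $IFACE                            # show current offload settings"
echo "ethtool -K $IFACE gro off gso off tso off    # disable GRO/GSO/TSO"
```

Receive-side GRO is the setting most often reported as interacting badly with GRE tunnels on these kernels, so it may be worth toggling that one first and re-testing.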
<div class="gmail_quote">On 15/12/2014 18:21, "André Aranha" <<a href="mailto:andre.f.aranha@gmail.com">andre.f.aranha@gmail.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><span style="color:rgb(38,38,38);font-size:13px;line-height:16px">Our kernel version on the controller is 3.13.0-37-generic, on the ComputeNode it is 3.13.0-24-generic and on the NetworkNode it is 3.13.0-35-generic.</span><br></div><div class="gmail_extra"><br><div class="gmail_quote">On 13 December 2014 at 04:39, Min Pae <span dir="ltr"><<a href="mailto:sputnik13@gmail.com" target="_blank">sputnik13@gmail.com</a>></span> wrote:<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">What kernel version are you running on the host?<br>
<div><div><br>
On Fri, Dec 12, 2014 at 12:09 PM, André Aranha <<a href="mailto:andre.f.aranha@gmail.com" target="_blank">andre.f.aranha@gmail.com</a>> wrote:<br>
> Our compute nodes are using vhost_net; we haven't made any changes to the<br>
> buffer size of our NIC.<br>
> The system is not overloaded; CPU usage isn't higher than 30%.<br>
><br>
> On 12 December 2014 at 02:35, mad Engineer <<a href="mailto:themadengin33r@gmail.com" target="_blank">themadengin33r@gmail.com</a>> wrote:<br>
>><br>
>> So it looks like it's not an issue with Open vSwitch; "missed" is quite<br>
>> normal and is not the reason for packet loss.<br>
>> Are your guests using vhost_net? Run:<br>
>> ps aux | grep vhost<br>
>> Also, have you made any changes to the buffer size of your NIC?<br>
>> Is the system overloaded? What is the CPU usage?<br>
>><br>
>> On Thu, Dec 11, 2014 at 6:20 PM, André Aranha <<a href="mailto:andre.f.aranha@gmail.com" target="_blank">andre.f.aranha@gmail.com</a>><br>
>> wrote:<br>
>> > Thanks for the advice. I've run the command on the NetworkNode and on a<br>
>> > ComputeNode; "lost" is 0, but "missed" is a high value.<br>
>> ><br>
>> > NetworkNode<br>
>> > system@ovs-system:<br>
>> > lookups: hit:425667155 missed:2962922 lost:0<br>
>> > flows: 27<br>
>> > port 0: ovs-system (internal)<br>
>> > port 1: br-ex (internal)<br>
>> > port 2: br-tun (internal)<br>
>> > port 3: eth1<br>
>> > port 4: br-int (internal)<br>
>> > port 5: tapbdc3d959-d8 (internal)<br>
>> > port 6: gre_system (gre: df_default=false, ttl=0)<br>
>> > port 7: qr-4063db49-6b (internal)<br>
>> > port 8: qg-e427e527-92 (internal)<br>
>> ><br>
>> ><br>
>> > ComputeNode<br>
>> > system@ovs-system:<br>
>> > lookups: hit:28660666 missed:200922 lost:0<br>
>> > flows: 19<br>
>> > port 0: ovs-system (internal)<br>
>> > port 1: br-int (internal)<br>
>> > port 2: br-tun (internal)<br>
>> > port 3: gre_system (gre: df_default=false, ttl=0)<br>
>> > port 4: em1<br>
>> > port 5: br-private (internal)<br>
>> > port 6: qvo9a959049-a0<br>
>> > port 7: qvodd0ef077-e1<br>
>> > port 8: qvoac2b566b-65<br>
>> > port 9: qvo9e4ab149-5c<br>
>> > port 10: qvoc2d2625c-0c<br>
>> > port 11: qvo3069daeb-4a<br>
>> > port 12: qvo7f82a3cf-0c<br>
>> > port 13: qvo83b77d2d-1a<br>
>> > port 14: qvobbadd8c2-30<br>
>> > port 15: qvocfd0b8e8-ad<br>
>> > port 16: qvo714fab88-60<br>
>> > port 17: qvob9ddde49-86<br>
>> > port 18: qvo42ef9f3b-ac<br>
>> > port 19: qvof4ae7868-41<br>
>> > port 20: qvoa4408a18-03<br>
>> > port 22: qvo36c64d52-9b<br>
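For what it's worth, "missed" counts flow-table misses that took the slow path (an upcall to ovs-vswitchd), not drops; only "lost" is actually dropped traffic. The NetworkNode counters above work out to well under 1% misses:

```shell
# Miss ratio from the NetworkNode datapath counters quoted above.
# "missed" = slow-path lookups (upcalls), not dropped packets.
awk 'BEGIN { hit = 425667155; missed = 2962922
             printf "miss ratio: %.2f%%\n", 100 * missed / (hit + missed) }'
```

So these numbers on their own don't point at the datapath as the source of the loss.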
>> ><br>
>> > On 11 December 2014 at 06:17, mad Engineer <<a href="mailto:themadengin33r@gmail.com" target="_blank">themadengin33r@gmail.com</a>><br>
>> > wrote:<br>
>> >><br>
>> >> Sorry, it's 2.3.0, not 2.1.3.<br>
>> >><br>
>> >> On Thu, Dec 11, 2014 at 2:43 PM, mad Engineer<br>
>> >> <<a href="mailto:themadengin33r@gmail.com" target="_blank">themadengin33r@gmail.com</a>><br>
>> >> wrote:<br>
>> >> > Not in OpenStack, but I had a performance issue with OVS and bursty<br>
>> >> > traffic; upgrading to a later version improved the performance. A lot<br>
>> >> > of performance features have been added in 2.1.3.<br>
>> >> ><br>
>> >> > Do you have a large "lost:" value in the output of<br>
>> >> > ovs-dpctl show<br>
>> >> ><br>
>> >> ><br>
>> >> > On Thu, Dec 11, 2014 at 2:33 AM, André Aranha<br>
>> >> > <<a href="mailto:andre.f.aranha@gmail.com" target="_blank">andre.f.aranha@gmail.com</a>><br>
>> >> > wrote:<br>
>> >> >> Yes, we are using version 2.0.2.<br>
>> >> >> The process uses only about 0.3% CPU on the network node and compute node.<br>
>> >> >> Did you have the same issue?<br>
>> >> >><br>
>> >> >> On 10 December 2014 at 14:31, mad Engineer<br>
>> >> >> <<a href="mailto:themadengin33r@gmail.com" target="_blank">themadengin33r@gmail.com</a>><br>
>> >> >> wrote:<br>
>> >> >>><br>
>> >> >>> Are you using Open vSwitch? Which version?<br>
>> >> >>> If yes, is it consuming a lot of CPU?<br>
>> >> >>><br>
>> >> >>> On Wed, Dec 10, 2014 at 7:45 PM, André Aranha<br>
>> >> >>> <<a href="mailto:andre.f.aranha@gmail.com" target="_blank">andre.f.aranha@gmail.com</a>><br>
>> >> >>> wrote:<br>
>> >> >>> > Well, here we are using Icehouse with Ubuntu 14.04 LTS.<br>
>> >> >>> ><br>
>> >> >>> > We found this thread in the community and applied the changes on the<br>
>> >> >>> > compute nodes (changed VHOST_NET_ENABLED to 1 in<br>
>> >> >>> > /etc/default/qemu-kvm). After doing this, on a few instances the<br>
>> >> >>> > problem doesn't exist anymore. This link shows an investigation to<br>
>> >> >>> > find the problem.<br>
>> >> >>> ><br>
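For anyone following along, a quick way to check whether vhost_net is actually in effect on a compute node (the /etc/default/qemu-kvm path is specific to Ubuntu's qemu-kvm package; on other machines this sketch just reports it missing):

```shell
# Sketch: verify vhost_net on a compute node. The /etc/default/qemu-kvm
# path is Ubuntu-specific; elsewhere this degrades to an informational message.
if lsmod 2>/dev/null | grep -q '^vhost_net'; then
  echo "vhost_net module loaded"
else
  echo "vhost_net module not loaded (or not a KVM host)"
fi
if [ -f /etc/default/qemu-kvm ]; then
  grep VHOST_NET_ENABLED /etc/default/qemu-kvm
else
  echo "no /etc/default/qemu-kvm on this machine"
fi
```

With vhost_net active you should also see one `vhost-<pid>` kernel thread per running guest in `ps aux | grep vhost`.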
>> >> >>> > About the MTU in our cloud (using iperf),<br>
>> >> >>> ><br>
>> >> >>> > 1-from any the Desktop to the Network Node<br>
>> >> >>> > MSS size 1448 bytes (MTU 1500 bytes, ethernet)<br>
>> >> >>> ><br>
>> >> >>> > 2-from any Desktop to the instance<br>
>> >> >>> > MSS size 1348 bytes (MTU 1388 bytes, unknown interface)<br>
>> >> >>> ><br>
>> >> >>> > 3- from any instance to the Network Node<br>
>> >> >>> > MSS size 1348 bytes (MTU 1388 bytes, unknown interface)<br>
>> >> >>> ><br>
>> >> >>> > 4- from any instance to the Desktop<br>
>> >> >>> > MSS size 1348 bytes (MTU 1388 bytes, unknown interface)<br>
>> >> >>> ><br>
>> >> >>> > 5-from Network Node to any ComputeNode<br>
>> >> >>> > MSS size 1448 bytes (MTU 1500 bytes, ethernet)<br>
>> >> >>> ><br>
>> >> >>> > 6-from any ComputeNode to NetworkNode<br>
>> >> >>> > MSS size 1448 bytes (MTU 1500 bytes, ethernet)<br>
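Those numbers are internally consistent: for unknown interfaces iperf derives the MTU it reports as MSS + 40 bytes (IPv4 + TCP headers), and the 1448-byte MSS on the ethernet paths is 1500 minus those 40 bytes plus 12 bytes of TCP timestamp options:

```shell
# MSS arithmetic behind the figures above:
#   unknown-interface paths: MSS = MTU - 40 (20 IPv4 + 20 TCP)
#   ethernet paths:          MSS = 1500 - 40 - 12 (TCP timestamp option)
echo "unknown:  MSS $((1388 - 40))"      # matches the measured 1348
echo "ethernet: MSS $((1500 - 40 - 12))" # matches the measured 1448
```

The 112-byte gap between 1500 and the instances' 1388 presumably reflects the headroom this deployment budgets for GRE encapsulation on the tenant network, so the MTUs themselves look intentional rather than broken.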
>> >> >>> ><br>
>> >> >>> > On 10 December 2014 at 10:31, somshekar kadam<br>
>> >> >>> > <<a href="mailto:som_kadam@yahoo.co.in" target="_blank">som_kadam@yahoo.co.in</a>><br>
>> >> >>> > wrote:<br>
>> >> >>> >><br>
>> >> >>> >> Sorry for posting this to the wrong mail chain.<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> Regards<br>
>> >> >>> >> Neelu<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> On Wednesday, 10 December 2014 6:59 PM, somshekar kadam<br>
>> >> >>> >> <<a href="mailto:som_kadam@yahoo.co.in" target="_blank">som_kadam@yahoo.co.in</a>> wrote:<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> Hi All,<br>
>> >> >>> >><br>
>> >> >>> >> Please recommend which stable host OS to use for the controller<br>
>> >> >>> >> and compute nodes.<br>
>> >> >>> >> I have tried Fedora 20; it seems a lot of tweaking is required,<br>
>> >> >>> >> correct me if I am wrong.<br>
>> >> >>> >> I see that most of it is tested on Ubuntu and CentOS.<br>
>> >> >>> >> I am planning to use the Juno stable version.<br>
>> >> >>> >> Please help on this.<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> Regards<br>
>> >> >>> >> Neelu<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> On Wednesday, 10 December 2014 5:42 PM, Hannah Fordham<br>
>> >> >>> >> <<a href="mailto:hfordham@radiantworlds.com" target="_blank">hfordham@radiantworlds.com</a>> wrote:<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> I'm afraid we didn't; we're still struggling with this problem<br>
>> >> >>> >> on some VMs. Sorry!<br>
>> >> >>> >><br>
>> >> >>> >> On 9 December 2014 14:09:32 GMT+00:00, "André Aranha"<br>
>> >> >>> >> <<a href="mailto:andre.f.aranha@gmail.com" target="_blank">andre.f.aranha@gmail.com</a>> wrote:<br>
>> >> >>> >><br>
>> >> >>> >> Hi,<br>
>> >> >>> >><br>
>> >> >>> >> We are having the same issue here, and have already tried some<br>
>> >> >>> >> solutions that didn't work at all. Did you solve this problem?<br>
>> >> >>> >><br>
>> >> >>> >> Thank you,<br>
>> >> >>> >> Andre Aranha<br>
>> >> >>> >><br>
>> >> >>> >> On 27 August 2014 at 08:17, Hannah Fordham<br>
>> >> >>> >> <<a href="mailto:hfordham@radiantworlds.com" target="_blank">hfordham@radiantworlds.com</a>><br>
>> >> >>> >> wrote:<br>
>> >> >>> >><br>
>> >> >>> >> I’ve been trying to figure this one out for a while, so I’ll try<br>
>> >> >>> >> to be as thorough as possible in this post, but apologies if I<br>
>> >> >>> >> miss out anything pertinent.<br>
>> >> >>> >><br>
>> >> >>> >> First off, I’m running a setup with one control node and 5<br>
>> >> >>> >> compute nodes, all created using the Stackgeek scripts -<br>
>> >> >>> >> <a href="http://www.stackgeek.com/guides/gettingstarted.html" target="_blank">http://www.stackgeek.com/guides/gettingstarted.html</a>. The first<br>
>> >> >>> >> two (compute1 and compute2) were created at the same time;<br>
>> >> >>> >> compute3, 4 and 5 were added as needed later. My VMs are<br>
>> >> >>> >> predominantly CentOS, while my OpenStack nodes are Ubuntu 14.04.1.<br>
>> >> >>> >><br>
>> >> >>> >> The symptom: irregular high latency/packet loss to VMs on all<br>
>> >> >>> >> compute boxes except compute3. It’s mostly a pain when working<br>
>> >> >>> >> via ssh on a VM, because the lag makes it difficult to do<br>
>> >> >>> >> anything, but it shows itself quite nicely through pings as well:<br>
>> >> >>> >> --- 10.0.102.47 ping statistics ---<br>
>> >> >>> >> 111 packets transmitted, 103 received, 7% packet loss, time 110024ms<br>
>> >> >>> >> rtt min/avg/max/mdev = 0.096/367.220/5593.100/1146.920 ms, pipe 6<br>
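As a cross-check, the loss figure in that summary follows directly from the transmitted/received counts (ping truncates to a whole percent):

```shell
# Packet loss from the ping summary quoted above: (111 - 103) / 111.
awk 'BEGIN { tx = 111; rx = 103
             printf "%d%% packet loss\n", int(100 * (tx - rx) / tx) }'
```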
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> I have tested these pings:<br>
>> >> >>> >> VM to itself (via its external IP) seems fine<br>
>> >> >>> >> VM to another VM is not fine<br>
>> >> >>> >> Hosting compute node to VM is not fine<br>
>> >> >>> >> My PC to VM is not fine (however the other way round works fine)<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> Top on a (32 core) compute node with laggy VMs:<br>
>> >> >>> >> top - 12:09:20 up 33 days, 21:35, 1 user, load average: 2.37, 4.95, 6.23<br>
>> >> >>> >> Tasks: 431 total, 2 running, 429 sleeping, 0 stopped, 0 zombie<br>
>> >> >>> >> %Cpu(s): 0.6 us, 3.4 sy, 0.0 ni, 96.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st<br>
>> >> >>> >> KiB Mem: 65928256 total, 44210348 used, 21717908 free, 341172 buffers<br>
>> >> >>> >> KiB Swap: 7812092 total, 1887864 used, 5924228 free. 7134740 cached Mem<br>
>> >> >>> >> Mem<br>
>> >> >>> >><br>
>> >> >>> >> And for comparison, on the one compute node that doesn’t seem to<br>
>> >> >>> >> be<br>
>> >> >>> >> suffering from this:<br>
>> >> >>> >> top - 12:12:20 up 33 days, 21:38, 1 user, load average: 0.28, 0.18, 0.15<br>
>> >> >>> >> Tasks: 399 total, 3 running, 396 sleeping, 0 stopped, 0 zombie<br>
>> >> >>> >> %Cpu(s): 0.3 us, 0.1 sy, 0.0 ni, 98.9 id, 0.6 wa, 0.0 hi, 0.0 si, 0.0 st<br>
>> >> >>> >> KiB Mem: 65928256 total, 49986064 used, 15942192 free, 335788 buffers<br>
>> >> >>> >> KiB Swap: 7812092 total, 919392 used, 6892700 free. 39272312 cached Mem<br>
>> >> >>> >><br>
>> >> >>> >> Top on a laggy VM:<br>
>> >> >>> >> top - 11:02:53 up 27 days, 33 min, 3 users, load average: 0.00, 0.00, 0.00<br>
>> >> >>> >> Tasks: 91 total, 1 running, 90 sleeping, 0 stopped, 0 zombie<br>
>> >> >>> >> Cpu(s): 0.2%us, 0.1%sy, 0.0%ni, 99.5%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st<br>
>> >> >>> >> Mem: 1020400k total, 881004k used, 139396k free, 162632k buffers<br>
>> >> >>> >> Swap: 1835000k total, 14984k used, 1820016k free, 220644k cached<br>
>> >> >>> >><br>
>> >> >>> >> <a href="http://imgur.com/blULjDa" target="_blank">http://imgur.com/blULjDa</a> shows the hypervisor panel of Horizon.<br>
>> >> >>> >> As<br>
>> >> >>> >> you<br>
>> >> >>> >> can<br>
>> >> >>> >> see, Compute 3 has fewer resources used, but none of the compute<br>
>> >> >>> >> nodes<br>
>> >> >>> >> should be anywhere near overloaded from what I can tell.<br>
>> >> >>> >><br>
>> >> >>> >> Any ideas? Let me know if I’m missing anything obvious that<br>
>> >> >>> >> would<br>
>> >> >>> >> help<br>
>> >> >>> >> with figuring this out!<br>
>> >> >>> >><br>
>> >> >>> >> Hannah<br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >> ***********<br>
>> >> >>> >><br>
>> >> >>> >> Radiant Worlds Limited is registered in England (company no:<br>
>> >> >>> >> 07822337).<br>
>> >> >>> >> This message is intended solely for the addressee and may<br>
>> >> >>> >> contain<br>
>> >> >>> >> confidential information. If you have received this message in<br>
>> >> >>> >> error<br>
>> >> >>> >> please<br>
>> >> >>> >> send it back to us and immediately and permanently delete it<br>
>> >> >>> >> from<br>
>> >> >>> >> your<br>
>> >> >>> >> system. Do not use, copy or disclose the information contained<br>
>> >> >>> >> in<br>
>> >> >>> >> this<br>
>> >> >>> >> message or in any attachment. Please also note that transmission<br>
>> >> >>> >> cannot<br>
>> >> >>> >> be<br>
>> >> >>> >> guaranteed to be secure or error-free.<br>
>> >> >>> >><br>
>> >> >>> >> _______________________________________________<br>
>> >> >>> >> Mailing list:<br>
>> >> >>> >> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
>> >> >>> >> Post to : <a href="mailto:openstack@lists.openstack.org" target="_blank">openstack@lists.openstack.org</a><br>
>> >> >>> >> Unsubscribe :<br>
>> >> >>> >> <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> >><br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >>> ><br>
>> >> >><br>
>> >> >><br>
>> ><br>
>> ><br>
><br>
><br>
><br>
</div></div></blockquote></div></div>
<br>_______________________________________________<br>
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
Post to : <a href="mailto:openstack@lists.openstack.org">openstack@lists.openstack.org</a><br>
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack</a><br>
<br></blockquote></div>