<div dir="ltr">Hi Rick,<div class="gmail_extra"><br><div class="gmail_quote">On 25 October 2013 13:44, Rick Jones <span dir="ltr"><<a href="mailto:rick.jones2@hp.com" target="_blank">rick.jones2@hp.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div>On 10/25/2013 08:19 AM, Martinx - $B%8%'!<%`%:(B wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div>
>> I think I can say... "YAY!!" :-D
>>
>> With "LibvirtOpenVswitchDriver" my internal communication has doubled!
>> It went from ~200 Mbit/s (with LibvirtHybridOVSBridgeDriver) to
>> *400 Mbit/s* (with LibvirtOpenVswitchDriver)! Still far from 1 Gbit/s
>> (my physical path limit), but more acceptable now.
>>
>> The command "ethtool -K eth1 gro off" still makes no difference.
>
> Does GRO happen if there isn't RX CKO on the NIC?

Ouch! I missed that lesson... hehe

No idea. How can I check / test this?

If I disable RX CKO (using ethtool?) on the NIC, how can I verify whether GRO is actually happening or not?

Anyway, I'm googling about all this stuff right now. Thanks for pointing it out!

Refs:

* JLS2009: Generic receive offload - http://lwn.net/Articles/358910/
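
In the meantime, here is what I plan to try on the compute node, assuming eth1 is the physical interface carrying the tunnel traffic and the iperf test uses the default port 5001 (please correct me if this is the wrong way to check it):

  # show the current offload settings (rx-checksumming, generic-receive-offload)
  ethtool -k eth1

  # turn RX checksum offload off, then see whether GRO was disabled along with it
  ethtool -K eth1 rx off
  ethtool -k eth1 | grep -E 'rx-checksumming|generic-receive-offload'

  # while the iperf transfer runs, watch the receiving side; if GRO is coalescing,
  # tcpdump should report segments much larger than the 1500-byte MTU
  tcpdump -i eth1 -nn -v 'tcp port 5001' | grep length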

> Can your NIC peer-into a GRE tunnel (?) to do CKO on the encapsulated traffic?

Again, no idea... No idea... :-/

Listen, maybe this sounds too dumb on my part, but it is the first time I'm dealing with this stuff ("NIC peer-into GRE"?, GRO / CKO...).

GRE tunnels sound too damn complex and problematic... I guess it is time to try VXLAN (or NVP?)...

If you guys say that VXLAN is a completely different beast (i.e. it does not involve GRE tunnels at all) and that it works smoothly (without the GRO / CKO / MTU / lag / low-throughput issues), I'll move to it right now (are the VXLAN docs ready?).

NOTE: I don't want to hijack this thread with other problems in my OpenStack environment (internal communication vs. the "Directional network performance issues with Neutron + OpenvSwitch" thread subject), so please let me know if this becomes a problem for you guys.
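
While I read about VXLAN, here is how I plan to rule out the MTU side of it, from inside an instance (10.5.5.5 is just a placeholder for another instance on the same GRE tenant network, and the sizes are my rough guesses for the GRE overhead):

  # -M do sets the DF bit; 1472 bytes of payload + 28 bytes of ICMP/IP headers
  # makes a full 1500-byte packet, so if this first ping fails while the smaller
  # one works, the GRE encapsulation is eating part of the MTU
  ping -c 3 -M do -s 1472 10.5.5.5
  ping -c 3 -M do -s 1422 10.5.5.5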

>> So, there is only one remaining problem: when traffic passes through
>> L3 / the namespace, it is still useless. Even the SSH connection into
>> my instances, via their Floating IPs, is slow as hell; sometimes it
>> just stops responding for a few seconds and then comes back online
>> out of nowhere...
>>
>> I also noticed a weird behavior: when I run "apt-get update" from
>> instance-1, it is slow as I said and, on top of that, its SSH session
>> (where I'm running "apt-get update") stops responding right after I
>> start it, AND all my other SSH connections stop working too, for a few
>> seconds... This means that when I run "apt-get update" from within
>> instance-1, the SSH session of instance-2 is affected as well!! There
>> is something pretty bad going on at L3 / the namespace.
>>
>> BTW, do you think that ~400 Mbit/s of intra-VM communication (GRE
>> tunnel) on top of 1 Gbit Ethernet is acceptable?! It is still less
>> than half...
>
> I would suggest checking for individual CPUs maxing-out during the
> 400 Mbit/s transfers.

Okay, I will.
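
For the record, here is how I intend to check that on the compute nodes while the iperf transfer is running (mpstat comes from the sysstat package):

  # per-CPU utilization, refreshed every second; a single CPU pegged near 100%
  # (usr + sys + soft) while the others sit idle would explain the ceiling
  mpstat -P ALL 1

  # alternatively: run top and press "1" to show each CPU on its own line
  top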

> rick jones

Thiago