<p dir="ltr">Disable offloading on the nodes with: ethtool -K interfaceName gro off gso off tso off</p>
<p dir="ltr">And then try it again</p>
<div class="gmail_quote">El 16/12/2014 18:36, "Georgios Dimitrakakis" <<a href="mailto:giorgis@acmac.uoc.gr">giorgis@acmac.uoc.gr</a>> escribió:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><br>
Hi all!<br>
<br>
In my OpenStack installation (Icehouse, using nova legacy networking) the VMs talk to each other over a 1Gbps network link.<br>
<br>
My issue is that although file transfers between the physical (hypervisor) nodes can saturate that link, transfers between VMs reach much lower speeds, e.g. 30MB/s (approx. 240Mbps).<br>
<br>
My tests are performed by scp'ing a large image file (approx. 4GB) between the nodes and between the VMs.<br>
<br>
I have updated my images to use the e1000 NIC driver, but the results remain the same.<br>
<br>
What other factors could be limiting the throughput?<br>
<br>
Does it have to do with the disk driver I am using? Does the filesystem of the hypervisor node play a significant role?<br>
<br>
Any ideas on how to get closer to saturating the 1Gbps link?<br>
<br>
<br>
Best regards,<br>
<br>
<br>
George<br>
<br>
</blockquote></div>