[Openstack] Network speed issue

Adrián Norte Fernández adrian at bashlines.com
Tue Dec 16 18:02:43 UTC 2014


That shows that those 3 offload settings are enabled.
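
For reference, a minimal sketch of turning those three offloads off and verifying the change (assuming the interface is eth1, as in your ethtool output; adjust the name and repeat on every node as needed):

```shell
# Disable generic-receive-offload (gro), generic-segmentation-offload (gso)
# and tcp-segmentation-offload (tso) on the suspect interface (requires root).
ethtool -K eth1 gro off gso off tso off

# Confirm all three now report "off".
ethtool --show-offload eth1 | grep -E 'generic-receive-offload|generic-segmentation-offload|tcp-segmentation-offload:'
```

Note that ethtool changes do not persist across reboots, so if this helps you will want to add the command to your interface configuration (e.g. an ifcfg script or a udev rule, depending on your distribution).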
On 16/12/2014 19:01, "Georgios Dimitrakakis" <giorgis at acmac.uoc.gr>
wrote:

> I believe that they are already disabled.
>
> Here is the ethtool output:
>
> # ethtool --show-offload eth1
> Features for eth1:
> rx-checksumming: on
> tx-checksumming: on
>         tx-checksum-ipv4: off
>         tx-checksum-unneeded: off
>         tx-checksum-ip-generic: on
>         tx-checksum-ipv6: off
>         tx-checksum-fcoe-crc: off [fixed]
>         tx-checksum-sctp: off [fixed]
> scatter-gather: on
>         tx-scatter-gather: on
>         tx-scatter-gather-fraglist: off [fixed]
> tcp-segmentation-offload: on
>         tx-tcp-segmentation: on
>         tx-tcp-ecn-segmentation: off
>         tx-tcp6-segmentation: on
> udp-fragmentation-offload: off [fixed]
> generic-segmentation-offload: on
> generic-receive-offload: on
> large-receive-offload: off [fixed]
> rx-vlan-offload: on [fixed]
> tx-vlan-offload: on [fixed]
> ntuple-filters: off [fixed]
> receive-hashing: off [fixed]
> highdma: on [fixed]
> rx-vlan-filter: off [fixed]
> vlan-challenged: off [fixed]
> tx-lockless: off [fixed]
> netns-local: off [fixed]
> tx-gso-robust: off [fixed]
> tx-fcoe-segmentation: off [fixed]
> tx-gre-segmentation: off [fixed]
> tx-udp_tnl-segmentation: off [fixed]
> fcoe-mtu: off [fixed]
> loopback: off [fixed]
>
>
>
> Regards,
>
>
> George
>
>> Disable offloading on the nodes with: ethtool -K interfaceName gro off
>> gso off tso off
>>
>> And then try it again.
>> On 16/12/2014 18:36, "Georgios Dimitrakakis" wrote:
>>
>>  Hi all!
>>>
>>> In my OpenStack installation (Icehouse and use nova legacy
>>> networking) the VMs are talking to each other over a 1Gbps network
>>> link.
>>>
>>> My issue is that although file transfers between physical
>>> (hypervisor) nodes can saturate that link, transfers between VMs
>>> reach much lower speeds, e.g. 30 MB/s (approx. 240 Mbps).
>>>
>>> My tests are performed by scp-ing a large image file (approx. 4GB)
>>> between the nodes and between the VMs.
>>>
>>> I have updated my images to use the e1000 NIC driver, but the
>>> results remain the same.
>>>
>>> What other limiting factors might there be?
>>>
>>> Does it have to do with the disk driver I am using? Does the
>>> filesystem of the hypervisor node play a significant role?
>>>
>>> Any ideas on how to get closer to saturating the 1Gbps link?
>>>
>>> Best regards,
>>>
>>> George
>>>
>>> _______________________________________________
>>> Mailing list:
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>> Post to     : openstack at lists.openstack.org
>>> Unsubscribe :
>>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
>>>
>>
>>
>>
>
>