[Openstack-operators] improve performance Neutron VXLAN

Mathieu Rohon mathieu.rohon at gmail.com
Fri Jan 23 13:00:32 UTC 2015


Hi Pedro,

This thread might interest you:

http://lists.openstack.org/pipermail/openstack-dev/2015-January/054953.html

Mathieu

On Fri, Jan 23, 2015 at 12:07 PM, Pedro Sousa <pgsousa at gmail.com> wrote:

> Hi Slawek,
>
> I've tried with 8950/9000 but I had problems communicating with external
> hosts from the VM.
>
> Regards,
> Pedro Sousa
>
>
>
>
> On Thu, Jan 22, 2015 at 9:36 PM, Sławek Kapłoński <slawek at kaplonski.pl>
> wrote:
>
>> As I wrote earlier, for me it is best to have 9000 on the hosts and 8950 on
>> the instances; then I get full speed between instances. With a lower MTU on
>> the instances I get about 2-2.5 Gbps, and I saw that the vhost-net process
>> on the host was using 100% of one CPU core. I'm using libvirt with KVM -
>> maybe you are using something else and it may be different on your hosts.
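>>
>> A minimal sketch of that setup (eth0 and the file paths are just examples
>> from a RHEL/CentOS-style host, adjust to your distro): raise the MTU on the
>> hosts' physical interfaces and push the lower MTU to the instances through
>> the dnsmasq options file used by the Neutron DHCP agent:
>>
>>     # host side, e.g. /etc/sysconfig/network-scripts/ifcfg-eth0
>>     MTU=9000
>>
>>     # /etc/neutron/dnsmasq-neutron.conf
>>     dhcp-option-force=26,8950
>>
>>     # /etc/neutron/dhcp_agent.ini (point the agent at the file above)
>>     dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf
>>
>> Instances typically pick up the new MTU when their DHCP lease is renewed
>> (after the DHCP agent has been restarted with the new config).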
>>
>> Slawek Kaplonski
>>
>>
>> On 22.01.2015 at 20:45, Pedro Sousa wrote:
>>
>>> Hi Slawek,
>>>
>>> I've tried several options, but the one that seems to work best is MTU
>>> 1450 on the VM and MTU 1600 on the host. With MTU 1400 on the VM I would
>>> get freezes and timeouts.
>>>
>>> Still, I get about 2.2 Gbit/sec while on the host I get 9 Gbit/sec; do you
>>> think that is normal?
>>>
>>> Thanks,
>>> Pedro Sousa
>>>
>>>
>>>
>>>
>>> On Thu, Jan 22, 2015 at 7:32 PM, Sławek Kapłoński <slawek at kaplonski.pl> wrote:
>>>
>>>     Hello,
>>>
>>>     Setting it in the dnsmasq file in neutron will be OK. It will then
>>>     force DHCP option 26 (interface MTU) on the VMs.
>>>     You can also manually change it on the VMs to test.
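>>>
>>>     For a quick manual test inside a VM (eth0 here is just an example
>>>     interface name), something like
>>>
>>>         ip link set dev eth0 mtu 8950
>>>
>>>     lets you compare throughput before touching the dnsmasq file.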
>>>
>>>     Slawek Kaplonski
>>>
>>>     On 22.01.2015 at 17:06, Pedro Sousa wrote:
>>>
>>>         Hi Slawek,
>>>
>>>         I'll test this. Did you change the MTU in the dnsmasq file in
>>>         /etc/neutron/? Or do you need to change it in other places too?
>>>
>>>         Thanks,
>>>         Pedro Sousa
>>>
>>>         On Wed, Jan 21, 2015 at 4:26 PM, Sławek Kapłoński
>>>         <slawek at kaplonski.pl> wrote:
>>>
>>>              I have a similar setup and I also got something like
>>>              2-2.5 Gbps between VMs. When I changed it to 8950 on the VMs
>>>              (i.e. in the Neutron conf), 50 less than on the hosts, it
>>>              was much better. You can also check: when you run a test
>>>              between VMs, on the host there is probably a process called
>>>              "vhost-net" (or something like that) using 100% of one CPU
>>>              core, and that is IMHO the bottleneck.
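>>>
>>>              A quick way to check that on the hypervisor (assuming KVM
>>>              with vhost-net and standard procps tools) is to watch the
>>>              vhost kernel threads while iperf3 runs between two VMs:
>>>
>>>                  top -b -n 1 | grep vhost
>>>
>>>              If one of them sits at ~100% CPU, you are most likely
>>>              hitting this per-core packet-processing limit rather than
>>>              the NIC.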
>>>
>>>              Slawek Kaplonski
>>>
>>>              On Wed, Jan 21, 2015 at 04:12:02PM +0000, Pedro Sousa wrote:
>>>               > Hi Slawek,
>>>               >
>>>               > I have dhcp-option-force=26,1400 in neutron-dnsmasq.conf and
>>>               > MTU=9000 on the network interfaces in the operating system.
>>>               >
>>>               > Do I need to change it somewhere else?
>>>               >
>>>               > Thanks,
>>>               > Pedro Sousa
>>>               >
>>>               > On Wed, Jan 21, 2015 at 4:07 PM, Sławek Kapłoński
>>>               > <slawek at kaplonski.pl> wrote:
>>>               >
>>>               > > Hello,
>>>               > >
>>>               > > Try setting bigger jumbo frames on the hosts and VMs. For
>>>               > > example, on the hosts you can set 9000 and on the VMs 8950,
>>>               > > and then check again. It helped me with a similar problem.
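>>>               > >
>>>               > > The 50-byte gap is the VXLAN encapsulation overhead: with an
>>>               > > IPv4 underlay and untagged frames it works out to roughly
>>>               > >
>>>               > >     outer IPv4 (20) + UDP (8) + VXLAN (8) + inner Ethernet (14) = 50 bytes
>>>               > >
>>>               > > so the instance MTU should be at most the host MTU minus 50
>>>               > > (e.g. 9000 - 50 = 8950).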
>>>               > >
>>>               > > Slawek Kaplonski
>>>               > >
>>>               > > On Wed, Jan 21, 2015 at 03:22:50PM +0000, Pedro Sousa wrote:
>>>               > > > Hi all,
>>>               > > >
>>>               > > > Is there a way to improve network performance on my
>>>               > > > instances with VXLAN? I changed the MTU on the physical
>>>               > > > interfaces to 1600, but performance is still lower than
>>>               > > > on the bare-metal hosts:
>>>               > > >
>>>               > > > *On Instance:*
>>>               > > >
>>>               > > > [root at vms6-149a71e8-1f2a-4d6e-bba4-e70dfa42b289 ~]# iperf3 -s
>>>               > > >
>>>               > > > -----------------------------------------------------------
>>>               > > > Server listening on 5201
>>>               > > >
>>>               > > > -----------------------------------------------------------
>>>
>>>               > > > Accepted connection from 10.0.66.35, port 42900
>>>               > > > [  5] local 10.0.66.38 port 5201 connected to 10.0.66.35 port 42901
>>>               > > > [ ID] Interval           Transfer     Bandwidth
>>>               > > > [  5]   0.00-1.00   sec   189 MBytes  1.59 Gbits/sec
>>>               > > > [  5]   1.00-2.00   sec   245 MBytes  2.06 Gbits/sec
>>>               > > > [  5]   2.00-3.00   sec   213 MBytes  1.78 Gbits/sec
>>>               > > > [  5]   3.00-4.00   sec   227 MBytes  1.91 Gbits/sec
>>>               > > > [  5]   4.00-5.00   sec   235 MBytes  1.97 Gbits/sec
>>>               > > > [  5]   5.00-6.00   sec   235 MBytes  1.97 Gbits/sec
>>>               > > > [  5]   6.00-7.00   sec   234 MBytes  1.96 Gbits/sec
>>>               > > > [  5]   7.00-8.00   sec   235 MBytes  1.97 Gbits/sec
>>>               > > > [  5]   8.00-9.00   sec   244 MBytes  2.05 Gbits/sec
>>>               > > > [  5]   9.00-10.00  sec   234 MBytes  1.97 Gbits/sec
>>>               > > > [  5]  10.00-10.04  sec  9.30 MBytes  1.97 Gbits/sec
>>>               > > > - - - - - - - - - - - - - - - - - - - - - - - - -
>>>               > > > [ ID] Interval           Transfer     Bandwidth       Retr
>>>               > > > [  5]   0.00-10.04  sec  2.25 GBytes  1.92 Gbits/sec   43    sender
>>>               > > > [  5]   0.00-10.04  sec  2.25 GBytes  1.92 Gbits/sec         receiver
>>>               > > >
>>>               > > >
>>>               > > > *On baremetal:*
>>>               > > > iperf3 -s
>>>               > > > warning: this system does not seem to support IPv6 - trying IPv4
>>>               > > >
>>>               > > > -----------------------------------------------------------
>>>               > > > Server listening on 5201
>>>               > > >
>>>               > > > -----------------------------------------------------------
>>>
>>>               > > > Accepted connection from 172.16.21.4, port 51408
>>>               > > > [  5] local 172.16.21.5 port 5201 connected to 172.16.21.4 port 51409
>>>               > > > [ ID] Interval           Transfer     Bandwidth
>>>               > > > [  5]   0.00-1.00   sec  1.02 GBytes  8.76 Gbits/sec
>>>               > > > [  5]   1.00-2.00   sec  1.07 GBytes  9.23 Gbits/sec
>>>               > > > [  5]   2.00-3.00   sec  1.08 GBytes  9.29 Gbits/sec
>>>               > > > [  5]   3.00-4.00   sec  1.08 GBytes  9.27 Gbits/sec
>>>               > > > [  5]   4.00-5.00   sec  1.08 GBytes  9.27 Gbits/sec
>>>               > > > [  5]   5.00-6.00   sec  1.08 GBytes  9.28 Gbits/sec
>>>               > > > [  5]   6.00-7.00   sec  1.08 GBytes  9.28 Gbits/sec
>>>               > > > [  5]   7.00-8.00   sec  1.08 GBytes  9.29 Gbits/sec
>>>               > > > [  5]   8.00-9.00   sec  1.08 GBytes  9.28 Gbits/sec
>>>               > > > [  5]   9.00-10.00  sec  1.08 GBytes  9.29 Gbits/sec
>>>               > > > [  5]  10.00-10.04  sec  42.8 MBytes  9.31 Gbits/sec
>>>               > > > - - - - - - - - - - - - - - - - - - - - - - - - -
>>>               > > > [ ID] Interval           Transfer     Bandwidth       Retr
>>>               > > > [  5]   0.00-10.04  sec  10.8 GBytes  9.23 Gbits/sec   95    sender
>>>               > > > [  5]   0.00-10.04  sec  10.8 GBytes  9.22 Gbits/sec         receiver
>>>               > > >
>>>               > > >
>>>               > > > Thanks,
>>>               > > > Pedro Sousa
>>>               > >
>>>               > >
>>>               > >
>>>
>>>
>>>
>>>     --
>>>     Best regards
>>>     Sławek Kapłoński
>>>     slawek at kaplonski.pl
>>>
>>>
>>>
>> --
>> Best regards
>> Sławek Kapłonski
>> slawek at kaplonski.pl
>>
>
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>