[openstack-dev] [Neutron] One performance issue about VXLAN pool initiation

Xurong Yang idopra at gmail.com
Thu Jun 5 09:06:09 UTC 2014


Great.
I will do more tests based on Eugene Nikanorov's modification.

Thanks,


2014-06-05 11:01 GMT+08:00 Isaku Yamahata <isaku.yamahata at gmail.com>:

> Wow, great.
> I think the same applies to the GRE type driver,
> so we should create a similar patch after the VXLAN case is resolved.
>
> thanks,
>
>
> On Thu, Jun 05, 2014 at 12:36:54AM +0400,
> Eugene Nikanorov <enikanorov at mirantis.com> wrote:
>
> > We hijacked the VXLAN initialization performance thread with IPAM! :)
> > I've tried to address the initial problem with some simple SQLAlchemy changes:
> > https://review.openstack.org/97774
> > With SQLite it gives a ~3x improvement over the existing code in master.
> > I need to do a bit more testing with real backends to make sure the
> > parameters are optimal.
> >
> > Thanks,
> > Eugene.
> >
> >
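For context, the general idea of speeding up table population with plain SQLAlchemy bulk inserts (rather than one ORM object and one INSERT per VNI) might look roughly like the sketch below. This is only an illustration under assumed names: the vxlan_allocations table, its columns, and sync_vni_allocations are made up for the example and are not necessarily what the review above actually does.

    # Illustrative sketch, not Neutron's actual code: populate a VNI allocation
    # table with chunked multi-row INSERTs instead of one INSERT per row.
    from sqlalchemy import (Boolean, Column, Integer, MetaData, Table,
                            create_engine)

    metadata = MetaData()
    vxlan_allocations = Table(
        'vxlan_allocations', metadata,
        Column('vxlan_vni', Integer, primary_key=True, autoincrement=False),
        Column('allocated', Boolean, nullable=False, default=False),
    )

    def sync_vni_allocations(engine, vni_min, vni_max, chunk_size=1000):
        """Insert any missing VNIs in [vni_min, vni_max], chunk_size rows at a time."""
        with engine.begin() as conn:
            existing = {row.vxlan_vni
                        for row in conn.execute(vxlan_allocations.select())}
            chunk = []
            for vni in range(vni_min, vni_max + 1):
                if vni in existing:
                    continue
                chunk.append({'vxlan_vni': vni, 'allocated': False})
                if len(chunk) >= chunk_size:
                    # One executemany-style INSERT per chunk instead of per row.
                    conn.execute(vxlan_allocations.insert(), chunk)
                    chunk = []
            if chunk:
                conn.execute(vxlan_allocations.insert(), chunk)

    engine = create_engine('sqlite://')       # in-memory SQLite for the example
    metadata.create_all(engine)
    sync_vni_allocations(engine, 1, 100000)

Chunking keeps each INSERT statement a reasonable size while still amortizing the per-statement overhead over thousands of rows; the chunk size is exactly the kind of parameter Eugene mentions tuning against real backends.
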
> > On Thu, Jun 5, 2014 at 12:29 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> >
> > > Yes, memcached is a candidate that looks promising.  First things first,
> > > though.  I think we need the abstraction of an IPAM interface merged.  That
> > > will take some more discussion and work on its own.
> > >
> > > Carl
> > > On May 30, 2014 4:37 PM, "Eugene Nikanorov" <enikanorov at mirantis.com>
> > > wrote:
> > >
> > >> > I was thinking it would be a separate process that would communicate
> > >> > over the RPC channel or something.
> > >> memcached?
> > >>
> > >> Eugene.
> > >>
> > >>
> > >> On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> > >>
> > >>> Eugene,
> > >>>
> > >>> That was part of the "whole new set of complications" that I
> > >>> dismissively waved my hands at.  :)
> > >>>
> > >>> I was thinking it would be a separate process that would communicate
> > >>> over the RPC channel or something.  More complications come when you
> > >>> think about making this process HA, etc.  It would mean going over RPC
> > >>> to rabbit to get an allocation, which would be slow.  But the current
> > >>> implementation is slow.  At least going over RPC is greenthread
> > >>> friendly, whereas going to the database doesn't seem to be.
> > >>>
> > >>> Carl
> > >>>
> > >>> On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
> > >>> <enikanorov at mirantis.com> wrote:
> > >>> > Hi Carl,
> > >>> >
> > >>> > The idea of in-memory storage was discussed for a similar problem, but
> > >>> > it might not work for a multiple-server deployment.
> > >>> > Some hybrid approach may be used, though, I think.
> > >>> >
> > >>> > Thanks,
> > >>> > Eugene.
> > >>> >
> > >>> >
> > >>> > On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> > >>> >>
> > >>> >> This is very similar to IPAM...  There is a space of possible ids or
> > >>> >> addresses that can grow very large.  We need to track the allocation
> > >>> >> of individual ids or addresses from that space and be able to quickly
> > >>> >> come up with new allocations and recycle old ones.  I've had this in
> > >>> >> the back of my mind for a week or two now.
> > >>> >>
> > >>> >> A similar problem came up when the database would get populated with
> > >>> >> the entire free space worth of IP addresses to reflect the
> > >>> >> availability of all of the individual addresses.  With a large space
> > >>> >> (like an IPv4 /8 or practically any IPv6 subnet) this would take a very
> > >>> >> long time or never finish.
> > >>> >>
> > >>> >> Neutron was a little smarter about this.  It compressed availability
> > >>> >> into availability ranges in a separate table.  This solved the
> > >>> >> original problem but is not problem free.  It turns out that writing
> > >>> >> database operations to manipulate both the allocations table and the
> > >>> >> availability table atomically is very difficult and ends up being very
> > >>> >> slow, and it has caused us some grief.  The free space also gets
> > >>> >> fragmented, which degrades performance.  This is what led me --
> > >>> >> somewhat reluctantly -- to change how IPs get recycled back into the
> > >>> >> free pool, which hasn't been very popular.
> > >>> >>
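To make the atomicity problem concrete, here is a deliberately simplified sketch of what allocating a single address from an availability-ranges scheme involves. The schema and code are invented for illustration (they are not Neutron's actual tables), but they show the core issue: every allocation has to update two tables in one transaction, and concurrent allocators contend for the same first range unless row locks or retries are added.

    # Deliberately simplified illustration (invented schema, not Neutron's) of
    # allocating one IP from an availability-ranges table: two tables must be
    # updated atomically for every allocation.
    import ipaddress

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            create_engine)

    metadata = MetaData()
    allocations = Table(
        'ip_allocations', metadata,
        Column('ip_address', String(64), primary_key=True),
    )
    avail_ranges = Table(
        'ip_availability_ranges', metadata,
        Column('id', Integer, primary_key=True),
        Column('first_ip', String(64), nullable=False),
        Column('last_ip', String(64), nullable=False),
    )

    def allocate_ip(engine):
        # Real code would also need row locking (SELECT ... FOR UPDATE) or a
        # retry loop, because concurrent allocators all grab the same first row.
        with engine.begin() as conn:
            rng = conn.execute(avail_ranges.select().limit(1)).first()
            if rng is None:
                raise RuntimeError('subnet exhausted')
            ip = ipaddress.ip_address(rng.first_ip)
            conn.execute(allocations.insert(), [{'ip_address': str(ip)}])
            if rng.first_ip == rng.last_ip:
                # The range held a single address; remove it entirely.
                conn.execute(avail_ranges.delete()
                             .where(avail_ranges.c.id == rng.id))
            else:
                # Shrink the range by one address from the front.
                conn.execute(avail_ranges.update()
                             .where(avail_ranges.c.id == rng.id)
                             .values(first_ip=str(ip + 1)))
            return str(ip)

Recycling an address back is even messier, since the freed address may need to be merged with adjacent ranges; that is where the fragmentation and locking grief described above comes from.
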
> > >>> >> I wonder if we can discuss a good pattern for handling allocations
> > >>> >> where the free space can grow very large.  We could use the pattern
> > >>> >> for the allocation of IP addresses, VXLAN ids, and other similar
> > >>> >> resource spaces.
> > >>> >>
> > >>> >> For IPAM, I have been entertaining the idea of creating an allocation
> > >>> >> agent that would manage the availability of IPs in memory rather than
> > >>> >> in the database.  I hesitate, because that brings up a whole new set
> > >>> >> of complications.  I'm sure there are other potential solutions that I
> > >>> >> haven't yet considered.
> > >>> >>
> > >>> >> The L3 subteam is currently working on a pluggable IPAM model.  Once
> > >>> >> the initial framework for this is done, we can more easily play around
> > >>> >> with changing the underlying IPAM implementation.
> > >>> >>
> > >>> >> Thoughts?
> > >>> >>
> > >>> >> Carl
> > >>> >>
> > >>> >> On Thu, May 29, 2014 at 4:01 AM, Xurong Yang <idopra at gmail.com> wrote:
> > >>> >> > Hi, Folks,
> > >>> >> >
> > >>> >> > When we configure a VXLAN range of [1, 16M], the neutron-server
> > >>> >> > service takes a long time and CPU usage is very high (100%) during
> > >>> >> > initialization. One test based on PostgreSQL has been verified: it
> > >>> >> > takes more than one hour when the VXLAN range is [1, 1M].
> > >>> >> >
> > >>> >> > So, is there any good solution for this performance issue?
> > >>> >> >
> > >>> >> > Thanks,
> > >>> >> > Xurong Yang
> > >>> >> >
> > >>> >> >
> > >>> >> >
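For context on why the initialization takes so long: on startup the type driver has to make sure one allocation row exists for every VNI in the configured range, and doing that one ORM object at a time means millions of round trips for a [1, 16M] range. The sketch below is a deliberately naive illustration of that pattern, with made-up model and function names rather than the actual Neutron code.

    # Deliberately naive sketch (made-up model, not Neutron's actual code) of the
    # row-at-a-time pattern that makes syncing a huge VNI range so slow: one ORM
    # object and, effectively, one INSERT per VNI.
    from sqlalchemy import Boolean, Column, Integer, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class VxlanAllocation(Base):
        __tablename__ = 'vxlan_allocations'
        vxlan_vni = Column(Integer, primary_key=True, autoincrement=False)
        allocated = Column(Boolean, nullable=False, default=False)

    def sync_vni_allocations_naive(engine, vni_min, vni_max):
        with Session(engine) as session, session.begin():
            existing = {vni for (vni,) in session.query(VxlanAllocation.vxlan_vni)}
            for vni in range(vni_min, vni_max + 1):   # up to ~16 million iterations
                if vni not in existing:
                    session.add(VxlanAllocation(vxlan_vni=vni, allocated=False))
            # The flush on commit turns into millions of single-row INSERTs.

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    sync_vni_allocations_naive(engine, 1, 10000)      # small range for the example

Compare this with the chunked bulk-insert sketch earlier in the thread.
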
>
> --
> Isaku Yamahata <isaku.yamahata at gmail.com>
>