<div dir="ltr">Hi,<div>i have reported a bug[1]</div><div>[1]<a href="https://bugs.launchpad.net/neutron/+bug/1324875" target="_blank">https://bugs.launchpad.net/neutron/+bug/1324875</a></div><div><br></div><div>but no better idea about this issue now, maybe need more discussion.</div>

Any thoughts?
:)

Xurong Yang

2014-05-31 6:33 GMT+08:00 Eugene Nikanorov <enikanorov@mirantis.com>:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="">> <span style="font-family:arial,sans-serif;font-size:13px">I was thinking it would be a separate process that would communicate </span><span style="font-family:arial,sans-serif;font-size:13px">over the RPC channel or something.</span></div>
<div>
<font face="arial, sans-serif">memcached?</font></div><span class="HOEnZb"><font color="#888888"><div><font face="arial, sans-serif"><br></font></div><div><font face="arial, sans-serif">Eugene.</font></div></font></span></div>
<div class="HOEnZb"><div class="h5"><div class="gmail_extra"><br><br><div class="gmail_quote">
On Sat, May 31, 2014 at 2:27 AM, Carl Baldwin <span dir="ltr"><<a href="mailto:carl@ecbaldwin.net" target="_blank">carl@ecbaldwin.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Eugene,

That was part of the "whole new set of complications" that I
dismissively waved my hands at. :)

I was thinking it would be a separate process that would communicate
over the RPC channel or something. More complications come when you
think about making this process HA, etc. It would mean going over RPC
to rabbit to get an allocation, which would be slow. But the current
implementation is slow too. At least going over RPC is greenthread
friendly, whereas going to the database doesn't seem to be.
<span><font color="#888888"><br>
Carl<br>
</font></span><div><div><br>

On Fri, May 30, 2014 at 2:56 PM, Eugene Nikanorov
<enikanorov@mirantis.com> wrote:
> Hi Carl,
>
> The idea of in-memory storage was discussed for a similar problem, but it
> might not work for multiple-server deployments.
> Some hybrid approach may be used, though, I think.
>
> Thanks,
> Eugene.
>
>
> On Fri, May 30, 2014 at 8:53 PM, Carl Baldwin <carl@ecbaldwin.net> wrote:
>>
>> This is very similar to IPAM... There is a space of possible ids or
>> addresses that can grow very large. We need to track the allocation
>> of individual ids or addresses from that space and be able to quickly
>> come up with new allocations and recycle old ones. I've had this in
>> the back of my mind for a week or two now.
>>
>> A similar problem came up when the database would get populated with
>> the entire free space's worth of IP addresses to reflect the
>> availability of all of the individual addresses. With a large space
>> (like an IPv4 /8 or practically any IPv6 subnet) this would take a very
>> long time or never finish.
>>
>> Neutron was a little smarter about this. It compressed availability
>> into availability ranges in a separate table. This solved the
>> original problem, but it is not problem-free. It turns out that writing
>> database operations to manipulate both the allocations table and the
>> availability table atomically is very difficult; it ends up being very
>> slow and has caused us some grief. The free space also gets
>> fragmented, which degrades performance. This is what led me --
>> somewhat reluctantly -- to change how IPs get recycled back into the
>> free pool, which hasn't been very popular.
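
For anyone who hasn't looked at that code, the pattern Carl describes is
roughly the following (a simplified sketch with hypothetical SQLAlchemy
models, not the actual Neutron implementation). Every allocation has to
shrink or split a row in the availability table inside the same transaction,
which is where the contention and the fragmentation come from:

    def allocate_from_ranges(session):
        with session.begin():
            # Lock one availability-range row; this serializes concurrent
            # allocators and is where the contention shows up.
            rng = (session.query(AvailabilityRange)
                          .with_for_update()
                          .first())
            if rng is None:
                raise Exception("address space exhausted")
            allocated = rng.first
            if rng.first == rng.last:
                session.delete(rng)        # range fully consumed
            else:
                rng.first += 1             # shrink the range in place
            session.add(Allocation(address=allocated))
        return allocated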
>>
>> I wonder if we can discuss a good pattern for handling allocations
>> where the free space can grow very large. We could use the pattern
>> for the allocation of IP addresses, VXLAN ids, and other similar
>> resource spaces.
>>
>> For IPAM, I have been entertaining the idea of creating an allocation
>> agent that would manage the availability of IPs in memory rather than
>> in the database. I hesitate, because that brings up a whole new set
>> of complications. I'm sure there are other potential solutions that I
>> haven't yet considered.
>>
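As a strawman for what such an agent could keep in memory, compact free
intervals would avoid ever materializing the full space (hypothetical code,
only to illustrate the data structure; HA, persistence, and a real recycling
policy are exactly the complications Carl mentions):

    class IntervalAllocator(object):
        """Track free space as inclusive (start, end) intervals."""

        def __init__(self, start, end):
            self.free = [(start, end)]     # [1, 16M] is a single tuple here

        def allocate(self):
            if not self.free:
                raise ValueError("space exhausted")
            start, end = self.free[0]
            if start == end:
                self.free.pop(0)
            else:
                self.free[0] = (start + 1, end)
            return start

        def release(self, value):
            # naive recycling; a periodic compaction could merge intervals
            self.free.append((value, value))

    allocator = IntervalAllocator(1, 16 * 1024 * 1024)   # whole [1, 16M] range
    vni = allocator.allocate()                           # returns 1
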
>> The L3 subteam is currently working on a pluggable IPAM model. Once
>> the initial framework for this is done, we can more easily play around
>> with changing the underlying IPAM implementation.
>>
>> Thoughts?
>>
>> Carl
>>
>> On Thu, May 29, 2014 at 4:01 AM, Xurong Yang <idopra@gmail.com> wrote:
>> > Hi, Folks,
>> >
>> > When we configure the VXLAN range [1, 16M], the neutron-server service
>> > takes a long time to initialize and the CPU usage stays very high (100%).
>> > One test based on PostgreSQL has been verified: it takes more than an
>> > hour even when the VXLAN range is only [1, 1M].
>> >
>> > So, is there any good solution to this performance issue?
>> >
>> > Thanks,
>> > Xurong Yang

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev