[openstack-dev] [Neutron] Random IP address allocations
gkotton at vmware.com
Mon Jun 13 12:18:02 UTC 2016
I think that for a major change like this we should at least have had a mailing list discussion.
On 6/10/16, 7:24 PM, "Carl Baldwin" <carl at ecbaldwin.net> wrote:
>On Fri, Jun 10, 2016 at 2:40 AM, Gary Kotton <gkotton at vmware.com> wrote:
>> The patch https://review.openstack.org/#/c/292207/ has broken decomposed
>> plugins. I am not sure if we can classify this as an API change -- basically
>Can you be more specific about how it "has broken decomposed plugins?"
> What's broken? There are no links or anything.
The unit tests break, and it took a lot of work to get them fixed. The reason is that the plugin has native DHCP, which means a port is created for DHCP. Tests would randomly fail because the DHCP port 'may' have been allocated the exact IP that the test expected. There are a number of ways of addressing this.
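One way to make such tests robust is to stop asserting an exact address and instead assert that whatever was allocated falls inside the subnet's allocation pool. A minimal sketch; the helper name and pool values are illustrative, not Neutron test API:

```python
import ipaddress

def assert_in_pool(allocated_ip, pool_start, pool_end):
    # Check membership in the allocation pool rather than pinning the
    # test to a specific address, so random allocation cannot break it.
    ip = ipaddress.ip_address(allocated_ip)
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    assert start <= ip <= end, (
        "%s outside pool %s-%s" % (allocated_ip, pool_start, pool_end))
```

A test written this way passes regardless of whether the DHCP port happened to take the first address in the pool.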
>> the IP address allocation model has changed. So someone prior to the patch
>> could create a network and expect addresses A, B and C to be allocated in
>Nothing has ever been documented indicating that you can expect this.
>People just noticed a pattern and assumed they could count on it.
>Anyone depending on this has been depending on an implementation
>detail. This has come up before. I think we need the flexibility to
>allocate IPs as needed in order to scale well. I don't think we should
>be restricted by de facto patterns in IP allocation that people have
>come to depend on.
>Any real world use should always take into account the fact that
>there may be other users of the system trying to get IP allocations in
>parallel. To them, the expected behavior doesn't change: they could
>get any address from a window of the next few available addresses.
>So, the problem here must be in tests running in a contrived setup
>making too many assumptions.
>> that order. Now random addresses will be allocated.
>After nearly 3 years of dealing with DB contention around IP
>allocation, this is the only way that we've been able to come up with
>to finally relieve it. When IPAM gets busy, there is a lot of racing
>to get that next IP address which results in contention between worker
>processes. Allocating from a random window relieves it considerably.
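The mechanism described above can be sketched roughly as follows. Everything here is illustrative (the window size, the Conflict exception, the in-memory "DB") and not the actual Neutron IPAM code; the point is that workers choosing randomly from a window of free addresses rarely collide, and a DB unique constraint plus retry handles the rare collision:

```python
import random

class Conflict(Exception):
    """Stands in for a DB unique-constraint violation."""

def try_insert(db, ip):
    # Simulates an INSERT guarded by a unique constraint on (subnet, ip).
    if ip in db:
        raise Conflict(ip)
    db.add(ip)

def allocate(db, available, window=10, retries=5):
    """Random-window allocation with retry: each attempt picks a random
    address from the next `window` free ones, so parallel workers mostly
    choose different addresses and conflicts stay rare."""
    for _ in range(retries):
        candidates = [ip for ip in available if ip not in db][:window]
        if not candidates:
            raise RuntimeError("allocation pool exhausted")
        ip = random.choice(candidates)
        try:
            try_insert(db, ip)
            return ip
        except Conflict:
            continue  # another worker took it; try again
    raise RuntimeError("too many allocation conflicts")
```

Compared with always taking the lowest free address, where every worker races for the same row, the expected number of retries here drops sharply as the window grows.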
What about locking? I know a lot of people wanted to discuss distributed locking. Just doing a random retry also looks very error prone. Have you tested this at scale?
In addition, it also seems that IPAM is called under a DB transaction -- that will break things when an IPAM driver is talking to an external service and there is even a little load.
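The transaction concern can be illustrated with a small sketch. FakeSession and FakeIpam below are stand-ins invented for the example; the point is only the ordering: the slow external call should happen before, not during, the DB transaction, with a compensating release if the commit fails:

```python
import contextlib

class FakeSession:
    """Minimal stand-in for a DB session, just to make the sketch runnable."""
    def __init__(self):
        self.rows = []
        self.in_txn = False

    @contextlib.contextmanager
    def begin(self):
        self.in_txn = True
        try:
            yield
        finally:
            self.in_txn = False

    def add(self, row):
        self.rows.append(row)

class FakeIpam:
    """Stand-in for an external IPAM service with network latency."""
    def allocate(self, port):
        return "10.0.0.5"  # pretend the backend chose this address

    def release(self, ip):
        pass  # compensating action on failure

def allocate_outside_txn(session, ipam, port):
    # Do the slow external call *before* opening the transaction, so DB
    # locks are held only for the short local insert.
    ip = ipam.allocate(port)
    try:
        with session.begin():
            session.add(ip)
    except Exception:
        ipam.release(ip)  # undo the external allocation if the commit fails
        raise
    return ip
```

Holding a transaction open across a network round-trip to the IPAM backend means DB locks are held for the round-trip's duration, which is exactly what causes trouble under load.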
>> I think that this requires some discussion and we should consider reverting
>> the patch. Maybe I am missing something here but this may break people who
>are relying on the existing outputs of Neutron, i.e. the existing
>behavior (which may have been wrong from the start).
>Some discussion was had leading up to this change. I didn't
>think we needed broader discussion because we've already established
>that IP allocation is an implementation detail. The only contract
>in place for IP allocation is that an IP address will be allocated
>from within the allocation_pools defined on the subnet if available.
>I am against reverting this patch, as I have stated on the review
>proposing the revert.