[openstack-dev] IPAM reference driver status and other stuff

Carl Baldwin carl at ecbaldwin.net
Sat Mar 14 16:52:19 UTC 2015


Here is an update from our discussion this morning in IRC [1].  The
discussion involved mainly Pavel, Salvatore, and me.

We first discussed testing the integration of Salvatore's work [2] on
the reference driver with Pavel's work to load a driver [3] and
refactor the db_base_plugin [4].  Pavel will continue the integration
efforts which he has already begun.  The final patch [4] currently
makes the reference driver the default IPAM in Neutron so that it is
exercised in the check queue.  Before that patch merges, the
default-switching change will be moved to a separate patch so that
merging [4] does not make the new IPAM driver the default in Neutron.

Pavel's final patch [4] will merge when the tests pass with the driver
as default.  We hope to hit this point soon.  At that point, we will
create a non-voting job to run the new IPAM driver as default on all
new patches.  This will give us many more test runs to work out bugs
for the Kilo release and will help us to gauge the eventual readiness
of the new driver to become the default IPAM in Neutron.  This latter
step will not happen until the Liberty time frame.

In Kilo, the new IPAM driver will only support greenfield deployments
and will not support using multiple drivers simultaneously.  There
will be no migration from the old IPAM to the new until Liberty.
However, a migration plan is required in order to eventually deprecate
and remove the baked-in IPAM implementation.  We will continue to
evaluate use cases around using multiple drivers in the same
deployment (e.g. the ability to set the driver per subnet, or support
through some sort of flavor mechanism).

At the point that the non-voting check job is stable, it will be moved
to voting.  When the new driver graduates to be the default driver in
Neutron, the separate gate check job will no longer be necessary and
will be removed.

Please let me know if I left out any significant detail.

Carl

[1] http://eavesdrop.openstack.org/irclogs/%23openstack-neutron/%23openstack-neutron.2015-03-13.log at 2015-03-13T15:08:15
[2] https://review.openstack.org/#/c/150485/
[3] https://review.openstack.org/#/c/147479/
[4] https://review.openstack.org/#/c/153236/

On Mon, Mar 9, 2015 at 4:44 AM, Salvatore Orlando <sorlando at nicira.com> wrote:
> Aloha everybody!
>
> This is another long email, so here's the summary for people with 5-minute
> or less attention span:
>
> The IPAM reference driver is almost ready [1]. Unfortunately, letting the
> driver handle allocation pools required a few major changes, so the latest
> patchset is rather big. I would like reviewers to advise on how they would
> prefer it split into multiple patches.
>
> I also had to add an additional method to the subnet interface [2]
>
> However, I am not sure we are doing the right thing with regard to how the
> driver is introduced. I think we need to think about it more thoroughly,
> and possibly introduce a concept of "allocator". Otherwise, if we switch an
> existing deployment to the driver, it is very likely things will simply
> break.
>
> So here's the detailed part.
> There are only a few bits needed to complete the IPAM reference driver:
> - A thorough unit test case for the newly introduced "subnet manager", which
> provides db apis.
> - Some more unit tests for driver behaviour
> - Logic for handling allocation pool updates (which pretty much needs to be
> copied over from the db base plugin with minor changes)
> - Rework of the synchronisation between the IPAM object and the Neutron db
> object. The current approach is very error-prone.
>
> While dealing with management of allocation pools, I found out that it's not
> convenient for the IPAM driver to rely on Neutron's DB allocation pools.
> This is a bit of a chicken and egg problem, since:
> A) Neutron needs to call the driver to define allocation pools
> B) The driver needs neutron to define allocation pools before creating
> availability ranges
> C) Neutron cannot create allocation pools if the driver has not defined them
> I therefore decided, reluctantly, to add more data duplication,
> introducing IPAM tables for allocation pools and availability ranges. The
> latter is meant to replace Neutron's table when the IPAM driver is
> enabled, whereas the former is pure data duplication. There is also an
> association table between Neutron subnets and the IPAM objects for the
> same subnets; I chose this approach to avoid duplicating the subnet data
> as well.
> I dislike this data separation; on the other hand, the only viable
> alternative would be to give the IPAM driver full control over Neutron's
> database tables for IP allocations and subnet allocation pools. While this
> is feasible, I think it would be even worse to hand potentially
> third-party code control over the data structures backing the Neutron API.
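[A rough illustration of the separation described above. All table and column names here are invented for the example; this is not the actual Neutron or IPAM schema.]

```python
import sqlite3

# Hypothetical sketch of the IPAM-side tables: an association table to
# the Neutron subnet, duplicated allocation pools, and the availability
# ranges that replace Neutron's when the driver is enabled.
conn = sqlite3.connect(":memory:")
conn.executescript("""
-- association between a Neutron subnet and its IPAM counterpart,
-- avoiding duplication of the subnet record itself
CREATE TABLE ipam_subnets (
    id TEXT PRIMARY KEY,
    neutron_subnet_id TEXT UNIQUE NOT NULL
);
-- duplicated copy of the subnet's allocation pools, owned by IPAM
CREATE TABLE ipam_allocation_pools (
    id TEXT PRIMARY KEY,
    ipam_subnet_id TEXT NOT NULL REFERENCES ipam_subnets(id),
    first_ip TEXT NOT NULL,
    last_ip TEXT NOT NULL
);
-- replaces Neutron's availability ranges when the driver is active
CREATE TABLE ipam_availability_ranges (
    allocation_pool_id TEXT NOT NULL
        REFERENCES ipam_allocation_pools(id),
    first_ip TEXT NOT NULL,
    last_ip TEXT NOT NULL
);
""")
conn.execute("INSERT INTO ipam_subnets VALUES ('ipam-1', 'subnet-1')")
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)
```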
>
> As a result, the patch has changed shape and grown larger. I would like to
> split it to make it simpler to review, but rather than doing so of my own
> accord I would like to ask reviewers how they would prefer it split. At
> the end of the day, I am not the one who has to spend a lot of time
> reviewing that code.
>
> Nevertheless, I think I have realised our refactoring approach is somewhat
> flawed: it might work for greenfield deployments, but I do not think it
> will work for brownfield deployments. Also, switching between drivers is
> pretty much impossible (but we were already aware of that and agreed to
> address it in the future).
> The decorator approach currently taken in [3] allows either using the
> driver for everything or not at all - which means that if an operator
> decides to switch to the IPAM driver, all allocation data for existing
> subnets would simply be lost, unless we provide a solution for migrating
> the data. That is feasible and rather trivial, but it implies an
> additional management step.
>
> An alternative solution would be to introduce a concept of "allocator" for
> subnets. This information should be stored in the database. It could point
> to an IPAM driver or to nothing; in the latter case it would simply
> instruct Neutron to use the "usual" baked-in IPAM code. This would allow
> us to make the driver work on existing deployments, and pave the way for
> multiple drivers. Indeed, in this way the new IPAM driver would be used
> for subnets created after enabling it, whereas existing subnets would keep
> using the existing logic. It should also make it safe for operators to
> revert the decision to use the IPAM driver. Additionally, administrative
> APIs might be provided to migrate existing subnets to/from the driver.
> However, when adopting a solution like this, it is important to take extra
> care that the database does not start depending on the current
> configuration; we therefore need a way to decouple the allocator stored in
> the database from the actual IPAM driver, which is merely one realization
> of it.
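[The per-subnet allocator idea might look roughly like this; the class names and the `allocator` attribute are hypothetical, not a proposed interface.]

```python
class Subnet:
    """A subnet record carrying its allocator, as stored in the DB."""

    def __init__(self, subnet_id, allocator=None):
        self.id = subnet_id
        # Persisted with the subnet: None means the baked-in IPAM logic,
        # otherwise a symbolic allocator name that is resolved to a
        # driver at runtime (keeping the stored value decoupled from
        # any concrete driver class).
        self.allocator = allocator


def allocate_ip(subnet, drivers):
    """Dispatch per subnet instead of per deployment."""
    if subnet.allocator is None:
        return ("baked-in", subnet.id)
    driver = drivers[subnet.allocator]
    return driver.allocate(subnet)


class ReferenceDriver:
    def allocate(self, subnet):
        return ("reference-driver", subnet.id)


drivers = {"reference": ReferenceDriver()}
old = Subnet("pre-existing")                       # keeps baked-in logic
new = Subnet("created-later", allocator="reference")
print(allocate_ip(old, drivers))
print(allocate_ip(new, drivers))
```

Existing subnets keep working untouched, new subnets use the driver, and reverting the configuration only affects subnets created afterwards.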
>
> Any feedback is very appreciated as usual.
>
> Salvatore
>
> [1] https://review.openstack.org/#/c/150485/
> [2] https://review.openstack.org/#/c/150485/6/neutron/ipam/driver.py
> [3] https://review.openstack.org/#/c/153236/15/neutron/db/db_base_plugin_v2.py
>
