[openstack-dev] [Neutron] IPAM reference driver status and other stuff
sorlando at nicira.com
Mon Mar 16 00:40:28 UTC 2015
Thanks for the update Carl!
I spent some time trying to make the refactored base plugin class work with
the IPAM driver. I have been partially successful. The results so far are in  (it's a
patch that applies on top of Pavel's topmost patch); this patch will
soon be split into smaller patches which I will push to gerrit reviews.
I have some additional comments inline, but I would like to raise some
technical points which might need attention:
* the ML2 plugin overrides several methods from the base "db" class. From
what I gather from unit test results, we have not yet refactored it. I
think that to provide users something usable in Kilo we should ensure the
ML2 plugin at least works with the IPAM driver.
* the current refactoring has ipam-driver-enabled and
non-ipam-driver-enabled versions of some API operations. While this is the
least ugly way to introduce the driver while keeping the old logic, it adds
quite a bit of code duplication. I wonder if there is any effort we can
make, without too much yak shaving, to reduce that code duplication,
because in these conditions I suspect it would be a hard sell to
the Neutron core team.
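One way to cut that duplication - purely a sketch with invented names, not
the actual Neutron classes - would be to hide both code paths behind a
single backend interface, so each plugin method is written once:

```python
# Hypothetical sketch (invented names, not the actual Neutron code): hide
# both code paths behind one interface so each API operation is written
# once instead of in an ipam and a non-ipam variant.

class NonPluggableBackend:
    """Adapter over the existing baked-in allocation logic."""

    def allocate_ip(self, context, subnet_id):
        # ...the existing db base plugin logic would live here...
        return "10.0.0.3"


class PluggableBackend:
    """Adapter that delegates to a configured IPAM driver."""

    def __init__(self, driver):
        self._driver = driver

    def allocate_ip(self, context, subnet_id):
        return self._driver.get_subnet(subnet_id).allocate()


def get_backend(driver=None):
    # single decision point, instead of an if/else in every operation
    return PluggableBackend(driver) if driver else NonPluggableBackend()


def create_port_ip(context, subnet_id, backend):
    # plugin code is written once, against the common interface
    return backend.allocate_ip(context, subnet_id)
```

The decision between old and new logic is then taken in one place rather
than being repeated in every API operation.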
Have a nice one,
On 14 March 2015 at 17:52, Carl Baldwin <carl at ecbaldwin.net> wrote:
> Here is an update from our discussion this morning in IRC. The
> discussion involved mainly Pavel, Salvatore, and me.
> We first discussed testing the integration of Salvatore's work on
> the reference driver with Pavel's work to load a driver and
> refactor the db_base_plugin. Pavel will continue the integration
> efforts which he has already begun. The final patch makes the
> reference driver the default IPAM in Neutron so that it is being run
> in the check queue. At some point before this patch merges, that will
> be moved to a new patch so that merging this patch will not make the
> new IPAM driver the default in Neutron.
From a unit test perspective, it is important that we test the code both
with and without the IPAM driver, as we should strive to be as close as
possible to 100% code coverage - in particular since the refactoring, in
its current form, adds quite a bit of code to neutron.db.db_base_plugin_v2
(possibly with a non-negligible amount of duplicated logic).
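For instance - this is purely illustrative, not the actual test layout -
the same test bodies could run twice via a mixin, once per configuration,
instead of being duplicated:

```python
import unittest

# Illustrative sketch: run the same tests once per configuration instead
# of duplicating them. "use_ipam_driver" stands in for whatever config
# knob the refactoring ends up exposing.

class IpAllocationTestMixin:
    use_ipam_driver = False  # overridden by concrete subclasses

    def test_allocation_smoke(self):
        # In the real suite this would create a subnet and allocate an IP
        # through the plugin; here we only record which path was exercised.
        path = "driver" if self.use_ipam_driver else "builtin"
        self.assertIn(path, ("driver", "builtin"))


class TestBuiltinIpam(IpAllocationTestMixin, unittest.TestCase):
    use_ipam_driver = False


class TestDriverIpam(IpAllocationTestMixin, unittest.TestCase):
    use_ipam_driver = True
```

Both classes pick up every test from the mixin, so new coverage is written
once and exercised on both code paths.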
> Pavel's final patch will merge when the tests pass with the driver
> as default.
In order to avoid raising some eyebrows, it might be worth mentioning that
I think you meant to write that the patch will be *ready to merge* when the
tests pass with the IPAM driver as default!
> We hope to hit this point soon. At that point, we will
> create a non-voting job to run the new IPAM driver as default on all
> new patches. This will give us many more test runs to work out bugs
> for the Kilo release and will help us to gauge the eventual readiness
> of the new driver to become the default IPAM in Neutron. This latter
> step will not happen until the Liberty time frame.
The goal for Liberty should be to deprecate the baked-in IPAM logic, aiming
at its removal in the M release.
Non-voting jobs are useful, but mostly to the people interested in them.
Other contributors tend to ignore whatever does not cause them a build
failure.
So - in my opinion - the idea should be that as soon as we're confident the
IPAM driver is reliable enough, we'll switch it on either on the mysql or
the postgres job.
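For what it's worth, flipping the driver on in one of those jobs should then
be a one-line change in the job's generated neutron.conf - assuming the
refactoring exposes a config option along these lines (the option name and
value here are purely illustrative):

```ini
# neutron.conf for the chosen gate job; option name and value invented for
# illustration - use whatever knob the refactoring actually exposes.
[DEFAULT]
ipam_driver = reference
```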
> In Kilo, the new IPAM driver will only support greenfield deployments
> and will not support using multiple drivers simultaneously. There
> will be no migration from the old to the new until Liberty.
I will leave this decision to the rest of the community, but I believe that
a data migration script, similar to the one that was done for ML2, should
be fairly easy to put together.
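As a sketch of what I mean - every table and column name below is invented
for illustration - such a script would mostly be a straight copy of each
subnet's allocation state into the new IPAM tables:

```python
# Illustrative sketch of the kind of data migration meant here; table and
# column names are invented. It copies each subnet's allocation pools from
# the existing tables into rows for the new IPAM tables.

def migrate_allocation_pools(old_pool_rows, ipam_subnet_ids):
    """Build the new IPAM pool rows from the existing ones.

    old_pool_rows: rows shaped like {"subnet_id", "first_ip", "last_ip"}.
    ipam_subnet_ids: maps a neutron subnet id to its new ipam subnet id.
    """
    new_rows = []
    for row in old_pool_rows:
        new_rows.append({
            "ipam_subnet_id": ipam_subnet_ids[row["subnet_id"]],
            "first_ip": row["first_ip"],
            "last_ip": row["last_ip"],
        })
    return new_rows
```

The availability ranges would be rebuilt the same way, which is why the
step is mechanical even if it is an extra management operation.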
> a migration plan is required in order to eventually deprecate and
> remove the baked-in IPAM solution. We will continue to evaluate use
> cases around using multiple drivers in the same deployment (e.g.
> ability to set the driver per subnet or support some sort of flavors).
> At the point that the non-voting check job is stable, it will be moved
> to voting. When the new driver graduates to be the default driver in
> Neutron, the separate gate check job will no longer be necessary and
> will be removed.
> Please let me know if I left out any significant detail.
> at 2015-03-13T15:08:15
>  https://review.openstack.org/#/c/150485/
>  https://review.openstack.org/#/c/147479/
>  https://review.openstack.org/#/c/153236/
> On Mon, Mar 9, 2015 at 4:44 AM, Salvatore Orlando <sorlando at nicira.com>
> > Aloha everybody!
> > This is another long email, so here's the summary for people with a
> > short attention span:
> > The IPAM reference driver is almost ready. Unfortunately letting the
> > driver do allocation pools required a few major changes, so the latest
> > patchset is rather big. I would like reviewers to advise on how they
> > would prefer to have it split into multiple patches.
> > I also had to add an additional method to the subnet interface.
> > However, I am not sure we are doing the right thing wrt the introduction
> > of the driver. I think we might need to think about it more thoroughly,
> > and possibly introduce a concept of "allocator". Otherwise, if we switch
> > an existing deployment to the driver, it's very likely things will just
> > break.
> > So here's the detailed part.
> > There are only a few bits needed to complete the IPAM reference driver:
> > - A thorough unit test case for the newly introduced "subnet manager",
> > which provides db apis.
> > - Some more unit tests for driver behaviour
> > - Logic for handling allocation pools update (which pretty much needs to
> > be copied over from the db base plugin with minor changes)
> > - Rework of the synchronisation between the ipam object and the neutron
> > db object. The current one is very error prone.
> > While dealing with management of allocation pools, I found out that it's
> > convenient for the IPAM driver to rely on Neutron's DB allocation pools.
> > This is a bit of a chicken-and-egg problem, since:
> > A) Neutron needs to call the driver to define allocation pools
> > B) The driver needs Neutron to define allocation pools before creating
> > availability ranges
> > C) Neutron cannot create allocation pools if the driver has not defined
> > them
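To make the circularity concrete, here is roughly the flow I am describing
(every method name below is invented for illustration, not the actual
interface):

```python
# Illustrative pseudo-flow of the circular dependency; none of these
# method names are the actual Neutron/IPAM interface.

def create_subnet(plugin, driver, subnet_request):
    # (A) Neutron calls the driver, which is where allocation pools are
    #     supposed to be defined...
    ipam_subnet = driver.allocate_subnet(subnet_request)
    # (B) ...but the driver wants allocation pools to exist before it can
    #     build its availability ranges...
    driver.build_availability_ranges(ipam_subnet)
    # (C) ...while Neutron cannot persist allocation pools until the
    #     driver has defined them in step (A).
    plugin.save_allocation_pools(ipam_subnet.pools)
```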
> > I therefore decided, reluctantly, to add more data duplication - new
> > IPAM tables for allocation pools and availability ranges. The latter is
> > meant to replace Neutron's one when the IPAM driver is enabled, whereas
> > the former is pure data duplication. There is also an association table
> > between neutron subnets and ipam objects for the same subnet; I decided
> > to do so to avoid further duplication.
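In plain terms, the association amounts to something like the sketch below
(all names invented): its only job is to tie each neutron subnet to the
driver's subnet object, so the IPAM tables never copy the rest of the
subnet's attributes.

```python
# Minimal sketch of the subnet association (names invented): a one-to-one
# mapping from neutron subnet to ipam subnet, standing in for a db table
# with a unique constraint on the neutron subnet id.

class IpamSubnetAssociations:
    def __init__(self):
        self._by_neutron_id = {}

    def associate(self, neutron_subnet_id, ipam_subnet_id):
        # behaves like a unique constraint on neutron_subnet_id
        if neutron_subnet_id in self._by_neutron_id:
            raise ValueError("subnet already has an ipam association")
        self._by_neutron_id[neutron_subnet_id] = ipam_subnet_id

    def ipam_subnet_for(self, neutron_subnet_id):
        return self._by_neutron_id[neutron_subnet_id]
```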
> > I dislike this data separation; on the other hand the only viable
> > alternative would be to give the IPAM driver full control over Neutron's
> > database tables for IP allocation and subnet allocation pools. While
> > this is feasible, I think it would be even worse to give code which can
> > be 3rd party control over data structures used by the Neutron API.
> > As a result, the patch is different and larger. I would like to split
> > it to make it simpler to review, but rather than just doing that on my
> > own initiative, I would like to ask reviewers how they would prefer to
> > have it split. At the end of the day I'm not the one who has to spend a
> > lot of time reviewing the code.
> > Nevertheless, I think I've realised our refactoring approach is kind of
> > flawed, since it might work for greenfield deployments, but I don't
> > think it will work for brownfield deployments. Also, switching between
> > drivers is pretty much impossible (but we were already aware of that
> > and we agreed to address this in the future).
> > The decorator approach currently taken in  allows either using the
> > driver or not - which means that if an operator decides to switch to
> > the IPAM driver, then all allocation data for existing subnets would be
> > lost, unless we provide a solution for migrating data. This is feasible
> > and rather trivial, but implies an additional management step.
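For reference, the decorator approach boils down to something like this
sketch (names invented; the real patch differs): the switch is global and
driven only by the current configuration, which is exactly why existing
subnets lose their allocation state when it is flipped.

```python
import functools

# Sketch of the all-or-nothing decorator (names invented). Every call
# site picks the pluggable or built-in path purely from configuration.
CONF = {"use_ipam_driver": False}

def ipam_pluggable(pluggable_impl):
    """Route calls to pluggable_impl when the IPAM driver is enabled."""
    def decorator(builtin_impl):
        @functools.wraps(builtin_impl)
        def wrapper(*args, **kwargs):
            if CONF["use_ipam_driver"]:
                return pluggable_impl(*args, **kwargs)
            return builtin_impl(*args, **kwargs)
        return wrapper
    return decorator

def _allocate_ip_with_driver(subnet_id):
    return ("driver", subnet_id)

@ipam_pluggable(_allocate_ip_with_driver)
def allocate_ip(subnet_id):
    return ("builtin", subnet_id)
```

Flipping CONF["use_ipam_driver"] reroutes every subnet at once, including
those whose allocation state lives only in the old tables - hence the
data-loss concern.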
> > An alternative solution would be to introduce a concept of "allocator"
> > for subnets. Such information should be stored in the database. It
> > could point to an IPAM driver or to nothing. In the latter case it
> > would simply fall back to the "usual" baked-in IPAM code. This would
> > allow us to make the driver work on existing deployments, and pave the
> > way for multiple drivers.
> > Indeed, in this way the new IPAM driver will be used for subnets
> > created after enabling it, whereas existing subnets will keep using the
> > existing logic. This should also make cases in which operators revert
> > the decision of using the IPAM driver safe. Additionally,
> > administrative APIs could be provided to migrate existing subnets
> > to/from the driver. However, when adopting solutions like this, it is
> > important to pay extra care in ensuring that the database does not
> > start relying on the current configuration; therefore we need to find a
> > way to decouple the allocator from the actual IPAM driver, which should
> > represent its realization.
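A sketch of what I have in mind (all names invented): the allocator name is
persisted with the subnet at creation time and resolved at use time, so
changing the configuration only affects newly created subnets and reverting
it stays safe.

```python
# Sketch of per-subnet allocators (names invented). The subnet row stores
# the allocator name, not the driver class, which keeps the database
# decoupled from whatever driver currently realizes that allocator.

ALLOCATORS = {}  # allocator name -> factory

def register_allocator(name, factory):
    ALLOCATORS[name] = factory

def create_subnet(db, subnet_id, configured_allocator):
    # persist which allocator owns this subnet at creation time
    db[subnet_id] = {"allocator": configured_allocator}

def get_allocator(db, subnet_id):
    # resolve from the stored name, never from the current config
    return ALLOCATORS[db[subnet_id]["allocator"]]()
```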
> > Any feedback is very appreciated as usual.
> > Salvatore
> >  https://review.openstack.org/#/c/150485/
> >  https://review.openstack.org/#/c/150485/6/neutron/ipam/driver.py
> > 
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe:
> OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev