[openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

Pavel Bondar pbondar at infoblox.com
Thu Feb 4 14:23:50 UTC 2016


I am trying to bring more attention to [1] so we can make a final
decision on the approach to use.
There are a few points that are not 100% clear to me at this point.

1) Do we plan to switch all current clouds to the pluggable ipam
implementation in Mitaka?

yes -->
Then the data migration can be done as an alembic migration, which is
what is currently implemented in [2] PS54.
In this case, during the upgrade from Liberty to Mitaka, all users are
unconditionally switched from the built-in ipam implementation to the
reference ipam driver.
If an operator wants to continue using the built-in ipam implementation,
they can manually turn off ipam_driver in neutron.conf immediately after
the upgrade (data is not deleted from the old tables).
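For reference, the opt-out would look roughly like this in neutron.conf
(a sketch, assuming the reference pluggable driver keeps the "internal"
alias and that an empty ipam_driver falls back to the built-in
implementation):

```ini
[DEFAULT]
# Leave ipam_driver empty/unset to keep using the built-in
# (non-pluggable) IPAM implementation after the upgrade.
# Set it to "internal" to opt back in to the reference pluggable driver.
ipam_driver =
```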

no -->
The operator is free to choose whether and when to switch to pluggable
ipam, so there is no automatic data migration.
In this case the operator is supplied with a script for migrating to
pluggable ipam (and probably from pluggable ipam as well), which can be
executed during the upgrade or at any point after the upgrade is done.
I was testing this approach in [2] PS53 (it still has unresolved issues).
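To make the script option concrete, here is a minimal sketch of the core
data transformation such a migration script would perform: copying the
built-in IPAM rows into the tables used by the reference pluggable
driver. All table/column names below are assumptions for illustration,
not the actual Neutron schema:

```python
# Sketch of the data copy a built-in -> pluggable ipam migration script
# would perform. Row shapes and key names are hypothetical.
import uuid


def build_ipam_rows(subnets, allocation_pools):
    """Map built-in subnet/pool rows to pluggable-ipam rows.

    subnets: list of {"id": ...} dicts (neutron subnets)
    allocation_pools: list of
        {"subnet_id": ..., "first_ip": ..., "last_ip": ...}
    Returns (ipam_subnet_rows, ipam_pool_rows).
    """
    ipam_subnets = []
    subnet_to_ipam = {}
    for s in subnets:
        ipam_id = str(uuid.uuid4())
        subnet_to_ipam[s["id"]] = ipam_id
        # Each pluggable-ipam subnet row keeps a back-reference to its
        # neutron subnet, so the old and new tables stay correlated.
        ipam_subnets.append({"id": ipam_id,
                             "neutron_subnet_id": s["id"]})
    ipam_pools = []
    for p in allocation_pools:
        ipam_pools.append({
            "id": str(uuid.uuid4()),
            "ipam_subnet_id": subnet_to_ipam[p["subnet_id"]],
            "first_ip": p["first_ip"],
            "last_ip": p["last_ip"],
        })
    return ipam_subnets, ipam_pools
```

Because the old tables are left untouched, the reverse (pluggable to
built-in) direction mostly means deleting the copied rows again.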

Or we could do both, i.e. migrate data from the built-in to the
pluggable ipam implementation during the upgrade, and also supply the
operator with scripts to migrate from/to pluggable ipam at any time
after the upgrade.

According to the current feedback in [1], we will most likely go with
the script approach, so I would like to confirm that this is the case.

2) Do we plan to make the pluggable ipam implementation the default in
Mitaka for greenfield deployments?

If the answer to this question is the same as to the previous one
(yes/yes or no/no), then it doesn't introduce additional issues.
But if the answers differ, things might get more complicated.
For example, greyfield (existing) deployments might be migrated to
pluggable ipam manually by the operator, or continue to work using the
built-in implementation after the upgrade to Mitaka,
while greenfield deployments might be set to the pluggable ipam
implementation by default.

Is that what we are going to support?

3) How should the script approach be tested?

Currently, if the pluggable implementation is set as the default, the
grenade test fails.
Data has to be migrated automatically during the upgrade to make grenade
pass.
In [2] PS53 I was using an alembic migration that internally just calls
the external migrate script.
Is that a valid approach? I expect a better way to test script execution
during the upgrade might exist.
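The wrap-the-script-in-a-revision idea might look roughly like the
following (a sketch; the revision identifiers, module path, and helper
name are assumptions, not what the patch set actually contains):

```python
# Hypothetical alembic revision that delegates the data copy to the
# standalone migration script, so grenade exercises the same code path
# an operator would run by hand. All names here are illustrative.
import subprocess
import sys

# revision identifiers, used by alembic (placeholder values)
revision = 'migrate_to_pluggable_ipam'
down_revision = None


def run_migrate_script(argv):
    """Invoke the external migration script as a subprocess; a non-zero
    exit raises CalledProcessError and fails the alembic upgrade."""
    subprocess.check_call(argv)


def upgrade():
    # Delegate to the external script instead of duplicating its logic
    # inside the migration itself (hypothetical module path).
    run_migrate_script([sys.executable, '-m',
                        'neutron.ipam.migrate_to_pluggable'])
```

The downside is that a data migration then runs inside schema-migration
machinery, which is presumably why a cleaner grenade hook is worth
looking for.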

[1] https://bugs.launchpad.net/neutron/+bug/1516156
[2] https://review.openstack.org/#/c/181023
