[openstack-dev] [neutron] [ipam] Migration to pluggable IPAM

John Belamaric jbelamaric at infoblox.com
Thu Feb 4 16:22:18 UTC 2016


> On Feb 4, 2016, at 11:09 AM, Carl Baldwin <carl at ecbaldwin.net> wrote:
> 
> On Thu, Feb 4, 2016 at 7:23 AM, Pavel Bondar <pbondar at infoblox.com> wrote:
>> I am trying to bring more attention to [1] so we can make a final
>> decision on which approach to use.
>> There are a few points that are not 100% clear to me at this point.
>> 
>> 1) Do we plan to switch all current clouds to the pluggable ipam
>> implementation in Mitaka?
> 
> I think our plan originally was only to deprecate the non-pluggable
> implementation in Mitaka and remove it in Newton.  However, this is
> worth some more consideration.  The pluggable version of the reference
> implementation should, in theory, be at parity with the current
> non-pluggable implementation.  We've tested it before and shown
> parity.  What we're missing is regular testing in the gate to ensure
> it continues this way.
> 

Yes, it certainly should be at parity, and gate testing to ensure it
stays that way would be best.
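
For the gate, the switch itself is just a configuration change. A
minimal sketch of what a job variant would set in neutron.conf (the
driver alias below is an assumption; check the entry point name in
setup.cfg):

    [DEFAULT]
    # Hypothetical gate-job override: exercise the pluggable reference
    # driver instead of the built-in implementation.
    ipam_driver = internal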

>> yes -->
>> Then the data migration can be done as an alembic migration, which is
>> what is currently implemented in [2] PS54.
>> In this case, during the upgrade from Liberty to Mitaka, all users
>> are unconditionally switched from the built-in ipam implementation to
>> the reference ipam driver.
>> If an operator wants to continue using the built-in ipam
>> implementation, they can manually turn off ipam_driver in
>> neutron.conf immediately after the upgrade (data is not deleted from
>> the old tables).
> 
> This has a certain appeal to it.  I think the migration will be
> straightforward since the table structure doesn't really change much.
> Doing this as an alembic migration would be the easiest from an
> upgrade point of view because it fits seamlessly into our current
> upgrade strategy.
> 
> If we go this way, we should get this in soon so that we can get the
> gate and others running with this code for the remainder of the cycle.
> 

If we do this, and the operator reverts to the non-pluggable version,
then we will leave stale records in the new IPAM tables. At the very
least, we would need a way to clean those up and to migrate again at a
later time.
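
For example (purely a sketch: the table names and the helper below are
assumptions based on the IPAM models in the tree, not a reviewed
script):

    # Sketch of a cleanup helper for the stale pluggable-IPAM rows left
    # behind after reverting to the non-pluggable implementation.
    # Table names are assumptions; adjust to the actual models.
    import sqlalchemy as sa

    def purge_pluggable_ipam_tables(engine):
        meta = sa.MetaData()
        meta.reflect(bind=engine)
        # Delete child tables first so foreign keys are respected.
        for name in ('ipamavailabilityranges', 'ipamallocations',
                     'ipamallocationpools', 'ipamsubnets'):
            if name in meta.tables:
                engine.execute(meta.tables[name].delete())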

>> no -->
>> The operator is free to choose whether and when to switch to the
>> pluggable ipam implementation, which means no automatic data
>> migration.
>> In this case the operator is supplied with a script for migrating to
>> pluggable ipam (and probably back from pluggable ipam), which can be
>> executed during the upgrade or at any point after the upgrade is
>> done.
>> I was testing this approach in [2] PS53 (it still has unresolved
>> issues).
> 
> If there is some risk in changing over, then this should still be
> considered.  But, the more I think about it, the more I think that we
> should just make the switch seamlessly for the operator and be done
> with it.  This approach puts a certain burden on the operator to
> choose when to do the migration and go through the steps manually to
> do it.  And, since our intention is to deprecate and remove the
> non-pluggable implementation, it is inevitable that they will have to
> eventually switch anyway.
> 
> This also makes testing much more difficult.  If we go this route, we
> really should be testing both equally.  Does this mean that we need to
> set up a whole new job to run the pluggable implementation alongside
> the old implementation?  This kind of feels like a nightmare to me.
> What do you think?
> 

Originally (as I mentioned in the meeting), I was thinking that we
should not migrate automatically. However, I see the appeal of your
arguments: seamless is best, of course. But if we offer a way back to
the non-pluggable implementation (which I think we need to offer at
this point in the Mitaka cycle), we probably need to provide a script
as mentioned above. That seems feasible, though.
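
For what it's worth, the forward direction looks roughly like this (a
sketch only: the function and the table/column names are assumptions
from the current models, and allocation pools / availability ranges
are omitted for brevity):

    # Sketch: seed the pluggable-IPAM tables from the built-in ones.
    import uuid
    import sqlalchemy as sa

    def migrate_to_pluggable_ipam(engine):
        meta = sa.MetaData()
        meta.reflect(bind=engine)
        subnets = meta.tables['subnets']
        ipallocations = meta.tables['ipallocations']
        ipamsubnets = meta.tables['ipamsubnets']
        ipamallocations = meta.tables['ipamallocations']

        with engine.begin() as conn:
            ipam_ids = {}
            # One IPAM subnet row per existing neutron subnet.
            for row in conn.execute(sa.select([subnets.c.id])):
                ipam_ids[row.id] = str(uuid.uuid4())
                conn.execute(ipamsubnets.insert().values(
                    id=ipam_ids[row.id], neutron_subnet_id=row.id))
            # Mirror every existing IP allocation.
            for row in conn.execute(sa.select(
                    [ipallocations.c.ip_address,
                     ipallocations.c.subnet_id])):
                conn.execute(ipamallocations.insert().values(
                    ip_address=row.ip_address,
                    ipam_subnet_id=ipam_ids[row.subnet_id],
                    status='ALLOCATED'))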




