[openstack-dev] [nova] Required data migrations in Kilo, need Turbo Hipster tests updated

Dan Smith dms at danplanet.com
Wed Apr 22 15:31:41 UTC 2015


> Sure, but for people doing continuous deployment, they clearly haven't
> run migrate_flavor_data (or if they have, they haven't filed any bugs
> about it not working[0]).

Hence the usefulness of T-H here, right? The point of the migration
check is to make sure that people _do_ run it before they leave kilo.
Right now, they have nothing other than the big note in the release
notes about doing it. Evidence seems to show that they're not seeing
it, which is exactly why we need the check :)

> I also found what I believe to be a bug with the flavor migration
> code. I started working on a fix, but my limited knowledge of nova's
> objects has hindered me. Any thoughts on the legitimacy of the bug
> would be helpful though: https://bugs.launchpad.net/nova/+bug/1447132
> Basically, for some of the datasets that turbo-hipster uses there are
> no entries in the new instance_extra table, which stops any flavor
> migration from actually running. Then in your change (174480) you
> check the metadata table instead of the extras table, causing the
> migration to fail even though migrate_flavor_data thinks there is
> nothing to do.

I don't think this has anything to do with the objects; it's probably
just a result of my lack of sqlalchemy-fu. Sounds like you weren't
close to a fix, so I'll try to cook something up.
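To make the mismatch concrete, here's a rough sketch of the two
different questions being asked (this is not nova's actual code; the
table and column names follow the bug description, the connection URL
is made up, and soft-deleted rows are ignored):

# Rough sketch only: counts what migrate_flavor_data would see vs. what
# a blocker check keyed on system_metadata would see.
from sqlalchemy import MetaData, create_engine, func, select

engine = create_engine("mysql+pymysql://nova:secret@localhost/nova")  # made up
meta = MetaData()
meta.reflect(bind=engine,
             only=["instance_extra", "instance_system_metadata"])
extra = meta.tables["instance_extra"]
sysmeta = meta.tables["instance_system_metadata"]

with engine.connect() as conn:
    # Roughly what migrate_flavor_data walks: instance_extra rows whose
    # serialized flavor hasn't been filled in yet. An instance with no
    # instance_extra row at all never shows up here.
    todo = conn.execute(
        select(func.count())
        .select_from(extra)
        .where(extra.c.flavor.is_(None))
    ).scalar()

    # Roughly what a check against the metadata table sees: any leftover
    # instance_type_* keys, including ones belonging to instances that
    # have no instance_extra row at all.
    leftovers = conn.execute(
        select(func.count())
        .select_from(sysmeta)
        .where(sysmeta.c.key.like("instance_type_%"))
    ).scalar()

    print("migrate_flavor_data sees %d rows to fill in; the metadata "
          "check sees %d leftover instance_type_* keys" % (todo, leftovers))

If the first count is zero but the second isn't, you get exactly the
situation described in the bug: migrate_flavor_data thinks there is
nothing left to do while the check against the metadata table still
fails.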

So, a question here: These data sets were captured at some point in
time, right? Does that mean that they were from, say, the Icehouse era
and have had nothing done to them since? Meaning, are there data sets
that actually had juno or kilo running on them? This instance_extra
thing would be the case for any instance that hasn't been touched in a
long time, so it's legit. However, as we move to more online migration
of data, I do wonder if taking an ancient dataset, doing schema
migrations forward to current, and then expecting it to work is really
reflective of reality.

Can you shed some light on what is really going on?

> I think it's worth noting that your change (174480) will require
> operators (particularly continuous deployers) to run
> migrate_flavor_data, and given the difficulties I've found I'm not
> sure it's ready to be run.

Right, that's the point.

> If we encounter bugs against real datasets with migrate_flavor_data,
> then deployers will be unable to upgrade until migrate_flavor_data is
> fixed. This may make things awkward if a deployer updates their
> codebase and can't run the migrations. Clearly they'll have to roll
> back the changes. This is the scenario turbo-hipster is meant to
> catch: migrations that don't work on real datasets.

Right, that's why we're holding until we make sure that it works.

> If migrate_flavor_data is broken, a backport into Kilo will be needed
> so that, if Liberty requires all the flavor migrations to be
> finished, they can indeed be run before upgrading to Liberty. This
> may give reason not to block on having the flavors migrated until the
> M release, but I realise that has other undesirable consequences
> (i.e. high code maintenance).

Backports to fix this are fine IMHO, just like for any other bug found
in actual running of things. It's too bad that none of our continuous
deployment people seem to have found this yet, but that's not an
uncommon occurrence. Obviously, if we hit something that makes it too
painful to get right in kilo, then we can punt for another cycle.

Thanks!

--Dan
