[Openstack-operators] [all] SQL Schema Downgrades: A Good Idea?
Tim.Bell at cern.ch
Fri Jan 30 19:37:20 UTC 2015
> -----Original Message-----
> From: Jay Pipes [mailto:jaypipes at gmail.com]
> Sent: 30 January 2015 20:15
> To: openstack-operators at lists.openstack.org
> Subject: Re: [Openstack-operators] [all] SQL Schema Downgrades: A Good Idea?
> Great topic, Morgan. Comments inline.
> On 01/29/2015 11:26 AM, Morgan Fainberg wrote:
> > From an operator perspective I wanted to get input on the SQL Schema
> > Downgrades.
> > Today most projects (all?) provide a way to downgrade the SQL Schemas
> > after you've upgraded. Example would be moving from Juno to Kilo and
> > then back to Juno. There are some odd concepts when handling a SQL
> > migration downgrade specifically around the state of the data. A
> > downgrade, in many cases, causes permanent and irrevocable data loss.
> > When phrased like that (and dusting off my deployer/operator hat) I
> > would be hesitant to run a downgrade in any production, staging, or
> > even QA environment.
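[Editor's note: a minimal sketch of the data-loss point above. This is not an actual OpenStack migration; SQLite via Python's stdlib stands in for the real database, and the table/column names are invented for illustration.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, name TEXT)")

def upgrade(conn):
    # The new release adds a column and the new code starts writing to it.
    conn.execute("ALTER TABLE instance ADD COLUMN az TEXT")

def downgrade(conn):
    # Reverting the schema must drop the column -- every value written
    # into it since the upgrade is discarded permanently.
    conn.execute("CREATE TABLE instance_old (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO instance_old (id, name) SELECT id, name FROM instance")
    conn.execute("DROP TABLE instance")
    conn.execute("ALTER TABLE instance_old RENAME TO instance")

upgrade(conn)
conn.execute("INSERT INTO instance (id, name, az) VALUES (1, 'vm1', 'nova-az1')")
downgrade(conn)

cols = [row[1] for row in conn.execute("PRAGMA table_info(instance)")]
print(cols)  # the 'az' column -- and everything written to it -- is gone
```

No dump, no log, nothing recoverable: re-running upgrade() afterwards brings the column back empty.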
> > In light of what a downgrade actually means I would like to get the
> > views of the operators on SQL Migration Downgrades:
> > 1) Would you actually perform a programmatic downgrade via the CLI
> > tools or would you just do a restore-to-last-known-good-before-upgrade (e.g.
> > from a DB dump)?
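[Editor's note: a sketch of the restore-to-last-known-good option mentioned in question 1, assuming MySQL. Only the command construction is shown; actually running these assumes a reachable server and credentials in ~/.my.cnf. The paths and database name are placeholders.]

```python
import subprocess

def backup_cmd(database, dump_file):
    # --single-transaction takes a consistent InnoDB snapshot without
    # locking tables for the duration of the dump.
    return ["mysqldump", "--single-transaction", database,
            "--result-file", dump_file]

def restore_cmd(database, dump_file):
    # Replaying the dump returns the schema *and* the data to the exact
    # pre-upgrade state -- no downgrade migration involved.
    return ["mysql", database, "-e", "source {}".format(dump_file)]

if __name__ == "__main__":
    # e.g. subprocess.run(backup_cmd("nova", "/backup/nova-pre-kilo.sql"))
    print(backup_cmd("nova", "/backup/nova-pre-kilo.sql"))
```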
> I would never, ever perform a programmatic downgrade operation on a production database.
> Some operations people seem to believe that reversing deployment changes is
> both always possible and always the safest possible route. This is not the case.
> Specifically for database migrations, I always recommend that running database
> schema migrations be:
> * a forward-only process
> * pre-tested against a copy of the production database
> * if anything at all goes wrong with the upgrade schema migrations, simply
> restore from a backup taken immediately before upgrades are done.
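[Editor's note: the three recommendations above can be sketched as one forward-only procedure. SQLite files stand in for production here, and migrate() is a placeholder for a project's real migration entry point (e.g. nova-manage db sync); none of this is Jay's actual tooling.]

```python
import os
import shutil
import sqlite3
import tempfile

def migrate(db_path):
    # Placeholder forward-only migration: add a column, nothing else.
    conn = sqlite3.connect(db_path)
    conn.execute("ALTER TABLE instance ADD COLUMN az TEXT")
    conn.commit()
    conn.close()

def rehearse_then_upgrade(prod_db, scratch_db):
    # 1. Pre-test: rehearse the migration against a throwaway copy of
    #    production; a broken migration raises here, touching nothing real.
    shutil.copyfile(prod_db, scratch_db)
    migrate(scratch_db)
    # 2. Take the restore point immediately before the real upgrade.
    shutil.copyfile(prod_db, prod_db + ".pre-upgrade")
    # 3. Forward-only: run against production. If anything goes wrong,
    #    copy the .pre-upgrade file back instead of ever downgrading.
    migrate(prod_db)

# Demo setup with a toy "production" database.
tmp = tempfile.mkdtemp()
prod = os.path.join(tmp, "prod.db")
conn = sqlite3.connect(prod)
conn.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, name TEXT)")
conn.commit()
conn.close()

rehearse_then_upgrade(prod, os.path.join(tmp, "scratch.db"))
conn = sqlite3.connect(prod)
cols = [row[1] for row in conn.execute("PRAGMA table_info(instance)")]
conn.close()
print(cols)
```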
> > 2) Would you trust the data after a programmatic downgrade or would the
> > data only really be trustworthy if from a restore? Specifically the
> > new code *could* be relying on new data structures and a downgrade
> > could result in weird states of services.
> No. See above :)
> > I'm looking at the expectation that a downgrade is possible. Each time
> > I look at the downgrades I feel that it doesn't make sense to ever
> > really perform a downgrade outside of a development environment. The
> > potential for permanent loss of data / inconsistent data leads me to
> > believe the downgrade is a flawed design. Input from the operators on
> > real-world cases would be great to have.
> Schema downgrades are a horrible idea, should never have been added to our
> functionality, and should be gotten rid of immediately, IMO.
> > This is an operator specific set of questions related to a post I made
> > to the OpenStack development mailing list:
> > http://lists.openstack.org/pipermail/openstack-dev/2015-January/055586.html
> > Cheers,
> > Morgan
Personally, given the current state of things, I would also restore the databases rather than downgrade. This means stopping the APIs before the backup and upgrade.
There are often other parts of the upgrade to roll back too, which fall more into the packaging / configuration management domain: rolling back a particular RPM to a previous version, disabling a new daemon which was needed with the latest version, removing the new parameters from the configuration file, etc.
Looking forward, if we want online upgrades without API interruption, we do need to find a way to handle transactions which happen while the upgrade is running. A backup is good for rolling back, but restoring it would also mean cleaning up activities performed during the upgrade.
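[Editor's note: one commonly discussed answer to this online-upgrade concern is the "expand/contract" pattern: while old and new code run side by side, only additive, backward-compatible schema changes are applied, so in-flight transactions keep working and there is nothing to roll back mid-upgrade. A hypothetical sketch, again using SQLite with invented names.]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instance (id INTEGER PRIMARY KEY, name TEXT)")

# Expand phase: adding a nullable column is additive, so writes issued by
# the still-running old code are unaffected while the upgrade is live.
conn.execute("ALTER TABLE instance ADD COLUMN az TEXT")

# Old code, unaware of the new column, keeps inserting as before:
conn.execute("INSERT INTO instance (id, name) VALUES (1, 'old-vm')")
# New code is already writing the new column:
conn.execute("INSERT INTO instance (id, name, az) VALUES (2, 'new-vm', 'az1')")

rows = conn.execute("SELECT id, az FROM instance ORDER BY id").fetchall()
print(rows)  # old rows simply carry NULL until they are backfilled

# The "contract" step (dropping the old representation) only runs once
# every service is on the new code, so no mid-flight cleanup is needed.
```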
Any ideas on how the online upgrade teams are looking to address this? Not an easy problem to solve.