[openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects
clint at fewbar.com
Wed Aug 31 00:04:32 UTC 2016
Excerpts from Mike Bayer's message of 2016-08-30 18:15:14 -0400:
> On 08/30/2016 04:43 PM, Clint Byrum wrote:
> > Correct, it is harder for development. Since the database server has all
> > of the potential for the worst problems, being a stateful service, then
> > I believe moving complexity _out_ of it, is generally an operational
> > win, at the expense of some development effort. The development effort,
> > however, is mostly on the front of the pipeline where timelines can be
> > longer. Operations typically is operating under SLA's and with
> > requirements to move slowly in defense of peoples' data and performance
> > of the system. So I suggest that paying costs in dev, vs. at the
> > database is usually the highest value choice.
> > This is of course not the case if timelines are short for development as
> > well, but I can't really answer the question in that case. For OpenStack,
> > we nearly always find ourselves with more time to develop, than operators
> > do to operate.
> So the idea of triggers is hey, for easy things like column X is now
> column Y elsewhere, instead of complicating the code, use a trigger to
> maintain that value. Your argument against triggers is: "Triggers
> introduce emergent behaviors and complicate scaling and reasonable
> debugging in somewhat hidden ways that
> can frustrate even the most experienced DBA."
> I'd wager that triggers probably work a little more smoothly in modern
> MySQL/Postgresql than a more classical "DBA" platform like a crusty old
> MS SQL Server or Oracle, but more examples on these emergent behaviors
> would be useful, as well as evidence that they apply to current versions
> of database software that are in use within OpenStack, and are
> disruptive enough that even the most clear-cut case for triggers vs.
> in-application complexity should favor in-app complexity without question.
My direct experience with this was MySQL 5.0 and 5.1. Triggers worked as
documented, and no, I don't think they've changed much since then.
When they were actually installed in the schema, kept up to date with
the code that expected them, and known to whoever was debugging, things were fine.
However, every other imperative part of the code was managed with git,
package managers, Ansible, Puppet, or whatever else you choose to put
files on disk and restart daemons. Those tools all have obvious entry
points, too: X is where the WSGI server starts running code, Y is where
Flask hands off to the app, and so on. But triggers are special and go
into the database at whatever time they go in. This means you lose all
the benefit of the tools you're used to for debugging and operating on
imperative code.
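To make that "hidden entry point" concern concrete, here is a minimal
sketch using SQLite as a stand-in (the table and column names are
invented for the example). The mirroring behavior below lives only in
the schema, so nothing in git or in any deployment tool points a
debugger at it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE instances (id INTEGER PRIMARY KEY, host TEXT);
CREATE TABLE instances_v2 (id INTEGER PRIMARY KEY, hostname TEXT);

-- The "hidden" piece: it exists only in the database, not in any repo.
CREATE TRIGGER mirror_host AFTER INSERT ON instances
BEGIN
    INSERT INTO instances_v2 (id, hostname) VALUES (NEW.id, NEW.host);
END;
""")

# Old application code writes only to the old table...
conn.execute("INSERT INTO instances (id, host) VALUES (1, 'compute-1')")

# ...yet the new table is populated, with no trace in application code.
row = conn.execute("SELECT hostname FROM instances_v2 WHERE id = 1").fetchone()
print(row[0])  # compute-1
```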
> >>> I don't think it's all that ambitious to think we can just use tried and
> >>> tested schema evolution techniques that work for everyone else.
> >> People have been asking me for over a year how to do this, and I have no
> >> easy answer, I'm glad that you do. I would like to see some examples of
> >> these techniques.
> >> If you can show me the SQL access code that deals with the above change,
> >> that would help a lot.
> > So schema changes fall into several categories. But basically, the only
> > one that is hard, is a relationship change. Basically, a new PK. Here's
> > an example:
> > Book.isbn was the PK, but we want to have a record per edition, so the
> > new primary key is (isbn, edition).
> > Solution: Maintain two tables. You have created an entirely new object!
> > CREATE TABLE book (
> >     isbn varchar(30) not null primary key,
> >     description text
> > );
> > CREATE TABLE book_editions (
> >     isbn varchar(30) not null,
> >     edition int not null,
> >     description text,
> >     primary key (isbn, edition)
> > );
> > And now on read, your new code has to do this:
> > SELECT b.isbn,
> > COALESCE(be.edition, 0) AS edition,
> > COALESCE(be.description, b.description) AS description
> > FROM book b
> > LEFT OUTER JOIN book_editions be
> > ON b.isbn = be.isbn
> > WHERE b.isbn = 'fooisbn'
> > And now, if a book has only ever been written by old code, you get one
> > record with a 0 edition. And if it were written by the new system, the
> > new system would need to go ahead and duplicate the book description into
> > the old table for as long as we have code that might expect it.
> So some pain points here are:
> 1. you really can't ever trust what's in book_editions.description as
> long as any "old" application is running, since it can put new data into
> book.description at any time. You shouldn't bother reading from it at
> all, just write to it. You won't be able to use it until the next
> version of the application, e.g. "new" + 1. Or if you support some kind
> of "old app is gone! " flag that modifies the behavior of "new" app to
> modify all its queries, which is even more awkward.
Of course, you can have books whose edition 0 gets updated in book
while you're upgrading. But the editions feature code always treats
such an update as an update to edition 0. It's still the same object
it always was; your app just makes some assumptions about it. In the
cases where you need to see all editions at once, you can use a union
and just select a literal '0' for the edition column of the legacy half.
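A runnable sketch of both read paths, using SQLite as a stand-in for
the schema quoted above (the ISBN values are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (
    isbn varchar(30) not null primary key,
    description text
);
CREATE TABLE book_editions (
    isbn varchar(30) not null,
    edition int not null,
    description text,
    primary key (isbn, edition)
);
""")

# One book written only by the "old" app, and one that the "new" app
# has given a second edition (duplicating the description into book).
conn.execute("INSERT INTO book VALUES ('oldisbn', 'legacy only')")
conn.execute("INSERT INTO book VALUES ('fooisbn', 'first edition text')")
conn.execute(
    "INSERT INTO book_editions VALUES ('fooisbn', 0, 'first edition text')")
conn.execute(
    "INSERT INTO book_editions VALUES ('fooisbn', 2, 'second edition text')")

# The new code's single-book read path: a legacy row surfaces as edition 0.
row = conn.execute("""
    SELECT b.isbn,
           COALESCE(be.edition, 0) AS edition,
           COALESCE(be.description, b.description) AS description
    FROM book b
    LEFT OUTER JOIN book_editions be ON b.isbn = be.isbn
    WHERE b.isbn = 'oldisbn'
""").fetchone()
print(row)  # ('oldisbn', 0, 'legacy only')

# The union trick for "see them all": legacy rows get a literal 0 edition.
all_rows = conn.execute("""
    SELECT isbn, 0 AS edition, description FROM book
        WHERE isbn NOT IN (SELECT isbn FROM book_editions)
    UNION ALL
    SELECT isbn, edition, description FROM book_editions
    ORDER BY isbn, edition
""").fetchall()
print(all_rows)
```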
And one can say "old app is gone" when one knows it's gone. At that point,
one can run a migration that inserts 0 editions into book_editions and
drops the book table. For OpenStack, we can say "all releases that used
the old schema are EOL, so we can simplify the code now". Our six-month
release pace and short EOL windows are built for exactly this kind of thing.
> 2. deletes by "old" app of entries in "book" have to be synchronized
> offline by a background script of some kind. You at least need to run a
> final, authoritative "clean up all the old book deletions" job before
> you go into "old app is gone" mode and the new app begins reading from
> book_editions alone.
You'll notice the query was a left join from book to book_editions, so
deletes will work just fine. The migration mentioned above would simply
need to also clean up any orphaned book_editions rows that lack a 0 edition.
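That one-time migration might look like the following sketch (SQLite as
a stand-in, seed data invented for the example): backfill a 0 edition
for every surviving legacy book, remove editions orphaned by old-app
deletes, then drop the legacy table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (
    isbn varchar(30) not null primary key,
    description text
);
CREATE TABLE book_editions (
    isbn varchar(30) not null,
    edition int not null,
    description text,
    primary key (isbn, edition)
);
-- Seed: a surviving legacy book, plus an edition whose book row
-- was deleted by the old app (an orphan).
INSERT INTO book VALUES ('kept', 'still here');
INSERT INTO book_editions VALUES ('gone', 1, 'book row was deleted');
""")

# The migration, run once the old app is known to be gone.
conn.executescript("""
-- Backfill: every surviving legacy book becomes its own edition 0.
INSERT INTO book_editions (isbn, edition, description)
    SELECT isbn, 0, description FROM book
    WHERE isbn NOT IN (SELECT isbn FROM book_editions WHERE edition = 0);

-- Cleanup: drop editions orphaned by old-app deletes of book rows.
DELETE FROM book_editions WHERE isbn NOT IN (SELECT isbn FROM book);

-- The legacy table is no longer needed.
DROP TABLE book;
""")

rows = conn.execute(
    "SELECT isbn, edition, description FROM book_editions").fetchall()
print(rows)  # [('kept', 0, 'still here')]
```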
> 3. LEFT OUTER JOINs can be a major performance hit. You can't turn it
> off here until you go to version "new + 1" (bad performance locked in
> for a whole release cycle) or your app has a "turn off old app mode"
> flag (basically you have to write two different database access layers).
I'm not sure I agree that they're that big of a performance hit. When
they return lots and lots of nulled-out rows and gum up the optimizer,
yes. But this is a specific use case in which one is _always_ doing a
many-to-one lookup, so it results in a single key lookup on the table's
PK, which is the fastest thing a database can do. And this specific case
is a generic solution to the problem: it works just as well if you
replace books with users and editions with domains.
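One can check that claim directly. In the sketch below (SQLite as a
stand-in), the query plan for the left join shows indexed searches on
both tables' primary keys rather than table scans:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE book (
    isbn varchar(30) not null primary key,
    description text
);
CREATE TABLE book_editions (
    isbn varchar(30) not null,
    edition int not null,
    description text,
    primary key (isbn, edition)
);
""")

# Ask the planner how it will execute the new code's read path.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT b.isbn,
           COALESCE(be.edition, 0),
           COALESCE(be.description, b.description)
    FROM book b
    LEFT OUTER JOIN book_editions be ON b.isbn = be.isbn
    WHERE b.isbn = 'fooisbn'
""").fetchall()

# Each plan step should be a SEARCH on a primary-key index, not a SCAN.
for *_, detail in plan:
    print(detail)
```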
> Contrast to the trigger approach, which removes all the SELECT pain and
> moves it all to writes:
> 1. new application has no code whatsoever referring to old application
Except it does, hidden in the schema as triggers.
> 2. no performance hit on SELECT
I'll take extra single-key lookups whose code I can read over hidden
triggering any day.
> 3. no "wait til version "new+1"" and/or "old app is gone" switch
One will need to drop the triggers at some point.
> If we have evidence that triggers are always, definitely, universally
> going to make even this extremely simple use case non-feasible, great,
> let's measure and test for that. But in a case like this they look
> very attractive and I'd hate to just dispense with them unilaterally
> without a case-by-case examination.
It's pretty hard to measure and test complexity.
But we can implement a system one way or the other, and then measure
how often it breaks, how long it takes to fix it, and how skilled the
workers must be to address issues. Then after that, we should take the
approach that we found has the higher break/fix efficiency, and use
that. I'm suggesting that my experience has found triggers to have an
undesirable level of break/fix complexity compared to code that deals
with legacy schemas.
> As I wrote this, I did begin to come up with yet another approach. I'd
> be pessimistic about acceptance here because instead of using
> scary-and-mistrusted triggers, it would use
> even-scarier-and-more-mistrusted SQLAlchemy. That is, write a
> triggering system as a SQLAlchemy library that embeds into either "old"
> or "new" application as a plugin. Pull it in via the SQLAlchemy URL in
> the .conf file, and it would apply events to all tables that do the same
> things that server-side triggers would do, except you can write the
> usual platform-agnostic SQLAlchemy Core / ORM code to do it. The code
> within this layer would either be created custom for each app migration,
> or perhaps some of it could be somewhat abstracted into a series of
> common "triggers" like "column move", "column rename", etc.
> Since the real trick is: how do we get data written from the "old" app
> to be compatible with "new". Triggers are one way to put something "in
> the middle", but an in-process Python plugin could be another. The
> "new" app would use the same technique and the plugin could be removed
> once the application is fully migrated.
> This does meet your criteria of the transition happening in "safe, warm
> development". It meets mine of, "the new application can look
> forwards and SELECT at will without worrying about backwards
> compatibility". I'm not too optimistic that I could convince anyone of
> this approach though. As Dan notes, the kinds of migrations Nova is
> doing don't have these problems at all. For others, like the one
> Keystone is working on, it apparently is more about getting a DEFAULT to
> work on all platforms (MySQL is being picky) and definitely a DEFAULT is
> better than any other approach if it is all that's needed.
> Going to put this one in my back pocket though, stay tuned...
It does sound interesting, no doubt.
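For what it's worth, the shape of that idea can be sketched without
SQLAlchemy at all. The class below is a hand-rolled stand-in, invented
for this example, that wraps a DB-API connection and hard-codes one
"trigger"; the real proposal would register the same logic via
SQLAlchemy's event hooks instead. The point is where the behavior lives:
in code, in the repo, under test.

```python
import sqlite3

class CompatConnection:
    """Illustrative stand-in for the proposed in-process 'trigger' plugin.

    Wraps a DB-API connection and mirrors new-style writes back into
    the legacy table, so the "old" app keeps seeing the data it expects.
    """

    def __init__(self, conn):
        self._conn = conn

    def execute(self, sql, params=()):
        cur = self._conn.execute(sql, params)
        # The "trigger": a new-style edition write also refreshes the
        # legacy book row, the same duplication a server-side trigger
        # would perform, but visible and debuggable here.
        if sql.lstrip().upper().startswith("INSERT INTO BOOK_EDITIONS"):
            isbn, _edition, description = params
            self._conn.execute(
                "INSERT OR REPLACE INTO book (isbn, description) "
                "VALUES (?, ?)", (isbn, description))
        return cur

raw = sqlite3.connect(":memory:")
raw.executescript("""
CREATE TABLE book (isbn varchar(30) not null primary key, description text);
CREATE TABLE book_editions (
    isbn varchar(30) not null, edition int not null, description text,
    primary key (isbn, edition)
);
""")

conn = CompatConnection(raw)
conn.execute("INSERT INTO book_editions VALUES (?, ?, ?)",
             ("fooisbn", 2, "second edition text"))

# The legacy table was kept in sync without a server-side trigger.
mirrored = raw.execute("SELECT * FROM book").fetchone()
print(mirrored)  # ('fooisbn', 'second edition text')
```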