[openstack-dev] [keystone][nova][neutron][all] Rolling upgrades: database triggers and oslo.versionedobjects

Lance Bragstad lbragstad at gmail.com
Tue Aug 30 23:44:25 UTC 2016


Since the encrypted credential work is currently based on triggers, I spent
most of today documenting a walk-through migration from Mitaka to Newton
[0]. Regardless of the outcome of the discussion here, I figured it would
be worth sharing since it's relevant to the thread. Most of the gist covers
things not directly related to the upgrade from Mitaka to Newton, like
config files and install processes. I included them anyway since I started
with a greenfield deployment on Ubuntu 16.04.

Everything is technically still up for review, so if you notice anything
fishy about the implementation while following the walkthrough, feel free
to leave a comment.

[0]
https://gist.github.com/lbragstad/ddfb10f9f9048414d1f781ba006e95d1#file-migration-md

On Tue, Aug 30, 2016 at 5:15 PM, Mike Bayer <mbayer at redhat.com> wrote:

>
>
> On 08/30/2016 04:43 PM, Clint Byrum wrote:
>
>>
>> Correct, it is harder for development. Since the database server, being
>> a stateful service, has the most potential for the worst problems, I
>> believe moving complexity _out_ of it is generally an operational win,
>> at the expense of some development effort. The development effort,
>> however, is mostly at the front of the pipeline, where timelines can be
>> longer. Operations typically works under SLAs and with requirements to
>> move slowly in defense of people's data and the performance of the
>> system. So I suggest that paying costs in dev, vs. at the database, is
>> usually the highest-value choice.
>>
>> This is of course not the case if timelines are short for development as
>> well, but I can't really answer the question in that case. For OpenStack,
>> we nearly always find ourselves with more time to develop than operators
>> do to operate.
>>
>
> So the idea of triggers is: for easy things, like "column X is now column
> Y elsewhere", use a trigger to maintain that value instead of complicating
> the application code.   Your argument against triggers is: "Triggers
> introduce emergent behaviors and complicate scaling and reasonable
> debugging in somewhat hidden ways that can frustrate even the most
> experienced DBA."
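>
> For concreteness, a "column X is now column Y" trigger can be a one-liner.
> A minimal sketch in MySQL syntax, with a hypothetical "instances" table
> carrying the old "tenant_id" and the new "project_id" columns:
>
> CREATE TRIGGER instances_sync_project_id BEFORE INSERT ON instances
> FOR EACH ROW
>   SET NEW.project_id = COALESCE(NEW.project_id, NEW.tenant_id);
>
> An old application that only fills in tenant_id gets project_id populated
> for free; a new application that fills in project_id is left alone.  An
> analogous BEFORE UPDATE trigger covers updates.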
>
> I'd wager that triggers probably work a little more smoothly in modern
> MySQL/PostgreSQL than on a more classical "DBA" platform like a crusty
> old MS SQL Server or Oracle, but more examples of these emergent behaviors
> would be useful, as well as evidence that they apply to current versions
> of database software in use within OpenStack and are disruptive enough
> that even the most clear-cut case for triggers vs. in-application
> complexity should favor in-app complexity without question.
>
>>>> I don't think it's all that ambitious to think we can just use tried and
>>>> tested schema evolution techniques that work for everyone else.
>>>>
>>>
>>> People have been asking me for over a year how to do this, and I have no
>>> easy answer, I'm glad that you do.  I would like to see some examples of
>>> these techniques.
>>>
>>> If you can show me the SQL access code that deals with the above change,
>>> that would help a lot.
>>>
>>>
>> So schema changes fall into several categories, but the only one that is
>> hard is a relationship change. Basically, a new PK. Here's an example:
>>
>> Book.isbn was the PK, but we want to have a record per edition, so the
>> new primary key is (isbn, edition).
>>
>> Solution: Maintain two tables. You have created an entirely new object!
>>
>> CREATE TABLE book (
>>   isbn varchar(30) not null primary key,
>>   description text
>> );
>>
>> CREATE TABLE book_editions (
>>   isbn varchar(30) not null,
>>   edition int not null,
>>   description text,
>>   primary key (isbn, edition)
>> );
>>
>> And now on read, your new code has to do this:
>>
>> SELECT b.isbn,
>>        COALESCE(be.edition, 0) AS edition,
>>        COALESCE(be.description, b.description) AS description
>> FROM book b
>>      LEFT OUTER JOIN book_editions be
>>      ON b.isbn = be.isbn
>> WHERE b.isbn = 'fooisbn';
>>
>> And now, if a book has only ever been written by old code, you get one
>> record with a 0 edition. And if it was written by the new system, the new
>> system also duplicates the book description into the old table for as
>> long as we have code that might expect it.
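>>
>> Spelled out, the new code's write path is a dual write for as long as
>> old code runs. A sketch, with placeholder values:
>>
>> -- write the edition row the new schema wants
>> INSERT INTO book_editions (isbn, edition, description)
>> VALUES ('fooisbn', 1, 'first edition text');
>>
>> -- keep the old table readable for old code
>> -- (a brand-new book needs an INSERT INTO book here instead)
>> UPDATE book SET description = 'first edition text'
>> WHERE isbn = 'fooisbn';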
>>
>
> So some pain points here are:
>
> 1. you really can't ever trust what's in book_editions.description as long
> as any "old" application is running, since the old application can put new
> data into book.description at any time.  The new application shouldn't
> bother reading book_editions.description at all, just write to it; it
> can't be used until the next version of the application, e.g. "new" + 1.
> Or you support some kind of "old app is gone!" flag that modifies the
> behavior of the "new" app across all its queries, which is even more
> awkward.
>
> 2. deletes by the "old" app of entries in "book" have to be synchronized
> offline by a background script of some kind (a sketch follows this list).
> You at least need to run a final, authoritative "clean up all the old book
> deletions" job before you go into "old app is gone" mode and the new app
> begins reading from book_editions alone.
>
> 3. LEFT OUTER JOINs can be a major performance hit.  You can't turn the
> join off here until you go to version "new + 1" (bad performance locked in
> for a whole release cycle) or your app has a "turn off old app mode" flag
> (basically you have to write two different database access layers).
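>
> The cleanup pass in point 2 can be a single authoritative statement; a
> sketch:
>
> DELETE FROM book_editions
> WHERE isbn NOT IN (SELECT isbn FROM book);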
>
> Contrast to the trigger approach, which removes all the SELECT pain and
> moves it all to writes:
>
> 1. new application has no code whatsoever referring to old application
>
> 2. no performance hit on SELECT
>
> 3. no "wait til version "new+1"" and/or "old app is gone" switch
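>
> For this example the trigger amounts to something like the following
> sketch (MySQL syntax; only the old-to-new direction is shown, and the
> write-back direction would be a second trigger on book_editions, written
> carefully so the two don't fire each other):
>
> CREATE TRIGGER book_mirror_insert AFTER INSERT ON book
> FOR EACH ROW
>   INSERT INTO book_editions (isbn, edition, description)
>   VALUES (NEW.isbn, 0, NEW.description)
>   ON DUPLICATE KEY UPDATE description = NEW.description;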
>
> If we have evidence that triggers are always, definitely, universally
> going to make even this extremely simple use case infeasible, great, let's
> measure and test for that.  But in a case like this they look very
> attractive, and I'd hate to just dispense with them unilaterally without a
> case-by-case examination.
>
> As I wrote this, I did begin to come up with yet another approach.  I'd be
> pessimistic about acceptance here because instead of using
> scary-and-mistrusted triggers, it would use even-scarier-and-more-mistrusted
> SQLAlchemy.  That is, write a triggering system as a SQLAlchemy library
> that embeds into either the "old" or "new" application as a plugin.  Pull it
> in via the SQLAlchemy URL in the .conf file, and it would apply events to
> all tables that do the same things that server-side triggers would do,
> except you can write the usual platform-agnostic SQLAlchemy Core / ORM code
> to do it.   The code within this layer would either be created custom for
> each app migration, or perhaps some of it could be somewhat abstracted into
> a series of common "triggers" like "column move", "column rename", etc.
>
> The real trick is: how do we get data written by the "old" app to be
> compatible with "new"?  Triggers are one way to put something "in the
> middle", but an in-process Python plugin could be another.  The "new" app
> would use the same technique, and the plugin could be removed once the
> application is fully migrated.
>
> This does meet your criterion of the transition happening in "safe, warm
> development".   It meets mine of, "the new application can look forwards
> and SELECT at will without worrying about backwards compatibility".  I'm
> not too optimistic that I could convince anyone of this approach though.
> As Dan notes, the kinds of migrations Nova is doing don't have these
> problems at all.   For others, like the one Keystone is working on, it is
> apparently more about getting a DEFAULT to work on all platforms (MySQL is
> being picky), and definitely a DEFAULT is better than any other approach
> if it is all that's needed.
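>
> For that kind of case the whole change can be a plain server-side default,
> with no trigger or application switch involved. A sketch, with a
> hypothetical column (and note MySQL won't take a DEFAULT on TEXT/BLOB
> columns, which is the sort of pickiness in question):
>
> ALTER TABLE credential
>   ADD COLUMN key_hash varchar(64) NOT NULL DEFAULT '';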
>
> Going to put this one in my back pocket though, stay tuned...
>
>
>
>> Most other things are simpler and have quite obvious solutions.
>>
>> If the answer is, "oh well just don't do a schema change like that",
>>> then we're basically saying we aren't really changing our schemas
>>> anymore except for totally new features that otherwise aren't accessed
>>> by the older version of the code.  That's fine.   It's not what people
>>> coming to me are saying, though.
>>>
>>>
>> I mean, yes and no. We should pay some respect to operators who have to
>> deal with our desire for the schema to be "Right". If it scales well,
>> maintains integrity, and is 98% clear and well formed, then the 2% case
>> where we store the "project id" redundantly in the "tenant_id" column
>> for a few releases isn't really a bother to me.
>>
>
> I've no problem with downvoting patches that incur painful schema changes
> for no clear operational benefit, and I don't think anyone else has a
> problem with that either.   IMO an application that wants to do online
> upgrades has to be mostly done with its "getting the schema right"
> problems.