[openstack-dev] [Heat] Versioned objects upgrade patterns

Zane Bitter zbitter at redhat.com
Wed May 18 20:39:00 UTC 2016

On 17/05/16 20:27, Crag Wolfe wrote:
> Now getting very Heat-specific. W.r.t.
> https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
> raw_template.files (this is a dict of template filename to contents),
> both in the DB and in RAM. The approach this patch is taking is that,
> when one template is created by reference to another, we just re-use the
> original template's files (ultimately in a new table,
> raw_template_files). In the case of nested stacks, this saves on quite a
> bit of duplication.
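To make the sharing concrete, here's a rough sketch (hypothetical class and
column names, in-memory dicts standing in for the DB tables) of what "create
by reference" means: the files dict lives once in a raw_template_files-style
table, and each template row holds only the ID, so nested stacks share one
copy in both the DB and RAM.

```python
class RawTemplateFiles:
    """Stand-in for rows in the new raw_template_files table."""
    _table = {}       # files_id -> dict of template filename -> contents
    _next_id = 1

    @classmethod
    def create(cls, files):
        files_id = cls._next_id
        cls._next_id += 1
        cls._table[files_id] = files
        return files_id


class RawTemplate:
    """Stand-in for raw_template rows; stores a reference, not a copy."""
    def __init__(self, template, files_id=None, files=None):
        self.template = template
        if files_id is not None:
            self.files_id = files_id            # re-use another template's files
        else:
            self.files_id = RawTemplateFiles.create(files or {})

    @property
    def files(self):
        return RawTemplateFiles._table[self.files_id]


parent = RawTemplate({'resources': {}}, files={'nested.yaml': '...'})
# Nested stack's template created by reference to the parent's files:
child = RawTemplate({'resources': {}}, files_id=parent.files_id)
# Both rows point at the same single copy of the files dict.
```

(Again, just an illustration of the idea, not the actual patch's code.)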
> If we follow the 3-step pattern discussed earlier in this thread, we
> would be looking at P release as to when we start seeing DB storage
> improvements. As far as RAM is concerned, we would see improvement in
> the O release since that is when we would start reading from the new
> column location (and could cache the template files object by its ID).
> It also means that for the N release, we wouldn't see any RAM or DB
> improvements, we'll just start writing template files to the new
> location (in addition to the old location). Is this acceptable, or do
> we impose some sort of downtime restrictions on the next Heat upgrade?
> A compromise could be to introduce a little bit of downtime:
> For the N release:

There's also a step 0, which is to run the DB migrations for Newton.

>   1. Add the new column (no need to shut down heat-engine).
>   2. Shut down all heat-engines.
>   3. Upgrade code base to N throughout cluster.
>   4. Start all heat-engines. Read from new and old template files
> locations, but only write to the new one.
> For the O release, we could perform a rolling upgrade with no downtime
> where we are only reading and writing to the new location, and then drop
> the old column as a post-upgrade migration (i.e., the typical N+2 pattern
> [1] that Michal referenced earlier and I'm re-referencing :-).
> The advantage to the compromise is we would immediately start seeing RAM
> and DB improvements with the N-release.
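The N-release behaviour under that compromise, read from either location but
write only to the new one, might look something like this (a sketch with
made-up row/column names, dicts standing in for tables):

```python
def load_files(raw_tmpl_row, files_table):
    """Read the new files_id reference if present, else the legacy column."""
    if raw_tmpl_row.get('files_id') is not None:
        return files_table[raw_tmpl_row['files_id']]
    return raw_tmpl_row.get('files') or {}   # pre-upgrade inline column


def store_files(raw_tmpl_row, files_table, files):
    """Write only to the new location; the old column is left unset."""
    files_id = max(files_table, default=0) + 1
    files_table[files_id] = files
    raw_tmpl_row['files_id'] = files_id
    raw_tmpl_row['files'] = None
    return files_id


files_table = {}
# A row written before the upgrade still loads via the old column:
legacy_row = {'files': {'a.yaml': 'contents'}, 'files_id': None}
# A row written after the upgrade only uses the new location:
new_row = {}
store_files(new_row, files_table, {'b.yaml': 'contents'})
```

Since every engine is already running N code by the time writes resume
(thanks to the brief downtime), nothing ever needs to read a row written in
the old format by a newer engine, which is what makes the compromise safe.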

+1, and in fact this has been the traditional way of doing it. To be 
able to stop recommending that to operators, we need a solution both to 
the DB problem we're discussing here and to the problem of changes to 
the RPC API parameters. (Before anyone asks, and I know someone will... 
NO, versioned objects do *not* solve either of those problems.)

I've already personally made one backwards-incompatible change to the 
RPC in this version:


So we won't be able to recommend rolling updates from Mitaka->Newton anyway.

I suggest that as far as this patch is concerned, we should implement 
the versioning that allows the VO to decide whether to write old or new 
data and leave it at that. That way, if someone manages to implement 
rolling upgrade support in Newton we'll have it, and if we don't we'll 
just fall back to the way we've done it in the past.
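The shape of that versioning hook could be roughly the following. This is a
minimal sketch, not the real oslo.versionedobjects API, and the class,
version numbers, and column names are all invented for illustration; the
point is only that the object picks the write format from the lowest version
still running in the cluster:

```python
def _vtuple(version):
    """Compare versions numerically, not lexically ('1.10' > '1.9')."""
    return tuple(int(part) for part in version.split('.'))


class RawTemplateVO:
    # 1.0 = inline files column; 1.1 = files_id reference (hypothetical)
    VERSION = '1.1'

    def __init__(self, files, target_version):
        self.files = files
        # Lowest object version any running engine understands:
        self.target_version = target_version

    def to_db_row(self, files_table):
        """Write new-style data only if every engine can read it."""
        if _vtuple(self.target_version) >= _vtuple('1.1'):
            files_id = max(files_table, default=0) + 1
            files_table[files_id] = self.files
            return {'files_id': files_id, 'files': None}
        # Fall back to the pre-upgrade (duplicated) representation.
        return {'files_id': None, 'files': self.files}
```

If rolling-upgrade support does land in Newton, the target version would be
pinned during the upgrade and bumped afterwards; if it doesn't, the object
simply always writes the new format once all engines are on N.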


> [1]
> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations