[openstack-dev] [Heat] Versioned objects upgrade patterns

Michał Dulko michal.dulko at intel.com
Thu May 19 08:29:19 UTC 2016


On 05/18/2016 10:39 PM, Zane Bitter wrote:
> On 17/05/16 20:27, Crag Wolfe wrote:
>> Now getting very Heat-specific. W.r.t.
>> https://review.openstack.org/#/c/303692/ , the goal is to de-duplicate
>> raw_template.files (this is a dict of template filename to contents),
>> both in the DB and in RAM. The approach this patch is taking is that,
>> when one template is created by reference to another, we just re-use the
>> original template's files (ultimately in a new table,
>> raw_template_files). In the case of nested stacks, this saves on quite a
>> bit of duplication.
>>
>> If we follow the 3-step pattern discussed earlier in this thread, we
>> would be looking at P release as to when we start seeing DB storage
>> improvements. As far as RAM is concerned, we would see improvement in
>> the O release since that is when we would start reading from the new
>> column location (and could cache the template files object by its ID).
>> It also means that for the N release, we wouldn't see any RAM or DB
>> improvements, we'll just start writing template files to the new
>> location (in addition to the old location). Is this acceptable, or do
>> we impose some sort of downtime restrictions on the next Heat upgrade?
>>
>> A compromise could be to introduce a little bit of downtime:
>>
>> For the N release:
>
> There's also a step 0, which is to run the DB migrations for Newton.
>
>>   1. Add the new column (no need to shut down heat-engine).
>>   2. Shut down all heat-engines.
>>   3. Upgrade code base to N throughout cluster.
>>   4. Start all heat-engines. Read from both the new and old template
>> files locations, but write only to the new one.
>>
>> For the O release, we could perform a rolling upgrade with no downtime
>> where we are only reading and writing to the new location, and then drop
>> the old column as a post-upgrade migration (i.e., the typical N+2 pattern
>> [1] that Michal referenced earlier and I'm re-referencing :-).
>>
>> The advantage to the compromise is we would immediately start seeing RAM
>> and DB improvements with the N-release.
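The compromise's read/write behaviour can be sketched like this. It is a hypothetical illustration, not Heat's code: the column names (`files`, `files_id`) and the `store_files`/`fetch_files` helpers are made up to show the "write new, read new-then-old" shape.

```python
# Sketch of the N-release compromise: write only to the new location
# (a raw_template_files-style table, keyed by files_id), but fall back
# to the old inline column on read for rows written before the upgrade.
# All names here are hypothetical.

_file_store = {}  # stands in for the new raw_template_files table


def store_files(files):
    fid = len(_file_store) + 1
    _file_store[fid] = files
    return fid


def fetch_files(fid):
    return _file_store[fid]


def save_template(row, files):
    row['files_id'] = store_files(files)  # new location only
    row['files'] = None                   # stop writing the old column


def load_template_files(row):
    if row.get('files_id') is not None:
        return fetch_files(row['files_id'])  # new location
    return row.get('files')                  # legacy row from before N
```

Because reads fall back to the old column, rows written by an M-release engine keep working, while every row written by an N-release engine is de-duplicated immediately.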
>
> +1, and in fact this has been the traditional way of doing it. To be
> able to stop recommending that to operators, we need a solution both
> to the DB problem we're discussing here and to the problem of changes
> to the RPC API parameters. (Before anyone asks, and I know someone
> will... NO, versioned objects do *not* solve either of those problems.)
>
> I've already personally made one backwards-incompatible change to the
> RPC in this version:
>
> https://review.openstack.org/#/c/315275/

If you want to support rolling upgrades, you need a way to prevent
introduction of such incompatibilities. This particular one seems pretty
easy once you get RPC version pinning framework (either auto or
config-based) in place. Nova and Cinder already have such features.

It would work by simply not sending template_id when there are older
services in the deployment, and by making your RPC server able to
understand requests without template_id as well.
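A config-based pin along those lines could look like the sketch below. This is in the style of what Nova and Cinder do, not Heat's actual client code: the pin value, the version numbers, and the request-builder function are all hypothetical.

```python
# Hypothetical sketch of config-based RPC version pinning: the client
# drops a newly added parameter (template_id) while older RPC servers
# may still be running, and the server must accept requests without it.

PINNED_RPC_VERSION = '1.35'  # set by the operator until all engines upgrade


def can_send(feature_version):
    # Compare 'major.minor' version strings numerically.
    pin = tuple(int(x) for x in PINNED_RPC_VERSION.split('.'))
    need = tuple(int(x) for x in feature_version.split('.'))
    return pin >= need


def make_create_stack_request(stack_name, template, template_id=None):
    msg = {'stack_name': stack_name, 'template': template}
    # Pretend template_id was added in RPC version 1.36: only include it
    # when the pin says no pre-1.36 engine can receive the message.
    if template_id is not None and can_send('1.36'):
        msg['template_id'] = template_id
    return msg
```

With the pin at 1.35, the client silently omits template_id; once the operator bumps the pin (or auto-detection reports all services upgraded), the new parameter starts flowing.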

> So we won't be able to recommend rolling updates from Mitaka->Newton
> anyway.
>
> I suggest that as far as this patch is concerned, we should implement
> the versioning that allows the VO to decide whether to write old or
> new data and leave it at that. That way, if someone manages to
> implement rolling upgrade support in Newton we'll have it, and if we
> don't we'll just fall back to the way we've done it in the past.
>
> cheers,
> Zane.
>
>> [1]
>> http://docs.openstack.org/developer/cinder/devref/rolling.upgrades.html#database-schema-and-data-migrations
>>
>>



