[openstack-dev] [heat] Policy on upgrades requiring config changes

Sean Dague sean at dague.net
Tue Mar 11 12:14:33 UTC 2014


On 03/11/2014 07:48 AM, Steven Hardy wrote:
> On Tue, Mar 11, 2014 at 07:04:32AM -0400, Sean Dague wrote:
>> On 03/04/2014 12:39 PM, Steven Hardy wrote:
>>> Hi all,
>>>
>>> As some of you know, I've been working on the instance-users blueprint[1].
>>>
>>> This blueprint implementation requires three new items to be added to
>>> heat.conf, or some resources (those which create keystone users) will not
>>> work:
>>>
>>> https://review.openstack.org/#/c/73978/
>>> https://review.openstack.org/#/c/76035/
>>>
>>> So on upgrade, the deployer must create a keystone domain and domain-admin
>>> user and add the details to heat.conf, as has already been done in
>>> devstack[2].
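>>>
>>> Concretely, the three new heat.conf entries look roughly like this (a
>>> sketch; values are placeholders and the exact names are as per the
>>> devstack change):
>>>
>>>   [DEFAULT]
>>>   # id of a dedicated keystone domain for heat stack users
>>>   stack_user_domain = <domain id>
>>>   # user with admin rights in that domain, and its password
>>>   stack_domain_admin = heat_domain_admin
>>>   stack_domain_admin_password = <password>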
>>>
>>> The changes required for this to work have already landed in devstack, but
>>> it was discussed today, and Clint suggested this may be unacceptable
>>> upgrade behavior - I'm not sure, so I'm looking for guidance/comments.
>>>
>>> My plan was/is:
>>> - Make devstack work
>>> - Talk to tripleo folks to assist in any transition (what prompted this
>>>   discussion)
>>> - Document the upgrade requirements in the Icehouse release notes so the
>>>   wider community can upgrade from Havana.
>>> - Try to give a heads-up to those maintaining downstream heat deployment
>>>   tools (e.g. stackforge/puppet-heat) that some tweaks will be required for
>>>   Icehouse.
>>>
>>> However, some have suggested there may be an OpenStack-wide policy which
>>> requires people's old config files to continue working indefinitely on
>>> upgrade between versions - is this right?  If so, where is it documented?
>>
>> This is basically enforced in code in grenade. The language for this
>> actually got lost in the project requirements discussion in the TC; I'll
>> bring it back in the post-graduation requirements discussion we're
>> having again.
>>
>> The issue is - Heat still doesn't materially participate in grenade.
>> Heat is substantially behind the other integrated projects in its
>> integration with the upstream testing. Only on Monday did we finally start
>> gating on a real unit of work for Heat (the heat-slow jobs). If I were
>> letter-grading projects right now on upstream testing, I'd give Nova an
>> A, Neutron a C (still no full run, no working grenade), and Heat a D.
> 
> Thanks for this, I know we have a lot more work to do in tempest, but
> evidently grenade integration is something we should prioritize as soon as
> possible.  Any volunteers out there? :)
> 
>> So in short: Heat did the wrong thing. You should be able to use your
>> configs from the last release. This is what all the mature projects in
>> OpenStack do. In the event that you *have* to make a change like that, it
>> requires an UpgradeImpact tag in the commit, and those should be limited
>> really aggressively. This is the whole point of the deprecation cycle.
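>>
>> For example, the commit message for such a change would carry the flag so
>> deployers can grep for it (an illustrative sketch, not an actual commit):
>>
>>   Require keystone domain configuration for stack users
>>
>>   UpgradeImpact: deployers must create a keystone domain and
>>   domain-admin user and set the matching heat.conf options before
>>   upgrading, or keystone-user-creating resources will not work.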
> 
> Ok, got that message loud and clear now, thanks ;)
> 
> Do you have a link to docs which describe the deprecation cycle and the
> OpenStack-wide policy for introducing backwards-incompatible changes?
> 
> The thing I'm still not that clear on is: if we want to eventually require
> a specific config option, and we can't just have an upgrade requirement to
> add it as I was expecting, is it enough to just output a warning for one
> release cycle and then require it?

If it has a sane default, so it will just work for people, you can add it.
If not, there have to be *BIG RED FLAGS*. UpgradeImpact was designed for
that, as an easy way for CD folks to know how bad a weekend they were
going to have.

You could also deprecate whatever the old method was, make the new
options optional, cross a cycle boundary, then move to the new method.
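
For config options, oslo.config supports exactly that transition; a minimal
sketch (option names here are hypothetical, not Heat's real ones):

  from oslo.config import cfg

  CONF = cfg.CONF
  CONF.register_opts([
      # The new option also answers to the old name for a cycle;
      # oslo.config logs a deprecation warning when the old name is used,
      # so deployers get a release of signaling before the old name goes.
      cfg.StrOpt('stack_domain',
                 deprecated_opts=[cfg.DeprecatedOpt('old_stack_domain',
                                                    group='DEFAULT')],
                 help='Keystone domain for heat stack users.'),
  ])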

> Then I guess my question is: how do we rationalize the requirements of
> trunk-chasing downstream users wrt the time-based releases as part of the
> deprecation cycle policy?
> 
> i.e. if we branch stable/icehouse and then I immediately post a patch
> removing the deprecated fallback path, it may still break downstream users
> who don't care about the stable-branch process, and I have no way of knowing
> (other than, as in this case, finding out too late when they shout at me...).

So I will not say the model is anything close to perfect; however, we are
under freeze right now. So if the last patch before freeze specified the
deprecation, and the first patch in the new master was to remove the thing,
we're still talking about six weeks of signaling in tree. For CDing folks
that should be sufficient.

I do think we probably need to move to release- or time-based deprecation
models, so that what is intended by a one-release deprecation is really 5 - 6
months, and what's intended by a two-release deprecation is really 11 - 12
months.

That's probably a reasonable conversation all on its own.

> Thanks for contributing to the discussion, hopefully it's not only me who's
> somewhat confused by the process, and the requirement to satisfy two quite
> different sets of release constraints for downstream deployers.
> 
> Perhaps we need a wiki page similar to the StableBranch page which spells
> out the requirements for projects wrt trunk-chasing deployers, unless one
> exists already?

Sure. I think with a lot of OpenStack, knowledge grows up as lore as we
figure out what works and doesn't, and it takes a while to get this
stuff written down.

	-Sean

-- 
Sean Dague
Samsung Research America
sean at dague.net / sean.dague at samsung.com
http://dague.net
