[openstack-dev] [heat] Policy on upgrades requiring config changes

Keith Bray keith.bray at RACKSPACE.COM
Tue Mar 11 05:05:21 UTC 2014


I want to echo Clint's responses... We do run close to Heat master here at
Rackspace, and we'd be happy to set up a non-voting job to notify when a
review would break Heat on our cloud, if that would be beneficial.  Some of
the breaks we have seen were things that simply weren't caught in code
review (a human-intensive effort); they were specific to the way we
configure Heat for large-scale cloud use, yet applicable to the entire Heat
project and not necessarily service-provider specific.

-Keith

On 3/10/14 5:19 PM, "Clint Byrum" <clint at fewbar.com> wrote:

>Excerpts from Steven Hardy's message of 2014-03-05 04:24:51 -0800:
>> On Tue, Mar 04, 2014 at 02:06:16PM -0800, Clint Byrum wrote:
>> > Excerpts from Steven Hardy's message of 2014-03-04 09:39:21 -0800:
>> > > Hi all,
>> > > 
>> > > As some of you know, I've been working on the instance-users
>>blueprint[1].
>> > > 
>> > > This blueprint implementation requires three new items to be added
>>to the
>> > > heat.conf, or some resources (those which create keystone users)
>>will not
>> > > work:
>> > > 
>> > > https://review.openstack.org/#/c/73978/
>> > > https://review.openstack.org/#/c/76035/
>> > > 
>> > > So on upgrade, the deployer must create a keystone domain and
>>domain-admin
>> > > user, add the details to heat.conf, as already been done in
>>devstack[2].
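(For readers following along, the new settings look roughly like this; the
option names are taken from the instance-users reviews and devstack change
linked above, but the section placement and placeholder values here are
assumptions, not the authoritative config:)

```ini
# Sketch of the new heat.conf settings required by the instance-users
# blueprint. <domain-id> and <password> are placeholders the deployer
# must fill in after creating the keystone domain and domain-admin user.
[DEFAULT]
# Keystone v3 domain that will hold the users Heat creates for stacks
stack_user_domain = <domain-id>
# Admin user (and its password) with rights to manage users in that domain
stack_domain_admin = heat_domain_admin
stack_domain_admin_password = <password>
```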
>> > > 
>> > > The changes required for this to work have already landed in
>>devstack, but
>> > > it was discussed today and Clint suggested this may be unacceptable
>> > > upgrade behavior - I'm not sure, so I'm looking for guidance/comments.
>> > > 
>> > > My plan was/is:
>> > > - Make devstack work
>> > > - Talk to tripleo folks to assist in any transition (what prompted
>>this
>> > >   discussion)
>> > > - Document the upgrade requirements in the Icehouse release notes
>>so the
>> > >   wider community can upgrade from Havana.
>> > > - Try to give a heads-up to those maintaining downstream heat
>>deployment
>> > >   tools (e.g. stackforge/puppet-heat) that some tweaks will be
>>required for
>> > >   Icehouse.
>> > > 
>> > > However some have suggested there may be an openstack-wide policy
>>which
>> > > requires people's old config files to continue working indefinitely
>>on
>> > > upgrade between versions - is this right?  If so where is it
>>documented?
>> > > 
>> > 
>> > I don't think I said indefinitely, and I certainly did not mean
>> > indefinitely.
>> > 
>> > What is required though, is that we be able to upgrade to the next
>> > release without requiring a new config setting.
>> 
>> So log a warning for one cycle, then it's OK to expect the config after
>> that?
>> 
>
>Correct.
>
>> I'm still unclear if there's an openstack-wide policy on this, as the
>>whole
>> time-based release with release-notes (which all of openstack is
>>structured
>> around and adheres to) seems to basically be an uncomfortable fit for
>>folks
>> like tripleo who are trunk chasing and doing CI.
>>
>
>So we're continuous delivery focused, but we are not special. HP Cloud
>and Rackspace both do this, and really anyone running a large cloud will
>most likely do so with CD, as the value proposition is that you don't
>have big scary upgrades, you just keep incrementally upgrading and
>getting newer, better code. We can only do this if we have excellent
>testing, which upstream already does and which the public clouds all
>do privately as well, of course.
>
>Changes like the one that was merged last week in Heat turn into
>stressful fire drills for those deployment teams.
>
>> > Also as we scramble to deal with these things in TripleO (as all of
>>our
>> > users are now unable to spin up new images), it is clear that it is
>>more
>> > than just a setting. One must create domain users carefully and roll
>>out
>> > a new password.
>> 
>> Such are the pitfalls of life at the bleeding edge ;)
>> 
>
>This is mildly annoying as a stance, as that's not how we've been
>operating with all of the other services of OpenStack. We're not crazy
>for wanting to deploy master and for wanting master to keep working. We
>are a _little_ crazy for wanting that without being in the gate.
>
>> Seriously though, apologies for the inconvenience - I have been asking
>>for
>> feedback on these patches for at least a month, but clearly I should've
>> asked harder.
>> 
>
>Mea culpa too; I did not realize what impact this would have until it
>was too late.
>
>> As was discussed on IRC yesterday, I think some sort of (initially
>>non-voting)
>> feedback from tripleo CI to heat gerrit is pretty much essential given
>>that
>> you're so highly coupled to us or this will just keep happening.
>> 
>
>TripleO will be in the gate some day (hopefully soon!) and then this
>will be less of an issue as you'd see failures early on, and could open
>bugs and get us to fix our issue sooner.
>
>However you'd still need to provide the backward compatibility for a
>single cycle. Servers aren't upgraded instantly, and keystone may not be
>ready for this v3/domain change until after users have fully rolled out
>Icehouse. Whether one is CD or stable release focused, one still needs a
>simple solution to rolling out a massive update.
>
>> > What I'm suggesting is that we should instead _warn_ that the old
>> > behavior is being used and will be deprecated.
>> > 
>> > At this point, out of urgency, we're landing fixes. But in the future,
>> > this should be considered carefully.
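(The warn-then-deprecate pattern described above can be sketched like this.
This is a hypothetical helper, not Heat's actual code; the option name
stack_user_domain is taken from the reviews linked earlier, and the fallback
semantics are an assumption for illustration:)

```python
import logging

LOG = logging.getLogger(__name__)


def stack_user_domain(conf):
    """Return the configured stack user domain, or fall back to the old
    (pre-Icehouse) behaviour with a deprecation warning when unset.

    Hypothetical sketch: ``conf`` is any mapping of option names to values.
    """
    domain = conf.get("stack_user_domain")
    if domain is None:
        # Old behaviour still works for one cycle, but operators are warned
        # so they can create the domain and update heat.conf before the
        # fallback is removed.
        LOG.warning("stack_user_domain is not configured; falling back to "
                    "the deprecated pre-Icehouse user-creation behaviour. "
                    "This fallback will be removed in a future release.")
    return domain  # None signals "use the old behaviour"
```

Callers can then branch on the return value instead of failing hard when the
new options are absent.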
>> 
>> Ok, well I raised this bug:
>> 
>> https://bugs.launchpad.net/heat/+bug/1287980
>> 
>> So we can modify the new code so that it falls back to the old behavior
>> gracefully, which will solve the issue for folks on the time-based
>>releases.
>> 
>> Hopefully we can work towards the tripleo gate feedback so next time
>>this
>> is less of a surprise for all of us ;)
>> 
>
>Yes, and hopefully it is also clear that we really do need to make things
>simpler for upgraders, whether they upgrade at each commit, or at each
>stable release. :)
>
>_______________________________________________
>OpenStack-dev mailing list
>OpenStack-dev at lists.openstack.org
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



