[openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

Matt Riedemann mriedem at linux.vnet.ibm.com
Tue Nov 10 16:40:11 UTC 2015



On 11/9/2015 10:15 PM, Matthew Treinish wrote:
> On Mon, Nov 09, 2015 at 05:05:36PM +1100, Tony Breeds wrote:
>> On Fri, Nov 06, 2015 at 10:12:21AM -0800, Clint Byrum wrote:
>>
>>> The argument in the original post, I think, is that we should not
>>> stand in the way of the vendors continuing to collaborate on stable
>>> maintenance in the upstream context after the EOL date. We already have
>>> distro vendors doing work in the stable branches, but at EOL we push
>>> them off to their respective distro-specific homes.
>>
>> That is indeed a better summary than I started with.
>>
>> I have a half-formed idea that creates a state between the current EOL (where
>> we delete the branches) and what we have today (where we have full CI/VMT/release
>> management).
>
> So this idea has come up before and it has been rejected for good reasons. If
> vendors want to collaborate, but not to actually do the work to keep things
> working in the gate, is that really collaboration? It's just vendors pushing
> whatever random backports they want into a shared repo. There is a reason we do
> all the testing, and it's deeply encoded into the OpenStack culture. If keeping
> things verifiably working with the same set of test jobs a branch had when it
> was created is too much of a burden for people, then there isn't a reason to
> keep the branch around.
>
> This is the crux of why we have shorter branch support windows. No matter how
> much complaining people do about how they want LTS releases or longer support
> windows (and how we'll have world peace when they can use Grizzly with upstream
> support for 17 years), it doesn't change the fact that barely anyone ever steps
> up to keep the gate working on stable branches.

Heh, speaking of Grizzly, I was just looking at backporting 
https://review.openstack.org/#/c/219301/ to Grizzly yesterday since we 
still support it. There is an unholy trifecta of things in that change 
that aren't in Grizzly: nova objects, an API method to get filtered 
resources from the DB, and a database migration in the change that added 
the virt driver method used as part of the fix (all added in Havana).

So fixing that in Grizzly is basically going to be a Frankenstein change 
to avoid the REST API backport and DB migration backport. Good times.

Why am I pointing this out? Just to give an example of the kind of mess 
that is involved in maintaining a branch that old.
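
To make the shape of that concrete, here's an illustrative sketch (the fix
itself is made up, not the code from that review, but the two APIs shown are
the real nova ones; both versions are side by side purely for comparison):

    # Hypothetical fix, for illustration only.
    from nova import db
    from nova import objects

    def get_filtered_instances_master(ctxt, filters):
        # On master (and anything post-Havana) a fix like this can lean
        # on the versioned objects layer that Havana introduced:
        return objects.InstanceList.get_by_filters(ctxt, filters)

    def get_filtered_instances_grizzly(ctxt, filters):
        # stable/grizzly predates nova objects entirely, so the same
        # logic has to be rewritten against the DB API directly -- and
        # anything depending on the Havana DB migration or the new REST
        # API plumbing has to be dropped or worked around:
        return db.instance_get_all_by_filters(ctxt, filters)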

>
> Tony, as someone who has done a good job coming up to speed on fixing issues on
> the stable branches, you know this firsthand. We always end up talking to the
> same handful of people to debug issues. We're also almost always in firefighting
> mode, and regular failure rates on stable just keep going up when we look away.
> People also burn out quickly debugging these issues all the time. Personally, I
> know I don't keep an eye on things nearly as closely as I did before.
>
>>
>> There is a massive amount of trust involved in that simple statement and I
>> don't underestimate that.
>>
>> This would provide a place where interested vendors can work together.
>> We could disable grenade in Juno, which isn't awesome, but it removes the need
>> to land a patch in Juno, Kilo, and Liberty just to unblock the master gate.
>> We could reduce the VMT impact by nominating a point of contact for Juno and
>> granting that person access to the embargoed bugs.  Similarly, we could
>> trust/delegate a certain amount of the release management to that team (I say
>> team, but so far we've only got one volunteer).
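
(Side note: mechanically, turning grenade off for one branch is a small
project-config change. A minimal sketch of what that looks like in the Zuul
layout.yaml of the day -- the job name regex here is illustrative, not the
exact production entry:

    jobs:
      - name: ^.*-grenade-dsvm.*$
        branch: ^(?!stable/juno).*$

The branch regex keeps the job running everywhere except stable/juno. The
real cost isn't the edit, it's giving up the upgrade testing it provided.)
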
>>
>> We can't ignore the fact that fixing things in Juno *may* still require fixes
>> in Kilo (and later releases), esp. given the mess that is requirements in
>> stable/juno.
>>
>>> As much as I'd like everyone to get on the CD train, I think it might
>>> make sense to enable the vendors to not diverge, but instead let them
>>> show up with people and commitment and say "Hey we're going to keep
>>> Juno/Mitaka/etc alive!".
>>>
>>> So perhaps what would make sense is defining a process by which they can
>>> make that happen.
>>>
>>> Note that it's not just backporters though. It's infra resources too.
>>
>> Sure.  I have a little understanding of the CI and Python 2.6 side.  I guess I
>> can extrapolate the additional burden on {system,project}-config.  I willingly
>> admit I don't have a detailed feel for what I'm asking here.
>
> It's more than just that too; there are additional resource needs on Tempest and
> other QA projects like devstack and grenade. Tempest is branchless, so to keep
> branches around longer you have to run additional jobs on tempest to ensure
> incoming changes work across all releases. In Tokyo we were discussing doing the
> same thing on the client repos too, because they make similar guarantees about
> backwards compat but we never test it. There is a ton of extra load generated by
> keeping things around for longer; it's not to be taken lightly, especially given
> the historical lack of contribution in this space. This is honestly why our one
> experiment in longer support ended in failure: nobody stepped up to support the
> extra branch. To even attempt it again we need proof that things have improved,
> which they clearly haven't.
>
> -Matt Treinish

-- 

Thanks,

Matt Riedemann



