[PTL][TC] library *feature* freeze at Milestone-2

Stephen Finucane stephenfin at redhat.com
Fri Oct 21 10:48:41 UTC 2022


On Fri, 2022-10-21 at 11:42 +0100, Stephen Finucane wrote:
> On Wed, 2022-10-19 at 17:10 +0000, Előd Illés wrote:
> > Hi,
> > 
> > During the 'TC + Community leaders interaction' [1] a case was discussed where a
> > late library release caused last-minute firefighting in the Zed cycle, and people
> > discussed the possibility of introducing a (non-client) library *feature* freeze
> > at Milestone-2 to avoid similar issues in the future.
> > 
> > I've started to propose the possible schedule change [2] (note: it's not ready
> > yet as it does not emphasize that at Milestone-2 we mean a *feature* freeze for
> > libraries, not a "final library release"). The patch has already got some reviews
> > from library maintainers, so I'm calling attention to this change here on
> > the ML.
> 
> Repeating what I said on the reviews, I'd really rather not do this. There are a
> couple of reasons for this. Firstly, regarding the proposal itself, this is
> going to make my life as an oslo maintainer harder than it already is. This is a
> crucial point. I'm not aware of anyone whose official job responsibilities
> extend to oslo and it's very much a case of doing it because no one else is
> doing it. We're a tiny team and pretty overwhelmed with multiple other non-oslo
> $things and for me at least this means I tend to do oslo work (including
> reviews) in spurts. Introducing a rather large window (6 weeks per cycle, which
> is approximately 1/4 of the total available time in a cycle) during which we
> can't merge the larger, harder-to-review feature patches is simply too long:
> whatever context I would have built up before the freeze would be long since
> gone after a month and a half.
> 
> Secondly, regarding the issue that led to this proposal, I don't think this
> proposal would have actually helped. The patch that this proposal stems from was
> actually merged back on July 20th [1]. This was technically after Zed M2 but
> barely (5 days [2]). However, reports of issues didn't appear until September,
> when this was released as oslo.db 12.1.0 [3][4]. If we had released 12.1.0 in
> late July or early August, the issue would have been spotted far earlier, but as
> noted above the oslo team is tiny and overwhelmed, and I would guess the release
> team is in a similar boat (and can't be expected to know about all these
> things).
> 
> I also feel compelled to note that this didn't arrive out of the blue. I have
> been shouting about SQLAlchemy 2.0 for over a year now [5] and I have also been
> quite vocal about other oslo.db-related changes on their way [6][7]. For the
> SQLAlchemy 2.0 case specifically, clearly not enough people have been listening.
> I sympathise (again, tiny, overwhelmed teams are not an oslo-specific
> phenomenon) but the pain was going to arrive eventually and it's just
> unfortunate that it landed with an oslo.db release that was cut so close to the
> deadline (see above). I managed to get nova, cinder and placement prepared well
> ahead of time but it isn't sustainable for one person to do this for all
> projects. Project teams need to prioritise this stuff ahead of time rather than
> waiting until things are on fire.
> 
> Finally, it's worth remembering that this isn't a regular occurrence. Yes, there
> was some pain, but we handled the issue pretty well (IMO) and affected projects
> are now hopefully aware of the ticking tech debt bomb 💣 sitting in their
> codebase. However, as far as I can tell, there's no trend of the oslo team (or
> any other library project) introducing breaking changes like this so close to
> release deadlines, so it does feel a bit like putting the cart before the horse.

Oh, and one final point here: I didn't actually _know_ this was going to cause
as many issues as it did. Perhaps there's value in an oslo-tips job that tests
service projects against the HEAD of the various oslo libraries. However, that's
a whole load of extra CI resources that we'd have to find. Testing in oslo.db
itself didn't and wouldn't catch this because the affected projects are not
deployed by default in the 'tempest-full-py3' job.
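
For illustration, something along these lines is roughly what I have in mind (a
sketch only: the job name and the set of libraries listed are made up, and a
real job would also need the devstack plumbing to actually install those
libraries from source, e.g. via LIBS_FROM_GIT):

    # Hypothetical Zuul job sketch, not an existing job definition.
    # The idea: run the usual integration tests, but with the oslo
    # libraries checked out from master instead of the latest release.
    - job:
        name: oslo-tips-tempest-full-py3
        parent: tempest-full-py3
        description: Tempest run with oslo libraries installed from master.
        required-projects:
          - openstack/oslo.db
          - openstack/oslo.messaging
          - openstack/oslo.utils

The catch, as noted above, is that such a job would only have caught this
particular problem if it also deployed the projects that 'tempest-full-py3'
doesn't deploy by default, which is where the extra CI cost comes from.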

Stephen

> 
> To repeat myself from the top, I'd really rather not do this. If we wanted to
> start cutting oslo releases faster, by all means let's figure out how to do
> that. If we wanted to branch earlier and keep master moving, I'm on board.
> Preventing us from merging features for a combined ~3 months of the year is a
> non-starter IMO though.
> 
> Cheers,
> Stephen
> 
> 
> [1] https://review.opendev.org/c/openstack/oslo.db/+/804775
> [2] https://releases.openstack.org/zed/schedule.html
> [3] https://review.opendev.org/c/openstack/releases/+/853975/
> [4] https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030317.html
> [5] https://lists.openstack.org/pipermail/openstack-discuss/2021-August/024122.html
> [6] https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028197.html
> [7] https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028198.html
> 
> > 
> > Thanks everyone for the responses in advance,
> > 
> > Előd
> > 
> > [1]
> > https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030718.html
> > [2] https://review.opendev.org/c/openstack/releases/+/861900
> 



