<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<style type="text/css" style="display:none;"> P {margin-top:0;margin-bottom:0;} </style>
</head>
<body dir="ltr">
<div style="font-family: Calibri, Arial, Helvetica, sans-serif; font-size: 12pt; color: rgb(0, 0, 0); background-color: rgb(255, 255, 255);" class="elementToProof ContentPasted0">
Thanks, Stephen, for the detailed summary and explanation of the situation.
<div class="ContentPasted0">In this respect I agree with you: the library feature freeze date should</div>
<div class="ContentPasted0">not be set as early in the cycle as Milestone-2.</div>
<div><br class="ContentPasted0">
</div>
<div class="ContentPasted0">If anyone else has any opinion then please let us know.</div>
<div><br class="ContentPasted0">
</div>
<div class="ContentPasted0">Thanks,</div>
<div><br class="ContentPasted0">
</div>
<div class="ContentPasted0">Előd</div>
<div class="ContentPasted0">irc: elodilles @ #openstack-release</div>
<br>
</div>
<div id="appendonsend"></div>
<hr style="display:inline-block;width:98%" tabindex="-1">
<div id="divRplyFwdMsg" dir="ltr"><font face="Calibri, sans-serif" style="font-size:11pt" color="#000000"><b>From:</b> Stephen Finucane <stephenfin@redhat.com><br>
<b>Sent:</b> Friday, October 21, 2022 12:48 PM<br>
<b>To:</b> Előd Illés <elod.illes@est.tech>; openstack-discuss@lists.openstack.org <openstack-discuss@lists.openstack.org><br>
<b>Subject:</b> Re: [PTL][TC] library *feature* freeze at Milestone-2</font>
<div> </div>
</div>
<div class="BodyFragment"><font size="2"><span style="font-size:11pt;">
<div class="PlainText">On Fri, 2022-10-21 at 11:42 +0100, Stephen Finucane wrote:<br>
> On Wed, 2022-10-19 at 17:10 +0000, Előd Illés wrote:<br>
> > Hi,<br>
> > <br>
> > During the 'TC + Community leaders interaction' [1] a case was discussed where a<br>
> > late library release caused last-minute firefighting in the Zed cycle, and people<br>
> > discussed the possibility of introducing a (non-client) library *feature* freeze<br>
> > at Milestone-2 to avoid similar issues in the future.<br>
> > <br>
> > I've started drafting the possible schedule change [2] (note: it's not ready<br>
> > yet, as it does not emphasize that at Milestone-2 we mean a *feature* freeze for<br>
> > libraries, not a "final library release"). The patch has already received some<br>
> > reviews from library maintainers, so I'm calling attention to this change here on<br>
> > the ML.<br>
> <br>
> Repeating what I said on the reviews, I'd really rather not do this. There are a<br>
> couple of reasons for this. Firstly, regarding the proposal itself, this is<br>
> going to make my life as an oslo maintainer harder than it already is. This is a<br>
> crucial point. I'm not aware of anyone whose official job responsibilities<br>
> extend to oslo, and it's very much a case of doing it because no one else is<br>
> doing it. We're a tiny team, pretty overwhelmed with multiple other non-oslo<br>
> $things, and for me at least this means I tend to do oslo work (including<br>
> reviews) in spurts. Introducing a rather large window (6 weeks per cycle, which<br>
> is approximately 1/4 of the total available time in a cycle) during which we<br>
> can't merge the larger, harder-to-review feature patches is simply too long:<br>
> whatever context I would have built up before the freeze would be long since<br>
> gone after a month and a half.<br>
> <br>
> Secondly, regarding the issue that led to this proposal, I don't think this<br>
> proposal would have actually helped. The patch that this proposal stems from was<br>
> actually merged back on July 20th [1]. This was technically after Zed M2, but<br>
> only barely (by 5 days [2]). However, reports of issues didn't appear until September,<br>
> when this was released as oslo.db 12.1.0 [3][4]. If we had released 12.1.0 in<br>
> late July or early August, the issue would have been spotted far earlier, but as<br>
> noted above the oslo team is tiny and overwhelmed, and I would guess the release<br>
> team is in a similar boat (and can't be expected to know about all these<br>
> things).<br>
> <br>
> I also feel compelled to note that this didn't arrive out of the blue. I have<br>
> been shouting about SQLAlchemy 2.0 for over a year now [5] and I have also been<br>
> quite vocal about other oslo.db-related changes on their way [6][7]. For the<br>
> SQLAlchemy 2.0 case specifically, clearly not enough people have been listening.<br>
> I sympathise (again, tiny, overwhelmed teams are not an oslo-specific<br>
> phenomenon), but the pain was going to arrive eventually, and it's just<br>
> unfortunate that it landed with an oslo.db release that was cut so close to the<br>
> deadline (see above). I managed to get nova, cinder and placement prepared well<br>
> ahead of time, but it isn't sustainable for one person to do this for all<br>
> projects. Project teams need to prioritise this stuff ahead of time rather than<br>
> waiting until things are on fire.<br>
> <br>
> Finally, it's worth remembering that this isn't a regular occurrence. Yes, there<br>
> was some pain, but we handled the issue pretty well (IMO) and affected projects<br>
> are now hopefully aware of the ticking tech debt bomb 💣 sitting in their<br>
> codebase. However, as far as I can tell, there's no trend of the oslo team (or<br>
> any other library project) introducing breaking changes like this so close to<br>
> release deadlines, so it does feel a bit like putting the cart before the horse.<br>
<br>
Oh, and one final point here: I didn't actually _know_ this was going to cause<br>
as many issues as it did. Perhaps there's value in an oslo-tips job that tests<br>
service projects against the HEAD of the various oslo libraries. However, that's<br>
a whole load of extra CI jobs that we'd have to find resources for. Testing<br>
in oslo.db itself didn't and wouldn't catch this, because the affected<br>
projects were not deployed by default in the 'tempest-full-py3' job.<br>
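<br>
For illustration, here's a minimal sketch of what such a job definition might<br>
look like (untested; the job name and library list are hypothetical, and it<br>
assumes the openstack-tox parent jobs install required-projects from source<br>
via Zuul's tox siblings handling):<br>
<br>
  - job:<br>
      name: nova-tox-py310-oslo-tips  # hypothetical name<br>
      parent: openstack-tox-py310<br>
      description: Run nova's unit tests against oslo.* checked out at master.<br>
      required-projects:<br>
        # Zuul checks these out at their master tips and installs them<br>
        # from source in place of the released, constrained versions.<br>
        - openstack/oslo.db<br>
        - openstack/oslo.messaging<br>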
<br>
Stephen<br>
<br>
> <br>
> To repeat myself from the top, I'd really rather not do this. If we wanted to<br>
> start cutting oslo releases faster, by all means let's figure out how to do<br>
> that. If we wanted to branch earlier and keep master moving, I'm onboard.<br>
> Preventing us from merging features for a combined ~3 months of the year is a<br>
> non-starter IMO though.<br>
> <br>
> Cheers,<br>
> Stephen<br>
> <br>
> <br>
> [1] <a href="https://review.opendev.org/c/openstack/oslo.db/+/804775">https://review.opendev.org/c/openstack/oslo.db/+/804775</a><br>
> [2] <a href="https://releases.openstack.org/zed/schedule.html">https://releases.openstack.org/zed/schedule.html</a><br>
> [3] <a href="https://review.opendev.org/c/openstack/releases/+/853975/">https://review.opendev.org/c/openstack/releases/+/853975/</a><br>
> [4] <a href="https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030317.html">
https://lists.openstack.org/pipermail/openstack-discuss/2022-September/030317.html</a><br>
> [5] <a href="https://lists.openstack.org/pipermail/openstack-discuss/2021-August/024122.html">
https://lists.openstack.org/pipermail/openstack-discuss/2021-August/024122.html</a><br>
> [6] <a href="https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028197.html">
https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028197.html</a><br>
> [7] <a href="https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028198.html">
https://lists.openstack.org/pipermail/openstack-discuss/2022-April/028198.html</a><br>
> <br>
> > <br>
> > Thanks everyone for the responses in advance,<br>
> > <br>
> > Előd<br>
> > <br>
> > [1]<br>
> > <a href="https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030718.html">
https://lists.openstack.org/pipermail/openstack-discuss/2022-October/030718.html</a><br>
> > [2] <a href="https://review.opendev.org/c/openstack/releases/+/861900">https://review.opendev.org/c/openstack/releases/+/861900</a><br>
> <br>
<br>
</div>
</span></font></div>
</body>
</html>