[puppet] Artificially inflated dependencies in metadata.json of all modules
Hi,

Each time there's a new release, all of the modules get a new set of lower bounds for:
- openstack/keystone
- openstack/openstacklib
- openstack/oslo

I just did a quick check and this really doesn't make sense. For example, going from 18.2.0 to 18.3.0, there's no API breakage that would require a version bump in all modules.

So, could we please stop this nonsense and restore some sanity in our dependency management? FYI, I'm expressing these in the packaged version of the puppet modules, and it increases complexity for no reason.

A lower bound of a dependency should be increased *only* when really mandatory (i.e. a backward compatibility breakage, a change in the API, etc.).

Your thoughts?

Cheers,

Thomas Goirand (zigo)
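For context, the file in question is each module's metadata.json; a fragment (with illustrative module names and version numbers, not taken from any actual release) looks roughly like this:

```json
{
  "name": "openstack-ironic",
  "version": "18.3.0",
  "dependencies": [
    { "name": "openstack/keystone", "version_requirement": ">=18.2.0 <19.0.0" },
    { "name": "openstack/openstacklib", "version_requirement": ">=18.2.0 <19.0.0" },
    { "name": "openstack/oslo", "version_requirement": ">=18.2.0 <19.0.0" }
  ]
}
```

It's these `>=` lower bounds that get revved at every release, whether or not anything incompatible actually changed.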
On Thu, Mar 25, 2021 at 10:30 AM Thomas Goirand <zigo@debian.org> wrote:
Hi,
Each time there's a new release, all of the modules get a new set of lower bounds for:
- openstack/keystone
- openstack/openstacklib
- openstack/oslo
I just did a quick check and this really doesn't make sense. For example, going from 18.2.0 to 18.3.0, there's no API breakage that would require a version bump in all modules.
These are milestone releases and are not tracked independently across all the modules. I believe you are basically asking for the modules to go independent and not have milestone releases. While the version interactions you describe may technically be true for the most part, in that the bumps aren't strictly necessary, we assume the deployment as a whole in case something lands that actually warrants them. The reality is this only affects master, and you likely don't want to track the latest versions until GA.
So, could we please stop this nonsense and restore some sanity in our dependency management? FYI, I'm expressing these in the packaged version of the puppet modules, and it increases complexity for no reason.
It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging. This existing release model has been in place for at least 5 years now if not longer so reworking the tooling/process to understand your requirement seems a bit much given the lack of contributors. If you feel that you could get away with requiring >= 18 rather than a minor version, perhaps you could add that logic in your packaging tooling instead of asking us to stop releasing versions.
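As a sketch of the packaging-side option Alex suggests, a small hypothetical helper could rewrite each lower bound to the major version only before turning the constraints into package relationships (the function name and metadata shape below are illustrative, not existing tooling):

```python
import json
import re


def relax_lower_bounds(metadata_text):
    """Rewrite each dependency's lower bound to its major version only.

    Hypothetical packaging helper: ">=18.3.0 <19.0.0" becomes
    ">=18.0.0 <19.0.0", leaving the upper bound untouched.
    """
    metadata = json.loads(metadata_text)
    for dep in metadata.get("dependencies", []):
        match = re.match(r">=\s*(\d+)\.\d+\.\d+(.*)", dep.get("version_requirement", ""))
        if match:
            dep["version_requirement"] = ">=%s.0.0%s" % (match.group(1), match.group(2))
    return json.dumps(metadata, indent=2)


# Illustrative input, mirroring the metadata.json shape under discussion.
example = json.dumps({
    "dependencies": [
        {"name": "openstack/openstacklib", "version_requirement": ">=18.3.0 <19.0.0"}
    ]
})
print(relax_lower_bounds(example))
```

A step like this would let downstream packaging express "same major release" without asking upstream to stop revving lower bounds at milestones.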
A lower bound of a dependency should be increased *only* when really mandatory (i.e. a backward compatibility breakage, a change in the API, etc.).
Your thoughts?
Cheers,
Thomas Goirand (zigo)
Hi Alex, Thanks for your time replying to my original post. On 3/25/21 5:39 PM, Alex Schultz wrote:
It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging.
Not at all. I'm asking for dependencies to vaguely reflect reality, as we've been doing for years in the Python world of OpenStack.
This existing release model has been in place for at least 5 years now if not longer
Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 years then! :) And if I'm being vocal now, it's because it's been annoying me for that long.
so reworking the tooling/process to understand your requirement
That's not about *my* requirement; it's about dependencies really meaning what they should: expressing incompatibility with earlier versions when it happens.
seems a bit much given the lack of contributors.
The above sentence is IMO the only valid point in your argument: I can understand "not enough time", no problem! :)
If you feel that you could get away with requiring >= 18 rather than a minor version, perhaps you could add that logic in your packaging tooling instead of asking us to stop releasing versions.
I could get away with no version relationship at all, but that's really not the right thing to do. I'd like the dependencies to express at least some kind of reality, which isn't possible the current way.

There's no way to get around that problem with tooling: the tooling will not understand that an API has changed in a module or a puppet provider. Only the authors of the patches will know.

Perhaps we should also try to have some kind of CI testing to validate lower bounds to solve that problem? </troll>

Cheers,

Thomas Goirand (zigo)
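The lower-bound CI idea could start from something as small as extracting the declared minimum of each dependency, so a hypothetical job could pin those exact versions and run integration tests against them (the helper below is a sketch, not existing tooling):

```python
import json


def lower_bound_pins(metadata_text):
    """Return {module_name: lowest_allowed_version} from a metadata.json.

    Sketch for a hypothetical CI job: install every dependency at its
    declared lower bound, then run the integration tests against that set
    to prove the bounds are actually honest.
    """
    metadata = json.loads(metadata_text)
    pins = {}
    for dep in metadata.get("dependencies", []):
        # version_requirement is typically ">=X.Y.Z <N.0.0"; keep the >= clause.
        for clause in dep.get("version_requirement", "").split():
            if clause.startswith(">="):
                pins[dep["name"]] = clause[2:].strip()
    return pins


# Illustrative metadata, not from any actual module release.
metadata = json.dumps({
    "dependencies": [
        {"name": "openstack/keystone", "version_requirement": ">=18.2.0 <19.0.0"},
        {"name": "openstack/oslo", "version_requirement": ">=18.2.0 <19.0.0"},
    ]
})
print(lower_bound_pins(metadata))
```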
On 3/25/21 9:22 PM, Thomas Goirand wrote:
Hi Alex,
Thanks for your time replying to my original post.
On 3/25/21 5:39 PM, Alex Schultz wrote:
It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging.
Not at all. I'm asking for dependencies to vaguely reflect reality, as we've been doing for years in the Python world of OpenStack.
This existing release model has been in place for at least 5 years now if not longer
Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 years then! :)
Let me give an example. Today, puppet-ironic got released as version 18.3.0. The only thing that changed in it since 18.2.0 is a bunch of metadata bumps to 18.2.0...

Why haven't we just kept version 18.2.0? It's the exact same content...

Cheers,

Thomas Goirand (zigo)
On Thu, Mar 25, 2021 at 2:48 PM Thomas Goirand <zigo@debian.org> wrote:
On 3/25/21 9:22 PM, Thomas Goirand wrote:
Hi Alex,
Thanks for your time replying to my original post.
On 3/25/21 5:39 PM, Alex Schultz wrote:
It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging.
Not at all. I'm asking for dependencies to vaguely reflect reality, as we've been doing for years in the Python world of OpenStack.
This existing release model has been in place for at least 5 years now if not longer
Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 years then! :)
Let me give an example. Today, puppet-ironic got released in version 18.3.0. The only thing that changed in it since 18.2.0 is a bunch of metadata bumping to 18.2.0...
Why haven't we just kept version 18.2.0? It's the exact same content...
Release due to milestone 3. Like I said, we could switch to independent releases or just stop doing milestone releases, but then that causes other problems and overhead. Given the lower number of changes in the more recent releases, it might make sense to switch, but I think that's a conversation that isn't necessarily puppet specific and could be expanded to openstack releases in general.

From an RDO standpoint, we build the packages in dlrn, which includes dates/hashes, so the versions only matter for upgrades (we don't enforce the metadata.json requirements). Dropping milestones wouldn't affect us too badly, but we'd still want an initial metadata.json rev at the start of a cycle.

We could hold off on releasing until much later and you wouldn't get the churn. You'd also not be able to match the puppet modules to any milestone release during the current development cycle.
Cheers,
Thomas Goirand (zigo)
My two cents. In an ideal world we should just skip milestones and release when we either 1) need to or 2) have a new major release for a new OpenStack coordinated release.

That said, there is the pain point of having to update metadata.json, and it's been on my todo list to template all metadata.json and have the OpenStack release tooling handle it instead, ever since I fixed the automatic upload to Puppet Forge [1].

Another thing I have wanted to do is pretty much have a CI job running integration testing by simply installing modules from Puppet Forge (with r10k for example), so that we can actually test that our constraints in metadata.json actually result in a working deployment as well.

All this is due to lack of contributors (and time on my part). I would support changes to improve releasing if somebody wants to take that on. However, I would really like it to be finished and not left halfway through, which would make it even worse than today.

Best regards

[1] https://review.opendev.org/c/openstack/project-config/+/627573
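The Forge-based integration testing Tobias describes could start from a minimal r10k Puppetfile like the sketch below; the module names and pinned versions are illustrative, and a real job would derive the pins from metadata.json rather than hard-code them:

```ruby
# Hypothetical Puppetfile for an r10k-based integration job: install the
# released modules from the Puppet Forge so the metadata.json constraints
# are actually exercised, instead of cloning master from git.
forge 'https://forgeapi.puppet.com'

mod 'openstack/ironic',       '18.3.0'
mod 'openstack/keystone',     '18.3.0'
mod 'openstack/openstacklib', '18.3.0'
mod 'openstack/oslo',         '18.3.0'
```

Running `r10k puppetfile install` against such a file and then applying a test manifest would catch a lower bound that claims compatibility it doesn't have.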
On Thu, Mar 25, 2021 at 2:26 PM Thomas Goirand <zigo@debian.org> wrote:
Hi Alex,
Thanks for your time replying to my original post.
On 3/25/21 5:39 PM, Alex Schultz wrote:
It feels like the ask is for more manual version management on the Puppet OpenStack team (because we have to manually manage metadata.json before releasing), rather than just automating version updates for your packaging.
Not at all. I'm asking for dependencies to vaguely reflect reality, as we've been doing for years in the Python world of OpenStack.
This existing release model has been in place for at least 5 years now if not longer
Well... hum... how can I put it nicely... :) Well, it's been wrong for 5 years then! :)
And if I'm being vocal now, it's because it's been annoying me for that long.
Versions in puppet have always been problematic because of the incompatibilities between how python handles milestones vs puppet in the openstack ecosystem. Puppet being purely semver means we can't do any of the pre-release things (there is no 13.3.0.0a1) that the other openstack projects can do. So in order to do this, we're releasing minor versions at points during the main development cycle to allow folks to match up with the upstream milestones.

Is it ideal? No. Does it really matter? No. Puppet modules inherently aren't package friendly. metadata.json is for the forge, where the module dependencies will automagically be sorted out and you want the modules with the correct versions.
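Alex's point about pre-releases can be checked mechanically: SemVer only allows a hyphenated pre-release tag after MAJOR.MINOR.PATCH, so a four-component, PEP 440-style version like 13.3.0.0a1 simply isn't representable. A simplified shape check (deliberately not the full SemVer 2.0.0 grammar):

```python
import re

# Simplified SemVer 2.0.0 shape: MAJOR.MINOR.PATCH with an optional
# hyphenated pre-release tag (e.g. 13.3.0-a1). A four-component or
# PEP 440-style version such as 13.3.0.0a1 does not fit.
SEMVER = re.compile(r"^\d+\.\d+\.\d+(-[0-9A-Za-z.-]+)?$")

for version in ("13.3.0", "13.3.0-a1", "13.3.0.0a1"):
    print(version, bool(SEMVER.match(version)))
```

This is why the modules end up using minor-version bumps as milestone markers: there is no alpha/beta slot in the version scheme the Forge accepts.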
so reworking the tooling/process to understand your requirement
That's not about *my* requirement; it's about dependencies really meaning what they should: expressing incompatibility with earlier versions when it happens.
It is your requirement, because you're putting these constraints in place in your packaging while tracking the current development. As I mentioned, this is only really an issue until GA. Once GA hits, we don't unnecessarily rev these versions, which means there isn't the churn you object to.
seems a bit much given the lack of contributors.
The above sentence is IMO the only valid point in your argument: I can understand "not enough time", no problem! :)
Given the number of modules, trying to maintain the versions with super strict semver reasoning causes more issues than following a looser strategy, knowing that there are likely going to be breaking changes between major releases. In puppet we try to maintain N and N-1 backward compatibility, but given the number of modules it is really hard to track, and the overall benefit of doing so is minimal. If we follow the usual pattern, a major version bump hits on m1 and X.3 is the GA or RC version.
If you feel that you could get away with requiring >= 18 rather than a minor version, perhaps you could add that logic in your packaging tooling instead of asking us to stop releasing versions.
I could get away with no version relationship at all, but that's really not the right thing to do. I'd like the dependencies to vaguely express some kind of reality, which isn't possible the current way. There's no way to get away from that problem with tooling: the tooling will not understand that an API has changed in a module or a puppet provider. It's only the authors of the patches that will know.
There is: don't add a strict requirement beyond the major version, or wait until GA before implementing minimum versions.
Perhaps we should also try to have some kind of CI testing to validate lower bounds to solve that problem? </troll>
Cheers,
Thomas Goirand (zigo)
participants (3)
- Alex Schultz
- Thomas Goirand
- Tobias Urdin