[puppet] Artificially inflated dependencies in metadata.json of all modules
aschultz at redhat.com
Thu Mar 25 20:49:03 UTC 2021
On Thu, Mar 25, 2021 at 2:26 PM Thomas Goirand <zigo at debian.org> wrote:
> Hi Alex,
> Thanks for your time replying to my original post.
> On 3/25/21 5:39 PM, Alex Schultz wrote:
> > It feels like the ask is for more manual version management on the
> > Puppet OpenStack team (because we have to manually manage metadata.json
> > before releasing), rather than just automating version updates for your
> > packaging.
> Not at all. I'm asking for dependency to vaguely reflect reality, like
> we've been doing this for years in the Python world of OpenStack.
> > This existing release model has been in place for at least 5
> > years now if not longer
> Well... hum... how can I put it nicely... :) Well, it's been wrong for 5
> years then! :)
> And if I'm being vocal now, it's because it's been annoying me for that long.
Versions in puppet have always been problematic because of the
incompatibility between how python handles milestones and how puppet does in
the openstack ecosystem. Puppet being purely semver means we can't do any of
the pre-release versioning (there is no 18.0.0a1) that the other openstack
projects can do. So to let folks match up to the upstream milestones, we
release minor versions at points during the main development cycle. Is it
ideal? No. Does it really matter? No. Puppet modules inherently aren't
package friendly. metadata.json is for the forge, where the module
dependencies get sorted out automatically and you want the modules with the
correct versions.
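For context, this is the kind of entry we're talking about. A sketch of a forge-style metadata.json dependency (the module name and bounds here are illustrative, not taken from any particular module):

```json
{
  "dependencies": [
    {
      "name": "openstack/openstacklib",
      "version_requirement": ">= 18.0.0 < 19.0.0"
    }
  ]
}
```

The forge resolves these ranges automatically at install time, which is why they're tuned for forge consumption rather than for distro packaging.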
> > so reworking the tooling/process to understand your requirement
> That's not for *my* requirement, but for dependencies to really mean
> what they should: express incompatibility with earlier versions when
> this happens.
It is your requirement because you're putting these constraints in place in
your packaging while tracking the current development. As I mentioned, this
is only really an issue until GA. Once GA hits, we don't unnecessarily rev
these versions, so there isn't the churn you don't particularly care for.
> > seems a bit much given the lack of contributors.
> The above sentence is IMO the only valid one of your argumentation: I
> can understand "not enough time", no problem! :)
Given the number of modules, trying to maintain versions under strict semver
causes more issues than following a looser strategy that accepts there will
likely be breaking changes between major releases. In puppet we try to
maintain N and N-1 backwards compatibility, but given the number of modules
it is really hard to track and the overall benefit of doing so is minimal.
Following our usual pattern, the major version bump lands at milestone 1 and
X.3 is the GA or RC version.
> > If you feel that you could get away with requiring >= 18 rather than a
> > minor version, perhaps you could add that logic in your packaging
> > tooling instead of asking us to stop releasing versions.
> I could get away with no version relationship at all, but that's really
> not the right thing to do. I'd like the dependencies to vaguely express
> some kind of reality, which isn't possible the current way. There's no
> way to get away from that problem with tooling: the tooling will not
> understand that an API has changed in a module or a puppet provider.
> It's only the authors of the patches that will know.
There is: don't add a strict requirement beyond the major version, or wait
until GA before setting minimum versions.
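To make that concrete, a sketch of what a major-version-only constraint might look like in a debian/control Depends field (the package name here is hypothetical):

```
Depends: puppet-module-openstack-openstacklib (>= 18),
         puppet-module-openstack-openstacklib (<< 19)
```

This keeps the packaging within one major release without chasing every minor rev made during the development cycle.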
> Perhaps we should also try to have some kind of CI testing to validate
> lower bounds to solve that problem? </troll>
> Thomas Goirand (zigo)