[release][infra] Discrepancy between release jobs and the "normal" CI jobs in terms of distro
I did not quote the original subject because it was too long. It was: [release][masakari] - [Release-job-failures] Release of openstack/masakari-monitors for ref refs/tags/(7.0.1|8.0.2|9.0.1) failed

In particular, we are continuing the discussion from: http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018743....

Would what Hervé is proposing be possible? That is, better segregation of distro per release branch? I believe it would be a rare situation, but surely testing something on Bionic and trying to release on Focal might have its quirks.

-yoctozepto
On Fri, Nov 13, 2020, at 4:58 AM, Radosław Piliszek wrote:
I did not quote the original subject because it was too long. It was: [release][masakari] - [Release-job-failures] Release of openstack/masakari-monitors for ref refs/tags/(7.0.1|8.0.2|9.0.1) failed
In particular, we are continuing the discussion from: http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018743....
Would what Hervé is proposing be possible? That is, better segregation of distro per release branch? I believe it would be a rare situation, but surely testing something on Bionic and trying to release on Focal might have its quirks.
The underlying issue is that git tags (what we use to specify a release state) don't map 1:1 to git branches (where we specify Zuul job config). For a long time this essentially meant that if you tried to apply branch-specific job matcher rules to jobs that ran in a refs/tags/* context, those jobs were just skipped.

More recently, Zuul has made changes to do its best to map a tag state to a branch and load configs from that branch value. The behavior when multiple branches match should be considered, though: "If a tag item is enqueued, we look up the branches which contain the commit referenced by the tag. If any of those branches match a branch matcher, the matcher is considered to have matched." [0]

Long story short, this was not possible for a long time but is now possible if you are careful. This means the jobs can be updated to have different behaviors based on branches, and the tags will be matched to branches.

Separately, keep in mind that the jobs were moved from Bionic to Focal to address problems with markdown [1]. It is possible that by moving some branches back to Bionic the jobs will break there.

[0] https://zuul-ci.org/docs/zuul/reference/job_def.html#attr-job.branches
[1] https://review.opendev.org/#/c/761776/
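As a minimal sketch of what branch-based segregation could look like (the nodeset names are illustrative, and the branch value is only an example), a job variant with a branch matcher can pin older branches to Bionic while the default variant stays on Focal; Zuul's tag-to-branch mapping described above then selects the variant whose branch contains the tagged commit:

```yaml
# Hypothetical job variants; Zuul maps the tagged commit back to its
# containing branch(es) and applies the matching variant.
- job:
    name: release-openstack-python
    nodeset: ubuntu-focal

- job:
    name: release-openstack-python
    branches: stable/train
    nodeset: ubuntu-bionic
```

Note the caveat quoted above: if the tagged commit is contained in several branches, any of them matching is enough, so this only behaves predictably when release tags land on exactly one branch.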
On 2020-11-13 13:58:18 +0100 (+0100), Radosław Piliszek wrote: [...]
I believe it would be a rare situation but surely testing something on Bionic and trying to release on Focal might have its quirks.
Honestly, I think the real problem here is that we have a bunch of unnecessary cruft in the release-openstack-python job held over from when we used to use tox to create release artifacts. If you look through the log of a successful build you'll see that we're not actually running tox or installing the projects being released, but we're using the ensure-tox and bindep roles anyway. We may not even need ensure-pip in there.

The important bits of the job are that it checks out the correct state of the repository, then runs `python3 setup.py sdist bdist_wheel`, and then pulls the resulting files back to the executor to be published. That should be fairly consistent no matter what project is being built and no matter what distro it's being built on.

-- Jeremy Stanley
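To illustrate the core build step described above, here is a self-contained sketch that creates a throwaway project (the name `demo-pkg` and its one-line setup.py are made up for the example) and runs the sdist half of the build; the real job additionally runs `bdist_wheel` and then copies `dist/` back to the executor:

```shell
# Build a source distribution for a minimal throwaway project,
# mirroring the `python3 setup.py sdist` step of the release job.
set -e
workdir=$(mktemp -d)
cd "$workdir"

cat > setup.py <<'EOF'
from setuptools import setup
setup(name='demo-pkg', version='0.0.1', py_modules=[])
EOF

python3 setup.py sdist
ls dist/
```

Nothing here depends on tox, bindep, or the distro release, which is the point being made: the artifact build only needs a Python interpreter with setuptools.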
On 2020-11-13 17:20:44 +0000 (+0000), Jeremy Stanley wrote: [...]
Honestly, I think the real problem here is that we have a bunch of unnecessary cruft in the release-openstack-python job held over from when we used to use tox to create release artifacts. If you look through the log of a successful build you'll see that we're not actually running tox or installing the projects being released, but we're using the ensure-tox and bindep roles anyway. [...]
This solution has been proposed: https://review.opendev.org/762699 -- Jeremy Stanley
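Without having inspected the linked change, the direction described above (drop the leftover tox/bindep plumbing, keep only checkout, build, and artifact collection) could be sketched roughly as follows; all playbook paths here are hypothetical placeholders, not the actual contents of the review:

```yaml
# Hypothetical slimmed-down job: remove the leftover ensure-tox and
# bindep roles and keep only what the artifact build actually needs.
- job:
    name: release-openstack-python
    description: Build sdist and wheel artifacts for publication.
    pre-run: playbooks/python/pre.yaml    # e.g. ensure-pip, if needed at all
    run: playbooks/python/build.yaml      # python3 setup.py sdist bdist_wheel
    post-run: playbooks/python/fetch.yaml # pull dist/* back to the executor
```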
I confirm that these changes fixed our issue, thanks!

On Fri, Nov 13, 2020 at 19:21, Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2020-11-13 17:20:44 +0000 (+0000), Jeremy Stanley wrote: [...]
Honestly, I think the real problem here is that we have a bunch of unnecessary cruft in the release-openstack-python job held over from when we used to use tox to create release artifacts. If you look through the log of a successful build you'll see that we're not actually running tox or installing the projects being released, but we're using the ensure-tox and bindep roles anyway. [...]
This solution has been proposed: https://review.opendev.org/762699 -- Jeremy Stanley
--
Hervé Beraud
Senior Software Engineer at Red Hat
irc: hberaud
https://github.com/4383/
https://twitter.com/4383hberaud
participants (4)
- Clark Boylan
- Herve Beraud
- Jeremy Stanley
- Radosław Piliszek