Nefarious cap bandits are running amok in the OpenStack community! Won't someone take a stand against these villainous headwear thieves?! Oh, sorry, just pasted the elevator pitch for my new novel. ;-)

Actually, this email is to summarize the plan we came up with in the Oslo meeting this morning. Since we have a bunch of projects affected by the Bandit breakage I wanted to make sure we had a common fix so we don't have a bunch of slightly different approaches in each project. The plan we agreed on in the meeting was to push a two-patch series to each repo - one to cap bandit <1.6.0 and one to uncap it with a !=1.6.0 exclusion. The first should be merged immediately to unblock CI, and the latter can be rechecked once bandit 1.6.1 releases to verify that it fixes the problem for us.

We chose this approach instead of just tweaking the exclusion in tox.ini because it's not clear that the current behavior will continue once Bandit fixes the bug. Assuming they restore the old behavior, this should require the least churn in our repos and means we're still compatible with older versions that people may already have installed.

I started pushing patches under https://review.opendev.org/#/q/topic:cap-bandit (which prompted the digression to start this email ;-) to implement this plan. This is mostly intended to be informational, but if you have any concerns with the plan above please do let us know immediately.

Thanks.

-Ben
On 5/13/19 12:23 PM, Ben Nemec wrote:
[...]
Oh, and since sphinx is also breaking the Oslo world, I guess we're going to have to include the sphinx requirements fix in these first patches: https://review.opendev.org/#/c/658857/

That's passing the requirements job so it should unblock us.

/me is off to squash some patches
I started an ethercalc to track this since we have patches from multiple people now: https://ethercalc.openstack.org/ml1qj9xrnyfg

If you're interested in pumping your commit stats feel free to take one of the projects that doesn't have a review listed and get that submitted. It would be great to have some non-cores do that so the cores can approve them. We've been following a single-approver model for this since all of the patches are basically the same.

If you do submit a patch to fix this, please add it to the ethercalc too. I've been marking merged patches in green to keep track of our progress.

Thanks.

On 5/13/19 12:40 PM, Ben Nemec wrote:
[...]
On 13/05/19 1:40 PM, Ben Nemec wrote:
[...]
I take it that just blocking 1.6.0 in global-requirements isn't an option? (Would it not work, or just break every project's requirements job? I could live with the latter since they're broken anyway because of the sphinx issue below...)
Oh, and since sphinx is also breaking the Oslo world, I guess we're going to have to include the sphinx requirements fix in these first patches: https://review.opendev.org/#/c/658857/
It's breaking the whole world and I'm actually not sure there's a good reason for it. Who cares if sphinx 2.0 doesn't run on Python 2.7 when we set and achieved a goal in Stein to only run docs jobs under Python 3? It's unavoidable for stable/rocky and earlier but it seems like the pain on master is not necessary.
Zane Bitter <zbitter@redhat.com> writes:
[...]
I take it that just blocking 1.6.0 in global-requirements isn't an option? (Would it not work, or just break every project's requirements job? I could live with the latter since they're broken anyway because of the sphinx issue below...)
Because bandit is a "linter" it is in the blacklist in the requirements repo, which means it is not constrained there. Projects are expected to manage the versions of linters they use, and roll forward when they are ready to deal with any new rules introduced by the linters (either by following or disabling them).

So, no, unfortunately we can't do this globally through the requirements repo right now.

--
Doug
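[For concreteness: per-project linter management like Doug describes usually means a pinned line in each repo's test-requirements.txt. The two-patch plan would then look roughly like this; these lines are illustrative, not copied from any particular repo:]

```
# Patch 1 (merge now): hard cap so the broken 1.6.0 can never be installed
bandit<1.6.0

# Patch 2 (hold until 1.6.1 proves good): exclude only the broken release,
# allowing newer versions once they exist
bandit!=1.6.0
```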
Should uncap patches be -W until the next bandit release?

On Tue, May 14, 2019 at 17:26, Doug Hellmann <doug@doughellmann.com> wrote:
[...]
--
Moisés Guimarães
Software Engineer
Red Hat <https://www.redhat.com>
Moises Guimaraes de Medeiros <moguimar@redhat.com> writes:
Should uncap patches be -W until next bandit release?
I would expect them to fail the linter job until then, so I don't think that's strictly needed.
-- Doug
Doug, they pass now, and might fail once 1.6.1 is out and the behavior is not fixed, but that would probably need a recheck on a passed job. The -W would be just a reminder not to merge them by mistake.

On Wed, May 15, 2019 at 14:52, Doug Hellmann <doug@doughellmann.com> wrote:
[...]
--
Moisés Guimarães
Software Engineer
Red Hat <https://www.redhat.com>
Yeah, I've just been relying on our cores to not merge the uncap patches before we're ready. I'm fine with marking them WIP too though.

On 5/15/19 7:55 AM, Moises Guimaraes de Medeiros wrote:
Doug, they pass now, and might fail once 1.6.1 is out and the behavior is not fixed, but that will probably need a recheck on a passed job. The -W would be just a reminder not to merge them by mistake.
[...]
If it helps, upper-constraints still has not been updated (and is -W'd): https://review.opendev.org/658767

On 19-05-15 10:38:13, Ben Nemec wrote:
Yeah, I've just been relying on our cores to not merge the uncap patches before we're ready. I'm fine with marking them WIP too though.
-- Matthew Thode
On 5/15/19 10:49 AM, Matthew Thode wrote:
If it helps, upper-constraints still has not been updated (and is -W'd)
I'm a little confused by this patch. We don't use upper-constraints for linters or this probably wouldn't have broken us. It looks like that is just updating a test file?
On 19-05-15 11:18:51, Ben Nemec wrote:
I'm a little confused by this patch. We don't use upper-constraints for linters or this probably wouldn't have broken us. It looks like that is just updating a test file?
You're right, I'm not sure why that was done. Commented on the review (going to suggest abandoning it).

--
Matthew Thode
Moises Guimaraes de Medeiros <moguimar@redhat.com> writes:
Doug, they pass now, and might fail once 1.6.1 is out and the behavior is not fixed, but that will probably need a recheck on a passed job. The -W would be just a reminder not to merge them by mistake.
Oh, I guess I assumed we would only be going through this process for repos that are broken. It makes sense to be consistent across all of them, though, if that was the goal. -- Doug
On 2019-05-15 12:52:05 -0400 (-0400), Doug Hellmann wrote:
Oh, I guess I assumed we would only be going through this process for repos that are broken. It makes sense to be consistent across all of them, though, if that was the goal.
Only doing it for projects which actually hit that problem seems like a reasonable approach, since we don't expect them to all coordinate on a common version of static analyzers and linters anyway (hence bandit being in the constraints blacklist to start with).

--
Jeremy Stanley
On 5/15/19 11:52 AM, Doug Hellmann wrote:
Oh, I guess I assumed we would only be going through this process for repos that are broken. It makes sense to be consistent across all of them, though, if that was the goal.
The reason they pass right now is that there is no newer release than 1.6.0, so the != exclusion is effectively the same as the < cap. Once 1.6.1 releases that won't be the case, but in the meantime it means that both forms behave the same.

The reason we did it this way is to prevent 1.6.1 from blocking all of the repos again if it doesn't fix the problem or introduces a new one. If so, it blocks the uncapping patches only and we can deal with it on our own schedule.
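[To make the mechanics concrete, here is a small self-contained sketch, not from the thread, of why `<1.6.0` and `!=1.6.0` resolve to the same version today but diverge once 1.6.1 exists. The `newest_allowed` helper is hypothetical; real resolution happens in pip's machinery.]

```python
# Minimal model of pip choosing the newest release allowed by a specifier.

def newest_allowed(releases, allowed):
    """Return the newest release (as a version tuple) that the predicate permits."""
    candidates = [r for r in releases if allowed(r)]
    return max(candidates) if candidates else None

cap = lambda v: v < (1, 6, 0)       # models: bandit<1.6.0
exclude = lambda v: v != (1, 6, 0)  # models: bandit!=1.6.0

# Before 1.6.1 exists, both specifiers select 1.5.1 - the forms are equivalent,
# which is why the uncap patches pass CI right now.
available = [(1, 5, 0), (1, 5, 1), (1, 6, 0)]
assert newest_allowed(available, cap) == (1, 5, 1)
assert newest_allowed(available, exclude) == (1, 5, 1)

# Once 1.6.1 releases, the exclusion picks it up while the cap does not -
# so rechecking the uncap patch tests 1.6.1 without risking other repos.
available.append((1, 6, 1))
assert newest_allowed(available, cap) == (1, 5, 1)
assert newest_allowed(available, exclude) == (1, 6, 1)
```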
On 2019-05-15 13:08:32 -0500 (-0500), Ben Nemec wrote: [...]
The reason we did it this way is to prevent 1.6.1 from blocking all of the repos again if it doesn't fix the problem or introduces a new one. If so, it blocks the uncapping patches only and we can deal with it on our own schedule.
Normally, if it had been treated like other linters, projects should have been guarding against unanticipated upgrades by specifying something like a <1.6.0 version and then expressly advancing that cap at the start of a new cycle when they're prepared to deal with fixing whatever problems are identified.

--
Jeremy Stanley
On 5/15/19 1:40 PM, Jeremy Stanley wrote:
Normally, if it had been treated like other linters, projects should have been guarding against unanticipated upgrades by specifying something like a <1.6.0 version and then expressly advancing that cap at the start of a new cycle when they're prepared to deal with fixing whatever problems are identified.
Yeah, I guess I don't know why we weren't doing that with bandit. Maybe just that it hadn't broken us previously, in which case we might want to drop the uncap patches entirely.
On 5/15/19 2:07 PM, Ben Nemec wrote:
Yeah, I guess I don't know why we weren't doing that with bandit. Maybe just that it hadn't broken us previously, in which case we might want to drop the uncap patches entirely.
We discussed this in the Oslo meeting and agreed to leave the cap in place until we choose to move to a newer version of bandit. That brings bandit into alignment with the rest of our linters. I'll go through and abandon the existing uncap patches unless someone objects.
On Tue, May 14, 2019 at 11:09:26AM -0400, Zane Bitter wrote:
It's breaking the whole world and I'm actually not sure there's a good reason for it. Who cares if sphinx 2.0 doesn't run on Python 2.7 when we set and achieved a goal in Stein to only run docs jobs under Python 3? It's unavoidable for stable/rocky and earlier but it seems like the pain on master is not necessary.
While we support python2 *anywhere* we need to do this. The current tools (both ours and the broader python ecosystem) need to have these markers.

I apologise that we managed to mess this up. We're looking at how we can avoid this in the future, but we don't really get any kind of signal about $library dropping support for $python_version. The py2 thing is more visible than a py3 minor release, but they're broadly the same thing.

Yours Tony.
Hello,

To help us be more reactive on similar issues related to requirements that drop Python 2 (the sphinx use case), I've submitted a patch, https://review.opendev.org/659289, to schedule "check-requirements" daily.

Normally, with that in place, if openstack/requirements adds changes that risk breaking our CI, we will be informed quickly by this periodic job. I guess we will face many similar issues in the next months due to the Python 2.7 final countdown and libraries that will drop Python 2.7 support.

For the moment I've only submitted my patch on oslo.log, but if it works we can then copy it to all the Oslo projects. I'm not a Zuul expert and I don't know if my patch is correct or not, so please feel free to review it and leave comments to let me know how to proceed with periodic jobs.

Also, Oslo cores could check the result of this job daily to know if actions are needed, and inform the team via the ML or something like that to fix the issue efficiently.

Thoughts?

Yours Hervé.

On Thu, May 16, 2019 at 07:44, Tony Breeds <tony@bakeyournoodle.com> wrote:
[...]
-- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE-----
In other words, I propose to schedule a periodic requirements check on the Oslo projects to detect requirements-related CI errors (from the py2.7 support drop) as soon as possible, and to fix them up front rather than during the standard review process (on unrelated fix or feature patches).
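For reference, a periodic Zuul job of the kind Hervé describes would look roughly like this. This is a sketch only; the job name is an assumption, not the contents of the actual review:

```yaml
# Hypothetical .zuul.yaml fragment for oslo.log: run the requirements
# check once a day in the periodic pipeline instead of only on patches.
- project:
    periodic:
      jobs:
        - requirements-check
```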
On 19-05-16 11:55:47, Herve Beraud wrote:
Would it be better to have one job that people monitor? requirements-tox-py27-check-uc may work for you (in the requirements project) as it tests co-installability. -- Matthew Thode
On 5/16/19 4:29 AM, Herve Beraud wrote:
This is generally the problem with periodic jobs. People don't pay attention to them so issues still don't get noticed until they start breaking live patches. As I said in IRC, if you're willing to commit to checking the periodic jobs daily I'm okay with adding them. I know when dims was PTL he had nightly jobs running on all of the Oslo repos, but I think that was in his own private infra so I don't know that we could reuse what he had.
On Thu, May 16, 2019 at 5:30 PM, Ben Nemec <openstack@nemebean.com> wrote:
This is generally the problem with periodic jobs. People don't pay attention to them so issues still don't get noticed until they start breaking live patches. As I said in IRC, if you're willing to commit to checking the periodic jobs daily I'm okay with adding them.
I'm OK with paying attention to and checking the periodic jobs, but sometimes I'll be away (PTO, etc.) and other people will need to watch them during those periods.
On 5/16/19 12:41 AM, Tony Breeds wrote:
The biggest problem here was the timing with the Bandit issue. Normally this would have only blocked patches that needed to change requirements, but because most of our repos needed a requirements change to unblock them it became a bigger issue than it normally would have been. That said, it would be nice if we could come up with a less intrusive way to handle this in the future. I'd rather not have to keep merging a ton of requirements patches when dependencies drop py2 support.
On 19-05-16 10:21:04, Ben Nemec wrote:
We are trying to determine if using constraints alone is sufficient. pip not having a depsolver strikes again. -- Matthew Thode
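The constraints mechanism Matthew mentions works by pinning every transitive dependency at install time, so a breaking release never gets picked up until the pin itself is bumped. Roughly, with illustrative file names:

```shell
# Install a project's requirements with every dependency pinned to the
# versions listed in the constraints file; a newly released sphinx 2.0
# is ignored until upper-constraints.txt itself is updated.
pip install -c upper-constraints.txt -r requirements.txt
```

Whether constraints alone suffice (without per-project markers) is exactly the open question here, since pip applies no dependency resolution beyond taking these pins at face value.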
Since it seems we need to backport this to the stable branches, I've added stable branch columns to https://ethercalc.openstack.org/ml1qj9xrnyfg

I know some backports have already been proposed, so if people can fill in the appropriate columns that would help avoid unnecessary work on projects that are already done.

Hopefully these will be clean backports, but I know at least one included a change to requirements.txt too. We'll need to make sure we don't accidentally backport any of those, or we won't be able to release the stable branches.

As discussed in the meeting this week, we're only planning to backport to the active branches. The em branches can be updated if necessary, but we don't need to do a mass backport to them.

I think that's it. Let me know if you have any comments or questions.

Thanks.

-Ben
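Concretely, the two-patch series from the original plan amounts to requirement changes along these lines in each project's test-requirements.txt (any lower bound a project carries stays as-is; only the cap/exclusion changes):

```text
# Patch 1 (merge immediately to unblock CI): cap bandit below 1.6.0
bandit<1.6.0

# Patch 2 (recheck once 1.6.1 releases): uncap, excluding only 1.6.0
bandit!=1.6.0
```

These are the changes being considered for backport; a backport that also touched requirements.txt is what the warning above is about.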
On 2019-06-05 10:28:35 -0500 (-0500), Ben Nemec wrote:
Since it seems we need to backport this to the stable branches [...]
You've probably been following along, but a fix for https://github.com/PyCQA/bandit/issues/488 was merged upstream on May 26, so now we're just waiting for a new release to be tagged. It may make sense to spend some time lobbying them to accelerate their release process if it means less time spent backporting exclusions to a bazillion projects. -- Jeremy Stanley
I think waiting for the Bandit release is a good idea.
Agreed. There's probably an argument that we should cap bandit on stable branches anyway, but it would save us a lot of tedious patches if we just hope bandit doesn't break us again. :-)
On 2019-06-05 11:27:09 -0500 (-0500), Ben Nemec wrote:
Agreed. There's probably an argument that we should cap bandit on stable branches anyway, but it would save us a lot of tedious patches if we just hope bandit doesn't break us again. :-) [...]
Oh, yes, I think capping on stable is probably a fine idea regardless (we should be doing that anyway for all our static analyzers on principle). What I meant is that it would likely render those updates no longer urgent. -- Jeremy Stanley
+1
+1, Jeremy.

--
Moisés Guimarães
Software Engineer
Red Hat
participants (8)
- Ben Nemec
- Doug Hellmann
- Herve Beraud
- Jeremy Stanley
- Matthew Thode
- Moises Guimaraes de Medeiros
- Tony Breeds
- Zane Bitter