[stable][requirements][neutron] Capping pip in stable branches or not
Hi, now that master branches have recovered from the switch to pip's new resolver, I started looking at the status of the stable branches. tl;dr: for those with open pending backports, all branches are broken at the moment, so please do not recheck.

Thinking about fixing gates for these branches, older EM branches may be fine once the bandit 1.6.3 issue [1] is sorted out, but most need a fix against the new pip resolver. pip has a flag to switch back to old resolver, but this is a temporary one that will only be there for a few weeks [2].
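For reference, the temporary escape hatch looks like this (a sketch based on the pip 20.3 release notes; since the flag is scheduled for removal, it is a stopgap at best):

    # Opt back into the old resolver for a single invocation:
    pip install --use-deprecated=legacy-resolver -r requirements.txt

    # Or for a whole job environment, via pip's usual option-to-env-var mapping:
    export PIP_USE_DEPRECATED=legacy-resolver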
From a quick IRC chat, the general guidance for us was always to leave pip uncapped, and the issues the new resolver exposes are actually broken requirements.
But looking at the master fixes, these indicate large and complicated changes to requirements and lower-constraints. The neutron fix [3] required a few major linter bumps and major version bumps in l-c. I guess it may be doable as a victoria backport, but this will be messy for previous branches. ovn-octavia-provider is a scarier example [4]; from a stable point of view the change by itself does not look good for backport, even just for victoria.

Also, in master, some fixes were possible by bumping versions of dependencies, but how do we fix them if the maximum possible versions have broken deps themselves?

So, how do we proceed to fix stable gates? Ideas and feedback will be most welcome.

[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019292....
[2] http://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html
[3] https://review.opendev.org/c/openstack/neutron/+/766000
[4] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/765872/32/lowe...

--
Bernard Cafarelli
On Fri, 2020-12-11 at 10:20 +0100, Bernard Cafarelli wrote:
Hi,
now that master branches have recovered from the switch to pip's new resolver, I started looking at the status of the stable branches. tl;dr: for those with open pending backports, all branches are broken at the moment, so please do not recheck.
Thinking about fixing gates for these branches, older EM branches may be fine once the bandit 1.6.3 issue [1] is sorted out, but most need a fix against the new pip resolver.
pip has a flag to switch back to old resolver, but this is a temporary one that will only be there for a few weeks [2]
From a quick IRC chat, the general guidance for us was always to leave pip uncapped, and the issues the new resolver exposes are actually broken requirements.
But looking at the master fixes, these indicate large and complicated changes to requirements and lower-constraints. The neutron fix [3] required a few major linter bumps and major version bumps in l-c. I guess it may be doable as a victoria backport, but this will be messy for previous branches.
To make this effort slightly simpler, is there any reason we couldn't drag linters out of 'test-requirements.txt' across the board? Those seem to be the most problematic from what I've seen and they're generally not required to use the project nor to run tests. The exception to this rule is projects that have custom hacking plugins and tests for same, in which case I'm not yet sure what to do.
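Something like the following tox.ini stanza is what I have in mind (a sketch only; the linter names and version pins here are purely illustrative):

    [testenv:pep8]
    # Linters pinned here instead of in test-requirements.txt, so a linter
    # release can never invalidate the project's installable requirements.
    deps =
        hacking>=3.0.1,<3.1.0
        flake8-import-order
    commands =
        flake8 {posargs}

    [testenv:bandit]
    # Kept in its own env so style checks don't drag it in; the exclusion
    # is illustrative, matching the broken 1.6.3 release discussed in [1].
    deps = bandit!=1.6.3
    commands = bandit -r myproject -x tests

Bumping a linter then becomes a one-line tox.ini change on the branch that needs it.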
ovn-octavia-provider is a scarier example [4]; from a stable point of view the change by itself does not look good for backport, even just for victoria.
Also, in master, some fixes were possible by bumping versions of dependencies, but how do we fix them if the maximum possible versions have broken deps themselves?
So, how do we proceed to fix stable gates? Ideas and feedback will be most welcome
This isn't a proper answer, but are there any circumstances where it would be possible to get a functioning deployment using the supposedly incorrect dependencies in lower-constraints.txt right now? For example, considering [4], would the deployment actually work with 'amqp==2.1.1' rather than 'amqp==5.0.2'? In fact, would pip < 20.3, in all its apparent brokenness, truly constrain amqp like this?

I'm going to guess that in many cases it wouldn't be an issue, since these minimum dependencies were most likely selected arbitrarily. However, I also suspect there are cases where this would be an issue and we simply hadn't noticed.

Assuming this to be the case, I think the question is more: do we want to continue to rely on this known-broken feature (by sticking to pip < 20.3) because it's "good enough" for these older branches, or do we want to spend our valuable time going through the dull but necessary work of fixing the dependencies? All of this assumes we find a way to work around dependencies that have broken dependencies themselves. That might well force our hand.

Cheers,
Stephen
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019292.... [2] http://pyfound.blogspot.com/2020/11/pip-20-3-new-resolver.html [3] https://review.opendev.org/c/openstack/neutron/+/766000 [4] https://review.opendev.org/c/openstack/ovn-octavia-provider/+/765872/32/lowe...
On 2020-12-11 10:20:35 +0100 (+0100), Bernard Cafarelli wrote:
now that master branches have recovered from the switch to pip's new resolver, I started looking at the status of the stable branches. tl;dr: for those with open pending backports, all branches are broken at the moment, so please do not recheck.
Was there significant breakage on master branches (aside from lower-constraints jobs, which I've always argued are inherently broken for this very reason)? If so, it didn't come to my attention. Matthew did some fairly extensive testing with the new algorithm across our coordinated dependency set well in advance of the pip release that actually turned it on by default.
Thinking about fixing gates for these branches, older EM branches may be fine once the bandit 1.6.3 issue [1] is sorted out, but most need a fix against the new pip resolver.
pip has a flag to switch back to old resolver, but this is a temporary one that will only be there for a few weeks [2]
From a quick IRC chat, the general guidance for us was always to leave pip uncapped, and the issues the new resolver exposes are actually broken requirements. [...]
Yes, it bears repeating that anywhere the new dep solver is breaking represents a situation where we were previously testing/building things with a different version of some package than we meant to. This is exposing latent bugs in our declared dependencies within those branches. If we decide to use "older" pip, that's basically admitting we don't care, because it's easier to ignore those problems than actually fix them (which, yes, might turn out to be effectively impossible). I'm not trying to be harsh, it's certainly a valid approach, but let's be clear that this is the compromise we're making in that case.

My proposal: actually come to terms with the reality that lower-constraints jobs are a fundamentally broken concept, unless someone does the hard work to implement an inverse version sort in pip itself. If pretty much all the struggle is with those jobs, then dropping them can't hurt, because they failed at testing exactly the thing they were created for in the first place.

--
Jeremy Stanley
On Fri, Dec 11, 2020 at 6:41 AM Jeremy Stanley <fungi@yuggoth.org> wrote:
Yes, it bears repeating that anywhere the new dep solver is breaking represents a situation where we were previously testing/building things with a different version of some package than we meant to. This is exposing latent bugs in our declared dependencies within those branches. If we decide to use "older" pip, that's basically admitting we don't care because it's easier to ignore those problems than actually fix them (which, yes, might turn out to be effectively impossible). I'm not trying to be harsh, it's certainly a valid approach, but let's be clear that this is the compromise we're making in that case.
My proposal: actually come to terms with the reality that lower-constraints jobs are a fundamentally broken concept, unless someone does the hard work to implement an inverse version sort in pip itself. If pretty much all the struggle is with those jobs, then dropping them can't hurt because they failed at testing exactly the thing they were created for in the first place.
I completely agree with Jeremy's proposal, and the sentiment in ironic seems to be leaning in this direction as well. The bottom line is that WE as a community have one of two options: constantly track and increment l-c, or try to roll forward with the most recent versions and attempt to identify issues as we go.

The original push of g-r updates out to projects seemed to be far less painful and gave us visibility into future breakages. Now we're looking at yet another round where we need to fix CI jobs on every repository and branch we maintain. This impinges on our ability to deliver new features and cripples our ability to deliver upstream bug fixes when we are constantly fighting stable CI breakages.

I guess it is kind of obvious that I'm frustrated with breaking stable CI, as it seems to be a giant time sink for myself.

-Julia
Yes, it bears repeating that anywhere the new dep solver is breaking represents a situation where we were previously testing/building things with a different version of some package than we meant to. This is exposing latent bugs in our declared dependencies within those branches. If we decide to use "older" pip, that's basically admitting we don't care because it's easier to ignore those problems than actually fix them (which, yes, might turn out to be effectively impossible). I'm not trying to be harsh, it's certainly a valid approach, but let's be clear that this is the compromise we're making in that case.
+1
My proposal: actually come to terms with the reality that lower-constraints jobs are a fundamentally broken concept, unless someone does the hard work to implement an inverse version sort in pip itself. If pretty much all the struggle is with those jobs, then dropping them can't hurt because they failed at testing exactly the thing they were created for in the first place.
As someone who has spent some time working on l-c jobs/issues, I kind of have to agree with this.

For historical reference, here's the initial proposal for performing lower-constraints testing:

http://lists.openstack.org/pipermail/openstack-dev/2018-March/128352.html

I wasn't part of most of the requirements team discussions around the need for this, but my understanding was that it was to provide a set range of package versions that are expected to work, so anyone packaging OpenStack projects downstream would have an easy way to figure out what versions they can include that will minimize conflicts between all the various packages included in a distro.

I'm not a downstream packager, so I don't have any direct experience to go on here, but my assumption is that the value of providing this is pretty low. I don't think the time the community has put into trying to maintain (or not maintain) their lower-constraints.txt files, and making sure the jobs are configured properly to apply those constraints, has been worth the effort for the value anyone downstream might get out of them.

My vote would be to get rid of these jobs. Distros will need to perform testing of the versions they ultimately package together anyway, so I don't think it is worth the community's time to repeatedly struggle with keeping these things updated.

I do think one useful bit can be when we're tracking our own direct dependencies. One thing that comes to mind from the recent past is we've had cases where something new has been added to something like oslo.config. Lower constraints updates were a good way to make it explicit that we needed at least the newer version of that lib so that we could safely assume the expected functionality would be present. There is some value to that. So if we keep lower-constraints, maybe we just limit it to those specific instances where we have things like that and not try to constrain the entire world (see the sketch below).

Sean
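To make that concrete: a typical lower-constraints job just swaps the constraints file used at install time, roughly like this (a representative tox stanza, not any one project's exact config):

    [testenv:lower-constraints]
    deps =
        -c{toxinidir}/lower-constraints.txt
        -r{toxinidir}/test-requirements.txt
        -r{toxinidir}/requirements.txt

And a slimmed-down lower-constraints.txt of the kind described above would pin only the direct dependencies whose minimums we actually rely on, instead of the whole transitive set (package versions here are illustrative):

    # Direct deps where we depend on a feature first shipped in this release:
    oslo.config==8.1.0
    oslo.messaging==12.1.0

The job would then fail only when we start using something newer than the minimum we claim, which is the one signal these jobs have actually been good for.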
---- On Fri, 11 Dec 2020 09:58:19 -0600 Sean McGinnis <sean.mcginnis@gmx.com> wrote ----
My vote would be to get rid of these jobs. Distros will need to perform testing of the versions they ultimately package together anyway, so I don't think it is worth the community's time to repeatedly struggle with keeping these things updated.
I do think one useful bit can be when we're tracking our own direct dependencies. One thing that comes to mind from the recent past is we've had cases where something new has been added to something like oslo.config. Lower constraints updates were a good way to make it explicit that we needed at least the newer version of that lib so that we could safely assume the expected functionality would be present. There is some value to that. So if we keep lower-constraints, maybe we just limit it to those specific instances where we have things like that and not try to constrain the entire world.
I agree. One of the big chunks of work and time we spent on this was during the move of testing from Ubuntu Bionic to Focal [1], where we had to fix lower-constraints in nearly 400 repos across OpenStack.

For knowing the compatible versions of deps, we already have the information in the requirements.txt file, which tells us what the working versions of all deps are. Finding the minimum working version is not a hard thing to do (basically, the env breaks for any incompatible one). Maintaining it up to date is not worth it compared to the effort it takes. I will also suggest we remove this.

[1] https://storyboard.openstack.org/#!/story/2007865

-gmann
Jeremy nailed it very well. TripleO already removed lower-constraints from most places (some changes may still be waiting to be gated).

Regarding decoupling linting from test-requirements: yes! This was already done by some projects when conflicts appeared. For old branches I personally do not care much even if maintainers decide to disable linting; their main benefit is on main branches.

On Fri, 11 Dec 2020 at 18:14, Radosław Piliszek <radoslaw.piliszek@gmail.com> wrote:
On Fri, Dec 11, 2020 at 5:16 PM Ghanshyam Mann <gmann@ghanshyammann.com> wrote:
Maintaining it up to date is not worth it compared to the effort it takes. I will also suggest we remove this.
Kolla dropped lower-constraints from all the branches.
-yoctozepto
--
/sorin
Hi,

I hope you won't mind me shifting this discussion to [all] - many projects have had to make changes due to the dependency resolver catching some of our uncaught lies. In manila, I've pushed up three changes to fix the CI on the main, stable/victoria and stable/ussuri [1] branches. I used fungi's method of installing things and playing whack-a-mole [2], and Brian Rosmaita's approach [3] of taking the opportunity to raise the minimum required packages for Wallaby. However, this all seems like kludgy maintenance - and, as called out, possibly no one is benefitting from the effort we're putting into this.

Can more distributors and deployment tooling folks comment?

[1] https://review.opendev.org/q/project:openstack/manila+topic:update-requireme...
[2] http://lists.openstack.org/pipermail/openstack-discuss/2020-December/019285....
[3] https://review.opendev.org/c/openstack/cinder/+/766085
Hello all,

While I was frustrated to the point of being willing to throw away the check-requirements job to get around what I thought was blocking my efforts to fix the lower-constraints job (due to me misreading what actually failed in the check-requirements job), I think scrapping the lower-constraints job would be very counterproductive.

We in Glance have had our hands full for the past few cycles, and assuming the lower-constraints job was actually working as intended has led us to neglect some of our requirements housekeeping quite a bit. If it had not broken now, we likely would have neglected it for quite a few cycles more. Due to fixing the said job I had to fix the minimums in our requirements.txt too.

While I'm not sure maintaining lower-constraints.txt has a direct benefit for many, it actually keeps us honest with our requirements and prevents stuff breaking down the line. (This expects that the lower-constraints job actually works from now on and highlights when we start breaking our dependency chain.) Yes, it's a hideous task to get up to date once you have neglected it for a long time, but I see it as a very valuable tool to highlight that I should pay more attention to the requirements and what versions of dependencies we claim to work with.

- jokke
Hi,

I'm probably missing something, but I'm not sure why multiple OpenStack projects that only communicate through APIs would need to coexist in the same virtual environment (which leads to exponential dependency hell).

Regardless of the deployment type or packager, it makes sense to always have exactly one virtual environment per OpenStack project. Projects have various needs and priorities and their own upgrade paths for third-party libraries, and therefore totally independent requirements.txt files. And all lib versions pinned, no lows or highs. The usual best practice.

So what am I missing?

Regards,
Adrian Andreias
https://fleio.com
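(To illustrate the model being proposed; the paths and file names below are just examples, not an existing tool:

    # One isolated venv per service, each free to pin its own dependency set:
    python3 -m venv /opt/venvs/nova
    /opt/venvs/nova/bin/pip install -r nova-requirements-pinned.txt

    python3 -m venv /opt/venvs/neutron
    /opt/venvs/neutron/bin/pip install -r neutron-requirements-pinned.txt

Each service then resolves only its own requirements, never its neighbours'.)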
On 2020-12-15 20:10:36 +0200 (+0200), Adrian Andreias wrote:
I'm probably missing something, but I'm not sure why multiple OpenStack projects that only communicate through APIs would need to coexist in the same virtual environment (which leads to exponential dependency hell).
Regardless of the deployment type or packager, it makes sense to always have exactly one virtual environment per OpenStack project. Projects have various needs and priorities and their own upgrade paths for third-party libraries, and therefore totally independent requirements.txt files. And all lib versions pinned, no lows or highs. The usual best practice. [...]
Got it. So you've developed some magic new containment technology which will allow you to use incompatible versions of nova and oslo.messaging, for example? Those separate OpenStack projects no longer need to be coinstallable? ;)

But also, coinstallability is fundamental to inclusion in any coordinated software distribution. Red Hat or Debian are not going to want to have to maintain lots of different versions of the same dependencies (and duplicate security fix backporting work that many times over). Being able to use consistent versions of your dependency chain has lots of benefits even if you're not going to actually install all the components into one system together at the same time.

--
Jeremy Stanley
On 2020-12-11 20:38:30 +0000 (+0000), Sorin Sbarnea wrote: [...]
Regarding decoupling linting from test-requirements: yes! This was already done by some when conflicts appeared. For old branches I personally do not care much even if maintainers decide to disable linting, their main benefit is on main branches. [...]
To be honest, if I had my way, test-requirements.txt files would die in a fire. Sure it's a little more work to be specific about the individual requirements for each of your testenvs in tox.ini, but the payoff is that people aren't needlessly installing bandit when they run flake8 (for example). The thing we got into the PTI about using a separate doc/requirements.txt is a nice compromise in that direction, at least.

--
Jeremy Stanley
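For anyone who hasn't seen that compromise in practice, the docs environment points tox at the separate file, along the lines of (a sketch; file contents vary per project):

    [testenv:docs]
    # Only the documentation toolchain, nothing from test-requirements.txt:
    deps = -r{toxinidir}/doc/requirements.txt
    commands = sphinx-build -W -b html doc/source doc/build/html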
On Saturday, 12 December 2020 00:12:36 CET Jeremy Stanley wrote:
To be honest, if I had my way, test-requirements.txt files would die in a fire. Sure it's a little more work to be specific about the individual requirements for each of your testenvs in tox.ini, but the payoff is that people aren't needlessly installing bandit when they run flake8 (for example). The thing we got into the PTI about using a separate doc/requirements.txt is a nice compromise in that direction, at least.
Wouldn't this mean tracking requirements in two different kinds of places: the main requirements.txt file, which is still going to be needed even for tests, and the tox environment definitions?

--
Luigi
On 2020-12-13 14:39:58 +0100 (+0100), Luigi Toscano wrote:
Wouldn't this mean tracking requirements in two different kinds of places: the main requirements.txt file, which is still going to be needed even for tests, and the tox environment definitions?
Technically we already do. The requirements.txt file contains actual runtime Python dependencies of the software (technically install_requires in Setuptools parlance). Then we have this vague test-requirements.txt file which installs everything under the sun a test might want, including the kitchen sink. Tox doesn't reuse one virtualenv for multiple testenv definitions, it creates a separate one for each, so for example...

In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going to install coverage, psycopg2, PyMySQL, requests, python-barbicanclient, python-ironicclient, and a whole host of other stuff, including the entire transitive dependency set for everything in there, rather than just the one tool it needs to run. I can't even run the pep8 testenv locally, because to do that I apparently need a Python package named zVMCloudConnector which wants root access to create files like /lib/systemd/system/sdkserver.service and /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually ever run any of this themselves?

Okay, so that one's actually in requirements.txt (might be a good candidate for a separate extras section in setup.cfg instead), but seriously, it's trying to install 182 packages (present count on master) just to do a "quick" style check, and the resulting .tox created from that is 319MB in size. How is that in any way sane? If I tweak the testenv:pep8 definition in tox.ini to set deps=flake8,hacking,mypy and usedevelop=False, and set skipsdist=True in the general tox section, it installs a total of 9 packages for a 36MB .tox directory. It's an extreme example, sure, but remember this is also happening in CI for each patch uploaded, and this setup cost is incurred every time in that context.

This is already solved in a few places in the nova repo, in different ways. One is the docs testenv, which installs doc/requirements.txt (currently 10 mostly Sphinx-related entries) instead of combining all that into test-requirements.txt too. Another is the osprofiler extra in setup.cfg allowing you to `pip install nova[osprofiler]` to get that specific dependency. Yet still another is the bindep testenv, which explicitly declares deps=bindep and so installs absolutely nothing else (save bindep's own dependencies)... or, well, it would, except skipsdist got set to False by https://review.openstack.org/622972 making that testenv effectively pointless, because now `tox -e bindep` has to install nova before it can tell you what packages you're missing to be able to install nova. *sigh*

So anyway, there's a lot of opportunity for improvement, and that's just in nova; I'm sure there are similar situations throughout many of our projects. Using a test-requirements.txt file as a dumping ground for every last package any tox testenv could want may be convenient for tracking things, but it's far from convenient to actually use. The main thing we risk losing is that the requirements-check job currently reports whether entries in test-requirements.txt are compatible with the global upper-constraints.txt in openstack/requirements, so extending that to check dependencies declared in tox.ini or in package extras or additional external requirements lists would be needed if we wanted to preserve that capability.

--
Jeremy Stanley
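For anyone who wants to replicate the experiment, the tweak described above boils down to something like this (a sketch; nova's actual tox.ini differs and also wires mypy into the pep8 run):

    [tox]
    skipsdist = True

    [testenv:pep8]
    # Skip installing nova itself and its 180-odd dependencies;
    # the style checkers only need themselves.
    usedevelop = False
    deps =
        flake8
        hacking
        mypy
    commands =
        flake8 {posargs}

The trade-off is that any check which genuinely imports the project (or a hacking plugin shipped inside it) still needs the package installed, so this can't be applied blindly.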
On 13-12-20 16:33:39, Jeremy Stanley wrote:
Technically we already do. The requirements.txt file contains actual runtime Python dependencies of the software (technically install_requires in Setuptools parlance). Then we have this vague test-requirements.txt file which installs everything under the sun a test might want, including the kitchen sink. Tox doesn't reuse one virtualenv for multiple testenv definitions, it creates a separate one for each, so for example...
That isn't technically true within Nova; multiple tox envs (mypy, pep8, fast8, genconfig, genpolicy, cover, debug and bandit) use the {toxworkdir}/shared envdir for the virtualenv.
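(For anyone wanting the same trick elsewhere, it's just tox's envdir option; a minimal sketch:

    [testenv:pep8]
    envdir = {toxworkdir}/shared

    [testenv:mypy]
    envdir = {toxworkdir}/shared

Every env pointed at the same envdir reuses one virtualenv instead of building its own.)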
In the nova repo, if you `tox -e bandit` or `tox -e pep8` it's going to install coverage, psycopg2, PyMySQL, requests, python-barbicanclient, python-ironicclient, and a whole host of other stuff, including the entire transitive dependency set for everything in there, rather than just the one tool it needs to run.
Yup that's pointless.
I can't even run the pep8 testenv locally because to do that I apparently need a Python package named zVMCloudConnector which wants root access to create files like /lib/systemd/system/sdkserver.service and /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually ever run any of this themselves?
... Which version of that package is the pep8 env pulling in for you? I don't see any such issues with zVMCloudConnector==1.4.1 locally on Fedora 33, tox 3.19.0, pip 20.2.2 etc. Would you mind writing up a launchpad bug for this?
Okay, so that one's actually in requirements.txt (might be a good candidate for a separate extras in the setup.cfg instead), but seriously, it's trying to install 182 packages (present count on master) just to do a "quick" style check, and the resulting .tox created from that is 319MB in size. How is that in any way sane? If I tweak the testenv:pep8 definition in tox.ini to set deps=flake8,hacking,mypy and usedevelop=False, and set skipsdist=True in the general tox section, it installs a total of 9 packages for a 36MB .tox directory. It's an extreme example, sure, but remember this is also happening in CI for each patch uploaded, and this setup cost is incurred every time in that context.
EWww yeah this is awful.
This is already solved in a few places in the nova repo, in different ways. One is the docs testenv, which installs doc/requirements.txt (currently 10 mostly Sphinx-related entries) instead of combining all that into test-requirements.txt too. Another is the osprofiler extra in setup.cfg allowing you to `pip install nova[osprofiler]` to get that specific dependency. Yet still another is the bindep testenv, which explicitly declares deps=bindep and so installs absolutely nothing else (save bindep's own dependencies)... or, well, it would except skipsdist got set to False by https://review.openstack.org/622972 making that testenv effectively pointless because now `tox -e bindep` has to install nova before it can tell you what packages you're missing to be able to install nova. *sigh*
So anyway, there's a lot of opportunity for improvement, and that's just in nova, I'm sure there are similar situations throughout many of our projects. Using a test-requirements.txt file as a dumping ground for every last package any tox testenv could want may be convenient for tracking things, but it's far from convenient to actually use. The main thing we risk losing is that the requirements-check job currently reports whether entries in test-requirements.txt are compatible with the global upper-constraints.txt in openstack/requirements, so extending that to check dependencies declared in tox.ini or in package extras or additional external requirements lists would be needed if we wanted to preserve that capability.
Gibi, should we track all of this in a few launchpad bugs for Nova?

Cheers,

--
Lee Yarwood
A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76
On Mon, Dec 14, 2020 at 09:54, Lee Yarwood <lyarwood@redhat.com> wrote:
Gibi, should we track all of this in a few launchpad bugs for Nova?
Sure, we can open a couple of low-prio, low-hanging-fruit bugs for these.

Cheers,
gibi
On 2020-12-14 09:54:53 +0000 (+0000), Lee Yarwood wrote:
On 13-12-20 16:33:39, Jeremy Stanley wrote: [...]
Tox doesn't reuse one virtualenv for multiple testenv definitions, it creates a separate one for each, so for example...
That isn't technically true within Nova, multiple tox envs use the {toxworkdir}/shared envdir for the virtualenv.
mypy, pep8, fast8, genconfig, genpolicy, cover, debug and bandit. [...]
Neat, I suppose that's not a terrible workaround for some of this, though I wonder if we'll see it cause problems over time with the new dep solver in pip if some of those tools grow any conflicting transitive dependencies.
I can't even run the pep8 testenv locally because to do that I apparently need a Python package named zVMCloudConnector which wants root access to create files like /lib/systemd/system/sdkserver.service and /etc/sudoers.d/sudoers-zvmsdk and /var/lib/zvmsdk/* and /etc/zvmsdk/* in my system. WHAT?!? Do nova's developers actually ever run any of this themselves?
...
Which version of that package is the pep8 env pulling in for you?
I don't see any such issues with zVMCloudConnector==1.4.1 locally on Fedora 33, tox 3.19.0, pip 20.2.2 etc.
Would you mind writing up a launchpad bug for this? [...]
I think I've worked out why you're not seeing it. My tox is installed with the tox-venv plugin so that it will use the venv module instead of virtualenv, and that doesn't seed a copy of the wheel library into the testenv by default. Apparently if you try to install zVMCloudConnector via `setup.py install` instead of making a wheel (which is what happens by default if the wheel library is absent), this is the result.

For the curious, a simpler reproducer is:

    rm -rf ~/.cache/pip    # in case you have a wheel cached for it
    python3 -m venv foo    # simple venv without the wheel module
    foo/bin/pip install zVMCloudConnector

Install wheel into the venv first and then zVMCloudConnector installs cleanly, and indeed if I test with just plain tox (no tox-venv plugin installed) it ends up getting a wheel for zVMCloudConnector so doesn't hit the root-only build steps from its sdist.

A bit of research indicates tox-venv was deprecated earlier this year once virtualenv gained the ability to delegate creation to the venv module itself, and even if you set VIRTUALENV_CREATOR=venv in the setenv list in tox.ini or passenv it from the calling environment you're not going to run into this, because venvs as built from virtualenv get wheel seeded in them by default.

Now that I've gotten to the bottom of this, given it's a bit of a corner case, I'm on the fence about filing a bug about it against python-zvm-sdk in LP and likely won't unless folks actually think it'll be useful to relate.
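(The working variant, spelled out; this is just the fix described above:

    python3 -m venv foo
    foo/bin/pip install wheel               # seed wheel first
    foo/bin/pip install zVMCloudConnector   # now builds and installs as a wheel

With wheel present, pip builds a wheel from the sdist and never runs the package's root-only `setup.py install` steps.)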
Gibi, should we track all of this in a few launchpad bugs for Nova?
To be clear, I wasn't trying to single out nova, it was simply a convenient example of where the idea of using a single test-requirements.txt file may have tipped over from convenience into inconvenience. (And then the zVMCloudConnector tangent of course.)

--
Jeremy Stanley
On Sun, Dec 13, 2020 at 5:36 PM Jeremy Stanley <fungi@yuggoth.org> wrote:
Okay, so that one's actually in requirements.txt (might be a good candidate for a separate extras in the setup.cfg instead), but seriously, it's trying to install 182 packages (present count on master) just to do a "quick" style check, and the resulting .tox created from that is 319MB in size. How is that in any way sane? If I tweak the testenv:pep8 definition in tox.ini to set deps=flake8,hacking,mypy and usedevelop=False, and set skipsdist=True in the general tox section, it installs a total of 9 packages for a 36MB .tox directory. It's an extreme example, sure, but remember this is also happening in CI for each patch uploaded, and this setup cost is incurred every time in that context.
Thanks for the hint btw, I'll apply it to our repos.
-- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
Top-posting to recap all the interesting answers and answer my initial mail. The overall feeling I get is that, even with the changes that may be needed to satisfy the new resolver, we should be fine applying these to stable branches:

* lower-constraints was discussed a lot; this is where the largest changes were spotted, but they are OK given the current use/effectiveness of these jobs (which may even be dropped soon)
* linters can be extracted from test-requirements, to limit linter version bumps. I had quickly tried that for the neutron fix and it had failed in some other job, but I will take another look in a separate patch. Then if needed this change can be squashed with pip requirements fixes in stable branches.
* For some recent branches (victoria for example), style fixes are small, so this can just be cherry-picked from master to have a working branch
* Other requirements bumps should be OK, as they actually indicate the proper needed versions now
* If we ever hit a change (an old third-party dependency) that cannot be fixed without going over upper-constraints, then we may have to cap pip. Hopefully, this will not be hit.
* https://review.opendev.org/q/I8f24b839bf42e2fb9803dc7df3a30ae20cf264eb - the fix for bandit 1.6.3 may help to limit the impact (I did not retest yet)

If all of this sounds good, then I guess it will be time to play whack-a-stable-mole.

On Mon, 14 Dec 2020 at 14:03, Dmitry Tantsur <dtantsur@redhat.com> wrote:
Thanks for the hint btw, I'll apply it to our repos.
I will have to check that too, making these jobs lighter for CI is always nice!
-- Bernard Cafarelli
Seeing the problems caused by lower-constraints jobs (not only now), and reading the opinions here, I also vote for removing them. Though the intention of the lower-constraints job was good, it seems to be clearly broken, and it would be quite resource- (and time-) consuming to properly fix every issue in every project and on every branch. (The other way is to constrain pip - or its behavior - which does not really solve the issue, just hides it.)

Előd
participants (16)

- Adrian Andreias
- Balázs Gibizer
- Bernard Cafarelli
- Dmitry Tantsur
- Előd Illés
- Erno Kuvaja
- Ghanshyam Mann
- Goutham Pacha Ravi
- Jeremy Stanley
- Julia Kreger
- Lee Yarwood
- Luigi Toscano
- Radosław Piliszek
- Sean McGinnis
- Sorin Sbarnea
- Stephen Finucane