On Wed, Feb 10, 2021 at 08:20:02AM +0000, Sorin Sbarnea wrote:
While switching from testr to stestr is a no-brainer short-term move, I want to mention the maintenance risks.
I personally see the stestr test dependency as a liability because the project is not actively maintained and depends mainly on a single person. It is not unmaintained either.
Because of such risks I preferred to rely on pytest for running tests, as I'd rather depend on an ecosystem that has a *big* pool of maintainers.
I have to take exception to this argument. The number of maintainers of a project doesn't by itself indicate anything; it's a poor proxy for other concerns like the longevity of support for a project, responsiveness to issues, etc. If you really had issues with projects that only had a single primary maintainer, you'd be significantly limiting the pool of software you'd use. A lot of software we use every day has only a single maintainer. So what is your real concern here? I can take some guesses, like issues going unaddressed, but it's hard to actually address the problems you're having with the tool if you only point out the number of maintainers.

My suspicion is that this post is actually a thinly veiled reference to your discontent with the state of the testtools library, which is not very actively maintained (despite having **19 people** with write access to the project). But I should point out that stestr != testtools, and there is no hard requirement for a test suite to use testtools in order to be run with stestr. While testtools is used internally in stestr to handle results streaming (replacing that with a native stestr implementation is on my long-term plan), the actual unittest-compatible framework portion of the library isn't used or required by stestr.

The primary features that using testtools as your base test class provides are attachments support, for embedding things like stdout and stderr in the result stream, and built-in support for fixtures. This can be implemented without using testtools, though (and I have done it): it just requires writing a base test class and result handler that add the expected (but poorly documented) hook points for passing attachments to python-subunit for serialization; in other words, copying the code that does this from testtools into your local test suite.
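To make the "local base test class" idea above concrete, here's a minimal stdlib-only sketch of capturing per-test stdout as an attachment. The names `AttachingTestCase`, `attachments`, and `CAPTURED` are purely illustrative (not stestr, testtools, or subunit API), and the actual serialization of attachments into a subunit stream is not shown:

```python
import io
import unittest
from contextlib import redirect_stdout

# Illustrative sink for captured attachments; a real implementation would
# hand these to a result handler for subunit serialization instead.
CAPTURED = []


class AttachingTestCase(unittest.TestCase):
    """Base class that records each test's stdout as an 'attachment'."""

    def setUp(self):
        super().setUp()
        self.attachments = {}
        self._buf = io.StringIO()
        self._redirect = redirect_stdout(self._buf)
        self._redirect.__enter__()
        # addCleanup runs even when the test fails, so capture always ends.
        self.addCleanup(self._finish_capture)

    def _finish_capture(self):
        self._redirect.__exit__(None, None, None)
        self.attachments["stdout"] = self._buf.getvalue()
        CAPTURED.append(self.attachments)


class ExampleTest(AttachingTestCase):
    def test_prints(self):
        print("hello from the test")


# Run the example suite quietly and collect the attachment.
suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```

Since the base class is plain `unittest.TestCase` underneath, tests written this way still run under any unittest-compatible runner.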
I can say, as the "single person" you call out here, that I'm committed to the long-term support of stestr. It has a large user base outside of just OpenStack (including being used in parts of my current day job); I'm constantly surprised when I get contacted by unexpected users I've never heard about before. It's not just an instance of "not invented here". OpenStack is still by far the largest user of stestr, though, so I do prioritize issues that come up in OpenStack, and I've continued to maintain it through several job changes over the past 5 years.

I'm also not aware of any pressing issues or bugs that are going unaddressed. From OpenStack in particular, I haven't seen any issues filed since Stephen dove deep and fixed that nasty Python 3 short-read bug in python-subunit that we had all been banging our heads on for a long time (I'm still super thankful he did that work). While I'll admit I haven't had time the past couple of years to get to some of the feature development I'd like (mainly finishing https://github.com/mtreinish/stestr/pull/271 and adding https://github.com/mtreinish/stestr/issues/224), none of that seems to be a priority for anyone; they're just nice-to-have features.

That all being said, if your concern is just the bus factor, and that when I'm no longer around at some future date there will be nobody to continue maintenance: I should point out that I'm not a sole maintainer, just the primary maintainer. masayukig is also a maintainer and has all the same access and permissions on the repo and project that I do. We're also open to adding more maintainers, but nobody has ever stepped up and started contributing consistently (or they weren't interested when they were contributing in the past).
Do not take my remark as a proposal to switch to pytest; it is only about risk assessment. I am fully aware of how easy it is to write impure unit tests with pytest, but so far I have not regretted going this route.
I know that OpenStack has historically loved to redo everything in-house and minimise involvement with other open source Python libraries. There are pros and cons to each approach, but I personally prefer to bet on projects that are thriving and that are unlikely to need me to fix framework problems myself.
I think Stephen and Sean expanded on this well elsewhere in the thread: using stdlib unittest everywhere has a lot of value, including letting you use pytest if that's your preferred runner.

It's also worth pointing out that stestr was originally designed to fit the needs of OpenStack, which are pretty unique among all the Python projects I've interacted with, because the existing actively maintained test runners (so excluding testr) couldn't get the throughput we needed for all the Python testing that goes on daily. None of the other Python runners I've used can manage the same levels of throughput that stestr does, or handle parallel execution as well. Especially considering there is another large thread going on right now about how to better utilize gate resources, this seems like a weird time to abandon a tool that tries to maximize our test execution throughput.

stestr is also a hard dependency for Tempest, and the entire execution model used in Tempest depends on stestr to handle scheduling and execution of tests. That's unlikely to ever change, because it would require basically rewriting the core of Tempest for unclear benefit.

-Matt Treinish