On Thu, Nov 6, 2025, at 7:52 AM, Artem Goncharov wrote:
Hi,
processes are good, but they are not meant to stay unchanged forever. The technology stack evolves constantly: what worked well yesterday does not work today, and we need to address those issues. At the same time, we should not shoot ourselves in the foot with restrictions that slow down progress.
Yes, from memory the major considerations for trying to enforce a standards-compliant toolchain were to enable parallel test runs to take advantage of the CI resources we have, and to ensure that developers can use whatever test runner they choose, whether that be the default, their IDE, or even pytest. At the time pytest didn't even exist, and when pytest did show up, its support for parallel testing was poor, as was its integration into other tools. Since then, I think both of these issues are no longer problems with using pytest. Its popularity means it integrates well despite many non-standard behaviors, and others' need for parallel testing has led to fixes for that functionality. I think parallel testing and the ability to run tests outside the CI system are both important today, but I don't think pytest is an obstacle to achieving those goals anymore.

Another consideration to keep in mind is whether or not pytest poses any problems for the support time frames of OpenStack releases. I suspect not, as old releases can simply use old versions of pytest, but it may be good to double check.

Note, I think restrictions can be a good thing. Design constraints and considerations for use cases outside of a developer's laptop are often useful. I don't think this should be taken as an endorsement of a free-for-all, but we should consider what our goals are and model the rules around them. I don't think pytest is in opposition to the goals we once had, and we can modify those goals too.
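To make the runner-agnostic point concrete, here is a minimal sketch of my own (not anything from an actual project) of a stdlib-style test that python -m unittest, stestr, and pytest can all discover and run, so the choice of runner stays with the developer or the CI system:

    import unittest


    class TestFlavorName(unittest.TestCase):
        # A plain stdlib test case: no pytest-specific features, so
        # python -m unittest, stestr, and pytest can all discover
        # and run it.

        def test_default_name(self):
            self.assertEqual("m1.tiny", "m1." + "tiny")

For parallelism, stestr partitions tests across workers by default, and pytest gets the equivalent from the pytest-xdist plugin (pytest -n auto).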
We can still stick to tox (whether that is what we want in the age of uv is a different question). But from my point of view, the policy should not state which framework tox must invoke. I do not believe there will ever be a single set of tools and libs that just works for every single project. It is good to try to unify what we all use, but that should not block us. I totally agree that pytest is so common now that we should just accept it. But I want to prevent us from merely extending the whitelist: get rid of the list completely. Tox is there to wrap, and let's just draw the line here.
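Concretely, the wrapper role could look like this in a project's tox.ini (a rough sketch of mine; the command is whatever the project chooses):

    [testenv:py3]
    # tox remains the entry point the policy cares about; what it
    # invokes underneath is the project's own choice.
    deps = pytest
    commands = pytest {posargs}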
If someone proposes a change to the policy making it possible to start using pytest, count on my vote. And I would definitely start using it in certain projects, since I have had a more positive experience with it in some side projects than with unittest.
P.S. This is what AI says when you ask it to compare unittest and pytest: Here is a comparison of the `pytest` and `unittest` testing frameworks in Python.
In short, *`pytest` is the modern, more flexible, and feature-rich standard* for Python testing, favored for its simple syntax and powerful features. *`unittest` is the built-in standard library module*, which is reliable and requires no installation, but is more verbose and less flexible.
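To make the "more verbose" claim concrete, the same check in both styles looks roughly like this (my sketch, not part of the AI output):

    import unittest


    # unittest style: tests live in a class and use assert* methods.
    class TestUpper(unittest.TestCase):
        def test_upper(self):
            self.assertEqual("foo".upper(), "FOO")


    # pytest style: a bare function with a plain assert; pytest
    # rewrites the assert so failures still report both values.
    def test_upper():
        assert "foo".upper() == "FOO"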
Regards, Artem
On Thu, 6 Nov 2025 at 16:04, Jan Jasek <jjasek@redhat.com> wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, and slow to execute. Before completely rewriting the tests, we experimented and discussed which way to go, with the goal of finding the best way to make the tests stable, easy to maintain, and fast. We chose PyTest for all the features it provides (fixtures, scoping, parametrization, etc.) and for its rich plugin ecosystem, such as pytest-html for generating test reports. Basically, we wanted to cover Horizon well using modern tools and not reinvent the wheel again.
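As a rough illustration of those features (the fixture and test names here are invented, not Horizon's actual ones):

    import pytest


    @pytest.fixture(scope="module")
    def driver():
        # Scoped fixture: created once per module, shared by all of
        # that module's tests, torn down after the yield.
        drv = object()  # stand-in for e.g. a Selenium WebDriver
        yield drv
        # teardown would go here, e.g. drv.quit()


    @pytest.mark.parametrize("panel", ["instances", "volumes", "images"])
    def test_panel_loads(driver, panel):
        # Parametrization: one function expands into three test cases.
        assert panel  # stand-in for real navigation and assertions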
Note that the tools OpenStack has chosen to use support most (all?) of these features. They are also Python unittest standard compliant. That means you have to apply them differently, but you can accomplish the same goals. If anything, pytest is the wheel reinventor.
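For example, the stdlib analogue of the same pattern (fixtures via setUp/addCleanup, parametrization via subTest) looks roughly like this, again a sketch of mine:

    import unittest


    class TestPanels(unittest.TestCase):
        def setUp(self):
            # Fixture equivalent: runs before each test; addCleanup
            # registers the teardown.
            self.driver = object()  # stand-in for a real driver
            self.addCleanup(lambda: None)  # stand-in for driver.quit()

        def test_panel_loads(self):
            # Parametrization equivalent: each subTest is reported
            # separately, and a failure does not stop the loop.
            for panel in ("instances", "volumes", "images"):
                with self.subTest(panel=panel):
                    self.assertTrue(panel)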
To be completely honest, I implemented the majority of the new tests for Horizon, and I did not know that PyTest was not allowed. If I had known about it, I would definitely have discussed it with someone (the TC) first.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. From my point of view, PyTest is the modern, de facto, industry-adopted standard for testing in the Python ecosystem. That is why I was so surprised that, after we reimplemented all the Horizon tests (which are now, after years, very stable, easy to maintain, and have been running well for multiple cycles) and started on coverage for the Manila-UI plugin, it came out at the PTG, via the Watcher team, that using PyTest in OpenStack projects is probably not allowed.
It is a de facto standard today, but OpenStack predates that by some time. And note it is not *the* standard, which lives in the Python standard lib. I don't want to argue the merits of using pytest from a stability and ease-of-use standpoint, but I do think it is worth noting that a generally accepted practice is that when you work within an existing code base, you should work within the frameworks it has already established, whether that be code formatting or library/tool choice. There is value in asking "why are things done this way?" before we unilaterally blaze ahead and make a different decision. It is possible that we may still decide to change things, but that should be done with an understanding of the existing choices (at least as much as possible).
So, to answer Goutham's part: we (the Horizon team) see it as a way to write and maintain tests more easily and in a modern way. Those who were at our Horizon PTG session, where we discussed and presented part of the plugin coverage, know that we are also trying to make the base (elementary fixtures) reusable for all the other projects, so they can just import a few elementary fixtures and build tests for their plugins on top of them however they want, with significantly less effort (see the sketch below).
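A sketch of that reuse model (the module path and fixture names are hypothetical, not Horizon's real ones):

    # conftest.py in a hypothetical dashboard plugin: importing the
    # shared fixtures here makes them visible to the plugin's tests.
    from horizon_testing.fixtures import driver, login  # noqa: F401

and then a plugin test simply requests them:

    def test_my_plugin_panel(driver, login):
        # Plugin-specific checks built on the shared fixtures.
        assert driver is not None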
If something is not okay, I am responsible for this in the Horizon team. As I said, I did not know about the policy; I did it with the best intentions, and of course I am open to discussion. Thank you! Jan
On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets, and Zuul jobs [1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures [2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during the Watcher PTG discussion [4].
This leaves us with a dilemma:
- Follow the Python PTI: this aligns with the PTI and our team's standards, but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) and use pytest to reuse their fixtures: this would be more efficient, but appears to violate the Python PTI and creates inconsistency with our main project.
- Or do we want to improve the Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve and not require a specific tool. The core requirement, as I see it, was to provide a consistent "interface": define tox as the entry point for testing, and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and during that update we noted in the commit message that some projects continued relying on nosetests, and Horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.
Project maintainers of Horizon can probably explain their choice with specifics, but, from our discussion on #openstack-tc [6], we seemed to think that pytest has evolved over the years, and that if it is easier to write and maintain tests with it, we could allow it.
We just need guidance on this topic.
Links:
[1] https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se...
[2] https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium
[3] https://github.com/openstack/governance/blob/master/reference/pti/python.rst...
[4] https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8...
[6] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar