[tc][horizon]Request for guidance on improving Python PTI doc to include pytest for Horizon plugins testing
Hello TC,

We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.

The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.

We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and a Zuul job [1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures [2]. Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using pytest.

This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module [3]. The Watcher team advocates for adhering to the Python PTI and using unittest: our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during the Watcher PTG discussion [4].

This leaves us with a dilemma:
- Follow the Python PTI: this aligns with the PTI and our team's standards, but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) and use pytest to reuse their fixtures: this would be more efficient, but appears to violate the Python PTI and creates inconsistency with our main project.
- Or: do we want to improve the Python PTI documentation to include pytest usage?

We just need guidance on this topic.

Links:
[1] https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se...
[2] https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium
[3] https://github.com/openstack/governance/blob/master/reference/pti/python.rst...
[4] https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404

With Regards,
Chandan Kumar
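For readers less familiar with the pytest features being reused here, a minimal, generic sketch of a pytest fixture with scoping and parametrization looks roughly like the following. The names below are illustrative only, not Horizon's actual fixtures.

```python
import pytest


@pytest.fixture(scope="module")
def fake_session():
    """Module-scoped fixture: set up once, shared by every test in this module."""
    session = {"token": "fake-token"}  # stand-in for real login/session setup
    yield session
    session.clear()  # teardown runs after the last test that used the fixture


@pytest.mark.parametrize("size", [1, 10, 100])
def test_volume_size_is_positive(fake_session, size):
    """Parametrization expands one function into three test cases, one per size."""
    assert fake_session["token"] == "fake-token"
    assert size > 0
```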
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and zuul job[1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures.[2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during watcher PTG discussion[4].
This leaves us with a dilemma: - Follow the Python PTI: This aligns with the PTI and our team's standards but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) to use pytest to reuse their fixtures. This would be more efficient but appears to violate the Python PTI and creates inconsistency with our main project.
- Do we want to improve Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve and not require a specific tool. The core requirement, as I see it, was to provide a consistent "interface": define tox as the entry point for testing and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and in that update's commit message we noted that some projects continued relying on nosetests and that Horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.

Project maintainers of Horizon can probably explain their choice with specifics, but from our discussion on #openstack-tc [6] we seemed to think that pytest has evolved over the years, and if it is easier to write and maintain tests with it, we could allow that.
We just need guidance on this topic.
Links: [1]. https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se... [2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst... [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8...
[6] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar
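On the "consistent interface" point above: keeping tox as the entry point and still producing a recognizable result artifact seems compatible with pytest, since a tox target can simply invoke pytest and ask it for a JUnit XML report. A minimal sketch of that idea, driven programmatically; the test path and report location are placeholders, not taken from any project's actual configuration.

```python
# Sketch: what a tox target such as "tox -e integration" could effectively
# run - invoke pytest and emit a JUnit XML report that CI (e.g. Zuul) can
# collect as a result artifact.
import sys

import pytest


def main() -> int:
    return pytest.main([
        "path/to/integration/tests",          # placeholder test directory
        "--junitxml=test-results/junit.xml",  # standard pytest option
    ])


if __name__ == "__main__":
    sys.exit(main())
```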
Hello everyone,

The previous Horizon tests were unstable, hard to maintain, and slow to execute. Before completely rewriting them we experimented and discussed which way to go, with the goal of making the tests stable, easy to maintain, and fast. We chose pytest for all the features it provides (fixtures, scoping, parametrization, etc.) and its rich plugin ecosystem, like pytest-html for generating test reports. Basically: cover Horizon well, use modern tools, and not reinvent the wheel again.

To be completely honest, I implemented the majority of the new tests for Horizon and I did not know that pytest is not allowed to be used. If I had known, I would definitely have discussed it with someone (the TC) first.

I see pytest as a very popular, widely used, and feature-rich open-source framework. In my view it is the modern, de facto, industry-adopted standard for testing in the Python ecosystem. That is why I am so surprised that, after the reimplementation of all Horizon tests (which are now, after years, very stable, easy to maintain, and running well for multiple cycles already) and after starting on coverage for the manila-ui plugin, it came out at the PTG from the Watcher team that pytest is probably not allowed in OpenStack projects.

So to answer Goutham's part: we (the Horizon team) see it as a way to write and maintain tests more easily and in a modern way. Those who attended our Horizon PTG sessions, where we discussed and presented part of the plugin coverage, know that we are also trying to make the base (elementary fixtures) reusable for all the other projects, so they can just import a few elementary fixtures and build tests for their plugins on top of them, however they want, with significantly less effort.

If something is not okay, I am the one responsible for this in the Horizon team. As I said, I did not know, I did it with the best intentions, and of course I am open to discussion.

Thank you!
Jan

On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and zuul job[1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures.[2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using
pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during watcher PTG discussion[4].
This leaves us with a dilemma: - Follow the Python PTI: This aligns with the PTI and our team's standards but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) to use pytest to reuse their
fixtures.
This would be more efficient but appears to violate the Python PTI and creates inconsistency with our main project.
- Do we want to improve Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve, and not require a specific tool. The core requirement as I see it was to provide a consistent "interface" - define tox as an entry point for testing, and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and during that update we noted in the commit message that some projects continued relying on nosetests, and horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.
Project maintainers of Horizon can probably explain their choice with specifics, but, from our discussion on #openstack-tc [6], we seemed to think that pytest has evolved over the years, and if it is easier to write and maintain tests with it, we could.
We just need guidance on this topic.
Links: [1].
https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se...
[2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst... [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8... [7] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar
Hi,

Processes are good, but they are not there to stay unchanged forever. The technology stack evolves permanently: what worked well yesterday does not work today, and we need to address those issues. At the same time we should not shoot ourselves in the foot with restrictions that slow down progress.

We can still stick to tox (whether that is what we want in the age of uv is a different question). But in my view the policy should not dictate which framework tox invokes. I do not believe there will ever be a single set of tools and libraries that just works for every single project. It is good to try to unify what we all use, but that should not block us. I totally agree that pytest is now so common that we should just accept it. But rather than merely extending the whitelist, I want to get rid of the list completely. Tox is there to wrap the tooling, and let's just draw the line there.

If someone proposes a change to the policy making it possible to start using pytest, count on my vote. And I would definitely start using it in certain projects, since I have had a more positive experience with it in some side projects compared to unittest.

P.S. This is what AI says when you ask it to compare unittest and pytest: "Here is a comparison of the pytest and unittest testing frameworks in Python. In short, *pytest is the modern, more flexible, and feature-rich standard* for Python testing, favored for its simple syntax and powerful features. *unittest is the built-in standard library module*, which is reliable and requires no installation, but is more verbose and less flexible."

Regards,
Artem

On Thu, 6 Nov 2025 at 16:04, Jan Jasek <jjasek@redhat.com> wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.
So to answer the Goutham part - We (Horizon team) see it as a way to write and maintain tests easier and in a modern way. Who was on our Horizon PTG where we discussed/presented part of plugin coverage knows that we are also trying to make the base (elementary fixtures) reusable for all the other projects so they can just import a few elementary fixtures and build tests for their plugins on the top of it how they want with significantly less effort.
If something is not okay, I am responsible for this in the Horizon team. And as I said, I did not know it, I did it in the best will and of course I am open to discussion. Thank you! Jan
On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and zuul job[1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures.[2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using
pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during watcher PTG discussion[4].
This leaves us with a dilemma: - Follow the Python PTI: This aligns with the PTI and our team's standards but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) to use pytest to reuse their
fixtures.
This would be more efficient but appears to violate the Python PTI and creates inconsistency with our main project.
- Do we want to improve Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve, and not require a specific tool. The core requirement as I see it was to provide a consistent "interface" - define tox as an entry point for testing, and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and during that update we noted in the commit message that some projects continued relying on nosetests, and horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.
Project maintainers of Horizon can probably explain their choice with specifics, but, from our discussion on #openstack-tc [6], we seemed to think that pytest has evolved over the years, and if it is easier to write and maintain tests with it, we could.
We just need guidance on this topic.
Links: [1].
https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se...
[2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst... [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8... [7] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar
On Thu, Nov 6, 2025, at 7:52 AM, Artem Goncharov wrote:
Hi,
processes are good, but they are not there to stay forever unchanged. Technology stack evolves permanently. What worked well yesterday does not work today, and we need to address those issues. At the same time we should not shoot ourselves in the foot with all the restrictions that slow down progress.
Yes, from memory the major considerations for trying to enforce a standards-compliant toolchain were to enable parallel test runs that take advantage of the CI resources we have, and to ensure that developers can use whatever test runner they choose, whether that be the default, their IDE, or even pytest. At the time pytest didn't even exist, and when it did show up its support for parallel testing was poor, as was its integration into other tools. Since then, I think both of these issues are no longer problems with using pytest: its popularity means it integrates well despite many non-standard behaviors, and others' need for parallel testing has fixed up support for that functionality.

I think both parallel testing and the ability to run tests outside the CI system are still important today, but I don't think pytest is an obstacle to achieving those goals anymore.

Another consideration to keep in mind is whether pytest poses any problems for the support time frames of OpenStack releases. I suspect not, as old releases can simply use old versions of pytest, but it may be good to double check.

Note, I think restrictions can be a good thing. Design constraints and consideration for use cases beyond a developer's laptop are often useful. I don't think this should be taken as an endorsement of a free-for-all; rather, we should consider what our goals are and model the rules around them. I don't think pytest is in opposition to the goals we once had, and we can modify those goals too.
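To illustrate the "use whatever test runner you choose" property: a test module that sticks to the stdlib unittest API can be discovered and executed by python -m unittest, stestr, an IDE runner, or pytest alike. A toy example (the test subject is made up):

```python
import unittest


class TestQuotaMath(unittest.TestCase):
    """Runner-agnostic: plain stdlib unittest style, so any compliant runner works."""

    def test_remaining_quota(self):
        limit, used = 10, 3
        self.assertEqual(7, limit - used)

    def test_overcommit_detected(self):
        limit, used = 10, 12
        self.assertLess(limit - used, 0)


if __name__ == "__main__":
    unittest.main()
```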
We can still stick to tox (whether this is what we want in the age of uv or not is a different question). But the policy should not state which framework should be invoked by tox from my point of view. I do not believe there would be ever a single set of tools and libs that just work for every single project. It is good to try to unify what we all use, but that should not block us. I totally agree that pytest is so common now that we should just accept it. But I want to prevent just extending the whitelist - get rid of the list completely. Tox is there to wrap and lets just draw the line here.
If someone would propose a change to the policy making it possible to start using pytest - count on my vote. And I would definitely start using it in certain projects since I was having more positive experience from using it in some side projects compared to unittest.
P.S this is what AI tells when you ask it to compare unittest and pytest: Here is a comparison of the `pytest` and `unittest` testing frameworks in Python.
In short, *`pytest` is the modern, more flexible, and feature-rich standard* for Python testing, favored for its simple syntax and powerful features. *`unittest` is the built-in standard library module*, which is reliable and requires no installation, but is more verbose and less flexible.
Regards, Artem
On Thu, 6 Nov 2025 at 16:04, Jan Jasek <jjasek@redhat.com> wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
Note that the tools OpenStack has chosen to use support most (all?) of these features, and they are also compliant with the Python unittest standard. That means you have to apply them differently, but you can accomplish the same goals. If anything, pytest is the wheel reinventor.
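For concreteness, the unittest-compatible equivalents alluded to here might look roughly like the sketch below, using the fixtures library for reusable setup/teardown and testscenarios for parametrization on top of a testtools TestCase. The test itself is invented for illustration.

```python
# Rough sketch of fixture reuse and parametrization without pytest,
# using the "fixtures" and "testscenarios" libraries on top of testtools.
import fixtures
import testscenarios
import testtools


class TestReportWriter(testscenarios.TestWithScenarios, testtools.TestCase):
    # Each scenario becomes its own test case, analogous to
    # pytest.mark.parametrize.
    scenarios = [
        ("ascii", {"payload": "hello"}),
        ("unicode", {"payload": "héllo"}),
    ]

    def setUp(self):
        super().setUp()
        # useFixture() handles cleanup automatically, analogous to a
        # yield-style pytest fixture.
        self.tempdir = self.useFixture(fixtures.TempDir()).path

    def test_write_payload(self):
        path = f"{self.tempdir}/report.txt"
        with open(path, "w", encoding="utf-8") as handle:
            handle.write(self.payload)
        with open(path, encoding="utf-8") as handle:
            self.assertEqual(self.payload, handle.read())
```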
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.
It is a de facto standard today, but OpenStack predates that by some time. And note it is not *the* standard, which lives in the Python standard lib. I don't want to argue the merits of using pytest from a stability and ease-of-use standpoint, but I do think it is worth noting that a generally accepted practice is that when you work within an existing code base you should work within the frameworks it has already established, whether that be code formatting or library/tool choice. There is value in asking "why are things done this way?" before we unilaterally blaze ahead and make a different decision. It is possible that we may still decide to change things, but that should be done with an understanding of the existing choices (at least as much as possible).
So to answer the Goutham part - We (Horizon team) see it as a way to write and maintain tests easier and in a modern way. Who was on our Horizon PTG where we discussed/presented part of plugin coverage knows that we are also trying to make the base (elementary fixtures) reusable for all the other projects so they can just import a few elementary fixtures and build tests for their plugins on the top of it how they want with significantly less effort.
If something is not okay, I am responsible for this in the Horizon team. And as I said, I did not know it, I did it in the best will and of course I am open to discussion. Thank you! Jan
On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and zuul job[1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures.[2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during watcher PTG discussion[4].
This leaves us with a dilemma: - Follow the Python PTI: This aligns with the PTI and our team's standards but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) to use pytest to reuse their fixtures. This would be more efficient but appears to violate the Python PTI and creates inconsistency with our main project.
- Do we want to improve Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve, and not require a specific tool. The core requirement as I see it was to provide a consistent "interface" - define tox as an entry point for testing, and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and during that update we noted in the commit message that some projects continued relying on nosetests, and horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.
Project maintainers of Horizon can probably explain their choice with specifics, but, from our discussion on #openstack-tc [6], we seemed to think that pytest has evolved over the years, and if it is easier to write and maintain tests with it, we could.
We just need guidance on this topic.
Links: [1]. https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se... [2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst... [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8... [7] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar
Hi all!

From the Rally perspective: all Rally tests are written on top of the raw unittest library (previously it was testtools). Only a single place uses a simple pytest fixture, which can easily be replaced with a custom one, so no issues there.

Personally, I find unittest-style test cases much more readable and less "magic". I have also faced several situations where pytest-style assertions produced unreadable error output, while wrapping the same test in a unittest class and rewriting the asserts with "self.assert*" helpers made the errors clear and easy to debug.

That said, the Rally project has been using pytest as the test runner for almost 10 years now, without any problems. Before switching to pytest, the Rally team had to leave the global-requirements:
https://review.opendev.org/c/openstack/requirements/+/328849
https://review.opendev.org/c/openstack/project-config/+/328682

As far as I remember, the inability to choose our own test tools was the primary reason we moved away from global-requirements (though we remain compatible with them). The main concerns raised at the time we adopted pytest were the lack of support for the subunit format and the reduced ability to efficiently distribute tests across workers based on previous results.

On Thu, 6 Nov 2025 at 17:11, Clark Boylan <cboylan@sapwetik.org> wrote:
On Thu, Nov 6, 2025, at 7:52 AM, Artem Goncharov wrote:
Hi,
processes are good, but they are not there to stay forever unchanged. Technology stack evolves permanently. What worked well yesterday does not work today, and we need to address those issues. At the same time we should not shoot ourselves in the foot with all the restrictions that slow down progress.
Yes, from memory the major considerations for trying to enforce a standards compliant toolchain were to enable parallel test runs to take advantage of the CI resources we have and to ensure that developers can use whatever test runner they choose whether that be the default, their IDE, or even pytest. At the time pytest didn't even exist and when pytest did show up its support for parallel testing was poor as was its integration into other tools. Since then I think both of these issues are no longer problems with using pytest. Its popularity means it integrates well despite many non standard behaviors and others' need for parallel testing have fixed up the support for that functionality.
I think both parallel testing and the ability to run tests outside the CI system are both important today, but I don't think pytest is an issue with achieving those goals anymore.
Another consideration to keep in mind is whether or not pytest poses any problems for the support time frames for openstack releases. I suspect not as old releases can simply use old versions of pytests, but it may be good to double check.
Note, I think restrictions can be a good thing. Design constraints and considerations for use cases outside of a developers laptop are often useful. I don't think this should be taken as an endorsement for a free for all, but we should consider what our goals are and model the rules around them. I don't think pytest is in opposition to the goals we once had, and we can modify those goals too.
We can still stick to tox (whether this is what we want in the age of uv or not is a different question). But the policy should not state which framework should be invoked by tox from my point of view. I do not believe there would be ever a single set of tools and libs that just work for every single project. It is good to try to unify what we all use, but that should not block us. I totally agree that pytest is so common now that we should just accept it. But I want to prevent just extending the whitelist - get rid of the list completely. Tox is there to wrap and lets just draw the line here.
If someone would propose a change to the policy making it possible to start using pytest - count on my vote. And I would definitely start using it in certain projects since I was having more positive experience from using it in some side projects compared to unittest.
P.S this is what AI tells when you ask it to compare unittest and pytest: Here is a comparison of the `pytest` and `unittest` testing frameworks in Python.
In short, *`pytest` is the modern, more flexible, and feature-rich standard* for Python testing, favored for its simple syntax and powerful features. *`unittest` is the built-in standard library module*, which is reliable and requires no installation, but is more verbose and less flexible.
Regards, Artem
On Thu, 6 Nov 2025 at 16:04, Jan Jasek <jjasek@redhat.com> wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in
execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
Note the tools openstack has chosen to use support most (all?) of these features. They are also python unittest standard compliant. That means you have to apply them differently, but you can accomplish the same goals. If anything pytest is the wheel reinventor.
To be completely honest - I implemented the majority of the new tests
for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich
open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.
It is a defacto standard today, but OpenStack predates that by some time. And note it is not *the* standard which lives in the python standard lib. I don't want to argue the merits of using pytest from a stability and ease of use standpoint, but I do think it is worth noting that a generally accepted practice is that when you work within an existing code base you should work within the frameworks that it has already established whether that be code formatting or library/tool choice. There is value is asking "why are things done this way?" before we unilaterally blaze ahead and make a different decision. It is possible that we may still decide to change things, but that should be done with an understanding of the existing choices (at least as much as possible).
So to answer the Goutham part - We (Horizon team) see it as a way to write and maintain tests easier and in a modern way. Who was on our Horizon PTG where we discussed/presented part of plugin coverage knows that we are also trying to make the base (elementary fixtures) reusable for all the other projects so they can just import a few elementary fixtures and build tests for their plugins on the top of it how they want with significantly less effort.
If something is not okay, I am responsible for this in the Horizon
team. And as I said, I did not know it, I did it in the best will and of course I am open to discussion.
Thank you! Jan
On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi < gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
Hello TC,
We are writing to seek your guidance regarding the Python testing standards (PTI) as they relate to Horizon plugins.
The current integration tests of the watcher-dashboard project (the Horizon plugin for OpenStack Watcher) are broken due to integration code changes in the Horizon integration suite. We are currently working on rewriting the watcher-dashboard integration tests.
We noted that the Horizon project itself has developed a robust set of reusable, pytest-based integration tests, fixtures, tox targets and zuul job[1]. We also see that other plugins, like manila-ui, are already reusing these pytest fixtures.[2].
Several other projects within the OpenStack ecosystem (such as skyline-apiserver, rally, and projects under the Airship and StarlingX namespaces) are already using pytest.
This presents a conflict for the Watcher team. The official Python PTI states that tests should be written using the Python stdlib unittest module[3].
The Watcher team advocates for adhering to the Python PTI and using unittest. Our main watcher project uses unittest, and we prefer to maintain this standard for consistency and PTI compliance in watcher-dashboard. This topic came up during watcher PTG discussion[4].
This leaves us with a dilemma: - Follow the Python PTI: This aligns with the PTI and our team's standards but requires us to ignore Horizon's reusable pytest tests and build our own testing framework from scratch, duplicating effort.
- Follow the parent project (Horizon) to use pytest to reuse their fixtures. This would be more efficient but appears to violate the Python PTI and creates inconsistency with our main project.
- Do we want to improve Python PTI documentation to include pytest usage?
I honestly believe the PTI could evolve, and not require a specific tool. The core requirement as I see it was to provide a consistent "interface" - define tox as an entry point for testing, and produce recognizable result artifacts. It's been several years since we last updated that portion of the PTI, and during that update we noted in the commit message that some projects continued relying on nosetests, and horizon in particular could remain an exception [5]. So perhaps we can clarify this in the PTI.
Project maintainers of Horizon can probably explain their choice with specifics, but, from our discussion on #openstack-tc [6], we seemed to think that pytest has evolved over the years, and if it is easier to write and maintain tests with it, we could.
We just need guidance on this topic.
Links: [1].
https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/se...
[2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst... [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404
[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e8... [7] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10...
With Regards,
Chandan Kumar
-- Best regards, Andriy Kurilin.
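As a small illustration of the assertion-style point Andriy makes above, the deliberately failing toy test below contrasts assertEqual's structured failure output with a bare assert, whose readability depends on the runner's assertion introspection (pytest rewrites it, plain unittest does not). The data is invented.

```python
import unittest


class TestAssertStyle(unittest.TestCase):
    def test_dict_comparison(self):
        expected = {"name": "boot-and-delete", "runs": 3}
        actual = {"name": "boot-and-delete", "runs": 2}
        # Deliberate mismatch: assertEqual fails with a readable dict diff,
        # whereas a bare "assert actual == expected" would just raise
        # AssertionError unless the runner rewrites the assertion for you.
        self.assertEqual(expected, actual)


if __name__ == "__main__":
    unittest.main()
```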
On 2025-11-06 08:10:44 -0800 (-0800), Clark Boylan wrote: [...]
It is a defacto standard today, but OpenStack predates that by some time. And note it is not *the* standard which lives in the python standard lib. I don't want to argue the merits of using pytest from a stability and ease of use standpoint, but I do think it is worth noting that a generally accepted practice is that when you work within an existing code base you should work within the frameworks that it has already established whether that be code formatting or library/tool choice. There is value is asking "why are things done this way?" before we unilaterally blaze ahead and make a different decision. It is possible that we may still decide to change things, but that should be done with an understanding of the existing choices (at least as much as possible). [...]
This is the point where I would typically refer to the principle of (G.K.) Chesterton's Fence:
The more modern type of reformer goes gaily up to it and says, "I don't see the use of this; let us clear it away." To which the more intelligent type of reformer will do well to answer: "If you don't see the use of it, I certainly won't let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it."
I'll also note that "modernization" on its own is a hollow goal. What's modern today will be outdated tomorrow, so that's chasing a prize you can never win. Our collective time is far better spent investing in improvements to the solutions we have than replacing them with ones that bring their own new and yet-unknown problems. -- Jeremy Stanley
On 11/6/25 4:03 PM, Jan Jasek wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.

Hi Jan,

What surprises me is: why is the test runner so important? Can't we run your tests with stestr, even if they use pytest decorators/libs inside?

Also, from a package maintainer's perspective, it's OK for me if you use pytest, as long as you don't use weird extensions not packaged in the distro (I'd have to package them just for running the Horizon tests, which is a lot of extra work I prefer to avoid). Apart from that, sticking with the most common pytest features is very much OK.

What has frustrated me with Horizon, though, is the impossibility of running tests in parallel. When one does, everything falls apart. Is this now fixed? Can I use pytest-xdist and run pytest with "-n auto", so I can fully use all the cores of my laptop to run the Horizon tests?

Thanks for your work, Cheers,

Thomas Goirand (zigo)
Hello everyone,

Thank you so much for all your opinions and points of view. I did a little deep dive into Horizon testing history, because (compared to you) I have only been around OpenStack/Horizon for 3 years. Historically, Horizon's integration tests used nosetests, the Django built-in test runner, and then pytest as the test runner (long before I even started working on OpenStack/Horizon). So the point is that, as far as I know, Horizon NEVER used the stestr runner for integration tests, at least from what I was able to find across old branches (feel free to correct me if I am wrong). I can try to find out why, but stestr is not used and, for the majority of Horizon's history, was not used for integration tests. That is a fact.

Based on all your messages and comments, you have been here much longer than me, and since you are part of this discussion you obviously participate in Horizon somehow, so feel free to share here why stestr was never the default runner for integration tests (or point me to where it was). I would really like to know; I was not here when those decisions were made.

I saw parallel running mentioned multiple times. I experimented very briefly with pytest-xdist some time ago. It worked fine for the "ui-pytests" (where tests are completely independent and run against the Django live server). I did not have time to experiment much with the "integration-pytests", but I am afraid that, since those include pagination tests for instances, images, volumes and snapshots, and all the tests require "resources" in general (like a volume resource for all volume-snapshot tests, etc.), it could very easily lead to race conditions and random failures. Maybe it will be possible to divide the tests into groups that do not interfere with each other, or maybe I am not completely right here. But as I said, being able to run the tests in parallel was not a high priority on my plate; the priority was to make the tests stable and maintainable (at the time I rewrote the tests from scratch, we needed to recheck the previous tests 5-10 times in Zuul to get past all the random failures and be able to merge a patch). And I am quite sure that parallel running did not work in the previous integration tests either (as they were barely stable). I would definitely like to look at the possibilities of parallel running when I have some time.

If you have any other questions/points (historical context), feel free to share them.

Thank you
Jan

On Thu, Nov 6, 2025 at 11:03 PM Thomas Goirand <zigo@debian.org> wrote:
On 11/6/25 4:03 PM, Jan Jasek wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.

Hi Jan,

What I'm surprised is why a test runner is so important? Can't we run your tests with stestr, even if they use pytest decorator / libs inside?
Also, from a package maintainer perspective, it's ok for me if you use pytest, as long as you don't use weird extension not packaged in distro (I'd have to package them just for running Horizon tests, which is a lot of extra work I prefer to avoid). Apart from that, sticking with the most common pytest stuff is very much ok.
What I've been frustrated with Horizon though, is the impossibility to run tests in parallel. When one does, everything falls apart. Is this now fixed? Can I use pytest-xdist, and run pytest with "-n auto", so I can fully use all the cores of my laptop to run Horizon tests?
Thanks for your work, Cheers,
Thomas Goirand (zigo)
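Regarding Jan's note above about dividing the tests into groups that do not interfere with each other: pytest-xdist's loadgroup distribution mode appears to cover exactly that case, by scheduling every test that carries the same xdist_group mark on one worker. A hedged sketch follows; the group names, fixtures and tests are invented.

```python
# Sketch of grouping tests so a shared resource stays on a single worker.
# Requires the pytest-xdist plugin and running with:
#     pytest -n auto --dist loadgroup
# Tests sharing an xdist_group name are scheduled on the same worker, so
# e.g. all volume-snapshot tests reuse one volume without racing another
# worker for it.
import pytest


@pytest.fixture(scope="module")
def shared_volume():
    volume = {"name": "test-volume", "snapshots": []}  # stand-in resource
    yield volume


@pytest.mark.xdist_group(name="volumes")
def test_create_snapshot(shared_volume):
    shared_volume["snapshots"].append("snap-1")
    assert "snap-1" in shared_volume["snapshots"]


@pytest.mark.xdist_group(name="volumes")
def test_snapshot_listing(shared_volume):
    shared_volume["snapshots"].extend(f"snap-{i}" for i in range(2, 5))
    assert len(shared_volume["snapshots"]) >= 3
```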
On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: [...]
So the point is that as far as I know Horizon NEVER used stestr runner for Integration tests. At least from what I was able to find across old branches (feel free to correct me If I am wrong). I can try to find out why but stestr is not used and (at least majority time in Horizon history) was not used for integration tests. That is a fact.
Based on all your messages and comments, you are here much longer than me and you are part of this discussion so you obviously somehow participate on Horizon, so feel free to share here why stestr was never default runner for Integration tests (or send me where it was). I would really like to know and I was not here when those decisions were made. [...]
While I was never heavily involved in Horizon development, a quick grep through its Git history indicates Matthias introduced testr support a decade ago in https://review.opendev.org/c/openstack/horizon/+/255136 so that may be a good place to begin researching from. -- Jeremy Stanley
Hi Jeremy,

I am talking about the history of runners for Integration tests. And if you want to share something, please give some context to it, not just blindly throw links.

Ivan and Akihiro removed testr support almost a decade ago: https://review.opendev.org/c/openstack/horizon/+/520214

Thank you
Jan

On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: [...]
So the point is that as far as I know Horizon NEVER used stestr runner for Integration tests. At least from what I was able to find across old branches (feel free to correct me If I am wrong). I can try to find out why but stestr is not used and (at least majority time in Horizon history) was not used for integration tests. That is a fact.
Based on all your messages and comments, you are here much longer than me and you are part of this discussion so you obviously somehow participate on Horizon, so feel free to share here why stestr was never default runner for Integration tests (or send me where it was). I would really like to know and I was not here when those decisions were made. [...]
While I was never heavily involved in Horizon development, a quick grep through its Git history indicates Matthias introduced testr support a decade ago in https://review.opendev.org/c/openstack/horizon/+/255136 so that may be a good place to begin researching from. -- Jeremy Stanley
On Fri, Nov 7, 2025, at 7:39 AM, Jan Jasek wrote:
Hi Jeremy, I am talking about the history of runners for Integration tests. And if you want to share something, please give some context to it, not just blindly throw links.
Ivan and Akihiro removed testr support almost a decade ago. https://review.opendev.org/c/openstack/horizon/+/520214
Thank you Jan
On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: [...]
So the point is that as far as I know Horizon NEVER used stestr runner for Integration tests. At least from what I was able to find across old branches (feel free to correct me If I am wrong). I can try to find out why but stestr is not used and (at least majority time in Horizon history) was not used for integration tests. That is a fact.
Interesting. Going back to my statement on the value of understanding why prior decisions were made, understanding why this is the case could be useful. If Horizon never used stestr in the first place, then I don't know that we can blame it for the instability of test cases, for example. Maybe part of the issue here was deciding to go off piste in the first place? Or maybe there were really good reasons for these decisions (I know Django is a large framework with lots of batteries included).

At this point it is probably all moot. As I explained in my previous response, I suspect that pytest can largely meet the existing goals set in the PTI. I was trying to explain that there are real goals behind the decision to use stestr, that pytest is probably sufficient to address those goals, and that if necessary the goals can be updated to meet current needs or expectations.

On top of that, if we don't have an understanding of our goals or how well existing tools meet them, I don't know how we can evaluate whether the current tools should be replaced by something else. Looking at historical decisions can be a useful activity for understanding that better.
Based on all your messages and comments, you are here much longer than me and you are part of this discussion so you obviously somehow participate on Horizon, so feel free to share here why stestr was never default runner for Integration tests (or send me where it was). I would really like to know and I was not here when those decisions were made.
I would have to look it up myself. At this point, I don't know if anyone knows why these specific actions were taken by Horizon without digging through git logs, code reviews, or project meetings.
[...]
While I was never heavily involved in Horizon development, a quick grep through its Git history indicates Matthias introduced testr support a decade ago in https://review.opendev.org/c/openstack/horizon/+/255136 so that may be a good place to begin researching from. -- Jeremy Stanley
Hi Clark,

Thanks for the relevant points and investigation! I am definitely not blaming stestr for the instabilities. I am only saying that Horizon historically never used it as a runner (even though it has been pointed out here multiple times as something "required").

And as I said in my very first message, and I will say it again: I did not know that pytest is somehow forbidden (it was not explicitly mentioned anywhere I looked, and since pytest is the de facto standard in the modern world, I did not expect it could be a problem). When I started rewriting the tests from scratch I was new to Horizon, and the whole Horizon team agreed that pytest was the way we would go. During the implementation none of the reviewers had any problem with it, and over the last few cycles, while the tests have been running without any issue, no one said a word. Those are all facts, and I cannot go back in time to discuss it with everyone here.

If it is okay from your point of view to add it to the PTI - perfect: we have nice, stable, easy-to-maintain tests for Horizon, whose base can easily be reused by Horizon plugins. If it is such a huge problem/risk as some here are indicating - fine, then we can start talking about why the previous tests were super unstable for quite some time while no one commenting here cared and no one fixed or rewrote them. We can remove the new tests completely, reactivate the old ones, and then recheck 10 times every time we want to merge something (to hit a run without random failures). And someone from here can take it as a task to rewrite them the "correct way".

That is probably all I can say here. It is Friday and I am done. Have a nice weekend :-).
Jan

On Fri, Nov 7, 2025 at 5:12 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Fri, Nov 7, 2025, at 7:39 AM, Jan Jasek wrote:
Hi Jeremy, I am talking about the history of runners for Integration tests. And if you want to share something, please give some context to it, not just blindly throw links.
Ivan and Akihiro removed testr support almost a decade ago. https://review.opendev.org/c/openstack/horizon/+/520214
Thank you Jan
On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: [...]
So the point is that as far as I know Horizon NEVER used stestr runner for Integration tests. At least from what I was able to find across old branches (feel free to correct me If I am wrong). I can try to find out why but stestr is not used and (at least majority time in Horizon history) was not used for integration tests. That is a fact.
Interesting. Going back to my statement on the value of understanding why prior decisions were made understanding why this is the case could be useful. If Horizon never used stestr in the first place then I don't know that we can blame it for instability of test cases for example. Maybe part of the issue here was deciding to go off piste in the first place? Or maybe there were really good reasons for these decisions (I know django is a large framework with lots of batteries included).
At this point it is probably all moot. As I explained in my previous response, I suspect that pytest can largely meet the existing goals set in the PTI. I was trying to explain that there are real goals behind the decision to use stestr, that pytest is probably sufficient to address those goals, and if necessary goals can be updated to meet current needs or expectations.
On top of that, if we don't have an understanding of our goals or how well existing tools meet those goals I don't know how we can evaluate if the current tools should be replaced by something else. Looking at historical decisions can be a useful activity for understanding that better.
Based on all your messages and comments, you are here much longer than me and you are part of this discussion so you obviously somehow participate on Horizon, so feel free to share here why stestr was never default runner for Integration tests (or send me where it was). I would really like to know and I was not here when those decisions were made.
I would have to look it up myself. I don't know if anyone knows why these specific actions were taken by Horizon at this point without digging through git logs, code reviews, or project meetings.
[...]
While I was never heavily involved in Horizon development, a quick grep through its Git history indicates Matthias introduced testr support a decade ago in https://review.opendev.org/c/openstack/horizon/+/255136 so that may be a good place to begin researching from. -- Jeremy Stanley
On 07/11/2025 17:30, Jan Jasek wrote:
Hi Clark, Thanks for relevant points and investigation!
I am definitely not blaming stestr for instabilities. I am only saying that Horizon historically never used it as a runner (as it was pointed out here multiple times as something “required”).
And as I said in my very first message - And I say it again I did not know that PyTest is somehow forbidden (it was not explicitly mentioned anywhere where I was looking and for me PyTest is the de facto standard in the modern world so I did not expect it can be a problem). When I started with rewriting tests from scratch I was new in Horizon and the whole Horizon team agreed that the PyTest is the way we will go. And during the implementation no one from reviewers had any problem with it and during the last few cycles when the tests are running without any issue no one said a word. That all are facts and I can not go back in time to discuss it with all the people here.
If it is okay from your point of view to add it into PTI - perfect, we have nice, stable, easy to maintain tests for Horizon where the base can be easily used also for Horizon plugins.
Adding it to the PTI does not immediately mean that we will choose to build on it for watcher-dashboard testing, but from my point of view, if it's not allowed by the PTI, that is a hard blocker to Watcher using the work you did.
One of the options we are considering is building a test suite based on the Python standard testing packages, but again we have to weigh the technical debt of having to learn and maintain two different ways of writing tests against the technical debt of maintaining our own testing.
We are also considering whether Selenium or python-playwright would be the best test framework for integration testing of the UI, but again we don't want to have to invent our own way to test things end to end.
If pytest were allowed under the PTI, it would be a point in favor of trying to adopt the work done in Horizon, but at present all options are effectively equal from my perspective, with a slight bias against using the existing Horizon work, based on consistency and familiarity with non-pytest-based testing.
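To make the "Python standard testing packages" option above concrete, here is a minimal sketch of what such a test could look like, written against stdlib unittest with Selenium. The DASHBOARD_URL environment variable, the default URL and the title check are invented placeholders rather than anything from watcher-dashboard, and the sketch assumes a local Firefox/geckodriver is available:

    import os
    import unittest

    from selenium import webdriver
    from selenium.webdriver.firefox.options import Options


    class DashboardSmokeTest(unittest.TestCase):
        def setUp(self):
            # Headless Firefox so the test could run in CI without a display.
            options = Options()
            options.add_argument("-headless")
            self.driver = webdriver.Firefox(options=options)
            self.addCleanup(self.driver.quit)

        def test_login_page_loads(self):
            # DASHBOARD_URL is a hypothetical variable used only in this sketch.
            url = os.environ.get("DASHBOARD_URL", "http://localhost/dashboard/")
            self.driver.get(url)
            self.assertIn("Login", self.driver.title)


    if __name__ == "__main__":
        unittest.main()

A python-playwright variant would look similar; the point is only that nothing in such a test requires a specific runner.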
If it is such a huge problem/risk like someone is indicating here - fine we can start talking about why previous tests were super unstable for quite some time and no one who is commenting here cared and no one fixed them/rewrote them. We can remove the new tests completely and activate the old one again and then execute a recheck 10 times every time we want to merge something (to hit a run without random failures). And someone from here can take it as a task to rewrite it "correct way".
That is probably all I can say here. It is Friday and I am done. Have a nice weekend :-). Jan
On Fri, Nov 7, 2025 at 5:12 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Fri, Nov 7, 2025, at 7:39 AM, Jan Jasek wrote: > Hi Jeremy, > I am talking about the history of runners for Integration tests. And if > you want to share something, please give some context to it, not just > blindly throw links. > > Ivan and Akihiro removed testr support almost a decade ago. > https://review.opendev.org/c/openstack/horizon/+/520214 > > Thank you > Jan > > On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote: >> On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: >> [...] >> > So the point is that as far as I know Horizon NEVER used stestr >> > runner for Integration tests. At least from what I was able to >> > find across old branches (feel free to correct me If I am wrong). >> > I can try to find out why but stestr is not used and (at least >> > majority time in Horizon history) was not used for integration >> > tests. That is a fact.
Interesting. Going back to my statement on the value of understanding why prior decisions were made, understanding why this is the case could be useful. If Horizon never used stestr in the first place then I don't know that we can blame it for instability of test cases for example. Maybe part of the issue here was deciding to go off piste in the first place? Or maybe there were really good reasons for these decisions (I know django is a large framework with lots of batteries included).
At this point it is probably all moot. As I explained in my previous response, I suspect that pytest can largely meet the existing goals set in the PTI. I was trying to explain that there are real goals behind the decision to use stestr, that pytest is probably sufficient to address those goals, and if necessary goals can be updated to meet current needs or expectations.
On top of that, if we don't have an understanding of our goals or how well existing tools meet those goals I don't know how we can evaluate if the current tools should be replaced by something else. Looking at historical decisions can be a useful activity for understanding that better.
>> > >> > Based on all your messages and comments, you are here much longer >> > than me and you are part of this discussion so you obviously >> > somehow participate on Horizon, so feel free to share here why >> > stestr was never default runner for Integration tests (or send me >> > where it was). I would really like to know and I was not here when >> > those decisions were made.
I would have to look it up myself. I don't know if anyone knows why these specific actions were taken by Horizon at this point without digging through git logs, code reviews, or project meetings.
>> [...] >> >> While I was never heavily involved in Horizon development, a quick >> grep through its Git history indicates Matthias introduced testr >> support a decade ago in >> https://review.opendev.org/c/openstack/horizon/+/255136 so that may >> be a good place to begin researching from. >> -- >> Jeremy Stanley
On 07/11/2025 18:39, Sean Mooney wrote:
On 07/11/2025 17:30, Jan Jasek wrote:
Hi Clark, Thanks for relevant points and investigation!
I am definitely not blaming stestr for instabilities. I am only saying that Horizon historically never used it as a runner (as it was pointed out here multiple times as something “required”).
And as I said in my very first message - And I say it again I did not know that PyTest is somehow forbidden (it was not explicitly mentioned anywhere where I was looking and for me PyTest is the de facto standard in the modern world so I did not expect it can be a problem). When I started with rewriting tests from scratch I was new in Horizon and the whole Horizon team agreed that the PyTest is the way we will go. And during the implementation no one from reviewers had any problem with it and during the last few cycles when the tests are running without any issue no one said a word. That all are facts and I can not go back in time to discuss it with all the people here.
If it is okay from your point of view to add it into PTI - perfect, we have nice, stable, easy to maintain tests for Horizon where the base can be easily used also for Horizon plugins.
Adding it to the PTI does not immediately mean that we will choose to build on it for watcher-dashboard testing, but from my point of view, if it's not allowed by the PTI, that is a hard blocker to Watcher using the work you did.
One of the options we are considering is building a test suite based on the Python standard testing packages, but again we have to weigh the technical debt of having to learn and maintain two different ways of writing tests against the technical debt of maintaining our own testing.
We are also considering whether Selenium or python-playwright would be the best test framework for integration testing of the UI, but again we don't want to have to invent our own way to test things end to end.
If pytest were allowed under the PTI, it would be a point in favor of trying to adopt the work done in Horizon, but at present all options are effectively equal from my perspective, with a slight bias against using the existing Horizon work, based on consistency and familiarity with non-pytest-based testing.
By the way, I'm not commenting on what the Horizon project or manila-ui should do, just on the impact from a watcher-dashboard point of view.
If it is such a huge problem/risk like someone is indicating here - fine we can start talking about why previous tests were super unstable for quite some time and no one who is commenting here cared and no one fixed them/rewrote them. We can remove the new tests completely and activate the old one again and then execute a recheck 10 times every time we want to merge something (to hit a run without random failures). And someone from here can take it as a task to rewrite it "correct way".
That is probably all I can say here. It is Friday and I am done. Have a nice weekend :-). Jan
On Fri, Nov 7, 2025 at 5:12 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Fri, Nov 7, 2025, at 7:39 AM, Jan Jasek wrote: > Hi Jeremy, > I am talking about the history of runners for Integration tests. And if > you want to share something, please give some context to it, not just > blindly throw links. > > Ivan and Akihiro removed testr support almost a decade ago. > https://review.opendev.org/c/openstack/horizon/+/520214 > > Thank you > Jan > > On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote: >> On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote: >> [...] >> > So the point is that as far as I know Horizon NEVER used stestr >> > runner for Integration tests. At least from what I was able to >> > find across old branches (feel free to correct me If I am wrong). >> > I can try to find out why but stestr is not used and (at least >> > majority time in Horizon history) was not used for integration >> > tests. That is a fact.
Interesting. Going back to my statement on the value of understanding why prior decisions were made, understanding why this is the case could be useful. If Horizon never used stestr in the first place then I don't know that we can blame it for instability of test cases for example. Maybe part of the issue here was deciding to go off piste in the first place? Or maybe there were really good reasons for these decisions (I know django is a large framework with lots of batteries included).
At this point it is probably all moot. As I explained in my previous response, I suspect that pytest can largely meet the existing goals set in the PTI. I was trying to explain that there are real goals behind the decision to use stestr, that pytest is probably sufficient to address those goals, and if necessary goals can be updated to meet current needs or expectations.
On top of that, if we don't have an understanding of our goals or how well existing tools meet those goals I don't know how we can evaluate if the current tools should be replaced by something else. Looking at historical decisions can be a useful activity for understanding that better.
>> > >> > Based on all your messages and comments, you are here much longer >> > than me and you are part of this discussion so you obviously >> > somehow participate on Horizon, so feel free to share here why >> > stestr was never default runner for Integration tests (or send me >> > where it was). I would really like to know and I was not here when >> > those decisions were made.
I would have to look it up myself. I don't know if anyone knows why these specific actions were taken by Horizon at this point without digging through git logs, code reviews, or project meetings.
>> [...] >> >> While I was never heavily involved in Horizon development, a quick >> grep through its Git history indicates Matthias introduced testr >> support a decade ago in >> https://review.opendev.org/c/openstack/horizon/+/255136 so that may >> be a good place to begin researching from. >> -- >> Jeremy Stanley
On 11/7/25 1:55 PM, Jan Jasek wrote:
I saw you mentioned parallel running multiple times.
Yes, because it's very annoying that it takes forever, when it could take 32 times less time to build the Horizon package on my 16 cores laptop (my boss bought it to me so I spend less time waiting for builds...).
Pytest-xdist, I was experimenting very very little with it some time ago. It worked fine for “ui-pytests” (where tests are completely independent and running in the Django live server). I did not have time to experiment much with “integration-pytests”, but I am afraid that, as the tests there include pagination tests for instances, images, volumes and snapshots, and all the tests require shared “resources” in general (like a volume resource for all volume snapshot tests, etc.), they may not run safely in parallel.
I do not run integration tests when building packages. IMO, integration tests should be living in tempest, not in a per-project thing, otherwise it's difficult for me to run them.
What I would like to run in parallel is unit tests, which in the past in Horizon, wouldn't run in parallel.
As for pytest vs stestr, again, the best thing would be if you could use stestr as the test runner, because it has a nice interface (for selecting tests with a regex), even if you're using pytest extensions. That's asking a lot less than just removing all traces of pytest.
Though I won't complain too much about this, it's more a strong suggestion than a hard requirement.
being able to run the tests in parallel was not super high priority on my plate - it was to make tests somehow stable and maintainable
Thanks for that already! :)
And I am quite sure that parallel running did not exist in previous integration tests (as they were barely stable).
As I wrote, even unit tests couldn't run in parallel...
Switching subject now...
One other thing which I think would be awesome, would be getting rid of local_settings.py and get an openstack_dashboard.conf instead, using oslo.config. This has been discussed for like 10 years, but was never achieved. The local_settings.py of Horizon is just horrible to deal with when using config management. In puppet and in Kolla, there's no other choice but using Jinja2 templates, which is barely readable when you add conditionals.
Is this still something the team is working on? Or has this been just given-up ?
Cheers,
Thomas Goirand (zigo)
P.S: Do you CC me please, I'm registered to the list, and that's breaking my mail filtering.
Hello Thomas,
Right now this discussion is not about unit tests (parallel running). This discussion is about integration tests, the runner for Horizon integration tests and using Pytest in general - specifically for Watcher.
As for pytest vs stestr, again, Horizon never used stestr as a test runner, and I am still waiting to see if someone - who is here longer than me and maybe was part of the historical decisions - can explain why nosetests, why the Django built-in test runner, why Pytest, and NEVER stestr. If I do not get an answer here I will try to directly ping longer-active members of Horizon or will dig into some old meeting notes, etc. But right now I am still hearing from you why we should use stestr and why it is required, and I do not hear why Horizon NEVER used it for integration tests. So I am not arguing we should not use it, but I am really curious why it is so important as you are saying and why it was never used for Horizon integration tests.
I understand that there are gaps in Horizon and now this thread appeared and you are using it to share all the things that you are not satisfied with, and although I am trying to answer them all, we should stick here with the original topic and you can come to discuss the others (unit tests parallel running, oslo.config, etc.) at our weekly meetings or the PTG.
But to very briefly answer your question: oslo.config was discussed at the previous (Flamingo) PTG as something that would be nice to have: https://etherpad.opendev.org/p/horizon-flamingo-ptg#L85 but as there are still higher priority topics, the current state is that the team is not working on it, but we also did not give up. We know about it, we would like to complete this topic but it is waiting for its time.
Thank you
Jan
On Sat, Nov 8, 2025 at 3:36 PM Thomas Goirand <zigo@debian.org> wrote:
On 11/7/25 1:55 PM, Jan Jasek wrote:
I saw you mentioned parallel running multiple times.
Yes, because it's very annoying that it takes forever, when it could take 32 times less time to build the Horizon package on my 16 cores laptop (my boss bought it to me so I spend less time waiting for builds...).
Pytest-xdist, I was experimenting very very little with it some time ago. It worked fine for “ui-pytests” (where tests are completely independent and running in the Django live server). I did not have time to experiment much with “integration-pytests”, but I am afraid that, as the tests there include pagination tests for instances, images, volumes and snapshots, and all the tests require shared “resources” in general (like a volume resource for all volume snapshot tests, etc.), they may not run safely in parallel.
I do not run integration tests when building packages. IMO, integration tests should be living in tempest, not in a per-project thing, otherwise it's difficult for me to run them.
What I would like to run in parallel is unit tests, which in the past in Horizon, wouldn't run in parallel.
As for pytest vs stestr, again, the best thing would be if you could use stestr as test runners, because it has a nice interface (for selecting tests with a regex), even if you're using pytest extensions. That's asking a lot less than just removing all traces of pytest.
Though I wont complain too much about this, it's more a strong suggestion than a hard requirement.
being able to run the tests in parallel was not super high priority on my plate - it was to make tests somehow stable and maintainable
Thanks for that already! :)
And I am quite sure that parallel running did not exist in previous integration tests (as they were barely stable).
As I wrote, even unit tests couldn't run in parallel...
Switching subject now...
One other thing which I think would be awesome, would be getting rid of local_settings.py and get an openstack_dashboard.conf instead, using oslo.config. This has been discussed for like 10 years, but was never achieved. The local_settings.py of Horizon is just horrible to deal with when using config management. In puppet and in Kolla, there's no other choice but using Jinja2 templates, which is barely readable when you add conditionals.
Is this still something the team is working on? Or has this been just given-up ?
Cheers,
Thomas Goirand (zigo)
P.S: Do you CC me please, I'm registered to the list, and that's breaking my mail filtering.
On 08/11/2025 14:35, Thomas Goirand wrote:
On 11/7/25 1:55 PM, Jan Jasek wrote:
I saw you mentioned parallel running multiple times.
Yes, because it's very annoying that it takes forever, when it could take 32 times less time to build the Horizon package on my 16 cores laptop (my boss bought it to me so I spend less time waiting for builds...).
Pytest-xdist, I was experimenting very very little with it some time ago. It worked fine for “ui-pytests” (where tests are completely independent and running in the Django live server). I did not have time to experiment much with “integration-pytests”, but I am afraid that, as the tests there include pagination tests for instances, images, volumes and snapshots, and all the tests require shared “resources” in general (like a volume resource for all volume snapshot tests, etc.), they may not run safely in parallel.
I do not run integration tests when building packages. IMO, integration tests should be living in tempest, not in a per-project thing, otherwise it's difficult for me to run them.
That's an interesting point. I guess there is no technical reason why Selenium or Playwright tests could not be in the tempest plugin.
I'll raise that with the Watcher team. I still want to have effectively "functional" tests (in the sense of the Nova functional tests), using the same approach, where we can test watcher-dashboard in isolation as much as reasonable, similar to the UI tests in Horizon, but we could certainly group the integration/end-to-end tests that need a devstack or a cloud with Horizon with the tempest tests.
My only reservation with that is that it also makes sense for them to be in the dashboard project, the same way we put the "devstack functional" tests in the python-<service>client projects.
It's very nice to be able to have the test update in the same commit as the UX change; with that said, we have Depends-On, so I'll at least add this to the list of decisions to make and review with the team.
What I would like to run in parallel is unit tests, which in the past in Horizon, wouldn't run in parallel.
As for pytest vs stestr, again, the best thing would be if you could use stestr as test runners, because it has a nice interface (for selecting tests with a regex), even if you're using pytest extensions. That's asking a lot less than just removing all traces of pytest.
Someone probably needs to try. My concern is that pytest does dependency injection of "fixtures" into functions based on the name of the argument in the function definition. I don't think there is any standard for that, and as a result, if you use that functionality to write your tests, I don't think it's possible to use any other test runner.
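A tiny illustration of that concern (the names are invented and not taken from the Horizon suite): the first test below relies on pytest resolving the "volume" argument to a fixture by name, so only pytest can run it, while the unittest version expresses the same setup in a runner-neutral way.

    import unittest

    import pytest


    @pytest.fixture
    def volume():
        # pytest matches this fixture to any test argument named "volume".
        return {"name": "test-volume", "size": 1}


    def test_volume_name_pytest_style(volume):
        # Only pytest knows how to inject "volume"; stestr/unittest cannot.
        assert volume["name"] == "test-volume"


    class VolumeTest(unittest.TestCase):
        def setUp(self):
            # Equivalent setup that any unittest-compatible runner understands.
            self.volume = {"name": "test-volume", "size": 1}

        def test_volume_name(self):
            self.assertEqual(self.volume["name"], "test-volume")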
Though I wont complain too much about this, it's more a strong suggestion than a hard requirement.
being able to run the tests in parallel was not super high priority on my plate - it was to make tests somehow stable and maintainable
Thanks for that already! :)
And I am quite sure that parallel running did not exist in previous integration tests (as they were barely stable).
As I wrote, even unit tests couldn't run in parallel...
Switching subject now...
One other thing which I think would be awesome, would be getting rid of local_settings.py and get an openstack_dashboard.conf instead, using oslo.config. This has been discussed for like 10 years, but was never achieved. The local_settings.py of Horizon is just horrible to deal with when using config management. In puppet and in Kolla, there's no other choice but using Jinja2 templates, which is barely readable when you add conditionals.
Yeah, the adoption of oslo.config was agreed to in an old spec, and it was something I was sad to see had effectively stalled out. As far as I am aware no one is currently working on that, but I think it would be a nice enhancement, even if it moves Horizon to use even less of Django than it already uses.
https://docs.openstack.org/horizon/latest/contributor/topics/ini-based-confi...
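For readers unfamiliar with the idea, here is a rough sketch of what the oslo.config approach generally looks like: options declared in code and read from an INI-style file instead of local_settings.py. The option names, group and file name below are invented for illustration and are not from the Horizon spec.

    from oslo_config import cfg

    dashboard_opts = [
        cfg.StrOpt("default_theme", default="default",
                   help="Name of the theme to load."),
        cfg.BoolOpt("enable_profiler", default=False,
                    help="Enable the embedded profiler panel."),
    ]

    CONF = cfg.CONF
    CONF.register_opts(dashboard_opts, group="dashboard")

    if __name__ == "__main__":
        # e.g. python settings_sketch.py --config-file openstack_dashboard.conf
        CONF(project="openstack_dashboard")
        print(CONF.dashboard.default_theme, CONF.dashboard.enable_profiler)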
Is this still something the team is working on? Or has this been just given-up ?
Cheers,
Thomas Goirand (zigo)
P.S: Do you CC me please, I'm registered to the list, and that's breaking my mail filtering.
On Mon, Nov 10, 2025 at 6:27 PM Sean Mooney <smooney@redhat.com> wrote:
On 08/11/2025 14:35, Thomas Goirand wrote:
On 11/7/25 1:55 PM, Jan Jasek wrote:
I saw you mentioned parallel running multiple times.
Yes, because it's very annoying that it takes forever, when it could take 32 times less time to build the Horizon package on my 16 cores laptop (my boss bought it to me so I spend less time waiting for builds...).
Pytest-xdist, I was experimenting very very little with it some time ago. It worked fine for “ui-pytests” (where tests are completely independent and running in the Django live server). I did not have time to experiment much with “integration-pytests”, but I am afraid that, as the tests there include pagination tests for instances, images, volumes and snapshots, and all the tests require shared “resources” in general (like a volume resource for all volume snapshot tests, etc.), they may not run safely in parallel.
I do not run integration tests when building packages. IMO, integration tests should be living in tempest, not in a per-project thing, otherwise it's difficult for me to run them.
That's an interesting point. I guess there is no technical reason why Selenium or Playwright tests could not be in the tempest plugin.
I'll raise that with the Watcher team. I still want to have effectively "functional" tests (in the sense of the Nova functional tests), using the same approach, where we can test watcher-dashboard in isolation as much as reasonable, similar to the UI tests in Horizon, but we could certainly group the integration/end-to-end tests that need a devstack or a cloud with Horizon with the tempest tests.
My only reservation with that is that it also makes sense for them to be in the dashboard project, the same way we put the "devstack functional" tests in the python-<service>client projects.
It's very nice to be able to have the test update in the same commit as the UX change; with that said, we have Depends-On, so I'll at least add this to the list of decisions to make and review with the team.
What I would like to run in parallel is unit tests, which in the past in Horizon, wouldn't run in parallel.
As for pytest vs stestr, again, the best thing would be if you could use stestr as test runners, because it has a nice interface (for selecting tests with a regex), even if you're using pytest extensions. That's asking a lot less than just removing all traces of pytest.
Someone probably needs to try. My concern is that pytest does dependency injection of "fixtures" into functions based on the name of the argument in the function definition.
I don't think there is any standard for that, and as a result, if you use that functionality to write your tests, I don't think it's possible to use any other test runner.
Though I wont complain too much about this, it's more a strong suggestion than a hard requirement.
being able to run the tests in parallel was not super high priority on my plate - it was to make tests somehow stable and maintainable
Thanks for that already! :)
And I am quite sure that parallel running did not exist in previous integration tests (as they were barely stable).
As I wrote, even unit tests couldn't run in parallel...
Switching subject now...
One other thing which I think would be awesome, would be getting rid of local_settings.py and get an openstack_dashboard.conf instead, using oslo.config. This has been discussed for like 10 years, but was never achieved. The local_settings.py of Horizon is just horrible to deal with when using config management. In puppet and in Kolla, there's no other choice but using Jinja2 templates, which is barely readable when you add conditionals.
Yeah, the adoption of oslo.config was agreed to in an old spec, and it was something I was sad to see had effectively stalled out.
As far as I am aware no one is currently working on that, but I think it would be a nice enhancement, even if it moves Horizon to use even less of Django than it already uses.
https://docs.openstack.org/horizon/latest/contributor/topics/ini-based-confi...
Thank you everyone for the input in this thread. The Watcher team discussed this topic in Thursday's Watcher weekly IRC meeting[1]. The PTI allows the use of pytest as an optional test runner; it does not allow the use of pytest for writing tests. The Watcher team is going with the unittest module for writing the watcher-dashboard integration tests. We will keep an eye on the PTI doc update.
Links:
[1]. https://meetings.opendev.org/irclogs/%23openstack-watcher/%23openstack-watch...
With Regards,
Chandan Kumar
On Thu, Nov 6, 2025, at 2:02 PM, Thomas Goirand wrote:
On 11/6/25 4:03 PM, Jan Jasek wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
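As a quick illustration of the features mentioned above (the names and URL are invented; this is not code from the Horizon suite): a session-scoped fixture is created once per run and shared, and parametrization expands one function into several tests. An HTML report can then be produced with the pytest-html plugin via pytest --html=report.html.

    import pytest


    @pytest.fixture(scope="session")
    def dashboard_url():
        # Session scope: created once and shared by every test in the run.
        return "http://localhost/dashboard/"


    @pytest.mark.parametrize("panel", ["instances", "volumes", "images"])
    def test_panel_url(dashboard_url, panel):
        # Parametrization runs this once per panel name.
        url = f"{dashboard_url}project/{panel}/"
        assert url.startswith("http") and panel in url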
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.
Hi Jan,
What I'm surprised is why a test runner is so important? Can't we run your tests with stestr, even if they use pytest decorator / libs inside?
Pytest the test runner is capable of running standard compliant test cases. Pytest the test framework library introduces features that are not standards compliant that require pytest to execute (or at least this was the case the last time pytest came up). This is one of the motivations for using a standard compliant test runner in the gate. It ensures that you can opt into using pytest or any other runner of your choice. But if you start using pytest it is easy to unknowingly stop working with any other test runner.
I think we're all operating under the assumption that the horizon test cases cannot execute under stestr and require pytest (otherwise I'm not sure what the concern is). That said I don't know if anyone has actually tested if this is the case.
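As a hedged illustration of what "standard compliant" means in practice, the module below uses nothing pytest-specific, so it can be executed unchanged by python -m unittest, by stestr (with test_path pointed at its directory), or by pytest. The module and test names are placeholders.

    import unittest


    class QuotaMathTest(unittest.TestCase):
        def test_remaining_quota(self):
            total, used = 10, 4
            self.assertEqual(total - used, 6)

        def test_quota_not_exceeded(self):
            used, total = 4, 10
            self.assertLessEqual(used, total)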
Also, from a package maintainer perspective, it's ok for me if you use pytest, as long as you don't use weird extension not packaged in distro (I'd have to package them just for running Horizon tests, which is a lot of extra work I prefer to avoid). Apart from that, sticking with the most common pytest stuff is very much ok.
What I've been frustrated with Horizon though, is the impossibility to run tests in parallel. When one does, everything falls apart. Is this now fixed? Can I use pytest-xdist, and run pytest with "-n auto", so I can fully use all the cores of my laptop to run Horizon tests?
Thanks for your work, Cheers,
Thomas Goirand (zigo)
Hi Clark,
I've been reading this discussion, and I don't really have anything to add to this, except that I've seen the term "standard compliant test cases" used quite a lot, and I'm a little bit confused by it. Would you be so nice and please link to the relevant specification, so that everyone can see what standard exactly you are talking about? Thank you!
On Fri, Nov 7, 2025 at 4:57 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On 11/6/25 4:03 PM, Jan Jasek wrote:
Hello everyone, the previous Horizon tests were unstable, hard to maintain, slow in execution. Before completely rewriting the tests we were experimenting and discussing what way to go with the goal to try to find the best way to make the tests stable, easy to maintain and fast. And Using PyTest with all the features it provides - Fixtures, scopings, parameterizations, etc. Rich plugin ecosystem like pytest-html for generating test reports. Basically cover the Horizon well, using modern ways and not reinvent the wheel again.
To be completely honest - I implemented the majority of the new tests for Horizon and I did not know that PyTest is not allowed to be used. If I knew about it I would definitely have discussed it with someone (TC) before.
I see PyTest as a very popular, widely used, and feature-rich open-source framework. In my point of view PyTest is the modern, de facto standard/industry-adopted for testing in the Python ecosystem - That is the reason why I am so surprised that after reimplementation of all Horizon tests (that are now, after years, super stable, easy to maintain and running well for multiple cycles already) and started with coverage for Manila-UI plugin, it came out on PTG from watcher team that it is probably not Allowed to use PyTest in OpenStack project.
On Thu, Nov 6, 2025, at 2:02 PM, Thomas Goirand wrote:
Hi Jan,
What I'm surprised is why a test runner is so important? Can't we run your tests with stestr, even if they use pytest decorator / libs inside?
Pytest the test runner is capable of running standard compliant test cases. Pytest the test framework library introduces features that are not standards compliant that require pytest to execute (or at least this was the case the last time pytest came up). This is one of the motivations for using a standard compliant test runner in the gate. It ensures that you can opt into using pytest or any other runner of your choice. But if you start using pytest it is easy to unknowingly stop working with any other test runner.
I think we're all operating under the assumption that the horizon test cases cannot execute under stestr and require pytest (otherwise I'm not sure what the concern is). That said I don't know if anyone has actually tested if this is the case.
Also, from a package maintainer perspective, it's ok for me if you use pytest, as long as you don't use weird extension not packaged in distro (I'd have to package them just for running Horizon tests, which is a lot of extra work I prefer to avoid). Apart from that, sticking with the most common pytest stuff is very much ok.
What I've been frustrated with Horizon though, is the impossibility to run tests in parallel. When one does, everything falls apart. Is this now fixed? Can I use pytest-xdist, and run pytest with "-n auto", so I can fully use all the cores of my laptop to run Horizon tests?
Thanks for your work, Cheers,
Thomas Goirand (zigo)
On 2025-11-24 09:41:03 +0100 (+0100), Radomir Dopieralski wrote:
I've been reading this discussion, and I don't really have anything to add to this, except that I've seen the term "standard compliant test cases" used quite a lot, and I'm a little bit confused by it. Would you be so nice and please link to the relevant specification, so that everyone can see what standard exactly you are talking about? Thank you! [...]
My understanding is that it's shorthand for "works with any runner compatible with the unittest module in CPython's standard library." So basically keeping things flexible and not locking the project into a specific test runner or breaking the ability to e.g. serialize results into subunit protocol format. -- Jeremy Stanley
On Mon, Nov 24, 2025, at 7:35 AM, Jeremy Stanley wrote:
On 2025-11-24 09:41:03 +0100 (+0100), Radomir Dopieralski wrote:
I've been reading this discussion, and I don't really have anything to add to this, except that I've seen the term "standard compliant test cases" used quite a lot, and I'm a little bit confused by it. Would you be so nice and please link to the relevant specification, so that everyone can see what standard exactly you are talking about? Thank you! [...]
My understanding is that it's shorthand for "works with any runner compatible with the unittest module in CPython's standard library." So basically keeping things flexible and not locking the project into a specific test runner or breaking the ability to e.g. serialize results into subunit protocol format.
Yes the unittest module defines a number of interfaces and behaviors for test cases, test classes, test case discovery, test loaders, and test runners. The documentation for this can be found in the python unittest documentation: https://docs.python.org/3/library/unittest.html. A simplified way of thinking about this would be to check if the built-in high-level unittest test runner can load and run your tests: https://docs.python.org/3/library/unittest.html#unittest.main. Maintaining compatibility with these interfaces generally ensures that you can use whichever test runner you prefer. Whether that be nose, testr, stestr, pytest, unittest.main, or whatever tooling is built into your IDE.
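A rough version of that simplified check, assuming the tests live under a tests/ directory and follow the test_*.py naming pattern (both placeholders): if the stdlib loader and runner can discover and run the suite without importing pytest, the tests remain usable from any unittest-compatible runner.

    import sys
    import unittest

    if __name__ == "__main__":
        loader = unittest.TestLoader()
        suite = loader.discover(start_dir="tests", pattern="test_*.py")
        result = unittest.TextTestRunner(verbosity=2).run(suite)
        sys.exit(0 if result.wasSuccessful() else 1)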
participants (10)
- Andriy Kurilin
- Artem Goncharov
- Chandan Kumar
- Clark Boylan
- Goutham Pacha Ravi
- Jan Jasek
- Jeremy Stanley
- Radomir Dopieralski
- Sean Mooney
- Thomas Goirand