Hello everyone,
the previous Horizon tests were unstable, hard to maintain, and slow to execute. Before completely rewriting them, we experimented and discussed which way to go, with the goal of finding the best way to make the tests stable, easy to maintain, and fast. We settled on pytest with all the features it provides (fixtures, scoping, parametrization, etc.) and its rich plugin ecosystem (e.g. pytest-html for generating test reports). In short: cover Horizon well, use modern tooling, and not reinvent the wheel again.

To be completely honest, I implemented the majority of the new tests for Horizon, and I did not know that pytest is not allowed to be used. If I had known about it, I would definitely have discussed it with someone (the TC) first.

I see pytest as a very popular, widely used, and feature-rich open-source framework. In my view, pytest is the modern, de facto industry standard for testing in the Python ecosystem. That is why I was so surprised when, after we had reimplemented all the Horizon tests (which are now, after years, super stable, easy to maintain, and have been running well for multiple cycles already) and started on coverage for the manila-ui plugin, it came out at the PTG, via the Watcher team, that using pytest in an OpenStack project is probably not allowed.

So, to answer Goutham's point: we (the Horizon team) see it as a way to write and maintain tests more easily and in a modern way.
Anyone who attended our Horizon PTG session, where we discussed and presented part of the plugin coverage, knows that we are also trying to make the base (elementary fixtures) reusable by all the other projects, so they can import a few elementary fixtures and build tests for their plugins on top of them however they want, with significantly less effort.
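To illustrate the idea, here is a minimal sketch of what that reuse could look like from a plugin's side. All names below are hypothetical, not Horizon's actual fixture names:

```python
# Hypothetical sketch of the "elementary fixtures" idea: Horizon would
# publish small session-scoped fixtures, and a plugin's conftest.py
# would import them and compose its own on top. Names are illustrative.
import pytest


def build_panel_url(base_url: str, panel_path: str) -> str:
    # Plain helper so the logic is usable outside pytest as well.
    return base_url.rstrip("/") + "/" + panel_path.lstrip("/")


@pytest.fixture(scope="session")
def dashboard_url():
    # An "elementary" fixture: in a real setup this would come from
    # test configuration rather than a hard-coded value.
    return "http://localhost/dashboard"


@pytest.fixture
def shares_panel_url(dashboard_url):
    # A plugin (e.g. manila-ui) builds its own fixture on top of the
    # elementary one in a couple of lines.
    return build_panel_url(dashboard_url, "project/shares/")


def test_shares_panel_url(shares_panel_url):
    assert shares_panel_url == "http://localhost/dashboard/project/shares/"
```

The point is only the composition pattern: a plugin imports a few base fixtures and layers its own on top, instead of building a test framework from scratch.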

If something is not okay here, I take responsibility for it within the Horizon team. As I said, I did not know about the restriction; I acted with the best intentions, and of course I am open to discussion.
Thank you!
Jan

On Thu, Nov 6, 2025 at 7:11 AM Goutham Pacha Ravi <gouthampravi@gmail.com> wrote:
On Tue, Nov 4, 2025 at 9:58 PM Chandan Kumar <chkumar@redhat.com> wrote:
>
> Hello TC,
>
> We are writing to seek your guidance regarding the Python testing
> standards (PTI) as they relate to Horizon plugins.
>
> The current integration tests of the watcher-dashboard project (the
> Horizon plugin for OpenStack Watcher) are broken due to integration
> code changes in the Horizon integration suite. We are currently
> working on rewriting the watcher-dashboard integration tests.
>
> We noted that the Horizon project itself has developed a robust set of
> reusable, pytest-based integration tests, fixtures, tox targets and
> zuul job[1].
> We also see that other plugins, like manila-ui, are already reusing
> these pytest fixtures.[2].
>
> Several other projects within the OpenStack ecosystem (such as
> skyline-apiserver, rally, and
> projects under the Airship and StarlingX namespaces) are already using pytest.
>
> This presents a conflict for the Watcher team. The official Python PTI
> states that tests should be written using the Python stdlib unittest
> module[3].
>
> The Watcher team advocates for adhering to the Python PTI and using
> unittest. Our main watcher project uses unittest,
> and we prefer to maintain this standard for consistency and PTI
> compliance in watcher-dashboard. This topic came up during
> watcher PTG discussion[4].
>
> This leaves us with a dilemma:
> - Follow the Python PTI: This aligns with the PTI and our team's
> standards but requires us
>   to ignore Horizon's reusable pytest tests and build our own testing
> framework from scratch, duplicating effort.
>
> - Follow the parent project (Horizon) to use pytest to reuse their fixtures.
>   This would be more efficient but appears to violate the Python PTI
> and creates inconsistency with our main project.
>
> - Do we want to improve Python PTI documentation to include pytest usage?

I honestly believe the PTI could evolve, and not require a specific
tool. The core requirement as I see it was to provide a consistent
"interface" - define tox as an entry point for testing, and produce
recognizable result artifacts. It's been several years since we last
updated that portion of the PTI, and during that update we noted in
the commit message that some projects continued relying on nosetests,
and horizon in particular could remain an exception [5]. So perhaps we
can clarify this in the PTI.

Project maintainers of Horizon can probably explain their choice with
specifics, but, from our discussion on #openstack-tc [6], we seemed to
think that pytest has evolved over the years, and if it is easier to
write and maintain tests with it, we could.



>
> We just need guidance on this topic.
>
> Links:
> [1]. https://github.com/openstack/horizon/tree/master/openstack_dashboard/test/selenium
> [2]. https://github.com/openstack/manila-ui/tree/master/manila_ui/tests/selenium
> [3]. https://github.com/openstack/governance/blob/master/reference/pti/python.rst#python-test-running
> [4]. https://etherpad.opendev.org/p/watcher-2026.1-ptg#L404

[5] https://opendev.org/openstack/governance/commit/759c42b10cb3728f5549b05f68e826b1c62a968c
[6] https://meetings.opendev.org/irclogs/%23openstack-tc/%23openstack-tc.2025-10-29.log.html
>
> With Regards,
>
> Chandan Kumar
>