Hello everyone, thank you so much for all your opinions and points of view.
I did a little deep dive into Horizon testing history, because I have only been around OpenStack/Horizon for 3 years (compared to you).
Historically, Horizon's integration tests used nosetests, then the Django built-in test runner, and then pytest as the test runner (long before I even started working on OpenStack/Horizon).
So the point is that, as far as I know, Horizon NEVER used the stestr runner for integration tests, at least from what I was able to find across old branches (feel free to correct me if I am wrong). I can try to find out why, but stestr is not used now and, for at least the majority of Horizon's history, was not used for integration tests. That is a fact.
Based on all your messages and comments, you have been around much longer than me and you are part of this discussion, so you obviously participate in Horizon in some way. Please feel free to share why stestr was never the default runner for integration tests (or point me to where it was). I would really like to know, since I was not here when those decisions were made.
I saw you mention parallel runs multiple times.
Pytest-xdist: I experimented with it a little some time ago. It worked fine for the "ui-pytests" (where the tests are completely independent and run against the Django live server). I did not have time to experiment much with the "integration-pytests", but I am afraid that because they include pagination tests for instances, images, volumes and snapshots, and because the tests require shared "resources" in general (like a volume resource for all volume snapshot tests, etc.), parallel execution could very easily lead to race conditions and random failures. Maybe it will be possible to divide the tests into groups that do not interfere with each other, or maybe I am not completely right here. But as I said, running the tests in parallel was never very high on my plate; the priority was to make the tests stable and maintainable (at the time I rewrote the tests from scratch, we needed to recheck the previous tests 5-10 times in Zuul to get past all the random failures and be able to merge a patch).
I am also quite sure that the previous integration tests never ran in parallel (they were barely stable as it was). But I would definitely like to look into the possibilities of parallel runs when I have some time; a rough sketch of one direction is below.
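To make the grouping idea concrete, here is a minimal sketch using pytest-xdist's "loadgroup" distribution mode (available since pytest-xdist 2.5). The fixture and test names are made up for illustration, not the real Horizon ones:

    import pytest

    @pytest.fixture(scope="module")
    def volume():
        # Hypothetical stand-in for a shared volume resource that the
        # real tests would create through the Horizon UI / API.
        yield {"name": "test-volume"}

    # Tests marked with the same xdist_group always run on the same
    # worker, so they cannot race on the shared resource; tests from
    # different groups can still run in parallel on other workers.
    @pytest.mark.xdist_group(name="volume_resources")
    def test_volume_snapshot_create(volume):
        assert volume["name"] == "test-volume"

    @pytest.mark.xdist_group(name="volume_resources")
    def test_volume_snapshot_pagination(volume):
        assert volume["name"] == "test-volume"

Run with "pytest -n auto --dist loadgroup". Whether the real resource dependencies in the integration tests map cleanly onto such groups is exactly what I would need to experiment with.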
If you have any other questions or points (historical context), feel free to share them.
Thank you
Jan
On 11/6/25 4:03 PM, Jan Jasek wrote:
> Hello everyone,
> the previous Horizon tests were unstable, hard to maintain, and slow to
> execute. Before completely rewriting them, we experimented and discussed
> which way to go, with the goal of finding the best approach to make the
> tests stable, easy to maintain, and fast. We chose pytest for all the
> features it provides (fixtures, scoping, parametrization, etc.) and for
> its rich plugin ecosystem, like pytest-html for generating test reports.
> Basically: cover Horizon well, use modern approaches, and do not
> reinvent the wheel again.
>
> To be completely honest, I implemented the majority of the new tests
> for Horizon, and I did not know that pytest was not allowed. If I had
> known, I would definitely have discussed it with someone (the TC)
> first.
>
> I see pytest as a very popular, widely used, and feature-rich
> open-source framework. In my view, pytest is the modern, de facto,
> industry-adopted standard for testing in the Python ecosystem. That is
> why I am so surprised that, after the reimplementation of all the
> Horizon tests (which are now, after years, super stable, easy to
> maintain, and have been running well for multiple cycles already) and
> after starting on coverage for the Manila-UI plugin, it came out at the
> PTG from the Watcher team that using pytest in an OpenStack project is
> probably not allowed.
Hi Jan,
What surprises me is why the test runner is so important. Can't we run
your tests with stestr, even if they use pytest decorators / libs inside?
Also, from a package maintainer's perspective, it's ok for me if you use
pytest, as long as you don't use weird extensions not packaged in the
distro (I'd have to package them just for running Horizon tests, which is
a lot of extra work I'd prefer to avoid). Apart from that, sticking with
the most common pytest features is very much ok.
What has frustrated me about Horizon, though, is the impossibility of
running the tests in parallel: when one does, everything falls apart. Is
this now fixed? Can I use pytest-xdist and run pytest with "-n auto", so
I can fully use all the cores of my laptop to run the Horizon tests?
Thanks for your work,
Cheers,
Thomas Goirand (zigo)