On 08/11/2025 14:35, Thomas Goirand wrote:
On 11/7/25 1:55 PM, Jan Jasek wrote:
I saw you mentioned parallel running multiple times.
Yes, because it's very annoying that it takes forever, when it could take 32 times less time to build the Horizon package on my 16-core laptop (my boss bought it for me so I spend less time waiting for builds...).
Pytest-xdist, I was experimenting very little with it some time ago. It worked fine for “ui-pytests” (where the tests are completely independent and run against the Django live server). I did not have time to experiment much with “integration-pytests”, but I am afraid running those in parallel would be problematic, as the tests there include pagination tests for instance (image, volume, snapshot) and all the tests require “resources” in general (like a volume resource for all volume snapshot tests, etc.).
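For illustration only, a minimal sketch (hypothetical test and resource names, and assuming pytest-xdist >= 2.5) of how that shared-resource concern could be handled: tests that depend on the same resource can be pinned to a single worker with the xdist_group mark and the suite run with "pytest -n auto --dist loadgroup":

    import pytest

    @pytest.fixture(scope="module")
    def volume():
        # Placeholder for whatever would create the shared volume resource;
        # in the real tests this would talk to the cloud.
        vol = {"name": "shared-test-volume"}
        yield vol
        # Teardown (deleting the volume) would go here.

    # Pin all tests that touch the same volume to one xdist worker so the
    # resource is not created or torn down concurrently by two workers.
    @pytest.mark.xdist_group("volume-tests")
    def test_volume_snapshot_list(volume):
        assert volume["name"] == "shared-test-volume"

    @pytest.mark.xdist_group("volume-tests")
    def test_volume_snapshot_pagination(volume):
        assert volume["name"] == "shared-test-volume"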
I do not run integration tests when building packages. IMO, integration tests should be living in tempest, not in a per-project thing, otherwise it's difficult for me to run them.
That's an interesting point. I guess there is no technical reason why Selenium or Playwright tests could not be in the tempest plugin; I'll raise that with the watcher team. I still want to have effectively "functional" tests (in the sense of nova functional tests) using the same approach, where we can test the watcher dashboard in isolation as much as is reasonable, similar to the UI tests in Horizon. But we could certainly group the integration/end-to-end tests that need a devstack or a cloud with Horizon together with the tempest tests. My only reservation with that is it also makes sense for them to be in the dashboard project, the same way we put the "devstack functional" tests in the python-<service>client projects: it's very nice to be able to have the test update in the same commit as the UX change. With that said, we have Depends-On, so I'll at least add this to the list of decisions to make and review with the team.
What I would like to run in parallel is unit tests, which in the past in Horizon, wouldn't run in parallel.
As for pytest vs stestr, again, the best thing would be if you could use stestr as the test runner, because it has a nice interface (for selecting tests with a regex), even if you're using pytest extensions. That's asking a lot less than removing all traces of pytest.
Someone probably needs to try. My concern is that pytest does dependency injection of "fixtures" into functions based on the name of the argument in the function definition. I don't think there is any standard for that, and as a result, if you use that functionality to write your tests, I don't think it's possible to use any other test runner.
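To make that concrete, here is a minimal sketch of the behaviour in question (the fixture and test names are made up): the fixture is matched to the test purely by the argument name, which is a pytest-specific convention rather than anything in the standard unittest protocol that stestr drives:

    import pytest

    @pytest.fixture
    def admin_client():
        # pytest calls this and hands the return value to any test function
        # that declares an argument with the exact same name.
        return {"user": "admin"}

    def test_list_volumes(admin_client):
        # "admin_client" here is resolved by pytest's fixture injection based
        # on the argument name alone; a unittest/stestr runner has no notion
        # of this and would not know how to call the test.
        assert admin_client["user"] == "admin"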
Though I won't complain too much about this, it's more a strong suggestion than a hard requirement.
Being able to run the tests in parallel was not a super high priority on my plate; the priority was to make the tests somewhat stable and maintainable.
Thanks for that already! :)
And I am quite sure that parallel running did not exist in previous integration tests (as they were barely stable).
As I wrote, even unit tests couldn't run in parallel...
Switching subject now...
One other thing which I think would be awesome would be getting rid of local_settings.py and getting an openstack_dashboard.conf instead, using oslo.config. This has been discussed for like 10 years, but was never achieved. The local_settings.py of Horizon is just horrible to deal with when using config management. In Puppet and in Kolla, there's no other choice but to use Jinja2 templates, which are barely readable once you add conditionals.
Yeah, the adoption of oslo.config was agreed to in an old spec, and it was something I was sad to see had effectively stalled out. As far as I am aware, no one is currently working on that, but I think it would be a nice enhancement, even if it means moving Horizon to use even less of Django than it already uses. https://docs.openstack.org/horizon/latest/contributor/topics/ini-based-confi...
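Just to sketch what the oslo.config side of that could look like (the option and group names below are invented for illustration, not what the spec defines):

    from oslo_config import cfg

    # Hypothetical options purely for illustration; the real option names and
    # groups would come out of the ini-based-configuration spec linked above.
    dashboard_opts = [
        cfg.StrOpt('default_theme', default='default',
                   help='Name of the default Horizon theme.'),
        cfg.BoolOpt('enable_volume_panel', default=True,
                    help='Whether the volume panel is displayed.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(dashboard_opts, group='dashboard')

    # Something like
    #   CONF(['--config-file',
    #         '/etc/openstack-dashboard/openstack_dashboard.conf'])
    # would then replace importing local_settings.py, and a flat ini file is
    # much easier to template from Puppet/Kolla than a Python settings module.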
Is this still something the team is working on? Or has this just been given up?
Cheers,
Thomas Goirand (zigo)
P.S: Please don't CC me, I'm registered to the list, and that's breaking my mail filtering.