On 07/11/2025 17:30, Jan Jasek wrote:
Hi Clark, thanks for the relevant points and investigation!
I am definitely not blaming stestr for the instabilities. I am only saying that Horizon historically never used it as a runner (even though it was pointed out here multiple times as something "required").
And as I said in my very first message, and I will say it again: I did not know that pytest is somehow forbidden. It was not explicitly mentioned anywhere I looked, and since pytest is the de facto standard in the modern Python world, I did not expect it could be a problem. When I started rewriting the tests from scratch I was new to Horizon, and the whole Horizon team agreed that pytest was the way we would go. During the implementation none of the reviewers had any problem with it, and over the last few cycles, while the tests have been running without any issue, no one said a word. Those are all facts, and I cannot go back in time to discuss them with everyone here.
If it is okay from your point of view to add it into the PTI - perfect: we have nice, stable, easy-to-maintain tests for Horizon whose base can also easily be reused by Horizon plugins.
Adding it to the PTI does not immediately mean that we will choose to build on it for watcher-dashboard testing, but from my point of view, if it is not allowed by the PTI, that is a hard blocker to Watcher using the work you did. One of the options we are considering is building a test suite based on the standard Python testing packages, but again we have to weigh the technical debt of having to learn and maintain two different ways of writing tests against the technical debt of maintaining our own testing. We are also considering whether Selenium or python-playwright would be the best framework for integration testing of the UI, but again we do not want to have to invent our own way to test things end to end. If pytest were allowed under the PTI, that would be a point in favor of trying to adopt the work done in Horizon, but at present all options are effectively equal from my perspective, with a slight bias against using the existing Horizon work, based on consistency and familiarity with non-pytest-based testing.
If it is such a huge problem/risk as some here are indicating - fine, we can start talking about why the previous tests were super unstable for quite some time while no one commenting here cared, and no one fixed or rewrote them. We can remove the new tests completely, reactivate the old ones, and then execute a recheck 10 times every time we want to merge something (to hit a run without random failures). And someone here can take it as a task to rewrite them the "correct way".
That is probably all I can say here. It is Friday and I am done. Have a nice weekend :-). Jan
On Fri, Nov 7, 2025 at 5:12 PM Clark Boylan <cboylan@sapwetik.org> wrote:
On Fri, Nov 7, 2025, at 7:39 AM, Jan Jasek wrote:
> Hi Jeremy,
> I am talking about the history of runners for Integration tests. And if
> you want to share something, please give some context to it, not just
> blindly throw links.
>
> Ivan and Akihiro removed testr support almost a decade ago.
> https://review.opendev.org/c/openstack/horizon/+/520214
>
> Thank you
> Jan
>
> On Fri, Nov 7, 2025 at 4:13 PM Jeremy Stanley <fungi@yuggoth.org> wrote:
>> On 2025-11-07 13:55:01 +0100 (+0100), Jan Jasek wrote:
>> [...]
>> > So the point is that as far as I know Horizon NEVER used stestr
>> > runner for Integration tests. At least from what I was able to
>> > find across old branches (feel free to correct me If I am wrong).
>> > I can try to find out why but stestr is not used and (at least
>> > majority time in Horizon history) was not used for integration
>> > tests. That is a fact.
Interesting. Going back to my statement on the value of understanding why prior decisions were made, understanding why this is the case could be useful. If Horizon never used stestr in the first place then I don't know that we can blame it for instability of test cases, for example. Maybe part of the issue here was deciding to go off piste in the first place? Or maybe there were really good reasons for these decisions (I know Django is a large framework with lots of batteries included).
At this point it is probably all moot. As I explained in my previous response, I suspect that pytest can largely meet the existing goals set in the PTI. I was trying to explain that there are real goals behind the decision to use stestr, that pytest is probably sufficient to address those goals, and if necessary goals can be updated to meet current needs or expectations.
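To make that concrete, one way pytest could slot into the tox-based workflow the PTI describes is something like the fragment below. This is purely a hypothetical sketch, not an existing Horizon or PTI configuration; the env name, dependency files, and report path are all assumptions for illustration:

```ini
# Hypothetical tox.ini sketch: invoking pytest where "tox -e py3"
# would otherwise invoke stestr. NOT taken from any real project.
[testenv:py3]
deps =
    -r{toxinidir}/requirements.txt
    -r{toxinidir}/test-requirements.txt
commands =
    # --junitxml gives CI a machine-readable result file, serving a
    # similar goal to the subunit stream that stestr produces today.
    pytest --junitxml={envlogdir}/pytest.xml {posargs}
```

The `{posargs}` passthrough keeps the usual "run a single test by name" workflow that the PTI's stestr-based envs support.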
On top of that, if we don't have an understanding of our goals or how well existing tools meet those goals I don't know how we can evaluate if the current tools should be replaced by something else. Looking at historical decisions can be a useful activity for understanding that better.
>> > Based on all your messages and comments, you are here much longer
>> > than me and you are part of this discussion so you obviously
>> > somehow participate on Horizon, so feel free to share here why
>> > stestr was never default runner for Integration tests (or send me
>> > where it was). I would really like to know and I was not here when
>> > those decisions were made.
I would have to look it up myself. I don't know if anyone knows why these specific actions were taken by Horizon at this point without digging through git logs, code reviews, or project meetings.
>> [...]
>>
>> While I was never heavily involved in Horizon development, a quick
>> grep through its Git history indicates Matthias introduced testr
>> support a decade ago in
>> https://review.opendev.org/c/openstack/horizon/+/255136 so that may
>> be a good place to begin researching from.
>> --
>> Jeremy Stanley