---- On Tue, 22 Aug 2023 13:49:35 -0700 Jeremy Stanley wrote ---
On 2023-08-22 12:26:23 -0500 (-0500), Tony Breeds wrote:
[...]
Keeping everything in sync on a fast-moving target is a challenge.
While none of these are insurmountable, they're also very far from
trivial. Using Debian testing would partially address some of this,
but it's still a pretty big ask.
[...]
Pretty much this. It's been proven time and again (Gentoo, openSUSE
Tumbleweed, Fedora, non-LTS Ubuntu) that the constant churn in
packages means more time spent finding and working around
instability in distributions that are effectively "under
development" at the same time we're trying to use them to test
versions of OpenStack that are under development. We struggle just
to build images of new releases reliably, much less keep jobs
running on top of an ever-changing foundation of sand.
And before someone says "just use whatever images the distros are
publishing," that's what we were doing years ago. By building our
own images we can ensure things like consistent user IDs and
permissions across a diverse set of distros, insert our caches to
accelerate jobs, ensure minimal installed package sets have exactly
what's needed to bootstrap jobs without anything preinstalled that
may conflict with jobs, et cetera. We're already wrangling images
for 17 different x86 platforms and 9 different ARM platforms. If we
weren't able to enforce consistent entrypoints, access and content
on these images, we couldn't have scaled to nearly that level.
I think we have almost the same model as what CentOS Stream follows. It
is not as stable, because its development happens in parallel, and that
is why we test it on a best-effort basis.
Can we do the same for the Debian experimental version, and just keep
those images in our infra with the risk of instability? That instability
risk should be fine, as we would only be running some non-voting testing
on them to exercise the next Python version.
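For illustration only, a non-voting job along those lines might look
roughly like this in Zuul configuration (the job, parent, and nodeset
names below are hypothetical placeholders, not existing definitions):

  - job:
      name: openstack-tox-py3-next-debian-experimental
      parent: openstack-tox-py3-next
      description: Best-effort unit test run on the next Python version.
      nodeset: debian-experimental
      voting: false

  - project:
      check:
        jobs:
          - openstack-tox-py3-next-debian-experimental

Because the job is non-voting, breakage caused by churn in the image
would show up in check results without blocking anyone's gate.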
-gmann