[all][tc] python 3.11 testing plan
Hi All,
Some of you are already part of the discussion about Python 3.11 testing, but in case you are not aware of it, below is the plan for Python 3.11 testing in OpenStack.
Non voting in 2023.2
-------------------------
You might have seen that the python 3.11 job is now running as non-voting in all projects[1]. The idea is to run it as non-voting for this (2023.2) cycle, which will give projects time to fix the issues and make it green. As it is running on Debian (frickler mentioned the reason for running it on Debian in Gerrit[2]), it needs some changes in the bindep.txt file to pass. Here is an example of the fix[3], which you can apply in your project as well.
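For reference, fixes like [3] follow roughly the bindep.txt pattern sketched below; treat it as illustrative, since the exact package list differs per project (Debian bookworm has no mysql-client package, so the MariaDB client is installed there instead):

    mysql-client [platform:dpkg !platform:debian]
    mariadb-client [platform:debian]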
Voting in 2024.1
--------------------
In the next cycle (2024.1), I am proposing to make py3.11 testing mandatory[4] and voting (automatically via the common python job template)[5]. You need to fix the failures in this cycle, otherwise they will block the gate once the next cycle's development starts (basically, once 891238[5] is merged).
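Projects consuming the common template will pick this up automatically once [5] merges; for illustration, the usual project stanza looks like this sketch (template name as used in OpenStack project configs):

    - project:
        templates:
          - openstack-python3-jobs   # py3.11 becomes voting here via [5]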
[1] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891227/5
[2] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891146/1
[3] https://review.opendev.org/c/openstack/nova/+/891256
[4] https://review.opendev.org/c/openstack/governance/+/891225
[5] https://review.opendev.org/c/openstack/openstack-zuul-jobs/+/891238
-gmann
On 8/21/23 20:16, Ghanshyam Mann wrote:
[...]
Hi,
This is very nice, though, IMO, it's too late. Bookworm was released with OpenStack Zed, to which I had already added Python 3.11 support (if you guys have some patches to add on top, let me know, but as far as I know, it was already functional).
So now, the current plan is to ... test on py3.11. Yeah, but from the Debian perspective, we're already on Python 3.12. The RC1 is already in Debian Experimental, and I expect 3.12 to reach Unstable by the end of this year. Once again, I'll be the sole person experiencing all the troubles. It's been YEARS like this. It's probably time to address it, no?
I'd really love it if we could find a solution so that I stop being the only person getting the shit in this world. :)
What would be awesome would be to run Debian Unstable, with the latest interpreter, as non-voting jobs.
Your thoughts?
Cheers,
Thomas Goirand (zigo)
---- On Tue, 22 Aug 2023 09:05:56 -0700 Thomas Goirand wrote ---
[...]
What would be awesome would be to run Debian Unstable, with the latest interpreter, as non-voting jobs.
Thanks, Zigo, for your testing and effort; that is really appreciated.
I like the idea of testing 3.12 in advance, even with an RC release. Also, as you mentioned, we will be testing it in advance as non-voting, so doing it on the Debian experimental version is not a bad idea. This will give projects more time to fix the issues. And once we have a stable distro version shipping 3.12, we can switch the job to that distro.
I think Ubuntu is also going to package it around Nov/Dec, so that is also an option for adding it as non-voting.
-gmann
Hey Thomas,
I'm a Gentoo user and contributor, so I feel the pain of often being "ahead" of where OpenStack is. However, I also know that CI instability is one of the biggest time-vampires in OpenStack development.
I think we are in a pretty happy sweet spot right now: distributions are welcome to test early versions of Python, and I don't think I've ever seen a forward-looking compatibility change rejected. For instance, when Python 3.11 was in beta, a breakage was detected by Fedora in Ironic and fixed. This is a good example of OpenStack and distribution partners working together to make sure even the newer stuff works.
OpenStack itself putting these beta-quality versions in the PTI is problematic, though; we should ensure developers spend time building software, not tracking down Python-beta bugs. As it is, we already spend an outsized amount of time and effort fixing and running CI.
There may be some value in making jobs available earlier, but not voting -- maybe in the experimental queue? -- if folks like you, who are targeting unreleased Python versions, would like a way to check compatibility on demand. I'd happily approve a change to Ironic projects that adds this as an option in the experimental queue.
Thanks,
Jay Faulkner
Ironic PTL
TC Vice-Chair
On Tue, Aug 22, 2023 at 9:17 AM Thomas Goirand zigo@debian.org wrote:
[...]
On Tue, 22 Aug 2023 at 11:10, Thomas Goirand zigo@debian.org wrote:
What would be awesome would be to run Debian Unstable, with the latest interpreter, as non-voting jobs.
For me, a big issue with this is the impact on infra. We'd need Debian Unstable nodepool images, and to add the Debian Unstable pool to the AFS volume. Keeping everything in sync on a fast-moving target is a challenge. While none of these are insurmountable, they're also very far from trivial. Using testing would partially address some of this, but it's still a pretty big ask.
Yours Tony.
Just adding that Ubuntu 23.04 (lunar) shipped with py3.11 and 2023.1 (antelope) packages, and the UCA for 2023.1 on Ubuntu 22.04 (jammy) was py3.10, so 2023.1/antelope is tested on both. Mantic (Ubuntu 23.10) looks like it will be py3.11 again, so Bobcat will be on both py3.10 and py3.11, and 24.04 (the next LTS) may well be on py3.12.
So, like Thomas, we're usually a Python version or two ahead of the 'official' supported version upstream; i.e. at this point, pretty much everything that's packaged in Debian and Ubuntu at least imports without failures on py3.11, and 'probably' works from a Tempest perspective. But obviously we don't test everything to the depth that upstream does, so perhaps that point is a bit moot.
A thought did occur to me, though: as container-based/Docker/k8s solutions for the control plane become more widely used, it's likely that the official Python test will be the important data point, as the container build would (from a risk perspective) more likely select the officially supported Python version; the host Python that the containers run on becomes largely irrelevant. In this situation, upstream "next Python version" testing becomes all the more important.
Cheers Alex (tinwood)
On Tue, 22 Aug 2023 at 17:27, Ghanshyam Mann gmann@ghanshyammann.com wrote:
[...]
On 2023-08-22 12:26:23 -0500 (-0500), Tony Breeds wrote: [...]
Keeping everything in sync on a fast-moving target is a challenge. While none of these are insurmountable, they're also very far from trivial. Using testing would partially address some of this, but it's still a pretty big ask.
[...]
Pretty much this. It's been proven time and again (Gentoo, SUSE Tumbleweed, Fedora, non-LTS Ubuntu) that the constant churn in packages means more time spent finding and working around instability in distributions that are effectively "under development" at the same time we're trying to use them to test versions of OpenStack that are under development. We struggle just to build images of new releases reliably, much less keep jobs running on top of an ever-changing foundation of sand.
And before someone says "just use whatever images the distros are publishing," that's what we were doing years ago. By building our own images we can ensure things like consistent user IDs and permissions across a diverse set of distros, insert our caches to accelerate jobs, ensure minimal installed package sets have exactly what's needed to bootstrap jobs without anything preinstalled that may conflict with jobs, et cetera. We're already wrangling images for 17 different x86 platforms and 9 different ARM platforms. If we weren't able to enforce consistent entrypoints, access and content on these images, we couldn't have scaled to nearly that level.
On 8/22/23 19:26, Tony Breeds wrote:
Using testing would partially address some of this but it's still a pretty big ask.
The problem with testing is that it gets the latest interpreter version *AFTER* we solve all the bugs. What I'm asking for is that we have the needed infrastructure so we can check for incompatibilities *before* they actually affect us. This means testing with the non-default Python version when it's uploaded to Unstable.
Now, if we had a way to do this in Debian testing before the next interpreter version reaches it, that'd be even better, I guess.
Cheers,
Thomas Goirand (zigo)
On Wed, 2023-08-23 at 09:10 +0200, Thomas Goirand wrote:
On 8/22/23 19:26, Tony Breeds wrote:
Using testing would partially address some of this, but it's still a pretty big ask.
The problem with testing is that it gets the latest interpreter version *AFTER* we solve all the bugs. What I'm asking for is that we have the needed infrastructure so we can check for incompatibilities *before* they actually affect us. This means testing with the non-default Python version when it's uploaded to Unstable.
Right. Unfortunately, as you are aware, we use eventlet and other deps that need to have support for the latest Python before we can actually use it. I mention eventlet because it's the dep that is most sensitive to Python version changes, because of how it works, and the dep that would be the hardest for us to remove.
Now, if we had a way to do this in Debian testing before the next interpreter version reaches it, that'd be even better, I guess.
The only way we could even remotely do that would be via a periodic or other low-run-count job that we occasionally review. Realistically, this is not really something I think we can chase with our current upstream capacity.
Having said that, we did just recently merge the devstack venv support, and tools like https://github.com/pyenv/pyenv exist. So if adding Debian unstable or testing is not an option, and people were willing to develop a job (devstack or tox) that worked with pyenv on, say, Ubuntu 22.04, we could in theory test any Python version we wanted without the need to mirror a fast-moving repo to AFS.
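As a rough illustration of that idea (not an existing job; the pyenv commands are standard, but the version strings are placeholders), such a job would essentially do:

    # Install pyenv itself on a stock Ubuntu 22.04 node
    curl -fsSL https://pyenv.run | bash
    export PATH="$HOME/.pyenv/bin:$PATH"
    eval "$(pyenv init -)"

    # Build the interpreter under test from source, then run the unit tests
    pyenv install 3.12-dev
    pyenv local 3.12-dev
    python -m pip install tox
    tox -e py312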
Hi Sean,
Thanks for taking the time to answer me in this thread.
On 8/23/23 13:07, smooney@redhat.com wrote:
[...]
Right. Unfortunately, as you are aware, we use eventlet and other deps that need to have support for the latest Python before we can actually use it.
Right, and that's precisely the type of thing I would like to detect early with this type of setup.
I mention eventlet because it's the dep that is most sensitive to Python version changes, because of how it works, and the dep that would be the hardest for us to remove.
Yeah, and eventlet breaking at every minor Python release is also one of the reasons why I would love to see it go away from OpenStack... (let's not have this conversation again now, though...).
Realistically, this is not really something I think we can chase with our current upstream capacity.
Saying something like this is equivalent to saying: sorry Thomas, you'll continue to be on your own fixing Python upgrades. That doesn't work and doesn't scale; please find another answer...
Having said that, we did just recently merge the devstack venv support, and tools like https://github.com/pyenv/pyenv exist. So if adding Debian unstable or testing is not an option, and people were willing to develop a job (devstack or tox) that worked with pyenv on, say, Ubuntu 22.04, we could in theory test any Python version we wanted without the need to mirror a fast-moving repo to AFS.
Well, adding venv capacity doesn't mean you'll have the latest Python interpreter available. The only way to do that is to actually either use Unstable or build it yourself. The former is probably easier...
Cheers,
Thomas Goirand (zigo)
On Wed, 2023-08-23 at 15:56 +0200, Thomas Goirand wrote:
[...]
Well, adding venv capacity doesn't mean you'll have the latest Python interpreter available. The only way to do that is to actually either use Unstable or build it yourself. The former is probably easier...
That's what pyenv does: it will compile a release into a venv for you, which can then be used to run tox. Now that devstack supports installing in a global virtual env, we should be able to leverage that capability to test with other interpreters via pyenv, if that is something we need to do.
That would allow us to test upstream Python releases even before they are packaged in a distro. My concern is that we need to invest a lot in our CI stability, and we need to be careful with our capacity: both the capacity of the CI and of human attention span.
We have recently had a lot of issues with CI that fortunately have been improving in the last few days, but I'm not really sure how we would create and maintain these py-next jobs beyond what we have previously been doing.
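On the devstack side, a rough sketch of what Sean is pointing at (GLOBAL_VENV is the recently merged devstack setting; pairing it with a pyenv-built interpreter is an assumption here, not something devstack documents):

    [[local|localrc]]
    # Install all OpenStack services into one global virtualenv
    GLOBAL_VENV=True
    # Illustrative: select the interpreter version; a pyenv-built
    # python3.12 would need to be on PATH for this to resolve
    PYTHON3_VERSION=3.12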
---- On Tue, 22 Aug 2023 13:49:35 -0700 Jeremy Stanley wrote ---
[...]
I think this is almost the same model that CentOS Stream follows. It is not so stable, as its development goes on in parallel, and that is why we test it on a best-effort basis.
Can we do the same for the Debian experimental version, and just keep those images in our infra with the risk of instability? That instability risk should be fine, as we are just running some non-voting testing on them to test the next Python version.
-gmann
---- On Wed, 23 Aug 2023 04:07:36 -0700 smooney@redhat.com wrote ---
[...]
Having said that, we did just recently merge the devstack venv support, and tools like https://github.com/pyenv/pyenv exist. So if adding Debian unstable or testing is not an option, and people were willing to develop a job (devstack or tox) that worked with pyenv on, say, Ubuntu 22.04, we could in theory test any Python version we wanted without the need to mirror a fast-moving repo to AFS.
I think zigo's ask here is a valid one of upstream, and honestly speaking, I do not see many challenges in doing it. If we have the image/nodepool in infra (even an unstable image, like we have for CentOS Stream), then running the next-Python-version job as non-voting or periodic should be OK and should not cost us much. It even gives a win-win situation to upstream as well as to distro/package maintainers.
-gmann
I think the problem here isn't technical in nature. We're all technical folks; we can get Python running in CI, any version we want. I have no doubt about that. There's a question as to whether OpenDev has the capacity for this work, but that's not what I want to address here.
It's about prioritization and time management. Any change we make to increase the matrix -- especially increasing it to include software that is newer, with potentially breaking changes -- has a direct cost in terms of developer time to troubleshoot, fix, and wait while those fixes merge. This is the direct trade-off being made.
On top of that, we already run a set of periodic jobs on things like stable/ branches which don't get as much attention as they should.
All this to say: regardless of technical issues, I do not think we have the capacity to run pre-release Python versions, such as 3.12.
Thanks,
Jay Faulkner
Ironic PTL
TC Vice-Chair
On Wed, Aug 23, 2023 at 8:46 AM smooney@redhat.com wrote:
[...]
On Wed, Aug 23, 2023, at 10:33 AM, Ghanshyam Mann wrote:
[...]
I think zigo's ask here is a valid one of upstream, and honestly speaking, I do not see many challenges in doing it. If we have the image/nodepool in infra (even an unstable image, like we have for CentOS Stream), then running the next-Python-version job as non-voting or periodic should be OK and should not cost us much. It even gives a win-win situation to upstream as well as to distro/package maintainers.
The challenge is in adding and maintaining the image and the associated mirror content. Yes, if you ignore that, then this becomes simple: you just tell Zuul to run jobs on the appropriate nodeset, and when things break you can either debug them (ideal) or ignore them.
I think Sean's suggestion is a good one, since it can be PoC'd without much effort by pushing a change to devstack. Similarly, you could hack up unit test jobs to use pyenv and supply a specific newer Python on those platforms.
Once upon a time we had openSUSE Tumbleweed images. The idea behind them was that Tumbleweed is a bleeding-edge rolling-release distro, and any jobs running there would get new versions of all the things. Very few people took advantage, though, because things were regularly breaking. I would suggest starting with an existing stable base and then adding the specific components on top, as in Sean's pyenv suggestion. You could potentially also do something similar with container images. This should ideally avoid the problems we've run into trying to run more up-to-date systems in CI.
Something else we did long ago that I think was actually beneficial was to use the beginning of OpenStack release cycles to intentionally disrupt things that need updates. For example, linter rules would get updated globally, and everyone would need to adjust their linter rules within the projects, or adjust the code, to pass. OpenStack still isn't running Python 3.11 jobs as far as I can tell, despite it being available on multiple images today. Maybe OpenStack should be more aggressive at the beginning of cycles and explicitly introduce potentially problematic updates early, when the cost is lower, and then make things work. It doesn't make a whole lot of sense to me to worry about pre-release Python versions when we aren't even up to date with released versions. Solve that problem first, then continue to optimize for Python-next.
On 2023-08-23 10:28:48 -0700 (-0700), Ghanshyam Mann wrote: [...]
Can we do the same for the Debian experimental version, and just keep those images in our infra with the risk of instability? That instability risk should be fine, as we are just running some non-voting testing on them to test the next Python version.
[...]
Let's set aside for a moment that there is no "experimental" version of Debian (there is an experimental package suite, but it's intended for use mostly with the unstable and testing versions; it's not a complete distribution of Debian on its own). What we're lacking is someone to put in the time and effort to get minimal Debian unstable (or testing) images building reliably in diskimage-builder, added to our nodepool configuration, package mirroring in place (possibly cleaning up other unused distro versions to make sufficient room for that), and then adding jobs.
Unlike, say, Ubuntu LTS or Debian stable versions, unstable and testing are constantly changing, getting new versions of packages, and (in the case of unstable) sometimes entirely uninstallable due to transition-related package conflicts. Keeping updated images for that also means having someone who is going to spend some of their time keeping on top of the image build logs from Nodepool, and making the constant adjustments and fixes it needs so that we continue to have fresh images for that platform. If volunteers step forward for this sort of thing then we generally don't tell them to go away, but also if they disappear for an extended period of time we absolutely will delete and clean it all up.
So it's fine to talk about how "easy" this would be, but who's planning to do it?
On 8/23/23 20:14, Jeremy Stanley wrote:
[...]
I kind of like the idea in this thread of having a venv where we would run the latest RC version of the interpreter. That would be lighter than maintaining another image, no? At least this way, it gives upstream OpenStack the opportunity to be closer to the distros, whereas currently the project is lagging one year and one interpreter version behind...
Cheers,
Thomas Goirand (zigo)
While I do like the idea of testing more Python versions using pyenv, I don't share the idea that OpenStack as a project must match the base requirements shipped in any given distro, especially when talking about ones that are still in development. I don't think we have the capacity (and thus the alignment and will among projects) to support the "bleeding edge". In the end, you can have any Python version you need on Debian or any other distro with pyenv as well, so it should not be a problem or a blocker to use OpenStack on distros that do not pre-ship Python versions supported by OpenStack.
On Fri, Aug 25, 2023, 10:21 Thomas Goirand zigo@debian.org wrote:
[...]
On 2023-08-25 10:16:29 +0200 (+0200), Thomas Goirand wrote: [...]
I kind of like the idea in this thread of having a venv where we would run the latest RC version of the interpreter. That would be lighter than maintaining another image, no? At least this way, it gives upstream OpenStack the opportunity to be closer to the distros, whereas currently the project is lagging one year and one interpreter version behind...
The ensure-python role used by most of OpenStack's test jobs already supports installing Python in ways other than from distro packages; that just happens to be its default method. If you set the python_use_pyenv[*] flag, it will clone and install pyenv[**] from GitHub and proceed to build the requested Python interpreter/stdlib version.
The main downsides I see to not using prebuilt Python packages are that the affected jobs will end up building Python from source on every run, and that there's still no guarantee we have an available platform where the requested Python version can be built successfully.
[*] https://zuul-ci.org/docs/zuul-jobs/latest/python-roles.html#rolevar-ensure-p... [**] https://github.com/pyenv/pyenv
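To illustrate, wiring that flag into a unit test job could look like the sketch below; the job name is hypothetical, while python_version and python_use_pyenv are the ensure-python role variables referenced above:

    - job:
        name: openstack-tox-py312-pyenv   # hypothetical job name
        parent: openstack-tox
        description: Run py312 unit tests with a pyenv-built interpreter.
        vars:
          tox_envlist: py312
          python_version: "3.12"
          python_use_pyenv: true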
On Fri, 2023-08-25 at 13:03 +0000, Jeremy Stanley wrote:
[...]
The main downsides I see to not using prebuilt Python packages are that the affected jobs will end up building Python from source on every run, and that there's still no guarantee we have an available platform where the requested Python version can be built successfully.
I was kind of wondering if we could optimize the process by using DIB to bake the versions we want into the image, but it kind of comes down to a question of how often this would run; i.e. if it's a weekly periodic job, building in the job is probably OK; if it's per patch, probably not.
On 2023-08-25 17:13:27 +0100 (+0100), smooney@redhat.com wrote: [...]
I was kind of wondering if we could optimize the process by using DIB to bake the versions we want into the image, but it kind of comes down to a question of how often this would run; i.e. if it's a weekly periodic job, building in the job is probably OK; if it's per patch, probably not.
[...]
Potentially, yes; that's what the stow option is for:
https://zuul-ci.org/docs/zuul-jobs/latest/python-roles.html#rolevar-ensure-p...
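In job terms, that could look something like this sketch (again with a hypothetical job name; python_use_stow is the role variable behind the link above, and this assumes the stow tree for that version was pre-built into the image):

    - job:
        name: openstack-tox-py312-stow   # hypothetical job name
        parent: openstack-tox
        vars:
          tox_envlist: py312
          python_version: "3.12"
          python_use_stow: true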
---- On Mon, 21 Aug 2023 11:16:31 -0700 Ghanshyam Mann wrote ---
[...]
Hello Everyone,
Many projects still need to fix the py3.11 job[1]. I started fixing a few of them, so changes are up for review in those projects.
NOTE: The deadline to fix it is the 2023.2 release (Oct 6th); after that, this job will become voting on master (the 2024.1 dev cycle; it will remain non-voting on stable/2023.2) and will block the master gate.
[1] https://zuul.openstack.org/builds?job_name=openstack-tox-py311+&result=R...
-gmann
---- On Thu, 07 Sep 2023 09:03:52 -0700 Ghanshyam Mann wrote ---
[...]
Many projects still need to fix the py3.11 job[1]. I started fixing a few of them, so changes are up for review in those projects.
NOTE: The deadline to fix it is the 2023.2 release (Oct 6th); after that, this job will become voting on master (the 2024.1 dev cycle; it will remain non-voting on stable/2023.2) and will block the master gate.
[1] https://zuul.openstack.org/builds?job_name=openstack-tox-py311+&result=R...
Hello Everyone,
This is a gentle reminder to fix your project's py3.11 job if it is failing ^^
This job will become voting right after the 2023.2 release on October 4th.
-gmann
On Fri, 22 Sept 2023 at 21:39, Ghanshyam Mann gmann@ghanshyammann.com wrote:
[...]
Many projects initially fail on bindep trying to install mysql-client, which doesn't exist in bookworm. Fixing this was enough to make the py3.11 job successful in Blazar.
Are there templates to follow for these bindep requirements? I checked projects such as Cinder, Glance, Neutron and Nova: they are using similar package lists but with small differences.
And is it useful to keep testing with MySQL on Ubuntu, so we have coverage for both MySQL and MariaDB?
Hello all:
The errors in this list related to Neutron belong to failing patches. In all cases, the same errors are present on py310 and py38. I've pushed [1] and all UT jobs are passing.
Regards.
[1] https://review.opendev.org/c/openstack/neutron/+/896351
On Mon, Sep 25, 2023 at 9:44 AM Pierre Riteau pierre@stackhpc.com wrote:
[...]
On 2023-09-25 09:42:26 +0200 (+0200), Pierre Riteau wrote: [...]
Many projects initially fail on bindep trying to install mysql-client, which doesn't exist in bookworm. Fixing this was enough to make the py3.11 job successful in Blazar.
As you indicate, in that particular case it's nothing to do with the version of Python, but rather with changes in the distribution platforms you're testing on. You don't have to remove mysql-client from bindep.txt; for example, you can just specify that it should only be installed on Ubuntu and not Debian, and that mariadb-client should be installed only on Debian. Alternatively, you can switch to installing default-mysql-client on both Debian and Ubuntu, which is a metapackage depending on mariadb-client on Debian and mysql-client on Ubuntu.
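The metapackage alternative collapses to a single bindep.txt line, as in this sketch (whether it suits a project depends on what its tests actually shell out to):

    # One entry covering Debian (MariaDB) and Ubuntu (MySQL) via the
    # distros' default-mysql-client metapackage
    default-mysql-client [platform:dpkg]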
Are there templates to follow for these bindep requirements? I checked projects such as Cinder, Glance, Neutron and Nova: they are using similar package lists but with small differences.
Not really, no. Every project has different things it needs to install for testing purposes, and the requirement description language bindep uses is expressive enough that there are multiple ways to go about describing the relationships you want. The only real guidance that might be important is, where possible, to use exclusions for old platforms where newer package choices aren't available, and inclusions for old platforms on the old packages which aren't available on newer platforms, so that you avoid unnecessary churn on existing entries every time you want to start testing a newer release of some distro, and it's clearer what you can clean up.
A good example can be found in https://review.opendev.org/895699, which installs dnf on dpkg-based platforms except older ones, and installs yum-utils exclusively on those older platforms. In the future, as the old distributions are cleaned up from the bindep.txt, the result will be that dnf is installed on all dpkg-based platforms and yum-utils is no longer listed.
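In bindep terms, that pattern looks roughly like the following; the specific old platform named here is illustrative, not copied from the review:

    # New package choice, excluded on an old platform that lacks it
    dnf [platform:dpkg !platform:ubuntu-bionic]
    # Old package, included only on the old platform that still needs it
    yum-utils [platform:ubuntu-bionic]

Once the old platform is dropped, deleting the yum-utils line and the exclusion leaves a clean dnf entry, with no churn to the other entries along the way.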
And is it useful to keep testing with MySQL on Ubuntu, so we have coverage for both MySQL and MariaDB?
If projects are using the clients directly, or directly querying databases, then interface and syntax differences between them might matter. Ideally, with everything going through an ORM, you shouldn't have to care which one is used as long as their differences are abstracted away by the ORM (with the expectation that the maintainers of that ORM are thoroughly testing with both supported backends anyway).
---- On Mon, 25 Sep 2023 00:42:26 -0700 Pierre Riteau wrote ---
[...]
Many projects initially fail on bindep trying to install mysql-client, which doesn't exist in bookworm. Fixing this was enough to make the py3.11 job successful in Blazar. Are there templates to follow for these bindep requirements? I checked projects such as Cinder, Glance, Neutron and Nova: they are using similar package lists but with small differences.
Most of the changes are to not install mysql-client on Debian and to add the MariaDB client instead, but there are some code failures too; for example, Tacker: https://review.opendev.org/c/openstack/tacker/+/893867
You can see the required changes under the Gerrit topic below and get an idea of all the different changes applied to projects.
- https://review.opendev.org/q/topic:py311
-gmann
Hi,
On 2023-08-22 18:05, Thomas Goirand wrote:
[...]
I'd really love it if we could find a solution so that I stop being the only person getting the shit in this world. :)
For what it's worth, I try to help test py3.XX releases when the release candidates come out. In the past I was able to find a regression in the sqlite module[1] and to propose some of the OpenStack py311 patches[2].
Regarding Python 3.12, I have been able to fix one of our dependencies[3] and request a new package for another[4].
I run my tests by running "tox -repy312" on a bunch of OpenStack projects using a 3.12rc2 container image. I often have to rebuild a few wheels for dependencies that only work with Python 3.12 on their master/main branch, so I make sure to use those in my testing process. I provide the image[5] for everyone to use, and I intend to keep it updated for the next release candidates of Python 3.13+.
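A sketch of that workflow (the image tag is a placeholder; whatever the py3-next-openstack repo[5] publishes would go there, and nova is just an example project):

    IMAGE=py3-next-openstack:3.12rc2    # placeholder tag, see [5]
    git clone https://opendev.org/openstack/nova && cd nova
    docker run --rm -v "$PWD:/src" -w /src "$IMAGE" tox -repy312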
Regards, Cyril
[1] https://github.com/python/cpython/issues/95132
[2] https://review.opendev.org/q/topic:py311
[3] https://github.com/ncclient/ncclient/pull/567
[4] https://github.com/sumerc/yappi/pull/148
[5] https://github.com/CyrilRoelandteNovance/py3-next-openstack
participants (12)
- Alex Kavanagh
- Clark Boylan
- Cyril Roelandt
- Dmitriy Rabotyagov
- Ghanshyam Mann
- Jay Faulkner
- Jeremy Stanley
- Pierre Riteau
- Rodolfo Alonso Hernandez
- smooney@redhat.com
- Thomas Goirand
- Tony Breeds