[openstack-dev] [requirements][infra] Maintaining constraints for several python versions
cboylan at sapwetik.org
Thu Jul 12 13:37:52 UTC 2018
On Wed, Jul 11, 2018, at 9:34 PM, Tony Breeds wrote:
> Hi Folks,
> We have a bit of a problem in openstack/requirements and I'd like to
> chat about it.
> Currently when we generate constraints we create a venv for each
> (system) python supplied on the command line, install all of
> global-requirements into that venv and capture the pip freeze.
> Where this falls down is if we want to generate a freeze for python 3.4
> and 3.5 we need an image that has both of those. We cheated and just
> 'clone' them so if python3 is 3.4 we copy the results to 3.5 and vice
> versa. This kinda worked for a while but it has drawbacks.
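The per-interpreter freeze described above can be sketched roughly as follows. The interpreter list and output file names here are illustrative; the real logic lives in the openstack/requirements generate-constraints tooling.

```shell
# One venv per system python, install global-requirements, capture the
# freeze. Pythons the image lacks are skipped -- which is exactly the
# gap that forces the 'clone' workaround described above.
for PY in python3.5 python3.6 python3.7; do
    command -v "$PY" >/dev/null || continue
    VENV="$(mktemp -d)/venv"
    "$PY" -m venv "$VENV"
    "$VENV/bin/pip" install --upgrade pip
    if [ -f global-requirements.txt ]; then
        "$VENV/bin/pip" install -r global-requirements.txt
    fi
    "$VENV/bin/pip" freeze > "upper-constraints-$PY.txt"
done
```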
> I can see a few options:
> 1. Build pythons from source and use that to construct the venv
> [please no]
Fungi mentions that 3.3 and 3.4 don't build easily on modern Linux distros. However, 3.3 and 3.4 are also unsupported by upstream Python at this point; maybe we can ignore them and focus on 3.5 and forward? We don't build new freeze lists for the stable branches, so this is just a concern for master, right?
> 2. Generate the constraints in an F28 image. My F28 has ample python
> - /usr/bin/python2.6
> - /usr/bin/python2.7
> - /usr/bin/python3.3
> - /usr/bin/python3.4
> - /usr/bin/python3.5
> - /usr/bin/python3.6
> - /usr/bin/python3.7
> I don't know how valid this still is but in the past fedora images
> have been seen as unstable and hard to keep current. If that isn't
> still the feeling then we could go down this path. Currently there are a
> few minor problems with bindep.txt on fedora and generate-constraints
> doesn't work with py3 but these are pretty minor really.
I think most of the problems with Fedora stability are around bringing up a new Fedora every 6 months or so. They tend to change sufficiently within that time period to make this a fairly involved exercise. But once working they work for the ~13 months of support they offer. I know Paul Belanger would like to iterate more quickly and just keep the most recent Fedora available (rather than ~2).
> 3. Use docker images for python and generate the constraints with
> them. I've hacked up something we could use as a base for that in:
> There are lots of open questions:
> - How do we make this nodepool/cloud provider friendly ?
> * Currently the containers just talk to the main debian mirrors.
> Do we have debian packages? If so we could just do sed magic.
http://$MIRROR/debian (http://mirror.dfw.rax.openstack.org/debian for example) should be a working amd64 debian package mirror.
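The "sed magic" could be as small as rewriting the container's apt sources before installing anything. A minimal sketch, assuming a Debian-based python image; MIRROR is a placeholder that a real job would get from the node's site variables, and the rewrite is demonstrated on a copy of a sources.list-style file rather than /etc/apt/sources.list itself:

```shell
# Point apt at the per-provider Debian mirror instead of deb.debian.org.
MIRROR=mirror.dfw.rax.openstack.org   # example mirror name from above
SOURCES=$(mktemp)
echo "deb http://deb.debian.org/debian stable main" > "$SOURCES"
sed -i "s|deb.debian.org|$MIRROR|g" "$SOURCES"
cat "$SOURCES"
```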
> - Do/Can we run a registry per provider?
We do not, but we do have a caching dockerhub registry proxy in each region/provider. http://$MIRROR:8081/registry-1.docker if using older docker and http://$MIRROR:8082 for current docker. This was a compromise between caching the Internet and reliability.
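For current docker, consuming that proxy is just a registry-mirrors entry in the daemon config. A hypothetical sketch (written to a temp file here for illustration; on a real node it would be /etc/docker/daemon.json, with MIRROR again coming from site variables):

```shell
# Point dockerd at the per-region caching proxy on port 8082.
MIRROR=mirror.dfw.rax.openstack.org   # example mirror name
CONF=$(mktemp)
cat > "$CONF" <<EOF
{
  "registry-mirrors": ["http://$MIRROR:8082"]
}
EOF
cat "$CONF"
```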
> - Can we generate and cache these images and only run pip install -U
> g-r to speed up the build?
Between cached upstream python docker images and prebuilt wheels mirrored in every cloud provider region I wonder if this will save a significant amount of time? May be worth starting without this and working from there if it remains slow.
> - Are we okay with using docker this way?
Should be fine, particularly if we are consuming the official Python images.
> I like #2 the most but I wanted to seek wider feedback.
I think each proposed option should work as long as we understand the limitations each presents. #2 should work fine if we have individuals interested and able to spin up new Fedora images and migrate jobs to that image after releases happen.