[infra][tc] Container images in openstack/ on Docker Hub
Hi,

As part of the recent infrastructure work described in http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002026.h... we now have the ability to fairly easily support uploading of container images to the "openstack/" namespace on Docker Hub. The Infrastructure team does have an account on Docker Hub with ownership rights to this space.

It is now fairly simple for us to allow any OpenStack project to upload to openstack/$short_name. As a (perhaps unlikely, but simple) example, Nova could upload images to "openstack/nova", including suffixed images, such as "openstack/nova-scheduler". The system that would enable this is described in this proposed change: https://review.openstack.org/632818

I believe it's within the TC's purview to decide whether this should happen and, if so, what policies should govern it (i.e., which projects are entitled to upload to openstack/).

It's possible that the status quo, where deployment projects upload to their own namespaces (e.g., loci/) while openstack/ remains empty, is desirable. However, since we recently gained the technical ability to handle this, I thought it worth bringing up.

Personally, I don't presently advocate one way or the other.

-Jim
On 2019-01-23 15:46:01 -0800 (-0800), James E. Blair wrote: [...]
It's possible that the status quo where deployment projects upload to their own namespaces (e.g., loci/) while openstack/ remains empty is desirable. However, since we recently gained the technical ability to handle this, I thought it worth bringing up.
Personally, I don't presently advocate one way or the other.
If nothing else, it's a great opportunity to revisit our decision in https://governance.openstack.org/tc/resolutions/20170530-binary-artifacts.ht... and make sure it's still relevant for the present situation. -- Jeremy Stanley
James E. Blair wrote:
As part of the recent infrastructure work described in http://lists.openstack.org/pipermail/openstack-discuss/2019-January/002026.h... we now have the ability to fairly easily support uploading of container images to the "openstack/" namespace on Docker Hub. The Infrastructure team does have an account on Docker Hub with ownership rights to this space.
It is now fairly simple for us to allow any OpenStack project to upload to openstack/$short_name. As a (perhaps unlikely, but simple) example, Nova could upload images to "openstack/nova", including suffixed images, such as "openstack/nova-scheduler". [...]
I believe it's within the TC's purview to decide whether this should happen, and if so, what policies should govern it (i.e., what projects are entitled to upload to openstack/).
It's possible that the status quo where deployment projects upload to their own namespaces (e.g., loci/) while openstack/ remains empty is desirable. However, since we recently gained the technical ability to handle this, I thought it worth bringing up.
Thanks for bringing this up. Each solution has its benefits, and I don't have a super-strong opinion on it.

I'm leaning toward the status quo: unless we consistently publish containers for most (or even all) deliverables, we should keep them in separate namespaces.

A centralized "openstack" namespace conveys some official-ness and completeness -- it would make sense if we published all deliverables as containers every cycle as part of the release management work, for example. If it only contains a few select containers published at different times under different rules, it's likely to be more confusing than helpful...

-- Thierry Carrez (ttx)
On Mon, 28 Jan 2019, Thierry Carrez wrote:
I'm leaning toward status quo: unless we consistently publish containers for most (or even all) deliverables, we should keep them in separate namespaces.
That makes a lot of sense, but another way to look at it is: if we start publishing some containers into a consistent namespace, it might encourage projects to start owning "blessed" containers of themselves, which is probably a good thing. And having a location with vacancies might encourage people to fill it, whereas otherwise the incentive is weak.
A centralized "openstack" namespace conveys some official-ness and completeness -- it would make sense if we published all deliverables as containers every cycle as part of the release management work, for example. If it only contains a few select containers published at different times under different rules, it's likely to be more confusing than helpful...
The current container situation is already pretty confusing... -- Chris Dent ٩◔̯◔۶ https://anticdent.org/ freenode: cdent tw: @anticdent
On Mon, 2019-01-28 at 13:06 +0000, Chris Dent wrote:
On Mon, 28 Jan 2019, Thierry Carrez wrote:
I'm leaning toward status quo: unless we consistently publish containers for most (or even all) deliverables, we should keep them in separate namespaces.
That makes a lot of sense but another way to look at it is:
If we start publishing some containers into a consistent namespace it might encourage projects to start owning "blessed" containers of themselves, which is probably a good thing.

Well, that raises the question of what type of container something like openstack/nova should be:

- a kolla container
- a loci container
- lxd containers
- a container built with pbr, the way zuul is published
- something else determined by the project?

Having yet another way to build OpenStack containers is probably not a good thing. Even if a common way of building the container was agreed on, there is also the question of what base OS it is derived from. Finding a vendor-neutral answer to the above that does not "play favorites" with projects, distros or technologies will be challenging.
And having a location with vacancies might encourage people to fill it, whereas otherwise the incentive is weak.
There is already a pretty complete set of official containers from the Kolla project on Docker Hub at https://hub.docker.com/u/kolla/ and, less so, from Loci at https://hub.docker.com/u/loci and https://hub.docker.com/u/gantry
A centralized "openstack" namespace conveys some official-ness and completeness -- it would make sense if we published all deliverables as containers every cycle as part of the release management work, for example. If it only contains a few select containers published at different times under different rules, it's likely to be more confusing than helpful...
The current container situation is already pretty confusing...
On Mon, Jan 28, 2019 at 8:41 AM Sean Mooney <smooney@redhat.com> wrote:
On Mon, 2019-01-28 at 13:06 +0000, Chris Dent wrote:
On Mon, 28 Jan 2019, Thierry Carrez wrote:
I'm leaning toward status quo: unless we consistently publish containers for most (or even all) deliverables, we should keep them in separate namespaces.
That makes a lot of sense but another way to look at it is:
If we start publishing some containers into a consistent namespace it might encourage projects to start owning "blessed" containers of themselves, which is probably a good thing.

Well, that raises the question of what type of container something like openstack/nova should be:

- a kolla container
- a loci container
- lxd containers
- a container built with pbr, the way zuul is published
- something else determined by the project?

Having yet another way to build OpenStack containers is probably not a good thing.

Even if a common way of building the container was agreed on, there is also the question of what base OS it is derived from.

Finding a vendor-neutral answer to the above that does not "play favorites" with projects, distros or technologies will be challenging.
And having a location with vacancies might encourage people to fill it, whereas otherwise the incentive is weak.
There is already a pretty complete set of official containers from the Kolla project on Docker Hub at https://hub.docker.com/u/kolla/ and, less so, from Loci at https://hub.docker.com/u/loci and https://hub.docker.com/u/gantry
A centralized "openstack" namespace conveys some official-ness and completeness -- it would make sense if we published all deliverables as containers every cycle as part of the release management work, for example. If it only contains a few select containers published at different times under different rules, it's likely to be more confusing than helpful...
The current container situation is already pretty confusing...
I think we should all agree on a certain set of ways that we publish our Docker images, in the same sense that we have one way of publishing Python packages (i.e., for the most part using pbr, etc). I know the Zuul team has done work around pbrx, and we also have a lot of domain knowledge from the Kolla and LOCI teams. I'm sure that by working together, we can come up with a well thought-out process for official image deliverables.

I would also be in favor of basing it on top of a simple Python base image (which I believe comes through Debian); however, the story of delivering something that includes binaries becomes interesting.

Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.

-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed? Best, -jay
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build. I have to admit I'm not super familiar with LOCI, but as far as I know, that's indeed what it does. -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.

Alternately, you don't even need to build locally. You can do:

docker build https://git.openstack.org/openstack/loci.git \
    --build-arg PROJECT=keystone \
    --tag keystone:ubuntu

IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.

That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).

Best, -jay
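[Editor's note] The builder pattern described here -- wheels compiled in a throwaway stage, then installed into a slim runtime image -- can be sketched as a multi-stage Dockerfile. This is a hedged illustration of the general technique, not LOCI's actual Dockerfile; the base images and the keystone example are assumptions:

```shell
# Write a sketch of the wheel-builder pattern: compile wheels in a
# build stage with a full toolchain, then install them into a slim
# runtime image that never sees a compiler. Illustrative only.
cat > Dockerfile.sketch <<'EOF'
# --- build stage: toolchain + wheel building ---
FROM python:3.7 AS builder
RUN pip wheel --wheel-dir /wheels keystone

# --- runtime stage: only the built wheels are carried over ---
FROM python:3.7-slim
COPY --from=builder /wheels /wheels
RUN python -m venv /var/lib/openstack \
 && /var/lib/openstack/bin/pip install --no-index \
      --find-links /wheels keystone
EOF
echo "wrote Dockerfile.sketch"
```

Building such a file is then a plain `docker build -f Dockerfile.sketch .`; the runtime image stays small because the toolchain lives only in the discarded builder stage.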
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=keystone \ --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this. Thanks Jay.
Best, -jay
-- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser@vexxhost.com W. http://vexxhost.com
On 01/28/2019 11:00 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=keystone \ --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this.
Thanks Jay.
No problem. Also a good thing to keep in mind is that kolla-ansible is able to deploy LOCI images, AFAIK, instead of the "normal" Kolla images. I have not tried this myself, however, so perhaps someone with experience in this might chime in. Best, -jay
On 01/28/2019 11:00 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps we should come up with an initial step of providing a common way of building images (so a user can clone a repo and do 'docker build .'), which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=keystone \ --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this.
Thanks Jay.
No problem. Also a good thing to keep in mind is that kolla-ansible is able to deploy LOCI images, AFAIK, instead of the "normal" Kolla images. I have not tried this myself, however, so perhaps someone with experience in this might chime in.
On Mon, 2019-01-28 at 11:18 -0500, Jay Pipes wrote:

The Loci images would have to conform to the Kolla image API, which requires a few files like kolla_start to exist, but in principle it could work if that requirement was fulfilled.
Best, -jay
On Mon, 2019-01-28 at 11:18 -0500, Jay Pipes wrote:
On 01/28/2019 11:00 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote: > Perhaps we should come up with an initial step of providing > a common way of building images (so a user can clone a repo and do > 'docker build .'), which will eliminate the obligation of having to > deal with binaries, and then afterwards reconsider the ideal way of > shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=keystone \ --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this.
Thanks Jay.
No problem. Also a good thing to keep in mind is that kolla-ansible is able to deploy LOCI images, AFAIK, instead of the "normal" Kolla images. I have not tried this myself, however, so perhaps someone with experience in this might chime in.
The Loci images would have to conform to the Kolla image API, which requires a few files like kolla_start to exist, but in principle it could work if that requirement was fulfilled.
On Mon, 2019-01-28 at 16:31 +0000, Sean Mooney wrote:

This is the Kolla image API for reference: https://docs.openstack.org/kolla/latest/admin/kolla_api.html https://github.com/openstack/kolla/blob/master/doc/source/admin/kolla_api.rs... All Kolla images share that external-facing API, so if you use Loci to build an image and then inject the required API shim as a layer, it would work.

You can also use the image manually the same way, by defining the relevant environment variables or mounting configs:

docker run -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    -e KOLLA_CONFIG_FILE=/config.json \
    -v /path/to/config.json:/config.json kolla-image

Of course you can bypass it and execute commands directly in the container too, e.g. just start nova-compute.

The point was to define a common way to inject configuration, including what command to run, externally after the image was built, so that the images could be reused by different deployment tools like kolla-k8s, TripleO, or just a bunch of bash commands.

The workflow is the same: prepare a directory with a bunch of config files for the service, spawn the container with that directory bind-mounted into the container, and set an env var to point at the Kolla config.json that specifies where the config should be copied, with what ownership/permissions, and what command to run.

I'm not sure if this is a good or a bad thing, but any tool that supports the Kolla image API should be able to use Loci-built images if those images support it too.
Best, -jay
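[Editor's note] The config.json contract from the Kolla image API mentioned above (what command to run, and where to copy config files with what ownership and permissions) looks roughly like this. The field names follow the Kolla image API docs linked in the thread; the nova-specific paths and ownership values are illustrative assumptions:

```shell
# Write an illustrative Kolla-API config.json for a nova-compute
# container. Field names follow the Kolla image API; the specific
# paths, owner, and command are example assumptions.
cat > config.json <<'EOF'
{
    "command": "nova-compute",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/nova.conf",
            "dest": "/etc/nova/nova.conf",
            "owner": "nova",
            "perm": "0600"
        }
    ]
}
EOF
# The container would then be started as in the thread:
#   docker run -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
#     -e KOLLA_CONFIG_FILE=/config.json \
#     -v $PWD/config.json:/config.json kolla-image
python3 -m json.tool config.json > /dev/null && echo "valid JSON"
```

At container start, the Kolla entrypoint reads this file, copies the configs into place, and execs the named command; any tool that prepares such a file can drive a Kolla-API-conformant image.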
By the way, in case it was not clear, I am actually in favor of having vendor-independent containers for OpenStack.

I would recommend basing such a container on the official python:3-alpine image, as it is only 30 MB and has everything we should need to just pip install the project. It has Python 3.7.2 currently, but the 3-alpine tag tracks both the latest release of Alpine and the latest release of Python that Alpine supports. In some rare cases we might need to also install bindeps, but I would hope that between bindep and pip we could build small images from source fairly simply and leave the orchestration of those images to the end user.

As I said before, if we choose to go down this route, I would strongly encourage not packaging any of our third-party dependencies like libvirt, MySQL, RabbitMQ or OVS, and deploying all service APIs that can be deployed under uWSGI with it, instead of Apache, again to keep the images as small as possible. That said, Loci and Kolla both do reasonably good jobs at this already, so if we stay with the status quo then I think that is fine too.

Perhaps this would be a good topic for the forum/summit/PTG? I would see this kind of like a community goal if it was something we chose to do, so it would be good to get feedback/input from those who might not have engaged on the thread so far. There is also the TC question from a policy perspective, ignoring the technical aspects above.

On Mon, 2019-01-28 at 16:52 +0000, Sean Mooney wrote:
On Mon, 2019-01-28 at 16:31 +0000, Sean Mooney wrote:
On Mon, 2019-01-28 at 11:18 -0500, Jay Pipes wrote:
On 01/28/2019 11:00 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:58 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:43 AM, Mohammed Naser wrote:
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote: > > On 01/28/2019 10:24 AM, Mohammed Naser wrote: > > Perhaps we should come up with an initial step of providing > > a common way of building images (so a user can clone a repo and do > > 'docker build .'), which will eliminate the obligation of having to > > deal with binaries, and then afterwards reconsider the ideal way of > > shipping those out. > > Isn't that precisely what LOCI offers, Mohammed? > > Best, > -jay >
I haven't studied LOCI much; however, I think it would be good to look into bringing that approach in-repo rather than out-of-repo, so a user can simply git clone and docker build.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
Yes, that's what LOCI can do, kinda. :) Technically there's some Makefile foo that iterates over projects to build images for, but it's essentially what it does.
Alternately, you don't even need to build locally. You can do:
docker build https://git.openstack.org/openstack/loci.git \ --build-arg PROJECT=keystone \ --tag keystone:ubuntu
IMHO, the real innovation that LOCI brings is the way that it builds wheel packages into an intermediary docker build container and then installs the service-specific Python code into a virtualenv inside the target project docker container after injecting the built wheels.
That, and LOCI made a good (IMHO) decision to just focus on building the images and not deploying those images (using Ansible, Puppet, Chef, k8s, whatever). They kept the deployment concerns separate, which is a great decision since deployment tools are a complete dumpster fire (all of them).
Thanks for that, I didn't know about this. I'll do some more reading about LOCI and how it goes about doing this.
Thanks Jay.
No problem. Also a good thing to keep in mind is that kolla-ansible is able to deploy LOCI images, AFAIK, instead of the "normal" Kolla images. I have not tried this myself, however, so perhaps someone with experience in this might chime in.
The Loci images would have to conform to the Kolla image API, which requires a few files like kolla_start to exist, but in principle it could work if that requirement was fulfilled.
This is the Kolla image API for reference: https://docs.openstack.org/kolla/latest/admin/kolla_api.html https://github.com/openstack/kolla/blob/master/doc/source/admin/kolla_api.rs... All Kolla images share that external-facing API, so if you use Loci to build an image and then inject the required API shim as a layer, it would work.

You can also use the image manually the same way, by defining the relevant environment variables or mounting configs:

docker run -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
    -e KOLLA_CONFIG_FILE=/config.json \
    -v /path/to/config.json:/config.json kolla-image

Of course you can bypass it and execute commands directly in the container too, e.g. just start nova-compute.

The point was to define a common way to inject configuration, including what command to run, externally after the image was built, so that the images could be reused by different deployment tools like kolla-k8s, TripleO, or just a bunch of bash commands.

The workflow is the same: prepare a directory with a bunch of config files for the service, spawn the container with that directory bind-mounted into the container, and set an env var to point at the Kolla config.json that specifies where the config should be copied, with what ownership/permissions, and what command to run.

I'm not sure if this is a good or a bad thing, but any tool that supports the Kolla image API should be able to use Loci-built images if those images support it too.
Best, -jay
On 2019-01-30 13:57:03 +0000 (+0000), Sean Mooney wrote: [...]
I would recommend basing such a container on the official python:3-alpine image, as it is only 30 MB and has everything we should need to just pip install the project. It has Python 3.7.2 currently, but the 3-alpine tag tracks both the latest release of Alpine and the latest release of Python that Alpine supports.
In a twist of irony, Python manylinux1 wheels assume glibc, and so any with C extensions are unusable with Alpine's musl. As a result, we'll likely need to cross-compile any of our non-pure-Python dependencies from sdist/source with an appropriate toolchain and inject them into the image.
In some rare cases we might need to also install bindeps, but I would hope that between bindep and pip we could build small images from source fairly simply and leave the orchestration of those images to the end user. [...]
The bindep tool does at least have support for Alpine now, so as long as there are packages available for our system dependencies that should hopefully be a viable option. -- Jeremy Stanley
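[Editor's note] For readers unfamiliar with bindep: it reads a bindep.txt file that tags each system dependency with platform profiles (e.g. platform:dpkg for Debian/Ubuntu, platform:rpm for Red Hat family, platform:apk for Alpine, per Jeremy's note). A sketch of such a file follows; the specific package names are illustrative assumptions, not a vetted dependency list:

```shell
# Write a sketch of a bindep.txt mapping the same logical dependencies
# onto dpkg, rpm, and Alpine's apk. Package names are illustrative.
cat > bindep.txt <<'EOF'
libffi-dev [platform:dpkg platform:apk]
libffi-devel [platform:rpm]
libssl-dev [platform:dpkg]
openssl-dev [platform:apk]
openssl-devel [platform:rpm]
EOF
# Inside an Alpine-based image one would then install the missing
# packages with something like:
#   apk add $(bindep -b)
echo "wrote bindep.txt"
```

The `-b` flag asks bindep for a brief list of missing packages for the detected platform, which is what makes the same bindep.txt reusable across base images.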
I want to clear up a few things about Loci images.

To start with, I would not be comfortable publishing Loci images to the OpenStack namespace in Docker Hub because currently they have no functional testing. In several instances over the past couple of months we've sent up patches to fix images that just didn't work because of dependency issues. We're working on a way to do functional testing, and once we're gating with functional testing on master and stable branches we can revisit the issue. Still, we assume that deployment tooling will want to modify images anyway, and specifically designed the build system to accommodate injecting different binary and Python dependencies.

Also, Loci does not provide its own Makefile for building images. The Dockerfile and installation scripts use environment variables to control the entire build process, which makes it very easy to use tools like Make or Ansible to build the images. Supporting multiple base operating systems is trivial with Loci and Docker image tagging.

If we do push images to some central location, as a community we should think about adopting a common tagging strategy for consistency across all projects. For example, in my own little deployments I use a naming scheme that follows this pattern:

loci-<project>:<release>-<base>

So Nova from master on Leap15 would be tagged as:

loci-nova:master-leap15

We should be listening to demand for such images, but for now I encourage people interested in Loci to build their own to suit their particular needs.

-Chris
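[Editor's note] The tagging pattern above is mechanical enough to express as a tiny helper; the function name here is made up for illustration:

```shell
# Compose an image tag following the loci-<project>:<release>-<base>
# pattern from the message above. The helper name is invented here.
loci_tag() {
    local project="$1" release="$2" base="$3"
    printf 'loci-%s:%s-%s\n' "$project" "$release" "$base"
}

loci_tag nova master leap15      # -> loci-nova:master-leap15
loci_tag keystone stein ubuntu   # -> loci-keystone:stein-ubuntu
```

Encoding project, release, and base OS in every tag is what lets one registry namespace host many OS variants without ambiguity.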
On Jan 30, 2019, at 6:35 AM, Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2019-01-30 13:57:03 +0000 (+0000), Sean Mooney wrote: [...]
i would recommend basing such a container on the offical python:3-alpine image as it is only 30mb and has everything we should need to just pip install the project. it has python 3.7.2 currently but the 3-alpine tag tracks both the latest release of alpine and the latest release of python alpine supports.
In a twist of irony, Python manylinux1 wheels assume glibc and so any with C extensions are unusable with Alpine's musl. As a result, we'll likely need to cross-compile any of our non-pure-Python dependencies from sdist/source with an appropriate toolchain and inject them into image.
in some rare cases we might need to also install bindeps but i would hope that between bindeps and pip we could build small images for source fairly simpely and leave the orchestration of those images to the enduser. [...]
The bindep tool does at least have support for Alpine now, so as long as there are packages available for our system dependencies that should hopefully be a viable option. -- Jeremy Stanley
I performed a back to back upgrade of one of my kubernetes clusters across 2 separate major versions yesterday (1.11.x -> 1.13.x) in under 30 minutes. The prep time for it was about the same. I'm not writing this to sing k8s's praises and slam on OpenStack. I'm trying to help ensure folks have an understanding of OpenStacks continual situation here.... What OpenStack asks of Operators is a huge amount of work while similar software does not, while achieving very similar things. While its good that your not pushing folks to use untested stuff, that should be top priority to fix I think. One of the big reasons the k8s upgrade was so easy was not needing to rebuild the universe. The software deployed as part of the upgrade was 1, built upstream, 2, tested upstream, 3, upgrade tested upstream. What I deployed was completely binary identical, all the way down to libc, to what they released. This ensured to a high level of reliability that upgrades would be smooth. I pushed for a while to get all of that workflow in kolla/kolla-kubernetes and infra just wasn't ready at the time. they are now though, which is fantastic. Please seize this opportunity cause it really has the potential to help OpenStack's Operators in a big way. There are a few other reasons the upgrade was so easy/quick. Those should be tackled by OpenStack too. but that's for another thread... Thanks, Kevin ________________________________________ From: Chris Hoge [chris@openstack.org] Sent: Wednesday, January 30, 2019 7:56 AM To: openstack-discuss@lists.openstack.org Subject: Re: [infra][tc] Container images in openstack/ on Docker Hub I want to clear up a few things about Loci images. To start with, I would not be comfortable publishing Loci images to the OpenStack namespace in Docker Hub because currently they have no functional testing. In several instances over the past couple of months we've sent up patches to fix images that just didn't work because of dependency issues. 
We're working on a way to do functional testing, and once we're gating with functional testing on master and stable branches we can revisit the issue. Still, we assume that deployment tooling will want to modify images anyway, and specifically designed the build system to accomodate injecting different binary and python dependencies. Also, Loci does not provide it's own Makefile for building images. The Dockerfile and installation scripts use environment variables to control the entire build process, which makes is very easy to use tools like Make or Ansible to build the images. Supporting multiple base operating systems is trivial with Loci and Docker image tagging. If we do push images to some central location, as a community we should think about adopting a common tagging strategy for consitency across all projects. For example, in my own little deployments I use a naming scheme that follows this pattern: loci-<project>:<release>-<base> So Nova from master on Leap15 would be tagged as: loci-nova:master-leap15 We should be listening to demand for such images, but for now I encourage people interested in Loci to build their own to suit their particular needs. -Chris
On Jan 30, 2019, at 6:35 AM, Jeremy Stanley <fungi@yuggoth.org> wrote:
On 2019-01-30 13:57:03 +0000 (+0000), Sean Mooney wrote: [...]
i would recommend basing such a container on the offical python:3-alpine image as it is only 30mb and has everything we should need to just pip install the project. it has python 3.7.2 currently but the 3-alpine tag tracks both the latest release of alpine and the latest release of python alpine supports.
In a twist of irony, Python manylinux1 wheels assume glibc and so any with C extensions are unusable with Alpine's musl. As a result, we'll likely need to cross-compile any of our non-pure-Python dependencies from sdist/source with an appropriate toolchain and inject them into image.
in some rare cases we might need to also install bindeps but i would hope that between bindeps and pip we could build small images for source fairly simpely and leave the orchestration of those images to the enduser. [...]
The bindep tool does at least have support for Alpine now, so as long as there are packages available for our system dependencies that should hopefully be a viable option. -- Jeremy Stanley
On 01/30/2019 10:56 AM, Chris Hoge wrote:
Also, Loci does not provide it's own Makefile for building images. The Dockerfile and installation scripts use environment variables to control the entire build process, which makes is very easy to use tools like Make or Ansible to build the images.
Apologies. I may have been remembering a script in openstack-helm-infra or elsewhere (maybe the Zuul gate jobs? [1]) that looped through projects, setting the $PROJECT environs variable, and executing docker build. Sorry for the bad info. Best, -jay [1] https://github.com/openstack/loci/tree/master/.zuul.d
On Mon, Jan 28, 2019 at 10:41 AM Jay Pipes <jaypipes@gmail.com> wrote:
On 01/28/2019 10:24 AM, Mohammed Naser wrote:
Perhaps, we should come up with the first initial step of providing a common way of building images (so a use can clone a repo and do 'docker build .') which will eliminate the obligation of having to deal with binaries, and then afterwards reconsider the ideal way of shipping those out.
Isn't that precisely what LOCI offers, Mohammed?
Best, -jay
On Mon, 2019-01-28 at 10:43 -0500, Mohammed Naser wrote: the problem with that appraoch is we have is we have to bless a specific base image which effectivly mean that it is unlikely that this would form the basis of a vendor product. if that is not the goal. e.g. support a common set of images that can be used in vendor distrobutions and the target is instead developers, testing and role your own deployemnts that dont use a downstream vendor distobution that is fine. if we did want to support vendor distobutiosn we would likely have to do one of the following. dynamicaly generate the docker file form a template like kolla does so we can set the base image. that could be as simple as "tox -e continer-build -- base_image=ubuntu:latest"
I haven't studied LOCI as much however I think that it would be good to perhaps look into bringing that approach in-repo rather than out-of-repo so a user can simply git clone, docker build . well im not sure if you have noticed but alot of people cant even agree on "docker build" lately. personcally i like the idea of tiny base image with python-3, pip and a compiler. and have a work worflow similar to "tox -e continer-build" which would create the root files system image by then just pip installing the current project.
if that just does docker build fine by me. i personally dont like the push to abandon all thinks docker inc and create new tools for exactly the same thing. the one think i would urge however regradless of what we decided to do or not do is, lets not package our depencies under openstack/. e.g. we should not aim to have openstack/mysql or openstack/rabbitmq but i would also argue we should not aim to have a nova-libvirt contianer or neutron ovs contianer either. if the goal was to have in repo definitons of the services own blessed contianer there is no repo that these dependencies would naturaly fit with and im sure the mysql comunity can proably do a better job of contaierising it then us.
I have to admit, I'm not super familiar with LOCI but as far as I know, that's indeed what I believe it does.
loci basically does this but unlike kolla it does not define a common abi for how the contiers are to be run. that has pros and cons but it does mean that every deployment tool that consumes the loci image basically has to invent it itself which is kind of wastful. when containerising openstack, neutron and horizon are often the elephants in the room as managing the installation of vendor/service specific plugins or neutron extension is a pain. e.g how do you build an image with networking-ovn and vpnaas and use the same mechanium to build ml2/ovs with networking-sfc. you either end up installing them all or choosing a subset and people end up building there own image. kolla adresses this by generating the dockerfiles dynamically form a template so that you can build with only the plugins you want and using that same templating you cna select source (git or tarball) or binary installs and the base distro and architecture. the last point is something that is often forgotten. we do most of our ci on x86 but i hear openstack works really well on arm and power too so what ever images are produced should supprot those too. loci does not to my knoladge whve mulit distor or multi arch support but based on what little i know i dont think that is a fundemetal limitation of how it works.
participants (9)
-
Chris Dent
-
Chris Hoge
-
corvus@inaugust.com
-
Fox, Kevin M
-
Jay Pipes
-
Jeremy Stanley
-
Mohammed Naser
-
Sean Mooney
-
Thierry Carrez