[OpenStack-Infra] Creating OpenDev control-plane docker images and naming
iwienand at redhat.com
Tue Nov 26 06:31:07 UTC 2019
I'm trying to get us to a point where we can use nodepool container
images in production, particularly because I want to use updated tools
available in later distributions than our current Xenial builders.
We have hit the hardest problem: naming :)
To build a speculative nodepool-builder container image that is
suitable for a CI job (the prerequisite for production), we need to
somehow layer openstacksdk, diskimage-builder and finally nodepool
itself into one image for testing. 
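As a sketch, the kind of layering I mean might look something like
this (the paths and install steps here are illustrative, not the
actual job setup):

```dockerfile
# Illustrative only: layer the three projects into one testable image.
# Each COPY pulls in a checkout, which in a CI job could be a
# speculative (Depends-On) version of that project.
FROM opendev/python-base

COPY openstacksdk /src/openstacksdk
RUN pip install /src/openstacksdk

COPY diskimage-builder /src/diskimage-builder
RUN pip install /src/diskimage-builder

COPY nodepool /src/nodepool
RUN pip install /src/nodepool
```

Because each layer is just a pip install of a checkout, any one of
the three could in principle be swapped for a speculative version
without touching the others.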
These all live in different namespaces, and the links between them are
not always clear. Maybe a builder doesn't need diskimage-builder if
images come from elsewhere. Maybe a launcher doesn't need
openstacksdk if it's talking to some other cloud.
This becomes weird when the zuul/nodepool-builder image depends on
opendev/python-base but also openstack/diskimage-builder and
openstack/openstacksdk. You've got 3 different namespaces crossing
with no clear indication of what is supposed to work together.
I feel like we've been (or at least I have been) thinking that each
project will have *a* Dockerfile that produces some canonical
<namespace/project> image. I think I've come to the conclusion this
is wrong: there can't be a single container that suits everyone, and
indeed this isn't the Zen of containers anyway.
What I would propose is that projects do *not* have a single,
top-level Dockerfile, but only (potentially many) specifically
targeted ones.
So for example, everything in the opendev/ namespace will be expected
to build from opendev/python-base. Even though dib, openstacksdk and
zuul come from different source-repo namespaces, it will make sense
because these containers are expected to work together as the opendev
control plane containers. Since opendev/nodepool-builder is defined
as an image that is expected to make RAX-compatible, OpenStack-uploadable
images, it makes logical sense for it to bundle the kitchen sink.
I would expect that nodepool would also have a Dockerfile.zuul to
create images in the zuul/ namespace as the "reference"
implementation. Maybe that looks a lot like Dockerfile.opendev -- but
then again maybe it makes different choices and does stuff like
Windows support etc. that the opendev ecosystem will not be interested
in. You can still build and test these images just the same; just
we'll know they're targeted at doing something different.
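To make the contrast concrete, a hypothetical Dockerfile.zuul for the
"reference" image might be nothing more than (again, purely
illustrative):

```dockerfile
# Hypothetical reference image: just nodepool on a generic Python
# base, with none of the opendev control-plane assumptions baked in.
FROM python:3-slim

COPY . /src/nodepool
RUN pip install /src/nodepool
```

while the Dockerfile.opendev variant would build FROM
opendev/python-base and bundle diskimage-builder and openstacksdk
alongside.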
As an example:
https://review.opendev.org/696015 - create opendev/openstacksdk image
https://review.opendev.org/693971 - create opendev/diskimage-builder
(a nodepool change will follow, but it's a bit harder as it's
cross-tenant so projects need to be imported).
Perhaps codifying that there's no such thing as *a* Dockerfile, and
possibly rules about what happens in the opendev/ namespace, is
spec-worthy; I'm not sure.
I hope this makes some sense!
Otherwise, I'd be interested in any and all ideas of how we basically
convert the nodepool-functional-openstack-base job to containers (that
means, bring up a devstack, and test nodepool, dib & openstacksdk with
full Depends-On: support to make sure it can build, upload and boot).
I consider that a pre-requisite before we start rolling anything out.
I know we have ideas to work around the limitations of using host
tools to build images, but one thing at a time! :)
I started looking at installing these together from a Dockerfile
in system-config. The problem is that you have a "build context",
basically the directory the Dockerfile is in and everything under
it. You can't reference anything outside this. This does not
play well with Zuul, which has checked out the code for dib,
openstacksdk & nodepool into three different sibling directories.
So to speculatively build them together, you have to start copying
Zuul checkouts of code underneath your system-config Dockerfile
which is crazy. It doesn't use any of the speculative build
registry stuff, and just feels wrong because you're not building
small parts on top of each other, as Docker is designed to do. I
still don't really know how it will work across all the projects
for testing either.
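To illustrate the context problem (the layout below is roughly what
Zuul produces; the exact paths are from memory):

```dockerfile
# Zuul checks projects out as siblings, roughly:
#
#   src/opendev.org/opendev/system-config/   <- Dockerfile lives here
#   src/opendev.org/openstack/diskimage-builder/
#   src/opendev.org/openstack/openstacksdk/
#   src/opendev.org/zuul/nodepool/
#
# COPY paths are resolved inside the build context (the directory the
# Dockerfile is in) and cannot escape it, so Docker rejects this with
# a "forbidden path outside the build context" error:
COPY ../../openstack/diskimage-builder /src/diskimage-builder
```

so the only way to build them together from system-config is to first
copy the sibling checkouts underneath the Dockerfile's directory,
which is the copying described above.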