[tripleo] Container image tooling roadmap

Sean Mooney smooney at redhat.com
Tue May 5 14:45:58 UTC 2020


On Tue, 2020-05-05 at 15:56 +0200, Bogdan Dobrelya wrote:
> Let's for a minute imagine that each of the raised concerns is 
> addressable. And as a thought experiment, let's put here WHAT has to be 
> addressed for Kolla w/o the need of abandoning it for a custom tooling:
> 
> On 03.05.2020 21:26, Alex Schultz wrote:
> > On Sat, May 2, 2020 at 7:45 AM Jeremy Stanley <fungi at yuggoth.org> wrote:
> > > 
> > > On 2020-05-01 15:18:13 -0500 (-0500), Kevin Carter wrote:
> > > > As you may have seen, the TripleO project has been testing the
> > > > idea of building container images using a simplified toolchain
> > > 
> > > [...]
> > > 
> > > Is there an opportunity to collaborate around the proposed plan for
> > > publishing basic docker-image-based packages for OpenStack services?
> > > 
> > >      https://review.opendev.org/720107
> > > 
> > > Obviously you're aiming at solving this for a comprehensive
> > > deployment rather than at a packaging level, just wondering if
> > > there's a way to avoid having an explosion of different images for
> > > the same services if they could ultimately use the same building
> > > blocks. (A cynical part of me worries that distro "party lines" will
> > > divide folks on what the source of underlying files going into
> > > container images should be, but I'm sure our community is better
> > > than that, after all we're all in this together.)
> > > 
> > 
> > I think this assumes we want an all-in-one system to provide
> > containers. And we don't.  That I think is the missing piece that
> > folks don't understand about containers and what we actually need.
> > 
> > I believe the issue is that the overall process to go from zero to an
> > application in the container is something like the following:
> > 
> > 1) input image (centos/ubi0/ubuntu/clear/whatever)
> 
> * support ubi8 base images
> 
> > 2) Packaging method for the application (source/rpm/dpkg/magic)
> 
> * abstract away all the packaging methods (at least above the base 
> image) to some (better?) DSL perhaps
I'm not sure a custom DSL is the answer to any of these issues.
> 
> > 3) dependencies provided depending on item #1 & 2
> > (venv/rpm/dpkg/RDO/ubuntu-cloud/custom)
> 
> * abstract away all the dependencies (atm I can only think of go.mod & 
> Go's vendor packages example, sorry) to some extra DSL & CLI tooling may be
We have bindep, which is meant to track all binary dependencies in a multi-distro
way, so that is the solution for package dependencies.
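
For example, a hypothetical bindep.txt fragment in nova could tag the binary
dependencies for each backend with a profile label (the package names and labels
here are purely illustrative, not nova's actual bindep entries):

    # hypothetical nova bindep.txt fragment
    libvirt-daemon-kvm [platform:rpm libvirt]
    qemu-kvm [platform:rpm libvirt]
    ceph-common [ceph]

Running something like "bindep -b libvirt ceph" would then emit just the packages
needed for those backends, ready to hand to the package manager.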
> 
> > 4) layer dependency declaration (base -> nova-base -> nova-api,
> > nova-compute, etc)
> 
> * is already fully covered above, I suppose
> 
> > 5) How configurations are provided to the application (at run time or at build)
> 
> (what is missing for the run time, almost a perfection yet?)
> * for the build time, is already fully covered above, i.e. extra DSL & 
> CLI tooling (my biased example: go mod tidy?)
Kolla does it all outside the container, so this is a non-issue really. It's done at
runtime by design, which is the most flexible design and, IMO, the correct one.
> 
> > 6) How application is invoked when container is ultimately launched
> > (via docker/podman/k8s/etc)
> 
> * have a better DSL to abstract away all the container runtime & 
> orchestration details beneath
This has nothing to do with Kolla. This is a concern for TripleO or kolla-ansible, but Kolla
just provides images, it does not execute them. A DSL is not useful to address this.

> 
> > 7) Container build method (docker/buildah/other)
> 
> * support for buildah (more fancy abstractions and DSL extenstions ofc!)
buildah supports the Dockerfile format as far as I am aware, so no, we don't need a DSL;
again, I strongly think that is the wrong approach.
We just need to add a config option to the kolla-build binary and add a second module
that will invoke buildah instead of docker. So we would just have to modify
https://github.com/openstack/kolla/blob/master/kolla/image/build.py
We should not need to modify the Dockerfile templates to support this in any way.
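
As a very rough sketch (this is not existing kolla code; the option name and the
build_image function are invented here to illustrate the idea), the engine selection
could look something like:

    # hypothetical sketch of an engine option for kolla-build
    import subprocess
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.StrOpt('engine', default='docker',
                   choices=['docker', 'buildah'],
                   help='Container build engine to invoke'),
    ])

    def build_image(context_dir, tag):
        if CONF.engine == 'buildah':
            # buildah consumes the same Dockerfile, so the existing
            # templates are reused unchanged; only the invocation differs.
            subprocess.check_call(['buildah', 'bud', '-t', tag, context_dir])
        else:
            # existing docker based build path
            ...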

If kolla-ansible wanted to also support podman, it would just need to reimplement
https://github.com/openstack/kolla-ansible/blob/master/ansible/library/kolla_docker.py
to provide the same interface but invoke podman. Similarly, you could add a config
option to select which module to invoke in a task.
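
Again purely as a sketch (the argument set below is trimmed down and invented for
illustration, it is not the real kolla_docker interface), a kolla_podman module could
keep the same Ansible-facing interface while shelling out to podman:

    # hypothetical ansible/library/kolla_podman.py sketch
    import subprocess
    from ansible.module_utils.basic import AnsibleModule

    def main():
        module = AnsibleModule(argument_spec=dict(
            action=dict(required=True, type='str'),
            name=dict(required=False, type='str'),
            image=dict(required=False, type='str'),
        ))
        if module.params['action'] == 'start_container':
            # same behaviour kolla_docker provides via the docker SDK,
            # here done with the podman CLI instead
            subprocess.check_call(['podman', 'run', '-d', '--name',
                                   module.params['name'],
                                   module.params['image']])
        module.exit_json(changed=True)

    if __name__ == '__main__':
        main()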

> 
> > 
> > The answer to each one of these is dependent on the expectations of
> > the user or application consuming these containers.  Additionally this
> > has to be declared for each dependent application as well
> > (rabbitmq/mariadb/etc). Kolla has provided this at a complexity cost
> > because it needs to support any number of combinations for each of
> 
> * have better modularity: offload some of the "combinations" to 
> interested 3rd party maintainers (split repos into pluggable modules) 
> and their own CI/CD.
We had talked about having a kolla-infra repo for non-OpenStack services in the
past, but the effort was deemed not worth it. So non-OpenStack containers like rabbitmq, mariadb and openvswitch
could be split out, or we could use community-provided containers, but I'm not sure this is needed.

The source vs binary distinction only applies to OpenStack services;
it does not apply to infra containers. Regardless of the name, those are always built
using distro binary packages.
> 
> > these.  Today TripleO doesn't use the build method provided by Kolla
> > anymore because we no longer support docker.  This means we only use
> > Kolla to generate Dockerfiles as inputs to other processes. It should
> 
> NOTE: there is also kolla startup/config APIs on which TripleO will 
> *have to* rely for the next 3-5 years or so. Its compatibility shall not 
> be violated.
> 
> > be noted that we also only want Dockerfiles for the downstream because
> > they get rebuilt with yet another different process. So for us, we
> > don't want the container and we want a method for generating the
> > contents of the container.
> 
> * and again, have better pluggability to abstract away all the 
> downstream vs upstream specifics (btw, I'm not bought on the new custom 
> tooling can solve this problem in a different way but still using 
> better/simpler DSL & tooling)
Pluggability won't help.
The downstream issue is just that we need to build the containers using downstream
repos, which have patches, and in some cases we want to add or remove dependencies based on
what is supported in the product.

If we use bindep correctly, we can achieve this without needing to add pluggability and the
complexity that would involve.

If we simply have a list of bindep labels to install per image, and then update all bindep files
in the component repos to have a label per backend, we could use a single template with default labels
that, when building downstream, could simply be overridden to use the labels we support.

For example, if nova had, say, libvirt, vmware, xen and ceph labels, we could install all of them
by default, installing the bindeps for libvirt, vmware, xen and ceph.
Downstream we could just enable libvirt and ceph, since we don't support xen or vmware.

You would do this via the build config file with a list of labels per image.

In the Dockerfile you would just loop over the labels,
doing "bindep <label> | <package manager> install ..."
to control what got installed.
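
To make that concrete, here is a rough hypothetical fragment of a Dockerfile template
(the install_labels variable is made up for illustration and would come from the build
config, dnf stands in for whatever package manager the base image uses, and it assumes
the component's bindep.txt is available at build time):

    # hypothetical Jinja2 fragment for a kolla Dockerfile template
    {% for label in install_labels %}
    RUN bindep -b {{ label }} | xargs -r dnf install -y
    {% endfor %}

With the default label list this installs every backend's dependencies; a downstream
build just supplies a shorter list.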

That could be abstracted behind a macro fairly simply, by either extending the existing
source config opts with a labels section
https://github.com/openstack/kolla/blob/master/kolla/common/config.py#L288-L292
or creating a similar one.
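
Purely as an illustration (the option name is invented, this is not an existing kolla
option), the new opt could look something like:

    # hypothetical addition to kolla/common/config.py
    from oslo_config import cfg

    _LABEL_OPTS = [
        cfg.ListOpt('install-labels', default=[],
                    help='Bindep labels to install for this image; '
                         'downstream builds can override this list'),
    ]

and a downstream build config could then, hypothetically, override it per image, e.g.
install-labels = libvirt,ceph in the nova-compute section.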

We would need to create a community goal to have all services adopt and use bindep
to describe the dependencies for the different backends they support, but that would be a good goal
apart from this discussion.
> 
> > 
> > IMHO containers are just glorified packaging (yet again and one that
> > lacks ways of expressing dependencies which is really not beneficial
> > for OpenStack).  I do not believe you can or should try to unify the
> > entire container declaration and building into a single application.
> > You could rally around a few different sets of tooling that could
> > provide you the pieces for consumption. e.g. A container file
> > templating engine, a building engine, and a way of
> > expressing/consuming configuration+execution information.
> > 
> > I applaud the desire to try and unify all the things, but as we've
> 
> So the final call: have pluggable and modular design, adjust DSL and 
> tooling to meet those goals for Kolla. So that one who doesn't chase for 
> unification, just sets up his own module and plugs it into build 
> pipeline. Hint: that "new simpler tooling for TripleO" may be that 
> pluggable module!

I don't think this is the right direction, but that said, I'm not going to be working
on TripleO or Kolla in either case to implement my alternative.
Modularity and pluggability are not the answer in this case, in my view.
Unifying and simplifying the build system so that it can be used downstream with no
overrides and minimal configuration cannot be achieved by plugins and modules.
> 
> > seen time and time again when it comes to deployment, configuration
> > and use cases. Trying to solve for all the things ends up having a
> > negative effect on the UX because of the complexity required to handle
> > all the cases (look at tripleo for crying out loud).  I believe it's
> > time to stop trying to solve all the things with a giant hammer and
> > work on a bunch of smaller nails and let folks construct their own
> > hammer.
> > 
> > Thanks,
> > -Alex
> > 
> > 
> > 
> > 
> > > Either way, if they can both make use of the same speculative
> > > container building workflow pioneered in Zuul/OpenDev, that seems
> > > like a huge win (and I gather the Kolla "krew" are considering
> > > redoing their CI jobs along those same lines as well).
> > > --
> > > Jeremy Stanley
> > 
> > 
> 
> 



