[openstack-dev] [TripleO][Edge] Reduce base layer of containers for security and size of images (maintenance) sakes

Alex Schultz aschultz at redhat.com
Wed Nov 28 19:27:42 UTC 2018

On Wed, Nov 28, 2018 at 11:44 AM Chris Dent <cdent+os at anticdent.org> wrote:
> On Wed, 28 Nov 2018, James Slagle wrote:
> > Why would we even run the exact same puppet binary + manifest
> > individually 40,000 times so that we can produce the exact same set of
> > configuration files that differ only by things such as IP address,
> > hostnames, and passwords?
> This has been my confusion and question throughout this entire
> thread. It sounds like containers are being built (and configured) at
> something akin to runtime, instead of built once and then configured
> (only) at runtime. Isn't it more the "norm" to, when there's a security
> fix, build again, once, and cause the stuff at edge (keeping its config)
> to re-instantiate fetching newly built stuff?

No, we aren't building container images, we're building configurations.
The way it works in TripleO is that we use the same containers to
generate the configurations as we do to run the services themselves.
These configurations are mounted off the host so as not to end up in
the container.  This is primarily because things like the puppet
modules assume certain chunks of software/configuration files exist.
So we're generating the configuration files to be mounted into the
runtime container.  The puppet providers are extremely mature and
allow for in-place editing without templates, which is how we can get
away with this in containers.  The containers themselves are not built
or modified on the fly in this case.
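To illustrate the in-place editing style (as opposed to templating), here is a minimal Python sketch -- not TripleO or puppet code, and the paths are made up -- of what an ini-setting-style provider effectively does: it updates a single key in an existing file and leaves the rest of the settings alone, so no template of the whole file is needed.

```python
import configparser

def set_option(path, section, option, value):
    """Edit one option in place, preserving the file's other settings
    (roughly what puppet's ini-setting providers do, minus templates).
    Note: configparser drops comments; the real providers preserve more."""
    cfg = configparser.ConfigParser()
    cfg.read(path)
    if not cfg.has_section(section):
        cfg.add_section(section)
    cfg.set(section, option, value)
    with open(path, "w") as f:
        cfg.write(f)

# Hypothetical usage: point one service at this node's database without
# regenerating the whole file.
# set_option("/var/lib/config-data/keystone/keystone.conf",
#            "database", "connection", "mysql+pymysql://keystone@192.0.2.10/keystone")
```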

IMHO this is a side effect of how configuration files for OpenStack
services and their dependencies work: we need to somehow inject the
running config into the container rather than being able to load it
from an external source (remember the etcd oslo stuff from a few
cycles ago?).  Our problem is our reliance on puppet, due to existing
established configuration patterns and the sheer amount of code
required to configure OpenStack & company.  So we end up having to
carry these package dependencies in the service containers because
that's where we generate the configs.  There are additional
dependencies on being able to know about hardware specifics (facts)
that come into play in the configurations, such that we may not be
able to generate the configs off the deployment host and just ship
them with the containers.

> Throughout the discussion I've been assuming I must be missing some
> critical detail because isn't the whole point to have immutable
> stuff? Maybe it is immutable and you all are talking about it in
> ways that make it seem otherwise. I dunno. I suspect I am missing
> some bit of operational experience.

The application is immutable, but the configs need to be generated
depending on where they end up and the end user's desired
configuration.  For some services that includes pulling in information
about the host and including it (SR-IOV, PCI, etc.).
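The shape of the problem can be sketched in a few lines of Python (purely illustrative -- the fact names are invented, not TripleO's): the configuration logic is identical everywhere, but it has to merge in facts that can only be discovered on the host itself, which is why generation runs per node.

```python
import socket

def gather_facts(overrides=None):
    """Collect the host-specific 'facts' that make each node's otherwise
    identical config unique.  Keys here are hypothetical."""
    facts = {
        "hostname": socket.gethostname(),
        # Hardware discovery (SR-IOV NICs, PCI addresses, ...) would go
        # here too -- it can only run on the host in question.
    }
    facts.update(overrides or {})
    return facts

def render(shared, facts):
    """Merge deployment-wide settings with per-host facts: the result
    differs between nodes only where the facts differ."""
    merged = dict(shared)
    merged.update(facts)
    return merged
```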

> In any case, the "differ only by things..." situation is exactly why
> I added the get-config-from-environment support to oslo.config, so
> that the different bits can be in the orchestrator, not the
> containers themselves. More on that at:
> http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000173.html

Given the vast number of configuration options exposed in each
service, I'm not sure environment variables help here.  Additionally,
that doesn't solve for non-oslo services (mysql/rabbitmq/etc.), so
you'd end up with two ways of configuring the containers/services.
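For reference, the environment-variable approach Chris describes can be sketched generically (this is not the oslo.config implementation, and the `OS_<GROUP>__<OPTION>` naming here is illustrative): the orchestrator sets per-node variables, and the service resolves them at startup, falling back to defaults baked into the image.

```python
import os

def resolve(group, option, defaults, environ=None):
    """Look up a config value from the environment first, then from the
    image's defaults.  Naming scheme is an assumption for illustration."""
    env = environ if environ is not None else os.environ
    key = "OS_{}__{}".format(group.upper(), option.upper())
    if key in env:
        return env[key]
    return defaults.get(group, {}).get(option)

# The orchestrator would export e.g. OS_DATABASE__CONNECTION per node,
# while the container image itself stays identical everywhere.
```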

> --
> Chris Dent                       ٩◔̯◔۶           https://anticdent.org/
> freenode: cdent                                         tw: @anticdent
