[openstack-dev] [infra][diskimage-builder] containers, Containers, CONTAINERS!

Paul Belanger pabelanger at redhat.com
Wed Jan 11 21:24:48 UTC 2017


On Wed, Jan 11, 2017 at 04:04:10PM -0500, Paul Belanger wrote:
> On Sun, Jan 08, 2017 at 02:45:28PM -0600, Gregory Haynes wrote:
> > On Fri, Jan 6, 2017, at 09:57 AM, Paul Belanger wrote:
> > > On Fri, Jan 06, 2017 at 09:48:31AM +0100, Andre Florath wrote:
> > > > Hello Paul,
> > > > 
> > > > thank you very much for your contribution - it is much appreciated.
> > > > 
> > 
> > Seconded - I'm very excited to see some effort put into improving the
> > use case of making containers with DIB. Thanks :).
> > 
> > > > Your patch set addresses a topic that has IMHO not had much focus:
> > > > generating images for containers.  The ideas in the patches are
> > > > good and should be implemented.
> > > > 
> > > > Nevertheless, I'm missing the concept behind your patches.  What I
> > > > saw is a couple of (independent?) patches - and it looks like there
> > > > is one 'big goal' - but I did not really get it.  My proposal is (as
> > > > is done for other bigger changes or new concepts) that you write a
> > > > spec for this first [1].  That would help other people (see e.g.
> > > > Matthew) use the same blueprint for other distributions as well.
> > 
> > I strongly agree that this is something we're going to end up repeating
> > across many distros, so we should make sure there are some common
> > patterns for doing so. A spec seems fine to me, but ideally the end
> > result involves some developer documentation. A spec is probably a good
> > place to start building consensus, which we can then turn into the dev
> > docs.
> > 
> The plan is to start with ubuntu, then move to debian, then fedora and
> finally centos. Fedora and CentOS are obviously harder, since a
> debootstrap equivalent doesn't exist for them.
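
To illustrate the gap (rough sketch only; the suites, mirrors and
package names below are just examples):

  # Debian/Ubuntu: a single tool builds the minimal chroot
  sudo debootstrap --variant=minbase xenial /tmp/rootfs \
      http://archive.ubuntu.com/ubuntu

  # Fedora/CentOS: no debootstrap equivalent, so something like
  # dnf --installroot has to be scripted by hand
  sudo dnf --installroot=/tmp/rootfs --releasever=25 \
      --setopt=install_weak_deps=False install -y fedora-release dnf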
> 
I just created a tripleo-spec outlining the current implementation. We all agree
this is the first step.

https://review.openstack.org/#/c/419139/

> > > Sure, I can write a spec if needed, but the TL;DR is:
> > >
> > > Use diskimage-builder to build a debootstrap --variant=minbase
> > > chroot, and nothing else. I can then take the generated tarball and
> > > do something else with it.
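
To make that concrete, the invocation I have in mind is roughly the
following (sketch only - the ubuntu-rootfs element is the one proposed
in this stack and its interface may still change):

  # Build just the debootstrap minbase chroot and export it as a tarball.
  # -n skips the implicit base element, -t tar selects tarball output.
  disk-image-create -n -t tar -o ubuntu-rootfs ubuntu-rootfs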
> > > 
> > > > One possibility would be to classify different element sets and
> > > > define the dependencies between them.  E.g. have an element class
> > > > 'container' which can be referenced by other classes, but is not
> > > > able to reference those classes (e.g. VM- or hardware-specific
> > > > things).
> > > > 
> > 
> > It sounds like we need to step back a bit and get a clear idea of how
> > we're going to manage the full use case matrix of distro * (minimal /
> > full) * (container / vm / baremetal), which is something that would be
> > nice to get consensus on in a spec. This is something that keeps
> > tripping up both users and devs, and I think adding containers to the
> > matrix is something of a tipping point in terms of complexity, so
> > again, some docs after figuring out our plan would be *awesome*.
> > 
> > Currently we have the distro-minimal elements, which are minimal
> > vm/baremetal, and the distro elements, which actually are full
> > vm/baremetal elements. I assume by adding an element class you mean
> > adding a set of distro-container elements? If so, I worry that we might
> > be falling into a common dib antipattern of making distro-specific
> > elements. I have an alternate proposal:
> > 
> > Let's make two elements: kernel and minimal-userspace, which,
> > respectively, install the kernel package and a minimal set of userspace
> > packages for dib to function (e.g. dependencies for dib-run-parts and
> > package-installs). The kernel element should be doable as basically a
> > package-installs file plus a pkg-map. The minimal-userspace element
> > gets tricky because it needs to install deps which are required for
> > things like package-installs to function (which is why the various
> > distro elements do this independently).  Even so, I think it would be
> > nice to take care of installing these from within the chroot rather
> > than from outside (see https://review.openstack.org/#/c/392253/ for a
> > good reason why). If we do this, the minimal-userspace element can have
> > some common logic to enter the chroot as part of root.d and then
> > install the needed deps.
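
If I understand the proposal, the kernel element would be little more
than this (purely illustrative sketch following the existing
package-installs / pkg-map conventions; the package mappings are only
examples):

  elements/kernel/package-installs.yaml:

    kernel:

  elements/kernel/pkg-map:

    {
      "family": {
        "debian": { "kernel": "linux-image-amd64" },
        "redhat": { "kernel": "kernel" }
      },
      "default": { "kernel": "linux-image-generic" }
    }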
> > 
> > The end result would be that distro-minimal depends on kernel,
> > minimal-userspace, and yum/debootstrap to build a vm/baremetal-capable
> > image. We could also create a distro-container element which depends
> > only on minimal-userspace and yum/debootstrap and creates a minimal
> > container. The point being: the top-level -container or -minimal
> > elements become little more than convenience elements that export a
> > few vars and pull in the proper elements, and the elements/code are
> > broken down by the functionality they provide rather than by use case.
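
If I read that right, the top-level elements end up being little more
than an element-deps file plus a few exported vars, along the lines of
(element names here are only illustrative):

  elements/ubuntu-minimal/element-deps:

    debootstrap
    kernel
    minimal-userspace

  elements/ubuntu-container/element-deps:

    debootstrap
    minimal-userspace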
> > 
> To be honest, this is a ton of work just to create a debootstrap
> 'operating system' element. I'm actually pretty happy with how things
> look today with our -minimal elements, but it will be an uphill battle
> to do the work you are asking for.
> 
> I can certainly understand the need to refactor and optimize code, but
> just look at the effort to create the minimal / cloud elements[6]: it
> has been ongoing since Oct. 2015 and we haven't even landed it.
> 
> [6] https://review.openstack.org/#/c/211859/
> 
> > > > There are two additional major points:
> > > >
> > > > * IMHO you addressed only some of the elements that need adaptation
> > > >   to be usable in containers.  One element I stumbled over yesterday
> > > >   is the base element: it is always included unless you explicitly
> > > >   exclude it.  This base element depends on a complete init system -
> > > >   which is unneeded overhead for a container. [2]
> > 
> > I think you're on the right track with removing base - we had consensus
> > a while back that it should go away but we never got around to it. The
> > big issue is going to be preserving backwards compat or making it part
> > of a major version bump and not too painful to upgrade to. I think we
> > can have this convo on the patchset, though.
> > 
> > > 
> > > Correct, for this I simply pass the -n flag to disk-image-create,
> > > which removes the need to include the base element. If we want to
> > > make a future optimization to remove or keep it, I am okay with that.
> > > But the main goal for me is to land the new ubuntu-rootfs element
> > > with as little disruption as possible.
> > > > 
> > > > * Your patches add a lot of complexity and code duplication.
> > > >   This is not the way it should be (see [3], p 110, p 345).
> > > The main reason this was done is that, yes, there is some code
> > > duplication, but that's because it is done in the root.d phase.
> > > Moving this logic into another phase would require installing python
> > > into the chroot, and then dpkg, dib-python, package-installs, etc.
> > > That basically contaminates the pristine debootstrap environment,
> > > something I am trying hard not to do. I figure 2 lines to delete
> > > stale data is fine.  However, if there is an objection, we can remove
> > > it.  Keep in mind, by deleting the cache we get the tarball size down
> > > to 42MB (from 79MB).
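
For reference, the 'stale data' clean-up is nothing more than dropping
the apt caches from the new chroot, roughly like this (sketch only; [4]
is the actual change):

  # root.d clean-up sketch: drop apt caches so the exported tarball
  # does not carry the downloaded .debs and package lists
  sudo rm -rf "$TARGET_ROOT"/var/cache/apt/archives/*.deb
  sudo rm -rf "$TARGET_ROOT"/var/lib/apt/lists/*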
> > 
> > I think my above proposal about common userspace and kernel elements
> > would solve most of the duplication issues. I am unclear on the reason
> > for not wanting python and some other elements as part of the build.
> > Currently, python in the target image is something we require for a
> > large portion of the dib functionality. It is possible to not have it
> > in the target image, but there's a significant cost to doing so. As
> > such, I'd like to know what the motivation is for this. Is it purely a
> > size concern, and if so, what sizes are we talking about for including
> > python?
> > 
> A use case would be a container that needs to run Go (not that I have
> one). Since containers are meant to be as minimal as possible, python
> is not a required dependency for the application.
> 
> However, since some of the things I would build in a container do
> require python, I can install it after the fact in the cleanup.d phase
> (simple-playbook).  I'd like the option to do that myself or not, which
> is why the ubuntu-rootfs element doesn't depend on package-installs,
> for example.
> 
> > > 
> > > >   One reason is that you do everything twice: once for Debian and
> > > >   once for Ubuntu - and each in a (slightly) different way.
> > > Yes, sadly the debian elements came along after the ubuntu-minimal
> > > elements, with different people writing the code. For the most part
> > > I've been trying to condense the code paths between the two, and we
> > > are slowly getting there.
> > >
> > > As you can see, the debian-rootfs element does now work correctly[6],
> > > based on previous patches in the stack.
> > >
> > > However, I don't believe this is the stack in which to make things
> > > better between the two flavors. We can use the existing ubuntu-minimal
> > > and debian-minimal elements and iterate on top of them.  One of the
> > > next steps is to address how we handle the sources.list file, since
> > > ubuntu and debian do things differently there.
> > > 
> > > [6] https://review.openstack.org/#/c/414765/
> > > 
> > > >   Please: factor out common code.
> > > >   Please: improve code as you touch it.
> > > > 
> > > > And three minor:
> > > > 
> > > > * Release notes are missing (reno is your friend)
> > > > 
> > > Sure, I can add release notes.
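
For the record, that is just something like the following (the slug
name is only an example):

  # creates releasenotes/notes/add-ubuntu-rootfs-element-<random>.yaml
  # for me to fill in
  reno new add-ubuntu-rootfs-element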
> > > 
> > > > * Please do not introduce code which 'later on' can / should / will
> > > >   be cleaned up.  Do it correctly right from the beginning. [4]
> > > > 
> > > I can rebase code if needed.
> > > 
> > > > * It looks like this is a bigger patch set - so maybe we should
> > > >   include it in v2?
> > > > 
> > > I'm not sure we need to wait for v2 (but I am biased).  I've recently
> > > revamped our testing infra for diskimage-builder: we now build images,
> > > launch them with nodepool, and SSH into them.
> > >
> > > Side note, when is v2 landing?  I know there have been issues with
> > > tripleo.
> > > 
> > 
> > We basically have one patch left (the new block-device.d system) which
> > needs to land before we can cut an RC. As a result, I would prefer not
> > to add more things to v2 (so we can get on with the process of getting
> > it released). That patch is blocked on a final +A, and then on someone
> > merging master into the v2 branch before the +A. I plan on doing that
> > once I'm back from holiday (1/11) if someone hasn't done so by then.
> > Once it merges, the rough plan is to cut an RC, mail the list asking
> > folks for feedback, and cycle on that until we feel comfortable
> > releasing.
> > 
> > > > Kind regards
> > > > 
> > > > Andre
> > > > 
> > > > 
> > > > [1] https://review.openstack.org/#/c/414728/
> > > > [2] https://review.openstack.org/#/c/417310/
> > > > [3] "Refactoring - Improving the Design of Existing Code", Martin
> > > >     Fowler, Addison Wesley, Boston, 2011
> > > > [4] https://review.openstack.org/#/c/414728/8/elements/debootstrap-minimal/root.d/99-clean-up-cache
> > > > [5] https://review.openstack.org/#/c/413221/
> > > > 
> > 
> > Thanks,
> > Greg
> > 