[openstack-dev] [kolla] on Dockerfile patterns
Clint Byrum
clint at fewbar.com
Wed Oct 15 16:51:03 UTC 2014
Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
>
> On Oct 14, 2014, at 1:12 PM, Clint Byrum <clint at fewbar.com> wrote:
>
> > Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:
> >> On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
> >>> I think the above strategy is spot on. Unfortunately, that's not how the
> >>> Docker ecosystem works.
> >>
> >> I'm not sure I agree here, but again nobody is forcing you to use this
> >> tool.
> >>
> >>> operating system that the image is built for. I see you didn't respond to my
> >>> point that in your openstack-containers environment, you end up with Debian
> >>> *and* Fedora images, since you use the "official" MySQL dockerhub image. And
> >>> therefore you will end up needing to know sysadmin specifics (such as how
> >>> network interfaces are set up) on multiple operating system distributions.
> >>
> >> I missed that part, but ideally you don't *care* about the
> >> distribution in use. All you care about is the application. Your
> >> container environment (docker itself, or maybe a higher level
> >> abstraction) sets up networking for you, and away you go.
> >>
> >> If you have to perform system administration tasks inside your
> >> containers, my general feeling is that something is wrong.
> >>
> >
> > Speaking as a curmudgeon ops guy from "back in the day".. the reason
> > I choose the OS I do is precisely because it helps me _when something
> > is wrong_. And the best way an OS can help me is to provide excellent
> > debugging tools, and otherwise move out of the way.
> >
> > When something _is_ wrong and I want to attach GDB to mysqld in said
> > container, I could build a new container with debugging tools installed,
> > but that may lose the very system state that I'm debugging. So I need to
> > run things inside the container like apt-get or yum to install GDB.. and
> > at some point you start to realize that having a whole OS is actually a
> > good thing even if it means needing to think about a few more things up
> > front, such as "which OS will I use?" and "what tools do I need installed
> > in my containers?"
> >
> > What I mean to say is, just grabbing off the shelf has unstated
> > consequences.
>
> If this is how people are going to use and think about containers, I would
> submit they are a huge waste of time. The performance value they offer is
> dramatically outweighed by the flexibility and mature tooling that exist
> for virtual machines. As I state in my blog post[1], if we really want to
> get value from containers, we must move to the view of a single
> application per container. This means having standard ways of doing the
> above, either on the host machine or in a debugging container, with a
> workflow that is as easy as (or easier than) the one you mention. There
> are no good ways to do this yet, and the community hand-waves the problem
> away, saying things like "well, you could ...". "You could" isn't good
> enough. The result is that a lot of people who are using containers today
> are building fat containers with a full OS.
>
I think we really agree.

What the container universe hasn't worked out is all the stuff that the
distros worked out long ago: consistency.

I think it would be a good idea for a container's filesystem contents to
be a whole distro. What's at question in this thread is what should be
running. If we can just chroot into the container's FS and run apt-get/yum
to install our tools, and then nsenter and attach to the running process,
then huzzah: I think we have the best of both worlds.
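That workflow might look something like this (a sketch only, run as root on the container host; the container name "mysql1" is hypothetical, and this assumes a Debian/Ubuntu-based container image):

```shell
# Sketch of the chroot + nsenter workflow described above.
# "mysql1" is a hypothetical container name; run as root on the host.

CID=mysql1
# Ask Docker for the container's init PID as seen from the host.
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")

# The container's root filesystem, reachable from the host.
ROOTFS=/proc/$PID/root

# chroot into the container's FS and install debugging tools there,
# preserving the running system state we want to inspect.
chroot "$ROOTFS" apt-get update
chroot "$ROOTFS" apt-get install -y gdb

# Enter the container's namespaces and attach to the running process
# (inside the container's PID namespace, the main process is typically PID 1).
nsenter --target "$PID" --mount --uts --ipc --net --pid gdb -p 1
```

On an RPM-based image the two apt-get lines would be a single `chroot "$ROOTFS" yum install -y gdb` instead.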
To the container makers: consider that things can and will go wrong, and
the answer may already exist as a traditional tool, not just "restart the
container".
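As for the "debugging container" Vish mentions, here is one possible shape it could take (a sketch only; the `--pid=container:` and `--net=container:` flags are assumptions about Docker CLI features newer than what ships today, and "mysql1" is a hypothetical container name):

```shell
# Sketch: a throwaway debug container that shares the target container's
# PID and network namespaces, so tools installed here never modify the
# container being debugged. The --pid=container:/--net=container: flags
# are assumptions about a newer Docker CLI; "mysql1" is hypothetical.
docker run -it --rm \
    --pid=container:mysql1 \
    --net=container:mysql1 \
    --cap-add=SYS_PTRACE \
    ubuntu:14.04 \
    bash -c 'apt-get update && apt-get install -y gdb && exec gdb -p 1'
```

Run that way, gdb can see and attach to the target's processes without touching the target's filesystem at all.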