[openstack-dev] [kolla] on Dockerfile patterns
David Vossel
dvossel at redhat.com
Wed Oct 15 17:50:08 UTC 2014
----- Original Message -----
> Excerpts from Vishvananda Ishaya's message of 2014-10-15 07:52:34 -0700:
> >
> > On Oct 14, 2014, at 1:12 PM, Clint Byrum <clint at fewbar.com> wrote:
> >
> > > Excerpts from Lars Kellogg-Stedman's message of 2014-10-14 12:50:48 -0700:
> > >> On Tue, Oct 14, 2014 at 03:25:56PM -0400, Jay Pipes wrote:
> > >>> I think the above strategy is spot on. Unfortunately, that's not how
> > >>> the Docker ecosystem works.
> > >>
> > >> I'm not sure I agree here, but again nobody is forcing you to use this
> > >> tool.
> > >>
> > >>> operating system that the image is built for. I see you didn't respond
> > >>> to my point that in your openstack-containers environment, you end up
> > >>> with Debian *and* Fedora images, since you use the "official" MySQL
> > >>> dockerhub image. And therefore you will end up needing to know sysadmin
> > >>> specifics (such as how network interfaces are set up) on multiple
> > >>> operating system distributions.
> > >>
> > >> I missed that part, but ideally you don't *care* about the
> > >> distribution in use. All you care about is the application. Your
> > >> container environment (docker itself, or maybe a higher level
> > >> abstraction) sets up networking for you, and away you go.
> > >>
> > >> If you have to perform system administration tasks inside your
> > >> containers, my general feeling is that something is wrong.
> > >>
> > >
> > > Speaking as a curmudgeon ops guy from "back in the day".. the reason
> > > I choose the OS I do is precisely because it helps me _when something
> > > is wrong_. And the best way an OS can help me is to provide excellent
> > > debugging tools, and otherwise move out of the way.
> > >
> > > When something _is_ wrong and I want to attach GDB to mysqld in said
> > > container, I could build a new container with debugging tools installed,
> > > but that may lose the very system state that I'm debugging. So I need to
> > > run things inside the container like apt-get or yum to install GDB.. and
> > > at some point you start to realize that having a whole OS is actually a
> > > good thing even if it means needing to think about a few more things up
> > > front, such as "which OS will I use?" and "what tools do I need installed
> > > in my containers?"
> > >
> > > What I mean to say is, just grabbing off the shelf has unstated
> > > consequences.
> >
> > If this is how people are going to use and think about containers, I would
> > submit they are a huge waste of time. The performance value they offer is
> > dramatically outweighed by the flexibility and existing tooling that exists
> > for virtual machines. As I state in my blog post[1] if we really want to
> > get value from containers, we must convert to the single application per
> > container view. This means having standard ways of doing the above either
> > on the host machine or in a debugging container that is as easy (or easier)
> > than the workflow you mention. There are not good ways to do this yet, and
> > the community hand-waves it away, saying things like, "well you could …".
> > "You could" isn't good enough. The result is that a lot of people that are
> > using containers today are doing fat containers with a full os.
> >
>
> I think we really agree.
>
> What the container universe hasn't worked out is all the stuff that the
> distros have worked out for a long time now: consistency.
I agree we need consistency. I have an idea: what if we developed an entrypoint
script standard?

Something like LSB init scripts, except tailored towards the container use case.
The primary difference would be that the 'start' action of this new standard
wouldn't fork; instead, 'start' would run the service in the foreground as pid 1.
The 'status' action could be checked externally by calling the exact same
entrypoint script to invoke the 'status' function.
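To make that concrete, here's a rough sketch of what such an entrypoint could
look like. This is only an illustration: the /entrypoint.py path, the mysqld
command, and the mysqladmin health check are placeholders, not part of any
proposed standard.

    #!/usr/bin/env python
    # hypothetical /entrypoint.py -- one service per container
    import os
    import subprocess
    import sys

    SERVICE = ["/usr/sbin/mysqld", "--user=mysql"]  # placeholder service command

    def start():
        # no fork: exec the daemon in place of this script so it runs as pid 1
        os.execv(SERVICE[0], SERVICE)

    def status():
        # service-specific health check; exit 0 if healthy, non-zero otherwise
        sys.exit(subprocess.call(["mysqladmin", "ping"]))

    if __name__ == "__main__":
        action = sys.argv[1] if len(sys.argv) > 1 else "start"
        {"start": start, "status": status}.get(action, start)()

The container runs the script with no arguments (or 'start') as its entrypoint,
and anything on the outside can re-invoke the same script with 'status' to ask
the service how it's doing.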
This standard would lock us into the 'one service per container' concept while
giving us the ability to standardize on how the container is launched and monitored.
If we all conformed to something like this, docker could even extend the standard
so health checks could be performed using the docker cli tool:

    docker status <container id>

Internally, docker would just be doing an nsenter into the container and calling
the 'status' function defined by our init script standard.
We already have docker start <container> and docker stop <container>. Being able
to generically call something like docker status <container> and have that
translate into some service-specific command inside the container would be
kind of neat.
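Until docker grows something like that, you could approximate it from the host
today. A rough sketch, assuming the entrypoint script sketched above lives at
/entrypoint.py inside the container, and that you have nsenter and root on the
host:

    #!/usr/bin/env python
    # hypothetical host-side shim behaving like 'docker status <container>'
    import subprocess
    import sys

    def container_pid(container):
        # ask docker for the pid of the container's init process
        out = subprocess.check_output(
            ["docker", "inspect", "--format", "{{.State.Pid}}", container])
        return out.strip().decode()

    def status(container):
        # enter the container's namespaces and call the entrypoint's 'status' action
        return subprocess.call(
            ["nsenter", "--target", container_pid(container), "--mount",
             "--uts", "--ipc", "--net", "--pid", "--",
             "/entrypoint.py", "status"])

    if __name__ == "__main__":
        sys.exit(status(sys.argv[1]))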
Tools like kubernetes could use this functionality to poll a container's health
and detect issues occurring within the container that don't necessarily involve
the container's service process dying outright.
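On top of that, a monitor really only needs a loop. A rough sketch of the kind
of polling such a tool could do per container; check() stands for whatever
invokes the 'status' action (e.g. the shim above), and the failure threshold is
made up:

    # hypothetical health-polling loop
    import time

    def monitor(container, check, interval=10, max_failures=3):
        failures = 0
        while True:
            # check() returns 0 when the service inside reports healthy, even if
            # the container's pid 1 is still running -- that's the whole point
            failures = 0 if check(container) == 0 else failures + 1
            if failures >= max_failures:
                return False  # caller decides: restart, reschedule, alert, ...
            time.sleep(interval)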
Does anyone else have any interest in this? I have quite a bit of experience
with init-script-style standards. It would be trivial for me to define something
like this for us to begin discussing.
-- Vossel
> I think it would be a good idea for containers' filesystem contents to
> be a whole distro. What's at question in this thread is what should be
> running. If we can just chroot into the container's FS, run apt-get/yum to
> install our tools, and then nsenter and attach to the running process, then
> huzzah: I think we have the best of both worlds.
>
> To the container makers: consider that things can and will go wrong,
> and the answer may already exist as a traditional tool, and not be
> "restart the container".
>