[openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker

Clint Byrum clint at fewbar.com
Thu Sep 25 04:13:53 UTC 2014


Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
> Steven Dake <sdake at redhat.com> wrote on 09/24/2014 11:02:49 PM:
> 
> > On 09/24/2014 03:31 PM, Alan Kavanagh wrote:
> > Steven
> > I have to ask: what are the motivation and benefits of 
> > integrating Kubernetes into OpenStack? It would be really useful if you
> > could elaborate and outline some use cases and the benefits OpenStack and 
> > Kubernetes stand to gain. 
> >  
> > /Alan
> >  
> > Alan,
> > 
> > I am not aware of another Docker scheduler that is 
> > currently available and has a large (100+ person) development 
> > community.  Kubernetes meets these requirements, and that is my main 
> > motivation for using it to schedule Docker containers.  There are 
> > other ways to skin this cat - the TripleO folks at one point wanted 
> > to deploy nova with the nova-docker VM driver to do such a thing.  
> > That model seemed a little clunky to me since it isn't purpose-built
> > around containers.
> 
> Does TripleO require container functionality that is not available
> when using the Docker driver for Nova?
> 
> As far as I can tell, the quantitative handling of capacities and
> demands in Kubernetes is much inferior to what Nova does today.
> 

Yes, TripleO needs to manage baremetal and containers from a single
host. Nova and Neutron do not offer this as a feature unfortunately.

> > As far as use cases go, the main use case is to run a specific 
> > Docker container on a specific Kubernetes "minion" bare metal host.
> 
> If TripleO already knows it wants to run a specific Docker image
> on a specific host then TripleO does not need a scheduler.
> 

TripleO never specifies a destination host, because Nova does not
allow that, nor should it. It does want to isolate failure domains so
that all three Galera nodes aren't on the same PDU, but we've not really
gotten to the point where we can do that yet.

> > These docker containers are then composed of the various config 
> > tools and services for each detailed service in OpenStack.  For 
> > example, mysql would be a container, and tools to configure the 
> > mysql service would exist in the container.  Kubernetes would pass 
> > config options for the mysql database prior to scheduling
> 
> I am not sure what is meant here by "pass config options" nor how it
> would be done prior to scheduling; can you please clarify?
> I do not imagine Kubernetes would *choose* the config values,
> K8s does not know anything about configuring OpenStack.
> Before scheduling, there is no running container to pass
> anything to.
> 

Docker containers tend to use environment variables passed to the initial
command to configure things. The Kubernetes API allows setting these
environment variables on creation of the container.
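As a minimal sketch of what "setting these environment variables on creation" looks like, here is a container manifest in the shape of the 2014-era v1beta1 API, built as a plain Python dict. The image name, env var names, and values are illustrative, not taken from any real TripleO deployment:

```python
import json

# Sketch: a Kubernetes (v1beta1-era) pod manifest that passes MySQL
# configuration to the container as environment variables at creation
# time. Field names follow the early manifest shape; the specific env
# vars and values here are illustrative assumptions.
manifest = {
    "id": "mysql",
    "desiredState": {
        "manifest": {
            "version": "v1beta1",
            "containers": [{
                "name": "mysql",
                "image": "mysql",
                "env": [
                    {"name": "MYSQL_ROOT_PASSWORD", "value": "secret"},
                    {"name": "MYSQL_DATABASE", "value": "keystone"},
                ],
            }],
        }
    },
}

# This JSON body would be POSTed to the Kubernetes API to create the pod.
print(json.dumps(manifest, indent=2))
```

The point is only the shape of the interface: configuration rides along in the creation request, so the container's entrypoint can read it from the environment once the container starts.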

> >                                                           and once 
> > scheduled, Kubernetes would be responsible for connecting the 
> > various containers together.
> 
> Kubernetes has a limited role in connecting containers together.
> K8s creates the networking environment in which the containers
> *can* communicate, and passes environment variables into containers
> telling them the protocol://host:port/ at which to reach each
> imported endpoint.  Kubernetes runs a universal reverse proxy on each
> minion to provide endpoints that do not vary as the servers
> move around.
> It is up to stuff outside Kubernetes to decide
> what should be connected to what, and it is up to the containers
> to read the environment variables and actually connect.
> 

This is a nice simple interface though, and I like that it is narrowly
defined, not trying to be "anything that containers want to share with
other containers."
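From the container's side, that narrow interface amounts to reading a couple of environment variables. A sketch, assuming the `<SERVICE>_SERVICE_HOST` / `<SERVICE>_SERVICE_PORT` naming convention Kubernetes uses for injected service endpoints (the `MYSQL` service name and fallback values here are illustrative):

```python
import os

# Sketch: how a container consumes the endpoint environment variables
# Kubernetes injects. The MYSQL service name and the fallback defaults
# are illustrative assumptions, not from a real deployment.
host = os.environ.get("MYSQL_SERVICE_HOST", "127.0.0.1")
port = int(os.environ.get("MYSQL_SERVICE_PORT", "3306"))

# The container connects to the stable proxied endpoint; where the
# actual mysql server runs can change without this address changing.
dsn = f"mysql://{host}:{port}/"
print(dsn)
```

Because the proxy endpoint stays stable as pods move, the container never needs to know which minion actually hosts the service.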
