[openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Steven Dake
sdake at redhat.com
Thu Sep 25 15:35:50 UTC 2014
On 09/24/2014 10:01 PM, Mike Spreitzer wrote:
> Clint Byrum <clint at fewbar.com> wrote on 09/25/2014 12:13:53 AM:
>
> > Excerpts from Mike Spreitzer's message of 2014-09-24 20:49:20 -0700:
> > > Steven Dake <sdake at redhat.com> wrote on 09/24/2014 11:02:49 PM:
> > > > ...
> > > ...
> > > Does TripleO require container functionality that is not available
> > > when using the Docker driver for Nova?
> > >
> > > As far as I can tell, the quantitative handling of capacities and
> > > demands in Kubernetes is much inferior to what Nova does today.
> > >
> >
> > Yes, TripleO needs to manage baremetal and containers from a single
> > host. Nova and Neutron do not offer this as a feature unfortunately.
>
> In what sense would Kubernetes "manage baremetal" (at all)?
> By "from a single host" do you mean that a client on one host
> can manage remote baremetal and containers?
>
> I can see that Kubernetes allows a client on one host to get
> containers placed remotely --- but so does the Docker driver for Nova.
>
> >
> > > > As far as use cases go, the main use case is to run a specific
> > > > Docker container on a specific Kubernetes "minion" bare metal host.
>
> Clint, in another branch of this email tree you referred to
> "the VMs that host Kubernetes". How does that square with
> Steve's text that seems to imply bare metal minions?
>
> I can see that some people have had much more detailed design
> discussions than I have yet found. Perhaps it would be helpful
> to share an organized presentation of the design thoughts in
> more detail.
>
Mike,
I have had no such design discussions. The furthest we have gotten in
the project so far is determining that we need a Docker container for
each of the OpenStack daemons. We are working a bit on how that design
should operate. For example, our current model for reconfiguring a
Docker container is to kill it and start a fresh one with the new
configuration.
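To make that kill-and-recreate model a bit more concrete, a rough
sketch of the idea might look something like this (the container name,
image, and config path are just placeholders, not actual Kolla code):

    import subprocess

    def reconfigure(name, image, config_dir):
        # Remove the old container; ignore the error if it does not exist yet.
        subprocess.call(["docker", "rm", "-f", name])
        # Start a fresh container that reads the new configuration on boot.
        subprocess.check_call([
            "docker", "run", "-d", "--name", name,
            "-v", "%s:/etc/keystone:ro" % config_dir,
            image,
        ])

    # Hypothetical example usage:
    reconfigure("keystone", "example/keystone", "/etc/kolla/keystone")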
This is literally where the design discussions have ended. We have not
had much discussion about Kubernetes at all, other than that I know it
is a Docker scheduler and I know it can get the job done :) I think
other folks' design discussions so far on this thread are speculation
about what an architecture should look like. That is great - let's have
those discussions Monday at 2000 UTC in #openstack-meeting at our first
Kolla meeting.
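As for the use case quoted above - running a specific Docker container
on a specific minion - a pod could be pinned to a host roughly like
this (again just a sketch; the nodeSelector-style field and the names
here are illustrative, not a decision about what Kolla will use):

    import json

    # Illustrative pod definition pinned to one bare metal minion.
    pod = {
        "kind": "Pod",
        "metadata": {"name": "keystone"},
        "spec": {
            "containers": [
                {"name": "keystone", "image": "example/keystone"},
            ],
            # Constrain scheduling to a single named host.
            "nodeSelector": {"kubernetes.io/hostname": "minion-01"},
        },
    }

    print(json.dumps(pod, indent=2))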
Regards
-steve
> > >
> > > If TripleO already knows it wants to run a specific Docker image
> > > on a specific host then TripleO does not need a scheduler.
> > >
> >
> > TripleO does not ever specify destination host, because Nova does not
> > allow that, nor should it. It does want to isolate failure domains so
> > that all three Galera nodes aren't on the same PDU, but we've not really
> > gotten to the point where we can do that yet.
>
> So I am still not clear on what Steve is saying the main use case is.
> Kubernetes is even farther from balancing among PDUs than Nova is.
> At least Nova has a framework in which this issue can be posed and
> solved.
> I mean a framework that actually can carry the necessary information.
> The Kubernetes scheduler interface is extremely impoverished in the
> information it passes, and it uses Go structs --- which, like C structs,
> cannot be subclassed.
> Nova's filter scheduler includes a fatal bug that bites when you are
> balancing and want more than one element per area; see
> https://bugs.launchpad.net/nova/+bug/1373478.
> However: (a) you might not need more than one element per area and
> (b) fixing that bug is a much smaller job than expanding the mind of K8s.
>
> Thanks,
> Mike
>
>