[openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and Manage OpenStack using Kubernetes and Docker
Fox, Kevin M
Kevin.Fox at pnnl.gov
Fri Sep 26 16:54:10 UTC 2014
> -----Original Message-----
> From: Angus Lees [mailto:guslees at gmail.com] On Behalf Of Angus Lees
> Sent: Thursday, September 25, 2014 9:01 PM
> To: openstack-dev at lists.openstack.org
> Cc: Fox, Kevin M
> Subject: Re: [openstack-dev] [all][tripleo] New Project -> Kolla: Deploy and
> Manage OpenStack using Kubernetes and Docker
>
> On Thu, 25 Sep 2014 04:01:38 PM Fox, Kevin M wrote:
> > Doesn't nova with a docker driver and heat autoscaling handle cases 2
> > and 3 for control jobs? Has anyone tried yet?
>
> For reference, the cases were:
>
> > - Something to deploy the code (docker / distro packages / pip install / etc)
> > - Something to choose where to deploy
> > - Something to respond to machine outages / autoscaling and re-deploy
> > as necessary
>
>
> I tried for a while, yes. The problems I ran into (and I'd be interested to
> know if there are solutions to these):
>
> - I'm trying to deploy into VMs on rackspace public cloud (just because that's
> what I have). This means I can't use the nova docker driver, without
> constructing an entire self-contained openstack undercloud first.
That's true. But you are essentially doing the same thing with Kubernetes: installing a self-contained undercloud.
If that's the case, it would be nice to use the same docker containers to build an undercloud that deploys the overcloud, and reuse almost all of the work, rather than deploying two different systems.
> - heat+cloud-init (afaics) can't deal with circular dependencies (like
> nova<->neutron) since the machines need to exist first before you can
> refer to their IPs.
Could this be made to work by putting either a floating IP, or a load balancer with a floating IP, in front, so you know beforehand what the address is going to be?
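Something like the following (a minimal, untested sketch; the resource and parameter names are made up) allocates the address in its own stack, so later stacks can take it as a plain parameter and the circular dependency goes away:

    heat_template_version: 2013-05-23
    description: Allocate an endpoint address before any backends exist
    parameters:
      public_net_id:
        type: string
        description: ID of the external network to allocate from
    resources:
      api_vip:
        type: OS::Neutron::FloatingIP
        properties:
          floating_network_id: { get_param: public_net_id }
    outputs:
      api_vip_address:
        description: Feed this into the stacks that need the endpoint
        value: { get_attr: [ api_vip, floating_ip_address ] }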
> From what I can see, TripleO gets around this by always scheduling them on
> the same machine and just using the known local IP. Other installs declare
> fixed IPs up front - on rackspace I can't do that (easily).
> I can't use loadbalancers via heat for this because the loadbalancers need to
> know the backend node addresses, which means the nodes have to exist
> first and you're back to a circular dependency.
Starting in Icehouse, heat does not need to know the backend IPs at lb creation time. There is a PoolMember resource that acts similarly to a cinder volume attachment resource: it lets you create the lb in one stack, and then, in another stack, have an instance template that launches the vm, installs stuff, and binds the vm to the lb.
For an example, see:
https://github.com/EMSL-MSC/heat-templates/tree/master/cfn/389/
BaseCluster and RO_Replica templates.
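The second-stack side of that pattern looks roughly like this (an untested sketch; the pool_id would come from an output of the load balancer stack, and all names here are illustrative):

    heat_template_version: 2013-05-23
    description: Launch a backend and register it with an existing pool
    parameters:
      pool_id:
        type: string
        description: ID of the pool created in the load balancer stack
      image:
        type: string
      flavor:
        type: string
    resources:
      backend:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
      member:
        type: OS::Neutron::PoolMember
        properties:
          pool_id: { get_param: pool_id }
          address: { get_attr: [ backend, first_address ] }
          protocol_port: 80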
> For comparison, with kubernetes you declare the loadbalancer-equivalents
> (services) up front with a search expression for the backends. In a second
> pass you create the backends (pods), which can refer to any of the
> loadbalanced endpoints. The loadbalancers then reconfigure themselves
> on the fly to find the new backends. You _can_ do a similar
> lazy-loadbalancer-reconfig thing with openstack too, but not with heat
> and not just "out of the box".
As above, you should be able to do this with heat now.
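For reference, the kubernetes pattern you're describing looks roughly like the following (a sketch only; the kubernetes API schema is still evolving, and the names, labels, ports, and image here are all illustrative). The service selects backends by label, so it can exist before any pods do:

    apiVersion: v1
    kind: Service
    metadata:
      name: nova-api
    spec:
      selector:
        app: nova-api          # matches pods created later
      ports:
        - port: 8774
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nova-api-1
      labels:
        app: nova-api          # picked up by the service above
    spec:
      containers:
        - name: nova-api
          image: example/nova-api    # hypothetical image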
> - My experiences using heat for anything complex have been extremely
> frustrating. The version on rackspace public cloud is ancient and limited,
> and quite easy to get into a state where the only fix is to destroy the entire
> stack and recreate it. I'm sure these are fixed in newer versions of heat, but
> last time I tried I was unable to run it standalone against an arms-length
> keystone because some of the recursive heat callbacks became confused
> about which auth token to use.
Newer versions of heat are better, and it's getting better all the time. I agree it's not yet as fault tolerant as it needs to be in all cases. For now, I usually end up breaking my systems into a few separate stacks, so that I can handle updates in a way where, if there is a failure, nothing critical gets lost. Usually that means launching/deleting/relaunching components rather than ever updating them.
> (I'm sure this can be fixed, if it wasn't already just me using it wrong in the
> first place.)
>
> - As far as I know, nothing in a heat/loadbalancer/nova stack will actually
> reschedule jobs away from a failed machine.
That's a problem currently, yes.
> There's also no lazy discovery/nameservice mechanism, so updating IP
> address declarations in cloud-configs tends to ripple through the heat
> config and cause all sorts of VMs/containers to be reinstalled without
> any sort of throttling or rolling update.
I try to use floating IPs for these. Designate support, so that you have a nice pretty name for it, would be even better. Alternatively, I've had good luck with private Neutron networks. When you own the net, you can assign a VM any IP you want on it. In cases like Ceph, where you have mons that need to be at known IPs, I just make sure the VMs running the mons are always launched with those fixed IPs. Then you never have to reconfigure the slaves.
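The fixed-IP trick is just a pre-created Neutron port, something like this (an untested sketch with made-up names and addresses):

    heat_template_version: 2013-05-23
    description: Launch a Ceph mon at a known, fixed IP on a private net
    parameters:
      mon_net_id:
        type: string
      mon_subnet_id:
        type: string
      image:
        type: string
      flavor:
        type: string
    resources:
      mon_port:
        type: OS::Neutron::Port
        properties:
          network_id: { get_param: mon_net_id }
          fixed_ips:
            - subnet_id: { get_param: mon_subnet_id }
              ip_address: 10.0.0.11    # known in advance, so slaves never reconfigure
      mon_server:
        type: OS::Nova::Server
        properties:
          image: { get_param: image }
          flavor: { get_param: flavor }
          networks:
            - port: { get_resource: mon_port }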
> So: I think there's some things to learn from the kubernetes approach,
> which is why I'm trying to gain more experience with it. I know I'm learning
> more about the various OpenStack components along the way too ;)
Great. :)
Thanks for sharing your experiences with us.
Kevin
>
> --
> - Gus