[openstack-dev] [kolla] kolla, keystone, and endpoints (oh my!)

Angus Lees gus at inodes.org
Tue Oct 7 07:14:22 UTC 2014


On Mon, 6 Oct 2014 10:33:21 AM Lars Kellogg-Stedman wrote:
> Hello all,
> 
> I wanted to expand on a discussion we had in #tripleo on Friday
> regarding Kubernetes and the Keystone service catalog.
> 
> ## The problem
> 
> The essential problem is that Kubernetes and Keystone both provide
> service discovery mechanisms, and they are not aware of each other.
> 
> Kubernetes provides information about available services through
> Docker environment variables, whereas Keystone maintains a list of API
> endpoints in a database.

What you haven't stated here is whether the catalog endpoints should be 
reachable outside the kubernetes minions or not.

Perhaps we could even use this mysterious(*) keystone publicURL/internalURL 
division to publish different external and kubernetes-internal endpoints, since 
we can presumably always communicate more efficiently pod<->pod.


(*) Is the publicURL/internalURL/adminURL split documented anywhere?  Does 
anything other than keystone-manage use anything other than publicURL?
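
For concreteness, registering split endpoints would look roughly like this with 
the keystoneclient CLI (the service id and URLs below are only placeholders):

  keystone endpoint-create \
    --region RegionOne \
    --service-id $KEYSTONE_SERVICE_ID \
    --publicurl http://public.example.com:5000/v2.0 \
    --internalurl http://proxy:5000/v2.0 \
    --adminurl http://proxy:35357/v2.0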

> When you configure the keystone endpoints, it is tempting to simply
> use the service environment variables provided by Kubernetes, but this
> is problematic: the variables point at the host-local kube-proxy
> instance.  This is a problem right now because if that host were to
> go down, that endpoint would no longer be available.  That will be a
> problem in the near future because the proxy address will soon be
> pod-local, and thus inaccessible from other pods (even on the same
> host).
> 
> One could instrument container start scripts to replace endpoints in
> keystone when the container boots, but if a service has cached
> information from the catalog it may not update in a timely fashion.
> 
> ## The (?) solution
> 
> I spent some time this weekend experimenting with setting up a
> pod-local proxy that takes all the service information provided by
> Kubernetes and generates an haproxy configuration (and starts haproxy
> with that configuration):

This sounds good.

Some alternatives off the top of my head, just to feed the discussion:

1.  Fixed hostname

Add something like this to the start.sh wrapper script:
 echo "$SERVICE_HOST proxy" >> /etc/hosts
and then use http://proxy:$port/... etc. as the endpoint in the keystone catalog.
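
A slightly fuller sketch of that wrapper, assuming the pod only talks to one 
upstream service and that kubernetes has injected something like 
KEYSTONE_PUBLIC_SERVICE_HOST (names are illustrative):

  #!/bin/sh
  # map the kubernetes-provided service address to a fixed hostname
  echo "${KEYSTONE_PUBLIC_SERVICE_HOST} proxy" >> /etc/hosts
  # then hand off to the real service entrypoint
  exec "$@"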

2. Fixed IP addresses

Create a regular OpenStack loadbalancer and configure its (possibly publicly 
available) IP in the keystone catalog.

I _think_ this could even be a loadbalancer controlled by the neutron we just 
set up, assuming that the loadbalancer HA works without help and that the 
nova<->neutron "bootstrap" layer was set up using regular k8s service env vars 
and not the loadbalancer IPs.
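
Roughly, with the LBaaS v1 CLI (subnet id, member address and ports are made up):

  neutron lb-pool-create --name keystone-public --protocol HTTP \
      --lb-method ROUND_ROBIN --subnet-id $SUBNET_ID
  neutron lb-member-create --address 10.1.2.3 --protocol-port 5000 keystone-public
  neutron lb-vip-create --name keystone-public-vip --protocol HTTP \
      --protocol-port 5000 --subnet-id $SUBNET_ID keystone-public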

Variant a)

Create an "external proxy" kubernetes job that adds itself to a regular 
OpenStack loadbalancer and forwards traffic to the local kubernetes proxy (and 
thence to the real backend).  We can just walk the standard kubernetes 
environment variables to find configured k8s services.
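
Something like this inside that job, where the final "register" step stands in 
for whichever loadbalancer mechanism we pick:

  #!/bin/sh
  # walk the FOO_SERVICE_HOST / FOO_SERVICE_PORT pairs kubernetes injects
  env | sed -n 's/^\(.*\)_SERVICE_HOST=.*/\1/p' | while read svc; do
    host=$(printenv "${svc}_SERVICE_HOST")
    port=$(printenv "${svc}_SERVICE_PORT")
    echo "register ${svc} -> ${host}:${port}"   # placeholder for the real LB update
  done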

Variant b)

Have an out-of-band job that periodically finds the current location of 
suitable endpoints using a "list pods" query with label matching, and then 
(re)configures a loadbalancer with the host:port pairs of whatever it finds.
This avoids the "double proxy" inherent in (a) (and (c), as I understand it), 
at the cost of more frequent loadbalancer config churn.
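
Assuming kubecfg's label selector works the way I think it does, the poll is 
basically (the label name is made up):

  kubecfg -l name=keystone list pods   # find where the keystone pods live right now
  # ...then parse host/hostPort out of that and reconfigure the loadbalancer members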

Variant c)

Teach kubernetes how to configure OpenStack loadbalancers directly by adding a 
kubernetes/pkg/cloudprovider/openstack.  Then we add 
"CreateExternalLoadBalancer: true" to the appropriate service definitions and 
kubernetes basically does variant (a) itself.
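
i.e. a service definition roughly like this (call it keystone-public-service.json; 
field names per the v1beta1 schema as I read it, port and selector just 
illustrative):

  {
      "id": "keystone-public",
      "kind": "Service",
      "apiVersion": "v1beta1",
      "port": 5000,
      "selector": { "name": "keystone" },
      "createExternalLoadBalancer": true
  }

and then the usual:

  kubecfg -c keystone-public-service.json create services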


Fwiw, I think we should do (2.c) anyway, just to keep up with the other 
providers that already exist in kubernetes.  I haven't written any Go in a 
year or so, but I might even do this myself.


In case it needs to be said, I think we should watch discussions like 
https://github.com/GoogleCloudPlatform/kubernetes/issues/1161 and try to 
follow the "standard" kubernetes approaches as they emerge.

 - Gus

> - https://github.com/larsks/kolla/tree/larsks/hautoproxy/docker/hautoproxy
> 
> This greatly simplifies the configuration of openstack service
> containers: in all cases, the "remote address" of another service will
> be at http://127.0.0.1/, so you can simply configure that address into
> the keystone catalog.
> 
> It requires minimal configuration: you simply add the "hautoproxy"
> container to your pod.
> 
> This seems to do the right thing in all situations: if a pod is
> rescheduled on another host, the haproxy configuration will pick up
> the appropriate service environment variables for that host, and
> services inside the pod will continue to use 127.0.0.1 as the "remote"
> address.
> 
> If you use the .json files from
> https://github.com/larsks/kolla/tree/larsks/hautoproxy, you can see
> this in action.  Specifically, if you start the services for mariadb,
> keystone, and glance, and then start the corresponding pods, you will
> end up with functional keystone and glance services.
> 
> Here's a short script that will do just that:
> 
>     #!/bin/sh
> 
>     for x in glance/glance-registry-service.json \
>         glance/glance-api-service.json \
>         keystone/keystone-public-service.json \
>         keystone/keystone-admin-service.json \
>         mariadb/mariadb-service.json; do
>       kubecfg -c $x create services
>     done
> 
>     for x in mariadb/mariadb.json \
>         keystone/keystone.json \
>         glance/glance.json; do
>       kubecfg -c $x create pods
>     done
> 
> With this configuration running, you can kill the keystone pod and allow
> Kubernetes to reschedule it and glance will continue to operate correctly.
> 
> You cannot kill either the glance or mariadb pods because we do not
> yet have a solution for persistent storage.
> 
> I will be cleaning up these changes and submitting them for review...but
> probably not today due to an all-day meeting.

-- 
 - Gus


