[openstack-dev] [kolla] kolla, keystone, and endpoints (oh my!)

Lars Kellogg-Stedman lars at redhat.com
Mon Oct 6 14:33:21 UTC 2014


Hello all,

I wanted to expand on a discussion we had in #tripleo on Friday
regarding Kubernetes and the Keystone service catalog.

## The problem

The essential problem is that Kubernetes and Keystone both provide
service discovery mechanisms, and they are not aware of each other.

Kubernetes advertises available services as environment variables set
inside each container (in the style of Docker links), whereas
Keystone maintains a list of API endpoints in a database (the service
catalog).
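
For example, a service named "keystone-public" shows up inside each
container as something like this (illustrative only; the exact
variable names and values depend on your Kubernetes version):

    # Illustrative values only:
    KEYSTONE_PUBLIC_SERVICE_HOST=10.245.1.3  # the host-local kube-proxy
    KEYSTONE_PUBLIC_SERVICE_PORT=5000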

When you configure the keystone endpoints, it is tempting to simply
plug in the service environment variables provided by Kubernetes, but
this is problematic: the variables point at the host-local kube-proxy
instance.  That is a problem right now, because if that host goes
down the endpoint recorded in the catalog is no longer reachable.  It
will be a bigger problem in the near future, when the proxy address
becomes pod-local and thus inaccessible from other pods (even on the
same host).
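
Concretely, the tempting approach looks something like this (a sketch
only; the service-id lookup follows the usual install-guide idiom,
and glance is just an example):

    # Sketch of the tempting (but fragile) approach: register the
    # Kubernetes-provided address directly in the keystone catalog.
    # GLANCE_API_SERVICE_HOST points at the host-local kube-proxy, so
    # this endpoint breaks as soon as that proxy goes away.
    keystone endpoint-create \
        --service-id=$(keystone service-list | awk '/ image / {print $2}') \
        --publicurl="http://${GLANCE_API_SERVICE_HOST}:${GLANCE_API_SERVICE_PORT}" \
        --internalurl="http://${GLANCE_API_SERVICE_HOST}:${GLANCE_API_SERVICE_PORT}" \
        --adminurl="http://${GLANCE_API_SERVICE_HOST}:${GLANCE_API_SERVICE_PORT}"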

One could instrument the container start scripts to re-register the
endpoints in keystone each time a container boots, but if a service
has cached endpoint information from the catalog, it may not pick up
the change in a timely fashion.

## The (?) solution

I spent some time this weekend experimenting with setting up a
pod-local proxy that takes all the service information provided by
Kubernetes and generates an haproxy configuration (and starts haproxy
with that configuration):

- https://github.com/larsks/kolla/tree/larsks/hautoproxy/docker/hautoproxy
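
The idea, in sketch form (this is not the actual hautoproxy code,
just the shape of it; service discovery and error handling are
simplified):

    #!/bin/sh
    # Sketch: build an haproxy config with one 127.0.0.1 listener per
    # Kubernetes service, then exec haproxy on the result.
    CONF=/tmp/haproxy.cfg

    printf 'defaults\n    mode tcp\n    timeout connect 5s\n    timeout client 1m\n    timeout server 1m\n' > "$CONF"

    # One listen section per FOO_SERVICE_HOST/FOO_SERVICE_PORT pair,
    # forwarding 127.0.0.1:$port to wherever Kubernetes says FOO lives.
    for svc in $(env | sed -n 's/^\([A-Z0-9_]*\)_SERVICE_HOST=.*/\1/p'); do
        host=$(eval echo "\$${svc}_SERVICE_HOST")
        port=$(eval echo "\$${svc}_SERVICE_PORT")
        name=$(echo "$svc" | tr 'A-Z_' 'a-z-')
        {
            echo "listen $name"
            echo "    bind 127.0.0.1:$port"
            echo "    server $name $host:$port"
        } >> "$CONF"
    done

    exec haproxy -f "$CONF"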

This greatly simplifies the configuration of OpenStack service
containers: in all cases, the "remote address" of another service is
http://127.0.0.1/ (plus the appropriate port), so that is the address
you configure into the keystone catalog.
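
The glance registration from the earlier sketch then collapses to
(again illustrative; 9292 is the usual glance-api port):

    # With hautoproxy in every pod, endpoints are registered against
    # loopback; haproxy forwards to wherever the service lives now.
    keystone endpoint-create \
        --service-id=$(keystone service-list | awk '/ image / {print $2}') \
        --publicurl=http://127.0.0.1:9292 \
        --internalurl=http://127.0.0.1:9292 \
        --adminurl=http://127.0.0.1:9292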

It requires minimal configuration: you simply add the "hautoproxy"
container to your pod.

This seems to do the right thing in all situations: if a pod is
rescheduled on another host, the haproxy configuration will pick up
the appropriate service environment variables for that host, and
services inside the pod will continue to use 127.0.0.1 as the
"remote" address.

If you use the .json files from
https://github.com/larsks/kolla/tree/larsks/hautoproxy, you can see
this in action.  Specifically, if you start the services for mariadb,
keystone, and glance, and then start the corresponding pods, you will
end up with functional keystone and glance services.

Here's a short script that will do just that:

    #!/bin/sh

    # Create the service objects first, so that the pods started
    # below come up with the corresponding environment variables set.
    for x in glance/glance-registry-service.json \
        glance/glance-api-service.json \
        keystone/keystone-public-service.json \
        keystone/keystone-admin-service.json \
        mariadb/mariadb-service.json; do
      kubecfg -c "$x" create services
    done

    # Then launch the pods themselves.
    for x in mariadb/mariadb.json \
        keystone/keystone.json \
        glance/glance.json; do
      kubecfg -c "$x" create pods
    done

With this configuration running, you can kill the keystone pod, let
Kubernetes reschedule it, and glance will continue to operate
correctly.
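
For example (assuming the pod id in keystone.json is "keystone"):

    # Delete the keystone pod and recreate it; Kubernetes may place
    # it on a different minion.  Glance keeps talking to 127.0.0.1,
    # and the proxy chain routes to the pod's new location.
    kubecfg delete pods/keystone
    kubecfg -c keystone/keystone.json create pods
    glance image-list   # works again once the new pod is running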

You cannot do the same with the glance or mariadb pods, because we do
not yet have a solution for persistent storage: their local state
would be lost.

I will be cleaning up these changes and submitting them for review...but
probably not today due to an all-day meeting.

-- 
Lars Kellogg-Stedman <lars at redhat.com> | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack          | http://blog.oddbit.com/
