[openstack-dev] [magnum] Handling password for k8s

Ton Ngo ton at us.ibm.com
Sun Sep 20 18:08:23 UTC 2015



Hi everyone,
    I am running into a potential issue in implementing support for
load balancers in k8s services.  After a chat with sdake, I would like to
run this by the team for feedback/suggestions.
First let me give a little background for context.  In the current k8s
cluster, all k8s pods and services run within a private subnet (on Flannel);
they can access each other, but they cannot be accessed from the external
network.  The way to publish an endpoint to the external network is to
specify this attribute in the service manifest:
	type: LoadBalancer
   Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, and monitor.  The user would then associate the VIP with
a floating IP, and the endpoint of the service would become accessible from
the external network.
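   For anyone not familiar with the Neutron side, here is a rough sketch of
the calls involved, using python-neutronclient.  This is only illustrative:
the resource names and IDs are placeholders, and in practice these calls are
made by the k8s OpenStack provider (in Go), not by Magnum.

    # Rough sketch of the Neutron resources k8s creates for a
    # "type: LoadBalancer" service, plus the floating IP association the
    # user performs afterwards.  All IDs below are placeholders.
    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='demo',
                                    password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')

    # k8s creates the pool, members and VIP on the private subnet:
    pool = neutron.create_pool({'pool': {
        'name': 'k8s-service-pool',
        'protocol': 'TCP',
        'lb_method': 'ROUND_ROBIN',
        'subnet_id': 'PRIVATE_SUBNET_ID'}})['pool']

    neutron.create_member({'member': {
        'pool_id': pool['id'],
        'address': '10.0.0.5',       # a node address behind the service
        'protocol_port': 80}})

    vip = neutron.create_vip({'vip': {
        'name': 'k8s-service-vip',
        'protocol': 'TCP',
        'protocol_port': 80,
        'subnet_id': 'PRIVATE_SUBNET_ID',
        'pool_id': pool['id']}})['vip']

    # The user then associates a floating IP with the VIP port to make
    # the endpoint reachable from the external network:
    fip = neutron.create_floatingip({'floatingip': {
        'floating_network_id': 'EXTERNAL_NET_ID'}})['floatingip']
    neutron.update_floatingip(fip['id'],
                              {'floatingip': {'port_id': vip['port_id']}})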
   To talk to Neutron, k8s needs the user's credentials, and these are stored
in a config file on the master node.  This includes the username, tenant
name, and password.  When k8s starts up, it loads the config file and creates
an authenticated client with Keystone.
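   (k8s itself does this in Go, but conceptually the startup step amounts to
the following Python sketch with python-keystoneclient; the config keys shown
here are illustrative, not the exact ones k8s uses.)

    # Sketch: build an authenticated Keystone session from the credentials
    # in the config file (key names are illustrative).
    from keystoneclient.auth.identity import v2
    from keystoneclient import session

    cfg = {
        'username': 'demo',
        'tenant_name': 'demo',
        'password': 'secret',      # the value we would rather not persist
        'auth_url': 'http://keystone:5000/v2.0',
    }

    auth = v2.Password(auth_url=cfg['auth_url'],
                       username=cfg['username'],
                       password=cfg['password'],
                       tenant_name=cfg['tenant_name'])
    sess = session.Session(auth=auth)
    # 'sess' can then be handed to the Neutron client:
    #     neutron = neutron_client.Client(session=sess)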
    The issue we need to find a good solution for is how to handle the
password.  With the current security effort to make Magnum
production-ready, we want to make sure the password is handled properly.
    Ideally, the best solution is to pass the authenticated token to k8s to
use, but this will require a sizeable change upstream in k8s.  We have good
reason to pursue this, but it will take time.
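    (If we ever get there, the difference is small from the client side: the
bay would be handed a scoped token instead of the password.  A hedged sketch,
assuming the same keystoneclient session machinery as in the sketch above:)

    # Sketch of the token-based alternative: authenticate with a scoped
    # token instead of the password (hypothetical flow, not something k8s
    # supports today).
    from keystoneclient.auth.identity import v2
    from keystoneclient import session

    auth = v2.Token(auth_url='http://keystone:5000/v2.0',
                    token='SCOPED_TOKEN',
                    tenant_name='demo')
    sess = session.Session(auth=auth)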
    For now, my current implementation is as follows:
    - In a bay-create, the magnum client adds the password to the API call
      (normally it authenticates and sends only the token).
    - The conductor picks it up and uses it as an input parameter to the
      heat templates (see the sketch after this list).
    - When configuring the master node, the password is saved in the config
      file for the k8s services.
    - Magnum does not store the password internally.
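    To make the conductor step concrete, here is a rough sketch of what it
could look like with python-heatclient.  The parameter name, template path,
and surrounding details are made up for illustration; this is not the actual
Magnum code.

    # Rough sketch: forward the password to Heat as a stack parameter so
    # that the template can write it into the k8s config file on the
    # master node.  Names and paths here are placeholders.
    from keystoneclient.auth.identity import v2
    from keystoneclient import session
    from heatclient.client import Client as heat_client

    auth = v2.Password(auth_url='http://keystone:5000/v2.0',
                       username='demo', password='secret',
                       tenant_name='demo')
    sess = session.Session(auth=auth)
    heat = heat_client('1', session=sess)

    bay_template = open('kubecluster.yaml').read()   # placeholder template
    user_password = 'password-from-the-bay-create-call'

    heat.stacks.create(
        stack_name='k8s-bay',
        template=bay_template,
        parameters={
            'ssh_key_name': 'my-key',
            # forwarded so cloud-init can write it into the k8s config
            # file on the master; Magnum itself does not keep a copy:
            'user_password': user_password,
        })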

    This is probably not ideal, but it would let us proceed for now.  We
can deprecate it later when we have a better solution.  So, leaving aside
the issue of how k8s should be changed, the question is:  is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,
