<html><body><p>Hi Keystone team,<br> We have a scenario that involves securing services for containers, and it has<br>turned out to be rather difficult to solve, so we would like to bring it to the larger team for<br>ideas.<br> Examples of this scenario:<br>1. Kubernetes cluster:<br> To support the load balancer and persistent storage features for containers, Kubernetes<br>needs to interface with Neutron and Cinder. This requires a user credential to establish<br>a session and request OpenStack services. Currently this is done by requiring the<br>user to manually enter the credential in a Kubernetes config file and restart some<br>of the Kubernetes services.<br>2. Swarm cluster:<br> To support Swarm networking for containers, the Kuryr libnetwork agent needs to<br>interface with the Kuryr driver, so the agent needs a service credential to establish<br>a session with the driver running on some controllers.<br><br>The problem is in handling and storing these credentials on the user VMs in the cluster.<br><br> For #1, Magnum deploys the Kubernetes cluster but does not handle the<br>user credential, so the automation is incomplete and the user needs to perform<br>some manual steps. Even this is undesirable, since if the cluster is shared within<br>a tenant, the user credential can be exposed to other users. Tokens do not work<br>well either, since a token expires while the service is needed for the life of the cluster.<br> For #2, storing a Kuryr service credential on the user VM is a security exposure,<br>so we are still looking for a solution.<br><br> The Magnum and Kuryr teams have been discussing this topic for some time.<br>We would welcome any suggestions.<br><br>Ton Ngo,<br><BR>
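<p>P.S. For concreteness, the manual step in #1 amounts to writing the user's credential into the cloud-config file consumed by the Kubernetes OpenStack cloud provider, roughly like the sketch below. The exact keys vary by Kubernetes version, and all values here are placeholders; the point is that a long-lived plaintext password ends up on the cluster VM:</p>

```ini
; Illustrative cloud-config for the Kubernetes OpenStack cloud provider
; (placeholder values; passed via --cloud-provider=openstack --cloud-config=...)
[Global]
auth-url = http://keystone.example.com:5000/v2.0
username = some-user
password = some-password        ; plaintext user credential stored on the VM
tenant-name = some-tenant
region = RegionOne

[LoadBalancer]
subnet-id = SUBNET_UUID         ; subnet for Neutron LBaaS members
```

<p>After editing this file, the affected Kubernetes services (e.g. the controller manager) have to be restarted to pick up the credential, which is the manual, credential-exposing step we would like to eliminate.</p>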
</body></html>