[openstack-dev] [magnum] Handling password for k8s

Ton Ngo ton at us.ibm.com
Mon Sep 21 06:44:18 UTC 2015


Another option is for Magnum to do all the necessary setup and leave to
the user the final step of editing the config file with the password. The
load balancer feature would then be disabled by default, and we would
provide instructions for the user to enable it. This would circumvent the
issue with handling the password and would actually match the intended
usage in k8s.
Ton Ngo,



From:	Ton Ngo/Watson/IBM at IBMUS
To:	"OpenStack Development Mailing List (not for usage questions)"
            <openstack-dev at lists.openstack.org>
Date:	09/20/2015 09:57 PM
Subject:	Re: [openstack-dev] [magnum] Handling password for k8s



Hi Vikas,
It's correct that once the password is saved on the k8s master node, it
would have the same security as the nova instance. The issue is, as
Hongbin noted, that the password is exposed along the chain of interaction
between magnum and heat. Users in the same tenant can potentially see the
password of the user who created the cluster. The current k8s mode of
operation is k8s-centric, where the cluster is assumed to be managed
manually, so it is reasonable to configure it with one OpenStack user
credential. With Magnum managing the k8s cluster, we add another layer of
management, hence the complication.

Thanks Hongbin and Steve for the suggestion. If we don't see any
fundamental flaw, we can proceed with the initial sub-optimal
implementation and refine it later with the service domain implementation.

Ton Ngo,



From: Vikas Choudhary <choudharyvikas16 at gmail.com>
To: openstack-dev at lists.openstack.org
Date: 09/20/2015 09:02 PM
Subject: [openstack-dev] [magnum] Handling password for k8s



Hi Ton,
kube-masters will be nova instances, and because any access to nova
instances is already secured using Keystone, I am not able to understand
the concerns with storing the password on the master nodes.
Can you please list the concerns with our current approach?
-Vikas Choudhary
Hi everyone,
    I am running into a potential issue in implementing the support for
load balancer in k8s services. After a chat with sdake, I would like to
run this by the team for feedback/suggestions.
    First let me give a little background for context. In the current k8s
cluster, all k8s pods and services run within a private subnet (on
Flannel), and they can access each other but they cannot be accessed from
the external network.
    The way to publish an endpoint to the external network is by
specifying this attribute in your service manifest:
        type: LoadBalancer
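For example, a minimal Service manifest requesting a cloud load balancer might look like the following (the service name, selector, and ports are illustrative, not from the original message):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  # Ask the cloud provider (Neutron, in this case) for a load balancer
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80          # port exposed on the load balancer VIP
    targetPort: 8080  # port the pods actually listen on
```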
    Then k8s will talk to OpenStack Neutron to create the load balancer
pool, members, VIP, and monitor. The user would associate the VIP with a
floating IP, and then the endpoint of the service would be accessible from
the external internet.
    To talk to Neutron, k8s needs the user credential, and this is stored
in a config file on the master node. This includes the username, tenant
name, and password. When k8s starts up, it loads the config file and
creates an authenticated client with Keystone.
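As a sketch, the config file on the master node might contain something like the following (assuming the INI format used by the k8s OpenStack cloud provider of this era; exact key names may differ by k8s version, and all values here are placeholders):

```ini
[Global]
auth-url = http://<keystone-host>:5000/v2.0
username = <openstack-user>
password = <openstack-password>
tenant-name = <tenant>
region = RegionOne
```

The password sitting in plain text in this file is exactly the credential-handling problem discussed below.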
    The issue we need to find a good solution for is how to handle the
password. With the current effort on security to make Magnum
production-ready, we want to make sure to handle the password properly.
    Ideally, the best solution is to pass the authenticated token to k8s
to use, but this would require sizeable change upstream in k8s. We have
good reason to pursue this, but it will take time.
    For now, my current implementation is as follows:
    - In a bay-create, the magnum client adds the password to the API call
      (normally it authenticates and sends the token).
    - The conductor picks it up and uses it as an input parameter to the
      heat templates.
    - When configuring the master node, the password is saved in the
      config file for the k8s services.
    - Magnum does not store the password internally.
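The steps above can be sketched in a few lines of Python. This is purely illustrative pseudocode of the flow, not actual magnum client or conductor code; all function and parameter names (`bay_create`, `conductor_handle_create`, `user_password`, the client objects) are hypothetical:

```python
def bay_create(api_client, bay_spec, password):
    """Client side: include the user's password in the bay-create call
    (normally only the Keystone token is sent)."""
    request = dict(bay_spec)
    request["password"] = password  # sent once, over the authenticated API
    return api_client.post("/v1/bays", request)


def conductor_handle_create(request, heat_client):
    """Conductor side: pop the password out of the request and pass it
    straight through to Heat as a template parameter. Magnum never
    persists it; the only durable copy ends up in the k8s config file
    written on the master node."""
    password = request.pop("password")  # removed before anything is stored
    return heat_client.stack_create(
        template="kubecluster.yaml",
        parameters={"user_password": password, **request},
    )
```

The point of the sketch is the pass-through: the password transits the API call and the Heat parameters but is popped from the request before anything is persisted, which is also why it remains visible along the magnum-to-heat chain, as discussed in the replies above.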



    This is probably not ideal, but it would let us proceed for now. We
can deprecate it later when we have a better solution. So, leaving aside
the issue of how k8s should be changed, the question is: is this approach
reasonable for the time being, or is there a better approach?

Ton Ngo,
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

