Another option is for Magnum to do all the necessary setup and leave the final step, editing the config file with the password, to the user. The load balancer feature would then be disabled by default, and we would provide instructions for the user to enable it. This would circumvent the issue of handling the password and would actually match the intended usage in k8s.

Ton Ngo

From: Ton Ngo/Watson/IBM@IBMUS
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev@lists.openstack.org>
Date: 09/20/2015 09:57 PM
Subject: Re: [openstack-dev] [magnum] Handling password for k8s

Hi Vikas,
It's correct that once the password is saved on the k8s master node, it would have the same security as the nova instance. The issue, as Hongbin noted, is that the password is exposed along the chain of interaction between Magnum and Heat; users in the same tenant can potentially see the password of the user who created the cluster. The current k8s mode of operation is k8s-centric: the cluster is assumed to be managed manually, so it is reasonable to configure it with a single OpenStack user credential. With Magnum managing the k8s cluster, we add another layer of management, hence the complication.

Thanks Hongbin and Steve for the suggestion.
If we don't see any fundamental flaw, we can proceed with the initial sub-optimal implementation and refine it later with the service domain implementation.

Ton Ngo

From: Vikas Choudhary <choudharyvikas16@gmail.com>
To: openstack-dev@lists.openstack.org
Date: 09/20/2015 09:02 PM
Subject: [openstack-dev] [magnum] Handling password for k8s

Hi Ton,
kube-masters will be nova instances only, and because any access to nova instances is already secured through Keystone, I am not able to understand the concerns about storing the password on the master nodes. Can you please list the concerns with our current approach?
-Vikas Choudhary

Hi everyone,
I am running into a potential issue in implementing the support for load balancers in k8s services. After a chat with sdake, I would like to run this by the team for feedback/suggestions.
First, let me give a little background for context. In the current k8s cluster, all k8s pods and services run within a private subnet (on Flannel); they can access each other, but they cannot be accessed from the external network. The way to publish an endpoint to the external network is by specifying this attribute in your service manifest:

    type: LoadBalancer
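For illustration, a minimal service manifest using this attribute might look like the sketch below; the service name, selector, and ports are made up for the example:

    apiVersion: v1
    kind: Service
    metadata:
      name: web-frontend          # hypothetical service name
    spec:
      type: LoadBalancer          # ask k8s to provision an external load balancer
      selector:
        app: web-frontend         # pods backing the service
      ports:
      - port: 80                  # port exposed on the load balancer VIP
        targetPort: 8080          # port the pods listen on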
Then k8s will talk to OpenStack Neutron to create the load balancer pool, members, VIP, and monitor. The user would associate the VIP with a floating IP, and then the endpoint of the service would be accessible from the external internet.
To talk to Neutron, k8s needs the user credential, and this is stored in a config file on the master node. This includes the username, tenant name, and password. When k8s starts up, it will load the config file and create an authenticated client with Keystone.
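As a rough illustration of what ends up on the master node, that cloud-provider config file would look something like the sketch below; the file path, endpoint, and credential values are made up, and the exact option names depend on the version of the k8s OpenStack provider:

    # Example only -- the path and all values below are assumptions
    # e.g. /etc/kubernetes/cloud_config on the master node
    [Global]
    # Keystone endpoint and the OpenStack user that created the bay
    auth-url=http://keystone.example.com:5000/v2.0
    username=demo-user
    tenant-name=demo-tenant
    region=RegionOne
    # The credential this thread is about, stored in clear text on the node
    password=super-secret

Anyone who can read this file effectively holds that user's OpenStack credential, which is why handling the password carefully matters.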
The issue we need to find a good solution for is how to handle the password. With the current effort on security to make Magnum production-ready, we want to make sure we handle the password properly.
Ideally, the best solution is to pass the authenticated token to k8s to use, but this would require sizeable changes upstream in k8s. We have good reason to pursue this, but it will take time.
For now, my current implementation is as follows:
- In a bay-create, the magnum client adds the password to the API call (normally it authenticates and sends only the token).
- The conductor picks it up and uses it as an input parameter to the heat templates.
- When configuring the master node, the password is saved in the config file for the k8s services.
- Magnum does not store the password internally.
This is probably not ideal, but it would let us proceed for now; we can deprecate it later when we have a better solution. So leaving aside the issue of how k8s should be changed, the question is: is this approach reasonable for the time being, or is there a better approach?

Ton Ngo

__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev