[openstack-dev] [Magnum] TLS Support in Magnum

Fox, Kevin M Kevin.Fox at pnnl.gov
Wed Jun 17 15:47:01 UTC 2015


Do consider another use case, that of a private docker cluster...

I may want to use Magnum to deploy a Docker cluster on a private Neutron network as a mid/backend tier of a larger scalable cloud application. Floating IPs would not be used in this case, since the machines that need to talk to the Docker cluster would be on the same private Neutron network. So I'd rather use RFC 1918 space on the private network and ensure the public networks can never reach it.
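For illustration, a minimal openstacksdk sketch of that kind of setup (the cloud name, network names, and CIDR are examples, not anything Magnum creates today):

import openstack

# Assumptions: a clouds.yaml entry named "mycloud"; names and CIDR are examples.
conn = openstack.connect(cloud="mycloud")

# Private tenant network for the backend docker/bay tier.
net = conn.network.create_network(name="bay-backend-net")
subnet = conn.network.create_subnet(
    network_id=net.id,
    name="bay-backend-subnet",
    ip_version=4,
    cidr="10.10.0.0/24",   # RFC 1918 space, never exposed publicly
    enable_dhcp=True,
)

# Bay nodes would be booted with a port on this network only; no floating
# IPs are ever associated, so nothing on a public network can reach them.
print("backend network:", net.id, subnet.cidr)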

Thanks,
Kevin
________________________________
From: Adrian Otto [adrian.otto at rackspace.com]
Sent: Tuesday, June 16, 2015 10:12 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Clint,

Hi! It’s good to hear from you!

On Jun 16, 2015, at 8:58 PM, Clint Byrum <clint at fewbar.com> wrote:

I don't understand at all what you said there.

If my kubernetes minions are attached to a gateway which has a direct
route to Magnum, let's say they're at 192.0.2.{100,101,102}, and
Magnum is at 198.51.100.1, then as long as the minions' gateway knows
how to find 198.51.100.0/24, and Magnum's gateway knows how to route to
192.0.2.0/24, then you can have two-way communication and no floating
ips or NAT. This seems orthogonal to how external users find the minions.
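For illustration only (router names, the transit network, and next hops are hypothetical; this assumes the openstacksdk client), the equivalent in Neutron terms is one static route on each side, with no NAT or floating IPs at all:

import openstack

# Hypothetical layout: the minions' router and Magnum's router are both
# attached to a small transit network (203.0.113.0/24).  One extra route
# on each router gives two-way reachability between 192.0.2.0/24 and
# 198.51.100.0/24 without NAT or floating IPs.
conn = openstack.connect(cloud="mycloud")

minion_router = conn.network.find_router("minion-router")   # example name
magnum_router = conn.network.find_router("magnum-router")   # example name

conn.network.update_router(minion_router, routes=[
    {"destination": "198.51.100.0/24", "nexthop": "203.0.113.3"},  # Magnum router's transit address
])
conn.network.update_router(magnum_router, routes=[
    {"destination": "192.0.2.0/24", "nexthop": "203.0.113.2"},     # minion router's transit address
])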

That’s correct. Keep in mind that large clouds use layer 3 routing protocols to move packets around, especially for north/south traffic, where public IP addresses are typically used. Injecting new routes into the network fabric each time we create a bay might make network administrators reluctant to adopt Magnum. Pre-allocating large amounts of RFC 1918 address space to Magnum may also be impractical on networks that already use those addresses extensively. Steve’s suggestion of using routable addresses as floating IPs is one approach that leverages the cloud’s prevailing SDN to address this concern.

Let’s not get too far off topic on this thread. We are discussing the implementation of TLS as a mechanism of access control for API services that run on networks that are reachable by the public. We got a good suggestion to use an approach that can work regardless of network connectivity between the Magnum control plane and the Nova instances (Magnum Nodes) and the containers that run on them. I’d like to see if we could use cloud-init to get the keys into the bay nodes (docker hosts). That way we can avoid the requirement for end-to-end network connectivity between bay nodes and the Magnum control plane.
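As a rough sketch of what the cloud-init route could look like (not Magnum’s actual implementation; the certificate contents, file paths, and image/flavor/network IDs are placeholders):

import base64
import textwrap
import openstack

# Placeholder TLS material; in practice Magnum would generate and sign these.
SERVER_CERT = "-----BEGIN CERTIFICATE-----\nMIIB...placeholder...\n-----END CERTIFICATE-----\n"
SERVER_KEY = "-----BEGIN PRIVATE KEY-----\nMIIE...placeholder...\n-----END PRIVATE KEY-----\n"

# cloud-config that writes the TLS material onto the bay node at boot time,
# so no inbound connection from the Magnum control plane is ever needed.
cloud_config = (
    "#cloud-config\n"
    "write_files:\n"
    "  - path: /etc/docker/server.pem\n"
    "    permissions: \"0600\"\n"
    "    content: |\n"
    + textwrap.indent(SERVER_CERT, "      ")
    + "  - path: /etc/docker/server-key.pem\n"
      "    permissions: \"0600\"\n"
      "    content: |\n"
    + textwrap.indent(SERVER_KEY, "      ")
)

conn = openstack.connect(cloud="mycloud")
server = conn.compute.create_server(
    name="bay-node-0",                                   # example name
    image_id="<image-uuid>", flavor_id="<flavor-id>",    # placeholders
    networks=[{"uuid": "<private-net-uuid>"}],
    user_data=base64.b64encode(cloud_config.encode()).decode(),
)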

Thanks,

Adrian

Excerpts from Steven Dake (stdake)'s message of 2015-06-16 19:40:25 -0700:
Clint,

Answering Clint’s question: yes, there is a reason all nodes must expose a floating IP address.

In a Kubernetes cluster, each minion exposes the same port address space.  When an external service contacts a port on a minion's floating IP, the request is routed over the internal network to the correct container by a proxy mechanism.  The problem then is: how do you know which minion to connect to from your external service?  The answer is that you can connect to any of them.  Kubernetes has only one port address space, so it suffers from a single-namespace problem (which Magnum solves with Bays).
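Put another way (a hypothetical illustration, not Magnum or Kubernetes code; the addresses and port are made up), any minion's floating IP answers on the same service port:

import socket

# Example floating IPs for three minions and an example service port.
MINIONS = ["203.0.113.10", "203.0.113.11", "203.0.113.12"]
SERVICE_PORT = 30080  # the single cluster-wide port for this service

# Because the proxy listens on every minion and forwards to the right
# container over the internal network, connecting to any minion works.
for ip in MINIONS:
    with socket.create_connection((ip, SERVICE_PORT), timeout=5) as s:
        s.sendall(b"GET / HTTP/1.0\r\n\r\n")
        print(ip, "->", s.recv(64))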

Longer term it may make sense to put the minion external addresses on an RFC 1918 network and put a floating VIP with a load balancer in front of them.  Then there is no need for a floating address per node.  This work is blocked on Kubernetes implementing proper support for load balancing in OpenStack before we can even consider it.
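If and when that lands, the shape of it might look roughly like this openstacksdk sketch against an Octavia-style LBaaS API (all names, UUIDs, ports, and addresses are illustrative):

import openstack

conn = openstack.connect(cloud="mycloud")

# One load balancer whose VIP lives on the routable front subnet; only the
# VIP would need a floating IP, not every minion.
lb = conn.load_balancer.create_load_balancer(
    name="bay-lb", vip_subnet_id="<front-subnet-uuid>")
listener = conn.load_balancer.create_listener(
    name="bay-listener", load_balancer_id=lb.id,
    protocol="TCP", protocol_port=30080)           # example service node port
pool = conn.load_balancer.create_pool(
    name="bay-pool", listener_id=listener.id,
    protocol="TCP", lb_algorithm="ROUND_ROBIN")

# Minions stay on RFC 1918 addresses behind the VIP.
for addr in ["10.10.0.11", "10.10.0.12", "10.10.0.13"]:
    conn.load_balancer.create_member(
        pool, address=addr, protocol_port=30080,
        subnet_id="<backend-subnet-uuid>")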

Regards
-steve

From: Fox, Kevin M <Kevin.Fox at pnnl.gov>
Reply-To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Date: Tuesday, June 16, 2015 at 6:36 AM
To: "OpenStack Development Mailing List (not for usage questions)" <openstack-dev at lists.openstack.org>
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Out of the box, VMs can usually contact the controllers through the router's NAT, but not vice versa. So it's preferable for guest agents to make the connection rather than having the controller connect to the guest agents. No floating IPs, security group rules, or special networks are needed then.
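For example, a guest agent that only ever dials out might look like this rough sketch (the controller URL and task format are made up for illustration; this is not an existing agent API):

import json
import time
import urllib.request

# Hypothetical controller endpoint the agent polls for work.  Because the
# agent initiates the connection, the router's NAT is enough -- no floating
# IP or inbound security group rule is required on the instance.
CONTROLLER_URL = "https://controller.example.com:9511/v1/agent/tasks"

while True:
    req = urllib.request.Request(CONTROLLER_URL, headers={"Accept": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        tasks = json.load(resp)
    for task in tasks:
        print("would execute task:", task)   # placeholder for real work
    time.sleep(30)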

Thanks,
Kevin

________________________________
From: Clint Byrum
Sent: Monday, June 15, 2015 6:10:27 PM
To: openstack-dev
Subject: Re: [openstack-dev] [Magnum] TLS Support in Magnum

Excerpts from Fox, Kevin M's message of 2015-06-15 15:59:18 -0700:
No, I was confused by your statement:
"When we create a bay, we have an ssh keypair that we use to inject the ssh public key onto the nova instances we create."

It sounded like you were using that keypair to inject a public key. I just misunderstood.

It does raise the question, though: are you using SSH between the controller and the instance anywhere? If so, we will still run into issues when we go to test it at our site. Sahara does currently, and we're forced to put a floating IP on every instance. It's less than ideal...


Why not just give each instance a port on a network which can route
directly to the controller's network? Is there some reason you feel
"forced" to use a floating IP?


