[openstack-dev] [magnum] Difference between certs stored in keystone and certs stored in barbican

Adrian Otto adrian.otto at rackspace.com
Tue Sep 1 18:23:40 UTC 2015


John and Robert,

On Sep 1, 2015, at 10:03 AM, John Dennis <jdennis at redhat.com> wrote:

On 09/01/2015 10:57 AM, Clark, Robert Graham wrote:

The reason that is compelling is that you can have Barbican generate,
sign, and store a keypair without transmitting the private key over the
network to the client that originates the signing request. It can be
directly stored, and made available only to the clients that need access
to it.
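For concreteness, the server-side generation described above might look roughly like this with python-barbicanclient (the Keystone endpoint, credentials, and order parameters are placeholders; the exact client interface should be checked against the Barbican docs):

    # Ask Barbican to generate and store an RSA keypair server-side; the
    # private key stays in Barbican rather than crossing the wire to the
    # requesting client. Credentials and endpoint below are illustrative.
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from barbicanclient import client as barbican_client

    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='demo', password='secret',
                       project_name='demo',
                       user_domain_id='default', project_domain_id='default')
    barbican = barbican_client.Client(session=session.Session(auth=auth))

    order = barbican.orders.create_asymmetric(name='bay-tls-keypair',
                                              algorithm='rsa',
                                              bit_length=2048)
    order_ref = order.submit()
    # Once the order completes, the generated keypair is available as secrets
    # in the container referenced by the order.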

This is absolutely _not_ how PKI for TLS is supposed to work. Yes, Barbican
can create keypairs etc. because sometimes that’s useful, but in the
public-private PKI model that TLS expects this is completely wrong. Magnum
nodes should be creating their own private key and CSR and submitting them
to some CA for signing.
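A minimal sketch of that model, using the 'cryptography' library on the node (the subject name and file paths are illustrative):

    # The node generates its own private key and CSR; only the CSR ever
    # leaves the host, to be submitted to the CA for signing.
    from cryptography import x509
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                   backend=default_backend())
    csr = x509.CertificateSigningRequestBuilder().subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, u'kube-node-1.bay.example'),
    ])).sign(key, hashes.SHA256(), default_backend())

    # The private key stays local; the CSR is what gets sent off for signing.
    with open('node.key', 'wb') as f:
        f.write(key.private_bytes(serialization.Encoding.PEM,
                                  serialization.PrivateFormat.TraditionalOpenSSL,
                                  serialization.NoEncryption()))
    with open('node.csr', 'wb') as f:
        f.write(csr.public_bytes(serialization.Encoding.PEM))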

Now this gets messy because you probably don’t want to push keystone
credentials onto each node (that they would use to communicate with
Barbican).

Exactly. Using Keystone trust tokens was one option we discussed, but placing those on the bay nodes is problematic. They currently offer us the rough equivalent of a regular keystone token because not all OpenStack services check the scope of the token used to auth against the service, meaning that a trust token is effectively a bearer token for interacting with most of OpenStack. We would need to land patches in *every* OpenStack project in order to work around this gap. This is simply not practical in the time we have before the Liberty release, and is much more work than our contributors signed up for. Both our Magnum team and our Barbican team met about this together at our recent Midcycle meetup. We did talk about how to put additional trust support capabilities into Barbican to allow delegation and restricted use of Barbican by individual service accounts.
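For reference, the trust delegation we discussed looks roughly like this with python-keystoneclient's v3 trusts API (user names, project, and role below are placeholders; this is a sketch of the option, not something we are shipping):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    # Authenticate as the bay owner (the trustor).
    owner_auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                             username='bay-owner', password='secret',
                             project_name='demo',
                             user_domain_id='default',
                             project_domain_id='default')
    owner_sess = session.Session(auth=owner_auth)
    keystone = ks_client.Client(session=owner_sess)

    # Delegate a role to the Magnum service user via a trust.
    trust = keystone.trusts.create(trustor_user=owner_sess.get_user_id(),
                                   trustee_user='MAGNUM_SERVICE_USER_ID',
                                   project=owner_sess.get_project_id(),
                                   role_names=['Member'],
                                   impersonation=True)

    # A token obtained with trust_id=trust.id is what would have to land on
    # the bay nodes -- and, as noted above, most services treat it like any
    # other bearer token.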

The bottom line is that we need a functional TLS implementation in Magnum for Kubernetes and Docker Swarm to use now, and we can’t in good conscience claim that Magnum is suitable for production workloads until we address this. If we have to take some shortcuts to get this done, then that’s fine, as long as we commit to revisiting our design compromises and correcting them.

There is another ML thread that references a new Nova spec for “Instance Users”, which is still in the concept stage:

https://review.openstack.org/186617

We need something now, even if it’s not perfect. The first thing we must solve for is making sure the Kubernetes and Docker APIs are not left open on public networks with no authentication, allowing anyone to start workloads on your container clusters. We are going to crawl before we walk/run here. We plan to use a TLS certificate as an authentication mechanism, so that if you don’t have the correct certificate, you can’t communicate with the TLS-enabled API port.
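As an illustration of what that buys us, a client holding a certificate signed by the bay CA can reach the API, and one without it cannot (host, port, and file paths below are placeholders):

    # Talk to a TLS-enabled Kubernetes API endpoint using a client certificate
    # signed by the bay CA; without the cert the handshake/authorization fails.
    import requests

    API = 'https://kube-master.bay.example:6443/api/v1/nodes'
    CA_CERT = '/etc/kubernetes/ca.crt'
    CLIENT_CERT = ('/etc/kubernetes/client.crt', '/etc/kubernetes/client.key')

    resp = requests.get(API, cert=CLIENT_CERT, verify=CA_CERT)
    print(resp.status_code)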

This is what we are doing as a first step:

https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/magnum-as-a-ca,n,z

I’m a bit conflicted writing this next bit because I’m not particularly
familiar with the Kubernetes/Magnum architectures and also because I’m one
of the core developers for Anchor but here goes…

Have you considered using Anchor for this? It’s a pretty lightweight
ephemeral CA that is built to work well in small PKI communities (like a
Kubernetes cluster). You can configure multiple methods for authentication
and build pretty simple validation rules for deciding if a host should be
given a certificate. Anchor is built to provide short-lifetime
certificates where each node re-requests a certificate, typically every
12-24 hours. This has some really nice properties like “passive
revocation” (think revocation that actually works) and strong ways to
enforce issuing logic on a per-host basis.
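That passive-revocation pattern amounts to roughly this on each node (the Anchor endpoint path and the static user/secret fields below are assumptions for illustration, not a description of Anchor's actual API):

    # Each node periodically re-submits its CSR and swaps in the fresh
    # short-lived certificate. If the CA stops signing for this host, the old
    # certificate simply expires within hours -- revocation without CRLs/OCSP.
    import time
    import requests

    ANCHOR_URL = 'http://anchor.example:5016/v1/sign/default'  # assumed path
    CSR_PATH = '/etc/pki/node.csr'
    CERT_PATH = '/etc/pki/node.crt'
    RENEW_INTERVAL = 12 * 3600  # roughly every 12 hours

    while True:
        with open(CSR_PATH, 'rb') as csr:
            resp = requests.post(ANCHOR_URL,
                                 data={'user': 'node', 'secret': 'changeme',
                                       'encoding': 'pem'},
                                 files={'csr': csr})
        if resp.ok:
            with open(CERT_PATH, 'wb') as cert:
                cert.write(resp.content)
        time.sleep(RENEW_INTERVAL)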

Anchor or not, I’d like to talk to you more about how you’re attempting to
secure Magnum - I think it’s an extremely interesting project that I’d
like to help out with.

-Rob
(Security Project PTL / Anchor flunkie)

Let's not reinvent the wheel. I can't comment on what Magnum is doing but I do know the members of the Barbican project are PKI experts and understand CSRs, key escrow, revocation, etc. Some of the design work is being done by engineers who currently contribute to products in use by the Dept. of Defense, an agency that takes its PKI infrastructure very seriously. They also have been involved with Keystone. I work with these engineers on a regular basis.

The Barbican blueprint states:

Barbican supports full lifecycle management including provisioning, expiration, reporting, etc. A plugin system allows for multiple certificate authority support (including public and private CAs).

Perhaps Anchor would be a great candidate for a Barbican plugin.

That would be cool. I’m not sure that the use case for Anchor exactly fits into Barbican’s concept of a CA, but if there were a clean integration point there, I’d love to use it.
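To sketch what I mean by a clean integration point (the class and method names here are purely hypothetical, not Barbican's actual plugin interface):

    # Purely illustrative shape of an Anchor-backed certificate plugin for
    # Barbican: forward the CSR from a certificate order to Anchor and hand
    # back the short-lived PEM certificate. Names and fields are hypothetical.
    import requests

    class AnchorCertificatePlugin(object):

        def __init__(self,
                     anchor_url='http://anchor.example:5016/v1/sign/default'):
            self.anchor_url = anchor_url  # assumed Anchor signing endpoint

        def issue_certificate_request(self, order_id, order_meta):
            resp = requests.post(self.anchor_url,
                                 data={'user': order_meta.get('user', 'barbican'),
                                       'secret': order_meta.get('secret', ''),
                                       'encoding': 'pem'},
                                 files={'csr': order_meta['request_data']})
            resp.raise_for_status()
            return resp.text  # PEM-encoded certificate

        def supports(self, certificate_spec):
            # Anchor only signs CSRs into short-lived certificates.
            return 'request_data' in certificate_spec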

What I don't want to see is spinning our wheels, going backward, or inventing one-off solutions to a very demanding and complex problem space. There have been way too many one-off solutions in the past; we want to consolidate the expertise in one project that is designed by experts and fully vetted. This is the role of Barbican. Would you like to contribute to Barbican? I'm sure your skills would be a tremendous asset.

To be clear, Magnum has no interest in overlapping with Barbican. To the extent that we can iterate, remove the cryptography code from Magnum, and leverage Barbican instead, we will do that over time, because we don’t want a duplicated support burden for that codebase. We would applaud collaboration between Anchor and Barbican, as we found navigating the capabilities of each of these choices to be rather confusing.

Thanks,

Adrian
