[openstack-dev] [magnum] High Availability

Adrian Otto adrian.otto at rackspace.com
Thu Mar 17 23:55:19 UTC 2016


Hongbin,

On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin.lu at huawei.com> wrote:

Adrian,

I think we need a broader set of inputs on this matter, so I moved the discussion from the whiteboard back to here. Please check my replies inline.

I would like to get a clear problem statement written for this.
As I see it, the problem is that there is no safe place to put certificates in clouds that do not run Barbican.
It seems the solution is to make it easy to add Barbican such that it's included in the setup for Magnum.
No, the solution is to explore a non-Barbican solution to store certificates securely.

I am seeking more clarity about why a non-Barbican solution is desired. Why is there resistance to adopting both Magnum and Barbican together? I think the answer is that people think they can make Magnum work with really old clouds that were set up before Barbican was introduced. That expectation is simply not reasonable. If there were a way to easily add Barbican to older clouds, perhaps this reluctance would melt away.

Magnum should not be in the business of credential storage when there is an existing service focused on that need.

Is there an issue with running Barbican on older clouds?
Anyone can choose to use the builtin option with Magnum if they don't have Barbican.
A known limitation of that approach is that certificates are not replicated.
I guess the *builtin* option you referred to is simply placing the certificates on the local file system. A few of us had concerns about this approach (in particular, Tom Cammann gave a -2 on the review [1]) because it cannot scale beyond a single conductor. In the end we compromised and landed this option for testing/debugging only; in other words, it is not for production. As a result, Barbican becomes the only option for production, which is the root of the problem: it basically forces everyone to install Barbican in order to use Magnum.

[1] https://review.openstack.org/#/c/212395/
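
For reference, the builtin option boils down to configuration like this in magnum.conf (a sketch from memory -- please verify the option names against your Magnum release):

    [certificates]
    # 'local' writes certs to the conductor's own disk;
    # 'barbican' is the production-grade alternative.
    cert_manager_type = local
    storage_path = /var/lib/magnum/certificates/

Because storage_path is plain local disk, a second conductor on another host cannot see certificates written by the first -- hence the single-conductor limitation.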

It's probably a bad idea to replicate them.
That's what Barbican is for. --adrian_otto
Frankly, I am surprised that you disagreed here. Back in July 2015, we all agreed to have two phases of implementation, and the statement was made by you [2].

================================================================
#agreed Magnum will use Barbican for an initial implementation for certificate generation and secure storage/retrieval. We will commit to a second phase of development to eliminate the hard requirement on Barbican with an alternate implementation that provides the functional equivalent in Magnum, which may depend on libraries, but not Barbican.
================================================================

[2] http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html

The context there is important. Barbican was considered for two purposes: (1) CA signing capability, and (2) certificate storage. My willingness to implement an alternative was based on our need for a certificate generation and signing solution that actually worked, as Barbican did not work for that at the time. I have always viewed Barbican as a suitable solution for certificate storage, as that is what it was first designed for. Since then, we have implemented certificate generation and signing logic in a library that does not depend on Barbican, and we can use that safely in production. What we don’t have built in is what Barbican is best at: secure storage for our certificates that allows multi-conductor operation.
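
To illustrate what that library-based logic covers, the shape of it is roughly this (a minimal sketch using the 'cryptography' package, not our actual code):

    import datetime

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa
    from cryptography.x509.oid import NameOID

    # Generate a CA key and a self-signed CA certificate for a bay.
    ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u'bay-ca')])
    now = datetime.datetime.utcnow()
    ca_cert = (x509.CertificateBuilder()
               .subject_name(name)
               .issuer_name(name)
               .public_key(ca_key.public_key())
               .serial_number(x509.random_serial_number())
               .not_valid_before(now)
               .not_valid_after(now + datetime.timedelta(days=365))
               .sign(ca_key, hashes.SHA256()))

    ca_pem = ca_cert.public_bytes(serialization.Encoding.PEM)
    # Generating ca_key and ca_cert is the easy part; the open question
    # is where to store them so every conductor can reach them -- which
    # is exactly what Barbican provides.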

I am opposed to the idea that Magnum should re-implement Barbican for certificate storage just because operators are reluctant to adopt it. If we need to ship a Barbican instance along with each Magnum control plane, so be it, but I don’t see the value in re-inventing the wheel. I promised the OpenStack community that we were out to integrate with and enhance OpenStack, not to replace it.
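
The wheel we would be re-inventing is small from the client's point of view. Storing and retrieving a bay certificate through python-barbicanclient looks roughly like this (a sketch only; credentials and names are illustrative, and ca_pem is the PEM bytes from the earlier sketch):

    from barbicanclient import client
    from keystoneauth1 import identity, session

    # Illustrative credentials -- substitute your own deployment's.
    auth = identity.Password(auth_url='http://keystone:5000/v3',
                             username='magnum', password='secret',
                             project_name='service',
                             user_domain_name='Default',
                             project_domain_name='Default')
    barbican = client.Client(session=session.Session(auth=auth))

    # Any conductor can store a cert and share the returned reference.
    secret = barbican.secrets.create(name='bay-1-ca',
                                     payload=ca_pem.decode('utf-8'))
    secret_ref = secret.store()

    # Any other conductor can fetch it by that reference.
    cert = barbican.secrets.get(secret_ref).payload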

Now, with all that said, I do recognize that not all clouds are motivated to use all available security best practices. They may be operating in environments that they believe are already secure (because of a secure perimeter), and that it’s okay to run fundamentally insecure software within those environments. As misguided as this viewpoint may be, it’s common. My belief is that it’s best to offer the best practice by default, and only allow insecure operation when someone deliberately turns off fundamental security features.

That said, I also care about Magnum adoption as much as any of us, so I’d like us to think creatively about how to strike the right balance between re-implementing existing technology and making that technology easily accessible.

Thanks,

Adrian


Best regards,
Hongbin

-----Original Message-----
From: Adrian Otto [mailto:adrian.otto at rackspace.com]
Sent: March-17-16 4:32 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

I have trouble understanding that blueprint. I will put some remarks on the whiteboard. Duplicating Barbican sounds like a mistake to me.

--
Adrian

On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin.lu at huawei.com> wrote:

The problem of a missing Barbican alternative implementation has been raised several times by different people. IMO, this is a very serious issue that will hurt Magnum adoption. I created a blueprint for it [1] and set the PTL as approver. It will be picked up by a contributor once it is approved.

[1] https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store

Best regards,
Hongbin

-----Original Message-----
From: Ricardo Rocha [mailto:rocha.porto at gmail.com]
Sent: March-17-16 2:39 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability

Hi.

We're on the way. The API is using haproxy load balancing in the same way all OpenStack services do here - this part seems to work fine.
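
For the curious, the haproxy frontend is the standard pattern, something like this (sketch only; addresses are illustrative, 9511 is the magnum-api port):

    listen magnum_api
        bind 192.168.1.10:9511
        balance roundrobin
        option httpchk
        server magnum-api-1 192.168.1.11:9511 check
        server magnum-api-2 192.168.1.12:9511 check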

For the conductor we're stopped due to bay certificates - we don't currently have Barbican, so local was the only option. To get the certs accessible on all nodes we're considering two options (rough sketch of the first below):
- store bay certs in a shared filesystem, meaning a new set of credentials in the boxes (and a process to renew fs tokens)
- deploy Barbican (some bits of puppet missing that we're sorting out)
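
The first option would look something like this on each conductor node (hypothetical NFS export; the point is that the local cert manager's storage_path must resolve to the same data everywhere):

    # /etc/fstab on every conductor node
    nfs-server:/magnum-certs  /var/lib/magnum/certificates  nfs  defaults  0  0

    # magnum.conf stays on the 'local' cert manager, pointed at the mount
    [certificates]
    cert_manager_type = local
    storage_path = /var/lib/magnum/certificates/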

More news next week.

Cheers,
Ricardo

On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans) <danehans at cisco.com> wrote:
All,

Does anyone have experience deploying Magnum in a highly-available fashion?
If so, I'm interested in learning from your experience. My biggest
unknown is the Conductor service. Any insight you can provide is
greatly appreciated.

Regards,
Daneyon Hansen


