[openstack-dev] [magnum] High Availability
Fox, Kevin M
Kevin.Fox at pnnl.gov
Sat Mar 19 00:43:00 UTC 2016
Yeah, I get that. I've got some sizeable deployments too.
But in the case of using a library, you're scattering the security bits across the various services, which just pushes the burden of securing and patching them somewhere else. It's better than each project rolling its own security solution, for sure. But if you're deploying the system securely, I don't think it really is less of a burden. You trade figuring out how to deploy one extra service for having to pay careful attention to securing every other service. I'd argue it should be easier to deploy the centralized service than to do that across all the other services.
Thanks,
Kevin
________________________________________
From: Steven Dake (stdake) [stdake at cisco.com]
Sent: Friday, March 18, 2016 1:33 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [magnum] High Availability
On 3/18/16, 12:59 PM, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
>+1. We should be encouraging a common way of solving these issues across
>all the OpenStack projects, and security is a really important thing.
>Spreading it across lots of projects causes more bugs, and
>security-related bugs cause security incidents. No one wants those.
>
>I'd also like to know why, if an old cloud is willing to deploy a new
>Magnum, it's unreasonable to deploy a new Barbican at the same time.
>
>If it's a technical reason, let's fix the issue. If it's something else,
>let's discuss it. If it's just an operator not wanting to install two
>things instead of one, I think that's an understandable, but
>unreasonable, request.
Kevin,
I think the issue comes down to "how" the common way of solving this
problem should be approached. In Barbican's case, a daemon and database
are required. What I wanted early on with Magnum, when I was involved, was
a library approach.
Having maintained a deployment project for two years, I can tell you that
each time we add a new big tent project, it adds a bunch of footprint to
our workload. Operators typically don't even have a tidy deployment tool
like Kolla to work with. As an example, ceilometer has had containers
available in Kolla for 18 months, yet nobody has finished the job of
implementing ceilometer playbooks, even though ceilometer is a soft
dependency of heat for autoscaling.
Many Operators self-deploy so they understand how the system operates.
They lack the ~200 contributors Kolla has to maintain a deployment tool,
and as such, I really don't think the objection to deploying Y just to
get X, when Y could and should be a small-footprint library, is
unreasonable.
Regards,
-steve
>
>Thanks,
>Kevin
>________________________________________
>From: Douglas Mendizábal [douglas.mendizabal at rackspace.com]
>Sent: Friday, March 18, 2016 6:45 AM
>To: openstack-dev at lists.openstack.org
>Subject: Re: [openstack-dev] [magnum] High Availability
>
>Hongbin,
>
>I think Adrian makes some excellent points regarding the adoption of
>Barbican. As the PTL for Barbican, it's frustrating to constantly
>hear other projects say that securing their sensitive data is a
>requirement, but then turn around and say that deploying Barbican is a
>problem.
>
>I guess I'm having a hard time understanding the operator persona that
>is willing to deploy new services with security features but unwilling
>to also deploy the service that is meant to secure sensitive data across
>all of OpenStack.
>
>I understand one barrier to entry for Barbican is the high cost of
>Hardware Security Modules, which we recommend as the best option for the
>Storage and Crypto backends for Barbican. But there are also other
>options for securing Barbican using open source software like DogTag or
>SoftHSM.
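
For concreteness, pointing Barbican at SoftHSM rather than a real HSM is
mostly a configuration exercise. Roughly, and from memory of the Barbican
docs rather than a tested deployment (option names should be checked
against your release), barbican.conf ends up looking like:

    # Illustrative values throughout; the PKCS#11 plugin talks to
    # SoftHSM through the same interface it would use for a real HSM.
    [secretstore]
    enabled_secretstore_plugins = store_crypto

    [crypto]
    enabled_crypto_plugins = p11_crypto

    [p11_crypto_plugin]
    library_path = /usr/lib/softhsm/libsofthsm2.so
    login = <token PIN>
    mkek_label = my_mkek
    mkek_length = 32
    hmac_label = my_hmac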
>
>I also expect Barbican adoption to increase in the future, and I was
>hoping that Magnum would help drive that adoption. There are also other
>projects that are actively developing security features like Swift
>Encryption, and DNSSEC support in Designate. Eventually these features
>will also require Barbican, so I agree with Adrian that we as a
>community should be encouraging deployers to adopt the best security
>practices.
>
>Regarding the Keystone solution, I'd like to hear the Keystone team's
>feedback on that. It definitely sounds to me like you're trying to put
>a square peg in a round hole.
>
>- Doug
>
>On 3/17/16 8:45 PM, Hongbin Lu wrote:
>> Thanks Adrian,
>>
>>
>>
>> I think the Keystone approach will work. For others, please speak up if
>> it doesn't work for you.
>>
>>
>>
>> Best regards,
>>
>> Hongbin
>>
>>
>>
>> *From:*Adrian Otto [mailto:adrian.otto at rackspace.com]
>> *Sent:* March-17-16 9:28 PM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [magnum] High Availability
>>
>>
>>
>> Hongbin,
>>
>>
>>
>> I tweaked the blueprint in accordance with this approach, and approved
>> it for Newton:
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>>
>>
>> I think this is something we can all agree on as a middle ground. If
>> not, I'm open to revisiting the discussion.
>>
>>
>>
>> Thanks,
>>
>>
>>
>> Adrian
>>
>>
>>
>> On Mar 17, 2016, at 6:13 PM, Adrian Otto <adrian.otto at rackspace.com
>> <mailto:adrian.otto at rackspace.com>> wrote:
>>
>>
>>
>> Hongbin,
>>
>> One alternative we could discuss as an option for operators that
>> have a good reason not to use Barbican is to use Keystone.
>>
>> Keystone credentials store:
>>
>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
>>
>> The contents are stored in plain text in the Keystone DB, so we
>> would want to generate an encryption key per bay, encrypt the
>> certificate, and store it in Keystone. We would then use the same key
>> to decrypt the certificate upon reading it back. This might be an
>> acceptable middle ground for clouds that will not or cannot run
>> Barbican. This should work for any OpenStack cloud since Grizzly. The
>> total amount of code in Magnum would be small, as the API already
>> exists. We would need a library function to encrypt and decrypt the
>> data, and ideally a way to select among different encryption
>> algorithms, in case one is judged weak at some point in the future,
>> justifying the use of an alternate.
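
As a rough sketch of what that library function could look like --
hypothetical helper names, not Magnum code; Fernet stands in for the
pluggable algorithm, and the per-bay key would have to live in Magnum's
own database:

    # Sketch only: encrypt a bay certificate with a per-bay key and
    # park the result in the Keystone credentials store.
    from cryptography.fernet import Fernet
    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneclient.v3 import client as ks_client

    auth = v3.Password(auth_url='http://keystone:5000/v3',
                       username='magnum', password='secret',
                       project_name='service',
                       user_domain_id='default',
                       project_domain_id='default')
    keystone = ks_client.Client(session=session.Session(auth=auth))

    def store_bay_cert(user_id, bay_uuid, cert_pem):
        """Encrypt cert_pem (bytes) with a fresh per-bay key."""
        key = Fernet.generate_key()            # per-bay encryption key
        token = Fernet(key).encrypt(cert_pem)
        cred = keystone.credentials.create(
            user=user_id,
            type='magnum-cert-' + bay_uuid,    # hypothetical type tag
            blob=token.decode())
        return key, cred.id                    # key stays in Magnum's DB

    def load_bay_cert(cred_id, key):
        """Read the blob back and decrypt with the same per-bay key."""
        blob = keystone.credentials.get(cred_id).blob
        return Fernet(key).decrypt(blob.encode())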
>>
>> Adrian
>>
>>
>> On Mar 17, 2016, at 4:55 PM, Adrian Otto <adrian.otto at rackspace.com
>> <mailto:adrian.otto at rackspace.com>> wrote:
>>
>> Hongbin,
>>
>>
>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin.lu at huawei.com
>> <mailto:hongbin.lu at huawei.com>> wrote:
>>
>> Adrian,
>>
>> I think we need a broader set of inputs in this matter, so I moved
>> the discussion from whiteboard back to here. Please check my replies
>> inline.
>>
>>
>> I would like to get a clear problem statement written for this.
>> As I see it, the problem is that there is no safe place to put
>> certificates in clouds that do not run Barbican.
>> It seems the solution is to make it easy to add Barbican such that
>> it's included in the setup for Magnum.
>>
>> No, the solution is to explore a non-Barbican solution to store
>> certificates securely.
>>
>>
>> I am seeking more clarity about why a non-Barbican solution is
>> desired. Why is there resistance to adopting both Magnum and
>> Barbican together? I think the answer is that people think they can
>> make Magnum work with really old clouds that were set up before
>> Barbican was introduced. That expectation is simply not reasonable.
>> If there were a way to easily add Barbican to older clouds, perhaps
>> this reluctance would melt away.
>>
>>
>> Magnum should not be in the business of credential storage when
>> there is an existing service focused on that need.
>>
>> Is there an issue with running Barbican on older clouds?
>> Anyone can choose to use the builtin option with Magnum if they
>> don't have Barbican.
>> A known limitation of that approach is that certificates are not
>> replicated.
>>
>> I guess the *builtin* option you referred to is simply placing the
>> certificates on the local file system. A few of us had concerns about
>> this approach (in particular, Tom Cammann gave a -2 on the review [1])
>> because it cannot scale beyond a single conductor. Finally, we made
>> a compromise to land this option and use it for testing/debugging
>> only. In other words, this option is not for production. As a
>> result, Barbican becomes the only option for production, which is the
>> root of the problem. It basically forces everyone to install
>> Barbican in order to use Magnum.
>>
>> [1] https://review.openstack.org/#/c/212395/
>>
>>
>> It's probably a bad idea to replicate them.
>> That's what Barbican is for. --adrian_otto
>>
>> Frankly, I am surprised that you disagreed here. Back in July 2015,
>> we all agreed to have two phases of implementation, and the statement
>> was made by you [2].
>>
>> ================================================================
>> #agreed Magnum will use Barbican for an initial implementation for
>> certificate generation and secure storage/retrieval. We will commit
>> to a second phase of development to eliminating the hard requirement
>> on Barbican with an alternate implementation that implements the
>> functional equivalent implemented in Magnum, which may depend on
>> libraries, but not Barbican.
>> ================================================================
>>
>> [2]
>>
>>http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
>>
>>
>> The context there is important. Barbican was considered for two
>> purposes: (1) CA signing capability, and (2) certificate storage. My
>> willingness to implement an alternative was based on our need to get
>> a certificate generation and signing solution that actually worked,
>> as Barbican did not work for that at the time. I have always viewed
>> Barbican as a suitable solution for certificate storage, as that was
>> what it was first designed for. Since then, we have implemented
>> certificate generation and signing logic within a library that does
>> not depend on Barbican, and we can use that safely in production use
>> cases. What we don't have built in is what Barbican is best at:
>> secure storage for our certificates that will allow multi-conductor
>> operation.
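
For illustration, the generation/signing piece that no longer needs
Barbican boils down to something like the following -- a minimal sketch
using a recent version of the 'cryptography' library, not Magnum's actual
x509 module:

    # Sketch: a self-signed CA plus signing a client CSR, with no
    # external service involved. Lifetimes and key sizes are arbitrary.
    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    def make_ca(common_name):
        key = rsa.generate_private_key(public_exponent=65537,
                                       key_size=2048)
        name = x509.Name(
            [x509.NameAttribute(NameOID.COMMON_NAME, common_name)])
        now = datetime.datetime.utcnow()
        cert = (x509.CertificateBuilder()
                .subject_name(name).issuer_name(name)  # self-signed
                .public_key(key.public_key())
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=3650))
                .add_extension(
                    x509.BasicConstraints(ca=True, path_length=None),
                    critical=True)
                .sign(key, hashes.SHA256()))
        return key, cert

    def sign_csr(ca_key, ca_cert, csr):
        """Issue a client cert from a CSR, signed by the bay's CA."""
        now = datetime.datetime.utcnow()
        return (x509.CertificateBuilder()
                .subject_name(csr.subject)
                .issuer_name(ca_cert.subject)
                .public_key(csr.public_key())
                .serial_number(x509.random_serial_number())
                .not_valid_before(now)
                .not_valid_after(now + datetime.timedelta(days=365))
                .sign(ca_key, hashes.SHA256()))

    ca_key, ca_cert = make_ca('bay-ca')
    pem = ca_cert.public_bytes(serialization.Encoding.PEM)

What is missing is exactly the part Barbican is best at: a safe, shared
place for ca_key and the issued certificates so that more than one
conductor can reach them.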
>>
>> I am opposed to the idea that Magnum should re-implement Barbican
>> for certificate storage just because operators are reluctant to
>> adopt it. If we need to ship a Barbican instance along with each
>> Magnum control plane, so be it, but I don't see the value in
>> re-inventing the wheel. I promised the OpenStack community that we
>> were out to integrate with and enhance OpenStack, not to replace it.
>>
>> Now, with all that said, I do recognize that not all clouds are
>> motivated to use all available security best practices. They may be
>> operating in environments that they believe are already secure
>> (because of a secure perimeter), and that it's okay to run
>> fundamentally insecure software within those environments. As
>> misguided as this viewpoint may be, it's common. My belief is that
>> it's best to offer the best practice by default, and only allow
>> insecure operation when someone deliberately turns off fundamental
>> security features.
>>
>> With all this said, I also care about Magnum adoption as much as all
>> of us, so I'd like us to think creatively about how to strike the
>> right balance between re-implementing existing technology and
>> making that technology easily accessible.
>>
>> Thanks,
>>
>> Adrian
>>
>>
>>
>> Best regards,
>> Hongbin
>>
>> -----Original Message-----
>> From: Adrian Otto [mailto:adrian.otto at rackspace.com]
>> Sent: March-17-16 4:32 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] High Availability
>>
>> I have trouble understanding that blueprint. I will put some remarks
>> on the whiteboard. Duplicating Barbican sounds like a mistake to me.
>>
>> --
>> Adrian
>>
>>
>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin.lu at huawei.com
>> <mailto:hongbin.lu at huawei.com>> wrote:
>>
>> The problem of a missing Barbican alternative implementation has been
>> raised several times by different people. IMO, this is a very
>> serious issue that will hurt Magnum adoption. I created a blueprint
>> for that [1] and set the PTL as approver. It will be picked up by a
>> contributor once it is approved.
>>
>> [1]
>>
>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
>>
>> Best regards,
>> Hongbin
>>
>> -----Original Message-----
>> From: Ricardo Rocha [mailto:rocha.porto at gmail.com]
>> Sent: March-17-16 2:39 PM
>> To: OpenStack Development Mailing List (not for usage questions)
>> Subject: Re: [openstack-dev] [magnum] High Availability
>>
>> Hi.
>>
>> We're on the way. The API is using haproxy load balancing in the
>> same way all OpenStack services do here - this part seems to work
>> fine.
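
The haproxy side is the standard OpenStack API pattern. Roughly something
like the stanza below, with addresses and health-check settings made up
for illustration (Magnum's API listens on port 9511 by default):

    # Illustrative only; IPs and server names are placeholders.
    listen magnum_api
        bind 192.0.2.10:9511
        balance roundrobin
        option httpchk GET /
        server magnum-api-1 192.0.2.11:9511 check inter 2000 rise 2 fall 5
        server magnum-api-2 192.0.2.12:9511 check inter 2000 rise 2 fall 5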
>>
>> For the conductor, we're stopped due to bay certificates - we don't
>> currently have Barbican, so local was the only option. To get the
>> certs accessible on all nodes, we're considering two options:
>> - store bay certs in a shared filesystem, meaning a new set of
>> credentials on the boxes (and a process to renew fs tokens)
>> - deploy Barbican (some bits of puppet are missing, which we're
>> sorting out)
>>
>> More news next week.
>>
>> Cheers,
>> Ricardo
>>
>>
>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
>> <danehans at cisco.com <mailto:danehans at cisco.com>> wrote:
>> All,
>>
>> Does anyone have experience deploying Magnum in a highly-available
>> fashion?
>> If so, I'm interested in learning from your experience. My biggest
>> unknown is the Conductor service. Any insight you can provide is
>> greatly appreciated.
>>
>> Regards,
>> Daneyon Hansen
>>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev