[openstack-dev] [magnum] High Availability

Hongbin Lu hongbin.lu at huawei.com
Thu Apr 21 19:08:34 UTC 2016


Ricardo,

That is great! It is good to hear that Magnum works well on your side.

Best regards,
Hongbin

> -----Original Message-----
> From: Ricardo Rocha [mailto:rocha.porto at gmail.com]
> Sent: April-21-16 1:48 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [magnum] High Availability
> 
> Hi.
> 
> The thread is a month old, but I sent Daneyon a shorter version of
> this earlier, with some info on the things we dealt with to get Magnum
> deployed successfully. We wrapped it up in a post (there's a video
> linked there with some demos at the end):
> 
> http://openstack-in-production.blogspot.ch/2016/04/containers-and-cern-cloud.html
> 
> Hopefully the pointers to the relevant blueprints for some of the
> issues we found will be useful for others.
> 
> Cheers,
>   Ricardo
> 
> On Fri, Mar 18, 2016 at 3:42 PM, Ricardo Rocha <rocha.porto at gmail.com>
> wrote:
> > Hi.
> >
> > We're running a Magnum pilot service - which means it's being
> > maintained just like all other OpenStack services and running on the
> > production infrastructure, but only available to a subset of tenants
> > for a start.
> >
> > We're learning a lot in the process and will happily report on this
> > in the next couple of weeks.
> >
> > The quick summary is that it's looking good and stable, with a few
> > hiccups in the setup, which are handled by patches already under
> > review. The one we need the most is the trustee user (USER_TOKEN in
> > the bay heat params is preventing scaling after the token expires),
> > but with the review in good shape we look forward to trying it very
> > soon.
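> >
> > For context, creating such a trust with python-keystoneclient looks
> > roughly like this (a sketch only; the trustee user id and role name
> > are illustrative, not Magnum's actual implementation):
> >
> >     from keystoneauth1.identity import v3
> >     from keystoneauth1 import session
> >     from keystoneclient.v3 import client
> >
> >     # Authenticate as the bay owner (the trustor).
> >     auth = v3.Password(auth_url='http://keystone:5000/v3',
> >                        username='bay-owner', password='...',
> >                        project_name='mytenant',
> >                        user_domain_id='default',
> >                        project_domain_id='default')
> >     sess = session.Session(auth=auth)
> >     keystone = client.Client(session=sess)
> >
> >     # Delegate the owner's role to a long-lived trustee user, so the
> >     # bay no longer depends on the creator's token lifetime.
> >     trust = keystone.trusts.create(
> >         trustor_user=sess.get_user_id(),
> >         trustee_user=TRUSTEE_USER_ID,  # hypothetical per-bay trustee
> >         project=sess.get_project_id(),
> >         role_names=['Member'],
> >         impersonation=True)
> >     # trust.id is what gets handed to the bay instead of a token.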
> >
> > Regarding Barbican, we'll keep you posted; we're working on the
> > missing puppet bits.
> >
> > Ricardo
> >
> > On Fri, Mar 18, 2016 at 2:30 AM, Daneyon Hansen (danehans)
> > <danehans at cisco.com> wrote:
> >> Adrian/Hongbin,
> >>
> >> Thanks for taking the time to provide your input on this matter.
> After reviewing your feedback, my takeaway is that Magnum is not ready
> for production without implementing Barbican or some other future
> feature such as the Keystone option Adrian provided.
> >>
> >> All,
> >>
> >> Is anyone using Magnum in production? If so, I would appreciate your
> input.
> >>
> >> -Daneyon Hansen
> >>
> >>> On Mar 17, 2016, at 6:16 PM, Adrian Otto <adrian.otto at rackspace.com>
> wrote:
> >>>
> >>> Hongbin,
> >>>
> >>> One alternative we could discuss as an option, for operators that
> have a good reason not to use Barbican, is to use Keystone.
> >>>
> >>> Keystone credentials store:
> >>> http://specs.openstack.org/openstack/keystone-specs/api/v3/identity-api-v3.html#credentials-v3-credentials
> >>>
> >>> The contents are stored in plain text in the Keystone DB, so we
> >>> would want to generate an encryption key per bay, encrypt the
> >>> certificate, and store it in Keystone. We would then use the same
> >>> key to decrypt the certificate upon reading it back. This might be
> >>> an acceptable middle ground for clouds that will not or cannot run
> >>> Barbican. This should work for any OpenStack cloud since Grizzly.
> >>> The total amount of code in Magnum would be small, as the API
> >>> already exists. We would need a library function to encrypt and
> >>> decrypt the data, and ideally a way to select different encryption
> >>> algorithms, in case one is judged weak at some point in the future,
> >>> justifying the use of an alternative.
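> >>>
> >>> To make that concrete, such a library function could look roughly
> >>> like this (a sketch, assuming Fernet from the cryptography package
> >>> for the symmetric part; not an existing Magnum API):
> >>>
> >>>     from cryptography.fernet import Fernet
> >>>     from keystoneclient.v3 import client
> >>>
> >>>     keystone = client.Client(session=sess)  # authenticated session
> >>>
> >>>     # One symmetric key per bay; Magnum keeps the key, not the cert.
> >>>     key = Fernet.generate_key()
> >>>     encrypted = Fernet(key).encrypt(cert_pem)  # cert_pem: bytes
> >>>
> >>>     # Store the encrypted blob through the existing credentials API.
> >>>     cred = keystone.credentials.create(
> >>>         user=sess.get_user_id(),
> >>>         type='magnum-cert',  # free-form type tag
> >>>         blob=encrypted.decode(),
> >>>         project=sess.get_project_id())
> >>>
> >>>     # Reading it back: fetch the blob, decrypt with the same key.
> >>>     blob = keystone.credentials.get(cred.id).blob
> >>>     cert_pem = Fernet(key).decrypt(blob.encode())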
> >>>
> >>> Adrian
> >>>
> >>>> On Mar 17, 2016, at 4:55 PM, Adrian Otto
> <adrian.otto at rackspace.com> wrote:
> >>>>
> >>>> Hongbin,
> >>>>
> >>>>> On Mar 17, 2016, at 2:25 PM, Hongbin Lu <hongbin.lu at huawei.com>
> wrote:
> >>>>>
> >>>>> Adrian,
> >>>>>
> >>>>> I think we need a broader set of inputs on this matter, so I
> >>>>> moved the discussion from the whiteboard back to here. Please
> >>>>> check my replies inline.
> >>>>>
> >>>>>> I would like to get a clear problem statement written for this.
> >>>>>> As I see it, the problem is that there is no safe place to put
> certificates in clouds that do not run Barbican.
> >>>>>> It seems the solution is to make it easy to add Barbican such
> that it's included in the setup for Magnum.
> >>>>> No, the solution is to explore a non-Barbican solution to store
> >>>>> certificates securely.
> >>>>
> >>>> I am seeking more clarity about why a non-Barbican solution is
> desired. Why is there resistance to adopting both Magnum and Barbican
> together? I think the answer is that people think they can make Magnum
> work with really old clouds that were set up before Barbican was
> introduced. That expectation is simply not reasonable. If there were a
> way to easily add Barbican to older clouds, perhaps this reluctance
> would melt away.
> >>>>
> >>>>>> Magnum should not be in the business of credential storage when
> there is an existing service focused on that need.
> >>>>>>
> >>>>>> Is there an issue with running Barbican on older clouds?
> >>>>>> Anyone can choose to use the builtin option with Magnum if they
> >>>>>> don't have Barbican.
> >>>>>> A known limitation of that approach is that certificates are not
> replicated.
> >>>>> I guess the *builtin* option you referred to is simply placing
> >>>>> the certificates on the local file system. A few of us had
> >>>>> concerns about this approach (in particular, Tom Cammann gave a
> >>>>> -2 on the review [1]) because it cannot scale beyond a single
> >>>>> conductor. Finally, we made a compromise to land this option and
> >>>>> use it for testing/debugging only. In other words, this option is
> >>>>> not for production. As a result, Barbican becomes the only option
> >>>>> for production, which is the root of the problem. It basically
> >>>>> forces everyone to install Barbican in order to use Magnum.
> >>>>>
> >>>>> [1] https://review.openstack.org/#/c/212395/
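> >>>>>
> >>>>> For reference, the backend is selected in magnum.conf; a minimal
> >>>>> sketch, assuming the Mitaka-era option names (please verify them
> >>>>> against your release):
> >>>>>
> >>>>>     [certificates]
> >>>>>     # Production: secure storage, works with multiple conductors.
> >>>>>     cert_manager_type = barbican
> >>>>>
> >>>>>     # Testing/debugging only: certs on the conductor's local disk.
> >>>>>     # cert_manager_type = local
> >>>>>     # storage_path = /var/lib/magnum/certificates/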
> >>>>>
> >>>>>> It's probably a bad idea to replicate them.
> >>>>>> That's what Barbican is for. --adrian_otto
> >>>>> Frankly, I am surprised that you disagreed here. Back in July
> >>>>> 2015, we all agreed to have two phases of implementation, and the
> >>>>> statement was made by you [2].
> >>>>>
> >>>>> ================================================================
> >>>>> #agreed Magnum will use Barbican for an initial implementation
> for certificate generation and secure storage/retrieval.  We will
> commit to a second phase of development to eliminating the hard
> requirement on Barbican with an alternate implementation that
> implements the functional equivalent implemented in Magnum, which may
> depend on libraries, but not Barbican.
> >>>>> ================================================================
> >>>>>
> >>>>> [2]
> >>>>> http://lists.openstack.org/pipermail/openstack-dev/2015-July/069130.html
> >>>>
> >>>> The context there is important. Barbican was considered for two
> purposes: (1) CA signing capability, and (2) certificate storage. My
> willingness to implement an alternative was based on our need to get a
> certificate generation and signing solution that actually worked, as
> Barbican did not work for that at the time. I have always viewed
> Barbican as a suitable solution for certificate storage, as that was
> what it was first designed for. Since then, we have implemented
> certificate generation and signing logic within a library that does not
> depend on Barbican, and we can use that safely in production use cases.
> What we don’t have built in is what Barbican is best at: secure storage
> for our certificates that will allow multi-conductor operation.
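> >>>>
> >>>> As an illustration of the kind of logic that now lives in that
> >>>> library: generating a self-signed CA for a bay with the
> >>>> cryptography package (a standalone sketch, not Magnum's actual
> >>>> code; client certs are then signed with the same builder pattern):
> >>>>
> >>>>     import datetime
> >>>>     from cryptography import x509
> >>>>     from cryptography.hazmat.backends import default_backend
> >>>>     from cryptography.hazmat.primitives import hashes
> >>>>     from cryptography.hazmat.primitives.asymmetric import rsa
> >>>>     from cryptography.x509.oid import NameOID
> >>>>
> >>>>     ca_key = rsa.generate_private_key(public_exponent=65537,
> >>>>                                       key_size=2048,
> >>>>                                       backend=default_backend())
> >>>>     name = x509.Name(
> >>>>         [x509.NameAttribute(NameOID.COMMON_NAME, u'bay-ca')])
> >>>>     now = datetime.datetime.utcnow()
> >>>>     ca_cert = (x509.CertificateBuilder()
> >>>>                .subject_name(name)   # self-signed, so subject
> >>>>                .issuer_name(name)    # and issuer are the same
> >>>>                .public_key(ca_key.public_key())
> >>>>                .serial_number(1)  # use unique serials in practice
> >>>>                .not_valid_before(now)
> >>>>                .not_valid_after(now + datetime.timedelta(days=365))
> >>>>                .add_extension(x509.BasicConstraints(ca=True,
> >>>>                                                     path_length=0),
> >>>>                               critical=True)
> >>>>                .sign(ca_key, hashes.SHA256(), default_backend()))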
> >>>>
> >>>> I am opposed to the idea that Magnum should re-implement Barbican
> for certificate storage just because operators are reluctant to adopt
> it. If we need to ship a Barbican instance along with each Magnum
> control plane, so be it, but I don’t see the value in re-inventing the
> wheel. I promised the OpenStack community that we were out to integrate
> with and enhance OpenStack, not to replace it.
> >>>>
> >>>> Now, with all that said, I do recognize that not all clouds are
> motivated to use all available security best practices. They may be
> operating in environments that they believe are already secure (because
> of a secure perimeter), and that it’s okay to run fundamentally
> insecure software within those environments. As misguided as this
> viewpoint may be, it’s common. My belief is that it’s best to offer the
> best practice by default, and only allow insecure operation when
> someone deliberately turns off fundamental security features.
> >>>>
> >>>> With all this said, I also care about Magnum adoption as much as
> all of us, so I’d like us to think creatively about how to strike the
> right balance between re-implementing existing technology, and making
> that technology easily accessible.
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Adrian
> >>>>
> >>>>>
> >>>>> Best regards,
> >>>>> Hongbin
> >>>>>
> >>>>> -----Original Message-----
> >>>>> From: Adrian Otto [mailto:adrian.otto at rackspace.com]
> >>>>> Sent: March-17-16 4:32 PM
> >>>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>>> Subject: Re: [openstack-dev] [magnum] High Availability
> >>>>>
> >>>>> I have trouble understanding that blueprint. I will put some
> remarks on the whiteboard. Duplicating Barbican sounds like a mistake
> to me.
> >>>>>
> >>>>> --
> >>>>> Adrian
> >>>>>
> >>>>>> On Mar 17, 2016, at 12:01 PM, Hongbin Lu <hongbin.lu at huawei.com>
> wrote:
> >>>>>>
> >>>>>> The problem of a missing Barbican alternative implementation has
> >>>>>> been raised several times by different people. IMO, this is a
> >>>>>> very serious issue that will hurt Magnum adoption. I created a
> >>>>>> blueprint for it [1] and set the PTL as approver. It will be
> >>>>>> picked up by a contributor once it is approved.
> >>>>>>
> >>>>>> [1]
> >>>>>> https://blueprints.launchpad.net/magnum/+spec/barbican-alternative-store
> >>>>>>
> >>>>>> Best regards,
> >>>>>> Hongbin
> >>>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Ricardo Rocha [mailto:rocha.porto at gmail.com]
> >>>>>> Sent: March-17-16 2:39 PM
> >>>>>> To: OpenStack Development Mailing List (not for usage questions)
> >>>>>> Subject: Re: [openstack-dev] [magnum] High Availability
> >>>>>>
> >>>>>> Hi.
> >>>>>>
> >>>>>> We're on the way. The API is using haproxy load balancing in the
> >>>>>> same way all OpenStack services do here; this part seems to work
> >>>>>> fine.
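> >>>>>>
> >>>>>> A minimal sketch of the corresponding haproxy stanza (backend
> >>>>>> addresses illustrative; 9511 is the default magnum-api port):
> >>>>>>
> >>>>>>     listen magnum_api
> >>>>>>         bind 0.0.0.0:9511
> >>>>>>         balance roundrobin
> >>>>>>         server magnum01 192.168.0.11:9511 check
> >>>>>>         server magnum02 192.168.0.12:9511 check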
> >>>>>>
> >>>>>> For the conductor we're stopped due to bay certificates; we
> >>>>>> don't currently have Barbican, so local was the only option. To
> >>>>>> get them accessible on all nodes we're considering two options
> >>>>>> (see the sketch after the list):
> >>>>>> - store bay certs in a shared filesystem, meaning a new set of
> >>>>>>   credentials in the boxes (and a process to renew fs tokens)
> >>>>>> - deploy Barbican (some bits of puppet missing that we're
> >>>>>>   sorting out)
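> >>>>>>
> >>>>>> The sketch for the second option, with python-barbicanclient
> >>>>>> (names illustrative):
> >>>>>>
> >>>>>>     from barbicanclient import client
> >>>>>>
> >>>>>>     barbican = client.Client(session=sess)  # authenticated session
> >>>>>>
> >>>>>>     # Store the bay cert; Barbican returns a reference URL.
> >>>>>>     secret = barbican.secrets.create(name='bay-123-cert',
> >>>>>>                                      payload=cert_pem)
> >>>>>>     ref = secret.store()
> >>>>>>
> >>>>>>     # Any conductor can later fetch it by reference.
> >>>>>>     cert_pem = barbican.secrets.get(ref).payload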
> >>>>>>
> >>>>>> More news next week.
> >>>>>>
> >>>>>> Cheers,
> >>>>>> Ricardo
> >>>>>>
> >>>>>>> On Thu, Mar 17, 2016 at 6:46 PM, Daneyon Hansen (danehans)
> <danehans at cisco.com> wrote:
> >>>>>>> All,
> >>>>>>>
> >>>>>>> Does anyone have experience deploying Magnum in a highly-
> available fashion?
> >>>>>>> If so, I'm interested in learning from your experience. My
> >>>>>>> biggest unknown is the Conductor service. Any insight you can
> >>>>>>> provide is greatly appreciated.
> >>>>>>>
> >>>>>>> Regards,
> >>>>>>> Daneyon Hansen
> >>>>>>>
> 
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


More information about the OpenStack-dev mailing list