[openstack-dev] [TripleO] FreeIPA integration
Juan Antonio Osorio
jaosorior at gmail.com
Tue Apr 5 16:31:23 UTC 2016
I was planning to bring it up informally for TripleO. But it would be cool
to have a slot to talk about this.
BR
On 5 Apr 2016 18:51, "Fox, Kevin M" <Kevin.Fox at pnnl.gov> wrote:
> Yeah, and they just deprecated vendor data plugins too, which eliminates
> my other workaround. :/
>
> We really need to discuss this problem at the summit and get a viable path
> forward. It's just getting worse. :/
>
> Thanks,
> Kevin
> ------------------------------
> *From:* Juan Antonio Osorio [jaosorior at gmail.com]
> *Sent:* Tuesday, April 05, 2016 5:16 AM
> *To:* OpenStack Development Mailing List (not for usage questions)
> *Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration
>
>
>
> On Tue, Apr 5, 2016 at 2:45 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
>
>> This sounds suspiciously like, "how do you get a secret to the instance
>> to get a secret from the secret store" issue.... :)
>>
> Yeah, sounds pretty familiar. We were using the nova hooks mechanism for
> this purpose, but it was deprecated recently. Bummer :/
>
>>
>> Nova instance user spec again?
>>
>> Thanks,
>> Kevin
>>
>> ------------------------------
>> *From:* Juan Antonio Osorio
>> *Sent:* Tuesday, April 05, 2016 4:07:06 AM
>> *To:* OpenStack Development Mailing List (not for usage questions)
>> *Subject:* Re: [openstack-dev] [TripleO] FreeIPA integration
>>
>>
>>
>> On Tue, Apr 5, 2016 at 11:36 AM, Steven Hardy <shardy at redhat.com> wrote:
>>
>>> On Sat, Apr 02, 2016 at 05:28:57PM -0400, Adam Young wrote:
>>> > I finally have enough understanding of what is going on with TripleO to
>>> > reasonably discuss how to implement solutions for some of the main
>>> > security needs of a deployment.
>>> >
>>> >
>>> > FreeIPA is an identity management solution that can provide support for:
>>> >
>>> > 1. TLS on all network communications:
>>> >    A. HTTPS for web services
>>> >    B. TLS for the message bus
>>> >    C. TLS for communication with the database
>>> > 2. Identity for all actors in the system:
>>> >    A. API services
>>> >    B. Message producers and consumers
>>> >    C. Database consumers
>>> >    D. Keystone service users
>>> > 3. Secure DNS (DNSSEC)
>>> > 4. Federation support
>>> > 5. SSH access control to hosts for both undercloud and overcloud
>>> > 6. SUDO management
>>> > 7. Single sign-on for applications running in the overcloud
>>> >
>>> >
>>> > The main pieces of FreeIPA are
>>> > 1. LDAP (the 389 Directory Server)
>>> > 2. Kerberos
>>> > 3. DNS (BIND)
>>> > 4. Certificate Authority (CA) server (Dogtag)
>>> > 5. WebUI/Web Service Management Interface (HTTPD)
>>> >
>>> > Of these, the CA is the most critical. Without a centralized CA, we have
>>> > no reasonable way to do certificate management.
>>> >
>>> > Now, I know a lot of people have an allergic reaction to some, maybe all,
>>> > of these technologies. They should not be required in a development or
>>> > testbed setup. But we need to make it possible to secure an end
>>> > deployment, and FreeIPA was designed explicitly for these kinds of
>>> > distributed applications. Here is what I would like to implement.
>>> >
>>> > Assuming that the undercloud is installed on a physical machine, we want
>>> > to treat the FreeIPA server as a managed service of the undercloud that
>>> > is then consumed by the rest of the overcloud. Right now, there are port
>>> > conflicts (8080 is used by both Swift and Dogtag) that prevent a drop-in
>>> > run of the server on the undercloud controller. Even if we could
>>> > deconflict those, there is a possible battle between Keystone and the
>>> > FreeIPA server on the undercloud. So, while I would like to see the
>>> > ability to run the FreeIPA server on the undercloud machine eventually,
>>> > I think a more realistic deployment is to build a separate virtual
>>> > machine, parallel to the overcloud controller, and install FreeIPA
>>> > there. I've been able to modify TripleO Quickstart to provision this VM.
>>>
>>> IMO these services shouldn't be deployed on the undercloud - we only
>>> support a single-node undercloud, and atm it's completely possible to take
>>> the undercloud down without any impact on your deployed cloud (other than
>>> temporarily losing the ability to manage it).
>>>
>> This is fair enough; however, for CI purposes, would it be acceptable to
>> deploy it there? Or where would you recommend we have it?
>>
>>>
>>> These auth pieces all appear critical to the operation of the deployed
>>> cloud, thus I'd assume you really want them independently managed
>>> (probably in an HA configuration on multiple nodes)?
>>>
>>> So, I'd say we support one of:
>>>
>>> 1. Document that FreeIPA must exist, installed by existing non-TripleO
>>>    tooling.
>>>
>>> 2. Support a heat template (in addition to overcloud.yaml) that can deploy
>>>    FreeIPA.
>>>
>>> I feel like we should do (1), as it fits better with the TripleO vision
>>> (which is to deploy OpenStack), and it removes the need for us to maintain
>>> a bunch of non-OpenStack stuff.
>>>
>>> The path I'm imagining is we have a documented integration with FreeIPA,
>>> and perhaps some third-party CI, but we don't support deploying these
>>> pieces directly via TripleO.
>>
>>
>>> > I was also able to run FreeIPA in a container on the undercloud machine,
>>> > but this is, I think, not how we want to migrate to a container-based
>>> > strategy. It should be more deliberate.
>>> >
>>> >
>>> > While the ideal setup would be to install the IPA layer first and create
>>> > service users in there, this produces a different install path between
>>> > with-FreeIPA and without-FreeIPA. Thus, I suspect the right approach is
>>> > to run the overcloud deploy, then "harden" the deployment with the
>>> > FreeIPA steps.
>>>
>>> I think we should require the IPA layer to be installed first - I mean,
>>> isn't it likely in many (most?) production environments that these
>>> services already exist?
>>>
>>> This simplifies things, because then you just pass inputs from the
>>> existing, proven IPA environment in as a TripleO/heat environment file -
>>> the same model we already support for all kinds of vendor integration,
>>> SSL, etc.
>>>
>>> > The IdM team did just this last summer in preparation for the Tokyo
>>> > summit, using Ansible and Packstack. The Rippowam project
>>> > (https://github.com/admiyo/rippowam) was able to fully lock down a
>>> > Packstack-based install. I'd like to reuse as much of Rippowam as
>>> > possible, but called from Heat templates as part of an overcloud deploy.
>>> > I do not really want to reimplement Rippowam in Puppet.
>>> >
>>> > So, the big question: is Heat->Ansible (instead of Puppet) an acceptable
>>> > path for an overcloud deployment? We are talking Ansible 1.0 playbooks,
>>> > which should be relatively straightforward to port to 2.0 when the time
>>> > comes.
>>>
>>> In short, no. I don't see how you can do the hardening with ansible,
>>> unless you're proposing to reimplement the entire overcloud deployment in
>>> the same tool.
>>>
>>> The data required to configure the OpenStack services should be passed in
>>> via an environment file, e.g.:
>>>
>>>   openstack overcloud deploy --templates -e ipa-params.yaml
>>>
>>> Then all the data from ipa-params.yaml should be mapped into hieradata,
>>> which puppet then uses to configure the OpenStack services appropriately -
>>> this is the same model we support for integration with everything atm.
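>>>
>>> For illustration, such an environment file might look something like this
>>> (the parameter names are purely illustrative, not actual TripleO
>>> parameters - the real interface would be whatever we define for the
>>> integration):
>>>
>>>   cat > ipa-params.yaml <<'EOF'
>>>   # Hypothetical parameters describing a pre-existing FreeIPA deployment;
>>>   # TripleO would map these into hieradata for puppet to consume.
>>>   parameter_defaults:
>>>     IpaServer: ipa.example.com
>>>     IpaDomain: example.com
>>>     IpaRealm: EXAMPLE.COM
>>>     IpaCaCert: /etc/ipa/ca.crt
>>>   EOF
>>>   openstack overcloud deploy --templates -e ipa-params.yaml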
>>>
>>> While it's technically possible to configure an overcloud, then reconfigure
>>> it with some other tool (or even manually), you get the worst of all worlds
>>> doing this - you modify things out-of-band (from a TripleO perspective), so
>>> all your changes get destroyed on every overcloud update, and you run the
>>> risk of "config management split brain", where e.g. puppet disables a
>>> service and then Ansible starts it, or whatever.
>>>
>>> > Thus, the sequence would be:
>>> >
>>> > 1. Run the existing overcloud deploy steps.
>>> > 2. Install the IPA server on the allocated VM.
>>> > 3. Register the compute nodes and the controller as IPA clients.
>>> > 4. Convert service users over to LDAP-backed services, complete with the
>>> >    necessary Kerberos steps to do password-less authentication.
>>> > 5. Register all web services with IPA and allocate X509 certificates for
>>> >    HTTPS.
>>> > 6. Set up host-based access control (HBAC) rules for SSH access to
>>> >    overcloud machines.
>>>
>>> This should be:
>>>
>>> 1. Install and validate IPA server $somewhere
>>> 2. Generate environment file with parameters (this could be automated)
>>> 3. Install overcloud passing in environment file with IPA $stuff
>>> 4. Done
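>>>
>>> Roughly, something like this (a sketch only - the ipa-server-install
>>> options shown are just the common ones, and ipa-params.yaml is the
>>> illustrative environment file from above):
>>>
>>>   # 1. Install and validate the IPA server on its own node (not the
>>>   #    undercloud)
>>>   ipa-server-install --realm EXAMPLE.COM --domain example.com \
>>>       --ds-password "$DM_PASSWORD" --admin-password "$ADMIN_PASSWORD" \
>>>       --unattended
>>>
>>>   # 2./3. Generate the environment file describing that deployment, then
>>>   #       pass it in to the overcloud deploy
>>>   openstack overcloud deploy --templates -e ipa-params.yaml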
>>>
>> This is an acceptable solution IMO, and is in line with what I was
>> thinking. It will also be easier to set up the overcloud with
>> FreeIPA-specific configuration once we have the composable services work
>> done.
>>
>> To me, the biggest question is what the value of that $somewhere is. For
>> testing purposes it seemed acceptable to have FreeIPA running on the same
>> node as the undercloud, while in production it would be a separate node
>> (or set of nodes).
>>
>>>
>>> Basically *anything* touching the overcloud configuration should happen via
>>> puppet in our current architecture, which I think means (4) and (5).
>>>
>>> I'm less clear about (3) - this sounds like an IPA admin action; can it be
>>> done before deploying the overcloud, or do we need each node to register
>>> itself?
>>>
>> The nodes need the IPA client installed, which pretty much involves
>> enrollment, and for that they need to have some type of credentials. This
>> is the main thing Rob is working towards: a safe method to pass
>> credentials to the nodes and have them auto-register with FreeIPA.
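>>
>> For illustration, the flow per node would be something along these lines
>> (a sketch, not the actual mechanism Rob is building - in particular, how
>> the OTP gets delivered to the node securely is exactly the open problem):
>>
>>   # On the IPA server: pre-create the host entry and get a one-time
>>   # password (OTP) for it
>>   ipa host-add overcloud-controller-0.example.com --random
>>
>>   # On the overcloud node, using the OTP obtained above: enroll the host
>>   ipa-client-install --hostname overcloud-controller-0.example.com \
>>       --domain example.com --password "$OTP" --unattended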
>>
>>>
>>> Similarly, I'm not sure about (6); we probably need more info about what
>>> that entails.
>>>
>>> > When we did the Rippowam demo, we used the Proton driver and Kerberos for
>>> > securing the message broker. Since Rabbit seems to be the tool of choice,
>>> > we would use X509 authentication and TLS for encryption. ACLs, for now,
>>> > would stay in the flat file format. In the future, we might choose to use
>>> > the LDAP-backed ACLs for Rabbit, as they seem far more flexible. Rabbit
>>> > does not currently support Kerberos for either authentication or
>>> > encryption, but we can engage the upstream team to implement it if
>>> > desired in the future, or we can shift to a Proton-based deployment if
>>> > Kerberos is essential for a deployment.
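>>> >
>>> > As a rough sketch of what that would mean for Rabbit (paths and values
>>> > are illustrative only, and the exact options would need checking against
>>> > the Rabbit docs):
>>> >
>>> >   # Enable the plugin that maps the client certificate's DN to a Rabbit
>>> >   # username (SASL EXTERNAL), then turn on TLS with client-cert
>>> >   # verification
>>> >   rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
>>> >
>>> >   cat > /etc/rabbitmq/rabbitmq.config <<'EOF'
>>> >   [
>>> >    {rabbit, [
>>> >      {ssl_listeners, [5671]},
>>> >      {ssl_options, [{cacertfile, "/etc/ipa/ca.crt"},
>>> >                     {certfile,   "/etc/pki/tls/certs/rabbit.crt"},
>>> >                     {keyfile,    "/etc/pki/tls/private/rabbit.key"},
>>> >                     {verify, verify_peer},
>>> >                     {fail_if_no_peer_cert, true}]},
>>> >      {auth_mechanisms, ['EXTERNAL']}
>>> >    ]}
>>> >   ].
>>> >   EOF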
>>> >
>>> >
>>> > There are a couple of ongoing efforts that will tie in with this:
>>> >
>>> > 1. Designate should be able to use the DNS from FreeIPA. That was the
>>> >    original implementation.
>>> >
>>> > 2. Juan Antonio Osorio has been working on TLS everywhere. The issue
>>> >    thus far has been certificate management. This provides a Dogtag
>>> >    server for certs.
>>> >
>>> > 3. Rob Crittenden has been working on auto-registration of virtual
>>> >    machines with an identity provider upon launch. This gives that
>>> >    effort an IdM to use.
>>>
>>> Aha, this may be the answer to (3) above? E.g. the discussion around nova
>>> hooks?
>>>
>>> > 4. Keystone can make use of the identity store for administrative users
>>> >    in their own domain.
>>> >
>>> > 5. Many of the compliance audits have complained about cleartext
>>> >    passwords in config files. This removes most of them. MySQL supports
>>> >    X509-based authentication today, and there is Kerberos support in the
>>> >    works, which should remove the last remaining cleartext passwords.
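>>> >
>>> >    As a hypothetical illustration of the MySQL side (account names made
>>> >    up), a service's DB account can be restricted to client-certificate
>>> >    logins, e.g.:
>>> >
>>> >      mysql -e "GRANT ALL ON keystone.* TO 'keystone'@'%' REQUIRE X509;"
>>> >
>>> >    Combined with an empty password on that account, the cleartext
>>> >    password disappears from the service's config file.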
>>> >
>>> > I mentioned centralized SUDO and HBAC. These are both tools that may be
>>> > used by administrators on the install if so desired. I would recommend
>>> > that they be used, but there is no requirement to do so.
>>>
>>> Overall this sounds like a bunch of functionality we want, but I think the
>>> integration requires more discussion (possibly at summit?) - my main
>>> concern is that we integrate in a manner appropriate to our existing
>>> implementation, and that we don't inadvertently make the undercloud a
>>> mission-critical component when it currently is not.
>>
>>
>>> Thanks,
>>>
>>> Steve
>>>
>>>
>>
>>
>>
>> --
>> Juan Antonio Osorio R.
>> e-mail: jaosorior at gmail.com
>>
>>
>>
>
>
> --
> Juan Antonio Osorio R.
> e-mail: jaosorior at gmail.com
>
>