[openstack-dev] [keystone][congress][group-policy] Fetching policy from a remote source
Adam Young
ayoung at redhat.com
Mon Mar 16 21:59:17 UTC 2015
On 03/16/2015 03:24 PM, Doug Hellmann wrote:
> Excerpts from Adam Young's message of 2015-03-16 14:17:16 -0400:
>> On 03/16/2015 01:45 PM, Doug Hellmann wrote:
>>> All of these are reasons we have so far resisted building a service to
>>> deploy updates to oslo.config's input files, relying instead on
>>> provisioning tools to update them.
>>>
>>> Have we considered using normal provisioning tools for pushing out
>>> changes to policy files, and having the policy library look at the
>>> timestamp of the file(s) to decide whether it needs to re-read them
>>> before evaluating a rule? Maybe we wouldn't always scan the file
>>> system, but wait for some sort of signal that the scan needs to be
>>> done.
I like this last idea. The trigger needs to be app specific, I think.
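Whatever the trigger is, the reload itself would look about the same. A
rough sketch (the class and method names are made up, not an existing
oslo.policy API):

    import json
    import os

    class PolicyEnforcer(object):
        """Sketch: re-read policy.json only when its mtime changes."""

        def __init__(self, policy_path):
            self.policy_path = policy_path
            self._mtime = None
            self._rules = {}

        def _reload_if_changed(self):
            # Cheap stat() guard: only parse the file when the
            # timestamp has actually moved.
            mtime = os.path.getmtime(self.policy_path)
            if mtime != self._mtime:
                with open(self.policy_path) as f:
                    self._rules = json.load(f)
                self._mtime = mtime

        def enforce(self, rule):
            self._reload_if_changed()
            # Real rule evaluation elided; just return the raw rule text.
            return self._rules.get(rule)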
> I was thinking of a callback to be triggered by 'kill -HUP $pid'. We can
> make a little framework for registering callbacks on signals (if there
> isn't something like that already) to allow multiple refresh actions on
> the signal.
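For the registration side, something as small as this would do (a
sketch, stdlib only; register_refresh_callback is a made-up name):

    import signal

    _refresh_callbacks = []

    def register_refresh_callback(func):
        # Made-up helper: services register whatever needs to run
        # when they are told their files have changed on disk.
        _refresh_callbacks.append(func)

    def _on_sighup(signum, frame):
        # 'kill -HUP $pid' lands here; run every registered action.
        for func in _refresh_callbacks:
            func()

    signal.signal(signal.SIGHUP, _on_sighup)

Then the config management tool only needs the pid to 'kill -HUP', not
a token or any of our APIs.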
>
>>> Doug
>> I think policy files are not config files. We've treated them as such
>> in the past because they are not dynamic, but I don't think I want to
>> *have* to do this:
>>
>> 1. Change policy in keystone (somehow)
>> 2. Tell Puppet that there is a new file
>> 3. Have puppet pick up the new file and sync it to the servers.
> Right, I wouldn't do that. I would modify the file in my puppet
> repository and then push that out all at once. Keystone would receive
> the policy files the same way as the other services.
>
>>
>> Although I would say that we should make it easy to support this workflow.
>>
>> For one thing, it assumes that all of the consumers are talking to the
>> same config management system, which is only true for a subset of the
>> services.
> I'm not sure what you mean here. Do you mean that in a given deployment
> you would expect some services to be configured by puppet and others to
> be configured a different way?
I mean that there could be a puppet server for the core infrastructure,
and another one (or Ansible or Chef) for Hadoop on top of that. There
is no one puppet master that we can assume to be controlling all of the
servers. They might be run by different organizations.
>
>> I see a case for doing this same kind of management for many of the
>> files Keystone produces. The service catalog is the most obvious candidate.
> Yes, that's another good example, although in that case we do already
> have an API that lets a cloud consumer access the service catalog data
> so it might be viewed as different from the policy rules or oslo.config
> files (the latter at least typically have private data we wouldn't want
> to share through an API).
We don't have a single, monolithic service catalog (anymore) and, with
endpoint filtering, we expect multiple service catalogs to be the norm.
I want to pursue the idea of git-style file identification here (hash
of the file as identifier), as that works to split the service catalog
from the token, still allows multiple service catalogs, and ensures
that they are correctly linked in remote systems. It doesn't have to
be a hash, but a hash makes the process much more verifiable. This is
also true for policy files; there can be more than one active at any
given point in time, fetchable by remote identifier. Even as we push
towards common rules for defining the RBAC section, we have to be aware
that different endpoints might need different policy files.
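As a sketch of what I mean by git-style identification (nothing
Keystone-specific, just content addressing):

    import hashlib

    def file_identifier(path):
        # The hash of the file's bytes is its identifier, so a remote
        # endpoint can verify that the policy file (or service catalog)
        # it fetched is exactly the one a token or config references.
        with open(path, 'rb') as f:
            return hashlib.sha256(f.read()).hexdigest()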
>
>> If we could have a workflow for managing PKI certs, Federation
>> mappings, and (group-only?) role assignments, we could decentralize
>> token validation.
>>
>> When doing the PKI tokens, we discussed this, and ended up with a
>> "fetch first" policy toward the certs.
>>
>> Puppet does not know how to get a token, so it can't call the keystone
>> token-protected APIs to fetch new data. What forms of authentication do
>> the config management systems support? Is this an argument for tokenless
>> operations against Keystone?
> In my scenario puppet (or chef or whatever) is the source of truth for
> the configuration file, not one of our services. So there's no need for
> the configuration management tool to talk to any of our services beyond
> sending the HUP signal telling us to re-read the file(s).
So services would generate files to be published to Puppet. As I said,
that would work for a subset of use cases, and probably makes sense for
core infrastructure, but we cannot assume all consumers are talking to
puppet, or even if they are, talking to the same puppet master.
>
> Doug
>