[openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilities

Jamie Lennox jamielennox at redhat.com
Mon Oct 20 12:26:37 UTC 2014



----- Original Message -----
> From: "Nathan Kinder" <nkinder at redhat.com>
> To: openstack-dev at lists.openstack.org
> Sent: Tuesday, October 14, 2014 2:25:35 AM
> Subject: Re: [openstack-dev] [all][policy][keystone] Better Policy Model and Representing Capabilities
> 
> 
> 
> On 10/13/2014 01:17 PM, Morgan Fainberg wrote:
> > Description of the problem: Without attempting an action on an endpoint
> > with a current scoped token, it is impossible to know what actions are
> > available to a user.
> > 
> > 
> > Horizon makes some attempts to solve this issue by sourcing all of the
> > policy files from all of the services to determine what a user can
> > accomplish with a given role. This is highly inefficient, as it requires
> > processing the various policy.json files for each request in multiple
> > places, and it is not a mechanism that scales well for understanding
> > what a user can do with the current authorization. Horizon may not be the
> > only service that (in the long term) would want to know what actions a
> > token can take.
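
To make the cost concrete, this is roughly what that client-side
evaluation amounts to. A simplified sketch, not Horizon's actual code:
it handles only plain "role:x" rules, and the policy file path is
illustrative.

    import json

    def load_policy(path):
        # Each service ships its own policy.json (nova, glance, ...).
        with open(path) as f:
            return json.load(f)

    def may_perform(policy, action, user_roles):
        # Handles only the simplest rule form, e.g. "role:admin" or
        # "role:admin or role:member".  Real policy files also use
        # rule: references, target attributes, "and", negation, etc.
        rule = policy.get(action, "")
        if not rule:
            return True  # an empty rule means allow
        alternatives = [alt.strip() for alt in rule.split(" or ")]
        return any(alt == "role:%s" % role
                   for alt in alternatives for role in user_roles)

    # Horizon has to repeat this for every service's policy file on
    # more or less every page it renders.
    nova_policy = load_policy("nova_policy.json")  # path is illustrative
    print(may_perform(nova_policy, "compute:start", ["member"]))
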
> 
> This is also extremely useful for being able to support more
> restricted tokens.  If I as an end user want to request a token
> that only has the roles required to perform a particular action, I'm
> going to need to have a way of knowing what those roles are.  I think
> that is one of the main things missing to allow the "role-filtered
> tokens" option that I wrote up after the last Summit to be a viable
> approach:
> 
>   https://blog-nkinder.rhcloud.com/?p=101
> 
> > 
> > I would like to start a discussion on how we should improve our policy
> > implementation (OpenStack wide) to help make it easier to know what is
> > possible with a current authorization context (Keystone token). The key
> > feature should be that whatever the implementation is, it doesn’t require
> > another round-trip to a third-party service to “enforce” the policy, so
> > that we avoid another scaling point like UUID Keystone token validation.
> > 
> > Here are a couple of ideas that we’ve discussed over the last few
> > development cycles (and none of this changes the requirements to manage
> > scope of authorization, e.g. project, domain, trust, ...):
> > 
> > 1. Keystone is the holder of all policy files. Each service gets its
> > policy file from Keystone and it is possible to validate the policy (by
> > any other service) against a token provided they get the relevant policy
> > file from the authoritative source (Keystone).
> > 
> > Pros: This is nearly completely compatible with the current policy system.
> > The biggest change is that policy files are published to Keystone instead
> > of to a local file on disk. This also could open the door to having
> > keystone build “stacked” policies (user/project/domain/endpoint/service
> > specific) where the deployer could layer policy definitions (layering
> > would allow for stricter enforcement at more specific levels, e.g. users
> > from project X can’t terminate any VMs).
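
To make the layering idea concrete, the rule would presumably be that a
more specific policy layer can only tighten what a broader one allows.
A minimal sketch of such a merge, assuming the policy-language "and" and
"!" syntax; nothing like this exists in Keystone today:

    def stack_policies(layers):
        """Merge policy dicts from least to most specific, e.g.
        service -> endpoint -> domain -> project.  A more specific
        layer can only AND extra conditions onto an action, so it can
        restrict access but never broaden it."""
        merged = {}
        for layer in layers:
            for action, rule in layer.items():
                if merged.get(action):
                    merged[action] = "(%s) and (%s)" % (merged[action], rule)
                else:
                    merged[action] = rule
        return merged

    service_layer = {"compute:delete": "role:member"}
    project_x_layer = {"compute:delete": "!"}  # "!" = never allowed
    print(stack_policies([service_layer, project_x_layer]))
    # {'compute:delete': '(role:member) and (!)'}
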
> 
> I think that there are some additional advantages to centralizing
> policy storage (not enforcement).
> 
> - The ability to centralize management of policy would be very nice.  If
> I want to update the policy for all of my compute nodes, I can do it in
> one location without the need for external configuration management
> solutions.
> 
> - We could piggy-back on Keystone's signing capabilities to allow policy
> to be signed, providing protection against policy tampering on an
> individual endpoint.
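
If the signing reused the same CMS machinery that PKI tokens use,
verification at the endpoint could be a single openssl call before the
policy is loaded. A rough sketch only; the file paths are assumptions
and the exact flags would need checking:

    import json
    import subprocess

    def load_verified_policy(signed_path, signing_cert, ca_cert):
        # Verify the CMS signature Keystone put on the policy blob;
        # openssl writes the verified content to stdout, and this
        # raises CalledProcessError if the file was tampered with.
        out = subprocess.check_output([
            "openssl", "cms", "-verify",
            "-in", signed_path, "-inform", "PEM",
            "-certfile", signing_cert, "-CAfile", ca_cert,
        ])
        return json.loads(out)

    policy = load_verified_policy(
        "/etc/nova/policy.pem",                      # illustrative paths
        "/etc/keystone/ssl/certs/signing_cert.pem",
        "/etc/keystone/ssl/certs/ca.pem")
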
> 
> > 
> > Cons: This doesn’t ease up the processing requirement or the need to hold
> > (potentially) a significant number of policy files for each service that
> > wants to evaluate what actions a token can do.
> 
> Are you thinking of there being a call to keystone that answers "what
> can I do with token A against endpoint B"?  This seems similar in
> concept to the LDAP "get effective rights" control.  There would
> definitely be some processing overhead to this, though you could set up
> multiple keystone instances and replicate the policy to spread out the
> load.  It also might be possible to index the enforcement points by role
> in an attempt to minimize the processing for this sort of call.
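
The role index could be built once, when a policy file is published, so
that the "what can token A do against endpoint B" question becomes a
couple of dictionary lookups rather than a full policy evaluation. A
rough sketch, again handling only plain "role:x" rules:

    from collections import defaultdict

    def index_by_role(policies):
        # policies looks like:
        #   {"nova": {"compute:start": "role:member or role:admin"}, ...}
        # and the index comes out as {role: {service: set(actions)}}.
        index = defaultdict(lambda: defaultdict(set))
        for service, rules in policies.items():
            for action, rule in rules.items():
                for alt in rule.split(" or "):
                    alt = alt.strip()
                    if alt.startswith("role:"):
                        index[alt[len("role:"):]][service].add(action)
        return index

    def effective_rights(index, roles, service):
        # "What can a token carrying these roles do against this service?"
        allowed = set()
        for role in roles:
            allowed |= index.get(role, {}).get(service, set())
        return allowed
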
> 
> > 
> > 
> > 2. Each enforcement point in a service is turned into an attribute/role,
> > and the token contains all of the information on what a user can do
> > (effectively shipping the entire policy information with the token).
> > 
> > Pros: It is trivial to know what a token provides access to: the token
> > would contain something like `{“nova”: [“terminate”, “boot”], “keystone”:
> > [“create_user”, “update_user”], ...}`. It would be easy to grant a user
> > the glance “get image” and nova “boot” capabilities directly, instead of
> > needing to know which roles in both glance’s and nova’s policy.json
> > files are required to boot a new VM.
> > 
> > Cons: This would likely require a central registry of all the actions that
> > could be taken (something akin to an IANA port list). Without a grouping
> > to apply these authorizations to a user (e.g. keystone_admin would convey
> > “create_project, delete_project, update_project, create_user, delete_user,
> > update_user, ...”), this becomes unwieldy. The “roles” or “attributes” that
> > convey capabilities are also relatively static instead of highly dynamic
> > as they are today. This could also contribute to token-bloat.
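
One way to keep the token small despite needing a grouping would be to
carry only group names in the token and expand them at validation time.
A sketch; the registry and group names below are entirely made up:

    # Hypothetical central registry mapping a coarse capability group
    # (what would actually travel in the token) onto per-service actions.
    CAPABILITY_GROUPS = {
        "keystone_admin": {
            "keystone": ["create_project", "delete_project", "update_project",
                         "create_user", "delete_user", "update_user"],
        },
        "compute_operator": {
            "nova": ["boot", "terminate"],
            "glance": ["get_image"],
        },
    }

    def expand_capabilities(token_groups):
        # Expand group names into the {"service": [actions]} map from
        # the example above, e.g. {"nova": ["terminate", "boot"], ...}.
        expanded = {}
        for group in token_groups:
            for service, actions in CAPABILITY_GROUPS.get(group, {}).items():
                expanded.setdefault(service, []).extend(actions)
        return expanded

    print(expand_capabilities(["compute_operator"]))
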
> 
> I think we really want to avoid additional token bloat.

Token bloat is an unfortunate side effect of PKI tokens, where we have come to the conclusion that what is in a UUID token validation response is the same as what is held in the signed PKI token. I think this is wrong for UUID tokens, and we need to come up with a better solution for PKI (though I have no idea yet what that is).

With UUID tokens we can always have the validation call tell keystone "I am validating this token on behalf of nova, so don't tell me about the permissions that belong to glance"; i.e. the token validation response is not necessarily a fixed blob. If we get down to a validation response containing only the permissions for the service that is doing the validating, then I don't see this additional size as a problem. The response can still be cached per service after first use, in much the same way as it is today.
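
Concretely, I am imagining something like the existing v3 validation
call plus a hint about which service is asking, so keystone can trim
the response. A sketch only: the validating_service parameter below
does not exist today.

    import requests

    KEYSTONE = "http://keystone.example.com:35357"  # illustrative URL

    def validate_for_service(admin_token, subject_token, service):
        # Standard v3 token validation, plus a *hypothetical* query
        # parameter telling keystone which service is validating so the
        # response only carries that service's permissions.
        resp = requests.get(
            KEYSTONE + "/v3/auth/tokens",
            headers={"X-Auth-Token": admin_token,
                     "X-Subject-Token": subject_token},
            params={"validating_service": service},  # not a real parameter
        )
        resp.raise_for_status()
        return resp.json()["token"]

    # nova would see only nova's permissions and glance only glance's,
    # and the trimmed response can be cached per service just as today.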

> Thanks,
> -NGK
> 
> > 
> > 
> > 
> > I’m sure there are more ways to approach this problem, so please don’t
> > hesitate to add to the conversation and expand on the options. The above
> > options are by no means exhaustive nor fully explored. This change may not
> > even be something to be expected within the current development cycle
> > (Kilo) or even the next, but this is a conversation that needs to be
> > started as it will help make OpenStack better.
> > 
> > Thanks,
> > Morgan
> > 
> > —
> > Morgan Fainberg
> > 
> > 
> > 


