[openstack-dev] [Keystone] RBAC limitations - was A set of Keystone blueprints relating to domains

heckj heckj at mac.com
Mon Nov 26 20:43:38 UTC 2012


On Nov 26, 2012, at 11:35 AM, David Chadwick <d.w.chadwick at kent.ac.uk> wrote:

> 
> 
> On 26/11/2012 18:23, heckj wrote:
>> 
>> On Nov 25, 2012, at 12:46 PM, David Chadwick
>> <d.w.chadwick at kent.ac.uk> wrote:
>>> Firstly I agree with Henry that segmenting an openstack
>>> installation into multiple independent domains is an important
>>> functionality to provide. But coupled with this, we should also
>>> have the ability to provide interdomain access as well. Otherwise
>>> you end up with un-interoperable silos. For example, two
>>> organisations may have separate domains on the same openstack
>>> system, and later decide they want to cooperate and have some of
>>> their users share information. This should be achievable with
>>> configuration or policy changes without having to rebuild the
>>> entire openstack system. So I would extend Henry's no. 1 goal to be
>>> "interoperable domains" rather than standalone domains.
>>> 
>>> It seems to me that Henry's second goal can be relatively easily
>>> achieved by having a proper RBAC/ABAC interface between the service
>>> (PEP) and the policy engine (PDP). This will allow different PDPs
>>> to be plugged in that support much more fine-grained and
>>> sophisticated policies, e.g. conditions based on time of
>>> day/week/month, different authentication levels, etc., without the
>>> user's role(s) necessarily needing to change. And this naturally
>>> leads on to the comments made by Joe.
>>> 
>>> Joe, if we have a set of goals of varying levels of importance and
>>> priority, rather than picking the top-priority goal and building a
>>> tailored solution that solves this goal and only this goal, isn't it
>>> better to produce a more generic, abstracted solution that can also
>>> solve many of the other goals as well?
>> 
>> I would only give a highly qualified yes as an answer to that
>> question. A generic, abstracted solution often incurs so much
>> complexity overhead that a simpler, less complete solution is
>> preferable, because it's easier to understand (faults and all) and
>> gets 80% of the job done. I'm all for having a vision of where we can
>> go and driving toward it, but it must be with pragmatic, usable
>> stepping stones and clear goals of "why".
>> 
>> I'm passionate about AuthN/Z - and primarily that it's functional,
>> interoperable, and clear to understand as the highest priorities.
>> 
>>> The former leads to more spaghetti code with different special
>>> parameters and APIs for different goals, i.e. a complex system that
>>> is very difficult to understand, whereas the latter leads to a
>>> conceptually much simpler system with cleaner code and more
>>> flexibility.
>>> 
>>> Here is a case in point. The current RBAC interface requires in
>>> your words "all the data that comes in from the roles (name, user,
>>> tenant, etc).....to provide authorization". This requires many
>>> different special parameters to be passed to the policy engine via
>>> the API, and if you want to add groups in the future, then this
>>> causes yet another parameter to be added to the interface. However,
>>> if you go up a level of abstraction and say that you pass a set of
>>> subject attributes to the policy engine, and the API is written in
>>> this way, then you have an infinitely extensible interface that
>>> does not need to change when groups are added. Group is just
>>> another subject attribute that can be passed along with role, user,
>>> tenant, name, etc. The API design is future-proof against whichever
>>> new subject attributes come along next year or next decade.
>>> 
>>> Joe, the above should also help to answer your question about
>>> type/value pairs. In the subject attributes example above, the
>>> type/value pairs passed via the API might be name=David, user=1234,
>>> tenant=xyz, role=prof. Once you have this type of ABAC interface, it
>>> becomes trivial to add new features, by simply adding a new
>>> attribute type and value to those passed to the PDP, for example,
>>> to base authz also on the strength of authentication, you might
>>> add LOA=2 (level of assurance as per the NIST recommendation) to
>>> the set of subject attributes. A flexible policy based PDP can
>>> easily take this new attribute into account when policies are
>>> written and decisions are made, without needing to change the
>>> policy code base. You simply change the policy. You will of course
>>> need to change the service PEP, since it will need to pick up this
>>> new attribute from the Keystone authentication service, and pass it
>>> to the policy engine.
>> 
>> I like and appreciate the simplicity of the attribute based
>> approach.
>> 
>> With the attributes becoming arbitrary type/value pairs, what changes
>> do you see to the REST interface that presents these as interpreted
>> abstractions such as groups, domains, users, and projects today? Any?
> 
> This is a quick (uninformed) initial response. I need to talk to Kristy on Wednesday to correct any mistakes I am making now. But I am assuming that the current API uses a JSON structure which has a fixed number of set fields such as user, tenant, role etc. The ABAC API in comparison would have a JSON structure defined as a set of zero to many attributes, where each attribute is defined as a type and a value. So the client software could then populate this structure with whatever type/value pairs it wanted to. In a separate Keystone specification we would define a set of standard attributes and their meanings (e.g. role, group, tenant, name etc.) that must, should or may be supported by clients. As the backend Keystone implementation grows in sophistication, more attributes would be added to this standard set. So the API would not change, but the supporting code and documentation would grow with time.

For what it's worth, we've defined "API changes" to also include changes in responses - so if the exact same API request gets made and two different responses are returned, then the API is considered to have changed. From the description, we should be able to build easily backward-compatible API mechanisms to pass along arbitrary sets of type/value pairs without disrupting the required formats that are already there, with an aim to deprecating the old formats smoothly over the course of several releases if this plan works through to fruition.
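
To check that I'm picturing the same thing you are, here is a rough sketch of the difference between the fixed-field shape we effectively have today and the zero-to-many attribute shape you describe. The field names are purely illustrative, not a proposal:

    # Today-ish: a fixed set of fields; extending it means a new key
    # and an API change.
    current_style = {
        "user": "1234",
        "tenant": "xyz",
        "roles": ["prof"],
    }

    # ABAC style: an open-ended list of type/value pairs. Adding
    # "group" or "LOA" later just means another entry, no API change.
    abac_style = {
        "subject_attributes": [
            {"type": "name",   "value": "David"},
            {"type": "user",   "value": "1234"},
            {"type": "tenant", "value": "xyz"},
            {"type": "role",   "value": "prof"},
            {"type": "LOA",    "value": "2"},
        ]
    }

If that is roughly what you have in mind, then the "standard attributes" specification you mention is really documentation layered over an unchanging wire format, which is appealing.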

>> Or are you focused on the internal representation as attributes and
>> passing these with an updated interface (i.e. the token interface) to
>> the policy engines?
> 
> I would like to update the interface to the policy engines to be of the generic ABAC type, so that it is infinitely extensible. Each implementor of a policy engine would document the set of attributes that his implementation requires in order to make authz decisions. (In fact our PDP does not require any particular attributes and can accept any and every type of attribute, since these are configured into the implementation at runtime via an XML policy.)
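
If I follow, the engine-facing call narrows down to something like the strawman below - the class and method names are mine, not anything from a blueprint, and this is only to confirm the shape of the interface:

    # Hypothetical generic PDP interface: the PEP hands over whatever
    # subject attributes it has, and the engine's policy decides which
    # of them matter. Nothing in the signature changes when new
    # attribute types appear.
    class PolicyEngine(object):
        def is_authorized(self, subject_attributes, action, target):
            """Return True/False given a list of (type, value) pairs,
            the requested action, and the target resource."""
            raise NotImplementedError

    class AllowProfessorsEngine(PolicyEngine):
        def is_authorized(self, subject_attributes, action, target):
            # Toy rule: grant if any attribute is role=prof.
            return ("role", "prof") in subject_attributes

If that is the shape, the existing RBAC checks become just one possible policy written against that interface.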
> 
>> How would you propose to extend/modify the token
>> based interface to support additional interesting attributes for ABAC
>> that can then be parsed and used by the policy engines? We must keep
>> backwards compatibility here.
> 
> Backwards compatibility might be difficult. It might be that for a while you have to run two APIs in parallel, and eventually deprecate the existing RBAC API and finally phase it out once everyone has migrated to the new API.

I think nailing down this section - what the API might look like and whether or not there will be backwards compatibility for that mechanism - is the critical piece needed to answer my last question. If we don't have backwards compatibility, it behooves us to provide scripts/tools and documentation that make upgrading or migrating to the new mechanism straightforward. We'll need to keep this in mind when planning the implementation and how we roll it out.
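
One avenue I'd like to explore for that backwards compatibility is whether the existing fixed fields can simply be folded into the attribute set on the way in, so that old callers keep working while new callers send attributes directly. A very rough sketch of what I mean - nothing here is a committed design, and the names are made up:

    def to_subject_attributes(credentials):
        """Accept either the legacy fixed-field credentials dict or a
        newer attribute-list form, and normalize both to (type, value)
        pairs for the policy engine."""
        if "subject_attributes" in credentials:
            # New-style caller: already a list of type/value pairs.
            return [(a["type"], a["value"])
                    for a in credentials["subject_attributes"]]
        # Legacy caller: map the known fixed fields onto attributes.
        attrs = []
        for field in ("user", "tenant", "domain", "name"):
            if field in credentials:
                attrs.append((field, credentials[field]))
        for role in credentials.get("roles", []):
            attrs.append(("role", role))
        return attrs

If something like that holds up, the period of running two APIs in parallel could be a lot shorter.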

>> The attribute based mechanism could pretty clearly provide nice
>> support for multi-factor or any "multiple attributes asserted"
>> authentication mechanism. How does it also play into supporting
>> interoperable domains?
> 
> My idea for this is as follows. Since authz is based on a set of attributes, a domain that wants to limit access to its own users only would require the following attributes to be passed to the policy engine in order for a grant to be returned:
> role=A
> tenant=B
> domain=C
> 
> A domain that wanted to let users from other domains and tenants access some of its resources would return a grant for requests containing just:
> role=X
> 
> A domain that wanted to allow inter-tenant access (say tenants A and B to access each other's resources) might require:
> role=A
> tenant=A or B
> domain=C
> 
> So the answer is, it all depends upon the policy that the domain sets for access to its resources. The domain/resource administrator sets up the policy and specifies which attributes are needed to get access to which resources. Some resources in the domain could be limited to role/tenant/domain, and others to just role, as he wishes.

Got it - thank you. So it's reasonable to assume that we'll have some common conventions for some of these "keys" (what you're calling "types" in a type/value pair) and that some of these may be mandatory?
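
Assuming those conventions, your three example policies seem like they could be expressed as simple predicates over the attribute set. The sketch below is just my paraphrase in Python, not a proposed policy language:

    # attrs is a list of (type, value) pairs, as in the earlier sketch.

    def domain_locked(attrs):
        # Resources restricted to this domain's own users.
        return (("role", "A") in attrs and
                ("tenant", "B") in attrs and
                ("domain", "C") in attrs)

    def open_to_any_domain(attrs):
        # Resources shared with users from other domains and tenants:
        # only the role matters.
        return ("role", "X") in attrs

    def inter_tenant(attrs):
        # Tenants A and B may access each other's resources.
        return (("role", "A") in attrs and
                (("tenant", "A") in attrs or ("tenant", "B") in attrs) and
                ("domain", "C") in attrs)

Which of these applies to a given resource would then be purely a matter of what the domain/resource administrator configures.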

>> I could see adding a "groupname" attribute and
>> providing a REST interface around that for manipulating the groups,
>> but how does that cascade down into doing the equivalent of asserting
>> that I want all folks in group "computeadmin" to have the role
>> "computeadmin" and having that interpreted by the policy engine as
>> such?
> 
> This would not be dealt with by the policy engine, but by the role mapping service of Keystone. We have published a blueprint for this already. Have you seen it?

Saw it, read it, and was unfortunately quite confused as to its purpose and what it would achieve. I suspect I just need more concrete examples.

> The policy engine would simply say that role=computeadmin is needed. That's all. How a user gets the computeadmin role is determined by the role mapping service.
> 
>> 
>> Are the policy engine and ABAC mechanism/interface changes a
>> pre-requisite for this to happen in a reasonable form?
> 
> They are independent. You can use the existing RBAC interface with the proposed role mapping service to get the groups feature.
> 
> From what Henry said, it's the management of the "policies" that is crucial. The openstack admin administers the access control policies (i.e. sets them to require role=computeadmin), whereas the organisation admin administers the role mapping policy (i.e. says that his group=computeadmin is to be mapped to role=computeadmin).

I'm afraid I'm not quite getting all the actors and responsibilities in your example, but I gather that the proposed role-mapping service is intended to cover this. I'll take another read over that blueprint and ask questions knowing that it's intended to be able to answer them.
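
Just to test my naive reading of it: is the essence of a mapping rule something as small as "attribute X, asserted under the organisation's authority, implies role Y"? Something like the toy sketch below - the names and structure are entirely mine, not taken from the blueprint:

    # Hypothetical mapping rules maintained by the organisation admin.
    role_mappings = [
        {"if": ("group", "computeadmin"), "then": ("role", "computeadmin")},
    ]

    def apply_role_mappings(attrs, mappings=role_mappings):
        """Add any roles implied by the organisation's mapping rules to
        the subject's attributes before the policy check happens."""
        derived = list(attrs)
        for rule in mappings:
            if rule["if"] in attrs and rule["then"] not in derived:
                derived.append(rule["then"])
        return derived

If that is even roughly right, the split of responsibilities you describe makes more sense to me.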

>> Part of what I'm trying to understand is how much work we're talking
>> about, and where it needs to occur to enable this good stuff.
> 
> I would say, start with the role mapping service, since this is a new add-on which does not affect backwards compatibility, but enables lots of good stuff once it is available. Then define the ABAC API to the policy engine, and have this run in parallel to the existing RBAC one. (Sorry, but I don't know Python or the APIs sufficiently to know whether a super API that is backwards compatible can be created or not. If it can, then this would be fantastic.)
> 
>> We have
>> a V3 API we're getting out there now with milestone-1, and a clear
>> need to provide a few significant improvements in this release cycle
>> - including a solid V3 API. Is what we're talking about here focused
>> on slight modifications to the V3 API and implementation to make it
>> solid, or deeper/longer changes that we expect to land in the H or I
>> release cycles?
> 
> This is where your experience and Kristy's are far superior to mine. I am an architect rather than an implementer, so in short I don't know the answer.

I think understanding the role mapping service mechanism and what changes are needed to the API to support ABAC are the two critical paths, based on what I'm seeing. I don't yet grok the mapper proposal, but I'll dig into that some more and see if I can come up with more reasonable questions to illuminate that setup for me.

> regards
> 
> David



