[openstack-dev] [keystone] on-behalf-of proxy identities for applications running in-instance

heckj heckj at mac.com
Fri Oct 12 16:39:45 UTC 2012


Hey Zane,

Related, from a different thread: I'm going to try to corral some operators and folks with existing installations together on Wednesday around 11am (the unconference time) to pull together suggestions and start documenting the roles and relevant policy.json files for the services, so we have some suggested-deployment documentation and ideas beyond "well, that's what was in devstack because it was convenient to test and basically functional".

-joe


On Oct 12, 2012, at 8:22 AM, Zane Bitter <zbitter at redhat.com> wrote:

> On 12/10/12 16:07, Steven Hardy wrote:
>> On Fri, Oct 12, 2012 at 03:13:13PM +0200, Zane Bitter wrote:
>>> On 11/10/12 14:30, Eoghan Glynn wrote:
>>>> 
>>>> 
>>>>>> Maybe I'm missing something fundamental, might make sense to walk
>>>>>> through a hypothetical case ...
>>>>>> 
>>>>>> - Bob has two instances, webserver & DB both of type m1.medium
>>>>>> 
>>>>> So Bob is a member of tenant "T1", and creates a stack (associated
>>>>> with T1), which contains the instances
>>>>> 
>>>>>> - Alice has a single instance, mailserver of type m1.medium
>>>>>> 
>>>>>> - Bob and Alice are associated with different tenants
>>>>> 
>>>>> So she is a member of tenant T2
>>>>> 
>>>>>> - Heat creates 3 new proxy users, webserver_user, DB_user &
>>>>>>   mailserver_user
>>>>> 
>>>>> So webserver_user, DB_user are in T1, mailserver_user is in T2
>>>> 
>>>> 
>>>> OK, cool, so that's basically the opposite of what I understood by
>>>> the "separate tenant" idea mooted earlier.
>>>> 
>>>> As long as the generated {instance}_users are always associated with
>>>> the same tenant as the original instance owner then, yep, no mapping
>>>> is necessarily required, as long as everything we need to do after
>>>> the fact is scoped at the tenant level (e.g. there's no requirement
>>>> for per-user chargeback for the {instance}_user's API calls, mapping
>>>> back to the original user's identity).
>>>> 
>>>> I'm not sure what the "separate tenant" idea was aiming to achieve
>>>> from a lock-down perspective, given that the limited-karma role
>>>> would be associated with the user as opposed to the tenant. Anyway,
>>>> it seems that idea has fallen by the wayside ...
>>> 
>>> The issue it was aimed at resolving is that no specific role is
>>> required for a user to access Nova within their own local tenant -
>>> i.e. by default any user has read-only access to data in their own
>>> tenant simply by virtue of being associated with that tenant. (The
>>> "admin" role is required to actually create anything.) It would be
>>> better if the "Member" role were required before granting
>>> read-access to Nova &c. but that is not the case today. The
>>> workarounds for this are all bad:
>>> 
>>> 1) Do nothing. This is bad because compromising an instance would
>>> then give an attacker access to all kinds of internal information
>>> about the tenant.
>>> 2) Require the OpenStack administrator to set up the policy.json to
>>> explicitly blacklist these users. This is bad because people
>>> won't do it.
>>> 3) Add rules to the default policy.json to specifically blacklist
>>> these users. This is bad because requiring every project in
>>> OpenStack to know about the implementation details - specifically
>>> what users they create and what roles they are given - of every
>>> other project in OpenStack (including non-core Related Projects) is
>>> just a terrible design.
>>> 
>>> Hence the suggestion to create the {instance}_users in a separate
>>> tenant (e.g. heat_instances), so that they have no read access to
>>> the original user's tenant (instead, we give them a role that
>>> specifically whitelists access to the necessary functionality, and
>>> no more).
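Such a whitelisting role might be expressed in a service's policy.json roughly as follows. This is a sketch only: the rule and action names are illustrative assumptions (not actual OpenStack policy targets), and the nested-list rule syntax follows the format policy.json used at the time. Anything not explicitly whitelisted falls through to the restrictive default:

```json
{
    "cloudwatch_reporter": [["role:cloudwatch_reporter"]],
    "metrics:put_metric_data": [["rule:cloudwatch_reporter"]],
    "default": [["role:admin"]]
}
```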
>>> 
>>> This obviously requires some mapping between {instance}_users and
>>> (original) tenants. This does not sound difficult to me:
>>> 
>>> 1) The {instance}_user needs a role (e.g. 'cloudwatch_reporter') in
>>> the original tenant anyway (this is true regardless of which tenant
>>> the user is created in). Just check the role list for the
>>> {instance}_user in keystone and find out which tenant it has that
>>> role for.
>>> 2) The purpose of authentication is to check that metrics originate
>>> from the actual instance whence they purport to come. Therefore a
>>> mapping from {instance}_user to instance is required. Instances are
>>> already associated with a tenant.
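A minimal sketch of the role lookup in step 1, using made-up in-memory assignments in place of a real keystone query. The user names, tenant IDs, and the `cloudwatch_reporter` role are all taken from the hypothetical scenario earlier in the thread:

```python
# Hypothetical (user, tenant, role) assignments, standing in for what a
# query against keystone's role-assignment API would return.
ASSIGNMENTS = [
    ("webserver_user", "T1", "cloudwatch_reporter"),
    ("mailserver_user", "T2", "cloudwatch_reporter"),
    ("bob", "T1", "Member"),
]


def owning_tenant(instance_user, role="cloudwatch_reporter"):
    """Return the tenant in which instance_user holds the given role."""
    tenants = [t for (u, t, r) in ASSIGNMENTS
               if u == instance_user and r == role]
    # An {instance}_user should hold the reporter role in exactly one
    # tenant; anything else indicates a setup problem.
    if len(tenants) != 1:
        raise LookupError("expected exactly one tenant, got %r" % tenants)
    return tenants[0]
```

For example, `owning_tenant("webserver_user")` recovers Bob's tenant "T1", with no explicit mapping table needed beyond the role assignments themselves.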
>>> 
>>> Having said that, I am not that familiar with the CloudWatch API and
>>> Steve is much more of an expert on it, so if he says it's hard then
>>> I could be missing something.
>> 
>> So the problem you describe above is essentially that none of the OpenStack
>> services are using RBAC properly by default (other than the admin role).
>> 
>> The mechanism is there, and it would be simple to, e.g., create a role per
>> service, and explicitly require users to be members of, e.g., the "nova_user"
>> role to access the service, and the "nova_admin" role to write to it.
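As a rough sketch of what that per-service split could look like in nova's policy.json (the action names here are illustrative, and the nested-list rule syntax follows the format in use at the time): read access requires "nova_user" via the default rule, while writes additionally require "nova_admin":

```json
{
    "default": [["role:nova_user"]],
    "compute:create": [["role:nova_admin"]],
    "compute:delete": [["role:nova_admin"]]
}
```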
>> 
>> IMHO this "open by default" problem is simply a policy configuration
>> problem (probably all services should ship with more restrictive policy.json
>> files, or at least provide an easy/scripted method to apply them).
> 
> I agree, the defaults are poor in my opinion. It would be nice to have a sensible common default in OpenStack, rather than leaving it to packagers/installers/administrators to all come up with different ones (or, worse, not).
> 
>> 
>> It's not, IMO, something we should work around with some big hack in the
>> heat core logic.
>> 
>> The keystone docs[1] do spell it out:
>> 
>> "all operations that do not require the admin role will be accessible by any
>> user that has any role in a tenant"
> 
> OK, that pretty much makes the separate tenant idea useless as well as complicated, so I think you are on the right track by keeping everything in the same tenant.
> 
>> 
>> "If you wish to restrict users from performing operations in, say, the
>> Compute service, you need to create a role in the Identity service and then
>> modify /etc/nova/policy.json so that this role is required for Compute
>> operations"
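Following that recipe, one might first create the role in keystone (e.g. with `keystone role-create --name compute_member`, assuming the CLI of the day) and then require it for everything via the default rule in /etc/nova/policy.json; the "compute_member" role name is an assumption for illustration:

```json
{
    "compute_member": [["role:compute_member"]],
    "default": [["rule:compute_member"]]
}
```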
>> 
>> As you point out, the separate-tenant thing seems initially to provide a
>> solution, but having spent some time looking at our code, I see several
>> problems, but the main one is the request context, and related DB
>> separation.
>> 
>> We use the keystone credentials associated with the request to generate the
>> context used to access the DB model - so if we have a request coming in from
>> an unrelated tenant to the one owning the stack, we not only have to map the
>> instance-user to the stack (quite easy as you point out), but we also have
>> to manage mapping request contexts (which are based on keystone
>> credentials, and separated by tenant in our data model)
>> 
>> This context mapping I think would be much harder (particularly to do it
>> securely, without storing credentials/context of the "owning" tenant/user)
>> 
>> Additionally, you have the risk of data leaking between unrelated tenants
>> due to bugs if you have a single "instance" tenant; perhaps you'd need an
>> "instance" tenant for every "real" tenant, which means more complexity and
>> keystone workload.
>> 
>> Ultimately, having considered both approaches, I think the lesser of the two
>> evils is just to configure RBAC properly, or at least provide a script/doc
>> which describes how it could be configured securely.  I can't imagine (given
>> the open nature of the default config) that anyone would seriously leave
>> things in the default, unhardened state for production deployments!?
> 
> +1
> 
> cheers,
> Zane.
> 
> 
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



