[openstack-dev] [keystone] on-behalf-of proxy identities for applications running in-instance

Zane Bitter zbitter at redhat.com
Fri Oct 12 13:13:13 UTC 2012


On 11/10/12 14:30, Eoghan Glynn wrote:
>
>
>>> Maybe I'm missing something fundamental, might make sense to walk
>>> through a hypothetical case ...
>>>
>>> - Bob has two instances, webserver & DB both of type m1.medium
>>>
>> So Bob is a member of tenant "T1", and creates a stack (associated
>> with T1), which contains the instances
>>
>>> - Alice has a single instance, mailserver of type m1.medium
>>>
>>> - Bob and Alice are associated with different tenants
>>
>> So she is a member of tenant T2
>>
>>> - Heat creates 3 new proxy users, webserver_user, DB_user &
>>>    mailserver_user
>>
>> So webserver_user, DB_user are in T1, mailserver_user is in T2
>
>
> OK, cool, so that's basically the opposite of what I understood by
> "separate tenant" idea mooted earlier.
>
> As long as the generated {instance}_users are always associated with
> the same tenant as the original instance owner then, yep, no mapping
> is necessarily required, as long as everything we need to do after
> the fact is scoped at the tenant level (e.g. there's no requirement
> for per-user chargeback for the {instance}_user's API calls, mapping
> back to the original user's identity).
>
> I'm not sure what the "separate tenant" idea was aiming to achieve
> from a lock-down perspective, given that the limited-karma role
> would be associated with the user as opposed to the tenant. Anyway
> it seems that idea has fallen by the wayside ...

The issue it was aimed at resolving is that no specific role is 
required for a user to access Nova within their own tenant - i.e. by 
default any user has read-only access to data in their own tenant 
simply by virtue of being associated with that tenant. (The "admin" 
role is required to actually create anything.) It would be better if 
the "Member" role were required before granting read access to Nova 
&c., but that is not the case today. The workarounds for this are all 
bad:

1) Do nothing. This is bad because compromising an instance would then 
give an attacker access to all kinds of internal information about the 
tenant.
2) Require the OpenStack administrator to set up the policy.json to 
explicitly blacklist these users. This is bad because people won't do it.
3) Add rules to the default policy.json to specifically blacklist these 
users (see the sketch below). This is bad because requiring every 
project in OpenStack to know about the implementation details - 
specifically what users they create and what roles they are given - of 
every other project in OpenStack (including non-core Related Projects) 
is just a terrible design.
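
To make (3) concrete, the kind of rule it implies would look roughly 
like the fragment below in nova's policy.json, with a Heat-specific 
role name leaking into Nova's defaults. (The "not" check and the 
action names are purely illustrative - the point is only that Nova's 
policy file would have to mention a role invented by Heat.)

    {
        "compute:get": "not role:cloudwatch_reporter",
        "compute:get_all": "not role:cloudwatch_reporter"
    }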

Hence the suggestion to create the {instance}_users in a separate tenant 
(e.g. heat_instances), so that they have no read access to the original 
user's tenant (instead, we give them a role that specifically whitelists 
access to the necessary functionality, and no more).
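
Something like the following fragment is roughly what I have in mind 
for the whitelist - a sketch only, since the action names are 
hypothetical and the heat CloudWatch API doesn't enforce policy in 
this form today:

    {
        "cloudwatch:PutMetricData": "role:cloudwatch_reporter",
        "cloudwatch:DescribeAlarms": "role:Member",
        "cloudwatch:GetMetricStatistics": "role:Member"
    }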

This obviously requires some mapping between {instance}_users and 
(original) tenants. This does not sound difficult to me:

1) The {instance}_user needs a role (e.g. 'cloudwatch_reporter') in the 
original tenant anyway (this is true regardless of which tenant the user 
is created in). Just check the role list for the {instance}_user in 
keystone and find out which tenant it has that role for (see the 
sketch after this list).
2) The purpose of authentication is to check that metrics originate from 
the actual instance whence they purport to come. Therefore a mapping 
from {instance}_user to instance is required. Instances are already 
associated with a tenant.
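
For (1), the lookup could be little more than the following sketch 
using python-keystoneclient (the role name, and how heat obtains the 
credentials it queries keystone with, are assumptions for 
illustration):

    from keystoneclient.v2_0 import client as ksclient

    def tenant_for_instance_user(keystone, user):
        # Find the tenant in which the {instance}_user holds the
        # (illustrative) cloudwatch_reporter role.
        for tenant in keystone.tenants.list():
            roles = keystone.roles.roles_for_user(user, tenant)
            if any(r.name == 'cloudwatch_reporter' for r in roles):
                return tenant
        return None

    # e.g.
    # keystone = ksclient.Client(username='heat', password='...',
    #                            tenant_name='service',
    #                            auth_url='http://keystone:5000/v2.0')
    # tenant = tenant_for_instance_user(keystone, instance_user)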

Having said that, I am not that familiar with the CloudWatch API and 
Steve is much more of an expert on it, so if he says it's hard then I 
could be missing something.

cheers,
Zane.

>
> Cheers,
> Eoghan
>
>
>
>> T1 will only have visibility of (and hence be able to publish metric
>> data for) stacks owned by T1, so webserver_user and DB_user can only
>> publish metric data for Bob's stack(s)/instances, and vice-versa for
>> T2
>>
>>> - cfn-push-stats assumes the identity of {instance}_user when
>>>    reporting metrics from within each instance
>>>
>>> - Bob calls GetMetricStatistics with his original identity,
>>>    requesting dimension InstanceType=m1.medium
>>>
>>> - we expect only the metrics for webserver & DB to be aggregated,
>>>    but not mailserver (or?)
>>>
>> Bob can only see watch rules (and hence metric data) for stacks owned
>> by T1, so we query the DB and get results for webserver & DB (not
>> mailserver, as it's owned by T2), which I think is what is expected.
>>
>> That said, the current heat cloudwatch implementation doesn't support
>> query/filter by dimension, or GetMetricStatistics (yet), and we do
>> need to improve our DB schema to allow this to be done efficiently.
>> The basic mapping described above should remain the same though
>> (maybe it will get more complex when we decouple heat orchestration
>> and cloudwatch...)
>>
>> Steve
>>
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



