[openstack-dev] [keystone] on-behalf-of proxy identities for applications running in-instance

Steven Hardy shardy at redhat.com
Thu Oct 11 10:14:14 UTC 2012


On Thu, Oct 11, 2012 at 05:09:43AM -0400, Eoghan Glynn wrote:
> 
> 
> > > WORKER is a user account owned by the Heat service.
> > > REAL_LIMITED is owned by REAL.
> > > 
> > > WORKER does the work for HEAT.  It impersonates REAL_LIMITED when it
> > > needs to do work for REAL.  It impersonates eglynn_limited when it
> > > needs to do work for you.
> > 
> > Just to clarify - there's no mechanism in keystone (currently) which
> > supports this "impersonates" concept, this is theoretical right?
> 
> IIUC, yes, this appears to be a forward-looking concept that's not
> currently built out in keystone.
>  
> > So the way heat works at the moment is:
> > - USER creates a heat stack, template can define that keystone ec2
> >   credentials should be deployed on the instance
> > - heat creates a new keystone user INSTANCE_USER (whose name is defined
> >   in the template, this could be per-instance or per-stack depending on
> >   your template), in the same tenant as USER
> > - heat asks keystone for an ec2-credentials keypair for INSTANCE_USER
> > - ec2-credentials deployed to the instance (via cloud-init)
> > - API requests from inside the instance send the ec2 access key, and
> >   signature (signed using the ec2 private key)
> > - The heat API authenticates with keystone ec2tokens API, keystone
> >   returns a token if successful, in which case we process the request
> > 
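
For reference, the keystone interaction behind those steps is roughly as
follows - just a sketch via python-keystoneclient v2.0, method names from
memory so may not be exact, and the variable values are placeholders:

    from keystoneclient.v2_0 import client as kc

    user_tenant_name = 'demo'                # tenant owned by USER
    user_tenant_id = 'USER_TENANT_ID'        # placeholder
    instance_user_name = 'stack1-instance1'  # comes from the template
    generated_password = 'RANDOM_PASSWORD'   # placeholder

    # Authenticate with credentials able to create users in USER's tenant
    # (placeholder values)
    keystone = kc.Client(username='admin', password='ADMIN_PASSWORD',
                         tenant_name=user_tenant_name,
                         auth_url='http://127.0.0.1:5000/v2.0')

    # Create INSTANCE_USER in the same tenant as USER
    instance_user = keystone.users.create(name=instance_user_name,
                                          password=generated_password,
                                          email=None,
                                          tenant_id=user_tenant_id)

    # Ask keystone for an ec2-credentials keypair for INSTANCE_USER
    cred = keystone.ec2.create(user_id=instance_user.id,
                               tenant_id=user_tenant_id)

    # cred.access and cred.secret are what get deployed to the instance
    # via cloud-init
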
> > The problem, which I'm still not 100% clear on, is how do we lock down
> > INSTANCE_USER such that it can access only specific
> > services/endpoints?
> 
> The only way I'm aware of would be via the policy.json, as mentioned
> earlier.
> 
> The problem with this is its static nature. IIUC a change to the
> policy.json would require a roll-out across the fleet of whatever
> horizontally-scaled service the in-instance application needs to be
> able to invoke.
> 
> That would seem to pretty much invalidate the idea of dynamically
> generating a custom role for the INSTANCE_USER, precisely defined as
> the fine-grained set of actions to which we want to limit its API
> calls.
> 
> However, if typical roles are known in advance (e.g. just calls
> PutMetricData in your case), then I guess it might be feasible to
> pre-deploy these archetype roles to the relevant policy.json files.
> 
> It wouldn't be a very neat, scalable, or flexible arrangement, but
> it's the only way I can think of to achieve that fine-grained
> role-limitation right now.

Ok, so that's what we'll have to go with in the near-term, and hopefully we
can work out a more dynamic solution in due course.

Any pointers to examples of this sort of configuration via policy.json would
be appreciated ;)
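
I'm imagining something along the lines of the following in the relevant
service's policy.json - the role and action names here are pure invention
on my part, just to illustrate the shape of it:

    {
        "deny_instance_user": "not role:heat_instance_user",
        "cloudwatch:PutMetricData": "",
        "cloudwatch:DescribeAlarms": "rule:deny_instance_user",
        "cloudwatch:SetAlarmState": "rule:deny_instance_user"
    }

i.e. the instance-user role can call PutMetricData but gets denied on the
other actions, while normal users in the tenant are unaffected.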

>  
> > Ideally we need to lock down the access such that INSTANCE_USER can
> > only perform a subset of API actions on a subset of the
> > keystone-authenticated APIs, is there any method for doing this with
> > the policy.json?
> >
> > If not then we can track the "instance users" inside heat and reject
> > API requests for non-whitelisted API actions (e.g. we want the ability
> > to send CloudWatch metrics, but not to query them or manipulate alarms)
> > 
> > Adam - previously you mentioned putting the "instance users" in a
> > separate tenant (managed by heat), do you still see this as a good
> > solution, or do you think there is a viable way to lock down
> > INSTANCE_USER inside the same tenant as USER?
> 
> I'm struggling with the separate tenant idea, as there doesn't seem to be
> a way of recovering the original identity (which presumably you would
> need when querying the metrics in your use-case, or just auditing/billing
> etc. in the general case).

So I can see this being a problem generally, but within the limited heat
use-case we simply have to enforce one INSTANCE_USER for every instance, and
map the INSTANCE_USER to the actual instance resource internally.  Not ideal
though.

The main thing is that having INSTANCE_USERs in a separate tenant probably
adds a lot of internal management complexity, in exchange for some perceived
additional separation/security.

Then you also have the risk of contamination between INSTANCE_USERs (e.g. due
to bugs in our internal mapping code etc.), so I'm reaching the conclusion
that sticking to a single tenant will be best overall, with INSTANCE_USERs
restricted by role/policy and possibly some whitelist sanity logic in the
WSGI code.
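
By "whitelist sanity logic" I mean something roughly like this in the heat
CloudWatch API request handling - just a sketch, names invented, not actual
heat code:

    # Actions we're happy for in-instance credentials to call
    INSTANCE_USER_ALLOWED_ACTIONS = frozenset(['PutMetricData'])

    def action_allowed(is_instance_user, params):
        """Reject non-whitelisted actions when the caller is an
        INSTANCE_USER; normal users fall through to the usual checks."""
        if not is_instance_user:
            return True
        return params.get('Action') in INSTANCE_USER_ALLOWED_ACTIONS

so an INSTANCE_USER could push metric data, but a request for e.g.
DescribeAlarms would be rejected up-front.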

Steve