[openstack-dev] [oslo][nova] Anyone interested in writing a policy generator sphinx extension?

Andrew Laski andrew at lascii.com
Wed Sep 21 20:59:39 UTC 2016



On Wed, Sep 21, 2016, at 04:18 PM, Joshua Harlow wrote:
> Andrew Laski wrote:
> >
> > On Wed, Sep 21, 2016, at 03:18 PM, Joshua Harlow wrote:
> >> Andrew Laski wrote:
> >>> On Wed, Sep 21, 2016, at 12:02 PM, Joshua Harlow wrote:
> >>>> Andrew Laski wrote:
> >>>>> However, I have asked twice now on the review what the benefit of doing
> >>>>> this is and haven't received a response so I'll ask here. The proposal
> >>>>> would add additional latency to nearly every API operation in a service
> >>>>> and in return what do they get? Now that it's possible to register sane
> >>>>> policy defaults within a project most operators do not even need to
> >>>>> think about policy for projects that do that. And any policy changes
> >>>>> that are necessary are easily handled by a config management system.
> >>>>>
> >>>>> I would expect to see a pretty significant benefit in exchange for
> >>>>> moving policy control out of Nova, and so far it's not clear to me what
> >>>>> that would be.
> >>>> One way to do this is to set up something like etcd or zookeeper and
> >>>> have policy files placed into certain 'keys' in there by keystone;
> >>>> consuming projects would then 'watch' those keys (and get notified
> >>>> when they change), reloading their policy whenever the other service
> >>>> (keystone) writes a new key/policy.
> >>>>
> >>>> https://coreos.com/etcd/docs/latest/api.html#waiting-for-a-change
> >>>>
> >>>> or
> >>>> https://zookeeper.apache.org/doc/r3.4.5/zookeeperProgrammers.html#ch_zkWatches
> >>>>
> >>>> or (pretty sure consul has something similar),
> >>>>
> >>>> This is pretty standard stuff folks :-/ and it's how afaik things like
> >>>> https://github.com/skynetservices/skydns work (and more), and it would
> >>>> avoid that 'additional latency' (unless the other service is adjusting
> >>>> the policy key every millisecond, which seems sorta unreasonable).
> >>> Sure. Or have Keystone be a frontend for ansible/puppet/chef/.... What's
> >>> not clear to me in any of this is the benefit of having Keystone as a
> >>> frontend to policy configuration/changes, or involved in any real way
> >>> with authorization decisions. What issue is being solved by getting
> >>> Keystone involved?
> >>>
> >> I don't understand the puppet/chef connection, can you clarify?
> >>
> >> If I'm interpreting it right, I would assume it's the same reason that
> >> something like 'skydns' exists over etcd; to provide a useful API that
> >> focuses on the dns particulars that etcd will of course not have any
> >> idea about. So I guess the keystone API could(?)/would(?) then focus on
> >> policy particulars as its value-add.
> >>
> >> Maybe now I understand what you mean by puppet/chef, in that you are
> >> asking why skydns (for example) isn't just invoking
> >> puppet/chef/ansible to distribute/send out dns (dnsmasq) files? Is that
> >> your equivalent question?
> >
> > I'm focused on Nova/Keystone/OpenStack here, I'm sure skydns has good
> > reasons for their technical choices and I'm in no place to question
> > them.
> >
> > I'm trying to understand the value-add that Keystone could provide here.
> > Policy configuration is fairly static so I'm not understanding the
> > desire to put an API on top of it. But perhaps I'm missing the use case
> > here which is why I've been asking.
> >
> > My ansible/puppet/chef comparison was just that those are ways to
> > distribute static files and would work just as well as something built
> > on top of etcd/zookeeper. I'm not really concerned about how it's
> > implemented though. I'm just trying to understand if the desire is to
> > have Keystone handle this so that deployers don't need to work with
> > their configuration management system to configure policy files, or is
> > there something more here?
> >
> >
> 
> Gotcha, thanks for explaining.
> 
> I'll let others comment, but my semi-useful/semi-baked thoughts around 
> this are that, as a user, I would want to:
> 
> #1 Query keystone (or perhaps nova itself) for which APIs in nova I'm 
> allowed to call (without actually having to perform those same calls to 
> figure it out); i.e., tell me how my known role/user/tenant in 
> <something> maps to the policy stored (somewhere in some project) so I 
> can make smart decisions about which APIs I can be calling.

So we are actually looking at implementing this in Nova, and Cinder is
looking at something similar. However, a key difference is that what you
as a user are allowed to do depends on more than just policy. So
"capabilities" (what we're calling it in Nova) will return what you're
allowed to do based on policy, hypervisor versions, flavor used, etc.

A challenge with doing this in Keystone is that there's no way for
Keystone to map the policies to the API calls in Nova. Frankly, we don't
have a way to do that in Nova either :) But we do have a tool in review
for exposing the list of policies that you will pass:
https://review.openstack.org/#/c/322944/
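
To make the #1 use case Josh describes concrete, here's a minimal,
hypothetical sketch of answering "which APIs would my roles let me call"
from a policy.json-style mapping. The rule syntax here ("role:<name>"
alternatives joined by " or ", "" or "@" meaning always allowed) is a
deliberate simplification, not oslo.policy's real rule language:

```python
# Hypothetical sketch: given a simplified policy.json-style mapping of API
# actions to required roles, list the actions a user's roles would allow.
# Real oslo.policy rules are richer (and, not, rule references, etc.).

def allowed_actions(policy, user_roles):
    """Return the API actions whose rule is satisfied by user_roles."""
    allowed = []
    for action, rule in policy.items():
        # Each alternative looks like "role:member"; any match suffices.
        for alt in (a.strip() for a in rule.split(" or ")):
            if alt in ("", "@"):                 # empty rule: always allowed
                allowed.append(action)
                break
            if alt.startswith("role:") and alt[len("role:"):] in user_roles:
                allowed.append(action)
                break
    return sorted(allowed)


policy = {
    "compute:get": "",                           # anyone
    "compute:create": "role:member or role:admin",
    "compute:delete": "role:admin",
}

print(allowed_actions(policy, {"member"}))
# ['compute:create', 'compute:get']
```

This is trivial when one service holds both the policy and the role data;
the point of contention in the thread is which service that should be.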

> 
> #2 Go to one place (the same place that has my roles and tenants and 
> such?) and ensure that changing roles or the like does not adversely 
> affect dependent systems that attach meaning to those roles (or, say, 
> make a policy in use become invalid; i.e., get an error like 'the change 
> you have requested violates the rules defined in nova policy and 
> therefore is invalid and can't be applied').
> 
> The #1 kind of use-case would of course be really easy if keystone had 
> knowledge of each project's 'policy.json' (or equivalent data 
> structure); and since keystone already has the role/user/tenant 
> information it would be straightforward to solve #2 there as well 
> (because keystone could reject a role change or tenancy change or user 
> change or ... if it violates or negatively affects some project's policy).
> 
> Of course if you distribute the policy information then each project would 
> have to implement #1, and there would need to be some two-way mechanism to 
> ensure #2 happens correctly (because if keystone just blindly makes 
> role/user/tenant changes it may violate a project's policy definition).
> 
> Just my current thoughts; I'm sure there are other thoughts around 
> distributing vs. centralizing and so on (I'm just hoping we can think 
> past the view that centralizing has to imply keystone being called as a 
> precursor to every single REST API call, if we use systems like 
> etcd/zookeeper/... smartly).

Thanks for those thoughts. This is the type of information I would like
to see in the spec that started this subthread.
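
For what it's worth, the watch-and-reload distribution Josh describes can
be sketched without any of etcd/zookeeper/consul, using local file mtime
polling as a stand-in for the watch. This is only an illustration of the
shape of the idea; in the real version an etcd/ZooKeeper watch callback
replaces the mtime check and fires only when keystone (or whoever owns
the key) writes a new policy:

```python
# Sketch of reload-on-change, with file mtime polling standing in for an
# etcd/ZooKeeper watch. The policy file format here is the familiar
# policy.json mapping of action -> rule string.
import json
import os


class PolicyCache:
    """Reload a policy.json-style file only when it actually changes."""

    def __init__(self, path):
        self.path = path
        self.mtime = None
        self.rules = {}

    def maybe_reload(self):
        """Return True if the policy file changed and was reloaded."""
        mtime = os.stat(self.path).st_mtime
        if mtime == self.mtime:
            return False            # unchanged: no re-parse, no added latency
        with open(self.path) as f:
            self.rules = json.load(f)
        self.mtime = mtime
        return True
```

The relevant property for the latency question raised earlier in the
thread: the check (or the watch callback) runs outside the request path,
so enforcing policy on each API call remains a local in-memory lookup.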


> 
> -Josh
> 
