[openstack-dev] Congress: an open policy framework

Tim Hinrichs thinrichs at vmware.com
Mon Nov 18 23:19:08 UTC 2013


Inline.

----- Original Message -----
| From: "Adam Young" <ayoung at redhat.com>
| To: openstack-dev at lists.openstack.org
| Sent: Friday, November 15, 2013 2:46:02 PM
| Subject: Re: [openstack-dev] Congress: an open policy framework
|
| On 11/14/2013 11:39 AM, Tim Hinrichs wrote:
| > I completely agree that making Congress successful will rely
| > crucially on addressing performance and scalability issues.  Some
| > thoughts...
| >
| > 1. We're definitely intending to cache data locally to avoid
| > repeated API calls.  In fact, a prototype cache is already in
| > place.  We haven't yet hooked up API calls (other than to AD).  We
| > envision some data sources pushing us data (updates) and some data
| > sources requiring us to periodically pull.
| So, I think that you should start with what we already have working.
| Policy distribution is a part of Keystone now.  Policy processing uses
| Oslo policy.py.  I am not sure why it would make sense to have a
| separate program.  I would be more than happy to house this work
| inside the existing Keystone architecture.
|
| >
| > 2. My main concern for scalability/performance is for proactive
| > enforcement, where at least conceptually Congress is on the
| > critical path for API calls.
| >
| > One thought is that we could splice out, say, the network portion
| > of the Congress policy and push it down into neutron, assuming
| > neutron could enforce that policy.  This would at least eliminate
| > cross-component communication.  It would require a policy engine
| > on each of the OS components, but (a) there already is one on many
| > components and (b) if there isn't, we can rely on reactive
| > enforcement.
| >
| > The downsides with policy-caching on other OS components are the
| > usual problems with staleness and data replication, e.g. maybe
| > we'd end up copying all of nova's VM data into neutron so that
| > neutron could enforce its policy.  But because we have reactive
| > enforcement to rely on, we could always approximate the policy
| > that we push down (conservatively) to catch the common mistakes
| > and leave the remainder to reactive enforcement.  For example, we
| > might be able to auto-generate the current policy.json files for
| > each component from Congress's policy.
| >
| > Keeping Congress out of the critical path for every API call is one
| > of the reasons it was designed to do reactive enforcement as well
| > as proactive enforcement.
|
| I think all these questions have come up multiple times with Keystone.
| There was a significant discussion about configuration management,
| which would encompass policy.  We were discussing the Key Distribution
| Service as well.  It basically comes down to "inside the undercloud"
| versus outside (end user facing).
|
| Termie's suggestion was that we needed to consider using something
| like Zookeeper.  I think he is right.
|

I've heard different people mean different things by "configuration mgmt".  Some are talking about managing server configurations, e.g. with Puppet.  Others are talking about managing the overall configuration of the cloud across components (e.g. every network connected to a VM must be owned by someone in the same group as the VM's owner).  Congress is intended to support whatever configuration management/policy you can write in terms of the cloud services you have available.  We don't differentiate between configuration management and policy.
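
To make that concrete, here is a rough Python sketch (illustrative only; the table names and helper are invented for the example, not Congress code) of checking the cross-component rule above against locally cached Nova/Neutron/identity data:

    # Hypothetical sketch: find (vm, network) pairs violating the rule
    # "every network connected to a VM must be owned by someone in the
    # same group as the VM's owner".  The dictionaries stand in for rows
    # cached from Nova, Neutron, and an identity source; none of these
    # names are real Congress or OpenStack APIs.

    vm_owner = {"vm1": "alice", "vm2": "bob"}            # vm -> owner
    vm_network = [("vm1", "net1"), ("vm2", "net2")]      # vm/network links
    network_owner = {"net1": "carol", "net2": "bob"}     # network -> owner
    user_groups = {"alice": {"eng"}, "bob": {"ops"}, "carol": {"finance"}}

    def violations():
        """Yield (vm, network) pairs whose owners share no group."""
        for vm, net in vm_network:
            vm_groups = user_groups.get(vm_owner[vm], set())
            net_groups = user_groups.get(network_owner[net], set())
            if not (vm_groups & net_groups):
                yield vm, net

    print(list(violations()))   # -> [('vm1', 'net1')]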

| >
| > 3. Another option is to build high-performance optimizations for
| > certain fragments of the policy language.  Then the cloud
| > architect can decide whether she wants to utilize a more
| > expressive language whose performance is worse or a less
| > expressive language whose performance is better.
|
| So we should not be writing yet another policy language.  We have a
| simple one in OpenStack already, and there are many, many ones out
| there already.  XACML is the standard for authorization decisions, as
| an example.  Puppet falls into this role as well.

Not all languages are created equal, which is especially true of declarative/policy languages.  The one we've prototyped is more expressive than the languages you mention (though general-purpose, not domain-specific like Puppet).  We're not inventing yet another policy language--we're using a core fragment of SQL that's been studied and used for decades.  We're simply advocating its use.  We're also building a policy engine around it for both proactive enforcement (stopping violations before they occur) and reactive enforcement (correcting violations after they occur).  We're also adding algorithms that help an admin understand her current policy as well as hypothetical changes to that policy and its data sources.  Finally, we're providing data integration functionality for the cloud services over which policy is written.
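
And purely to illustrate the "fragment of SQL" point, the same rule from the earlier sketch can be stated as a declarative query over those cached tables rather than as a hand-written loop (again, the schema below is invented for the example, not Congress's actual syntax or data model):

    # Illustration only: the owners-must-share-a-group rule written as a
    # declarative query.  The schema is made up for this example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE vm_owner      (vm TEXT, owner TEXT);
        CREATE TABLE vm_network    (vm TEXT, network TEXT);
        CREATE TABLE network_owner (network TEXT, owner TEXT);
        CREATE TABLE user_group    (usr TEXT, grp TEXT);
        INSERT INTO vm_owner      VALUES ('vm1','alice'), ('vm2','bob');
        INSERT INTO vm_network    VALUES ('vm1','net1'), ('vm2','net2');
        INSERT INTO network_owner VALUES ('net1','carol'), ('net2','bob');
        INSERT INTO user_group    VALUES ('alice','eng'), ('bob','ops'),
                                         ('carol','finance');
    """)

    # A violation is a VM/network link whose owners share no group.
    rows = conn.execute("""
        SELECT vn.vm, vn.network
        FROM vm_network vn
        JOIN vm_owner vo       ON vo.vm = vn.vm
        JOIN network_owner nwo ON nwo.network = vn.network
        WHERE NOT EXISTS (
            SELECT 1 FROM user_group g1 JOIN user_group g2 ON g1.grp = g2.grp
            WHERE g1.usr = vo.owner AND g2.usr = nwo.owner)
    """).fetchall()

    print(rows)   # -> [('vm1', 'net1')]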

|
| I don't think this effort makes sense as a stand alone project.
| Either it should be part of Keystone if it is just for the existing
| access policy, or it should be part of a larger configuration
| management effort.  I realize that there is a commonality of effort
| in policy enforcement that sounds like it should be a stand alone
| project, but I think that there is much more to the conversation
| than just that.

If the right thing for the OS ecosystem is that Congress is absorbed into another component, we're happy to do that.  We didn't think Keystone fit the bill, but maybe we were wrong.  Can you help me understand some more about Keystone and how Congress would fit in?

As I understand it, Keystone is an identity service, which to me means it handles authentication and manages user attributes.  When combined with oslo-policy, it also handles authorization insofar as user attributes suffice for making authorization decisions.  Is this basically right?
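
For contrast, here is a minimal sketch of the kind of check I mean, assuming a policy.json-style rule such as admin_or_owner that looks only at user attributes and the request itself (the function below is illustrative, not oslo-policy's actual API):

    # Illustrative only: an authorization decision based purely on user
    # attributes and the request, in the spirit of a policy.json rule
    # like "rule:admin_or_owner".  This is not oslo-policy's real API.

    def admin_or_owner(creds, target):
        """Allow admins, or the user who owns the target resource."""
        return ("admin" in creds.get("roles", [])
                or creds.get("user_id") == target.get("owner_id"))

    creds = {"user_id": "alice", "roles": ["member"]}
    target = {"owner_id": "alice"}          # e.g. the VM being acted on
    print(admin_or_owner(creds, target))    # -> True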

The main reason I didn't see Congress as fitting under the Keystone umbrella is that Congress deals with authorization policies based on data that I wouldn't expect within an identity service.  Instead of a policy that authorizes an API request based solely on user attributes and the parameters of the request, Congress policies can be conditioned on all kinds of information from arbitrary cloud data sources, e.g. antivirus scanners, VM attributes, network attributes, inventory management systems.

The other thing that made me think Congress might not belong in Keystone was its policy engine.  We envision Congress enforcing the policy both proactively (stopping violations before they happen) and reactively (fixing violations after they occur).  The proactive version is a natural fit with oslo-policy/Keystone.  The reactive version requires a policy engine that constantly listens for updates to data sources, checks for policy violations, and takes action to correct violations.  There are also plans for a dashboard interface for admins to control reactive enforcement and investigate the policy, its violations, and the consequences of policy changes.  This operational model didn't seem to fit with what I knew about Keystone.
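
Roughly, I picture the reactive side as a loop like the following sketch (the queue and the placeholder functions are invented for illustration, not Congress's actual interfaces):

    # Hypothetical sketch of the reactive side: consume updates pushed by
    # data sources, recompute violations over the cached state, and invoke
    # corrective actions.  The queue, check_violations(), and remediate()
    # are placeholders invented for this example, not Congress's design.
    import queue

    updates = queue.Queue()     # data sources push (table_name, row) here
    cloud_state = {}            # locally cached tables, keyed by table name

    def check_violations(state):
        """Placeholder: return policy violations found in the cached state."""
        return []

    def remediate(violation):
        """Placeholder: call back into the owning service to fix a violation."""
        print("correcting:", violation)

    def enforcement_loop():
        while True:
            table, row = updates.get()                 # block until new data
            cloud_state.setdefault(table, []).append(row)
            for violation in check_violations(cloud_state):
                remediate(violation)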

Again, we're happy to do the right thing here.  Just not sure what the right thing is.

Tim


