[openstack-dev] [policy] [congress] Protocol for Congress --> Enactor

Gregory Lebovitz gregory.ietf at gmail.com
Sat Nov 1 18:13:53 UTC 2014

Summary from IRC chat 10/14/2014 on weekly meeting [1] [2]

Topic:  Declarative Language for Congress —> Enactor/Enforcer

Question: Shall we specify a declarative language for communicating policy
configured in Congress to enactors / enforcement systems?

Hypothesis (derived at conclusion of discussion):
     - Specifying a declarative protocol and framework for describing policy,
with extensible attribute/value fields described in a base ontology plus
additional affinity ontologies, is needed sooner rather than later, so that
it can be achieved as an end-state before too many Enactors dive in
     - We could achieve that specification once we know the right structure


   - Given the following framework:
   - Elements:
         - Congress - The policy description point, a place where:
            - (a) policy inputs are collected
            - (b) collected policy inputs are integrated
            - (c) policy is defined
            - (d) policy intent is declared to enforcing / enacting systems
             - (e) the state of the environment is observed, noting policy violations
          - Feeders - provide policy inputs to Congress
          - Enactors / Enforcers - receive policy declarations from
          Congress and enact / enforce the policy according to their capabilities
            - E.g. Nova for VM placement, Neutron for interface
            connectivity, FWaaS for access control, etc.
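The three roles above can be sketched in a few lines of Python. This is a minimal illustration of the framework, not Congress's actual API; every class and method name here is invented.

```python
# Illustrative sketch of the Congress / Feeder / Enactor roles described
# above. All names are hypothetical and do not reflect Congress's real API.

class Feeder:
    """Provides policy inputs to Congress."""
    def __init__(self, name, inputs):
        self.name = name
        self.inputs = inputs  # e.g. observed datacenter state

class Enactor:
    """Receives policy declarations and enforces what it can."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def enact(self, declaration):
        # Enforce only the parts this Enactor is capable of.
        return {k: v for k, v in declaration.items()
                if k in self.capabilities}

class Congress:
    """Collects inputs, integrates them, and declares policy intent."""
    def __init__(self):
        self.inputs = {}   # (a)/(b) collected and integrated policy inputs
        self.policy = {}   # (c) defined policy

    def collect(self, feeder):
        self.inputs[feeder.name] = feeder.inputs   # (a)/(b)

    def define(self, policy):
        self.policy.update(policy)                 # (c)

    def declare(self, enactor):
        return enactor.enact(self.policy)          # (d)

congress = Congress()
congress.collect(Feeder("nova-datasource", {"vm_count": 3}))
congress.define({"access": "deny", "placement": "zone-a"})
fwaas = Enactor("FWaaS", ["access"])   # enforces access control only
print(congress.declare(fwaas))         # {'access': 'deny'}
```

Note the split of responsibility: Congress declares intent, and each Enactor picks out only what it is capable of enforcing.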

What will the protocol be for the Congress —> Enactors / Enforcers?

thinrichs:  we've been assuming that Congress will leverage whatever
the Enactors (policy engines) and Feeders (and more generally datacenter
services) that exist are using. For basic datacenter services, we had
planned on teaching Congress what their API is and what it does. So there's
no new protocol there—we'd just use HTTP or whatever the service
expects. For Enactors, there are 2 pieces: (1) what policy does Congress
push and (2) what protocol does it use to do that? We don't know the answer
to (1) yet.  (2) is less important, I think. For (2) we could use opflex,
for example, or create a new one. (1) is hard because the Enactors likely
have different languages that they understand. I’m not aware of anyone
thinking about (2). I’m not thinking about (2) b/c I don't know the answer
to (1). The *really* hard thing to understand IMO is how these Enactors
should cooperate (in terms of the information they exchange and the
functionality they provide).  The bits they use to wrap the messages they
send while cooperating is a lower-level question.

jasonsb & glebo: feel the need to clarify (2)

glebo: if we come out strongly with a framework spec that identifies
a protocol for (2), and make it clear that Congress participants, including
several data center Feeders and Enactors, are in consensus, then the other
Feeders & Enactors will line up, in order to be useful in the modern
deployments. Either that, or they will remain isolated from the
new environment, or their customers will have to create custom connectors
to the new environment. It seems that we have 2 options. (a) Congress
learns any language spoken by Feeders and Enactors, or (b) specifies a
single protocol for Congress —> Enactors policy declarations, including a
highly adaptable public registry(ies) for defining the meaning of content
blobs in those messages. For (a) Congress would get VERY bloated with an
abstraction layer, modules, semantics and state for each different language
it needed to speak. And there would be 10s of these languages. For (b),
there would be one way to structure messages that were constructed of blobs
in (e.g.) some sort of Type/Length/Value (TLV) method, where the Types and
Values were specified in some Internet registry.
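Option (b)'s message structure can be sketched concretely. The following is a hypothetical TLV framing, assuming 2-byte Type and Length fields; the type codes and registry entries are invented for illustration, not drawn from any actual specification.

```python
# Hypothetical TLV framing for option (b): each attribute is encoded as
# Type (2 bytes) / Length (2 bytes) / Value, with Type codes assigned by
# a public registry. All codes below are invented examples.
import struct

REGISTRY = {0x0001: "IPv4Addr", 0x0101: "AccessControlAction"}

def encode_tlv(type_code, value: bytes) -> bytes:
    """Pack one Type/Length/Value attribute (network byte order)."""
    return struct.pack("!HH", type_code, len(value)) + value

def decode_tlvs(blob: bytes):
    """Walk a message, resolving each Type through the registry."""
    out, i = [], 0
    while i < len(blob):
        type_code, length = struct.unpack_from("!HH", blob, i)
        value = blob[i + 4 : i + 4 + length]
        out.append((REGISTRY.get(type_code, "vendor-specific"), value))
        i += 4 + length
    return out

msg = encode_tlv(0x0001, b"\x0a\x00\x00\x01") + encode_tlv(0x0101, b"Deny")
print(decode_tlvs(msg))
# [('IPv4Addr', b'\n\x00\x00\x01'), ('AccessControlAction', b'Deny')]
```

The point of the registry lookup is that a receiver can skip any Type it does not recognize (it knows the length) without understanding its semantics.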

jasonsb: Could we attack this from the opposite direction? E.g. if Congress
wanted to provide an operational dashboard to show if things are in
compliance, it would be better served by receiving the state and stats from
the Enactors in a single protocol. Could a dashboard like this be a carrot
to lure the various players into a single protocol for Congress —> Enactor?

glebo & jasonsb: If Congress has to give Enactors precise instructions on
what to do, then Congress will bloat, having to have intelligence about
each Enactor type, and hold its state and such. If Congress can deliver
generalized policy declarations, and the Enactor is responsible for
interpreting it, and applying it, and gathering and analyzing the state so
that it knows how to react, then the intelligence and state that it is
specialized in knowing will live in the Enactor. A smaller Congress is
better, and this provides cleaner “layering” of the problem space overall.

thinrichs: would love to see a single (2) language, but doesn’t see that as
a practical solution in the short term, and is dubious that anyone will use
Congress if it only works when all of the Enactors speak the Congress
language. It’s an insertion question.

glebo:  the key is NOT the bits on the wire, not at all (though having that
format set is VERY helpful). The key is the lexicon, the registry of shared
types/attributes and value codes that (i) get used over and over again
across many Enactor/Enforcement domains, and (ii) have domain-specific
registries for domain-only types / attributes & values. E.g. IPv4addr will
be in the all-domains registry, thus a (i), while AccessControlAction and its
value codes of Permit, Deny, Reset, SilentDrop, Log, etc., will live in the
(ii) FWaaS registry only. Just examples. This way, each domain (e.g. Neutron
L2/L3, Nova-placement, FWaaS, LBaaS, StorageaaS) can define their own
attributes and publish the TLVs for them, and do so VERY quickly,
independent of the rest of the Congress domains.
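The two-tier lexicon glebo describes (an all-domains base registry plus per-domain registries) might look like this; the entries mirror the examples in the text, but the structure and lookup function are illustrative assumptions, not a proposed schema.

```python
# Sketch of the two-tier lexicon: (i) an all-domains base registry plus
# (ii) domain-specific registries. Entry layout is an invented example.
BASE_REGISTRY = {
    "IPv4Addr": {"kind": "value"},   # (i) used across all Enactor domains
}

DOMAIN_REGISTRIES = {
    "FWaaS": {                        # (ii) FWaaS-only types and values
        "AccessControlAction": {
            "kind": "enum",
            "values": ["Permit", "Deny", "Reset", "SilentDrop", "Log"],
        },
    },
}

def lookup(attr, domain=None):
    """Resolve an attribute: base registry first, then the domain's own."""
    if attr in BASE_REGISTRY:
        return BASE_REGISTRY[attr]
    if domain and attr in DOMAIN_REGISTRIES.get(domain, {}):
        return DOMAIN_REGISTRIES[domain][attr]
    raise KeyError(f"{attr!r} not registered for domain {domain!r}")

print(lookup("IPv4Addr"))
print(lookup("AccessControlAction", domain="FWaaS")["values"][:2])
```

Because each domain owns its own registry, FWaaS can add a new action code without coordinating with Nova-placement or any other Congress domain.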

thinrichs & glebo: Agree that domains should be empowered to build their
own ontologies. We’ve shied away from building them in Congress because we don’t
believe we can succeed, too many different ontologies between problem
domains (e.g., FWaaS vs StorageaaS) as well as vertical markets (e.g.,
Finance vs. Tech). E.g. maybe all the major financials get together and
develop their own ontology and publish it, based on their needs. And there
will probably need to be a base set of Types/Attributes for building policy,
used by roughly 80% of the varying ontology domains, that would need to be
defined by Congress to start; the specific Enactor groups can then create
their own extension ontologies.

glebo: So we need to specify a language / protocol for these
various communities and vendors to send/receive their declarations of
policy that are expressed using a wide set of types/attributes and values
from a registry? And there would need to be allowance for vendor-specific
ontologies.

thinrichs & glebo: we need to look at this from the perspective of
insertion. What's described above is a great end-state. How do we get from
today to insertion to desired end-state? Once we gain traction, customers
will start wanting more, and at that point we'll have the leverage to tell
them "well we need the other vendors of services that we're supposed to
manage to utilize some standard interface/language/protocol/whatever”, then
the standardization of ontologies is very useful.

For some Enactor/Enforcer (we used GBP since it's logic-y), figure out how
Congress and that Enactor *should* interoperate. Some questions to think
about:

   - What information do they need to exchange?
   - What if someone other than Congress gives that Enactor instructions?
   - What happens when the policy cannot be completely delegated to Enactor?
   - What happens when Policy is delegated to Enactor and Enactor says, “I
   can’t do that today.”?
   - What if a hierarchy of policy (reflecting organizational stake
   holders) exists?
   - What if coordination is needed between two Enactor engines? The
   Enactor can’t bear sole burden in this case, can it?

Possible path forward, that considers insertion to end-state:

   - Desired end-state for Congress —> Enactor declarations:
      - single carrying protocol for bits on wire and ordering, etc.
      - single “base” ontology covering the 80% of types needed, published
      publicly (registry)
      - multiple domain-specific ontologies for various affinity groups,
      published publicly (registries)
      - vendor-specific ontologies, published publicly (registries). We want
      to keep these as small as possible, and encourage participation in the
      base or affinity group registries as much as possible.
   - Note that there really are only 4 or 5 Enactor types today (although
   many more are popping up very quickly)
   - We want to put a stake in the ground now, ASAP, so emerging Enactor
   domains and vendors can start immediately toward the end-state
   - Meanwhile, we will support existing APIs (a very small number) for
   existing Enactor types, but on a short term basis only, with a published
   plan to deprecate the use of the multiple, and transition toward the use of
   the one protocol with many ontologies.
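Pulling the pieces of that end-state together, a single Congress —> Enactor declaration might carry attributes qualified by the registry that defines them. This is one possible shape, assuming JSON as the carrying format; the namespaces and attribute names are made up for illustration.

```python
# Hypothetical end-state declaration: one carrier format, each attribute
# qualified by the registry (base, affinity, or vendor) that defines it.
# Namespaces and attribute names are illustrative only.
import json

declaration = {
    "version": 1,
    "attributes": [
        {"ns": "base",        "type": "IPv4Addr",            "value": "10.0.0.1"},
        {"ns": "fwaas",       "type": "AccessControlAction", "value": "Deny"},
        {"ns": "vendor.acme", "type": "XThreatScore",        "value": 7},
    ],
}

wire = json.dumps(declaration)   # the single carrying protocol
parsed = json.loads(wire)

# An Enactor handles the namespaces it understands and skips the rest,
# so vendor extensions never break interoperability.
known = {"base", "fwaas"}
handled = [a for a in parsed["attributes"] if a["ns"] in known]
print([a["type"] for a in handled])   # ['IPv4Addr', 'AccessControlAction']
```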

discussion started #openstack-meeting-3 Oct 14, 2014 at 17:24:00 [1]
Discussion then moved to #congress 18:01:40 [2]

[1] http://eavesdrop.openstack.org/meetings/congressteammeeting/2014/congressteammeeting.2014-10-14-17.01.log.html
[2] (could not find the transcript for #congress. Pointer appreciated)

Hope it helps,
Open industry related email from
Gregory M. Lebovitz