[openstack-dev] Fwd: FW: [Neutron] Group Based Policy and the way forward

Kevin Benton blak111 at gmail.com
Wed Aug 6 19:03:09 UTC 2014


Hi Aaron,

These are good questions, but can we move this to a different thread
labeled "what is the point of group policy?"

I don't want to derail this one again; we should stick to Salvatore's
options for the way forward with these code changes.
On Aug 6, 2014 12:42 PM, "Aaron Rosen" <aaronorosen at gmail.com> wrote:

> Hi,
>
> I've made my way through the group-based policy code and blueprints, and
> I'd like to ask several questions about it. My first question is: what
> advantage does the newly proposed group-based policy model buy us?
>
>
> Bob says, "The group-based policy BP approved for Juno addresses the
>> critical need for a more usable, declarative, intent-based interface for
>> cloud application developers and deployers, that can co-exist with
>> Neutron's current networking-hardware-oriented API and work nicely with all
>> existing core plugins. Additionally, we believe that this declarative
>> approach is what is needed to properly integrate advanced services into
>> Neutron, and will go a long way towards resolving the difficulties so far
>> trying to integrate LBaaS, FWaaS, and VPNaaS APIs into the current Neutron
>> model."
>
> My problem with the current blueprint and the comment above is that
> neither provides any evidence or data showing where the current neutron
> abstractions (ports/networks/subnets/routers) cause difficulties, or what
> benefit this new model will provide.
>
> In the currently proposed implementation of group policy, the new model
> maps onto the existing neutron primitives and the neutron back end(s)
> remain unchanged. Because the new abstractions can be mapped onto the
> previous ones, I'm curious why we want to move this complexity into
> neutron rather than handle it externally, similarly to how Heat works, or
> in a client that abstracts this complexity on its own.
>
> From the group-based policy blueprint that was submitted [1]:
>
>
>> The current Neutron model of networks, ports, subnets, routers, and
>> security groups provides the necessary building blocks to build a logical
>> network topology for connectivity. However, it does not provide the right
>> level of abstraction for an application administrator who understands the
>> application's details (like application port numbers), but not the
>> infrastructure details like networks and routes.
>
> It looks to me like application administrators still need to understand
> network primitives, as the concepts of networks/ports/routers are still
> present, just carrying different names. For example, an ENDPOINT_GROUP
> has an attribute l2_policy_id, which maps to something you use to
> describe an L2 network and which in turn contains an attribute
> l3_policy_id used to describe an L3 network. This looks similar to the
> abstraction we have today, where an l2_policy (network) can have multiple
> l3_policies (subnets) mapping to it. Because of this I'm curious how the
> GBP abstraction really provides a different level of abstraction for
> application administrators.
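>
> (For concreteness, here is roughly what that existing structure looks
> like with today's API: a minimal sketch assuming python-neutronclient's
> v2.0 bindings, with made-up names, credentials, and CIDRs.)
>
>     from neutronclient.v2_0 import client as neutron_client
>
>     neutron = neutron_client.Client(
>         username='demo', password='secret', tenant_name='demo',
>         auth_url='http://controller:5000/v2.0')
>
>     # What GBP calls an l2_policy is essentially a network today...
>     net = neutron.create_network({'network': {'name': 'app-net'}})['network']
>
>     # ...and the L3 side is the subnet(s) attached to it.
>     neutron.create_subnet({'subnet': {'network_id': net['id'],
>                                       'ip_version': 4,
>                                       'cidr': '10.0.1.0/24'}})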
>
>
>> Not only that, the current abstraction puts the burden of maintaining the
>> consistency of the network topology on the user. The lack of application
>> developer/administrator-focused abstractions supported by a declarative
>> model makes it hard for those users to consume Neutron as a connectivity
>> layer.
>
> What is it in the current abstraction that puts the burden of maintaining
> the consistency of the network topology on users? It seems to me that the
> complexity of having to know about topology should be abstracted at the
> client layer if desired (and neutron should expose the basic building
> blocks for networking). For example, Horizon/Heat or the CLI could hide
> the topology requirement by automatically creating a GROUP (which is a
> network+subnet on a router uplinked to an external network), removing the
> need for the tenant to understand topology. In addition, as I see it,
> topology is still present in the proposed group policy model, just
> expressed in a different way.
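>
> As a rough sketch of what such a client-side helper could look like
> (assuming python-neutronclient's v2.0 bindings; the function and names
> below are purely illustrative, not an actual proposal):
>
>     def create_group(neutron, name, cidr, external_net_id):
>         # 'neutron' is an instantiated neutronclient.v2_0.client.Client.
>         # A "group" here is a network + subnet on a router uplinked to an
>         # external network, so the tenant never touches topology directly.
>         net = neutron.create_network({'network': {'name': name}})['network']
>         subnet = neutron.create_subnet(
>             {'subnet': {'network_id': net['id'],
>                         'ip_version': 4,
>                         'cidr': cidr}})['subnet']
>         router = neutron.create_router(
>             {'router': {'name': name + '-router'}})['router']
>         neutron.add_gateway_router(router['id'],
>                                    {'network_id': external_net_id})
>         neutron.add_interface_router(router['id'],
>                                      {'subnet_id': subnet['id']})
>         return net, subnet, router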
>
> From the proposed change section the following is stated:
>
>
>> This proposal suggests a model that allows application administrators to
>> express their networking requirements using group and policy abstractions,
>> with the specifics of policy enforcement and implementation left to the
>> underlying policy driver. The main advantage of the extensions described
>> in this blueprint is that they allow for an application-centric interface
>> to Neutron that complements the existing network-centric interface.
>
>
> How is the application-centric interface complementary to the
> network-centric interface? Is the intention that one would use both
> interfaces at the same time?
>
>> More specifically the new abstractions will achieve the following:
>> * Show clear separation of concerns between application and infrastructure
>> administrator.
>>
>
> I'm not quite sure I understand this point; how is this different from
> what we have today?
>
>
>> - The application administrator can then deal with a higher level
>> abstraction that does not concern itself with networking specifics like
>> networks/routers/etc.
>>
>
> It seems like the proposed abstraction still requires one to concern
> oneself with networking specifics (l2_policies, l3_policies). I'd really
> like to see more evidence backing this claim. Now users have to deal with
> specifics like: Endpoint, Endpoint Group, Contract, Policy Rule,
> Classifier, Action, Filter, Role, Contract Scope, Selector, Policy Label,
> Bridge Domain, Routing Domain...
>
>
>> - The infrastructure administrator will deal with infrastructure-specific
>> policy abstractions and not have to understand application-specific
>> concerns like specific ports that have been opened or which of them expect
>> to be limited to secure or insecure traffic. The infrastructure admin will
>> also have the ability to direct which technologies and approaches are used
>> in rendering, for example, whether VLAN or VxLAN is used.
>>
>
> How is this different from what we have now? Today in neutron the
> infrastructure administrator already deals with infrastructure-specific
> abstractions, i.e., external networks (networks that uplink to the
> physical world), and does not have to understand any specific connectivity
> concerns of the application, just as this model claims to provide. Since
> the beginning, neutron has given infra admins the ability to decide which
> back-end technologies (VXLAN/VLAN/etc.) are used, and these are abstracted
> away from the tenant.
>
>> - Allow the infrastructure admin to introduce connectivity constraints
>> without the application administrator having to be aware of it (e.g. audit
>> all traffic between two application tiers).
>>
>
> I think this is a good point, and I see how this works in the proposed
> Group Based Policy abstractions. That said, I think there are other ways
> to provide this type of interface rather than redefining the current
> abstractions. For example, providing additional attributes on the
> existing primitives (ports/networks/routers) to convey this information,
> or a grouping concept similar to how the LBaaS/Security Group APIs were
> implemented.
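>
> For example, the Security Group API already gives us a grouping concept
> that sits on top of ports (a rough sketch, again assuming an instantiated
> python-neutronclient v2.0 client as above and made-up names):
>
>     # A group and its allowed traffic are defined once...
>     sg = neutron.create_security_group(
>         {'security_group': {'name': 'web-tier'}})['security_group']
>     neutron.create_security_group_rule(
>         {'security_group_rule': {'security_group_id': sg['id'],
>                                  'direction': 'ingress',
>                                  'protocol': 'tcp',
>                                  'port_range_min': 80,
>                                  'port_range_max': 80}})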
>
>
>
>> * Allow for independent provider/consumer model with late binding and
>> n-to-m relationships between them.
>>
>
> Same points as above. I still don't understand what changing this model
> gives us that we don't already have, or can't already build, today. Also,
> reading through the current model, it seems to tie the bindings to
> endpoint groups (networks) rather than endpoints (ports), which seems like
> a restriction we'd like to avoid. What I mean by this is that security
> groups now appear to be mapped to networks rather than ports, requiring
> one to break an application up onto different networks (which we do not
> require today).
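>
> To make the contrast concrete (same assumptions as the sketches above;
> the IDs are placeholders): today two ports on the *same* network can be
> placed in different groups, because the grouping is applied per port:
>
>     # web and db VMs can share one network but carry different groups
>     neutron.update_port(web_port_id,
>                         {'port': {'security_groups': [web_sg_id]}})
>     neutron.update_port(db_port_id,
>                         {'port': {'security_groups': [db_sg_id]}})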
>
>
>> * Allow for automatic orchestration that can respond to changes in policy
>> or
>> infrastructure without requiring human interaction to translate intent to
>> specific actions.
>
>
> I'd be curious to hear more about this and how changing the abstractions
> makes it easier. How does the automatic orchestration work? There is
> actually a Heat blueprint that talks about getting infrastructure to a
> desired state without human interaction (and it is able to do this without
> changing any of the abstractions in neutron/nova):
> https://review.openstack.org/#/c/95907/
>
> Another concern is that the new API introduces several new constructs
> that I think users will have difficulty understanding:
>
> The following new terminology is being introduced:
>> **Endpoint (EP):** An L2/L3 addressable entity.
>> **Endpoint Group (EPG):** A collection of endpoints.
>> **Contract:** It defines how the application services provided by an EPG
>> can be accessed. In effect it specifies how an EPG communicates with
>> other EPGs. A Contract consists of Policy Rules.
>> **Policy Rule:** These are individual rules used to define the
>> communication criteria between EPGs. Each rule contains a Filter,
>> Classifier, and Action.
>> **Classifier:** Characterizes the traffic that a particular Policy Rule
>> acts on. The corresponding action is taken on traffic that satisfies
>> these classification criteria.
>> **Action:** The action that is taken for a matching Policy Rule defined
>> in a Contract.
>> **Filter:** Provides a way to tag a Policy Rule with Capability and Role
>> labels.
>> **Capability:** It is a Policy Label that defines what part of a Contract
>> a particular EPG provides.
>> **Role:** It is a Policy Label that defines what part of a Contract an
>> EPG wants to consume.
>> **Contract Scope:** An EPG conveys its intent to provide or consume a
>> Contract (or part of it) by defining a Contract Scope which references
>> the target Contract.
>> **Selector:** A Contract Scope can define additional constraints around
>> choosing the matching provider or consumer EPGs for a Contract via a
>> Selector.
>> **Policy Labels:** These are labels contained within a namespace
>> hierarchy and used to define the Capability and Role tags used in
>> Filters.
>> **Bridge Domain:** Used to define an L2 boundary and impose additional
>> constraints (such as no broadcast) within that L2 boundary.
>> **Routing Domain:** Used to define a non-overlapping IP address space.
>
>
> I was also not able to find out how policy labels, selectors,
> capabilities, filters, and roles are used or how they work (I haven't yet
> found patches that use them either).
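>
> For what it's worth, here is how I currently understand those objects fit
> together, based only on the definitions above (plain illustrative Python
> dicts, not actual API payloads; every name and value is made up):
>
>     contract = {
>         'name': 'web-access',
>         'policy_rules': [{
>             'classifier': {'protocol': 'tcp', 'port': 80},
>             'action': 'allow',
>             'filter': {'capability': 'serve-http',   # Policy Labels
>                        'role': 'use-http'},
>         }],
>     }
>
>     web_epg = {
>         'name': 'web-tier',
>         'endpoints': ['ep-1', 'ep-2'],      # L2/L3 addressable entities
>         'bridge_domain': 'bd-1',            # L2 boundary
>         # Intent to provide the contract (or part of it):
>         'contract_scopes': [{'contract': 'web-access',
>                              'type': 'provider',
>                              'selector': None}],
>     }
>
> Corrections welcome if I've misread the model.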
>
> Lastly, I believe the neutron API was built with the goal of simplicity,
> providing an abstraction that represents how networks work (similar to
> nova for servers). It provides the basic building blocks that allow one
> to implement any networking concept or orchestration they desire on top
> of it. I think this speaks to the point that the API we have today is
> flexible enough for the concept of group policy to be mapped directly on
> top of it. I do see the appeal of a higher-level abstraction, though I
> don't really understand the benefit that this new model buys us. I look
> forward to continuing this discussion.
>
> Best,
>
> Aaron
>
> [1] -
> https://github.com/openstack/neutron-specs/blob/master/specs/juno/group-based-policy-abstraction.rst
>
> On Wed, Aug 6, 2014 at 11:04 AM, Jay Pipes <jaypipes at gmail.com> wrote:
>
>> On 08/06/2014 04:30 AM, Stefano Santini wrote:
>>
>>> Hi,
>>>
>>> In my company (Vodafone), we (DC network architecture) are following
>>> the work happening on Group Based Policy very closely, since we see
>>> great value in the new paradigm to drive network configurations with
>>> advanced logic.
>>>
>>> We're working on a new production project for an internal private cloud
>>> deployment targeting the Juno release, where we plan to introduce
>>> capabilities based on Group Policy, and we don't want to see it delayed.
>>> We strongly request/vote to see this completed as proposed, without such
>>> changes, so that we can move forward with the evolution of our network
>>> capabilities.
>>>
>>
>> Hi Stefano,
>>
>> AFAICT, there is nothing that can be done with the GBP API that cannot be
>> done with the low-level regular Neutron API.
>>
>> Further, if the Nova integration of the GBP API does not occur in the
>> Juno timeframe, what benefit will GBP in Neutron give you? Specifics on the
>> individual API calls that you would change would be most appreciated.
>>
>> Thanks in advance for your input!
>> -jay
>>
>>
>>
>
>
>
>