[openstack-dev] [neutron][group-based-policy] GP mapping driver

Armando M. armamig at gmail.com
Tue May 27 16:28:35 UTC 2014


Hi Mohammad,

Thanks, I understand now. I appreciate that the mapping driver is one way
of doing things and that the design has been circulating for a while. I
wish I could follow every channel, but unfortunately the OpenStack
information overload is astounding and sometimes I fail :) Gerrit is the
channel I strive to follow, and that is where I saw the code for the first
time, hence my feedback.

It's worth noting that the PoC design document is (as it should be) very
high level, and most of my feedback applies to the implementation decisions
being made. That said, I still have doubts that an ML2-like approach is
really necessary for GP, and I welcome input to help me change my mind :)

Thanks
Armando
On May 27, 2014 5:04 PM, "Mohammad Banikazemi" <mb at us.ibm.com> wrote:

> Thanks for the continued interest in discussing Group Policy (GP). I
> believe these discussions with the larger Neutron community can benefit the
> GP work.
>
> GP, like any other Neutron extension, can have different implementations.
> Our idea has been to organize the GP code similarly to how ML2 and its
> mechanism drivers are organized, with the possibility of having different
> drivers for realizing the GP API. One such driver (analogous to an ML2
> mechanism driver, I would say) is the mapping driver that was implemented
> for the PoC. I certainly do not see it as the only implementation. The
> mapping driver is just the driver we used for our PoC implementation in
> order to gain experience in developing such a driver. Hope this clarifies
> things a bit.
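To make the ML2 analogy concrete, here is a minimal sketch of what such a driver-dispatch layer could look like. All class, method, and hook names below are illustrative assumptions, not actual GP PoC or ML2 code; they only mirror ML2's precommit/postcommit convention.

```python
# Illustrative sketch only: an ML2-style driver manager for a Group Policy
# plugin. None of these names are taken from the actual GP PoC code.

class PolicyDriver(object):
    """Interface a GP back-end driver (e.g. the mapping driver) would
    implement. Hook names mirror ML2's pre/postcommit convention."""

    def create_endpoint_precommit(self, context):
        """Called inside the DB transaction."""

    def create_endpoint_postcommit(self, context):
        """Called after the transaction commits."""


class PolicyDriverManager(object):
    """Dispatches each GP API operation to an ordered list of drivers,
    analogous to ML2's MechanismManager."""

    def __init__(self, drivers):
        self.drivers = list(drivers)

    def _call_on_drivers(self, method_name, context):
        # Invoke the named hook on every registered driver, in order.
        for driver in self.drivers:
            getattr(driver, method_name)(context)

    def create_endpoint_precommit(self, context):
        self._call_on_drivers('create_endpoint_precommit', context)

    def create_endpoint_postcommit(self, context):
        self._call_on_drivers('create_endpoint_postcommit', context)
```

The point of the pattern is that the plugin only talks to the manager, so multiple back-end drivers (a mapping driver, a fabric driver, etc.) can coexist behind one API.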
>
> Please note that, for better or worse, we have produced several documents
> during the previous cycle. We have tried to collect them on the GP wiki
> page [1]. The latest design document [2] should give a broad view of the GP
> extension and the model being proposed. The PoC document [3] may clarify
> our PoC plans and where the mapping driver stands with respect to the
> other pieces of the work. (Please note that some parts of the plan as
> described in the PoC document were not implemented.)
>
> Hope my explanation and these documents (and other documents available on
> the GP wiki) are helpful.
>
> Best,
>
> Mohammad
>
> [1] https://wiki.openstack.org/wiki/Neutron/GroupPolicy   <----- GP wiki page
> [2]
> https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/   <----- GP design document
> [3]
> https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/   <----- GP PoC document
>
>
>
> From: "Armando M." <armamig at gmail.com>
> To: "OpenStack Development Mailing List, (not for usage questions)" <
> openstack-dev at lists.openstack.org>,
> Date: 05/26/2014 09:46 PM
> Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
> driver
> ------------------------------
>
>
>
>
> On May 26, 2014 4:27 PM, "Mohammad Banikazemi" <mb at us.ibm.com> wrote:
> >
> > Armando,
> >
> > I think there are a couple of things that are being mixed up here, at
> > least as I see this conversation :). The mapping driver is simply one way
> > of implementing GP. Ideally, I would say, you do not need to implement GP
> > in terms of other Neutron abstractions, even though you may choose to do
> > so. A network controller could realize the connectivity and policies
> > defined by GP independent of, say, networks and subnets. If we agree on
> > this point, then how we organize the code will be different from the case
> > where GP is always defined as something on top of the current Neutron
> > API. In other words, we shouldn't organize the overall code for GP based
> > solely on the use of the mapping driver.
>
> The mapping driver is embedded in the policy framework that Bob had
> initially proposed. If I understood what you're suggesting correctly, it
> makes very little sense to diverge or come up with a different framework
> alongside the legacy driver later on; otherwise we may end up in the same
> state as the core plugins: monolithic vs ML2-based. Could you clarify?
> >
> > In the mapping driver (aka the legacy driver) for the PoC, GP is
> > implemented in terms of other Neutron abstractions. I agree that using
> > python-neutronclient for the PoC would be fine and, as Bob has mentioned,
> > it would probably have been the best/easiest way of implementing the PoC
> > in the first place. The calls to python-neutronclient could, in my
> > understanding, eventually be replaced easily with direct calls after the
> > refactoring, which leads me to ask a question concerning the following
> > part of the conversation (copied here again):
>
> Not sure why we keep bringing this refactoring up: my point is that if GP
> were implemented the way I'm suggesting, the refactoring would have no
> impact on GP... and even if it did, replacing remote calls with direct
> calls should be avoided IMO.
>
> >
> >
> > [Bob:]
> >
> > > > The overhead of using python-neutronclient is that unnecessary
> > > > serialization/deserialization is performed, as well as socket
> > > > communication through the kernel. This is all required between
> > > > processes, but not within a single process. A well-defined and
> > > > efficient mechanism to invoke resource APIs within the process, with
> > > > the same semantics as incoming REST calls, seems like a generally
> > > > useful addition to neutron. I'm hopeful the core refactoring effort
> > > > will provide this (and am willing to help make sure it does), but we
> > > > need something we can use until that is available.
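As an aside on the overhead Bob is quoting here, the cost in question is the JSON encode/decode (plus socket I/O) that a REST client performs on every call, which an in-process invocation skips. The sketch below is purely illustrative; the port body is a made-up example, not a real Neutron request:

```python
# Illustrative only: the per-call overhead of a REST client such as
# python-neutronclient is the JSON serialization/deserialization (plus
# socket I/O, omitted here) that an in-process call would avoid.
import json

port = {'port': {'name': 'vm-port', 'admin_state_up': True,
                 'fixed_ips': [{'subnet_id': 'subnet-1'}]}}

def rest_style_roundtrip(body):
    # What a client/server pair does on every call: encode the request,
    # then decode it again on the other side.
    wire = json.dumps(body)
    return json.loads(wire)

def in_process_call(body):
    # A direct call hands over the same structure with no encode/decode.
    return body
```

Whether that round trip is "negligible" end-to-end is exactly the disagreement in this thread.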
> > > >
> >
> > [Armando:]
> >
> > > I appreciate that there is a cost involved in relying on distributed
> > > communication, but this must be negligible considering what needs to
> > > happen end-to-end. If the overhead being referred to here is the price
> > > to pay for having a more dependable system (e.g. because things can be
> > > scaled out and/or made reliable independently), then I think this is a
> > > price worth paying.
> > >
> > > I do hope that the core refactoring is not aiming at what you're
> > > suggesting, as it sounds in exact opposition to some of the OpenStack
> > > design principles.
> >
> >
> > From the summit sessions (in particular the session by Mark on
> > refactoring the core), I too was under the impression that there would be
> > a way of invoking the Neutron API within the plugin with the same
> > semantics as through the REST API. Is this a misunderstanding?
>
> That was not my understanding, but I'll let Mark chime in on this.
>
> Many thanks
> Armando
> >
> > Best,
> >
> > Mohammad
> >
> >
> >
> >
> >
> >
> >
> > "Armando M." <armamig at gmail.com> wrote on 05/24/2014 01:36:35 PM:
> >
> > > From: "Armando M." <armamig at gmail.com>
> > > To: "OpenStack Development Mailing List (not for usage questions)"
> > > <openstack-dev at lists.openstack.org>,
>
> > > Date: 05/24/2014 01:38 PM
> > > Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping
> driver
> >
> > >
> > > On 24 May 2014 05:20, Robert Kukura <kukura at noironetworks.com> wrote:
> > > >
> > > > On 5/23/14, 10:54 PM, Armando M. wrote:
> > > >>
> > > >> On 23 May 2014 12:31, Robert Kukura <kukura at noironetworks.com> wrote:
> > > >>>
> > > >>> On 5/23/14, 12:46 AM, Mandeep Dhami wrote:
> > > >>>
> > > >>> Hi Armando:
> > > >>>
> > > >>> Those are good points. I will let Bob Kukura chime in on the
> > > >>> specifics of how we intend to do that integration. But if what you
> > > >>> see in the prototype/PoC were our final design for integration with
> > > >>> Neutron core, I would be worried about that too. That specific part
> > > >>> of the code (events/notifications for DHCP) was done in that way
> > > >>> just for the prototype - to allow us to experiment with the part
> > > >>> that was new and needed experimentation: the APIs and the model.
> > > >>>
> > > >>> That is the exact reason that we did not initially check the code in
> > > >>> to Gerrit - so that we would not confuse the review process with the
> > > >>> prototype process. But we were requested by other cores to check in
> > > >>> even the prototype code as WIP patches, to allow for review of the
> > > >>> API parts. That can unfortunately create this very misunderstanding.
> > > >>> For the review, I would recommend not the WIP patches, as they
> > > >>> contain the prototype parts as well, but just the final patches that
> > > >>> are not marked WIP. If you see such issues in that part of the code,
> > > >>> please DO raise them, as that would be code that we intend to
> > > >>> upstream.
> > > >>>
> > > >>> I believe Bob did discuss the specifics of this integration issue
> > > >>> with you at the summit, but like I said, it is best if he represents
> > > >>> that side himself.
> > > >>>
> > > >>> Armando and Mandeep,
> > > >>>
> > > >>> Right, we do need a workable solution for the GBP driver to invoke
> > > >>> Neutron API operations, and this came up at the summit.
> > > >>>
> > > >>> We started out in the PoC directly calling the plugin, as is
> > > >>> currently done when creating ports for agents. But this is not
> > > >>> sufficient, because the DHCP notifications, and I think the nova
> > > >>> notifications, are needed for VM ports. We also really should be
> > > >>> generating the other notifications, enforcing quotas, etc. for the
> > > >>> Neutron resources.
> > > >>
> > > >> I am at a loss here: if you say that it couldn't fit at the plugin
> > > >> level, that is because it is the wrong level! Sitting above it and
> > > >> redoing all the glue code around it to add DHCP notifications etc.
> > > >> continues the bad practice within the Neutron codebase where there is
> > > >> not a good separation of concerns: for instance, everything is
> > > >> cobbled together, like the DB and plugin logic. I appreciate that
> > > >> some design decisions have been made in the past, but there's no good
> > > >> reason for a nice new feature like GP to continue this bad practice;
> > > >> this is why I feel strongly about the current approach being taken.
> > > >
> > > > Armando, I am agreeing with you! The code you saw was a
> > > > proof-of-concept implementation intended as a learning exercise, not
> > > > something intended to be merged as-is into the neutron code base. The
> > > > approach for invoking resources from the driver(s) will be revisited
> > > > before the driver code is submitted for review.
> > > >>
> > > >>
> > > >>> We could just use python-neutronclient, but I think we'd prefer to
> > > >>> avoid the overhead. The neutron project already depends on
> > > >>> python-neutronclient for some tests, the debug facility, and the
> > > >>> metaplugin, so in retrospect, we could have easily used it in the
> > > >>> PoC.
> > > >>
> > > >> I am not sure I understand what overhead you mean here. Could you
> > > >> clarify? Actually, looking at the code, I see a mind-boggling set of
> > > >> interactions going back and forth between the GP plugin, the policy
> > > >> driver manager, the mapping driver and the core plugin: they are all
> > > >> entangled together. For instance, when creating an endpoint the GP
> > > >> plugin ends up calling the mapping driver, which in turn ends up
> > > >> calling the GP plugin itself! If this is not overhead, I don't know
> > > >> what is! The way the code has been structured makes it very difficult
> > > >> to read, let alone maintain and extend with other policy mappers. The
> > > >> ML2-like nature of the approach taken might work well in the context
> > > >> of core plugins, mechanism drivers etc., but I would argue that it
> > > >> applies poorly to the context of GP.
> > > >
> > > > The overhead of using python-neutronclient is that unnecessary
> > > > serialization/deserialization is performed, as well as socket
> > > > communication through the kernel. This is all required between
> > > > processes, but not within a single process. A well-defined and
> > > > efficient mechanism to invoke resource APIs within the process, with
> > > > the same semantics as incoming REST calls, seems like a generally
> > > > useful addition to neutron. I'm hopeful the core refactoring effort
> > > > will provide this (and am willing to help make sure it does), but we
> > > > need something we can use until that is available.
> > > >
> > >
> > > I appreciate that there is a cost involved in relying on distributed
> > > communication, but this must be negligible considering what needs to
> > > happen end-to-end. If the overhead being referred to here is the price
> > > to pay for having a more dependable system (e.g. because things can be
> > > scaled out and/or made reliable independently), then I think this is a
> > > price worth paying.
> > >
> > > I do hope that the core refactoring is not aiming at what you're
> > > suggesting, as it sounds in exact opposition to some of the OpenStack
> > > design principles.
> > >
> > > > One lesson we learned from the PoC is that the implicit management of
> > > > the GP resources (RDs and BDs) is completely independent of the
> > > > mapping of GP resources to Neutron resources. We discussed this at the
> > > > last GP sub-team IRC meeting, and decided to package this
> > > > functionality as a separate driver that is invoked prior to the
> > > > mapping_driver, and can also be used in conjunction with other GP
> > > > back-end drivers. I think this will help improve the structure and
> > > > readability of the code, and it also shows the applicability of the
> > > > ML2-like nature of the driver API.
> > > >
> > > > You are certainly justified in raising the question of whether the ML2
> > > > driver API model is appropriate for the GP plugin. I raised two issues
> > > > with this in the sub-team's PoC post-mortem discussion. One was
> > > > whether calling multiple drivers is useful. The case above seems to
> > > > justify this, as well as potentially supporting heterogeneous
> > > > deployments involving multiple ways to enforce policy. The other was
> > > > whether the precommit() methods are useful. I think the jury is still
> > > > out on this.
> > > >>
> > > >>
> > > >>> With the existing REST code, if we could find the
> > > >>> neutron.api.v2.base.Controller class instance for each resource, we
> > > >>> could simply call create(), update(), delete(), and show() on these.
> > > >>> I didn't see an easy way to find these Controller instances, so I
> > > >>> threw together some code similar to these Controller methods for the
> > > >>> PoC. It probably wouldn't take too much work to have
> > > >>> neutron.manager.NeutronManager provide access to the Controller
> > > >>> classes if we want to go this route.
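To picture the Controller-lookup idea Bob describes: neutron.api.v2.base.Controller is a real Neutron class, but the registry and the FakeController stand-in below are invented for illustration only, a sketch of what "NeutronManager provides access to the Controllers" might look like:

```python
# Hypothetical sketch: a small registry mapping resource collections to
# their neutron.api.v2.base.Controller-like objects, so a GP driver could
# call create/show (and update/delete) in-process with REST-like semantics.
# FakeController and the registry are stand-ins, not Neutron code.

class FakeController(object):
    """Stand-in for neutron.api.v2.base.Controller for one resource."""

    def __init__(self, resource):
        self.resource = resource  # e.g. 'network'
        self._store = {}

    def create(self, request, body):
        # Assign a sequential id, persist, and return the REST-style body.
        obj = dict(body[self.resource], id=len(self._store) + 1)
        self._store[obj['id']] = obj
        return {self.resource: obj}

    def show(self, request, obj_id):
        return {self.resource: self._store[obj_id]}


_controllers = {}

def register_controller(collection, controller):
    _controllers[collection] = controller

def get_controller(collection):
    """What a GP driver would call instead of going through a REST client."""
    return _controllers[collection]
```

The open question in the thread is exactly who populates such a registry; the sketch assumes some bootstrap code registers a controller per resource collection.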
> > > >>>
> > > >>> The core refactoring effort may eventually provide a nice solution,
> > > >>> but we can't wait for this. It seems we'll need to either use
> > > >>> python-neutronclient or get access to the Controller classes in the
> > > >>> meantime.
> > > >>>
> > > >>> Any thoughts on these? Any other ideas?
> > > >>
> > > >> I am still not sure why you even need to go all the way down to the
> > > >> Controller class. After all, it's almost as if GP could be a service
> > > >> in its own right that makes use of Neutron to map the
> > > >> application-centric abstractions on top of the networking constructs;
> > > >> this can happen via the REST interface. I don't think there is a
> > > >> dependency on the core refactoring here: the two can progress
> > > >> separately, so long as we break the tie, from an implementation
> > > >> perspective, that GP and core plugins need to live in the same
> > > >> address space. Am I missing something? Because I still cannot justify
> > > >> why things have been coded the way they have.
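The standalone-service idea Armando argues for can be sketched as follows, under stated assumptions: in a real deployment the client would be neutronclient.v2_0.client.Client talking REST to neutron-server, while the stub client and the endpoint-group-to-network mapping below are purely illustrative:

```python
# Sketch of "GP as a service over REST": the GP service realizes its
# abstractions as Neutron resources purely through the public client API
# (neutronclient.v2_0.client.Client in practice). StubNeutronClient and
# the endpoint-group mapping are illustrative assumptions, not GP code.

class StubNeutronClient(object):
    """Mimics the two python-neutronclient calls used below."""

    def __init__(self):
        self.created = []

    def create_network(self, body):
        self.created.append(('network', body))
        return {'network': dict(body['network'], id='net-1')}

    def create_subnet(self, body):
        self.created.append(('subnet', body))
        return {'subnet': dict(body['subnet'], id='subnet-1')}


def map_endpoint_group(client, name, cidr):
    """Realize a (hypothetical) GP endpoint group as a network plus subnet,
    talking to Neutron only over its public API."""
    net = client.create_network(
        {'network': {'name': name, 'admin_state_up': True}})
    subnet = client.create_subnet(
        {'subnet': {'network_id': net['network']['id'],
                    'ip_version': 4,
                    'cidr': cidr}})
    return net['network']['id'], subnet['subnet']['id']
```

Because the mapping goes through the REST surface, the GP service gets quotas, notifications, and policy enforcement for free, which is exactly the decoupling being advocated here.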
> > > >
> > > > I completely agree that we should try to avoid a hard architectural
> > > > requirement that the GP and core plugins have to be in the same
> > > > address space, and agree that if we were to use separate address
> > > > spaces, using python-neutronclient would be the obvious solution.
> > > > Certain back-end drivers for the GP plugin may be more tightly coupled
> > > > with corresponding core plugins or ML2 mechanism drivers, with both
> > > > cooperating to control the same underlying fabric, so we don't want to
> > > > preclude putting them in the same address space either.
> > >
> > > I don't believe that the two living in the same address space is a
> > > *must-have* requirement to achieve the tight coupling you think is
> > > needed, but I appreciate that some things would need to change in
> > > order to make the two cooperate more closely if they were to exist
> > > separately.
> > >
> > > >
> > > > In the PoC, I attempted to structure things so we could easily change
> > > > the mechanism used for these calls. I can't really justify why we
> > > > didn't just use python-neutronclient for the PoC, but remember, it was
> > > > just a prototype intended for learning and to facilitate these sorts
> > > > of discussions.
> > > >
> > > > So, as long as we do plan to package GP as a service plugin within
> > > > neutron-server, is the overhead of going through python-neutronclient
> > > > within that process acceptable? Are there any other issues with this?
> > > > If it's workable, I think we can go with python-neutronclient for now,
> > > > and look at better alternatives as the core refactoring progresses.
> > >
> > > I was suggesting moving away from the service plugin model
> > > altogether, but it's certainly something that could be explored in the
> > > short term, if you think this is something you're more comfortable
> > > with; I personally feel that it is going to be detrimental to the GP
> > > effort, because at some point down the road things will need to be
> > > re-architected radically, and I was taking this discussion as an
> > > opportunity to advocate change from the start.
> > >
> > > Thanks for clarifying some of my doubts!
> > > Cheers,
> > > Armando
> > >
> > > >
> > > > Thanks,
> > > >
> > > > -Bob
> > > >
> > > >>
> > > >> Thanks,
> > > >> Armando
> > > >>
> > > >>> Thanks,
> > > >>>
> > > >>> -Bob
> > > >>>
> > > >>>
> > > >>> Regards,
> > > >>> Mandeep
> > > >>>
> > > >>>
> > > >>>
> > > >>>
> > > >>> _______________________________________________
> > > >>> OpenStack-dev mailing list
> > > >>> OpenStack-dev at lists.openstack.org
> > > >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> > > >>>
> > > >
> > > >
> > > >
> > >
> > >
> >
> >

