<p dir="ltr">Hi Mohammad,</p>
<p dir="ltr">Thanks, I understand now. I appreciate that the mapping driver is one way of doing things and that the design has been familiarized for a while. I wish I could follow infinite channels but unfortunately the openstack information overload is astounding and sometimes I fail :) Gerrit is the channel I strive to follow and this is when I saw the code for the first time, hence my feedback. </p>
<p dir="ltr">It's worth noting that the PoC design document is (as it should be) very high level and most of my feedback applies to the implementation decisions being made. That said, I still have doubts that an ML2 like approach is really necessary for GP and I welcome inputs to help me change my mind :)</p>
<p dir="ltr">Thanks<br>
Armando</p>
<div class="gmail_quote">On May 27, 2014 5:04 PM, "Mohammad Banikazemi" <<a href="mailto:mb@us.ibm.com">mb@us.ibm.com</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div>
<p><font face="sans-serif">Thanks for the continued interest in discussing Group Policy (GP). I believe these discussions with the larger Neutron community can benefit the GP work.</font><br>
<br>
<font face="sans-serif">GP like any other Neutron extension can have different implementations. Our idea has been to have the GP code organized similar to how ML2 and mechanism drivers are organized, with the possibility of having different drivers for realizing the GP API. One such driver (analogous to an ML2 mechanism driver I would say) is the mapping driver that was implemented for the PoC. I certainly do not see it as the only implementation. The mapping driver is just the driver we used for our PoC implementation in order to gain experience in developing such a driver. Hope this clarifies things a bit.</font><br>
<br>
<font face="sans-serif">Please note that for better or worse we have produced several documents during the previous cycle. We have tried to collect them on the GP wiki page [1]. The latest design document [2] should give a broad view of the GP extension and the model being proposed. The PoC document [3] may clarify our PoC plans and where the mapping driver stands wrt other pieces of the work. (Please note some parts of the plan as described in the PoC document was not implemented.)</font><br>
<br>
<font face="sans-serif">Hope my explanation and these documents (and other documents available on the GP wiki) are helpful.</font><br>
<br>
<font face="sans-serif">Best,</font><br>
<br>
<font face="sans-serif">Mohammad</font><br>
<br>
<font face="sans-serif">[1] <a href="https://wiki.openstack.org/wiki/Neutron/GroupPolicy" target="_blank">https://wiki.openstack.org/wiki/Neutron/GroupPolicy</a> <----- GP wiki page</font><br>
<font face="sans-serif">[2] <a href="https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/" target="_blank">https://docs.google.com/presentation/d/1Nn1HjghAvk2RTPwvltSrnCUJkidWKWY2ckU7OYAVNpo/</a> <----- GP design document</font><br>
<font face="sans-serif">[3] <a href="https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/" target="_blank">https://docs.google.com/document/d/14UyvBkptmrxB9FsWEP8PEGv9kLqTQbsmlRxnqeF9Be8/</a> <----- GP PoC document</font><br>
<br>
<br>
<img width="16" height="16" src="cid:1__=0ABBF676DFDC81128f9e8a93df938@us.ibm.com" border="0" alt="Inactive hide details for "Armando M." ---05/26/2014 09:46:34 PM---On May 26, 2014 4:27 PM, "Mohammad Banikazemi" <mb@us.ibm.co"><font color="#424282" face="sans-serif">"Armando M." ---05/26/2014 09:46:34 PM---On May 26, 2014 4:27 PM, "Mohammad Banikazemi" <<a href="mailto:mb@us.ibm.com" target="_blank">mb@us.ibm.com</a>> wrote: ></font><br>
<br>
<font size="1" color="#5F5F5F" face="sans-serif">From: </font><font size="1" face="sans-serif">"Armando M." <<a href="mailto:armamig@gmail.com" target="_blank">armamig@gmail.com</a>></font><br>
<font size="1" color="#5F5F5F" face="sans-serif">To: </font><font size="1" face="sans-serif">"OpenStack Development Mailing List, (not for usage questions)" <<a href="mailto:openstack-dev@lists.openstack.org" target="_blank">openstack-dev@lists.openstack.org</a>>, </font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Date: </font><font size="1" face="sans-serif">05/26/2014 09:46 PM</font><br>
<font size="1" color="#5F5F5F" face="sans-serif">Subject: </font><font size="1" face="sans-serif">Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver</font><br>
<hr width="100%" size="2" align="left" noshade style="color:#8091a5"><br>
<br>
<br>
<font size="3" face="serif"><br>
On May 26, 2014 4:27 PM, "Mohammad Banikazemi" <</font><a href="mailto:mb@us.ibm.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>mb@us.ibm.com</u></font></a><font size="3" face="serif">> wrote:<br>
><br>
> Armando,<br>
><br>
> I think there are a couple of things that are being mixed up here, at least as I see this conversation :). The mapping driver is simply one way of implementing GP. Ideally, I would say, you do not need to implement GP in terms of other Neutron abstractions, even though you may choose to do so. A network controller could realize the connectivity and policies defined by GP independently of, say, networks and subnets. If we agree on this point, then how we organize the code will be different from the case where GP is always defined as something on top of the current Neutron API. In other words, we shouldn't organize the overall code for GP based solely on the use of the mapping driver.</font>
<p><font size="3" face="serif">The mapping driver is embedded in the policy framework that Bob had initially proposed. If I understood what you're suggesting correctly, it makes very little sense to diverge or come up with a different framework alongside the legacy driver later on, otherwise we may end up in the same state of the core plugins': monolithic vs ml2-based. Could you clarify?<br>
><br>
> In the mapping driver (aka the legacy driver) for the PoC, GP is implemented in terms of other Neutron abstractions. I agree that using python-neutronclient for the PoC would be fine and, as Bob has mentioned, it would probably have been the best/easiest way of having the PoC implemented in the first place. The calls to python-neutronclient, in my understanding, could eventually be easily replaced with direct calls after refactoring, which leads me to ask a question concerning the following part of the conversation (being copied here again):</font>
<p><font size="3" face="serif">Not sure why we keep bringing this refactoring up: my point is that if GP were to be implemented the way I'm suggesting the refactoring would have no impact on GP...even if it did, replacing remote with direct calls should be avoided IMO.</font>
<p><font size="3" face="serif">><br>
><br>
> [Bob:]<br>
><br>
> > > The overhead of using python-neutronclient is that unnecessary<br>
> > > serialization/deserialization are performed as well as socket communication<br>
> > > through the kernel. This is all required between processes, but not within a<br>
> > > single process. A well-defined and efficient mechanism to invoke resource<br>
> > > APIs within the process, with the same semantics as incoming REST calls,<br>
> > > seems like a generally useful addition to neutron. I'm hopeful the core<br>
> > > refactoring effort will provide this (and am willing to help make sure it<br>
> > > does), but we need something we can use until that is available.<br>
> > ><br>
><br>
> [Armando:]<br>
><br>
> > I appreciate that there is a cost involved in relying on distributed<br>
> > communication, but this must be negligible considering what needs to<br>
> > happen end-to-end. If the overhead being referred to here is the price to<br>
> > pay for having a more dependable system (e.g. because things can be<br>
> > scaled out and/or made reliable independently), then I think this is a<br>
> > price worth paying.<br>
> > <br>
> > I do hope that the core refactoring is not aiming at what you're<br>
> > suggesting, as it sounds in exact opposition to some of the OpenStack<br>
> > design principles.<br>
><br>
><br>
> From the summit sessions (in particular the session by Mark on refactoring the core), I too was under the impression that there would be a way of invoking the Neutron API within the plugin with the same semantics as through the REST API. Is this a misunderstanding?</font>
<p><font size="3" face="serif">That was not my understanding, but I'll let Mark chime in on this.</font>
<p><font size="3" face="serif">Many thanks<br>
Armando<br>
><br>
> Best,<br>
><br>
> Mohammad<br>
><br>
><br>
><br>
><br>
><br>
><br>
><br>
> "Armando M." <</font><a href="mailto:armamig@gmail.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>armamig@gmail.com</u></font></a><font size="3" face="serif">> wrote on 05/24/2014 01:36:35 PM:<br>
><br>
> > From: "Armando M." <</font><a href="mailto:armamig@gmail.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>armamig@gmail.com</u></font></a><font size="3" face="serif">><br>
> > To: "OpenStack Development Mailing List (not for usage questions)" <br>
> > <</font><a href="mailto:openstack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>openstack-dev@lists.openstack.org</u></font></a><font size="3" face="serif">>, <br>
> > Date: 05/24/2014 01:38 PM<br>
> > Subject: Re: [openstack-dev] [neutron][group-based-policy] GP mapping driver<br>
><br>
> > <br>
> > On 24 May 2014 05:20, Robert Kukura <</font><a href="mailto:kukura@noironetworks.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>kukura@noironetworks.com</u></font></a><font size="3" face="serif">> wrote:<br>
> > ><br>
> > > On 5/23/14, 10:54 PM, Armando M. wrote:<br>
> > >><br>
> > >> On 23 May 2014 12:31, Robert Kukura <</font><a href="mailto:kukura@noironetworks.com" target="_blank"><font size="3" color="#0000FF" face="serif"><u>kukura@noironetworks.com</u></font></a><font size="3" face="serif">> wrote:<br>
> > >>><br>
> > >>> On 5/23/14, 12:46 AM, Mandeep Dhami wrote:<br>
> > >>><br>
> > >>> Hi Armando:<br>
> > >>><br>
> > >>> Those are good points. I will let Bob Kukura chime in on the specifics of<br>
> > >>> how we intend to do that integration. But if what you see in the<br>
> > >>> prototype/PoC was our final design for integration with Neutron core, I<br>
> > >>> would be worried about that too. That specific part of the code<br>
> > >>> (events/notifications for DHCP) was done in that way just for the<br>
> > >>> prototype<br>
> > >>> - to allow us to experiment with the part that was new and needed<br>
> > >>> experimentation, the APIs and the model.<br>
> > >>><br>
> > >>> That is the exact reason that we did not initially check the code in to<br>
> > >>> Gerrit<br>
> > >>> - so that we do not confuse the review process with the prototype<br>
> > >>> process.<br>
> > >>> But we were requested by other cores to check in even the prototype code<br>
> > >>> as<br>
> > >>> WIP patches to allow for review of the API parts. That can unfortunately<br>
> > >>> create this very misunderstanding. For the review, I would recommend not<br>
> > >>> the<br>
> > >>> WIP patches, as they contain the prototype parts as well, but just the<br>
> > >>> final<br>
> > >>> patches that are not marked WIP. If you see such issues in that part of the<br>
> > >>> code, please DO raise that as that would be code that we intend to<br>
> > >>> upstream.<br>
> > >>><br>
> > >>> I believe Bob did discuss the specifics of this integration issue with<br>
> > >>> you<br>
> > >>> at the summit, but like I said it is best if he represents that side<br>
> > >>> himself.<br>
> > >>><br>
> > >>> Armando and Mandeep,<br>
> > >>><br>
> > >>> Right, we do need a workable solution for the GBP driver to invoke<br>
> > >>> neutron<br>
> > >>> API operations, and this came up at the summit.<br>
> > >>><br>
> > >>> We started out in the PoC directly calling the plugin, as is currently<br>
> > >>> done<br>
> > >>> when creating ports for agents. But this is not sufficient because the<br>
> > >>> DHCP<br>
> > >>> notifications, and I think the nova notifications, are needed for VM<br>
> > >>> ports.<br>
> > >>> We also really should be generating the other notifications, enforcing<br>
> > >>> quotas, etc. for the neutron resources.<br>
> > >><br>
> > >> I am at a loss here: if you say that it couldn't fit at the plugin<br>
> > >> level, that is because it is the wrong level!! Sitting above it and<br>
> > >> redoing all the glue code around it to add DHCP notifications etc.<br>
> > >> continues the bad practice within the Neutron codebase where there is<br>
> > >> not a good separation of concerns: for instance everything is cobbled<br>
> > >> together like the DB and plugin logic. I appreciate that some design<br>
> > >> decisions have been made in the past, but there's no good reason for a<br>
> > >> nice new feature like GP to continue this bad practice; this is why I<br>
> > >> feel strongly about the current approach being taken.<br>
> > ><br>
> > > Armando, I am agreeing with you! The code you saw was a proof-of-concept<br>
> > > implementation intended as a learning exercise, not something intended to be<br>
> > > merged as-is into the neutron code base. The approach for invoking resources<br>
> > > from the driver(s) will be revisited before the driver code is submitted for<br>
> > > review.<br>
> > >><br>
> > >><br>
> > >>> We could just use python-neutronclient, but I think we'd prefer to avoid<br>
> > >>> the<br>
> > >>> overhead. The neutron project already depends on python-neutronclient for<br>
> > >>> some tests, the debug facility, and the metaplugin, so in retrospect, we<br>
> > >>> could have easily used it in the PoC.<br>
> > >><br>
> > >> I am not sure I understand what overhead you mean here. Could you<br>
> > >> clarify? Actually, looking at the code, I see a mind-boggling set of<br>
> > >> interactions going back and forth between the GP plugin, the policy<br>
> > >> driver manager, the mapping driver and the core plugin: they are all<br>
> > >> entangled together. For instance, when creating an endpoint the GP<br>
> > >> plugin ends up calling the mapping driver that in turn ends up calling<br>
> > >> the GP plugin itself! If this is not overhead I don't know what is!<br>
> > >> The way the code has been structured makes it very difficult to read,<br>
> > >> let alone maintain and extend with other policy mappers. The ML2-like<br>
> > >> nature of the approach taken might work well in the context of core<br>
> > >> plugin, mechanisms drivers etc, but I would argue that it poorly<br>
> > >> applies to the context of GP.<br>
> > ><br>
> > > The overhead of using python-neutronclient is that unnecessary<br>
> > > serialization/deserialization are performed as well as socket communication<br>
> > > through the kernel. This is all required between processes, but not within a<br>
> > > single process. A well-defined and efficient mechanism to invoke resource<br>
> > > APIs within the process, with the same semantics as incoming REST calls,<br>
> > > seems like a generally useful addition to neutron. I'm hopeful the core<br>
> > > refactoring effort will provide this (and am willing to help make sure it<br>
> > > does), but we need something we can use until that is available.<br>
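<p><font size="3" face="serif">For illustration, a purely hypothetical in-process helper along these lines (this is not an existing Neutron API, just a sketch of the idea) might look like:</font></p>
<pre>
# Purely hypothetical sketch; not an existing Neutron API.
def invoke_resource_api(plugin, action, resource, context, body):
    """Invoke a resource operation on an in-process plugin with REST-like
    semantics: same operation naming as the API layer, but no HTTP round
    trip or (de)serialization.

    For the semantics to truly match an incoming REST call, validation,
    quota enforcement, and DHCP/nova notifications would also have to be
    applied here.
    """
    method = getattr(plugin, '%s_%s' % (action, resource))  # e.g. create_port
    return method(context, body)
</pre>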
> > ><br>
> > <br>
> > I appreciate that there is a cost involved in relying on distributed<br>
> > communication, but this must be negligible considering what needs to<br>
> > happen end-to-end. If the overhead being referred to here is the price to<br>
> > pay for having a more dependable system (e.g. because things can be<br>
> > scaled out and/or made reliable independently), then I think this is a<br>
> > price worth paying.<br>
> > <br>
> > I do hope that the core refactoring is not aiming at what you're<br>
> > suggesting, as it sounds in exact opposition to some of the OpenStack<br>
> > design principles.<br>
> > <br>
> > > One lesson we learned from the PoC is that the implicit management of the GP<br>
> > > resources (RDs and BDs) is completely independent from the mapping of GP<br>
> > > resources to neutron resources. We discussed this at the last GP sub-team<br>
> > > IRC meeting, and decided to package this functionality as a separate driver<br>
> > > that is invoked prior to the mapping_driver, and can also be used in<br>
> > > conjunction with other GP back-end drivers. I think this will help improve<br>
> > > the structure and readability of the code, and it also shows the<br>
> > > applicability of the ML2-like nature of the driver API.<br>
> > ><br>
> > > You are certainly justified in raising the question of whether the ML2<br>
> > > driver API model is appropriate for the GP plugin. I raised two issues with<br>
> > > this in the sub-team's PoC post-mortem discussion. One was whether calling<br>
> > > multiple drivers is useful. The case above seems to justify this, as well as<br>
> > > potentially supporting heterogeneous deployments involving multiple ways to<br>
> > > enforce policy. The other was whether the precommit() methods are useful. I<br>
> > > think the jury is still out on this.<br>
> > >><br>
> > >><br>
> > >>> With the existing REST code, if we could find the<br>
> > >>> neutron.api.v2.base.Controller class instance for each resource, we could<br>
> > >>> simply call create(), update(), delete(), and show() on these. I didn't<br>
> > >>> see<br>
> > >>> an easy way to find these Controller instances, so I threw together some<br>
> > >>> code similar to these Controller methods for the PoC. It probably<br>
> > >>> wouldn't<br>
> > >>> take too much work to have neutron.manager.NeutronManager provide access<br>
> > >>> to<br>
> > >>> the Controller classes if we want to go this route.<br>
> > >>><br>
> > >>> The core refactoring effort may eventually provide a nice solution, but<br>
> > >>> we<br>
> > >>> can't wait for this. It seems we'll need to either use<br>
> > >>> python-neutronclient<br>
> > >>> or get access to the Controller classes in the meantime.<br>
> > >>><br>
> > >>> Any thoughts on these? Any other ideas?<br>
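<p><font size="3" face="serif">For reference, the python-neutronclient option would look roughly like the following from within a driver; the credentials and resource values below are placeholders, not PoC code:</font></p>
<pre>
# Rough illustration of the python-neutronclient option; credentials and
# resource values are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin',
                        password='secret',
                        tenant_name='admin',
                        auth_url='http://127.0.0.1:5000/v2.0')

# A mapping driver could realize a GP resource by creating the
# corresponding Neutron resources through the REST API.
net = neutron.create_network({'network': {'name': 'gp-mapped-net'}})
neutron.create_subnet({'subnet': {'network_id': net['network']['id'],
                                  'ip_version': 4,
                                  'cidr': '10.0.0.0/24'}})
</pre>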
> > >><br>
> > >> I am still not sure why you even need to go all the way down to the<br>
> > >> Controller class. After all, it's almost like GP could be a service in<br>
> > >> its own right that makes use of Neutron to map the application-centric<br>
> > >> abstractions on top of the networking constructs; this can happen via<br>
> > >> the REST interface. I don't think there is a dependency on the core<br>
> > >> refactoring here: the two can progress separately, so long as we break<br>
> > >> the tie, from an implementation perspective, that GP and Core plugins<br>
> > >> need to live in the same address space. Am I missing something?<br>
> > >> Because I still cannot justify why things have been coded the way they<br>
> > >> have.<br>
> > ><br>
> > > I completely agree that we should try to avoid a hard architectural<br>
> > > requirement that the GP and core plugins have to be in the same address<br>
> > > space, and agree that if we were to use separate address spaces, using<br>
> > > python-neutronclient would be the obvious solution. Certain back-end drivers<br>
> > > for the GP plugin may be more tightly coupled with corresponding core<br>
> > > plugins or ML2 mechanism drivers, with both cooperating to control the same<br>
> > > underlying fabric, so we don't want to preclude putting them in the same<br>
> > > address space either.<br>
> > <br>
> > I don't believe that the two living in the same address space is a<br>
> > *must-have* requirement to achieve the tight coupling you think is<br>
> > needed, but I appreciate that some things would need to change in<br>
> > order to make the two cooperate more closely if they were to exist<br>
> > separately.<br>
> > <br>
> > ><br>
> > > In the PoC, I attempted to structure things so we could easily change the<br>
> > > mechanism used for these calls. I can't really justify why we didn't just<br>
> > > use python-neutronclient for the PoC, but remember, it was just a prototype<br>
> > > intended for learning and to facilitate these sorts of discussions.<br>
> > ><br>
> > > So, as long as we do plan to package GP as a service plugin within<br>
> > > neutron-server, is the overhead of going through python-neutronclient within<br>
> > > that process acceptable? Are there any other issues with this? If it's<br>
> > > workable, I think we can go with python-neutronclient for now, and look at<br>
> > > better alternatives as the core refactoring progresses.<br>
> > <br>
> > I was suggesting to move away from the service plugin model<br>
> > altogether, but it's certainly something that could be explored in the<br>
> > short term, if you think this is something you're more comfortable<br>
> > with; I personally feel that it is going to be detrimental to the GP<br>
> > efforts because at some point down the road things will need to be<br>
> > re-architected radically, and I was taking this discussion as an<br>
> > opportunity to advocate change from the start.<br>
> > <br>
> > Thanks for clarifying some of my doubts!<br>
> > Cheers,<br>
> > Armando<br>
> > <br>
> > ><br>
> > > Thanks,<br>
> > ><br>
> > > -Bob<br>
> > ><br>
> > >><br>
> > >> Thanks,<br>
> > >> Armando<br>
> > >><br>
> > >>> Thanks,<br>
> > >>><br>
> > >>> -Bob<br>
> > >>><br>
> > >>><br>
> > >>> Regards,<br>
> > >>> Mandeep<br>
> > >>><br>
> > >>><br>
> > >>><br>
> > >>><br>
> > >>> _______________________________________________<br>
> > >>> OpenStack-dev mailing list<br>
> > >>> </font><a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
> > >>> </font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a><font size="3" face="serif"><br>
> > >>><br>
> > >> _______________________________________________<br>
> > >> OpenStack-dev mailing list<br>
> > >> </font><a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
> > >> </font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a><font size="3" face="serif"><br>
> > ><br>
> > ><br>
> > ><br>
> > > _______________________________________________<br>
> > > OpenStack-dev mailing list<br>
> > > </font><a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
> > > </font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a><font size="3" face="serif"><br>
> > <br>
> > _______________________________________________<br>
> > OpenStack-dev mailing list<br>
> > </font><a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
> > </font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a><font size="3" face="serif"><br>
> > <br>
><br>
><br>
> _______________________________________________<br>
> OpenStack-dev mailing list<br>
> </font><a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank"><font size="3" color="#0000FF" face="serif"><u>OpenStack-dev@lists.openstack.org</u></font></a><font size="3" face="serif"><br>
> </font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank"><font size="3" color="#0000FF" face="serif"><u>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</u></font></a><font size="3" face="serif"><br>
></font><tt><font>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org" target="_blank">OpenStack-dev@lists.openstack.org</a><br>
</font></tt><tt><font><a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a></font></tt><tt><font><br>
</font></tt>
<p></p></p></p></p></p></p></p></div>
<br>_______________________________________________<br>
OpenStack-dev mailing list<br>
<a href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br>
<br></blockquote></div>