[openstack-dev] [Neutron] A big tent home for Neutron backend code
Neil Jerram
Neil.Jerram at metaswitch.com
Tue Apr 28 17:17:56 UTC 2015
Apologies for commenting so late, but I'm not clear on the concept of bringing all possible backend projects back inside Neutron.
I think my question is similar to what Henry and Mathieu are getting at below - viz:
We just recently decided to move a lot of vendor-specific ML2 mechanism driver code _out_ of the Neutron tree; and I thought that the main motivation for that was that it wasn't reasonably possible for most Neutron developers to understand, review and maintain that code to the same level as they can with the Neutron core code.
How then does it now make sense to bring a load of vendor-specific code back into the Neutron fold? Has some important factor changed? Or have I misunderstood what is now being proposed?
Many thanks,
Neil
________________________________
From: Mathieu Rohon <mathieu.rohon at gmail.com>
Sent: 23 April 2015 15:05
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [Neutron] A big tent home for Neutron backend code
On Thu, Apr 23, 2015 at 10:28 AM, henry hly <henry4hly at gmail.com> wrote:
On Thu, Apr 23, 2015 at 10:44 AM, Armando M. <armamig at gmail.com> wrote:
>>
>> Could you please also pay some attention to the cons of this ultimate
>> split, Kyle? I'm afraid it would hurt the user experience.
>>
>> From a developer's standpoint, a bare Neutron without an "official"
>> built-in reference implementation probably has a cleaner architecture.
>> On the other hand, users would be forced to choose from a long list of
>> backend implementations, which is very difficult for non-specialists.
>>
>> Most of the time, users need an off-the-shelf solution without paying
>> much extra integration effort, and they have little interest in
>> studying which SDN controller is more powerful or better than the
>> others. Can we imagine Nova without the KVM/QEMU virt driver, or
>> Cinder without the Ceph/LVM volume driver [see the Deployment Profiles
>> section in 1a]? Shall we really decide to make Neutron the only
>> OpenStack project without any official in-tree implementation?
>
>
> I think the analogy here is between the agent reference implementation
> and KVM or Ceph, rather than the plumbing that taps into the underlying
> technology. Nova doesn't build/package KVM, just as Cinder doesn't
> build/package Ceph. Neutron could rely on other open source solutions
> (ODL, OpenContrail, OVN, etc.), and still be similar to the other
> projects.
>
> I think there's still room to clarify what the split needs to be, but I
> have always seen Neutron as the exception rather than the norm: for
> historic reasons, we had to build everything from the ground up for
> lack of viable open source solutions at the time the project was
> conceived.
>
Thanks for bringing up this interesting topic. Maybe it should not be
scoped only to Neutron; I also found a similar discussion from John
Griffith on Cinder vs. SDS controllers :-)
https://griffithscorner.wordpress.com/2014/05/16/the-problem-with-sds-under-cinder/
It's clear that a typical cloud deployment is composed of two distinct
parts: the workload engine and the supervisor. The engine part obviously
does not belong in an OpenStack project; it includes open source engines
like KVM, Ceph, and OVS/the Linux network stack/haproxy/openswan, and
vendor ones like vCenter/ESXi, SAN disk arrays, and all kinds of
networking hardware gear or virtualized service VMs.
For the supervisor part, however, the debate is blurrier: should
OpenStack provide a complete in-house implementation of the controlling
functions that directly drives the backend workload engine (via backend
drivers), or just a thin API/DB layer that needs to integrate external
third-party controller projects to handle scheduling, pooling, and
service-logic abstraction? For networking, how should we regard the
functions of plugins/agents and SDN controllers: are they in the same
layer as the "real" backend working engines like switches, routers, and
firewalls?
For Nova & Cinder, it seems the former approach was adopted: a single
unified central framework including the API, scheduling, abstract
service logic, RPC and message queue, and a common agent-side framework
of compute/volume managers, with a bunch of virt/volume drivers plugged
in to abstract all kinds of backends. There are standalone backends like
KVM and LVM, and aggregated clustering backends like vCenter and Ceph.
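To make the pattern concrete, here is a minimal sketch of the
"unified framework with pluggable backend drivers" shape described
above. All names (BackendDriver, KvmDriver, Manager, etc.) are
hypothetical illustrations, not actual Nova or Cinder code:

```python
# Illustrative toy only: the Nova/Cinder-style pattern where a common
# framework selects one backend driver by name, so the API, scheduling
# and RPC layers never talk to a backend engine directly.
from abc import ABC, abstractmethod


class BackendDriver(ABC):
    """Abstracts one backend workload engine (KVM, LVM, Ceph, ...)."""

    @abstractmethod
    def create(self, name: str) -> str:
        ...


class KvmDriver(BackendDriver):
    def create(self, name: str) -> str:
        return f"kvm instance {name}"


class CephDriver(BackendDriver):
    def create(self, name: str) -> str:
        return f"ceph volume {name}"


# Driver registry: the deployer picks a backend in configuration.
DRIVERS = {"kvm": KvmDriver, "ceph": CephDriver}


class Manager:
    """Stands in for the common agent-side framework."""

    def __init__(self, driver_name: str):
        self.driver = DRIVERS[driver_name]()

    def provision(self, name: str) -> str:
        return self.driver.create(name)
```

With this shape, swapping KVM for Ceph (or any vendor engine) is a
configuration change, not a framework change.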
Neutron, by contrast, has been a game of continuous refactoring: plugin,
meta-plugin, ML2, and now the "platform". Next, the ML2 plugin suddenly
becomes just a "reference" for proving the concept, and no plugin/agent
would be maintained in tree officially anymore, while the stated reason
is, confusingly, "not to compete with third-party SDN controllers" :-P
I agree with henry here.
Armando, if we use your analogy with Nova, which doesn't build and deliver KVM, we can say that Neutron doesn't build or deliver OVS. It builds a driver and an agent which manage OVS, just like Nova provides a driver to manage libvirt/KVM.
Moreover, external SDN controllers are much more complex than Neutron with its reference drivers. I feel that forcing cloud admins to deploy and maintain an external SDN controller would be a terrible experience for them if they just need a simple way to manage connectivity between VMs.
At the end of the day, it might be detrimental to the Neutron project.
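The driver/agent relationship described in this thread can be sketched
as a toy: the ML2 plugin fans each operation out to registered
mechanism drivers, and the in-tree driver's job is only to program the
open source backend, not to be the backend. Method names echo the shape
of the real MechanismDriver API, but this is a self-contained
illustration, not Neutron code:

```python
# Illustrative toy only: Neutron ships a thin driver that programs an
# open-source backend (here a fake OVS), just as Nova ships a driver
# for libvirt/KVM without shipping KVM itself.


class FakeOvsMechanismDriver:
    """Plays the role of the in-tree OVS driver: programs the switch."""

    def __init__(self):
        self.ports = {}

    def create_port_postcommit(self, port):
        # The real agent would install OVS flows here; we just record
        # the binding to show where backend programming happens.
        self.ports[port["id"]] = port["network"]


class Ml2Plugin:
    """Fans a port create out to every registered mechanism driver."""

    def __init__(self, drivers):
        self.drivers = drivers

    def create_port(self, port_id, network):
        port = {"id": port_id, "network": network}
        for driver in self.drivers:
            driver.create_port_postcommit(port)
        return port
```

The point of the analogy: the small driver above is the part Neutron
maintains; the switch it programs is the external engine.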
>
>>
>>
>> [1a]
>> http://superuser.openstack.org/articles/openstack-user-survey-insights-november-2014
>>
>> Here is my personal suggestion: the decomposition decision needs some
>> trade-offs. Keep 2-3 mainstream open source backends in tree [ML2 with
>> OVS & LinuxBridge, based on the survey results in 1a above]. While we
>> are refactoring the architecture aggressively, a smooth experience and
>> ease of adoption should also be taken care of.
>>
>> >
>> > One thing worth bringing up in this context is the potential overlap
>> > between these implementations. I think having them all under the
>> > Neutron project would allow me as PTL, and the rest of the team, to
>> > combine things when it makes sense.
>> >
>> > Kyle
>> >
>> > [1] http://www.faqs.org/rfcs/rfc1149.html
>> >
>> >>
>> >> b) Let each interested group define a new project team for their
>> >> backend
>> >> code.
>> >>
>> > To be honest, I don't think this is a scalable option. I'm involved
>> > with 2 of these networking-foo projects, and there is not enough
>> > participation so far to warrant an entirely new project, PTL, and
>> > infra around it. This is just my opinion, but it's an opinion I've
>> > formed after having contributed to networking-odl and networking-ovn
>> > for the past 5 months.
>> >
>> >>
>> >> So, as an example, the group of people working on Neutron integration
>> >> with OpenDaylight could propose a new project team that would be a
>> >> projects.yaml entry that looks something like:
>> >>
>> >> Neutron-OpenDaylight:
>> >>   ptl: Some Person (ircnick)
>> >>   service: OpenDaylight Networking
>> >>   mission: >
>> >>     To implement Neutron support for the OpenDaylight project.
>> >>   url: ...
>> >>   projects:
>> >>     - repo: openstack/networking-odl
>> >>
>> >> Pros:
>> >> + There's no additional load on the Neutron project team and PTL.
>> >>
>> >> Cons:
>> >> - Having all of these efforts organized under a single project team
>> >> and
>> >> PTL might be better for ensuring some level of collaboration and
>> >> consistency.
>> >>
>> >> c) A single new project team could be created that is led by a single
>> >> person to coordinate the sub-teams working on each repo. In this
>> >> scenario, I could see the overall collaboration being around ensuring
>> >> consistent approaches to development process, testing, documentation,
>> >> and releases.
>> >>
>> >> That would be something like this (note that the project list would be
>> >> limited to those who actually want to be included in the official
>> >> project team and meet some set of inclusion criteria).
>> >>
>> >> Neutron-Backends:
>> >>   ptl: Some Person (ircnick)
>> >>   service: Networking Backends
>> >>   mission: >
>> >>     To implement Neutron backend support for various networking
>> >>     technologies.
>> >>   url: ...
>> >>   projects:
>> >>     - openstack/networking-arista
>> >>     - openstack/networking-bagpipe-l2
>> >>     - openstack/networking-bgpvpn
>> >>     - openstack/networking-bigswitch
>> >>     - openstack/networking-brocade
>> >>     - openstack/networking-cisco
>> >>     - openstack/networking-edge-vpn
>> >>     - openstack/networking-hyperv
>> >>     - openstack/networking-ibm
>> >>     - openstack/networking-l2gw
>> >>     - openstack/networking-midonet
>> >>     - openstack/networking-mlnx
>> >>     - openstack/networking-nec
>> >>     - openstack/networking-odl
>> >>     - openstack/networking-ofagent
>> >>     - openstack/networking-ovn
>> >>     - openstack/networking-ovs-dpdk
>> >>     - openstack/networking-plumgrid
>> >>     - openstack/networking-portforwarding
>> >>     - openstack/networking-vsphere
>> >>
>> >> Pros:
>> >> + There's no additional load on the Neutron project team and PTL.
>> >> + Avoids a proliferation of new project teams for each Neutron
>> >> backend.
>> >> + Puts efforts under a single team and PTL to help facilitate
>> >> collaboration and consistency.
>> >>
>> >> Cons:
>> >> - Some might see this as an unnatural split from Neutron.
>> >> - The same sort of oversight and coordination could potentially happen
>> >> with a delegate of the Neutron PTL in the Neutron project team without
>> >> making it separate.
>> >>
>> >> d) I suppose the last option is to declare that none of these repos
>> >> make
>> >> sense as an OpenStack project. It's hard for me to imagine this making
>> >> sense except for cases where the teams don't want their work to be
>> >> officially included in OpenStack, or they fail to meet our base set of
>> >> project guidelines.
>> >>
>> >>
>> >> What option do you think makes sense? Or is there another option that
>> >> should be considered?
>> >>
>> >>
>> >> [1]
>> >>
>> >> http://www.openstack.org/blog/2015/02/tc-update-project-reform-progress/
>> >> [2]
>> >>
>> >>
>> >> http://specs.openstack.org/openstack/neutron-specs/specs/kilo/services-split.html
>> >> [3]
>> >>
>> >>
>> >> http://specs.openstack.org/openstack/neutron-specs/specs/kilo/core-vendor-decomposition.html
>> >> [4] http://governance.openstack.org/reference/tags/
>> >>
>> >> --
>> >> Russell Bryant
>> >>
>> >>
>> >
>> >
>> >
>> >
>> >
>>
>
>
>
>
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev