[openstack-dev] [Neutron] [Nova] [Cinder] [tc] Should Openstack project maintained by core team keep only API/DB in the future?

loy wolfe loywolfe at gmail.com
Tue Apr 28 02:23:20 UTC 2015


On Fri, Apr 24, 2015 at 9:13 PM, Kyle Mestery <mestery at mestery.com> wrote:
> On Fri, Apr 24, 2015 at 4:06 AM, loy wolfe <loywolfe at gmail.com> wrote:
>>
>> This is already drifting away from the original thread, so I am starting
>> a new one, with some extra tags because I think it touches some
>> cross-project areas.
>>
>> Original discussion and references:
>> http://lists.openstack.org/pipermail/openstack-dev/2015-April/062384.html
>>
>> https://review.openstack.org/#/c/176501/1/specs/liberty/reference-split.rst
>>
>> Background summary:
>> All in-tree implementations would be split out of OpenStack
>> networking, leaving Neutron as a bare "API/DB" platform, with a list
>> of out-of-tree implementation git repos that are no longer maintained
>> by the core team, but may be given a nominal "big tent" home under the
>> OpenStack umbrella.
>>
>
> I'm not sure what led you to this discussion, but it's patently incorrect.
> We're going to split the in-tree reference implementation into a separate
> git repository. I have not said anything about the current core reviewer
> team not being responsible for that. It's natural to evolve to a core
> reviewer team which cares deeply about that, vs. those who care deeply about
> the DB/API layer. This is exactly what happened when we split out the
> advanced services.

Thanks for the simple explanation, Kyle.

But today Neutron is already composed of many separate sub-teams (ML2,
L3, VPN/LBaaS/FWaaS, etc.), and each sub-team is responsible for its
own API/DB definitions along with their implementations. So what is
the goal of the upcoming split: new standalone L2/L3 core API/DB
teams, alongside the existing ML2 and L3 plugin/agent implementation
teams? Should the advanced services teams also be split into API/DB
and implementation teams, and do they likewise need equal footing with
external third-party SDN controllers?

Whether a project team is nominally part of OpenStack, or sits under
the big tent/stadium of Neutron as discussed in the weekly meeting, is
not the important question. Positioning is the key: will the existing
built-in ML2+OVS/LB SDN solution be kept only as a proof of concept in
the future, or will it continue to be maintained as the native
delivery, ready for production deployment? If a dedicated API/DB team
has to coordinate with so many external third-party SDN controllers in
addition to the native built-in SDN, how can it keep up with rapid
feature growth?
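
To make "built-in" concrete: by the native solution I mean roughly the
following operator configuration (a minimal sketch; exact paths and
option values vary by deployment):

    # /etc/neutron/neutron.conf
    [DEFAULT]
    core_plugin = ml2
    service_plugins = router

    # /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2]
    type_drivers = flat,vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch,linuxbridge

Everything behind those few lines (the OVS/LinuxBridge L2 agents plus
the DHCP, L3, and metadata agents) is what operators get out of the
box today, without deploying ODL, OpenContrail, or any other external
controller.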

Best Regards.

>
>>
>> Motivation: a) a smaller core team can focus only on the in-tree API/DB
>> definitions, freed from implementing the concrete control functions;
>> b) if there is an official implementation inside Neutron, external
>> third-party SDN controllers would face competition from it.
>>
>> I'm not sure whether that is what cloud operators want OpenStack to
>> deliver. Do they want an off-the-shelf package, or just a framework,
>> with the responsibility of integrating it with other external
>> controller projects left to them? By analogy with Linux: a kernel
>> alone, without any device drivers, is of no use at all.
>>
>
> We're still going to deliver the ML2+OVS/LB+[DHCP, L3, metadata] agents for
> Liberty. I'm not sure where your incorrect assumption about what we're going
> to deliver is coming from.
>
>>
>> There have already been many debates about nova-network to Neutron
>> parity. If the widely used OVS and LinuxBridge drivers are out of tree
>> and have to be integrated separately by customers, how do those users
>> migrate from nova-network? Standalone SDN controllers have a steep
>> learning curve, and a lot of users don't care whether ODL or
>> OpenContrail is the better one to integrate; they just want the
>> OpenStack package to work easily with a default in-tree
>> implementation, one that is ready to drive all kinds of open-source or
>> commercial backends.
>>
>
> Do you realize that ML2 plus the L2 agent is an SDN controller already?
>
>>
>> BTW: +1 to Henry and Mathieu: indeed, OpenStack is not responsible for
>> the switch/router/firewall projects themselves, but it should be
>> responsible for scheduling, pooling, and driving those backends, which
>> is the same role the Nova/Cinder schedulers and compute/volume managers
>> play. These control functions shouldn't be classified as backends in
>> Neutron and split out of tree.
>
>
>>
>> Regards
>>
>>
>> On Fri, Apr 24, 2015 at 2:37 AM, Kyle Mestery <mestery at mestery.com> wrote:
>> >
>> >
>> > On Thu, Apr 23, 2015 at 1:31 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov>
>> > wrote:
>> >>
>> >> Yeah. In the end, it's about which git repo the source for a given rpm
>> >> you install comes from. Ops will not care that neutron-openvswitch-agent
>> >> comes from repo foo.git instead of bar.git.
>> >>
>> >
>> >
>> > That's really the tl;dr of the proposed split.
>> >
>> > Thanks,
>> > Kyle
>> >
>> >>
>> >> Thanks,
>> >> Kevin
>> >> ________________________________
>> >> From: Armando M. [armamig at gmail.com]
>> >> Sent: Thursday, April 23, 2015 9:10 AM
>> >> To: OpenStack Development Mailing List (not for usage questions)
>> >> Subject: Re: [openstack-dev] [Neutron] A big tent home for Neutron
>> >> backend
>> >> code
>> >>
>> >>>>
>> >>> I agree with Henry here.
>> >>> Armando, if we use your analogy with Nova, which doesn't build and
>> >>> deliver KVM, we can say that Neutron doesn't build or deliver OVS. It
>> >>> builds a driver and an agent which manage OVS, just like Nova, which
>> >>> provides a driver to manage libvirt/KVM.
>> >>> Moreover, external SDN controllers are much more complex than Neutron
>> >>> with its reference drivers. I feel like forcing cloud admins to deploy
>> >>> and maintain an external SDN controller would be a terrible experience
>> >>> for them if they just need a simple way to manage connectivity between
>> >>> VMs.
>> >>> At the end of the day, it might be detrimental to the Neutron
>> >>> project.
>> >>>
>> >>
>> >>
>> >> I don't think that anyone is saying that cloud admins are going to be
>> >> forced to deploy and maintain an external SDN controller. There are
>> >> plenty of deployment examples where people are just happy with the
>> >> network virtualization Neutron has been providing for years, and we
>> >> should not regress on that. To me it's mostly a matter of
>> >> responsibilities: who develops what, and what that what is :)
>> >>
>> >> The consumption model is totally a different matter.