[openstack-dev] [Nova][Neutron] Status of the nova-network to Neutron migration work

Mohammad Banikazemi mb at us.ibm.com
Fri Mar 27 15:11:42 UTC 2015

Sean Dague <sean at dague.net> wrote on 03/27/2015 07:11:18 AM:

> On 03/27/2015 05:22 AM, Thierry Carrez wrote:
> <snip>
> > Part of it is corner (or simplified) use cases not being optimally
> > served by Neutron, and I think Neutron could more aggressively address
> > those. But the other part is ignorance and convenience: that Neutron
> > thing is a scary beast: last time I looked into it I couldn't make
> > sense of it, and nova-network just works for me.
> >
> > That is why during the Ops Summit we discussed coming up with a
> > migration guide that would explain the various ways you can use Neutron
> > to cover nova-network use cases, demystify a few dark areas, and
> > describe the step-by-step manual process you can go through to migrate
> > from one to the other.
> >
> > We found a dev/ops volunteer for writing that migration guide, but he
> > was unfortunately not allowed to spend time on this. I heard we have new
> > volunteers, but I'll let them announce what their plans are, rather
> > than put words in their mouth.
> >
> > This migration guide can happen even if we follow the nova-net spinout
> > plan (for the few who want to migrate to Neutron), so this is a
> > complementary solution rather than an alternative. Personally I doubt
> > there would suddenly be enough people interested in nova-net
> > to successfully spin it out and maintain it. I also agree with Russell
> > that long-term fragmentation at this layer of the stack is generally
> > undesirable.
> I think if you boil everything down, you end up with 3 really important
> differences.
> 1) neutron is a fleet of services (it's very micro-service) and every
> service requires multiple, different config files. Just configuring
> the fleet is a beast if it is not devstack (and even if it is)
> 2) neutron assumes the primary thing you care about is tenant-secured
> self-service networks. This is actually explicitly not interesting to a
> lot of deployments for policy, security, or political reasons/restrictions.
> 3) neutron's open source backend defaults to OVS (largely because of #2).
> OVS is its own complicated engine that you need to learn to debug. While
> Linux bridge has its challenges, it's also something that anyone who's
> worked with Linux & virtualization for the last 10 years has some
> experience with.
> (also, the devstack setup code for neutron is a rat's nest, as it was
> mostly not paid attention to. This means it's been zero help in explaining
> anything to people trying to do neutron. For better or worse, devstack is
> our executable manual for a lot of these things)
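To make point #1 concrete, a stock Kilo-era neutron install might run a process fleet like the following (layout illustrative; exact file names vary by distro and deployment tool):

```text
# Each service is its own daemon with its own config (illustrative):
neutron-server             neutron.conf + plugins/ml2/ml2_conf.ini
neutron-l3-agent           neutron.conf + l3_agent.ini
neutron-dhcp-agent         neutron.conf + dhcp_agent.ini
neutron-metadata-agent     neutron.conf + metadata_agent.ini
neutron-openvswitch-agent  neutron.conf + plugins/ml2/ml2_conf.ini
```

Compare nova-network: one service, one config file.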
> so.... that being said, I think we need to talk about "minimum viable
> neutron" as a model and figure out how far away that is from n-net. This
> week at the QA Sprint, Dean, Sean Collins, and I have spent some time
> hashing it out, hopefully with something to show by the end of the week.
> This will be the new devstack code for neutron (the old lib/neutron is
> moved to lib/neutron-legacy).
> Default setup will be provider networks (which means no tenant
> isolation). For that you should only need neutron-api, -dhcp, and -l2.
> So #1 is made a bunch better. #2 is not a thing at all. And for #3 we'd
> like to revert back to linux bridge for the base case (though first code
> will probably be OVS because that's the happy path today).
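As a rough sketch, the minimal provider-network setup described above could boil down to an ML2 config like this (values are illustrative assumptions, not a tested recipe):

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative Kilo-era sketch)
[ml2]
type_drivers = flat,vlan
# left empty: no self-service tenant networks
tenant_network_types =
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = physnet1

[linux_bridge]
# physnet1 is a made-up label; eth1 is whatever NIC carries the provider net
physical_interface_mappings = physnet1:eth1
```

With that, only neutron-server (the API), the DHCP agent, and the Linux bridge L2 agent need to run; no L3 agent, no OVS.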

Are you suggesting that for the common use cases that will use the default
setup, the external network connectivity doesn't matter much?

> The first optional layer would be the flip from linuxbridge -> OVS. That
> becomes one bite-sized thing to flip over once you understand it.
> Mixin #2: self-service networks.
> This will be off in the default case, but can be enabled later.
> ... and turtles all the way up.
> Provider networks w/ Linux bridge are really close to the simplicity on
> the wire people expected with n-net. The last real difference is
> floating IPs. And the problem here was best captured by Sean Collins on
> Wednesday: floating IPs in nova are overloaded. They are elastic IPs, but
> they are also how you get public addresses in a default environment.
> Dean shared that this dual purpose is entirely due to constraints of the
> first NASA cloud, which only had a /26 of routable IPs. In neutron this
> is just different: you don't need floating IPs to have public addresses.
> But the mental model has stuck.
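To illustrate the neutron model described above: an admin creates a shared provider network whose subnet hands out routable addresses directly, so instances get public IPs on boot with no floating IP step. The commands below are an illustrative Kilo-era sketch (network name and addresses made up):

```shell
# Flat provider network on the routable segment (admin-only operation)
neutron net-create public-net --shared \
    --provider:network_type flat --provider:physical_network physnet1

# Subnet with routable addresses served over DHCP
neutron subnet-create public-net 203.0.113.0/24 \
    --gateway 203.0.113.1 \
    --allocation-pool start=203.0.113.10,end=203.0.113.200

# Booting with --nic net-id=<public-net uuid> then gives the instance a
# public address directly; no floating IP association is needed.
```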
> Anyway, while I'm not sure this is going to solve everyone's issues, I
> think it's a useful exercise anyway for devstack's neutron support to
> revert to a minimum viable neutron for learning purposes, and let you
> layer on complexity manually over time. And I'd be really curious if a
> n-net -> provider network side step (still on linux bridge) would
> actually be a more reasonable transition for most environments.
>    -Sean
> --
> Sean Dague
> http://dague.net