[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
Ian Wells
ijw.ubuntu at cack.org.uk
Thu Oct 23 21:58:17 UTC 2014
There are two categories of problems:
1. some networks don't pass VLAN tagged traffic, and it's impossible to
detect this from the API
2. it's not possible to pass traffic from multiple networks to one port on
one machine as (e.g.) VLAN tagged traffic
(1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
addresses this, particularly in the case where one VM is emitting tagged
packets that another one should receive and OpenStack knows nothing about
what's going on.
We should get this in, ideally quickly and in a simple form that just tells
you whether a network is capable of passing tagged traffic. In general this
is possible to calculate, though a bit tricky in ML2: anything using the OVS
mechanism driver won't pass VLAN traffic, anything using VLAN segmentation
should probably also claim it doesn't pass VLAN traffic (though in practice
that depends a little on the switch), and combinations of L3 tunnels plus
Linuxbridge seem to pass VLAN traffic just fine. Beyond that, the proposal
has a backward-compatibility mode, so any plugin that doesn't implement VLAN
reporting still behaves correctly per the specification.
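For illustration only (the attribute name and defaults will be whatever the
spec finally settles on; 'vlan_transparent' and the credentials below are
placeholders), this is roughly how a tenant might check the flag through
python-neutronclient:

    from neutronclient.v2_0 import client

    # Placeholder credentials; substitute your own Keystone details.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # 'vlan_transparent' stands in for whatever attribute the spec defines;
    # a plugin without the extension simply won't return it, which is the
    # backward-compatibility case described above.
    net = neutron.show_network('NET-UUID')['network']
    if net.get('vlan_transparent'):
        print('network claims to carry VLAN-tagged frames end to end')
    else:
        print('no guarantee that tagged frames survive this network')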
(2) is addressed by several blueprints, and these have overlapping ideas
that all solve the problem. I would summarise the possibilities as follows:
A. Racha's L2 gateway blueprint,
https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension, which
(at its simplest, though it's had features added on and is somewhat
OVS-specific in its detail) acts as a concentrator to multiplex multiple
networks onto one as a trunk. This is a very simple approach and doesn't
attempt to resolve any of the hairier questions like making DHCP work as
you might want it to on the ports attached to the trunk network.
B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
which is more limited in that it refers only to external connections.
C. Erik's VLAN port blueprint,
https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which tries
to solve the addressing problem mentioned above by having ports within
ports (much as, on the VM side, interfaces passing trunk traffic tend to
have subinterfaces that deal with the traffic streams; a guest-side sketch
of this follows the list).
D. Not a blueprint, but an idea I've come across: create a network that is
a collection of other networks, each 'subnetwork' being a VLAN in the
network trunk.
E. Kyle's very old blueprint,
https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api -
where we attach a port, not a network, to multiple networks. Probably
doesn't work with appliances.
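To show the guest-side analogy from (C): inside the VM, consuming a trunk
is already well understood, since the guest just creates one subinterface
per VLAN. A minimal sketch using pyroute2 as an example, assuming eth0 is
the trunked vif and the VLAN IDs are already known to the guest:

    from pyroute2 import IPRoute

    ip = IPRoute()
    parent = ip.link_lookup(ifname='eth0')[0]

    # One subinterface per network carried on the trunk; the VLAN IDs here
    # are examples and must match whatever the trunk actually delivers.
    for vlan_id in (100, 200):
        name = 'eth0.%d' % vlan_id
        ip.link('add', ifname=name, kind='vlan', link=parent,
                vlan_id=vlan_id)
        ip.link('set', index=ip.link_lookup(ifname=name)[0], state='up')

The hard part, and what these blueprints are actually about, is getting
Neutron to deliver the tagged traffic to that vif in the first place and,
optionally, to manage addressing on those subinterfaces.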
I would recommend we try and find a solution that works with both external
hardware and internal networks. (B) is only a partial solution.
Considering the others, note that (C) and (D) add significant complexity to
the data model, independently of the benefits they bring. (A) adds one new
functional block to networking (similar to today's routers, or even today's
Nova instances).
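To make the data-model point concrete, here is a purely illustrative sketch
(not the schemas from either spec) of the kind of nesting (C) and (D)
introduce:

    # (C) vlan-aware-vms: ports nested inside a parent port, keyed by VLAN.
    vlan_aware_port = {
        'id': 'parent-port',
        'subports': [
            {'vlan_id': 100, 'port_id': 'port-for-net-a'},
            {'vlan_id': 200, 'port_id': 'port-for-net-b'},
        ],
    }

    # (D) network-of-networks: a trunk network whose members are networks.
    trunk_network = {
        'id': 'trunk-net',
        'members': [
            {'vlan_id': 100, 'network_id': 'net-a'},
            {'vlan_id': 200, 'network_id': 'net-b'},
        ],
    }

Everything that currently assumes a port belongs to exactly one network, or
that a network has no internal structure, would have to learn about that
nesting; (A) avoids it by leaving ports and networks alone and adding a new
resource alongside them.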
Finally, I suggest we consider the most prominent use case for multiplexing
networks. This seems to be condensing traffic from many networks to either
a service VM or a service appliance. It's useful, but not essential, to
have Neutron control the addresses on the trunk port subinterfaces.
So, that said, I personally favour (A) as the simplest way to solve our
current needs, and I recommend paring (A) right down to its basics: a block
that has access ports that we tag with a VLAN ID, and one trunk port that
has all of the access networks multiplexed onto it. This is a slightly
dangerous block, in that you can actually set up forwarding loops with it,
and that's a concern; but it's a simple service block like a router, it's
very, very simple to implement, and it solves our immediate problems so
that we can make forward progress. It also doesn't affect the other
solutions significantly, so someone could implement (C) or (D) or (E) in
the future.
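As a sketch only (this is my reading of the pared-down version, not an API
from the gateway spec), the whole block reduces to something like:

    # Hypothetical shape of the pared-down (A) block: one trunk port plus a
    # set of access ports, each tagged with the VLAN ID it uses on the trunk.
    l2_gateway_block = {
        'id': 'gw-1',
        'trunk_port_id': 'trunk-port',        # all access networks, tagged
        'access_ports': [
            {'vlan_id': 100, 'port_id': 'access-port-on-net-a'},
            {'vlan_id': 200, 'port_id': 'access-port-on-net-b'},
        ],
    }

    def trunk_vlan_for(block, access_port_id):
        """Tag pushed onto frames entering on a given access port; the
        reverse lookup demultiplexes tagged frames arriving on the trunk."""
        for ap in block['access_ports']:
            if ap['port_id'] == access_port_id:
                return ap['vlan_id']
        raise LookupError('port is not an access port of this block')

Scheduling and implementing it then looks much like a router: an agent
plugs the listed ports and wires the tags together, and nothing in the
existing port or network model has to change.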
--
Ian.
On 23 October 2014 02:13, Alan Kavanagh <alan.kavanagh at ericsson.com> wrote:
> +1 many thanks to Kyle for putting this as a priority, it's most welcome.
> /Alan
>
> -----Original Message-----
> From: Erik Moe [mailto:erik.moe at ericsson.com]
> Sent: October-22-14 5:01 PM
> To: Steve Gordon; OpenStack Development Mailing List (not for usage
> questions)
> Cc: iawells at cisco.com
> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
> Hi,
>
> Great that we can have more focus on this. I'll attend the meeting on
> Monday and also attend the summit, looking forward to these discussions.
>
> Thanks,
> Erik
>
>
> -----Original Message-----
> From: Steve Gordon [mailto:sgordon at redhat.com]
> Sent: 22 October 2014 16:29
> To: OpenStack Development Mailing List (not for usage questions)
> Cc: Erik Moe; iawells at cisco.com; Calum.Loudon at metaswitch.com
> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
> ----- Original Message -----
> > From: "Kyle Mestery" <mestery at mestery.com>
> > To: "OpenStack Development Mailing List (not for usage questions)"
> > <openstack-dev at lists.openstack.org>
> >
> > There are currently at least two BPs registered for VLAN trunk support
> > to VMs in neutron-specs [1] [2]. This is clearly something that I'd
> > like to see us land in Kilo, as it enables a bunch of things for the
> > NFV use cases. I'm going to propose that we talk about this at an
> > upcoming Neutron meeting [3]. Given the rotating schedule of this
> > meeting, and the fact the Summit is fast approaching, I'm going to
> > propose we allocate a bit of time in next Monday's meeting to discuss
> > this. It's likely we can continue this discussion F2F in Paris as
> > well, but getting a head start would be good.
> >
> > Thanks,
> > Kyle
> >
> > [1] https://review.openstack.org/#/c/94612/
> > [2] https://review.openstack.org/#/c/97714
> > [3] https://wiki.openstack.org/wiki/Network/Meetings
>
> Hi Kyle,
>
> Thanks for raising this, it would be great to have a converged plan for
> addressing this use case [1] for Kilo. I plan to attend the Neutron meeting
> and I've CC'd Erik, Ian, and Calum to make sure they are aware as well.
>
> Thanks,
>
> Steve
>
> [1]
> http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>