[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
racha
benali at gmail.com
Fri Oct 24 16:43:08 UTC 2014
Hi Ian,
Here are some details about integrating the L2 gateway (supporting multiple
plugins/MDs, not limited to the OVS agent implementation) as a trunking
gateway:
It's a building block that has multiple access ports, each a tenant Neutron
network port (with that block's type/uuid as its device_owner/device_id) in
a different Neutron network, plus up to one gateway port, which is a port on
a provider external network.
Adding the following two constraints ensures that Neutron networks and
blocks stay stubby and that there is no way to loop the networks, which very
simply provides one of several means of alleviating the concern that was
raised:
1) A Neutron network cannot have more than one port bound/added to any block
as an access port.
2) A block cannot own more than one gateway port set/unset on it.
If the type of that block is "learning bridge", then the gateway port is a
Neutron port on a specific provider external network (with the segmentation
details provided as in the existing Neutron API), and the block forwards
between access ports and the gateway port with broadcast isolation (as with
private VLANs) or broadcast merging (as with community VLANs). A very simple
implementation of this was submitted for review quite a while ago.
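To make the isolated vs. community semantics concrete, here is a minimal
forwarding-decision sketch (purely illustrative; none of these names come
from Neutron):

```python
def flood_targets(src_port, access_ports, gateway_port, mode="isolated"):
    """Return the ports a broadcast from src_port is flooded to.

    mode="isolated": like private VLANs -- access ports only ever talk to
    the gateway port, never to each other (broadcast isolation).
    mode="community": like community VLANs -- broadcasts are merged across
    the other access ports as well as the gateway (broadcast merge).
    """
    if src_port == gateway_port:
        # Downstream: the gateway floods to every access port in either mode.
        return list(access_ports)
    if mode == "isolated":
        # Upstream: an access port's broadcast reaches the gateway only.
        return [gateway_port]
    # Community: reach the sibling access ports plus the gateway.
    return [p for p in access_ports if p != src_port] + [gateway_port]
```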
If the type of that block is "trunking bridge", then the gateway port is
either a trunk port as in the "VLAN-aware VMs" BP, or a dynamic collection
of Neutron ports as in a suggested extension of the "network collection"
idea, with each port in a different provider external network. In the latter
case there is a 1-to-1 transparent patching-hook service between one access
port on tenant_net_x and one external port on provider_net_y; that hook
could be the placeholder for a cross-network summarized/factorized security
group for tenant networks, among other things. We could then further
abstract a trunk as a mix of VLANs, GREs, VXLANs, etc. (i.e. Neutron
networks) side by side on the same trunk, not limited to the usual VLAN
trunks. What happens to this trunk (match -> block/forward/...) in the
provider external networks, as well as in the transparent patching hooks
within the block, is up to the provider, I guess. This is just a rough idea
off the top of my head that I can detail in the specs if there's any match
with what is required.
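A sketch of that trunking-bridge variant (all names hypothetical): each
tenant access port gets a 1-to-1 transparent patch to an external port on
its own provider network, and the "trunk" is simply the collection of those
provider networks, whatever their segmentation types:

```python
# Illustrative sketch only: no Neutron object here exists under these names.

class PatchHook:
    """Transparent 1-to-1 patch between one tenant access port and one
    provider external port; a natural place to hang a cross-network
    security group or other per-pair service."""
    def __init__(self, access_port, external_port, filters=None):
        self.access_port = access_port
        self.external_port = external_port
        self.filters = filters or []

class TrunkingBridge:
    def __init__(self):
        self.patches = []   # the trunk: one patch per provider network

    def patch(self, access_port, provider_net_type, external_port):
        hook = PatchHook(access_port, external_port)
        self.patches.append((provider_net_type, hook))
        return hook

    def trunk_members(self):
        """The trunk is a mix of segmentation types, not just VLANs."""
        return [net_type for net_type, _ in self.patches]
```

The point of the abstraction is visible in trunk_members(): a VLAN network
and a GRE network can sit side by side on the same trunk.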
Thanks,
Best Regards,
Racha
On Thu, Oct 23, 2014 at 2:58 PM, Ian Wells <ijw.ubuntu at cack.org.uk> wrote:
> There are two categories of problems:
>
> 1. some networks don't pass VLAN tagged traffic, and it's impossible to
> detect this from the API
> 2. it's not possible to pass traffic from multiple networks to one port on
> one machine as (e.g.) VLAN tagged traffic
>
> (1) is addressed by the VLAN trunking network blueprint, XXX. Nothing else
> addresses this, particularly in the case that one VM is emitting tagged
> packets that another one should receive and OpenStack knows nothing about
> what's going on.
>
> We should get this in, and ideally in quickly and in a simple form where
> it simply tells you if a network is capable of passing tagged traffic. In
> general, this is possible to calculate but a bit tricky in ML2 - anything
> using the OVS mechanism driver won't pass VLAN traffic, anything using
> VLANs should probably also claim it doesn't pass VLAN traffic (though
> actually it depends a little on the switch), and combinations of L3 tunnels
> plus Linuxbridge seem to pass VLAN traffic just fine. Beyond that, it's
> got a backward compatibility mode, so it's possible to ensure that any
> plugin that doesn't implement VLAN reporting is still behaving correctly
> per the specification.
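> One shape this reporting could take (a sketch with hypothetical names, not
> a committed API) is a per-driver capability flag aggregated per network,
> with unknown drivers falling back to the backward-compatible answer:

```python
# Hypothetical sketch of per-network VLAN transparency reporting.
# Driver names and verdicts mirror the discussion above; nothing here is a
# real ML2 interface.

DRIVER_PASSES_TAGGED = {
    "openvswitch": False,          # OVS mechanism driver won't pass VLAN traffic
    "vlan": False,                 # VLAN segmentation: depends on the switch, so claim False
    "linuxbridge+l3tunnel": True,  # L3 tunnels plus Linuxbridge pass tags fine
}

def network_vlan_transparent(drivers):
    """A network claims VLAN transparency only if every driver involved
    passes tagged frames; drivers with no VLAN reporting default to False,
    the backward-compatible behaviour."""
    return all(DRIVER_PASSES_TAGGED.get(d, False) for d in drivers)
```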
>
> (2) is addressed by several blueprints, and these have overlapping ideas
> that all solve the problem. I would summarise the possibilities as follows:
>
> A. Racha's L2 gateway blueprint,
> https://blueprints.launchpad.net/neutron/+spec/gateway-api-extension,
> which (at its simplest, though it's had features added on and is somewhat
> OVS-specific in its detail) acts as a concentrator to multiplex multiple
> networks onto one as a trunk. This is a very simple approach and doesn't
> attempt to resolve any of the hairier questions like making DHCP work as
> you might want it to on the ports attached to the trunk network.
> B. Isaku's L2 gateway blueprint, https://review.openstack.org/#/c/100278/,
> which is more limited in that it refers only to external connections.
> C. Erik's VLAN port blueprint,
> https://blueprints.launchpad.net/neutron/+spec/vlan-aware-vms, which
> tries to solve the addressing problem mentioned above by having ports
> within ports (much as, on the VM side, interfaces passing trunk traffic
> tend to have subinterfaces that deal with the traffic streams).
> D. Not a blueprint, but an idea I've come across: create a network that is
> a collection of other networks, each 'subnetwork' being a VLAN in the
> network trunk.
> E. Kyle's very old blueprint,
> https://blueprints.launchpad.net/neutron/+spec/quantum-network-bundle-api
> - where we attach a port, not a network, to multiple networks. Probably
> doesn't work with appliances.
>
> I would recommend we try and find a solution that works with both external
> hardware and internal networks. (B) is only a partial solution.
>
> Considering the others, note that (C) and (D) add significant complexity
> to the data model, independently of the benefits they bring. (A) adds one
> new functional block to networking (similar to today's routers, or even
> today's Nova instances).
>
> Finally, I suggest we consider the most prominent use case for
> multiplexing networks. This seems to be condensing traffic from many
> networks to either a service VM or a service appliance. It's useful, but
> not essential, to have Neutron control the addresses on the trunk port
> subinterfaces.
>
> So, that said, I personally favour (A) as the simplest way to solve our
> current needs, and I recommend paring (A) right down to its basics: a block
> that has access ports that we tag with a VLAN ID, and one trunk port that
> has all of the access networks multiplexed onto it. This is a slightly
> dangerous block, in that you can actually set up forwarding loops with it,
> and that's a concern; but it's a simple service block like a router, it's
> very, very simple to implement, and it solves our immediate problems so
> that we can make forward progress. It also doesn't affect the other
> solutions significantly, so someone could implement (C) or (D) or (E) in
> the future.
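> A minimal sketch of that pared-down block (names are hypothetical, not a
> proposed API): access ports keyed by VLAN ID, one trunk port, nothing else:

```python
class TrunkBlock:
    """Pared-down L2 gateway block: access ports tagged with a VLAN ID,
    all multiplexed onto a single trunk port. Illustrative names only."""

    def __init__(self, trunk_port):
        self.trunk_port = trunk_port
        self.access = {}   # vlan_id -> access port

    def add_access_port(self, port, vlan_id):
        if not 1 <= vlan_id <= 4094:
            raise ValueError("invalid VLAN ID %d" % vlan_id)
        if vlan_id in self.access:
            raise ValueError("VLAN %d already mapped on this block" % vlan_id)
        self.access[vlan_id] = port

    def demux(self, vlan_id):
        """Tagged traffic arriving on the trunk port goes to exactly one
        access port, selected by its VLAN ID."""
        return self.access[vlan_id]
```

Everything hairier (DHCP on the trunk network, addressing of subinterfaces)
stays out of the model, which is what keeps it simple to implement.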
> --
> Ian.
>
>
> On 23 October 2014 02:13, Alan Kavanagh <alan.kavanagh at ericsson.com>
> wrote:
>
>> +1 many thanks to Kyle for putting this as a priority, its most welcome.
>> /Alan
>>
>> -----Original Message-----
>> From: Erik Moe [mailto:erik.moe at ericsson.com]
>> Sent: October-22-14 5:01 PM
>> To: Steve Gordon; OpenStack Development Mailing List (not for usage
>> questions)
>> Cc: iawells at cisco.com
>> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
>> blueprints
>>
>>
>> Hi,
>>
>> Great that we can have more focus on this. I'll attend the meeting on
>> Monday and also attend the summit, looking forward to these discussions.
>>
>> Thanks,
>> Erik
>>
>>
>> -----Original Message-----
>> From: Steve Gordon [mailto:sgordon at redhat.com]
>> Sent: 22 October 2014 16:29
>> To: OpenStack Development Mailing List (not for usage questions)
>> Cc: Erik Moe; iawells at cisco.com; Calum.Loudon at metaswitch.com
>> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
>> blueprints
>>
>> ----- Original Message -----
>> > From: "Kyle Mestery" <mestery at mestery.com>
>> > To: "OpenStack Development Mailing List (not for usage questions)"
>> > <openstack-dev at lists.openstack.org>
>> >
>> > There are currently at least two BPs registered for VLAN trunk support
>> > to VMs in neutron-specs [1] [2]. This is clearly something that I'd
>> > like to see us land in Kilo, as it enables a bunch of things for the
>> > NFV use cases. I'm going to propose that we talk about this at an
>> > upcoming Neutron meeting [3]. Given the rotating schedule of this
>> > meeting, and the fact the Summit is fast approaching, I'm going to
>> > propose we allocate a bit of time in next Monday's meeting to discuss
>> > this. It's likely we can continue this discussion F2F in Paris as
>> > well, but getting a head start would be good.
>> >
>> > Thanks,
>> > Kyle
>> >
>> > [1] https://review.openstack.org/#/c/94612/
>> > [2] https://review.openstack.org/#/c/97714
>> > [3] https://wiki.openstack.org/wiki/Network/Meetings
>>
>> Hi Kyle,
>>
>> Thanks for raising this, it would be great to have a converged plan for
>> addressing this use case [1] for Kilo. I plan to attend the Neutron meeting
>> and I've CC'd Erik, Ian, and Calum to make sure they are aware as well.
>>
>> Thanks,
>>
>> Steve
>>
>> [1]
>> http://lists.openstack.org/pipermail/openstack-dev/2014-October/047548.html
>> _______________________________________________
>> OpenStack-dev mailing list
>> OpenStack-dev at lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>>
>
>
>
>