[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Ian Wells ijw.ubuntu at cack.org.uk
Mon Oct 27 16:15:06 UTC 2014


On 25 October 2014 15:36, Erik Moe <erik.moe at ericsson.com> wrote:

>  Then I tried to just use the trunk network as a plain pipe to the
> L2-gateway and connect to normal Neutron networks. One issue is that the
> L2-gateway will bridge the networks, but the services in the network you
> bridge to are unaware of your existence. This IMO is OK when bridging a
> Neutron network to some remote network, but if you have a Neutron VM and
> want to utilize various resources in another Neutron network (since the
> one you sit on does not have any resources), things get, let's say,
> non-streamlined.
>

Indeed.  However, non-streamlined is not the end of the world, and I
wouldn't want to have to tag, on the port and in advance, every VLAN the
port is going to use (this works for some use cases, and makes others
difficult, particularly if you just want a native trunk and are happy for
OpenStack not to have insight into what's going on on the wire).


>  Another issue with trunk networks is that they put new requirements on
> the infrastructure: it needs to be able to handle VLAN tagged frames. For
> a VLAN-based network that would mean QinQ.
>

Yes, and that's the point of the VLAN trunk spec, where we flag a network
as passing VLAN tagged packets; if the operator-chosen network
implementation doesn't support trunks, the API can refuse to create a trunk
network.  Without that flag we're left in the situation where passing VLANs
works on some clouds and not on others, and the tenant can't actually tell
in advance which sort of cloud they're working on.
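
As a concrete sketch of what that flag could look like to a tenant (the
vlan_transparent attribute, cloud name, and openstacksdk calls here are
illustrative assumptions, not the spec's final API):

    # Hypothetical sketch: ask Neutron for a network that passes VLAN
    # tagged frames, and let the API refuse if the operator's chosen
    # implementation can't provide one.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # cloud name is an assumption

    try:
        net = conn.network.create_network(
            name="trunk-net",
            vlan_transparent=True,  # request a VLAN-transparent network
        )
    except openstack.exceptions.BadRequestException:
        # The tenant finds out at create time, rather than discovering on
        # the wire that tagged frames are silently dropped.
        print("This cloud cannot provide a trunk network")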

Trunk networks are a requirement for some use cases, independent of any
port-level awareness of VLANs.  On the maxim 'make the easy stuff easy and
the hard stuff possible', we can't just say 'no Neutron network passes VLAN
tagged packets'.  And even if we did, we would be evading a problem that
exists with exactly one sort of network infrastructure (VLAN tagging for
network separation) while making trunking hard to use in the many other
cases where it would work just fine.
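
To make the QinQ point above concrete: on a VLAN-based underlay the
tenant's tag has to ride inside the provider's segmentation tag, so the
fabric must carry double-tagged frames.  A minimal scapy sketch (the
addresses and VLAN ids are illustrative assumptions):

    # Double-tagged (QinQ) frame: the outer tag is the provider's
    # segmentation VLAN, the inner tag is whatever the guest put on its
    # trunk.
    from scapy.all import Ether, Dot1Q, IP

    frame = (
        Ether(src="fa:16:3e:00:00:01", dst="fa:16:3e:00:00:02")
        / Dot1Q(vlan=2001)   # outer tag: provider network segmentation
        / Dot1Q(vlan=100)    # inner tag: set by the guest on its trunk
        / IP(dst="10.0.0.2")
    )
    frame.show()  # the fabric must forward this without stripping either tag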

In summary, if we did port-based VLAN knowledge I would want to be able to
use VLANs without having to use that feature (in much the same way that I
would like, in certain circumstances, not to have to use OpenStack's
address allocation and DHCP: it's nice that I can, but I shouldn't be
forced to).
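
For comparison, opting out of Neutron's DHCP is already a per-subnet
choice; a small sketch (the openstacksdk names, cloud name, and network
name are assumptions):

    # Sketch: model the addressing in Neutron but serve no DHCP on the
    # subnet.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    network = conn.network.find_network("tenant-net")

    subnet = conn.network.create_subnet(
        network_id=network.id,
        ip_version=4,
        cidr="192.0.2.0/24",
        enable_dhcp=False,  # addresses are allocated, but no DHCP agent answers
    )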

> My requirements were to have low/no extra cost for VMs using VLAN trunks
> compared to normal ports, and no new bottlenecks or single points of
> failure. Due to this and the previous issues I implemented the L2 gateway
> in a distributed fashion, and since the trunk network could not be
> realized in practice I only had it in the model and optimized it away.
>

Again, this is down to your choice of VLAN-tagged networking and/or the OVS
ML2 driver; it doesn't apply to all deployments.


> But the L2-gateway + trunk network has a flexible API: what if someone
> connects two VMs to one trunk network? Well, that's hard to optimize away.
>

That's certainly true, but it wasn't really intended to be optimised away.

> Anyway, due to these and other issues, I limited my scope and switched to
> the current trunk port/subport model.
>
>
>
> The code that is up for review is functional: you can boot a VM with a
> trunk port + subports (each subport maps to a VLAN). The VM can
> send/receive VLAN traffic. You can add/remove subports on a running VM.
> You can specify an IP address per subport and use DHCP to retrieve them,
> etc.
>

I'm coming to realise that the two solutions address different needs: the
VLAN port one is much more useful for cases where you know what's going on
in the network and you want OpenStack to help, but it's just not broad
enough to solve every problem.  It may well be that we want both solutions,
in which case we just need to agree that 'we shouldn't do trunk networking
because VLAN-aware ports solve this problem' is not a valid argument during
spec review.
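
For reference, a sketch of how the trunk port/subport model Erik describes
might look to a client (the openstacksdk trunk calls and the UUIDs here are
assumptions about what the blueprint's API could become, not the code under
review):

    # Hypothetical client-side sketch: a parent port carries untagged
    # traffic, each subport maps a VLAN id on the parent to another
    # Neutron network, and subports can change on a running VM.
    import openstack

    conn = openstack.connect(cloud="mycloud")  # cloud name is an assumption
    parent_net_id = "NET-UUID-A"               # placeholder: untagged traffic
    tagged_net_id = "NET-UUID-B"               # placeholder: reached via VLAN 100

    parent = conn.network.create_port(network_id=parent_net_id)
    child = conn.network.create_port(network_id=tagged_net_id)

    trunk = conn.network.create_trunk(
        name="vm-trunk",
        port_id=parent.id,
        sub_ports=[{
            "port_id": child.id,
            "segmentation_type": "vlan",
            "segmentation_id": 100,  # guest tags VLAN 100 to reach this network
        }],
    )

    # Subports can be added (or removed) while the VM booted on the
    # parent port is running:
    extra = conn.network.create_port(network_id=tagged_net_id)
    conn.network.add_trunk_subports(trunk, [{
        "port_id": extra.id,
        "segmentation_type": "vlan",
        "segmentation_id": 200,
    }])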
-- 
Ian.