[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints

Ian Wells ijw.ubuntu at cack.org.uk
Tue Oct 28 07:40:50 UTC 2014


This all appears to refer to trunking ports rather than anything else, so
I've addressed the points on that basis.

On 28 October 2014 00:03, A, Keshava <keshava.a at hp.com> wrote:

>  Hi,
>
>  1. How many trunk ports can be created?
>
Why would there be a limit?

> Will there be any active-standby concept?
>
I don't believe active-standby, or any HA concept, is directly relevant.
Did you have something in mind?

>   2. Is it possible to configure multiple IP addresses on these ports?
>
Yes, in the sense that you can have addresses per port.  The usual
restrictions on ports would apply, and ports don't currently allow multiple
IP addresses (with the exception of the allowed-address-pairs extension).
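
For illustration, a minimal sketch of that extension using
python-neutronclient (the credentials and net_id here are placeholders,
not anything from this thread):

    from neutronclient.v2_0 import client

    # Placeholder credentials; assumes a reachable Keystone endpoint.
    neutron = client.Client(username='demo', password='secret',
                            tenant_name='demo',
                            auth_url='http://controller:5000/v2.0')

    # The port gets its fixed IP as usual; the allowed-address-pairs
    # extension then lets additional addresses through the port's
    # anti-spoofing rules.
    port = neutron.create_port({'port': {
        'network_id': net_id,  # placeholder for an existing network
        'allowed_address_pairs': [{'ip_address': '10.0.0.5'}],
    }})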

> In the case of IPv6, where multiple primary addresses can be configured,
> will this be supported?
>
No reason why not - we're expecting to re-use the usual port, so you'd
expect the features there to apply (in addition to having multiple sets of
subnets on a trunking port).

>   3. If required, can these ports be aggregated into a single one
> dynamically?
>
That's not really relevant to trunk ports or networks.

>  4. Will there be a requirement to handle nested tagged packets on such
> interfaces?
>
For trunking ports, I don't believe anyone was considering it.


>
> Thanks & Regards,
>
> Keshava
>
>
>
> From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
> Sent: Monday, October 27, 2014 9:45 PM
> To: OpenStack Development Mailing List (not for usage questions)
> Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking
> blueprints
>
>
>
> On 25 October 2014 15:36, Erik Moe <erik.moe at ericsson.com> wrote:
>
>  Then I tried to just use the trunk network as a plain pipe to the
> L2-gateway and connect to normal Neutron networks. One issue is that the
> L2-gateway will bridge the networks, but the services in the network you
> bridge to are unaware of your existence. This IMO is OK when bridging a
> Neutron network to some remote network, but if you have a Neutron VM and
> want to utilize various resources in another Neutron network (since the one
> you sit on does not have any resources), things get, let's say,
> non-streamlined.
>
>
>
> Indeed.  However, non-streamlined is not the end of the world, and I
> wouldn't want to have to tag all VLANs a port is using on the port in
> advance of using it (this works for some use cases, and makes others
> difficult, particularly if you just want a native trunk and are happy for
> OpenStack not to have insight into what's going on on the wire).
>
>
>
>   Another issue with the trunk network is that it puts new requirements on
> the infrastructure: it needs to be able to handle VLAN-tagged frames. For a
> VLAN-based network that would mean QinQ.
>
>
>
> Yes, and that's the point of the VLAN trunk spec, where we flag a network
> as passing VLAN tagged packets; if the operator-chosen network
> implementation doesn't support trunks, the API can refuse to make a trunk
> network.  Without it we're still in the situation that on some clouds
> passing VLANs works and on others it doesn't, and that the tenant can't
> actually tell in advance which sort of cloud they're working on.
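>
> To make that concrete, here's a hypothetical sketch of what the spec might
> look like from the API side (the attribute name is an assumption on my
> part, not a merged API):
>
>     # Assumes a neutronclient Client instance 'neutron'.  Flag a
>     # network as VLAN-transparent at creation time; if the operator's
>     # chosen implementation can't carry tagged frames, the API would
>     # refuse the request.
>     net = neutron.create_network({'network': {
>         'name': 'trunk-net',
>         'vlan_transparent': True,  # assumed attribute per the spec
>     }})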
>
> Trunk networks are a requirement for some use cases independent of the
> port awareness of VLANs.  Based on the maxim, 'make the easy stuff easy and
> the hard stuff possible' we can't just say 'no Neutron network passes VLAN
> tagged packets'.  And even if we did, we're evading a problem that exists
> with exactly one sort of network infrastructure - VLAN tagging for network
> separation - while making it hard to use for all of the many other cases in
> which it would work just fine.
>
> In summary, if we did port-based VLAN awareness I would want to be able to
> use VLANs without having to use it (in much the same way that I would like,
> in certain circumstances, not to have to use OpenStack's address allocation
> and DHCP - it's nice that I can, but I shouldn't be forced to).
>
>  My requirements were to have low/no extra cost for VMs using VLAN trunks
> compared to normal ports, and no new bottlenecks or single points of
> failure. Due to these and the previous issues I implemented the L2 gateway
> in a distributed fashion, and since trunk networks could not be realized in
> practice I only had them in the model and optimized them away.
>
>
>
> Again, this is down to your choice of VLAN tagged networking and/or the
> OVS ML2 driver; it doesn't apply to all deployments.
>
>
>
>  But the L2-gateway + trunk network has a flexible API: what if someone
> connects two VMs to one trunk network? Well, that's hard to optimize away.
>
>
>
> That's certainly true, but it wasn't really intended to be optimised away.
>
>  Anyway, due to these and other issues, I limited my scope and switched
> to the current trunk port/subport model.
>
>
>
> The code that is up for review is functional: you can boot a VM with a
> trunk port + subports (each subport maps to a VLAN). The VM can
> send/receive VLAN traffic. You can add/remove subports on a running VM. You
> can specify an IP address per subport and use DHCP to retrieve them, etc.
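>
> As a purely illustrative sketch of that model (resource and field names
> are assumptions, not necessarily the code under review):
>
>     # Assumes a neutronclient Client instance 'neutron'.  The trunk
>     # port is an ordinary Neutron port acting as the parent; each
>     # subport maps one VLAN tag on the trunk to its own port, with its
>     # own network, IP address and DHCP.
>     parent = neutron.create_port({'port': {'network_id': trunk_net_id}})
>     child = neutron.create_port({'port': {'network_id': tenant_net_id}})
>     subport = {
>         'port_id': child['port']['id'],
>         'segmentation_type': 'vlan',
>         'segmentation_id': 101,  # the VLAN tag the VM will use
>     }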
>
>
>
> I'm coming to realise that the two solutions address different needs - the
> VLAN port one is much more useful for cases where you know what's going on
> in the network and you want OpenStack to help, but it's just not broad
> enough to solve every problem.  It may well be that we want both solutions,
> in which case we just need to agree that 'we shouldn't do trunk networking
> because VLAN-aware ports solve this problem' is not a valid argument during
> spec review.
> --
>
> Ian.
>
> _______________________________________________
> OpenStack-dev mailing list
> OpenStack-dev at lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>