[openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
A, Keshava
keshava.a at hp.com
Tue Oct 28 10:55:11 UTC 2014
Hi,
Please find my reply below.
Regards,
keshava
From: Alan Kavanagh [mailto:alan.kavanagh at ericsson.com]
Sent: Tuesday, October 28, 2014 3:35 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
Hi
Please find some additions to Ian's points and my responses below.
/Alan
From: A, Keshava [mailto:keshava.a at hp.com]
Sent: October-28-14 9:57 AM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
Hi,
Please find my replies inline.
Regards,
keshava
From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
Sent: Tuesday, October 28, 2014 1:11 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
This all appears to be referring to trunking ports, rather than anything else, so I've addressed the points in that respect.
On 28 October 2014 00:03, A, Keshava <keshava.a at hp.com> wrote:
Hi,
1. How many trunk ports can be created?
Why would there be a limit?
Will there be any Active-Standby concept?
I don't believe active-standby, or any HA concept, is directly relevant. Did you have something in mind?
For the NFV kind of scenario, it is very much required to run the 'Service VM' in active and standby mode.
AK--> We have a different view on this: the application runs as a pair, either active-active or active-standby. This has nothing to do with HA in Neutron; it's down to the application and how it's provisioned and configured via OpenStack. So I agree with Ian on this.
The standby is more of a passive entity and will not take any action toward the external network; it will be a passive consumer of packets/information.
AK--> Why would we need to care?
In that scenario it would be very meaningful to have:
an 'active port' connected to the active Service VM, and
a 'standby port' connected to the standby Service VM, which turns active when the old active VM goes down.
AK--> Can't you just have two VMs and then, via a controller, decide how to handle MAC + IP address control? FYI, most NFV apps have that built in today.
Let us know others' opinions about this concept.
AK--> Perhaps I am misreading this, but I don't understand what this would provide as opposed to having two VMs instantiated and running. Why does Neutron need to care about the port state between these two VMs? Similarly, it's better to just have two or more VMs up, and the application will be able to handle failover when it occurs or is required. Let's keep it simple and not mix this up with what the apps do inside the containment.
Keshava:
Since this solution is aimed more at carrier-grade NFV Service VMs, I have the points below to make.
Let us say the Service-VM is running BGP, BGP-VPN, or 'MPLS + LDP + BGP-VPN'.
When such carrier-grade services are running, how do we provide five-nines HA?
In my opinion,
both the active and standby Service-VMs should hook into the same underlying OpenStack infrastructure stack (br-ext -> br-int -> qxx -> VM).
However, the 'active VM' would hook to the 'active port' and the 'standby VM' to the 'passive port' within that same stack.
If instead the active and standby VMs hook to two different stacks (br-ext1 -> br-int1 -> qxx1 -> VM-active) and (br-ext2 -> br-int2 -> qxx2 -> VM-standby), can those Service-VMs achieve 99.99999 reliability?
Yes, I may be thinking about this in a somewhat complicated way from an OpenStack perspective.
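For illustration, here is a minimal sketch of the alternative Alan describes: two service VMs sharing a virtual IP via the existing allowed-address-pairs extension, with the application itself (VRRP, BGP, etc.) deciding which VM is active. The port IDs, addresses and credentials below are placeholders, not anything from a real deployment.

# Sketch only: let an active/standby Service-VM pair share a virtual IP using
# allowed-address-pairs instead of a new "active port / standby port" concept.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='nfv', auth_url='http://keystone:5000/v2.0')

VIP = '10.0.0.100'                       # service address owned by whichever VM is active
ACTIVE_PORT = 'PORT-UUID-OF-ACTIVE-VM'   # placeholder UUIDs
STANDBY_PORT = 'PORT-UUID-OF-STANDBY-VM'

# Allow both ports to send/receive traffic for the VIP; the application
# decides which VM actually answers for it at any given time.
for port_id in (ACTIVE_PORT, STANDBY_PORT):
    neutron.update_port(port_id, {
        'port': {'allowed_address_pairs': [{'ip_address': VIP}]}
    })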
2. Is it possible to configure multiple IP addresses on these ports?
Yes, in the sense that you can have addresses per port. The usual restrictions on ports would apply, and they don't currently allow multiple IP addresses (with the exception of the address-pair extension).
In the case of IPv6 there can be multiple primary addresses configured; will this be supported?
No reason why not - we're expecting to re-use the usual port, so you'd expect the features there to apply (in addition to having multiple sets of subnets on a trunking port).
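As a rough sketch of what "the usual port features" would give you on a trunking port, here is a port created with fixed IPs drawn from two subnets (one IPv4, one IPv6) using the existing API; all UUIDs and addresses are placeholders.

# Sketch only: a port with addresses from two subnets, IPv4 chosen by Neutron
# and IPv6 given explicitly, assuming a trunking port re-uses the normal port model.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='nfv', auth_url='http://keystone:5000/v2.0')

port = neutron.create_port({
    'port': {
        'network_id': 'NETWORK-UUID',
        'fixed_ips': [
            {'subnet_id': 'IPV4-SUBNET-UUID'},
            {'subnet_id': 'IPV6-SUBNET-UUID', 'ip_address': '2001:db8::10'},
        ],
    }
})
print(port['port']['fixed_ips'])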
3. If required, can these ports be aggregated into a single one dynamically?
That's not really relevant to trunk ports or networks.
4. Will there be a requirement to handle nested tagged packets on such interfaces?
For trunking ports, I don't believe anyone was considering it.
Thanks & Regards,
Keshava
From: Ian Wells [mailto:ijw.ubuntu at cack.org.uk]
Sent: Monday, October 27, 2014 9:45 PM
To: OpenStack Development Mailing List (not for usage questions)
Subject: Re: [openstack-dev] [neutron] [nfv] VM-based VLAN trunking blueprints
On 25 October 2014 15:36, Erik Moe <erik.moe at ericsson.com> wrote:
Then I tried to just use the trunk network as a plain pipe to the L2-gateway and connect to normal Neutron networks. One issue is that the L2-gateway will bridge the networks, but the services in the network you bridge to are unaware of your existence. This IMO is okay when bridging a Neutron network to some remote network, but if you have a Neutron VM and want to utilize various resources in another Neutron network (since the one you sit on does not have any resources), things get, let's say, non-streamlined.
Indeed. However, non-streamlined is not the end of the world, and I wouldn't want to have to tag all VLANs a port is using on the port in advance of using it (this works for some use cases, and makes others difficult, particularly if you just want a native trunk and are happy for Openstack not to have insight into what's going on on the wire).
Another issue with trunk networks is that they put new requirements on the infrastructure: it needs to be able to handle VLAN-tagged frames. For a VLAN-based network it would be QinQ.
Yes, and that's the point of the VLAN trunk spec, where we flag a network as passing VLAN tagged packets; if the operator-chosen network implementation doesn't support trunks, the API can refuse to make a trunk network. Without it we're still in the situation that on some clouds passing VLANs works and on others it doesn't, and that the tenant can't actually tell in advance which sort of cloud they're working on.
Trunk networks are a requirement for some use cases independent of the port awareness of VLANs. Based on the maxim, 'make the easy stuff easy and the hard stuff possible' we can't just say 'no Neutron network passes VLAN tagged packets'. And even if we did, we're evading a problem that exists with exactly one sort of network infrastructure - VLAN tagging for network separation - while making it hard to use for all of the many other cases in which it would work just fine.
In summary, if we did port-based VLAN knowledge I would want to be able to use VLANs without having to use it (in much the same way that I would like, in certain circumstances, not to have to use Openstack's address allocation and DHCP - it's nice that I can, but I shouldn't be forced to).
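To make the network-level flag concrete: a minimal sketch of how a tenant might ask for a VLAN-passing network and learn whether the cloud supports it, assuming the spec ends up with a boolean network attribute along the lines of 'vlan_transparent' (the attribute name is illustrative here, taken from the proposal rather than a released API).

# Sketch only: request a network flagged as passing VLAN-tagged packets; a
# backend that cannot honour the flag would refuse the request.
from neutronclient.v2_0 import client
from neutronclient.common import exceptions as neutron_exc

neutron = client.Client(username='admin', password='secret',
                        tenant_name='nfv', auth_url='http://keystone:5000/v2.0')

try:
    net = neutron.create_network({
        'network': {'name': 'trunk-net', 'vlan_transparent': True}
    })
except neutron_exc.NeutronClientException as exc:
    # The operator-chosen implementation cannot provide a VLAN trunk network.
    print('VLAN-transparent networks not available on this cloud: %s' % exc)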
My requirements were to have low or no extra cost for VMs using VLAN trunks compared to normal ports, and no new bottlenecks or single points of failure. Due to this and the previous issues I implemented the L2 gateway in a distributed fashion, and since trunk networks could not be realized in reality I only had them in the model and optimized them away.
Again, this is down to your choice of VLAN tagged networking and/or the OVS ML2 driver; it doesn't apply to all deployments.
But the L2-gateway + trunk network has a flexible API; what if someone connects two VMs to one trunk network? Well, that's hard to optimize away.
That's certainly true, but it wasn't really intended to be optimised away.
Anyway, due to these and other issues, I limited my scope and switched to the current trunk port/subport model.
The code that is up for review is functional: you can boot a VM with a trunk port + subports (each subport maps to a VLAN). The VM can send/receive VLAN traffic. You can add/remove subports on a running VM. You can specify an IP address per subport and use DHCP to retrieve them, etc.
I'm coming to realise that the two solutions address different needs - the VLAN port one is much more useful for cases where you know what's going on in the network and you want Openstack to help, but it's just not broad enough to solve every problem. It may well be that we want both solutions, in which case we just need to agree that 'we shouldn't do trunk networking because VLAN aware ports solve this problem' is not a valid argument during spec review.
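For readers trying to picture the trunk port/subport model under review, here is a hypothetical sketch of the kind of request it implies: a parent port attached to the VM, plus subports that each map a VLAN ID to an ordinary Neutron port. The resource name, URL and field names below are illustrative only, following the proposal rather than any released API.

# Sketch only: a hypothetical REST call expressing "trunk = parent port + subports".
import json
import requests

NEUTRON = 'http://neutron:9696/v2.0'
HEADERS = {'X-Auth-Token': 'ADMIN-TOKEN', 'Content-Type': 'application/json'}

trunk_request = {
    'trunk': {
        'port_id': 'PARENT-PORT-UUID',   # the port the VM boots with
        'sub_ports': [
            # each subport maps a VLAN on the wire to a normal Neutron port
            {'port_id': 'SUBPORT-UUID-1', 'segmentation_type': 'vlan',
             'segmentation_id': 101},
            {'port_id': 'SUBPORT-UUID-2', 'segmentation_type': 'vlan',
             'segmentation_id': 102},
        ],
    }
}

resp = requests.post(NEUTRON + '/trunks', headers=HEADERS,
                     data=json.dumps(trunk_request))
resp.raise_for_status()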
--
Ian.