[openstack-dev] ML2 Type driver for supporting network overlays, with more than 4K segments

Padmanabhan Krishnan kprad1 at yahoo.com
Fri Mar 28 00:59:42 UTC 2014


Hi Mathieu,
Thanks for your reply.
Yes, I also think the type driver code for tunnels can remain the same, since
the segment/tunnel allocation is not going to change. But some distinction has
to be made, either in the naming or by adding another tunnel parameter, to
signify a network overlay.
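Roughly what I have in mind, purely as an illustration (the class and type
names below are made up, not the actual ML2 classes):

class TunnelTypeDriverBase(object):
    """Stand-in for the existing tunnel type driver; the VNI/tunnel-id
    allocation and release logic would live here, unchanged."""

    def get_type(self):
        return "vxlan"


class FabricOverlayTypeDriver(TunnelTypeDriverBase):
    """Same allocation logic, but a distinct type name so the agent can
    tell that this segment is carried by the external fabric and the
    access VLAN comes from VDP."""

    def get_type(self):
        return "fabric_vxlan"  # hypothetical new network_type string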
For the tunnel types, br-tun is created. For regular VLAN, br-ex/br-eth also
has the uplink as a member port. For this case, I was thinking it is easier if
we don't create br-tun or the VXLAN/GRE endpoints at all, since the compute
nodes (the data network in OpenStack) are connected through the external
fabric. We would just have br-eth/br-ex and its port connecting to the fabric,
exactly as if the type were VLAN. If we do this, the changes have to be in the
Neutron agent code.
Is this the right way to go, or are there any suggestions?
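To make the agent-side idea concrete, here is a very rough sketch of what the
special case could look like (the helper, the uplink name and the direct
ovs-vsctl call are just placeholders for illustration, not existing Neutron
code):

import subprocess


def get_vdp_vlan(uplink, segmentation_id):
    """Hypothetical helper: ask lldpad which provider VLAN the fabric has
    assigned for this segment (e.g. a VXLAN VNI) on the given uplink.
    The actual lldpad/VDP exchange is omitted here."""
    raise NotImplementedError(
        "query lldpad for segment %s on %s" % (segmentation_id, uplink))


def provision_fabric_overlay_port(port_name, segmentation_id, uplink="eth2"):
    """Sketch of the agent special case: tag the VM port on br-eth/br-ex
    with the VDP-assigned VLAN and skip br-tun/tunnel setup entirely."""
    vlan = get_vdp_vlan(uplink, segmentation_id)
    subprocess.check_call(
        ["ovs-vsctl", "set", "Port", port_name, "tag=%d" % vlan])

The real change would of course go into provision_local_vlan() in
ovs_neutron_agent.py rather than a standalone function like this.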

Thanks,
Paddu




On Wednesday, March 26, 2014 1:53 AM, Mathieu Rohon <mathieu.rohon at gmail.com> wrote:
 
Hi,

Thanks for this very interesting use case!
Maybe you can still use VXLAN or GRE for tenant networks, to bypass the 4K
limit of VLANs. You would then send the packets out of the VLAN-tagged
interface, with the tag assigned by the VDP protocol, and this traffic would
be encapsulated inside the segment to be carried across the network fabric.
Of course, you will have to take care of the MTU.
The only other thing to check is that the route between the VXLAN endpoints
goes through your VLAN-tagged interface.
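Something along these lines on each compute node could do it (the interface
name, VLAN ID, MTU and address below are only placeholders, and in practice
the tag would come from VDP at run time):

import subprocess


def setup_tunnel_endpoint(uplink="eth2", vlan=100,
                          endpoint_ip="10.0.100.11/24", mtu=1450):
    """Create the VDP-assigned VLAN sub-interface, lower its MTU to leave
    room for the VXLAN/GRE header, and give it the address that the OVS
    agent will use as its tunnel endpoint, so tunnel traffic egresses
    with the fabric-assigned tag."""
    sub = "%s.%d" % (uplink, vlan)
    subprocess.check_call(["ip", "link", "add", "link", uplink, "name", sub,
                           "type", "vlan", "id", str(vlan)])
    subprocess.check_call(["ip", "link", "set", sub, "mtu", str(mtu), "up"])
    subprocess.check_call(["ip", "addr", "add", endpoint_ip, "dev", sub])

With the OVS agent's local_ip setting pointing at that address, the
encapsulated packets will carry the VDP-assigned tag onto the fabric.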



Best,
Mathieu

On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan <kprad1 at yahoo.com> wrote:
> Hello,
> I have a topology where my OpenStack compute nodes are connected to
> external switches. The fabric comprising the switches supports more than
> 4K segments, so I should be able to create more than 4K networks in
> OpenStack. However, the VLAN to be used for communication with the
> switches is assigned by the switches using the 802.1Qbg (VDP) protocol.
> This can be thought of as a network overlay. The VMs send 802.1Q-tagged
> frames to the switches, and the switches associate them with the segment
> (the VNI in the case of VXLAN).
> My questions are:
> 1. I cannot use a type driver of VLAN because of the 4K limitation. I
> cannot use a type driver of VXLAN or GRE because that would mean a
> host-based overlay. Is there an integrated type driver I can use, like an
> "external network", to achieve the above?
> 2. The OpenStack module running on the compute node should communicate
> with the VDP module (lldpad) running there.
> On the compute nodes, I see that ovs_neutron_agent.py is the one
> programming the flows. Here, for the new type driver, should I add a
> special case to provision_local_vlan() to communicate with lldpad and
> retrieve the provider VLAN? If there were a type driver component running
> on each compute node, I would have added another one for my purpose.
> Since the ML2 architecture has its mechanism/type driver modules on the
> controller only, I can only make changes there.
>
> Please let me know if there's already an implementation for the above
> requirements. If not, should I create a blueprint?
>
> Thanks,
> Paddu
>