<html><body><div style="color:#000; background-color:#fff; font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:8pt">Hi Mathieu,<br clear="none">Thanks for your reply.<br clear="none">Yes, I also think the type driver code for tunnels can remain the same, since the segment/tunnel allocation is not going to change. But some distinction has to be made, either in the naming or by adding another tunnel parameter, to signify a network overlay.<br clear="none">For tunnel types, br-tun is created. For regular VLAN, br-ex/br-eth also has the uplink as a member port. Given that, I was thinking it's easier if we don't create br-tun or the VXLAN/GRE endpoints at all, since the compute nodes (the data network in OpenStack) are connected through the external fabric. We would just have br-eth/br-ex and its port connecting to the fabric, exactly as if the type were VLAN. If we do this, the changes have to be in the Neutron agent code.<br clear="none">Is this the right way to go, or are there other
suggestions?<br clear="none"><br clear="none">Thanks,<br clear="none">Paddu<div><span></span></div><div style="display: block;" class="yahoo_quoted"> <br> <br> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 8pt;"> <div style="font-family: HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif; font-size: 12pt;"> <div dir="ltr"> <font face="Arial" size="2"> On Wednesday, March 26, 2014 11:28 AM, Padmanabhan Krishnan <kprad1@yahoo.com> wrote:<br> </font> </div> <div class="y_msg_container"><div id="yiv4058387141"><div><div style="color:#000;background-color:#fff;font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:8pt;"><div class="yiv4058387141yqt9235365799" id="yiv4058387141yqt52613"><div class="yiv4058387141yahoo_quoted" style="display:block;"> <br clear="none"> <br clear="none"> <div style="font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:8pt;"> <div style="font-family:HelveticaNeue, Helvetica Neue, Helvetica, Arial, Lucida Grande, sans-serif;font-size:12pt;"> <div dir="ltr"> <font face="Arial" size="2"> On Wednesday, March 26, 2014 1:53 AM, Mathieu Rohon <mathieu.rohon@gmail.com> wrote:<br clear="none"> </font> </div> <div class="yiv4058387141y_msg_container">Hi,<br clear="none"><br clear="none">thanks for this very interesting use case!<br clear="none">May be you can still use VXLAN or GRE for tenant networks, to bypass<br clear="none">the 4k limit of vlans. then you would have
to send packets to the vlan<br clear="none">tagged interface, with the tag assigned by the VDP protocol, and this<br clear="none">traffic would be encapsulated inside the segment to be carried inside<br clear="none">the network fabric. Of course you will have to take care about
MTU.<br clear="none">The only thing you have to consider is to be sure that the default<br clear="none">route between VXLan endpoints go through your vlan tagged interface.<br clear="none"><br clear="none"><br clear="none"><br clear="none">Best,<br clear="none">Mathieu<br clear="none"><br clear="none">On Tue, Mar 25, 2014 at 12:13 AM, Padmanabhan Krishnan <<a rel="nofollow" shape="rect" ymailto="mailto:kprad1@yahoo.com" target="_blank" href="mailto:kprad1@yahoo.com">kprad1@yahoo.com</a>> wrote:<br clear="none">> Hello,<br clear="none">> I have a topology where my Openstack compute nodes are connected to the<br clear="none">> external switches. The fabric comprising of the switches support more than<br clear="none">> 4K segments. So, i should be able to create more than 4K networks in<br clear="none">> Openstack. But, the VLAN to be used for communication with the switches is<br clear="none">> assigned by the switches using
802.1QBG (VDP) protocol. This can be thought<br clear="none">> of as a network overlay. The VM's sends .1q frames to the switches and the<br clear="none">> switches associate it to the segment (VNI in case of VXLAN).<br clear="none">> My question is:<br clear="none">> 1. I cannot use
a type driver of VLAN because of the 4K limitation. I cannot<br clear="none">> use a type driver of VXLAN or GRE because that may mean host based overlay.<br clear="none">> Is there an integrated type driver i can use like an "external network" for<br clear="none">> achieving the above?<br clear="none">> 2. The Openstack module running in the compute should communicate with VDP<br clear="none">> module (lldpad) running there.<br clear="none">> In the computes, i see that ovs_neutron_agent.py is the one programming the<br clear="none">> flows. Here, for the new type driver, should i add a special case to<br clear="none">> provision_local_vlan() for communicating with lldpad for retrieving the<br clear="none">> provider VLAN? If there was a type driver component running in each<br clear="none">> computes, i would have added another one for my purpose. Since, the ML2<br clear="none">> architecture has its mechanism/type
driver modules in the controller only, i<br clear="none">> can only make changes here.<br clear="none">><br clear="none">> Please let me know if there's already an
implementation for my above<br clear="none">> requirements. If not, should i create a blue-print?<br clear="none">><br clear="none">> Thanks,<br clear="none">> Paddu<br clear="none">><br clear="none">> _______________________________________________<br clear="none">> OpenStack-dev mailing list<br clear="none">> <a rel="nofollow" shape="rect" ymailto="mailto:OpenStack-dev@lists.openstack.org" target="_blank" href="mailto:OpenStack-dev@lists.openstack.org">OpenStack-dev@lists.openstack.org</a><br clear="none">> <a rel="nofollow" shape="rect" target="_blank" href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev</a><br clear="none">><br clear="none"><br clear="none"><br clear="none"></div> </div> </div> </div></div> </div></div></div><br><br></div> </div> </div> </div> </div></body></html>