[Openstack] How to plan a transition to VXLAN tunnels

Gustavo Randich gustavo.randich at gmail.com
Fri Jul 10 19:52:42 UTC 2015


Thank you Leslie, Erik and Tom for your insight. Will take a look at the
alternatives!



On Fri, Jul 10, 2015 at 4:00 PM, Tom Walsh <
expresswebsys+openstack at gmail.com> wrote:

> We are currently building up our latest iteration of OpenStack to handle
> this exact setup. I was a bit confused as well about where the different
> parts fit together, but some good phone calls with the guys at Cumulus
> helped me understand how the pieces interact and what technologies are
> out there to tie everything together.
>
> So the basic concept at work here is that Neutron is really doing two
> separate jobs. The first is the actual networking of the L2
> infrastructure. The other is the L3+ services (routing, DHCP, FWaaS,
> LBaaS).
>
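> That split shows up directly in the Neutron config files. Roughly
> something like the sketch below; I'm going from what I've read, so treat
> the service plugin aliases as placeholders that depend on your release
> and on which service packages you actually install:
>
>     # /etc/neutron/neutron.conf (sketch, not a working config)
>     [DEFAULT]
>     # L2 is delegated to the ML2 core plugin
>     core_plugin = ml2
>     # L3+ features load as separate service plugins
>     service_plugins = router,firewall,lbaas
>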
> Currently L2 is being handled by OVS and GRE, but the GRE portion can be
> replaced with whatever technology you want to use (VLAN, GRE, VXLAN,
> etc.). When you integrate with something like Cumulus Networks, you push
> that functionality out to the switch: a given project/tenant in OpenStack
> gets a specific VLAN, and that VLAN is used between the compute nodes and
> the leaf (ToR) switch. Beyond that level, VXLAN carries traffic between
> the leaf and spine switches to keep each tenant's traffic isolated. So
> basically VLAN 5 is tenant 5; if that tenant has instances in multiple
> zones and the connectivity is intra-zone, they just use the VLAN to talk
> to each instance. If the traffic has to reach another zone, it is pushed
> up into VXLAN and routed to the correct zone seamlessly. (At least that
> is how I understood it.)
>
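> On the OpenStack side, the knob for that is the ML2 tenant network type.
> Something like the sketch below is what I have in mind; I haven't
> deployed it yet, so the physnet name and the VLAN/VNI ranges are just
> placeholders:
>
>     # /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
>     [ml2]
>     type_drivers = flat,vlan,vxlan
>     # swap this between vlan, gre and vxlan to change what tenants get
>     tenant_network_types = vlan
>     mechanism_drivers = openvswitch
>
>     [ml2_type_vlan]
>     # VLAN range handed out per tenant and trunked up to the ToR
>     network_vlan_ranges = physnet1:100:2000
>
>     [ml2_type_vxlan]
>     # only relevant if Neutron terminates VXLAN itself rather than the switches
>     vni_ranges = 10000:20000
>
>     [ovs]
>     bridge_mappings = physnet1:br-vlan
>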
> L3 and up is another layer of Neutron. In order to avoid the SPoF and
> overloading you mentioned, we are looking into http://Akanda.io (the
> Cumulus link Erik provided was done with one of the Akanda guys).
> Basically what Akanda does is use the compute infrastructure to scale the
> L3 functionality of Neutron. Using hooks into the L3 portion of Neutron,
> it spins up a VM that acts as a router for a given tenant, and it
> monitors that VM and makes sure it stays up and running, outside of the
> tenant's own VMs. This lets your networking capacity scale with your
> compute: as you add more compute resources, you also add network
> capacity. It is another DreamHost project that has been spun out and is
> following the same "enterprise services" model that Ceph/Inktank took. I
> haven't really started using it yet, as we are still in the process of
> purchasing the gear to build things out (QuantaMesh LY9 with Cumulus
> Networks, etc.), but from what I understand of the Akanda project, it
> ticks most of the boxes for mitigating the problems you are talking
> about. About the only problem I see with it is that it is missing a
> portion of the functionality Neutron provides (specifically FWaaS, since
> that API was in a state of flux and they are waiting for it to stabilize).
>
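> As I understand it, the nice part is that the tenant-facing workflow
> doesn't change, because Akanda sits behind the same Neutron L3 API.
> Whether the backend is the stock l3-agent on a network node or an Akanda
> router VM, a router still gets created and wired the usual way (the names
> below are just examples):
>
>     # standard Neutron L3 workflow; the backend is transparent to the tenant
>     neutron router-create tenant5-router
>     neutron router-interface-add tenant5-router tenant5-subnet
>     neutron router-gateway-set tenant5-router ext-net
>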
> Let me know if you have any other questions or need further clarification
> on anything, and also let me know if I made any mistakes in my
> explanations. Right now a lot of my knowledge is strictly theoretical, as
> I haven't actually implemented anything yet, but this is the direction we
> are heading.
>
> Tom Walsh
> https://expresshosting.net/
>
> On Wed, Jul 8, 2015 at 12:35 PM, Gustavo Randich <
> gustavo.randich at gmail.com> wrote:
>
>> Hi,
>>
>> We are trying to figure out how to transition from a network model using
>> nova-network and a single VLAN to a model using Neutron and multiple VXLAN
>> tunnels.
>>
>> The main issue is not how to configure and set up the tunnels, but how to
>> expose the new virtual machines born inside the tunnels to our "legacy",
>> non-VXLAN, non-OpenStack networks, e.g. DNS servers, databases, hardware
>> load balancers, monitoring/metrics servers, etc.
>>
>> We are trying to avoid assigning a floating IP to every instance, and
>> also avoid the SPoF and bottlenecks of Network Nodes.
>>
>> What are the alternatives? (Hardware switches supporting VXLAN and acting
>> as gateways?)
>>
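>> For concreteness, is a plain shared provider network, which the legacy
>> systems could reach directly without floating IPs, the kind of
>> alternative people use here? A rough sketch of what I mean (the names,
>> VLAN ID and addresses are placeholders):
>>
>>     # shared VLAN provider network reachable from the legacy side
>>     neutron net-create shared-legacy-net --shared \
>>       --provider:network_type vlan \
>>       --provider:physical_network physnet1 \
>>       --provider:segmentation_id 5
>>     neutron subnet-create shared-legacy-net 10.0.5.0/24 \
>>       --name shared-legacy-subnet --gateway 10.0.5.1
>>
>> If I understand correctly, instances attached to such a network would sit
>> on the same segment as the legacy gear, with no Network Node in the data
>> path. Or is a VXLAN-capable hardware gateway the better option?
>>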
>> Thanks in advance.
>>
>>