[openstack-dev] [quantum] BP ovs partial mesh

Édouard Thuleau edouard.thuleau at gmail.com
Mon Mar 18 10:38:09 UTC 2013


Hi,

I made a PoC of the proposed proxy ARP solution.
I built it on the Folsom release with the OVS+GRE driver.
This PoC removes broadcast and multicast traffic at L2. Only DHCP is
authorized; ARP traffic is MAC-DNATed to the ARP proxy.

To do that, I add these OpenFlow rules to the egress traffic of every port (in
priority order):
- Authorize BOOTP broadcast packets
- MAC DNAT to the proxy ARP MAC for all ARP broadcast traffic
- Drop all other multicast and broadcast traffic
- Drop all remaining traffic that does not have the proxy ARP MAC address as
destination
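
Under hypothetical values (VM on OVS port 5, proxy ARP MAC
fa:16:3e:00:00:01), the four rules could look like the following ovs-ofctl
flow entries; note the drop of non-proxy unicast needs a matching allow
entry for the proxy MAC just above it:

```
# 1. Authorize BOOTP broadcast (DHCP client -> server, UDP 68 -> 67)
priority=400,in_port=5,udp,tp_src=68,tp_dst=67,actions=normal
# 2. MAC DNAT of ARP broadcasts to the proxy ARP MAC
priority=300,in_port=5,arp,dl_dst=ff:ff:ff:ff:ff:ff,actions=mod_dl_dst:fa:16:3e:00:00:01,normal
# 3. Drop all remaining multicast/broadcast (multicast bit set in dl_dst)
priority=200,in_port=5,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00,actions=drop
# 4. Allow unicast addressed to the proxy ARP MAC, drop everything else
priority=150,in_port=5,dl_dst=fa:16:3e:00:00:01,actions=normal
priority=100,in_port=5,actions=drop
```

Each entry would be installed with something like
`ovs-ofctl add-flow br-int "<entry>"` on the integration bridge.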

I'm working on a blueprint for this.
I think it could be optional per L2 network; it would be interesting to
enable it to isolate ports on a public network, for example.
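
For the ARP proxy itself, a minimal sketch of the agent side could look like
the following (all names are hypothetical; a real agent would answer actual
ARP frames on a raw socket and take its table updates from the
Quantum-server over RPC):

```python
# Hypothetical sketch of a proxy-ARP agent table. The agent always replies
# with its own MAC so that all VM traffic is funnelled through the proxy,
# matching the OpenFlow rules above.

class ProxyArpAgent:
    def __init__(self, proxy_mac):
        self.proxy_mac = proxy_mac   # MAC advertised in every ARP reply
        self.ip_to_mac = {}          # authoritative IP->MAC table from Quantum

    def port_created(self, ip, mac):
        """Called when the Quantum-server notifies the agent of a new port."""
        self.ip_to_mac[ip] = mac

    def port_deleted(self, ip):
        """Called when the Quantum-server removes a port."""
        self.ip_to_mac.pop(ip, None)

    def answer(self, requested_ip):
        """Build the ARP reply for a request: answer with the proxy's own
        MAC for any IP Quantum knows about, stay silent otherwise."""
        if requested_ip not in self.ip_to_mac:
            return None
        return {"ip": requested_ip, "mac": self.proxy_mac}
```

For example, after `port_created("10.0.0.5", "fa:16:3e:aa:bb:cc")`, a
request for 10.0.0.5 is answered with the proxy's MAC rather than the VM's.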

Édouard.

On Mon, Mar 18, 2013 at 10:53 AM, Rohon Mathieu <mathieu.rohon at gmail.com> wrote:

> hi all,
>
> I don't know if removing L2 communication is a good idea, since it's
> a potential regression; some applications may need it.
>
> But I agree that MAC learning could be prevented, since
> OpenStack/Quantum already knows each VM's MAC/IP and placement.
> MAC tables/flow tables could be populated by a controller; you could
> adapt an OpenFlow controller like Ryu, Floodlight or another one to do
> so, but this functionality should be usable directly within the OVS and
> LB plugins.
> Instead of populating MAC tables, we could also use proxy ARP, via a
> proxy-arp agent whose table would be populated by the Quantum-server.
> We could use the Quantum scheduler to run multiple proxy-arp agents.
>
> On Fri, Mar 15, 2013 at 11:54 PM, Tomasz Paszkowski <ss7pro at gmail.com>
> wrote:
> > Combining a non-broadcast network solution with LISP would also
> > eliminate the need to build a full virtual mesh between compute hosts.
> >
> > On Fri, Mar 15, 2013 at 11:50 PM, Tomasz Paszkowski <ss7pro at gmail.com>
> wrote:
> >> I'm thinking about completely removing broadcast traffic from GRE
> >> networks and ignoring L2 addressing. It could be achieved by introducing
> >> an OpenFlow controller which makes all the forwarding decisions
> >> (directing traffic to the appropriate tunnels and interfaces on compute
> >> hosts). For IP/DHCP/ARP traffic this is very easy, as we already know
> >> the IP addresses of all VMs, dhcp-agents and l3-agents. The question is
> >> which OpenFlow controller implementation to take, and whether we would
> >> like to build a central controller or a distributed one.
> >>
> >> What do you think about this ?
> >>
> >>
> >> On Thu, Mar 14, 2013 at 9:59 AM, Rohon Mathieu <mathieu.rohon at gmail.com>
> wrote:
> >>> hi all,
> >>>
> >>> I just wanted to share a BP to limit broadcasting in every
> >>> tunnel while using OVS and GRE. This could also be used for VXLAN
> >>> tunneling.
> >>>
> >>> https://blueprints.launchpad.net/quantum/+spec/ovs-tunnel-partial-mesh
> >>>
> >>> The specification shows a call flow for port creation.
> >>>
> >>> Does anyone see something wrong in my architecture?
> >>>
> >>> _______________________________________________
> >>> OpenStack-dev mailing list
> >>> OpenStack-dev at lists.openstack.org
> >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >>
> >>
> >> --
> >> Tomasz Paszkowski
> >> SS7, Asterisk, SAN, Datacenter, Cloud Computing
> >> +48500166299
> >
> >
> >
> >
>
