[openstack-dev] [Quantum] VXLAN support for linuxbridge
ss7pro at gmail.com
Thu Apr 11 15:49:41 UTC 2013
On Thu, Apr 11, 2013 at 4:57 PM, Eleouet Francois <f.eleouet at gmail.com> wrote:
>> Yes, I have already seen this :-) I'm already working on this feature for
>> the linuxbridge plugin :-) I'm still wondering what the behavior should be
>> if we detect that the feature is not supported by the kernel but the
>> agent is configured to use it. I don't know if I should just raise an
>> exception and exit the agent, or just produce an error message. Do you
>> have any thoughts on that?
> It's a good question, which doesn't only concern the proxy-arp feature but
> also VXLAN support... I suppose the agent could exit, as it would be an
> incoherent configuration, but the plugin should also check whether VXLAN is
> enabled globally, in order to prevent provider network allocation if VXLAN
> is not supported.
So we can't check provider network allocation in the current model, as only
the agent knows whether VXLAN is supported.
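To make the exit-vs-log question above concrete, here is a minimal sketch of the exit-on-misconfiguration behavior for the agent. All names here (check_vxlan_config, VxlanNotSupportedError) are hypothetical illustrations, not existing linuxbridge agent code:

```python
import logging

LOG = logging.getLogger(__name__)


class VxlanNotSupportedError(SystemExit):
    """Raised when the agent is configured for VXLAN but the kernel lacks it."""


def check_vxlan_config(kernel_supports_vxlan, agent_wants_vxlan):
    # Hypothetical helper: if the operator enabled VXLAN in the agent config
    # but the running kernel cannot provide it, the configuration is
    # incoherent, so log an error and exit rather than run degraded.
    if agent_wants_vxlan and not kernel_supports_vxlan:
        LOG.error("VXLAN is enabled in the agent configuration but the "
                  "running kernel does not support it")
        raise VxlanNotSupportedError(1)
    return agent_wants_vxlan
```

In practice the kernel capability itself could be probed by attempting to create and immediately delete a throwaway `ip link ... type vxlan` device at startup.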
>> As I wrote some time ago on this ML, I believe that for most environments
>> we could go with non-broadcast, L3-switching-based addressing in Ethernet
>> networks and just forward all DHCP and ARP packets to the appropriate
>> nodes (when running with the openvswitch module). But as you mentioned,
>> it's a very difficult task to distribute the appropriate data to the
>> agents and update the flow tables. Do you already have some ideas you can
>> share?
> For now, three different alternatives come to mind:
> - A centralized approach, where the plugin distributes [mac, IP, VNID,
> agent_ip] tuples to the agents (RPC distribution of these tuples could be
> triggered by update_device_up/down). In this case the plugin would need an
> additional DB to track port-to-agent mappings, as well as agent IPs.
> - A distributed one, where an agent fanout_casts [mac, IP, VNID, agent_ip]
> to the other agents when a port becomes up. Agents having ports on the
> same networks should answer with the properties of the ports they handle.
> To achieve this, agents should maintain a list of their ports.
> - We could also think of using an external control plane: BGP may land in
> quantum for VPNaaS, and it could also be a good candidate to propagate
> VXLAN neighbor information (see
For me, the BGP-based control plane seems to be the best one :-) Thanks for
pointing this out.
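Just to illustrate the distributed alternative described above with a toy in-process simulation (FakeAgent, fanout_cast and PortTuple are hypothetical stand-ins, not actual quantum RPC code):

```python
from collections import namedtuple

# The tuple the agents would exchange for each port.
PortTuple = namedtuple('PortTuple', ['mac', 'ip', 'vni', 'agent_ip'])


class FakeAgent:
    """Toy model of an L2 agent tracking the ports it handles."""

    def __init__(self, agent_ip):
        self.agent_ip = agent_ip
        self.ports = []  # tuples for ports hosted on this agent

    def port_up(self, mac, ip, vni):
        # When a port becomes up, record it; the resulting tuple is what
        # would be fanout-cast to the other agents.
        entry = PortTuple(mac, ip, vni, self.agent_ip)
        self.ports.append(entry)
        return entry

    def handle_fanout(self, entry):
        # Peers answer with the tuples of their own ports on the same VNI.
        return [p for p in self.ports if p.vni == entry.vni]


def fanout_cast(sender, peers, entry):
    # Simulated fanout: deliver to every agent except the sender and
    # collect the replies, as the real RPC fanout_cast would.
    replies = []
    for peer in peers:
        if peer is not sender:
            replies.extend(peer.handle_fanout(entry))
    return replies
```

The point of the sketch is the data flow: the sender learns its VXLAN neighbors from the replies, at the cost of each agent maintaining its own port list.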
>> I believe it's better to stay with mapping VNIs to physical_networks, as
>> this allows binding a VNI range to a specific interface (this is not
>> possible within the OVS plugin). The only thing I need to implement is a
>> simple check to avoid overlapping VNIs between different physical
>> networks (this will be done during quantum-server startup).
> Yes, but there will remain an issue with provider networks: if several
> physical_networks are declared, the same VNI could still be reserved on
> different physical_networks...
Good point. What about modifying db.reserve_specific_network to remove the
filter on physical_network when called to reserve a VXLAN network? I would
like to avoid duplicating the DB logic for VXLAN support.
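For the startup-time check mentioned above, a sketch of detecting overlapping VNIs across physical networks could look like this (find_vni_overlaps is a hypothetical helper, not part of the existing db module):

```python
def find_vni_overlaps(ranges_by_physnet):
    """Detect VNIs claimed by more than one physical network.

    ranges_by_physnet maps physical_network name to a list of inclusive
    (vni_min, vni_max) ranges, as parsed from the plugin configuration.
    Returns a list of (vni, first_owner, second_owner) conflicts.
    """
    seen = {}      # vni -> physical_network that first claimed it
    overlaps = []
    for physnet, ranges in ranges_by_physnet.items():
        for lo, hi in ranges:
            for vni in range(lo, hi + 1):
                owner = seen.get(vni)
                if owner is not None and owner != physnet:
                    overlaps.append((vni, owner, physnet))
                else:
                    seen[vni] = physnet
    return overlaps
```

Running this once at quantum-server startup and refusing to start on a non-empty result would catch the incoherent configuration before any provider network is allocated. (A real implementation would want interval arithmetic rather than enumerating every VNI, since the 24-bit VNI space is large.)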
SS7, Asterisk, SAN, Datacenter, Cloud Computing