<div dir="ltr">On Thu, Apr 11, 2013 at 4:57 PM, Eleouet Francois <span dir="ltr"><<a href="mailto:f.eleouet@gmail.com" target="_blank">f.eleouet@gmail.com</a>> </span> <div class="gmail_extra"><div class="gmail_quote">
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div class="im"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div>Yes, I have already seen this :-) I'm already working on this feature for the linuxbridge plugin :-) I'm still wondering what the behavior should be if we detect that the feature is not supported by the kernel but</div>
<div>the agent is configured to use it. I don't know if I should just raise an exception and exit the agent, or just produce an error message. Do you have any thoughts on that?</div><div><div> </div></div></div></div>
</div></blockquote></div><div>It's a good question, and it doesn't only concern the proxy-arp feature but also VXLAN support... I suppose the agent could exit, as it would be an incoherent configuration, but the plugin should also check whether VXLAN is enabled globally, in order to prevent provider network allocation if VXLAN is not supported.</div>
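As a sketch of the startup check discussed above (all names here are illustrative, not the actual linuxbridge agent code), the agent could probe the kernel for VXLAN support and exit on an incoherent configuration:

```python
import logging
import subprocess
import sys

LOG = logging.getLogger(__name__)


def kernel_supports_vxlan():
    """Probe VXLAN support by creating and deleting a throwaway device.

    This probe is only an illustration; a real agent might instead
    inspect kernel/iproute2 versions or module availability.
    """
    probe = "vxlan-probe0"
    if subprocess.call(["ip", "link", "add", probe,
                        "type", "vxlan", "id", "1"]) != 0:
        return False
    subprocess.call(["ip", "link", "delete", probe])
    return True


def must_exit(vxlan_enabled, vxlan_supported):
    # Incoherent configuration: VXLAN requested but unavailable.
    return vxlan_enabled and not vxlan_supported


def check_vxlan_config(vxlan_enabled):
    if must_exit(vxlan_enabled, kernel_supports_vxlan()):
        LOG.error("VXLAN is enabled in the agent configuration but is "
                  "not supported by the running kernel")
        sys.exit(1)
```

Separating the decision (`must_exit`) from the probe keeps the exit-vs-warn policy easy to change in one place.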
</div></div></div></blockquote><div><br></div><div style>So we can't prevent provider network allocation in the current model, as only the agent knows whether VXLAN is supported.</div><div style><br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div class="im"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><div><br></div></div><div>As I wrote some time ago on this ML, I believe that for most environments we could go with non-broadcast, L3-switching-based Ethernet networks and just forward all DHCP and ARP packets to the appropriate nodes (when running with the openvswitch module). But as you mentioned, it's a very difficult task to distribute the appropriate data to the agents and update the flow tables. Do you already have some ideas you can share?</div>
<div>
<div><br></div></div></div></div></div></blockquote></div><div>For now, three different alternatives come to mind:</div><div>- A centralized approach, where the plugin distributes [mac, IP, VNID, agent_ip] tuples to the agents (RPC distribution of these tuples could be triggered by update_device_up/down). In this case the plugin would need an additional DB to track port-to-agent mappings, as well as agent_ips.</div>
<div>- A distributed one, where an agent fanout_casts [mac, IP, VNID, agent_ip] to the other agents when a port becomes up. Agents having ports on the same networks should answer with the properties of the ports they handle. To achieve this, agents should maintain a list of their ports.</div>
<div>- We could also consider using an external control plane: BGP may land in quantum for VPNaaS, and it could also be a good candidate to propagate VXLAN neighbor information (see <a href="http://tools.ietf.org/html/draft-boutros-l2vpn-vxlan-evpn-01" target="_blank">http://tools.ietf.org/html/draft-boutros-l2vpn-vxlan-evpn-01</a>)</div>
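The second (distributed) alternative could be sketched with a minimal in-process model; the class and method names below are invented for illustration and do not correspond to the actual quantum RPC API:

```python
# Toy model of the distributed alternative: when a port goes up, its
# agent fanout-casts a (mac, ip, vnid, agent_ip) tuple; agents holding
# ports on the same VNID record it and answer with their own entries.
from collections import namedtuple

PortEntry = namedtuple("PortEntry", "mac ip vnid agent_ip")


class Agent:
    def __init__(self, agent_ip, peers):
        self.agent_ip = agent_ip
        self.peers = peers          # stands in for the RPC fanout topic
        self.ports = []             # the list of local ports agents maintain
        self.fdb = {}               # (vnid, mac) -> (ip, agent_ip)

    def port_up(self, mac, ip, vnid):
        entry = PortEntry(mac, ip, vnid, self.agent_ip)
        self.ports.append(entry)
        for peer in self.peers:     # "fanout_cast" to every other agent
            if peer is not self:
                peer.receive(entry, reply_to=self)

    def receive(self, entry, reply_to=None):
        # Only agents with ports on the same network care about the entry.
        if any(p.vnid == entry.vnid for p in self.ports):
            self.fdb[(entry.vnid, entry.mac)] = (entry.ip, entry.agent_ip)
            if reply_to is not None:
                for p in self.ports:
                    if p.vnid == entry.vnid:
                        reply_to.receive(p)
```

A real implementation would use message-bus fanout topics instead of direct method calls, but the learn-then-reply flow is the same.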
</div></div></div></blockquote><div><br></div><div style>For me, a BGP-based control plane seems to be the best one :-) Thanks for pointing this out.</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div class="im"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<div dir="ltr">
</div></blockquote></div><div>I believe it's better to stay with mapping VNIs to physical_networks, as this allows binding a VNI range to a specific interface (this is not possible within the OVS plugin). The only thing I need to implement is a simple check to avoid overlapping VNIs between different physical networks (this will be done during quantum-server startup). </div>
<span><font color="#888888">
</font></span></div><span><font color="#888888"><div><br></div></font></span></div></div></blockquote></div><div>Yes, but an issue will remain with provider networks: if several <span style="color:rgb(80,0,80)">physicalNetworks</span> are declared, the same VNI could still be reserved on different <span style="color:rgb(80,0,80)">physicalNetworks...</span></div>
</div></div></div></blockquote><div><br></div><div style>Good point. What about modifying db.reserve_specific_network to remove the physical_network filter when it is called to reserve a VXLAN network? I would like to avoid duplicating DB logic for VXLAN support.</div>
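As a rough sketch of that idea (plain Python standing in for the SQLAlchemy query, with an invented allocation-row structure), dropping the physical_network filter for VXLAN makes a VNI effectively unique across all physical networks:

```python
# Hypothetical sketch of reserve_specific_network: VLAN reservations keep
# per-physical-network scoping, while for VXLAN the physical_network
# filter is removed so the same VNI can't be reserved twice anywhere.

def reserve_specific_network(allocations, network_type, segmentation_id,
                             physical_network=None):
    """allocations: list of dicts with keys 'network_type',
    'segmentation_id', 'physical_network', 'allocated'."""
    for alloc in allocations:
        if alloc["network_type"] != network_type:
            continue
        if alloc["segmentation_id"] != segmentation_id:
            continue
        # Only VLANs are scoped to a physical network; for VXLAN any
        # row carrying this VNI matches, whatever its physical_network.
        if (network_type == "vlan"
                and alloc["physical_network"] != physical_network):
            continue
        if alloc["allocated"]:
            raise ValueError("segment already allocated")
        alloc["allocated"] = True
        return alloc
    raise ValueError("no such segment")
```

With this shape, reserving VNI 100 on one physical network makes a later reservation of VNI 100 on another physical network fail, which is the overlap the thread is worried about.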
<div style><br></div></div>-- <br>Tomasz Paszkowski<br>SS7, Asterisk, SAN, Datacenter, Cloud Computing<br>+48500166299
</div></div>