We don't use Linux bonding at all; we use OVS bonding. Mixing Linux bridges and OVS bridges didn't work for us with dual 10GbE in the compute nodes, so we dropped all Linux bridges and our compute nodes now run 100% on top of OVS (management and service networks included). We mainly use Intel cards, though we have used other manufacturers in the past.

Cheers
Diego

--
Diego Parrilla
CEO | StackOps <http://www.stackops.com/>
www.stackops.com | diego.parrilla at stackops.com
US: +1 (512) 646-0068 | EU: +34 91 005-2164 | skype:diegoparrilla


On Mon, Oct 21, 2013 at 11:19 PM, matthew zeier <mrz at lookout.com> wrote:

> Wondering what others have used for multi-homed OpenStack nodes on 10GbE.
> Linux bonding? Something else?
>
> In past lives I've encountered performance issues with 10GbE on Linux
> bonding and have used Myricom cards for active/failover.
>
> What are others using?
>
> --
> matthew zeier | Dir. Operations | Lookout | https://twitter.com/mrz
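
For reference, a minimal sketch of the OVS bonding approach Diego describes, assuming the standard ovs-vsctl/ovs-appctl CLI. The bridge, bond, and NIC names (br-bond, bond0, eth0, eth1) are illustrative, and the LACP/balance-tcp options are one common choice rather than StackOps' exact configuration:

    # Create an OVS bridge and attach both 10GbE ports as a single bond
    # (balance-tcp spreads flows across both links; requires 802.3ad/LACP
    # on the top-of-rack switch).
    ovs-vsctl add-br br-bond
    ovs-vsctl add-bond br-bond bond0 eth0 eth1 \
        bond_mode=balance-tcp lacp=active other_config:lacp-time=fast

    # Alternatively, for simple active/failover with no switch-side LACP:
    # ovs-vsctl set port bond0 bond_mode=active-backup lacp=off

    # Check link and bond state.
    ovs-appctl bond/show bond0

balance-tcp with LACP uses both links when the switch supports 802.3ad, while active-backup gives plain failover (closer to the Myricom active/failover setup mentioned above) without any switch configuration.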