We've had great luck with Intel dual-port 10GbE NICs and Arista 7050S
switches. We have the NICs configured as an 802.3ad bond, with each port
going to a different switch. I haven't noticed any performance issues. I
remember we ran some benchmarks last summer and were a little disappointed
with the transfer rates of various protocols, but then saw that we could
run multiple sessions at that same peak rate simultaneously, so we figured
it was a software limitation of some sort.

On Mon, Oct 21, 2013 at 3:19 PM, matthew zeier <mrz at lookout.com> wrote:

> Wondering what others have used for multi-homed OpenStack nodes on 10GbE.
> Linux bonding? Something else?
>
> In past lives I've encountered performance issues with 10GbE on Linux
> bonding and have used Myricom cards for active/failover.
>
> What are others using?
>
> --
> matthew zeier | Dir. Operations | Lookout | https://twitter.com/mrz

--
Joe Topjian
Systems Architect
Cybera Inc.

www.cybera.ca

Cybera is a not-for-profit organization that works to spur and support
innovation, for the economic benefit of Alberta, through the use of
cyberinfrastructure.
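
For reference, here is a minimal sketch of that kind of 802.3ad bond as it
might look in /etc/network/interfaces (assuming Ubuntu/Debian with the
ifenslave package; the interface names, address, and hash policy below are
placeholders, not necessarily the exact setup described above, and running
the LACP bond across two separate switches also requires MLAG or an
equivalent on the switch side):

    # Member ports -- no addresses of their own, just enslaved to bond0
    auto eth2
    iface eth2 inet manual
        bond-master bond0

    auto eth3
    iface eth3 inet manual
        bond-master bond0

    # The bond itself: LACP (802.3ad) with MII link monitoring
    auto bond0
    iface bond0 inet static
        address 10.0.0.10
        netmask 255.255.255.0
        bond-mode 802.3ad
        bond-miimon 100
        bond-lacp-rate fast
        bond-slaves eth2 eth3
        bond-xmit-hash-policy layer3+4

One thing worth noting about this mode: each flow is hashed onto a single
member link, so any one stream tops out at a single port's line rate, and
the aggregate bandwidth only shows up across multiple concurrent flows.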