We're using this scheme a lot: bonding mode 2 with dynamic link aggregation and an alternative path in case of failure. It works like a charm; so far, no performance issues on Intel cards.

On Mon, Oct 21, 2013 at 6:19 PM, matthew zeier <mrz at lookout.com> wrote:

> Wondering what others have used for multi-homed OpenStack nodes on 10GbE.
> Linux bonding? Something else?
>
> In past lives I've encountered performance issues with 10GbE on Linux
> bonding and have used Myricom cards for active/failover.
>
> What are others using?
>
> --
> matthew zeier | Dir. Operations | Lookout | https://twitter.com/mrz
>
> _______________________________________________
> Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
> Post to     : openstack at lists.openstack.org
> Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
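
For reference, a minimal sketch of the bonded-interface setup described at the top of the thread, in Debian/Ubuntu /etc/network/interfaces style. One caveat: in the Linux bonding driver, dynamic link aggregation (LACP, 802.3ad) is mode 4, while mode 2 is balance-xor; the sketch below assumes the LACP interpretation. Interface names, addresses, and option values are placeholders, not taken from the thread.

```
# /etc/network/interfaces sketch -- eth0/eth1 and the addresses are
# placeholder assumptions; adjust to the actual NICs and subnet.
auto bond0
iface bond0 inet static
    address 10.0.0.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad      # dynamic link aggregation (LACP); requires switch support
    bond-miimon 100        # link-state monitoring interval in ms, drives failover
    bond-lacp-rate fast    # request LACPDUs every second instead of every 30 s
```

With 802.3ad, the switch ports must be configured as an LACP channel group; if the switch side is not under your control, active-backup (mode 1) gives failover without any switch cooperation.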