On Fri, 2019-03-01 at 01:59 +0000, Manuel Sopena Ballesteros wrote:
Ok,
My NIC is a Mellanox ConnectX-4 Lx, so I was thinking this: create a bond at the PF level, use the bond interface as br-ext, and do OVS offload.
That would mean I don't need to change my switch configuration and can keep my LACP.
Does it sound feasible?

I think Mellanox would be able to comment better, but I don't think that will work. For hardware-offloaded OVS, the VM data path is carried by an SR-IOV VF, with a fallback to OVS via a representor netdev.
The SR-IOV VF that is attached to the VM will be allocated from one of the PFs in the bond; if that PF loses network connectivity, I don't think the bond will have any effect. Mellanox would have had to specifically design their silicon and drivers to support this, and if your intent was to bond PFs from different physical NIC cards, that would require even more support. Some NICs pool VFs between all PFs on a single card, but even the ones that support that generally won't allow a VF to float between two different physical cards. I have not looked at the ConnectX-4 cards closely enough to say what Mellanox supports in this regard, but I would be rather surprised if bonding two PFs and adding them to an OVS bridge were sufficient to achieve your goal.
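For concreteness, here is a rough sketch of the setup being proposed (PF-level bond used as br-ext, with OVS hardware offload enabled). The interface names and PCI addresses are hypothetical, and per the caveats above it is unclear whether offloaded VF traffic would actually fail over with the bond.

# Sketch only: bond two PFs, use the bond as br-ext, enable OVS hw-offload.
# PF netdev names and PCI addresses below are hypothetical.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

PFS = ["enp3s0f0", "enp3s0f1"]

# Create VFs and switch each PF e-switch to switchdev mode (needed for offload).
for pf, pci in zip(PFS, ["0000:03:00.0", "0000:03:00.1"]):
    run("sh", "-c", f"echo 2 > /sys/class/net/{pf}/device/sriov_numvfs")
    run("devlink", "dev", "eswitch", "set", f"pci/{pci}", "mode", "switchdev")

# LACP bond at the PF level.
run("ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad")
for pf in PFS:
    run("ip", "link", "set", pf, "down")
    run("ip", "link", "set", pf, "master", "bond0")
run("ip", "link", "set", "bond0", "up")

# Use the bond as br-ext and turn on hardware offload in OVS.
run("ovs-vsctl", "--may-exist", "add-br", "br-ext")
run("ovs-vsctl", "--may-exist", "add-port", "br-ext", "bond0")
run("ovs-vsctl", "set", "Open_vSwitch", ".", "other_config:hw-offload=true")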
Second question:
OVS offload needs Linux kernel >= 4.13. Can I use CentOS Linux release 7.5.1804 (Core) with kernel 3.10.0-862.el7.x86_64?
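(As a quick sanity check, a minimal Python sketch comparing the running kernel release against the 4.13 requirement; it only inspects the version string reported by the kernel, nothing else.)

import platform

def kernel_at_least(required=(4, 13)):
    # platform.release() returns e.g. '3.10.0-862.el7.x86_64'
    major, minor = (int(x) for x in platform.release().split(".")[:2])
    return (major, minor) >= required

print(platform.release(), "meets >= 4.13:", kernel_at_least())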
Thank you
Manuel
-----Original Message----- From: Sean Mooney [mailto:smooney@redhat.com] Sent: Wednesday, February 27, 2019 9:36 PM To: Bence Romsics; openstack@lists.openstack.org Subject: Re: sriov bonding
On Wed, 2019-02-27 at 08:38 +0100, Bence Romsics wrote:
Hi,
On Wed, Feb 27, 2019 at 8:00 AM Manuel Sopena Ballesteros <manuel.sb@garvan.org.au> wrote:
Is there a documentation that explains how to setup bonding on SR-IOV neutron?
Not right now to my knowledge, but I remember seeing effort to design and introduce this feature. I think there may have been multiple rounds of design already; this is maybe the latest one that's still ongoing:
Neutron side: https://bugs.launchpad.net/neutron/+bug/1809037
Nova side: https://blueprints.launchpad.net/nova/+spec/schedule-vm-nics-to-different-pf https://blueprints.launchpad.net/nova/+spec/sriov-bond
Most of the previous attempts have not proceeded because they tried to hide the bonding from Nova's and Neutron's data models via configs or opaque strings.
For bonding to really be supported at the OpenStack level, we will need to take an approach similar to trunk ports: we create a logical bond port and a set of bond peer ports at the Neutron API level, then attach the bond port to the VM.
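As an analogy, here is a minimal openstacksdk sketch of how trunk ports are modelled today, with a parent port plus sub-ports; a logical bond port could follow the same parent/peer pattern. The cloud, network, and port names are made up, and no bond-port API exists in Neutron today.

# Sketch of today's trunk-port pattern, as an analogy for a future bond port.
import openstack

conn = openstack.connect(cloud="mycloud")        # hypothetical clouds.yaml entry
net = conn.network.find_network("tenant-net")    # hypothetical network name

# Parent port: this is what gets attached to the VM.
parent = conn.network.create_port(network_id=net.id, name="trunk-parent")
child = conn.network.create_port(network_id=net.id, name="trunk-subport")

# Sub-ports ride on the parent; a bond port could similarly own its peer ports.
trunk = conn.network.create_trunk(
    port_id=parent.id,
    sub_ports=[{
        "port_id": child.id,
        "segmentation_type": "vlan",
        "segmentation_id": 100,
    }],
)
print("trunk", trunk.id, "parent port", parent.id)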
Currently I'm not aware of any proposal that really has traction.
You can manually create a bond in the guest, but you cannot today guarantee that the VFs will come from different PFs.
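For example, a minimal sketch of building an active-backup bond inside the guest over two VF-backed interfaces (the interface names are hypothetical, and there is no guarantee today that the two VFs sit on different PFs):

import subprocess

def run(*cmd):
    subprocess.run(cmd, check=True)

# Hypothetical VF-backed interface names inside the guest.
VF_IFACES = ["ens4", "ens5"]

run("ip", "link", "add", "bond0", "type", "bond", "mode", "active-backup")
for iface in VF_IFACES:
    run("ip", "link", "set", iface, "down")    # slaves must be down to enslave
    run("ip", "link", "set", iface, "master", "bond0")
run("ip", "link", "set", "bond0", "up")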
One of the reasons I think we need to go in the logical bond port direction is that it will allow us to construct resource requests using the functionality being added for bandwidth-based scheduling. That will make expressing affinity and anti-affinity simpler using the request groups syntax. I personally have stopped trying to add bond support until that bandwidth-based scheduling effort is finished.
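To illustrate the direction, here is what the granular request-group syntax already allows in flavor extra specs: two separately numbered groups with group_policy=isolate must be satisfied by different resource providers (i.e. different PFs). Whether bond peer ports would be mapped onto this exact syntax is an assumption on my part; the bandwidth resource class comes from the minimum-bandwidth scheduling work.

# Illustrative flavor extra specs using Nova's granular request groups.
extra_specs = {
    # Two separate groups, each asking for guaranteed egress bandwidth.
    "resources1:NET_BW_EGR_KILOBIT_PER_SEC": "1000000",
    "resources2:NET_BW_EGR_KILOBIT_PER_SEC": "1000000",
    # isolate => the two groups may not come from the same resource provider.
    "group_policy": "isolate",
}

# These could be applied to a flavor with e.g.:
#   openstack flavor set --property resources1:NET_BW_EGR_KILOBIT_PER_SEC=1000000 \
#       --property resources2:NET_BW_EGR_KILOBIT_PER_SEC=1000000 \
#       --property group_policy=isolate my-flavor
for key, value in extra_specs.items():
    print(f"{key}={value}")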
Hope that helps, Bence