sriov bonding

Moshe Levi moshele at mellanox.com
Sun Mar 3 09:05:08 UTC 2019


PSB (please see below)

-----Original Message-----
From: Sean Mooney <smooney at redhat.com> 
Sent: Friday, March 1, 2019 3:21 PM
To: Manuel Sopena Ballesteros <manuel.sb at garvan.org.au>; Bence Romsics <bence.romsics at gmail.com>; openstack at lists.openstack.org
Subject: Re: sriov bonding

On Fri, 2019-03-01 at 01:59 +0000, Manuel Sopena Ballesteros wrote:
> Ok,
> 
> My NIC is a Mellanox ConnectX-4 Lx, so I was thinking this:
> Create bonding at the PF level
> Use the bond interface as br-ext
> Do OVS offload
> 
> That would mean I don't need to change my switch configuration and can keep my LACP.
> 
> Does it sound feasible?
I think Mellanox would be able to comment better, but I don't think that will work: for hardware-offloaded OVS the VM data path is carried by an SR-IOV VF, with a fallback to OVS via a representor netdev.

[ML] - Mellanox introduced the VF LAG feature, which does the bonding in the e-switch when you create a Linux bond at the PF level. This feature will only be upstream in RHEL 7.7, but I think you can also get it if you install the latest Mellanox OFED: http://www.mellanox.com/page/products_dyn?product_family=26
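
A rough sketch of what that host-side setup could look like with VF LAG (the interface names ens1f0/ens1f1, the PCI addresses and the exact ordering are assumptions, and the supported workflow depends on your kernel/OFED version; it also assumes the VFs have already been created and unbound from their driver):

    # put both PFs into switchdev mode so OVS can offload flows to the e-switch
    devlink dev eswitch set pci/0000:03:00.0 mode switchdev
    devlink dev eswitch set pci/0000:03:00.1 mode switchdev

    # create an LACP (802.3ad) bond over the two PFs
    ip link add bond0 type bond mode 802.3ad
    ip link set ens1f0 down
    ip link set ens1f0 master bond0
    ip link set ens1f1 down
    ip link set ens1f1 master bond0
    ip link set bond0 up

    # enable OVS hardware offload and plug the bond into the external bridge
    ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
    ovs-vsctl --may-exist add-br br-ext
    ovs-vsctl add-port br-ext bond0
    systemctl restart openvswitch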


The SR-IOV VF that is attached to the VM will be allocated from one of the PFs in the bond; if that PF loses network connectivity, I don't think the bond will have any effect.

Mellanox would have had to specifically develop their silicon and drivers to support this, and if your intent was to bond PFs from different physical NIC cards, that would require even more support.

Some NICs pool VFs between all PFs on a single card, but even the ones that support that generally won't allow a VF to float between two different physical cards.

I have not looked at the ConnectX-4 cards to say what Mellanox supports in this regard, but I would be rather surprised if bonding 2 PFs and adding them to an OVS bridge would be sufficient to achieve your goal.
> 
> Second question:
> 
> OVS offload needs Linux kernel >= 4.13; can I use CentOS Linux release
> 7.5.1804 (Core) with kernel 3.10.0-862.el7.x86_64?
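
As a quick sanity check on a given host (the interface name ens1f0 below is just an example), you can look at the running kernel and whether the driver exposes the tc offload knob that OVS hardware offload relies on:

    # running kernel version
    uname -r

    # does the NIC driver expose tc hardware offload on this kernel?
    ethtool -k ens1f0 | grep hw-tc-offload

    # is OVS configured for hardware offload? (errors if the key was never set)
    ovs-vsctl get Open_vSwitch . other_config:hw-offload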
> 
> Thank you
> 
> Manuel
> 
> -----Original Message-----
> From: Sean Mooney [mailto:smooney at redhat.com]
> Sent: Wednesday, February 27, 2019 9:36 PM
> To: Bence Romsics; openstack at lists.openstack.org
> Subject: Re: sriov bonding
> 
> On Wed, 2019-02-27 at 08:38 +0100, Bence Romsics wrote:
> > Hi,
> > 
> > On Wed, Feb 27, 2019 at 8:00 AM Manuel Sopena Ballesteros 
> > <manuel.sb at garvan.org.au> wrote:
> > > Is there documentation that explains how to set up bonding with SR-IOV in Neutron?
> > 
> > Not right now to my knowledge, but I remember seeing an effort to
> > design and introduce this feature. I think there may have been
> > multiple rounds of design already; this is maybe the last one that's
> > still ongoing:
> > 
> > Neutron side:
> > https://bugs.launchpad.net/neutron/+bug/1809037
> > 
> > Nova side:
> > https://blueprints.launchpad.net/nova/+spec/schedule-vm-nics-to-different-pf
> > https://blueprints.launchpad.net/nova/+spec/sriov-bond
> 
> Most of the previous attempts have not proceeded, as they tried to
> hide the bonding from Nova's and Neutron's data models via configs or opaque strings.
> 
> For bonding to really be supported at the OpenStack level we will
> need to take a similar approach to trunk ports, e.g. we create a
> logical bond port and a set of bond peer ports at the Neutron API level, then we attach the bond port to the VM.
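
For comparison, the existing trunk-port workflow that such a logical bond port could mirror looks roughly like this (the network, port and server names are made up for illustration; no equivalent bond-port API exists today):

    # today's trunk pattern: a parent port plus subports, attached as one unit
    openstack port create --network net0 parent0
    openstack port create --network net1 sub0
    openstack network trunk create --parent-port parent0 \
        --subport port=sub0,segmentation-type=vlan,segmentation-id=100 trunk0
    openstack server create --flavor m1.small --image cirros \
        --nic port-id=parent0 vm0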
> 
> Currently I'm not aware of any proposal that really has traction.
> 
> You can manually create a bond in the guest, but you cannot today
> guarantee that the VFs will come from different PFs.
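
If you do go the manual route, a minimal in-guest sketch (assuming the two VFs show up as eth1 and eth2 inside the guest; as noted above there is no guarantee they sit on different PFs):

    # active-backup bond over the two VF interfaces inside the guest
    ip link add bond0 type bond mode active-backup miimon 100
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set eth2 down
    ip link set eth2 master bond0
    ip link set bond0 up
    ip addr add 192.0.2.10/24 dev bond0   # example address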
> 
> One of the reasons I think we need to go in the logical bond port
> direction is that it will allow us to construct resource requests using the functionality that is being added for bandwidth-based scheduling.
> That will make expressing affinity and anti-affinity simpler using the request groups syntax.
> I personally have stopped trying to add bond support until that bandwidth-based scheduling effort is finished.
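
To illustrate the request-groups syntax being referred to, a hypothetical flavor using granular groups with group_policy=isolate (this only shows the syntax; it does not by itself bond anything or tie the resulting VFs to Neutron ports):

    # two separate request groups, forced onto different resource providers
    # (e.g. different PFs) by group_policy=isolate
    openstack flavor set bond.test \
      --property 'resources1:SRIOV_NET_VF=1' \
      --property 'resources2:SRIOV_NET_VF=1' \
      --property 'group_policy=isolate'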
> > 
> > Hope that helps,
> > Bence
> > 
> 
> 



