Thanks Sean,

I don't have a NIC that supports hardware offloading or any similar feature. I am using the Intel 82599 NIC just for SR-IOV and looking for bonding support, which is only possible inside the VM. As you know, we already run a large SR-IOV environment with OpenStack, but my biggest issue is upgrading switches without downtime. I want to be more resilient so I don't have to worry about that.

Do you still think it's dangerous or a bad idea to bond SR-IOV NICs inside the VM? What could go wrong here? Just trying to understand before I go crazy :)




On Fri, Mar 10, 2023 at 6:57 AM Sean Mooney <smooney@redhat.com> wrote:
On Thu, 2023-03-09 at 16:43 -0500, Satish Patel wrote:
> Folks,
>
> As you know, SR-IOV doesn't support bonding so the only solution is to
> implement LACP bonding inside the VM.
>
> I did some tests in the lab: I created two physnets, mapped them to two
> physical NICs, created VFs, and attached them to a VM. So far so good, but one
> problem I am seeing is that each neutron port I create has an IP address
> associated with it, and I can use only one IP on the bond, so the other is
> just a waste of IP in the public IP pool.
>
> Is there any way to create an SR-IOV port without an IP address?
Technically we now support addressless ports in neutron and nova,
so that should be possible.
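
For example, something along these lines should work with openstacksdk (a rough, untested sketch; the cloud, network, image and flavor names are placeholders for your environment, and depending on your neutron config you may also need to adjust port security):

    # Sketch: create an addressless SR-IOV (direct) port and boot a VM with it.
    # "mycloud", "sriov-net-1", "rocky9" and "m1.large" are placeholder names.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    net = conn.network.find_network("sriov-net-1")

    # fixed_ips=[] asks neutron for a port with no IP address allocated
    port = conn.network.create_port(
        network_id=net.id,
        binding_vnic_type="direct",  # SR-IOV VF
        fixed_ips=[],
    )

    conn.compute.create_server(
        name="bond-test-vm",
        image_id=conn.compute.find_image("rocky9").id,
        flavor_id=conn.compute.find_flavor("m1.large").id,
        networks=[{"port": port.id}],
    )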
If you tried to do this with hardware-offloaded OVS rather than the standard SR-IOV with the SR-IOV
NIC agent, you will likely also need to use the allowed_address_pairs extension to ensure that OVS does not
drop the packets based on the IP address. If you are using hierarchical port binding, where your TOR is managed
by an ML2 driver, you might also need the allowed_address_pairs extension with the SR-IOV NIC agent to make sure
the packets are not dropped at the switch level.
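
As a rough illustration (again an untested openstacksdk sketch; the port IDs and addresses are placeholders), you would add the bond's shared IP/MAC to both VF ports:

    # Sketch: allow the bond's shared IP/MAC on both SR-IOV ports so the
    # traffic is not dropped by offloaded OVS or a TOR managed by an ML2 driver.
    # Port IDs and addresses below are placeholders.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    bond_pair = [{"ip_address": "203.0.113.10", "mac_address": "fa:16:3e:aa:bb:cc"}]

    for port_id in ("vf-port-1-uuid", "vf-port-2-uuid"):
        conn.network.update_port(port_id, allowed_address_pairs=bond_pair)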

As you likely already know, we do not support VF bonding in OpenStack, or bonded ports in general, in the neutron API.
There was an effort a few years ago to make a bond port extension that mirrors how trunk ports work,
i.e. having 2 neutron subports and a bond port that aggregates them, but we never got that far with
the design. That would have enabled bonding to be implemented in different ML2 drivers like OVS/SR-IOV/OVN etc. with
a consistent/common API.

Some people have used Mellanox's VF LAG functionality; however, that was never actually enabled properly in nova/neutron,
so it's not officially supported upstream. That functionality allows you to attach only a single VF to the guest from
bonded ports on a single card.

There is no official support in nova/neutron for that; as I said, it just happens to work unintentionally, so I would not
advise that you use it in production unless you're happy to work through any issues you find yourself.