<div dir="ltr">Thanks Sean,<div><br></div><div>I don't have NIC which supports hardware offloading or any kind of feature. I am using intel nic 82599 just for SRIOV and looking for bonding support which is only possible inside VM. As you know we already run a large SRIOV environment with openstack but my biggest issue is to upgrade switches without downtime. I want to be more resilient to not worry about that. </div><div><br></div><div>Do you still think it's dangerous or not a good idea to bond sriov nic inside VM? what could go wrong here just trying to understand before i go crazy :) </div><div><br><div><br></div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Mar 10, 2023 at 6:57 AM Sean Mooney <<a href="mailto:smooney@redhat.com">smooney@redhat.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">On Thu, 2023-03-09 at 16:43 -0500, Satish Patel wrote:<br>
> Folks,
>
> As you know, SR-IOV doesn't support bonding, so the only solution is to
> implement LACP bonding inside the VM.
>
> I did some tests in the lab: I created two physnets, mapped them to two
> physical NICs, created VFs, and attached them to a VM. So far all good, but one
> problem I am seeing is that each neutron port I create has an IP address
> associated with it, and I can only use one IP on the bond, so the other is just a
> wasted IP from the public IP pool.
>
> Is there any way to create an SR-IOV port without an IP address?
technically we now support addressless ports in neutron and nova,
so that should be possible.
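for example, something along these lines should give you VF ports with no fixed ip
(untested sketch; the network, image, flavor and port names are just placeholders):

    # one addressless VF port per physnet; --vnic-type direct requests an SR-IOV VF
    openstack port create --network physnet1-net --vnic-type direct \
        --no-fixed-ip bond-vf-0
    openstack port create --network physnet2-net --vnic-type direct \
        --no-fixed-ip bond-vf-1
    # depending on the neutron version you may also need --disable-port-security,
    # which is mutually exclusive with the allowed_address_pairs approach below

    # attach both VF ports to the instance and do the bonding inside the guest
    openstack server create --flavor m1.large --image ubuntu-22.04 \
        --port bond-vf-0 --port bond-vf-1 bond-test-vm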
if you tried to do this with hardware-offloaded ovs rather than standard sriov with the sriov
nic agent, you will likely also need to use the allowed_address_pairs extension to ensure that ovs does not
drop the packets based on the ip address. if you are using hierarchical port binding, where your TOR is managed
by an ml2 driver, you might also need the allowed_address_pairs extension with the sriov nic agent to make sure
the packets are not dropped at the switch level.

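if you do end up needing allowed_address_pairs, the idea is to whitelist the ip you will
configure on the bond inside the guest on both VF ports, roughly like this (203.0.113.10 is
just an example address):

    # permit the bond's shared IP through the anti-spoofing rules on both ports
    openstack port set --allowed-address ip-address=203.0.113.10 bond-vf-0
    openstack port set --allowed-address ip-address=203.0.113.10 bond-vf-1
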
as you likely already know, we do not support VF bonding in openstack, or bonded ports in general, in the neutron api.
there was an effort a few years ago to make a bond port extension that mirrors how trunk ports work,
i.e. having 2 neutron subports and a bond port that aggregates them, but we never got that far with
the design. that would have enabled bonding to be implemented in different ml2 drivers like ovs/sriov/ovn etc. with
a consistent/common api.

some people have used mellanox's VF lag functionality, however that was never actually enabled properly in nova/neutron,
so it's not officially supported upstream, but that functionality allows you to attach only a single VF to the guest from
bonded ports on a single card.

there is no official support in nova/neutron for that; as i said it just happens to work unintentionally, so i would not
advise that you use it in production unless you're happy to work through any issues you find yourself.

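For reference, here is the rough in-guest bonding I had in mind (just a sketch; the interface
names and the address are placeholders, and 802.3ad across two independent TOR switches only
forms if the switches are MLAG/vPC peers, otherwise mode active-backup is the safer choice):

    # inside the guest: bond the two VF interfaces
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set ens4 down && ip link set ens4 master bond0
    ip link set ens5 down && ip link set ens5 master bond0
    ip link set ens4 up && ip link set ens5 up && ip link set bond0 up
    ip addr add 203.0.113.10/24 dev bond0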