[i40e][vf][sriov] Disabling i40evf driver at the compute level.
Hey everyone,
We are troubleshooting some weirdness with Intel X710 cards and i40evf. Looking around, it seems that most vendors recommend disabling the i40evf driver on the compute side (and keeping the i40e driver).
Is anyone aware of any funkiness if both the i40e driver and the i40evf driver are present on computes running VMs with SR-IOV ports?
Thanks!
Hello Laurent:
The i40evf is the guest driver when using VFs from a Fortville NIC. The i40e is the host driver that controls the PF and provides the VFs for the guests.
Without the i40evf, the virtual machine cannot use the device it is given.
The two drivers have different roles.
Regards.
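(For illustration of the host-side half of that split: the i40e PF exposes the standard sysfs knob for carving out VFs. A minimal sketch; the interface name enp3s0f0 is a placeholder, not from this thread:)

    # On the compute host, the i40e driver owns the PF and creates the VFs.
    # "enp3s0f0" is a placeholder PF interface name; substitute your own.
    echo 4 > /sys/class/net/enp3s0f0/device/sriov_numvfs   # create 4 VFs
    lspci | grep -i "Virtual Function"                     # the new VFs appear on the PCI bus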
Hey Rodolfo,
That was our understanding as well. I guess our thinking is more about whether the i40evf driver being present on a compute that is providing a PF/VF to a VM could create a conflict at some point. The VM will use its own kernel with its own version of i40evf, but the compute providing the SR-IOV PF is also loaded with i40evf + i40e.
Just looking to understand the expected deployment scenario.
Thanks!
On Fri, 2021-07-09 at 05:33 -0400, Laurent Dumont wrote:
> Hey Rodolfo,
> That was our understanding as well. I guess our thinking is more about whether the i40evf driver being present on a compute that is providing a PF/VF to a VM could create a conflict at some point.
In principle it should not, as you are meant to be able to mix usage of the VFs.
> The VM will use its own kernel with its own version of i40evf, but the compute providing the SR-IOV PF is also loaded with i40evf + i40e.
Correct, the guest will provide its own driver (most likely either i40evf or iavf), unless it is using DPDK or a similar technology that leverages a userspace driver.
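(A quick way to confirm which driver is actually bound, on the host or inside the guest; the PCI address below is a placeholder and the output is only indicative:)

    # Show the kernel driver bound to a VF; 0000:03:02.0 is a placeholder address.
    lspci -nnks 0000:03:02.0
    #   ...
    #   Kernel driver in use: i40evf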
> Just looking to understand the expected deployment scenario.
Removal of the i40evf driver on the host will have some side effects, the main ones being the inability to use a VF to provide host networking and the inability to use the macvtap vnic_type in Neutron. The macvtap vnic type creates a macvtap device on top of the VF netdev on the host and passes the macvtap to the guest instead of the VF, which allows the VM to live migrate and removes the need for a vendor driver for the NIC, at the cost of a performance overhead from the macvtap device acting as an intermediary.

In general it is safe to remove the driver on the host, but it may have other implications. I believe the way we implemented bandwidth QoS does not rely on the VF having a netdev name to function, just the PF, but normally the host has both drivers, so this is less well tested.

To prevent the driver being used on the host, you can add it to the kernel module blacklist. You can normally do that with modprobe.blacklist=driver_name and rd.driver.blacklist=driver_name; rd.driver.blacklist is for the initramfs and modprobe.blacklist is for the running host after the root filesystem is loaded, so you should add i40evf to both. You can also do it via the filesystem: https://wiki.archlinux.org/title/Kernel_module#Using_files_in_/etc/modprobe....
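(Putting that together, a sketch for a grub/dracut based distro; the file name under /etc/modprobe.d is illustrative:)

    # Kernel command line (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub):
    #   ... modprobe.blacklist=i40evf rd.driver.blacklist=i40evf
    # rd.driver.blacklist covers the initramfs, modprobe.blacklist the booted host.

    # Or via the filesystem:
    echo "blacklist i40evf" > /etc/modprobe.d/blacklist-i40evf.conf

    # Rebuild the initramfs so the blacklist also applies at early boot:
    dracut -f    # dracut-based distros; update-initramfs -u on Debian/Ubuntu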
participants (3)
- Laurent Dumont
- Rodolfo Alonso Hernandez
- Sean Mooney