[openstack-dev] Fwd: PCI passthrough of 40G ethernet interface

jacob jacob opstkusr at gmail.com
Fri Mar 27 14:00:52 UTC 2015


After updating to the latest firmware and using version 1.2.37 of the i40e
driver, things are looking better with PCI passthrough.

# ethtool -i eth3
driver: i40e
version: 1.2.37
firmware-version: f4.33.31377 a1.2 n4.42 e1930
bus-info: 0000:00:07.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes


There are still issues running DPDK 1.8.0 from the VM with the PCI passthrough
devices, and it looks like this leaves the devices in a bad state: the i40e
driver will not bind afterwards, and a host reboot is required to recover.
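
In case it helps anyone trying to reproduce, the recovery I would expect to
work without a reboot is roughly the following (the BDF is illustrative;
substitute whatever lspci reports for the XL710 port, and it assumes the
device exposes a function-level reset through sysfs). So far, though, only a
full host reboot has actually brought the ports back:

BDF=0000:02:00.0                                      # illustrative address; adjust to your system
echo $BDF > /sys/bus/pci/devices/$BDF/driver/unbind   # detach whatever driver currently owns the port
echo 1 > /sys/bus/pci/devices/$BDF/reset              # request a function-level reset, if the device supports one
echo $BDF > /sys/bus/pci/drivers/i40e/bind            # hand the port back to i40e
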
I'll post further updates as I make progress.
Thanks for all the help.

On Thu, Mar 26, 2015 at 8:50 PM, yongli he <yongli.he at intel.com> wrote:
> On 2015-03-11 22:15, jacob jacob wrote:
> Hi, jacob
>
>   We have now found that przemyslaw.czesnowicz has the same NIC; hope that
> will help a little bit.
>
> Yongli He
>
>
> ---------- Forwarded message ----------
> From: jacob jacob <opstkusr at gmail.com>
> Date: Tue, Mar 10, 2015 at 6:00 PM
> Subject: PCI passthrough of 40G ethernet interface
> To: openstack at lists.openstack.org
>
>
>
> Hi,
> I'm interested in finding out if anyone has successfully tested PCI
> passthrough functionality for 40G interfaces in OpenStack (KVM hypervisor).
>
> I am trying this out on a Fedora 21 host with a Fedora 21 VM image
> (3.18.7-200.fc21.x86_64).
>
> I was able to successfully test PCI passthrough of 10G interfaces:
>   Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network
> Connection (rev 01)
>
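> For completeness, the nova side is set up with the usual PCI passthrough
> configuration, roughly along these lines (the IDs and flavor name are
> illustrative; the product_id should match whatever lspci -nn reports for the
> XL710 on the host):
>
>   # compute node /etc/nova/nova.conf: which devices may be passed through
>   pci_passthrough_whitelist = {"vendor_id":"8086","product_id":"1583"}
>
>   # controller /etc/nova/nova.conf: alias referenced by flavors;
>   # PciPassthroughFilter must also be in scheduler_default_filters
>   pci_alias = {"vendor_id":"8086","product_id":"1583","name":"xl710"}
>
>   # flavor requesting one of the 40G functions through the alias
>   nova flavor-key m1.large set "pci_passthrough:alias"="xl710:1"
>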
> With the 40G interface, the PCI device is passed through to the VM
> successfully, but data transfer fails.
>     0a:00.1 Ethernet controller: Intel Corporation Ethernet Controller XL710
> for 40GbE QSFP+ (rev 01)
>
> I tried this with both the i40e driver and the latest DPDK driver, but no
> luck so far.
>
> With the i40e driver, the data transfer fails.
>  Relevant dmesg output:
>  [   11.544088] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   11.689178] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   16.704071] ------------[ cut here ]------------
> [   16.705053] WARNING: CPU: 1 PID: 0 at net/sched/sch_generic.c:303
> dev_watchdog+0x23e/0x250()
> [   16.705053] NETDEV WATCHDOG: eth1 (i40e): transmit queue 1 timed out
> [   16.705053] Modules linked in: cirrus ttm drm_kms_helper i40e drm ppdev
> serio_raw i2c_piix4 virtio_net parport_pc ptp virtio_balloon
> crct10dif_pclmul pps_core parport pvpanic crc32_pclmul ghash_clmulni_intel
> virtio_blk crc32c_intel virtio_pci virtio_ring virtio ata_generic pata_acpi
> [   16.705053] CPU: 1 PID: 0 Comm: swapper/1 Not tainted
> 3.18.7-200.fc21.x86_64 #1
> [   16.705053] Hardware name: Fedora Project OpenStack Nova, BIOS
> 1.7.5-20140709_153950- 04/01/2014
> [   16.705053]  0000000000000000 2e5932b294d0c473 ffff88043fc83d48
> ffffffff8175e686
> [   16.705053]  0000000000000000 ffff88043fc83da0 ffff88043fc83d88
> ffffffff810991d1
> [   16.705053]  ffff88042958f5c0 0000000000000001 ffff88042865f000
> 0000000000000001
> [   16.705053] Call Trace:
> [   16.705053]  <IRQ>  [<ffffffff8175e686>] dump_stack+0x46/0x58
> [   16.705053]  [<ffffffff810991d1>] warn_slowpath_common+0x81/0xa0
> [   16.705053]  [<ffffffff81099245>] warn_slowpath_fmt+0x55/0x70
> [   16.705053]  [<ffffffff8166e62e>] dev_watchdog+0x23e/0x250
> [   16.705053]  [<ffffffff8166e3f0>] ? dev_graft_qdisc+0x80/0x80
> [   16.705053]  [<ffffffff810fd52a>] call_timer_fn+0x3a/0x120
> [   16.705053]  [<ffffffff8166e3f0>] ? dev_graft_qdisc+0x80/0x80
> [   16.705053]  [<ffffffff810ff692>] run_timer_softirq+0x212/0x2f0
> [   16.705053]  [<ffffffff8109d7a4>] __do_softirq+0x124/0x2d0
> [   16.705053]  [<ffffffff8109db75>] irq_exit+0x125/0x130
> [   16.705053]  [<ffffffff817681d8>] smp_apic_timer_interrupt+0x48/0x60
> [   16.705053]  [<ffffffff817662bd>] apic_timer_interrupt+0x6d/0x80
> [   16.705053]  <EOI>  [<ffffffff811005c8>] ? hrtimer_start+0x18/0x20
> [   16.705053]  [<ffffffff8105ca96>] ? native_safe_halt+0x6/0x10
> [   16.705053]  [<ffffffff810f81d3>] ? rcu_eqs_enter+0xa3/0xb0
> [   16.705053]  [<ffffffff8101ec7f>] default_idle+0x1f/0xc0
> [   16.705053]  [<ffffffff8101f64f>] arch_cpu_idle+0xf/0x20
> [   16.705053]  [<ffffffff810dad35>] cpu_startup_entry+0x3c5/0x410
> [   16.705053]  [<ffffffff8104a2af>] start_secondary+0x1af/0x1f0
> [   16.705053] ---[ end trace 7bda53aeda558267 ]---
> [   16.705053] i40e 0000:00:05.0 eth1: tx_timeout recovery level 1
> [   16.705053] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring
> 0 disable timeout
> [   16.744198] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring
> 64 disable timeout
> [   16.779322] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
> [   16.791819] i40e 0000:00:05.0: PF 40 attempted to control timestamp mode
> on port 1, which is owned by PF 1
> [   16.933869] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   18.853624] SELinux: initialized (dev tmpfs, type tmpfs), uses transition
> SIDs
> [   22.720083] i40e 0000:00:05.0 eth1: tx_timeout recovery level 2
> [   22.826993] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 519 Tx ring
> 0 disable timeout
> [   22.935288] i40e 0000:00:05.0: i40e_vsi_control_tx: VSI seid 520 Tx ring
> 64 disable timeout
> [   23.669555] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
> [   23.682067] i40e 0000:00:05.0: PF 40 attempted to control timestamp mode
> on port 1, which is owned by PF 1
> [   23.722423] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   23.800206] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2
> [   23.813804] i40e 0000:00:06.0: PF 48 attempted to control timestamp mode
> on port 0, which is owned by PF 0
> [   23.855275] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   38.720091] i40e 0000:00:05.0 eth1: tx_timeout recovery level 3
> [   38.725844] random: nonblocking pool is initialized
> [   38.729874] i40e 0000:00:06.0: HMC error interrupt
> [   38.733425] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 518 Tx ring
> 0 disable timeout
> [   38.738886] i40e 0000:00:06.0: i40e_vsi_control_tx: VSI seid 521 Tx ring
> 64 disable timeout
> [   39.689569] i40e 0000:00:06.0: i40e_ptp_init: added PHC on eth2
> [   39.704197] i40e 0000:00:06.0: PF 48 attempted to control timestamp mode
> on port 0, which is owned by PF 0
> [   39.746879] i40e 0000:00:06.0 eth2: NIC Link is Down
> [   39.838356] i40e 0000:00:05.0: i40e_ptp_init: added PHC on eth1
> [   39.851788] i40e 0000:00:05.0: PF 40 attempted to control timestamp mode
> on port 1, which is owned by PF 1
> [   39.892822] i40e 0000:00:05.0 eth1: NIC Link is Down
> [   43.011610] i40e 0000:00:06.0 eth2: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
> [   43.059976] i40e 0000:00:05.0 eth1: NIC Link is Up 40 Gbps Full Duplex,
> Flow Control: None
>
>
> Similarly, with the DPDK driver, no packet transfer is happening.
>
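> (The DPDK check here is essentially the standard bind + testpmd forwarding
> test inside the VM; paths, core mask and port addresses below are
> illustrative:)
>
>   modprobe uio
>   insmod $RTE_SDK/x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
>   $RTE_SDK/tools/dpdk_nic_bind.py --bind=igb_uio 00:05.0 00:06.0
>   $RTE_SDK/x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i --portmask=0x3
>   testpmd> start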
>
> The host was booted with iommu=pt intel_iommu=on.
>
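> In case it matters, the usual sanity checks for confirming the IOMMU is
> actually active on the host are roughly:
>
>   dmesg | grep -i -e DMAR -e IOMMU              # DMAR / interrupt remapping messages from boot
>   find /sys/kernel/iommu_groups/ -type l | grep 0000:0a:00   # IOMMU group(s) of the XL710 functions
>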
> Note that everything works fine on the host itself; the issue is seen only
> when the devices are passed through to the VM.
>
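> For anyone who wants to compare notes, the <hostdev> element that nova/libvirt
> generates for one of these functions should look roughly like this (from virsh
> dumpxml; the source address is the host BDF of the XL710 port):
>
>   <hostdev mode='subsystem' type='pci' managed='yes'>
>     <source>
>       <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
>     </source>
>   </hostdev>
>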
> I would really appreciate any additional information on how to address or
> further debug this issue.
>
> Thanks
> Jacob
>
>
>
> __________________________________________________________________________
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


