[Openstack] [openstack-dev] Neutron support for passthrough of networking devices?

Prashant Upadhyaya prashant.upadhyaya at aricent.com
Fri Oct 11 10:07:02 UTC 2013


> But if there are two physical NICs which were diced up with SR-IOV,
> then VMs on the diced-up parts of the first physical NIC cannot
> communicate easily with the VMs on the diced-up parts of the second
> physical NIC. So a native implementation has to exist on the Compute
> Node to aid this (this native implementation will take over the
> Physical Function, PF, of each NIC) and will be able to 'switch'
> packets between VMs on different diced-up physical NICs [if we need
> that use case].

Is this strictly necessary?  It seems like it would be simpler to let the packets go out over the wire and have the switch/router send them back in through the other NIC.  Of course this would result in higher use of the physical link, but on the other hand it would mean less work for the CPU on the compute node.

PU> Not strictly necessary. I come from a data-plane background (Intel DPDK + SR-IOV), and the Intel DPDK guide suggests the above use case for accelerating the data path. I agree it would be much simpler to go out to the switch and back into the second NIC; let's first solve SR-IOV itself in OpenStack, as that alone will be a major step forward.
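
PU> For concreteness, the 'dicing up' I keep referring to is creating SR-IOV Virtual Functions (VFs) on a Physical Function (PF). On the compute node it looks roughly like the Python sketch below; the PF name eth2, the VF count, and the sysfs paths assume a kernel (3.8 or newer) that exposes the sriov_numvfs knob, so treat this as illustrative rather than definitive.

    import glob
    import os

    PF = 'eth2'  # hypothetical PF interface name
    dev = '/sys/class/net/%s/device' % PF

    # Carve the PF into 4 VFs (root required; the NIC driver must
    # support the sriov_numvfs interface, kernel 3.8 or newer)
    with open(os.path.join(dev, 'sriov_numvfs'), 'w') as f:
        f.write('4')

    # Each VF appears as a virtfnN symlink whose target is the VF's
    # PCI address, i.e. the device that gets passed through to a VM
    for vf in sorted(glob.glob(os.path.join(dev, 'virtfn*'))):
        print('%s -> %s' % (os.path.basename(vf),
                            os.path.basename(os.readlink(vf))))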

Regards
-Prashant

-----Original Message-----
From: Chris Friesen [mailto:chris.friesen at windriver.com]
Sent: Thursday, October 10, 2013 8:21 PM
To: Prashant Upadhyaya
Cc: OpenStack Development Mailing List; Jiang, Yunhong; openstack at lists.openstack.org
Subject: Re: [openstack-dev] [Openstack] Neutron support for passthrough of networking devices?

On 10/10/2013 01:19 AM, Prashant Upadhyaya wrote:
> Hi Chris,
>
> I note two of your comments --

>>> When we worked on the H release, we targeted basic PCI support,
>>> like accelerator cards or encryption cards.

> PU> So I note that you are already solving the PCI passthrough use
> case somehow? How? If you have already solved this in terms of
> architecture, then SR-IOV should not be difficult.

Notice the double indent... that was actually Jiang's statement that I quoted.
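
For reference, the basic PCI support Jiang mentions is driven by a whitelist and alias in nova.conf plus a flavor extra spec. A rough sketch with python-novaclient follows; the alias name "niantic", the device IDs, the flavor, and the client construction are illustrative placeholders and vary by release.

    # nova.conf (illustrative device IDs):
    #   pci_passthrough_whitelist = {"vendor_id": "8086", "product_id": "10fb"}
    #   pci_alias = {"name": "niantic", "vendor_id": "8086", "product_id": "10fb"}

    from novaclient import client

    # Client construction differs across novaclient releases
    nova = client.Client('2', 'admin', 'secret', 'demo',
                         'http://controller:5000/v2.0')

    # Ask the scheduler for one passthrough device matching the alias
    flavor = nova.flavors.find(name='m1.large')
    flavor.set_keys({'pci_passthrough:alias': 'niantic:1'})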


>> Do we run into the same complexity if we have spare physical NICs on
>> the host that get passed in to the guest?

> PU> In part you are correct. However, there is one additional thing:
> when we have multiple physical NICs, the Compute Node's Linux is
> still in control of those.

<snip>

> In the case of SR-IOV, you can dice up a single physical NIC into
> (effectively) multiple NICs, and expose each of these diced-up NICs
> to its own VM. This means that the VM will now 'directly' access the
> NIC, bypassing the hypervisor.
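
(For anyone unfamiliar, the 'direct' access here is typically a PCI hostdev attached to the guest. With the libvirt Python bindings it would look roughly like the sketch below, where the VF PCI address and the guest name are hypothetical.)

    import libvirt

    # PCI address of one VF, e.g. taken from a virtfnN symlink
    VF_HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x10' function='0x1'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('guest-vm')  # hypothetical guest name
    dom.attachDevice(VF_HOSTDEV_XML)     # guest now drives the VF directly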

<snip>

> But if there are two physical NICs which were diced up with SR-IOV,
> then VMs on the diced-up parts of the first physical NIC cannot
> communicate easily with the VMs on the diced-up parts of the second
> physical NIC. So a native implementation has to exist on the Compute
> Node to aid this (this native implementation will take over the
> Physical Function, PF, of each NIC) and will be able to 'switch'
> packets between VMs on different diced-up physical NICs [if we need
> that use case].

Is this strictly necessary?  It seems like it would be simpler to let the packets go out over the wire and have the switch/router send them back in through the other NIC.  Of course this would result in higher use of the physical link, but on the other hand it would mean less work for the CPU on the compute node.

Chris



