<div dir="ltr">Hi,<div><br></div><div>I have sr-iov in production in some customers with maximum number of VFs and didn't notice any performance issues. </div><div><br></div><div>My understanding is that of course you will have performance penalty if you consume all those vfs, because you're dividing the bandwidth across them, but other than if they're are there doing nothing you won't notice anything.</div><div><br></div><div>But I'm just talking from my experience :) </div><div><br></div><div>Regards,</div><div>Pedro Sousa </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 22, 2018 at 11:47 PM, Maciej Kucia <span dir="ltr"><<a href="mailto:maciej@kucia.net" target="_blank">maciej@kucia.net</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">Thank you for the reply. I am interested in SR-IOV and pci whitelisting is certainly involved.<br>I suspect that OpenStack itself can handle those numbers of devices, especially in telco applications where not much scheduling is being done. The feedback I am getting is from sysadmins who work on network virtualization but I think this is just a rumor without any proof.<br><br>The question is if performance penalty from SR-IOV drivers or PCI itself is negligible. Should cloud admin configure maximum number of VFs for flexibility or should it be manually managed and balanced depending on application?<br><br>Regards,<br>Maciej<div><div class="h5"><div> <br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div><div><div><div class="m_7286409634420404321h5"><div></div><div><div class="gmail_extra"><br><div class="gmail_quote">2018-01-22 18:38 GMT+01:00 Jay Pipes <span dir="ltr"><<a href="mailto:jaypipes@gmail.com" target="_blank">jaypipes@gmail.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="m_7286409634420404321m_5470414210473197585gmail-">On 01/22/2018 11:36 AM, Maciej Kucia wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Hi!<br>
<br>
Is there any noticeable performance penalty when using multiple virtual functions?<br>
<br>
For simplicity I am enabling all available virtual functions in my NICs.<br>
</blockquote>
<br></span>
>> I presume by the above you are referring to setting your pci_passthrough_whitelist on your compute nodes to whitelist all VFs on a particular PF's PCI address domain/bus?
>>
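>> For reference, a minimal whitelist entry along those lines might look like the following in nova.conf. This is only a sketch: the PCI address pattern and physical_network value are placeholders rather than anything from this thread, and the exact option name and section vary between Nova releases.
>>
>>   [pci]
>>   # Hypothetical example: whitelist VFs whose PCI addresses match 0000:05:00.*
>>   # and associate them with the (placeholder) provider network physnet2.
>>   passthrough_whitelist = {"address": "0000:05:00.*", "physical_network": "physnet2"}
>>
>> (The number of VFs that actually exist on the host is controlled separately, typically via the PF's sriov_numvfs sysfs attribute up to the limit reported by sriov_totalvfs; the whitelist only decides which of them Nova is allowed to hand out.)
>>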
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Sometimes application is using only few of them. I am using Intel and Mellanox.<br>
<br>
I do not see any performance drop but I am getting feedback that this might not be the best approach.<br>
</blockquote>
<br></span>
>> Who is giving you this feedback?
>>
>> The only issue with enabling (potentially 254 or more) VFs for each PF is that each VF ends up as a record in the pci_devices table in the Nova cell database. Multiply 254 or more by the number of PFs and by the number of compute nodes in your deployment and you can end up with a large number of records that need to be stored. That said, the pci_devices table is well indexed, and even if you had 1M or more records in it, the access of a few hundred of those records when the resource tracker does a PciDeviceList.get_by_compute_node() [1] will still be quite fast.
>>
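>> For a rough sense of scale (illustrative numbers only, not from any particular deployment): 2 PFs per node x 254 VFs per PF x 500 compute nodes works out to roughly 254,000 rows in pci_devices, while the per-compute-node query behind PciDeviceList.get_by_compute_node() still only touches the few hundred rows belonging to that node.
>>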
>> Best,
>> -jay
>>
>> [1] https://github.com/openstack/nova/blob/stable/pike/nova/compute/resource_tracker.py#L572 and then
>> https://github.com/openstack/nova/blob/stable/pike/nova/pci/manager.py#L71
>>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
Any recommendations?<br>
<br>
Thanks,<br>
Maciej<br>
<br>
<br>

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators