<div dir="ltr">Just wanted to add a few notes (I apologize for the brevity):<div><br></div><div>* The wiki page is indeed the best source of information to get started.<br></div><div>* I found that I didn't have to use EFI-based images. I wonder why that is?</div><div>* PCI devices and IDs can be found by running the following on a compute node:</div><div><br></div><div>$ lspci -nn | grep -i nvidia</div><div><div>84:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [GRID K1] [10de:0ff2] (rev a1)</div><div>85:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [GRID K1] [10de:0ff2] (rev a1)</div><div>86:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [GRID K1] [10de:0ff2] (rev a1)</div><div>87:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK107GL [GRID K1] [10de:0ff2] (rev a1)</div></div><div><br></div><div>Here, 10de is the vendor ID and 0ff2 is the product ID.</div><div><br></div><div>* My nova.conf looks like this:</div><div><br></div><div><div>pci_alias={"vendor_id":"10de", "product_id":"0ff2", "name":"gpu"}</div><div>scheduler_driver=nova.scheduler.filter_scheduler.FilterScheduler</div><div>scheduler_available_filters=nova.scheduler.filters.all_filters</div><div>scheduler_available_filters=nova.scheduler.filters.pci_passthrough_filter.PciPassthroughFilter</div><div>scheduler_default_filters=RamFilter,ComputeFilter,AvailabilityZoneFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,PciPassthroughFilter</div></div><div><br></div><div>* My /etc/default/grub on the compute node has the following entries:</div><div><br></div><div><div>GRUB_CMDLINE_LINUX_DEFAULT="quiet splash intel_iommu=on iommu=pt rd.modules-load=vfio-pci"</div><div>GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt rd.modules-load=vfio-pci"</div></div><div><br></div><div>* I use the following to create a flavor with access to a single GPU:</div><div><br></div><div><div>nova flavor-create g1.large auto 8192 20 4 --ephemeral 20 
--swap 2048</div><div>nova flavor-key g1.large set "pci_passthrough:alias"="gpu:1"</div></div><div><br></div><div>For NVIDIA cards in particular, it might take a few attempts to install the correct driver version, CUDA tools version, etc., to get things working correctly. NVIDIA has a bundle of CUDA examples, one of which is "/usr/local/cuda-7.5/samples/1_Utilities/deviceQuery". Running this will confirm whether the instance can successfully access the GPU.</div><div><br></div><div>Hope this helps!</div><div>Joe</div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 10, 2016 at 8:58 AM, Tomas Vondra <span dir="ltr"><<a href="mailto:vondra@czech-itc.cz" target="_blank">vondra@czech-itc.cz</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">Nordquist, Peter L <Peter.Nordquist@...> writes:<br>
<br>
> You will also have to enable iommu on your hypervisors to have libvirt<br>
expose the capability to Nova for PCI<br>
> passthrough. I use Centos 7 and had to set 'iommu=pt intel_iommu=on' for<br>
my kernel parameters. Along with<br>
> this, you'll have to start using EFI for your VMs by installing OVMF on<br>
your Hypervisors and configuring<br>
> your images appropriately. I don't have a link handy for this but the<br>
gist is that Legacy bootloaders have a<br>
> much more complicated process to initialize the devices being passed to<br>
the VM where EFI is much easier.<br>
<br>
</span>Hi!<br>
What I found out the hard way under the Xen hypervisor is that the GPU you<br>
are passing through must not be the primary GPU of the system. Otherwise,<br>
you get memory corruption as soon as something appears on the console. If<br>
not sooner :-). Test if your motherboards are capable of running on the<br>
integrated VGA even if some other graphics card is connected. Or blacklist<br>
it for the kernel.<br>
<span class="HOEnZb"><font color="#888888">Tomas<br>
</font></span><div class="HOEnZb"><div class="h5"><br>
<br>
<br>
_______________________________________________<br>
OpenStack-operators mailing list<br>
<a href="mailto:OpenStack-operators@lists.openstack.org">OpenStack-operators@lists.openstack.org</a><br>
<a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators" rel="noreferrer" target="_blank">http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators</a><br>
</div></div></blockquote></div><br></div>
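A few sanity checks may save time when wiring the pieces above together. This is a sketch, not a definitive procedure: the PCI address 84:00.0 is taken from the lspci output in the message above, the deviceQuery path assumes CUDA 7.5 with the samples already built, and the grub-regeneration command differs by distro (update-grub on Debian/Ubuntu, grub2-mkconfig on CentOS/RHEL).

```shell
# On the compute node: confirm the IOMMU actually came up at boot
# (expect DMAR / "IOMMU enabled" lines if intel_iommu=on took effect)
dmesg | grep -i -e DMAR -e IOMMU

# Confirm the GPU is bound to vfio-pci rather than the host's
# nvidia/nouveau driver; look for "Kernel driver in use: vfio-pci"
lspci -nnk -s 84:00.0

# After editing /etc/default/grub, regenerate the config and reboot
# so the new kernel parameters take effect
# (CentOS/RHEL equivalent: grub2-mkconfig -o /boot/grub2/grub.cfg)
sudo update-grub && sudo reboot

# Inside the instance, after installing the NVIDIA driver and building
# the CUDA samples, run deviceQuery to verify the GPU is visible
/usr/local/cuda-7.5/samples/1_Utilities/deviceQuery/deviceQuery
```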