Tesla V100 32G GPU with openstack
satish.txt at gmail.com
Mon Jan 17 17:41:33 UTC 2022
We have a Tesla V100 32G GPU and I’m trying to configure it with OpenStack Wallaby. This is my first time dealing with GPUs, so I have a couple of questions.
1. What is the difference between passthrough and vGPU? I searched online but it’s still not clear to me.
2. If I configure it as passthrough, does it only work with a single VM? (I mean the whole GPU gets allocated to a single VM, correct?)
3. Some documentation says the Tesla V100 supports vGPU, but some folks say you need a license for that. I have no idea where to get that license. What is the deal here?
4. What are the configuration differences between setting up this card with passthrough vs vGPU?
Currently I have it configured as passthrough based on an article, and I am able to spin up a VM and see the NVIDIA card exposed inside it (I used IOMMU and the vfio-based driver). So if this card supports vGPU, do I still need IOMMU and vfio, or is some other driver needed to virtualize it?
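For reference, here is roughly the passthrough setup I followed. This is a sketch, not my exact files: the vendor/product ID 10de:1db6 is an example for a V100 PCIe 32GB and should be verified with `lspci -nn` on your own compute node, and the flavor name/sizes are made up.

```shell
# Kernel command line on the compute node: enable the IOMMU and bind the
# GPU to vfio-pci at boot (IDs are examples; confirm with `lspci -nn`):
#   GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt vfio-pci.ids=10de:1db6"

# /etc/nova/nova.conf on the compute node:
#   [pci]
#   passthrough_whitelist = {"vendor_id": "10de", "product_id": "1db6"}
#   alias = {"vendor_id": "10de", "product_id": "1db6", "device_type": "type-PCI", "name": "v100"}

# /etc/nova/nova.conf on the controller: add the PCI passthrough filter
# (and the same [pci] alias entry as above):
#   [filter_scheduler]
#   enabled_filters = ...,PciPassthroughFilter

# Flavor that requests one whole GPU -- which is why a passthrough card
# ends up dedicated to a single VM:
openstack flavor create --ram 16384 --disk 40 --vcpus 8 gpu.v100
openstack flavor set gpu.v100 --property "pci_passthrough:alias"="v100:1"
```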
Sent from my iPhone