Tesla V100 32G GPU with openstack

António Paulo antonio.paulo at cern.ch
Tue Jan 18 10:55:52 UTC 2022


Hey Satish, Gustavo,

Just to clarify a bit on point 3, you will have to buy a vGPU license
per card and this gives you access to all the downloads you need through
NVIDIA's web dashboard -- both the host and guest drivers as well as the
license server setup files.
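
For anyone following along, once the guest driver is installed, the VM is typically pointed at the license server through /etc/nvidia/gridd.conf. A minimal sketch -- the server address and port below are placeholders, so check NVIDIA's GRID client licensing guide for your driver branch:

```ini
# /etc/nvidia/gridd.conf inside the guest VM (hypothetical values)
# Address and port of your NVIDIA license server
ServerAddress=gridlicense.example.com
ServerPort=7070
# 1 = NVIDIA vGPU license; other FeatureType values select other editions
FeatureType=1
```

After restarting the nvidia-gridd service, the guest should check a license out from the server.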

Cheers,
António

On 18/01/22 02:46, Satish Patel wrote:
> Thank you so much! This is what I was looking for. It is very odd that
> we buy a pricey card but then we have to buy a license to make those
> features available.
> 
> On Mon, Jan 17, 2022 at 2:07 PM Gustavo Faganello Santos
> <gustavofaganello.santos at windriver.com> wrote:
>>
>> Hello, Satish.
>>
>> I've been working with vGPU lately and I believe I can answer your
>> questions:
>>
>> 1. As you pointed out in question #2, the pci-passthrough will allocate
>> the entire physical GPU to one single guest VM, while vGPU allows you to
>> spawn from 1 to several VMs using the same physical GPU, depending on
>> the vGPU type you choose (check NVIDIA docs to see which vGPU types the
>> Tesla V100 supports and their properties);
>> 2. Correct;
>> 3. To use vGPU, you need vGPU drivers installed on the platform where
>> your deployment of OpenStack is running AND in the VMs, so there are two
>> drivers to be installed in order to use the feature. I believe both of
>> them have to be purchased from NVIDIA in order to be used, and you would
>> also have to deploy an NVIDIA licensing server in order to validate the
>> licenses of the drivers running in the VMs.
>> 4. You can see what the instructions are for each of these scenarios in
>> [1] and [2].
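
As a rough sketch of the vGPU scenario described in [1] -- the type name "nvidia-35" below is a placeholder; the valid names come from the card's mdev_supported_types directory in sysfs and differ per profile:

```ini
# nova.conf on the compute node (Wallaby) -- enable one vGPU type;
# "nvidia-35" is an example profile name, not necessarily valid for a V100
[devices]
enabled_vgpu_types = nvidia-35
```

After restarting nova-compute, a flavor requesting one vGPU can then be created with something like `openstack flavor set vgpu.small --property resources:VGPU=1`.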
>>
>> There is also extensive documentation on vGPU at NVIDIA's website [3].
>>
>> [1] https://docs.openstack.org/nova/wallaby/admin/virtual-gpu.html
>> [2] https://docs.openstack.org/nova/wallaby/admin/pci-passthrough.html
>> [3] https://docs.nvidia.com/grid/13.0/index.html
>>
>> Regards,
>> Gustavo.
>>
>> On 17/01/2022 14:41, Satish Patel wrote:
>>>
>>> Folk,
>>>
>>> We have a Tesla V100 32G GPU and I’m trying to configure it with OpenStack Wallaby. This is my first time dealing with a GPU, so I have a couple of questions.
>>>
>>> 1. What is the difference between passthrough and vGPU? I googled it, but it's not very clear to me yet.
>>> 2. If I configure it as passthrough, does it only work with a single VM? (I mean the whole GPU will get allocated to a single VM, correct?)
>>> 3. Also, some documents say the Tesla V100 supports vGPU, but some folks say you need a license. I have no idea where to get that license. What is the deal here?
>>> 4. What are the config differences between setting this card up with passthrough vs. vGPU?
>>>
>>>
>>> Currently I have configured it with passthrough based on one article, and I am able to spin up a VM and see the NVIDIA card exposed to it. (I used the IOMMU and vfio-based drivers.) So if this card supports vGPU, do I need IOMMU and vfio, or some other driver, to virtualize it?
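
For comparison, the passthrough setup from the Nova docs looks roughly like the following -- the vendor/product IDs shown are for one Tesla V100 PCIe variant, so confirm your card's IDs with `lspci -nn` before using them:

```ini
# nova.conf (Wallaby) -- PCI passthrough sketch; verify the product_id
# matches your exact V100 model
[pci]
passthrough_whitelist = { "vendor_id": "10de", "product_id": "1db6" }
alias = { "vendor_id": "10de", "product_id": "1db6", "device_type": "type-PCI", "name": "v100" }
```

A flavor then requests the device with `openstack flavor set gpu.large --property "pci_passthrough:alias"="v100:1"`. Note the alias also needs to be set in nova.conf on the node running nova-api.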
>>>
>>> Sent from my iPhone
>>>
> 
