[openstack-dev] [Cyborg] [Nova] Cyborg traits

Jay Pipes jaypipes at gmail.com
Thu Jun 7 13:11:36 UTC 2018


Sorry for the delay in responding to this. Comments inline.

On 05/29/2018 07:33 PM, Nadathur, Sundar wrote:
> Hi all,
>     The Cyborg/Nova scheduling spec [1] details what traits will be 
> applied to the resource providers that represent devices like GPUs. Some 
> of the traits referred to vendor names. I got feedback that traits must 
> not refer to products or specific models of devices.

It's not that traits are not allowed to reference vendor names or 
identifiers. Just take a look at the entire module in os-traits that is 
dedicated to x86 CPU instruction set extensions:

https://github.com/openstack/os-traits/blob/master/os_traits/hw/cpu/x86.py

Clearly "x86" references Intel, as you know.

The primary issue has never been having vendor identifiers in traits. 
The primary issue has always been the proposed (ab)use of traits as 
string categories -- in other words, using traits as "types".

That isn't what traits are for. Traits are specifically for boolean 
values -- capabilities that a provider either has or doesn't have.

That is why there's no key/value pairing for traits. There isn't a 
value. The capability is either available or not available. What you are 
trying to do is make a key/value pair where the key is "VGPU TYPE" and 
the value is the vendor's model name or moniker. And that isn't 
appropriate for traits.
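
You can see this in the placement API itself: a provider's traits are 
replaced wholesale as a flat list of strings, with nowhere to hang a 
value. Roughly (body shape per the placement traits API; the trait 
names here are just illustrative):

    # PUT /resource_providers/{uuid}/traits -- the entire body is a
    # generation counter plus a flat list of trait strings. No values.
    body = {
        'resource_provider_generation': 1,
        'traits': [
            'HW_GPU_API_DIRECTX_V12',
            'CUSTOM_FPGA_COMPUTE',
        ],
    }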

The string "M60-0Q" doesn't refer to a single capability. Instead, that 
string is a moniker that NVIDIA uses to represent a set of capabilities 
and random requirements together:

* a max of 2 vGPU "display heads"
* a max resolution of 2560x1600
* 512M framebuffer per vGPU
* the host requires a Quadro vDWS license installed
* support for the following graphics APIs:
  * DirectX 12
  * Direct2D
  * DirectX Video Acceleration (DXVA)
  * OpenGL 4.5
  * Vulkan 1.0
* support for the following parallel programming platforms:
  * OpenCL (<= 2.1 I think?)
  * CUDA (<=4.0 I think?)

It's virtually impossible to tell what actual capabilities these 
vendor monikers represent without help from the few people at NVIDIA 
who actually know these things. That is partly because the 
documentation from NVIDIA is so poor (or completely lacking), partly 
because the installation of the various host and guest drivers is an 
entirely manual process, and partly because NVIDIA and most of the 
other hardware vendors are more interested in enabling their latest 
and greatest technology than in documenting their "old" (read: 
released <6 months ago) stuff.

> I agree. However, we need some reference to device types to enable
> matching the VM driver with the device.

Well, no, you don't need to match the device type to the VM driver. You 
need to match the host (or specific pGPU)'s supported CUDA driver 
version(s) (NVIDIA calls this "Compute Capability") with the *required 
minimum CUDA driver version for the guest*.

The solution here is to have a big hash table of vendor product name 
(vGPU type) to sets of standard traits, and have the guest specify CUDA 
driver version requirements as one or more required=HW_GPU_API_CUDA_XXX 
extra specs.

In other words, we need to break down this "vGPU type" (which even 
NVIDIA admits is nothing more than a "product profile" of a set of 
capabilities) into its respective set of standardized os-traits.

I've recommended in Sylvain's multi-vgpu-types spec that we put this 
hash table in nova/virt/vgpu_capabilities.py but if Cyborg needs to use 
this as well, we could just as easily make it a module in os-traits.
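
For concreteness, a rough sketch of what I mean (module path as 
proposed in that spec review; the trait names and the nvidia-35 
mapping below are purely illustrative, not a real profile):

    # nova/virt/vgpu_capabilities.py (proposed)
    # Map each vendor vGPU <type-id> moniker to the set of standard
    # os-traits capabilities that the "product profile" represents.
    VGPU_TYPE_TO_TRAITS = {
        'nvidia-35': {
            'HW_GPU_API_DIRECTX_V12',
            'HW_GPU_API_OPENGL_V4_5',
            'HW_GPU_API_VULKAN_V1_0',
        },
        # ... one entry per <type-id> we know about ...
    }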

This way, when the nova-compute or Cyborg worker starts up, it can query 
the sysfs mdev_supported_types bucket of randomness, take the <type-id> 
values that show up in /sys/class/mdev_bus/$device/mdev_supported_types 
and look up the actual capabilities that the <type-id> strings like 
"nvidia-35" represent.

> TL;DR We need some reference to device types, but we don't need product 
> names. I will update the spec [1] to clarify that. Rest of this email 
> clarifies why we need device types in traits, and what traits we propose 
> to include.
> 
> In general, an accelerator device is operated by two pieces of software: 
> a driver in the kernel (which may discover and handle the PF for SR-IOV  
> devices), and a driver/library in the guest (which may handle the 
> assigned VF).
> 
> The device assigned to the VM must match the driver/library packaged in 
> the VM. For this, the request must explicitly state what category of 
> devices it needs. For example, if the VM needs a GPU, it needs to say 
> whether it needs an AMD GPU or an Nvidia GPU, since it may have the 
> driver/libraries for that vendor alone.

Placement's traits and resource classes are absolutely *not* intended 
to be the vehicle by which guest *configuration details* (like 
proprietary driver setup and versioning in the guest, license 
activation, etc.) are conveyed to the guest. We already have a vehicle 
for that: the metadata API, with its user data, vendor data, and 
device metadata.

Let's limit the traits that the guest sets as required to an 
expression of which APIs the software in the VM was written against 
(e.g. what version of CUDA or OpenCL is needed, how many display heads 
are needed, what maximum resolution is needed, etc).
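
In flavor extra spec terms, that might look like the following (the 
trait:X=required syntax is what Nova already supports; the trait names 
themselves are illustrative):

    # Request one GPU plus the guest's API requirements, instead of
    # asking for a vendor product profile. Trait names illustrative.
    extra_specs = {
        'resources:CUSTOM_ACCELERATOR_GPU': '1',
        'trait:HW_GPU_API_CUDA_XXX': 'required',  # some minimum CUDA version
        'trait:HW_GPU_API_OPENGL_V4_5': 'required',
    }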

Handle license activation of proprietary drivers separately without traits.

> It may also need to state what version of CUDA is needed, if it is an
> Nvidia GPU. These aspects are necessarily vendor-specific.

Actually, no, CUDA is (mostly) standardized as an API, as is OpenCL, 
OpenACC, OpenGL, etc.

The vendor-specific stuff you are referring to is mostly about license 
activation inside the guest VM.

> Further, one driver/library version may handle multiple devices. Since a 
> new driver version may be backwards compatible, multiple driver versions 
> may manage the same device. The development/release of the 
> driver/library inside the VM should be independent of the kernel driver 
> for that device.

Agreed.

> For FPGAs, there is an additional twist as the VM may need specific 
> bitstream(s), and they match only specific device/region types. The 
> bitstream for a device from a vendor will not fit any other device from 
> the same vendor, let alone other vendors. IOW, the region type is 
> specific not just to a vendor but to a device type within the vendor. 
> So, it is essential to identify the device type.
> 
> So, the proposed set of RCs and traits are as below. As we learn more 
> about actual usages by operators, we may need to evolve this set.
> 
>   * There is a resource class per device category e.g.
>     CUSTOM_ACCELERATOR_GPU, CUSTOM_ACCELERATOR_FPGA.
>   * The resource provider that represents a device has the following traits:
>       o Vendor/Category trait: e.g. CUSTOM_GPU_AMD, CUSTOM_FPGA_XILINX.
>       o Device type trait which is a refinement of vendor/category trait
>         e.g. CUSTOM_FPGA_XILINX_VU9P.
> 
>         NOTE: This is not a product or model, at least for FPGAs.
>         Multiple products may use the same FPGA chip.
>         NOTE: The reason for having both the vendor/category and this
>         one is that a flavor may ask for either, depending on the
>         granularity desired. IOW, if one driver can handle all devices
>         from a vendor (*eye roll*), the flavor can ask for the
>         vendor/category trait alone. If there are separate drivers for
>         different device families from the same vendor, the flavor must
>         specify the trait for the device family.
>         NOTE: The equivalent trait for GPUs may be like
>         CUSTOM_GPU_NVIDIA_P90, but I'll let others decide if that is a
>         product or not.
> 
>       o For FPGAs, we have additional traits:
>           + Functionality trait: e.g. CUSTOM_FPGA_COMPUTE,
>             CUSTOM_FPGA_NETWORK, CUSTOM_FPGA_STORAGE
>           + Region type ID.  e.g. CUSTOM_FPGA_INTEL_REGION_<uuid>.
>           + Optionally, a function ID, indicating what function is
>             currently programmed in the region RP. e.g.
>             CUSTOM_FPGA_INTEL_FUNCTION_<uuid>. Not all implementations
>             may provide it. The function trait may change on
>             reprogramming, but it is not expected to be frequent.
>           + Possibly, CUSTOM_PROGRAMMABLE as a separate trait

I really don't believe you should be using traits for the different 
types of FPGA bitstreams. Use custom resource classes for all of it, 
IMHO. Traits are capabilities. What you are describing above is really 
just a consumable resource (in other words, a resource class) of a 
custom bitstream program. Use traits to represent capabilities, not 
types.
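
To make the split concrete, here is a hedged sketch of what a single 
FPGA region provider might report to placement under that model (the 
names and <uuid> placeholders are illustrative, following the 
proposal above):

    # The programmed function is a consumable: a custom resource class
    # with an inventory, claimed by each allocation.
    inventory = {
        'CUSTOM_FPGA_INTEL_REGION_<uuid>': {'total': 1},
    }
    # Traits stay boolean capabilities the provider either has or not.
    traits = [
        'CUSTOM_FPGA_COMPUTE',
        'CUSTOM_PROGRAMMABLE',
    ]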

Best,
-jay


