[openstack-dev] [Nova] [Cyborg] Updates to os-acc proposal

Eric Fried openstack at fried.cc
Tue Jul 31 17:42:10 UTC 2018


Sundar-

>   * Cyborg drivers deal with device-specific aspects, including
>     discovery/enumeration of devices and handling the Device Half of the
>     attach (preparing devices/accelerators for attach to an instance,
>     post-attach cleanup (if any) after successful attach, releasing
>     device/accelerator resources on instance termination or failed
>     attach, etc.)
>   * os-acc plugins deal with hypervisor/system/architecture-specific
>     aspects, including handling the Instance Half of the attach (e.g.
>     for libvirt with PCI, preparing the XML snippet to be included in
>     the domain XML).
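(For the sake of discussion, the split you describe might look something
like this -- all class and method names here are invented for
illustration, not taken from the actual os-acc or Cyborg code:)

```python
# Hypothetical sketch of the proposed driver/plugin split.
# Names are illustrative only.

class CyborgDriver:
    """Device Half: device-specific discovery and attach prep."""

    def discover(self):
        # Enumerate devices of the type this driver knows about.
        raise NotImplementedError

    def prepare_attach(self, device, instance_id):
        # Ready the device/accelerator for attach to the instance.
        raise NotImplementedError

    def release(self, device, instance_id):
        # Free device resources on termination or failed attach.
        raise NotImplementedError


class OsAccPlugin:
    """Instance Half: hypervisor/system/architecture-specific wiring."""

    def attach(self, device_handle, instance):
        # e.g. for libvirt with PCI, produce the XML snippet to be
        # included in the domain XML.
        raise NotImplementedError


# A trivial fake driver, just to show the shape of the contract.
class FakeT1Driver(CyborgDriver):
    def discover(self):
        return ["t1-dev-0"]
```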

This all sounds well and good, but discovery/enumeration will also be
hypervisor/system/architecture-specific. So...

> Thus, the drivers and plugins are expected to be complementary. For
> example, for 2 devices of types T1 and T2, there shall be 2 separate
> Cyborg drivers. Further, we would have separate plugins for, say,
> x86+KVM systems and Power systems. We could then have four different
> deployments -- T1 on x86+KVM, T2 on x86+KVM, T1 on Power, T2 on Power --
> by suitable combinations of the drivers and plugins.

...the discovery/enumeration code for T1 on x86+KVM (lsdev? lspci?
walking the /dev file system?) will be totally different from the
discovery/enumeration code for T1 on Power
(pypowervm.wrappers.ManagedSystem.get(adapter)).

I don't mind saying "drivers do the device side; plugins do the instance
side," but I don't see a way around the fact that both "sides" will need
to have platform-specific code.
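(To make that concrete: even for a single device type, the driver ends
up dispatching on platform. The function names and platform keys below
are invented for this sketch, and the device identifiers are made up:)

```python
# Hypothetical illustration: a single device type T1 still needs
# per-platform discovery paths inside its "device side" driver.

def discover_t1_x86_kvm():
    # On x86+KVM this might shell out to lspci or walk /dev.
    return ["0000:3b:00.0"]

def discover_t1_power():
    # On Power this would go through pypowervm wrappers instead.
    return ["power-slot-7"]

# The driver has to pick a code path per platform, so
# platform-specific code lives on the device side too.
DISCOVERY = {
    ("t1", "x86_kvm"): discover_t1_x86_kvm,
    ("t1", "power"): discover_t1_power,
}

def discover(device_type, platform):
    return DISCOVERY[(device_type, platform)]()
```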

> One secondary detail to note is that Nova compute calls os-acc per
> instance for all accelerators for that instance, not once for each
> accelerator.

You mean for getVAN()? Because AFAIK, os_vif.plug(list_of_vif_objects,
InstanceInfo) is *not* how nova uses os-vif for plugging.
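(The difference between the two calling conventions, sketched with
invented function names -- neither is the real os-vif or os-acc API:)

```python
# Hypothetical contrast of the two calling conventions under discussion.

def per_device_plug(plug, devices, instance):
    # os-vif style: nova loops and calls plug() once per VIF/device.
    for dev in devices:
        plug(dev, instance)

def per_instance_plug(plug_many, devices, instance):
    # Style described in the quoted text: one call per instance,
    # carrying the whole accelerator list.
    plug_many(list(devices), instance)
```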

Thanks,
Eric
