[Cyborg][Ironic][Nova][Neutron][TripleO][Cinder] accelerators management
zhipengh512 at gmail.com
Sat Jan 4 01:53:10 UTC 2020
Thanks for your interest in the Cyborg project :) I would like to point out
that when we initiated the project there were two specific use cases we
wanted to cover: accelerators attached locally (via PCIe or another bus
type) and accelerators attached remotely (via Ethernet or another fabric
type).
For the latter, it is clear that its life cycle is independent of the
server (like a block device managed by Cinder). For the former, however,
the life cycle is not tied to the server for all kinds of accelerators
either. For example, we already have PCIe-based AI accelerator cards and
SmartNICs that can be powered on and off while the server stays on the
whole time.
Therefore, for the reasons mentioned above, it is not a good idea to move
all of the life cycle management into Ironic. Ironic integration remains
very important for standalone usage of Cyborg with Kubernetes, Envoy (TLS
acceleration), and the like.
Hope this answers your question :)
On Sat, Jan 4, 2020 at 5:23 AM <Arkady.Kanevsky at dell.com> wrote:
> Fellow OpenStackers,
> I have been thinking about how to handle SmartNIC, GPU, and FPGA
> management across different projects within OpenStack, with Cyborg taking
> a leading role in it.
> Cyborg is an important project that addresses accelerator devices that
> are part of the server, and potentially switches and storage.
> It addresses 3 different use cases and user roles, all grouped into a
> single project.
> 1. An application user needs to program a portion of the device under
> management, like a GPU or SmartNIC, for that application's usage. Having
> a common way to do this across different device families and different
> vendors is very important. It has to be done every time a VM that needs
> the device is deployed, so it is tied to VM scheduling.
> 2. An administrator needs to program the whole device for a specific
> usage. This covers the scenario where a device can only support a single
> tenant or a single use case. It is done once during OpenStack deployment,
> but the device may need reprogramming to configure it for a different
> usage, which may or may not require a reboot of the server.
> 3. An administrator needs to set up the device for use, like burning
> specific firmware onto it. This is typically done as part of the server
> life cycle.
> The first two cases cover the application life cycle of device usage.
> The last one covers the device life cycle, independent of how the device
> is used. Managing the life cycle of devices is Ironic's responsibility.
> One cannot and should not manage the life cycle of server components
> independently: managing server devices outside of server management
> violates customer service agreements with server vendors and breaks
> server support agreements.
> Nova and Neutron get information about all devices and their capabilities
> from Ironic, which they use for scheduling. We should avoid creating a
> new project for every new server component and modifying Nova and Neutron
> for each new device. (The same also applies to Cinder and Manila if smart
> devices are used in their data/control paths on a server.)
> Finally, we want Cyborg to be usable in a standalone capacity, say for
> Kubernetes.
> Thus, I propose that Cyborg cover use cases 1 and 2, and Ironic cover use
> case 3.
> That is: move all device life-cycle code from Cyborg to Ironic, and
> concentrate Cyborg on fulfilling the first two use cases.
> Simplify the integration with Nova and Neutron for using these
> accelerators by relying on the existing Ironic mechanism.
> Create idempotent calls for use case 1 so Nova and Neutron can use them
> as part of VM deployment, ensuring that devices are programmed as the VM
> being scheduled requires.
> Create idempotent call(s) for use case 2 so TripleO can set up a device
> for single-accelerator usage of a node.
> [Propose similar model for CNI integration.]
> Let the discussion start!
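To make the "idempotent calls" proposal concrete, here is a minimal sketch of what such a device-programming call might look like. Everything here (AcceleratorInventory, program(), the bitstream IDs) is hypothetical, not a real Cyborg or Ironic API; the point is only that calling program() repeatedly with the same desired state is safe, so Nova/Neutron can retry freely during VM deployment.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Accelerator:
    """A locally attached device; programmed_function stands in for
    whatever bitstream/firmware function is currently loaded."""
    device_id: str
    programmed_function: Optional[str] = None


class AcceleratorInventory:
    """Hypothetical in-memory inventory of accelerators on one node."""

    def __init__(self) -> None:
        self._devices: dict[str, Accelerator] = {}

    def register(self, device_id: str) -> None:
        # setdefault keeps registration itself idempotent.
        self._devices.setdefault(device_id, Accelerator(device_id))

    def program(self, device_id: str, function_id: str) -> bool:
        """Idempotently program a device with the requested function.

        Returns True only if a (re)programming actually happened; if the
        device already carries the requested function, this is a no-op,
        so schedulers can safely call it on every deployment attempt.
        """
        dev = self._devices[device_id]
        if dev.programmed_function == function_id:
            return False  # already in the desired state
        dev.programmed_function = function_id  # stand-in for a real flash
        return True
```

The same shape would serve use case 2: TripleO declares the desired whole-device configuration, and repeated invocations converge on it without redundant reflashing.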
Zhipeng (Howard) Huang
OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service
Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C