[openstack-dev] [Cyborg] [Nova] Backup plan without nested RPs
Nadathur, Sundar
sundar.nadathur at intel.com
Mon Jun 4 17:49:28 UTC 2018
Hi,
Cyborg needs to create resource classes (RCs) and traits for accelerators. The
original plan was to do that with nested resource providers (RPs). To avoid
rushing the Nova developers, I had proposed that Cyborg could start by applying
the traits to the compute node RP, and accept the resulting caveats for
Rocky, until we get nested RP support. That proposal did not find many
takers, and Cyborg has essentially been in waiting mode.
Since it is June already, and there is a risk of not delivering anything
meaningful in Rocky, I am reviving my older proposal, summarized below:
* Cyborg shall create the RCs and traits as per spec
(https://review.openstack.org/#/c/554717/), both in Rocky and
beyond. Only the RPs will change post-Rocky.
* In Rocky:
o Cyborg will not create nested RPs. It shall apply the device
traits to the compute node RP.
o Cyborg will document the resulting caveat: since the traits are
applied at the compute node level, all devices in the same host
must have the same traits. In particular, we cannot have a GPU and
an FPGA, or 2 FPGAs of different types, in the same host.
o Cyborg will document that upgrades to post-Rocky releases will
require operator intervention (as described below).
* For upgrade to post-Rocky world with nested RPs:
o The operator needs to stop all running instances that use an
accelerator.
o The operator needs to run a script that removes the Cyborg
traits and the inventory for Cyborg RCs from compute node RPs.
o The operator can then perform the upgrade. The new Cyborg
agent/driver(s) shall create nested RPs and publish
inventory/traits as specified.
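To make the cleanup step concrete, here is a minimal sketch of the selection
logic such an operator script might use: given a compute node RP's traits and
inventories (as returned by the Placement API), pick out the Cyborg-owned
entries to remove. The CUSTOM_ prefixes below are illustrative assumptions,
not the actual names from the spec, and the real script would additionally
call the Placement API (or the osc-placement CLI) to apply the removals.

```python
# Hypothetical prefixes for Cyborg-owned traits and resource classes.
# The real names are defined in the Cyborg spec; these are assumptions
# for illustration only.
CYBORG_TRAIT_PREFIX = "CUSTOM_FPGA_"
CYBORG_RC_PREFIX = "CUSTOM_ACCELERATOR_"


def entries_to_strip(traits, inventories):
    """Return (traits, resource classes) the pre-upgrade script should
    remove from a compute node RP, leaving everything else intact.

    :param traits: list of trait names on the compute node RP
    :param inventories: dict mapping resource class name -> inventory record
    """
    stale_traits = [t for t in traits if t.startswith(CYBORG_TRAIT_PREFIX)]
    stale_rcs = [rc for rc in inventories if rc.startswith(CYBORG_RC_PREFIX)]
    return stale_traits, stale_rcs
```

The remaining traits would then be written back with a PUT to the RP's traits
URL, and each stale resource class's inventory deleted, before the upgraded
Cyborg agent re-publishes everything under nested RPs.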
IMHO, it is acceptable for Cyborg to do this because it is new and we
can set expectations for the (lack of) upgrade plan. The alternative is
that potentially no meaningful use cases get addressed in Rocky for Cyborg.
Please let me know what you think.
Regards,
Sundar