[nova][ptg] Summary: Resource Management Daemon

Eric Fried openstack at fried.cc
Fri May 3 04:36:19 UTC 2019


Specs:
- Base enablement: https://review.openstack.org/#/c/651130/
- Power management using CPU core P state control:
https://review.openstack.org/#/c/651024/
- Last-level cache: https://review.openstack.org/#/c/651233/

Summary:
- Represent new resources (e.g. last-level cache) which can be used for
scheduling.
- Resource Management Daemon (RMD) manages the (potentially dynamic)
assignment of these resources to VMs.

Direction:
- There shall be no direct communication between nova-compute (including
virt driver) and RMD.
- Admin/orchestration to supply "conf" [1] describing the resources.
- Nova processes this conf while updating provider trees so the
resources appear appropriately in placement (see the first sketch
after this list).
- Flavors can be designed to request the resources so they are
considered and allocated during scheduling (see the flavor sketch
after this list).
- RMD must do its work "out of band", e.g. triggered by listening for
events (recommended: libvirt events, which are local to the host, rather
than nova events) and requesting/introspecting information from
flavor/image/placement (see the listener sketch after this list).
- Things that are not resources (like p-state control) can use traits
to ensure scheduling lands on capable hosts. (There is also potential to
use forbidden aggregates [2] to restrict those hosts to only
p-state-needing VMs.)
- Delivery mechanism for RMD 'policy' artifacts: an extra spec
containing an opaque string which may represent e.g. a glance UUID, a
swift object, etc. (included in the flavor sketch after this list).
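
To make the provider-tree step concrete, here is a minimal sketch,
assuming something like the existing update_provider_tree() virt driver
flow. The conf layout, the CUSTOM_LLC resource class name and the
inventory figures are illustrative assumptions only; none of this is
settled in the specs.

# Hypothetical helper, called from the driver's update_provider_tree().
# Everything about the conf shape here is assumed for illustration.
LLC_CONF = {
    'resource_class': 'CUSTOM_LLC',  # hypothetical custom resource class
    'total': 16,                     # e.g. 16 last-level cache ways
}

def add_conf_inventory(provider_tree, nodename, conf=LLC_CONF):
    """Merge conf-described inventory into the compute node provider."""
    inventory = provider_tree.data(nodename).inventory
    inventory[conf['resource_class']] = {
        'total': conf['total'],
        'reserved': 0,
        'min_unit': 1,
        'max_unit': conf['total'],
        'step_size': 1,
        'allocation_ratio': 1.0,
    }
    provider_tree.update_inventory(nodename, inventory)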
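
Next, a sketch of a flavor requesting such resources. The "resources:"
and "trait:" extra spec prefixes are existing scheduling syntax; the
CUSTOM_LLC / CUSTOM_P_STATE names and the key carrying the opaque
policy string are assumptions.

# Hypothetical flavor extra specs; names are illustrative only.
extra_specs = {
    # Ask placement for 4 units of the custom last-level-cache resource.
    'resources:CUSTOM_LLC': '4',
    # Land only on hosts advertising p-state control capability.
    'trait:CUSTOM_P_STATE': 'required',
    # Opaque string RMD resolves out of band (e.g. a glance UUID).
    'rmd:policy': '<opaque-policy-reference>',
}
# Roughly equivalent CLI (flavor name and property keys hypothetical):
#   openstack flavor set myflavor \
#       --property resources:CUSTOM_LLC=4 \
#       --property trait:CUSTOM_P_STATE=required \
#       --property rmd:policy=<opaque-policy-reference>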
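
Finally, a minimal sketch of the out-of-band listener pattern using
local libvirt lifecycle events (libvirt-python). What RMD actually does
per event (introspecting flavor/image/placement and programming the
hardware) is stubbed out here.

import libvirt

def lifecycle_cb(conn, dom, event, detail, opaque):
    # Newly started domains are candidates for (re)applying RMD policy.
    if event == libvirt.VIR_DOMAIN_EVENT_STARTED:
        apply_rmd_policy(dom.UUIDString())

def apply_rmd_policy(instance_uuid):
    # Placeholder for RMD's real work: look up the policy reference,
    # introspect flavor/image/placement, assign cache ways, etc.
    pass

def main():
    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()

if __name__ == '__main__':
    main()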

efried

[1] There has been a recurring theme of needing "some kind of config" -
not necessarily nova.conf or any oslo.config - that can describe:
- Resource provider name/uuid/parentage, be it an existing provider or a
new nested provider;
- Inventory (e.g. last-level cache in this case);
- Physical resource(s) to which the inventory corresponds (e.g. "cache
ways" in this case);
- Traits, aggregates, other?
As of this writing, no specifics have been decided; it is not even
settled whether it could be the same file for some/all of the specs
where this issue arose. The sketch below is for illustration only.
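
A conf entry would need to carry roughly the following information.
The Python-literal form, field names and values here are all
assumptions, used only because some notation is needed:

# Entirely hypothetical; field names, values and format are assumptions.
LLC_PROVIDER_CONF = {
    'provider': {
        'name': 'compute-node-1',   # existing provider, or...
        'parent': None,             # ...parentage for a new nested provider
    },
    'inventory': {
        'CUSTOM_LLC': {'total': 16},          # e.g. cache ways on this host
    },
    'physical_resources': 'cache ways 0-15',  # what the inventory maps to
    'traits': ['CUSTOM_P_STATE'],
    'aggregates': [],
}
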
[2]
http://lists.openstack.org/pipermail/openstack-discuss/2019-May/005803.html


