[nova] The pros/cons for libvirt persistent assignment and DB persistent assignment.

Eric Fried openstack at fried.cc
Wed Aug 21 19:51:48 UTC 2019


Alex-

Thanks for writing this up.

> #1 Without Nova DB persistence for the assignment info; depend on the
> hypervisor to persist it.

I liked the "no persistence" option in theory, but it unfortunately
turned out to be too brittle when it came to the corner cases.

> #2 With Nova DB persistence, using a virt-driver-specific blob to store
> virt-driver-specific info.
> 
>    The idea is to persist the assignment for the instance into the DB.
> The resource tracker gets available resources from the virt driver and
> calculates on the fly based on those available resources and the
> assigned resources from the instance DB. The new field
> 'instance.resources' is designed to support virt-driver-specific
> metadata, hiding the virt driver and platform details from the RT.
> https://etherpad.openstack.org/p/vpmems-non-virt-driver-specific-new

I just took a closer look at this, and I really like it.

Persisting local resource information with the Instance and
MigrationContext objects ensures we don't lose it in weird corner cases,
regardless of a specific hypervisor's "persistence model" (e.g. domain
XML for libvirt).
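As a rough sketch of that on-the-fly calculation (every name below is my
own illustration, not Nova's actual object model): the RT never needs to
understand the records, it just subtracts assigned from available.

```python
# Illustrative sketch only -- "Resource" and the dict shapes are invented
# here, not Nova's real objects. The point: the resource tracker can
# compute free resources as (reported by virt driver) minus (assigned in
# instance DB records), treating each record as opaque.
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    provider: str        # resource provider this belongs to
    resource_class: str  # e.g. "CUSTOM_PMEM_NAMESPACE"
    identifier: str      # virt-driver-specific ID, opaque to the RT

def free_resources(available, instances):
    """Free = available (from virt driver) - assigned (from instance DB)."""
    assigned = {r for inst in instances for r in inst.get("resources", [])}
    return [r for r in available if r not in assigned]

avail = [Resource("cn1", "CUSTOM_PMEM_NAMESPACE", "ns0"),
         Resource("cn1", "CUSTOM_PMEM_NAMESPACE", "ns1")]
insts = [{"resources": [Resource("cn1", "CUSTOM_PMEM_NAMESPACE", "ns0")]}]
print([r.identifier for r in free_resources(avail, insts)])  # ['ns1']
```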

MigrationContext is already being used for this old_*/new_* concept, but
the existing fields are hypervisor-specific (numa and pci).

Storing this information in a generic, opaque-outside-of-virt way means
we're not constantly bolting hypervisor-specific fields onto what
*should* be non-hypervisor-specific objects.
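To illustrate what I mean (field and method names below are made up for
the sketch, not the real MigrationContext): a single generic pair of
resource lists can stand in for any future per-feature old_*/new_* pairs.

```python
# Hypothetical shape, not Nova's actual MigrationContext: one generic,
# opaque-outside-of-virt old_resources/new_resources pair instead of
# adding old_<thing>/new_<thing> fields per hypervisor feature.
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class MigrationContext:
    instance_uuid: str
    # Existing pattern: old_numa_topology/new_numa_topology,
    # old_pci_devices/new_pci_devices, ... one pair per feature.
    # Generic alternative: one opaque list per side.
    old_resources: List[Dict[str, Any]] = field(default_factory=list)
    new_resources: List[Dict[str, Any]] = field(default_factory=list)

    def resources(self, revert: bool = False):
        """Select which side applies, e.g. when reverting a resize."""
        return self.old_resources if revert else self.new_resources

ctx = MigrationContext("inst-1",
                       old_resources=[{"id": "ns0"}],
                       new_resources=[{"id": "ns1"}])
print(ctx.resources(revert=True))  # [{'id': 'ns0'}]
```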

As you've stated in the etherpad, this framework sets us up nicely to
start transitioning existing PCI/NUMA-isms over to a Placement-driven
model in the near future.

Having the virt driver report provider tree (placement-specific) and
"real" (hypervisor-specific) resource information at the same time makes
all kinds of sense.
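A minimal sketch of that dual report (the method name and return shape
here are assumptions for illustration, not the real virt driver
interface):

```python
# Invented driver method, not Nova's actual driver API: one call yields
# both the placement-facing inventory and the concrete,
# hypervisor-specific resource records the resource tracker needs.
class SketchDriver:
    def report_resources(self, nodename):
        inventory = {  # provider-tree / placement view
            "CUSTOM_PMEM_NAMESPACE": {"total": 2, "reserved": 0},
        }
        concrete = [  # "real" hypervisor-specific view, opaque to the RT
            {"resource_class": "CUSTOM_PMEM_NAMESPACE", "id": "ns0",
             "metadata": {"devpath": "/dev/dax0.0"}},
            {"resource_class": "CUSTOM_PMEM_NAMESPACE", "id": "ns1",
             "metadata": {"devpath": "/dev/dax0.1"}},
        ]
        return inventory, concrete

inv, res = SketchDriver().report_resources("cn1")
print(inv["CUSTOM_PMEM_NAMESPACE"]["total"], len(res))  # 2 2
```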

So, quite aside from solving the stated race condition and enabling
vpmem, all of this is excellent movement toward the "generic device
(resource) management" we've been talking about for years.

Let's make it so.

efried


