Alex Williamson <alex.williamson@redhat.com> wrote on Wed, Jul 15, 2020 at 12:16 AM:
On Tue, 14 Jul 2020 11:21:29 +0100 Daniel P. Berrangé <berrange@redhat.com> wrote:
On Tue, Jul 14, 2020 at 07:29:57AM +0800, Yan Zhao wrote:
hi folks,
we are defining a device migration compatibility interface that helps upper layer stack like openstack/ovirt/libvirt to check if two devices are live migration compatible. The "devices" here could be MDEVs, physical devices, or a hybrid of the two. e.g. we could use it to check whether
- a src MDEV can migrate to a target MDEV,
- a src VF in SRIOV can migrate to a target VF in SRIOV,
- a src MDEV can migrate to a target VF in SRIOV. (e.g. the SIOV/SRIOV backward compatibility case)
The upper layer stack could use this interface as the last step to check if one device is able to migrate to another device before triggering a real live migration procedure. We are not sure whether this interface is of value or helpful to you, so please don't hesitate to drop your valuable comments.
(1) interface definition
The interface is defined as follows:
              __    userspace
               /\               \
              /                  \write
             / read               \
    ________/__________        ___\|/_____________
  | migration_version |      | migration_version |-->check migration
  ---------------------      ---------------------   compatibility
       device A                   device B
a device attribute named migration_version is defined under each device's sysfs node, e.g. /sys/bus/pci/devices/0000\:00\:02.0/$mdev_UUID/migration_version. Userspace tools read the migration_version as a string from the source device and write it to the migration_version sysfs attribute of the target device.
The userspace should treat ANY of the below conditions as the two devices not being compatible:
- either of the two devices does not have a migration_version attribute
- an error occurs when reading the migration_version attribute of one device
- an error occurs when writing the migration_version string of one device to the migration_version attribute of the other device
(a minimal sketch of this check is shown below)
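For illustration only, a minimal userspace sketch of this read/write check could look like the below; devices_compatible(), the buffer size and the command-line wrapper are all invented for this example, not part of the proposal:

#include <fcntl.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Returns 1 if the target device accepts the source's migration_version
 * string, 0 otherwise.  A missing attribute, a read error or a write
 * error is treated as "not compatible", as described above. */
static int devices_compatible(const char *src_attr, const char *dst_attr)
{
        char version[256];
        ssize_t len, ret;
        int fd;

        fd = open(src_attr, O_RDONLY);
        if (fd < 0)
                return 0;               /* no attribute / no read access */
        len = read(fd, version, sizeof(version));
        close(fd);
        if (len <= 0)
                return 0;               /* read error */

        fd = open(dst_attr, O_WRONLY);
        if (fd < 0)
                return 0;               /* no attribute / no write access */
        ret = write(fd, version, len);
        close(fd);

        return ret == len;              /* write error => not compatible */
}

int main(int argc, char **argv)
{
        if (argc != 3) {
                fprintf(stderr, "usage: %s <src migration_version path> <dst migration_version path>\n",
                        argv[0]);
                return 1;
        }
        printf("%s\n", devices_compatible(argv[1], argv[2]) ?
               "compatible" : "not compatible");
        return 0;
}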
The string read from the migration_version attribute is defined by the device vendor driver and is completely opaque to the userspace. For an Intel vGPU, the string format can be defined like "parent device PCI ID" + "version of gvt driver" + "mdev type" + "aggregator count".
For an NVMe VF connecting to remote storage, it could be "PCI ID" + "driver version" + "configured remote storage URL".
If the "configured remote storage URL" is something configuration setting before the usage, then it isn't something we need for migration compatible check. Openstack only needs to know the target device's driver and hardware compatible for migration, then the scheduler will choose a host which such device, and then Openstack will pre-configure the target host and target device before the migration, then openstack will configure the correct remote storage URL to the device. If we want, we can do a sanity check after the live migration with the os.
For a QAT VF, it may be "PCI ID" + "driver version" + "supported encryption set".
(to avoid namespace conflicts between vendors, we may prefix a driver name to each migration_version string, e.g. i915-v1-8086-591d-i915-GVTg_V5_8-1)
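On the driver side, a purely hypothetical sketch might look like the following; struct my_vdev, my_vdev_format_version(), MY_DRV_NAME/MY_DRV_VERSION and the trivial "identical string only" accept policy are all invented for illustration and not taken from any real driver:

#include <linux/device.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/sysfs.h>
#include <linux/types.h>

#define MY_DRV_NAME    "i915"           /* placeholder */
#define MY_DRV_VERSION 1u               /* placeholder */

/* placeholder per-device state; a real driver has its own */
struct my_vdev {
        u16 parent_vendor_id;
        u16 parent_device_id;
        const char *mdev_type_name;
        unsigned int aggregator_count;
};

static int my_vdev_format_version(struct my_vdev *vdev, char *buf, size_t size)
{
        /* e.g. "i915-v1-8086-591d-i915-GVTg_V5_8-1" */
        return scnprintf(buf, size, "%s-v%u-%04x-%04x-%s-%u",
                         MY_DRV_NAME, MY_DRV_VERSION,
                         vdev->parent_vendor_id, vdev->parent_device_id,
                         vdev->mdev_type_name, vdev->aggregator_count);
}

static ssize_t migration_version_show(struct device *dev,
                                      struct device_attribute *attr,
                                      char *buf)
{
        struct my_vdev *vdev = dev_get_drvdata(dev);
        char self[128];

        my_vdev_format_version(vdev, self, sizeof(self));
        return sprintf(buf, "%s\n", self);
}

static ssize_t migration_version_store(struct device *dev,
                                       struct device_attribute *attr,
                                       const char *buf, size_t count)
{
        struct my_vdev *vdev = dev_get_drvdata(dev);
        char self[128];
        int len = my_vdev_format_version(vdev, self, sizeof(self));

        /*
         * Simplest possible policy: only accept a string identical to
         * our own.  A real vendor driver would parse the fields here
         * and accept a wider (e.g. backward compatible) range.
         */
        if (count < (size_t)len || strncmp(buf, self, len))
                return -EINVAL;

        return count;
}
static DEVICE_ATTR_RW(migration_version);  /* added to the device's sysfs attribute group */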
It's very strange to define it as opaque and then proceed to describe the contents of that opaque string. The point is that its contents are defined by the vendor driver to describe the device, driver version, and possibly metadata about the configuration of the device. One instance of a device might generate a different string from another. The string that a device produces is not necessarily the only string the vendor driver will accept, for example the driver might support backwards compatible migrations.
(2) background
The reason we hope the migration_version string is opaque to the userspace is that it is hard to generalize standard comparison fields and comparison methods for different devices from different vendors. Though userspace could still do a simple string compare to check if two devices are compatible, and the result would be correct, that is too limited because it excludes possible candidates whose migration_version strings happen not to be equal. e.g. an MDEV with mdev_type_1 and aggregator count 3 is probably compatible with another MDEV with mdev_type_3 and aggregator count 1, even though their migration_version strings are not equal (assuming mdev_type_3 provides three times the resources of mdev_type_1).
Besides that, driver version and configured resources are also elements that need to be taken into account.
So we hope to leave the freedom to the vendor driver and let it make the final decision, through a simple read on the source side and a write-for-test on the target side.
We then think the device compatibility issues for live migration with assigned devices can be divided into two steps:
a. management tools filter out possible migration target devices. Tags could be created according to info from the product specification. We think openstack/ovirt may have vendor proprietary components to create those customized tags for each product from each vendor.
For an Intel vGPU, with a vGPU (an MDEV device) on the source side, the tags to search for a target vGPU are like:
- a tag for compatible parent PCI IDs,
- a tag for a range of gvt driver versions,
- a tag for a range of mdev type + aggregator count
For an NVMe VF, the tags to search for a target VF may be like:
- a tag for compatible PCI IDs,
- a tag for a range of driver versions,
- a tag for the URL of the configured remote storage.
I interpret this as hand waving, ie. the first step is for management tools to make a good guess :-\ We don't seem to be willing to say that a given mdev type can only migrate to a device with that same type. There's this aggregation discussion happening separately where a base mdev type might be created or later configured to be equivalent to a different type. The vfio migration API we've defined is also not limited to mdev devices, for example we could create vendor specific quirks or hooks to provide migration support for a physical PF/VF device. Within the realm of possibility then is that we could migrate between a physical device and an mdev device, which are simply different degrees of creating a virtualization layer in front of the device.
Requiring management application developers to figure out this possible compatibility based on product specs is really unrealistic. Product specs are typically as clear as mud, and together with the suggestion that we consider different rules for different types of devices, this adds up to a huge amount of complexity. This isn't something app developers should have to spend their time figuring out.
Agreed.
The suggestion that we make use of vendor proprietary helper components is totally unacceptable. We need to be able to build a solution that works with exclusively an open source software stack.
I'm surprised to see this as well, but I'm not sure if Yan was really suggesting proprietary software so much as just vendor specific knowledge.
IMHO there needs to be a mechanism for the kernel to report via sysfs what versions are supported on a given device. This puts the job of reporting compatible versions directly under the responsibility of the vendor who writes the kernel driver for it. They are the ones with the best knowledge of the hardware they've built and the rules around its compatibility.
The version string discussed previously is the version string that represents a given device, possibly including driver information, configuration, etc. I think what you're asking for here is an enumeration of every possible version string that a given device could accept as an incoming migration stream. If we consider the string as opaque, that means the vendor driver needs to generate a separate string for every possible version it could accept, for every possible configuration option. That potentially becomes an excessive amount of data to either generate or manage.
There are two kinds of configuration options that are not needed for the migration check:
* A configuration option that makes the device different. For example (could be a wrong example, not matching any real hardware), a GPU supports 1024*768 resolution and 800*600 resolution vGPUs; OpenStack will separate these two kinds of vGPUs into two separate resource pools, so the scheduler already ensures we get a host with such vGPU support. So it needn't be encoded into the 'version string' discussed here.
* A configuration option that is set before usage, just like the 'configured remote storage URL' above. It needn't be encoded into the 'version string' either, since OpenStack will configure the correct value before the migration.
Am I overestimating how vendors intend to use the version string?
We'd also need to consider devices that we could create, for instance providing the same interface enumeration prior to creating an mdev device to have a confidence level that the new device would be a valid target.
We defined the string as opaque to allow vendor flexibility and because defining a common format is hard. Do we need to revisit this part of the discussion to define the version string as non-opaque with parsing rules, probably with separate incoming vs outgoing interfaces? Thanks,
Alex