[ovirt-devel] Re: device compatibility interface for live migration with assigned devices
Jason Wang
jasowang at redhat.com
Wed Aug 19 09:28:38 UTC 2020
On 2020/8/19 4:13 PM, Yan Zhao wrote:
> On Wed, Aug 19, 2020 at 03:39:50PM +0800, Jason Wang wrote:
>> On 2020/8/19 2:59 PM, Yan Zhao wrote:
>>> On Wed, Aug 19, 2020 at 02:57:34PM +0800, Jason Wang wrote:
>>>> On 2020/8/19 11:30 AM, Yan Zhao wrote:
>>>>> hi All,
>>>>> could we decide that sysfs is the interface that every VFIO vendor driver
>>>>> needs to provide in order to support vfio live migration, and that otherwise the
>>>>> userspace management tool would not list the device in the compatible
>>>>> list?
>>>>>
>>>>> if that's true, let's move on to standardizing the sysfs interface.
>>>>> (1) content
>>>>> common part: (must)
>>>>> - software_version: (in major.minor.bugfix scheme)
>>>> This cannot work for devices whose features can be negotiated/advertised
>>>> independently (e.g. virtio devices).
>>>>
>>> sorry, I don't understand. Why would virtio devices need to use the vfio interface?
>>
>> I don't see any reason that virtio devices can't be used by VFIO. Do you?
>>
>> Actually, virtio devices have been used by VFIO for many years:
>>
>> - passing through a hardware virtio device to userspace (VM) drivers
>> - using the virtio PMD inside the guest
>>
> So, how is it different from passing through physical hardware via VFIO?
The difference is that in the guest, the device could be either real hardware
or an emulated one.
> even though the features are negotiated dynamically, could you explain
> why that would cause software_version not to work?
Virtio device 1 supports features A, B, C.
Virtio device 2 supports features B, C, D.
So you can't migrate a guest from device 1 to device 2, and it's
impossible to model such feature sets with a version number.
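To make that concrete (a rough sketch only; FEATURE_A..FEATURE_D below are
made-up feature bits, not real VIRTIO_F_* definitions): the check that matters
is whether the destination still offers every feature the guest negotiated on
the source, i.e. a bitmask containment test, which a single ordered version
number can't express.

FEATURE_A, FEATURE_B, FEATURE_C, FEATURE_D = (1 << 0), (1 << 1), (1 << 2), (1 << 3)

dev1_features = FEATURE_A | FEATURE_B | FEATURE_C
dev2_features = FEATURE_B | FEATURE_C | FEATURE_D

def can_migrate(src_negotiated, dst_offered):
    # every feature already negotiated by the guest on the source must
    # also be offered by the destination device
    return (src_negotiated & ~dst_offered) == 0

print(can_migrate(dev1_features, dev2_features))          # False: dst lacks A
print(can_migrate(FEATURE_B | FEATURE_C, dev2_features))  # True: negotiated subset is offered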
>
>
>>> I think this thread is discussing vfio-related devices.
>>>
>>>>> - device_api: vfio-pci or vfio-ccw ...
>>>>> - type: mdev type for mdev device or
>>>>> a signature for physical device which is a counterpart for
>>>>> mdev type.
>>>>>
>>>>> device api specific part: (must)
>>>>> - pci id: pci id of mdev parent device or pci id of physical pci
>>>>> device (device_api is vfio-pci)
>>>> So this assumes a PCI device, which is probably not true.
>>>>
>>> for a device_api of vfio-pci, why is it not true?
>>>
>>> for vfio-ccw, it's subchannel_type.
>>
>> Ok, but having two different attributes for the same file is not a good idea.
>> How does mgmt know there will be a 3rd type?
> that's why some attributes need to be common. e.g.
> device_api: it's common because mgmt needs to know whether it's a pci device or a
> ccw device, and the api type is already defined in vfio.h.
> (This field was agreed on, and actually suggested, by Alex in a previous mail.)
> type: mdev_type for mdev. if mgmt does not understand it, it would not
> be able to create a compatible mdev device.
> software_version: mgmt can compare the major and minor if it understands
> this field.
I think it would be helpful if you could describe how mgmt is expected to
work, step by step, with the proposed sysfs API. This can help people
understand.
Thanks for the patience. Since sysfs is uABI, once accepted, we need to
support it forever. That's why we need to be careful.
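For instance, is the flow expected to be roughly the following (just my guess
against the proposal-1 layout quoted below, ignoring that multiple "compatible"
entries are allowed; the names and comparison rules here are exactly the parts
that need to be spelled out)?

import os

def read_attrs(path):
    # read every attribute file under a migration/self or migration/compatible
    # directory into a dict of strings
    attrs = {}
    for name in os.listdir(path):
        with open(os.path.join(path, name)) as f:
            attrs[name] = f.read().strip()
    return attrs

def is_compatible(src_dev, dst_dev):
    src = read_attrs(os.path.join(src_dev, "migration", "self"))
    dst = read_attrs(os.path.join(dst_dev, "migration", "compatible"))
    # common part: device_api and type must match exactly (?)
    if src.get("device_api") != dst.get("device_api") or src.get("type") != dst.get("type"):
        return False
    # software_version: compare only major.minor (?)
    if src["software_version"].split(".")[:2] != dst["software_version"].split(".")[:2]:
        return False
    # vendor specific part: is plain string equality enough, or does mgmt
    # need vendor knowledge to interpret ranges like {val1:int:1,2,4,8}?
    for name, value in src.items():
        if name not in ("device_api", "type", "software_version"):
            if dst.get(name) != value:
                return False
    return True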
>>
>>>>> - subchannel_type (device_api is vfio-ccw)
>>>>> vendor driver specific part: (optional)
>>>>> - aggregator
>>>>> - chpid_type
>>>>> - remote_url
>>>> For "remote_url", I just wonder if it's better to integrate or reuse the
>>>> existing NVME management interface instead of duplicating it here. Otherwise
>>>> it could be a burden for mgmt to learn. E.g. vendor A may use "remote_url"
>>>> but vendor B may use a different attribute.
>>>>
>>> it's vendor driver specific.
>>> vendor specific attributes are inevitable, and that's why we are
>>> discussing a way of standardizing them here.
>>
>> Well, then you will end up with a very long list to discuss. E.g. for
>> networking devices, you will have "mac", "v(x)lan" and a lot of others.
>>
>> Note that "remote_url" is not vendor specific but NVME (class/subsystem)
>> specific.
>>
> yes, it's just NVMe specific. I added it as an example to show what
> vendor specific means.
> if an attribute is used by all vendors, then it's not vendor specific,
> it's already a common attribute, right?
It's common, but the issue is naming and mgmt overhead. Unless you
have a unified API per class (NVME, ethernet, etc.), you can't prevent a
vendor from using another name instead of "remote_url".
>
>> The point is that if a vendor/class specific part is unavoidable, why not
>> make all of the attributes vendor specific?
>>
> some parts need to be common, as I listed above.
This is hard, unless VFIO knows the type of device (e.g. that it's an NVME or
networking device).
>
>>> our goal is that mgmt can use it without understanding the meaning of vendor
>>> specific attributes.
>>
>> I'm not sure this is the correct design of uAPI. Is there something similar
>> in the existing uAPIs?
>>
>> And it might be hard to make it work for virtio devices.
>>
>>
>>>>> NOTE: vendors are free to add attributes in this part, with the
>>>>> restriction that such an attribute must also be configurable under the same
>>>>> name in sysfs. e.g.
>>>> Sysfs works well for common attributes belonging to a class, but I'm not sure
>>>> it can work well for device/vendor specific attributes. Does this mean mgmt
>>>> needs to iterate over all the attributes in both src and dst?
>>>>
>>> no. just the attributes under the migration directory.
>>>
>>>>> for aggregator, there must be a sysfs attribute in the device node, e.g.
>>>>> /sys/devices/pci0000:00/0000:00:02.0/882cc4da-dede-11e7-9180-078a62063ab1/intel_vgpu/aggregator,
>>>>> so that the userspace tool is able to configure the target device
>>>>> according to the source device's aggregator attribute.
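(I read this as: mgmt simply copies the value it read from the source device's
attribute into the same-named attribute of the newly created target device,
something like the hypothetical helper below; the path and attribute name are
only illustrative.)

def copy_vendor_attr(src_dev, dst_dev, name="intel_vgpu/aggregator"):
    # read the vendor specific attribute from the source device node and
    # write the same value to the target device's attribute of the same name
    with open(f"{src_dev}/{name}") as f:
        value = f.read().strip()
    with open(f"{dst_dev}/{name}", "w") as f:
        f.write(value)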
>>>>>
>>>>>
>>>>> (2) where and structure
>>>>> proposal 1:
>>>>> |- [path to device]
>>>>> |--- migration
>>>>> | |--- self
>>>>> | | |-software_version
>>>>> | | |-device_api
>>>>> | | |-type
>>>>> | | |-[pci_id or subchannel_type]
>>>>> | | |-<aggregator or chpid_type>
>>>>> | |--- compatible
>>>>> | | |-software_version
>>>>> | | |-device_api
>>>>> | | |-type
>>>>> | | |-[pci_id or subchannel_type]
>>>>> | | |-<aggregator or chpid_type>
>>>>> multiple compatible entries are allowed.
>>>>> attributes should be ASCII text files, preferably with only one value
>>>>> per file.
>>>>>
>>>>>
>>>>> proposal 2: use bin_attribute.
>>>>> |- [path to device]
>>>>> |--- migration
>>>>> | |--- self
>>>>> | |--- compatible
>>>>>
>>>>> so we can continue to use a multiline format. e.g.
>>>>> cat compatible
>>>>> software_version=0.1.0
>>>>> device_api=vfio_pci
>>>>> type=i915-GVTg_V5_{val1:int:1,2,4,8}
>>>>> pci_id=80865963
>>>>> aggregator={val1}/2
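(Side note: if mgmt has to consume this multiline blob, I guess the parsing and
matching would look roughly like the sketch below. The handling of templated
values such as {val1:int:1,2,4,8} is only my reading of the example, not a
defined syntax, and cross-references like aggregator={val1}/2 are not handled.)

import re

def parse_compatible(text):
    # split the "key=value" lines of the compatible blob into a dict
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition("=")
        fields[key.strip()] = value.strip()
    return fields

def match_templated(template, value):
    # e.g. "i915-GVTg_V5_{val1:int:1,2,4,8}" should match "i915-GVTg_V5_4"
    m = re.fullmatch(r"(.*)\{val\d+:int:([\d,]+)\}(.*)", template)
    if not m:
        return template == value
    prefix, allowed, suffix = m.group(1), m.group(2).split(","), m.group(3)
    if not (value.startswith(prefix) and value.endswith(suffix)):
        return False
    return value[len(prefix):len(value) - len(suffix)] in allowed

compat = parse_compatible("""\
software_version=0.1.0
device_api=vfio_pci
type=i915-GVTg_V5_{val1:int:1,2,4,8}
pci_id=80865963
aggregator={val1}/2
""")
print(match_templated(compat["type"], "i915-GVTg_V5_4"))   # True
print(match_templated(compat["type"], "i915-GVTg_V5_3"))   # False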
>>>> So basically two questions:
>>>>
>>>> - how hard is it to standardize a sysfs API for dealing with the compatibility check (to
>>>> make it work for most types of devices)
>>> sorry, I just know we are in the process of standardizing it :)
>>
>> It's not easy. As I said, the current design can't work for virtio devices
>> and it's not hard to find other examples. I remember some Intel devices have
>> bitmask based capability registers.
>>
> some Intel devices have bitmask based capability registers.
> so what?
You should at least make the proposed API work for your (Intel) own
devices.
> we have defined pci_id to identify the devices.
> even if two different devices have equal PCI IDs, we still allow them to
> add vendor specific fields, e.g.
> for QAT, they can add alg_set to identify hardware-supported algorithms.
Well, the point is to make sure the API does not work only for some specific
devices. If we agree on this, we need to try to find what is missing instead.
>
>>>> - how hard is it for mgmt to learn vendor specific attributes (vs an
>>>> existing management API)
>>> what is the existing management API?
>>
>> It depends on the type of device. E.g. for NVME, we already have one
>> (/sys/kernel/config/nvme)?
>>
> if the device is bound to vfio or vfio-mdev, I believe this interface
> is not there.
So you want to duplicate some of the existing NVME APIs?
Thanks
>
>
> Thanks
> Yan
>