device compatibility interface for live migration with assigned devices
Jason Wang
jasowang at redhat.com
Wed Aug 19 02:54:07 UTC 2020
On 2020/8/18 5:36 PM, Cornelia Huck wrote:
> On Tue, 18 Aug 2020 10:16:28 +0100
> Daniel P. Berrangé <berrange at redhat.com> wrote:
>
>> On Tue, Aug 18, 2020 at 05:01:51PM +0800, Jason Wang wrote:
>>> On 2020/8/18 4:55 PM, Daniel P. Berrangé wrote:
>>>
>>> On Tue, Aug 18, 2020 at 11:24:30AM +0800, Jason Wang wrote:
>>>
>>> On 2020/8/14 1:16 PM, Yan Zhao wrote:
>>>
>>> On Thu, Aug 13, 2020 at 12:24:50PM +0800, Jason Wang wrote:
>>>
>>> On 2020/8/10 3:46 PM, Yan Zhao wrote:
>>> we can actually also retrieve the same information through sysfs, e.g.:
>>>
>>> |- [path to device]
>>> |--- migration
>>> | |--- self
>>> | | |---device_api
>>> | | |---mdev_type
>>> | | |---software_version
>>> | | |---device_id
>>> | | |---aggregator
>>> | |--- compatible
>>> | | |---device_api
>>> | | |---mdev_type
>>> | | |---software_version
>>> | | |---device_id
>>> | | |---aggregator
>>>
>>>
>>> Yes but:
>>>
>>> - You need one file per attribute (one syscall for one attribute)
>>> - Attribute is coupled with kobject
> Is that really that bad? You have the device with an embedded kobject
> anyway, and you can just put things into an attribute group?
Yes, but all of this could be done via devlink (netlink) as well, with low
overhead.
>
> [Also, I think that self/compatible split in the example makes things
> needlessly complex. Shouldn't semantic versioning and matching already
> cover nearly everything?
That's my question as well. E.g. for virtio, versioning may not even
work, since some features are negotiated independently:
Source features: A, B, C
Dest features: A, B, C, E
We just need to make sure the dest features are a superset of the
source features, and then all is set.
> I would expect very few cases that are more
> complex than that. Maybe the aggregation stuff, but I don't think we
> need that self/compatible split for that, either.]
>
>>> All of above seems unnecessary.
>>>
>>> Another point, as we discussed in another thread, it's really hard to make
>>> sure the above API works for all types of devices and frameworks. So having a
>>> vendor-specific API looks much better.
>>>
>>> From the POV of userspace mgmt apps doing device compat checking / migration,
>>> we certainly do NOT want to use different vendor specific APIs. We want to
>>> have an API that can be used / controlled in a standard manner across vendors.
>>>
>>> Yes, but it could be hard. E.g. vDPA will choose to use devlink (there's a
>>> long debate on sysfs vs devlink). So if we go with sysfs, at least two
>>> APIs need to be supported ...
>> NB, I was not questioning devlink vs sysfs directly. If devlink is related
>> to netlink, I can't say I'm enthusiastic, as IMHO sysfs is easier to deal
>> with. I don't know enough about devlink to have much of an opinion though.
>> The key point was that I don't want the userspace APIs we need to deal with
>> to be vendor specific.
> From what I've seen of devlink, it seems quite nice; but I understand
> why sysfs might be easier to deal with (especially as there's likely
> already a lot of code using it.)
>
> I understand that some users would like devlink because it is already
> widely used for network drivers (and some others), but I don't think
> the majority of devices used with vfio are network (although certainly
> a lot of them are.)
Note that while devlink may be popular only for network devices,
netlink itself is widely used by a lot of subsystems (e.g. SCSI).
Thanks
>
>> What I care about is that we have a *standard* userspace API for performing
>> device compatibility checking / state migration, for use by QEMU/libvirt/
>> OpenStack, such that we can write code without countless vendor specific
>> code paths.
>>
>> If there is vendor specific stuff on the side, that's fine as we can ignore
>> that, but the core functionality for device compat / migration needs to be
>> standardized.
> To summarize:
> - choose one of sysfs or devlink
> - have a common interface, with a standardized way to add
> vendor-specific attributes
> ?