[openstack-dev][PCI passthrough] How to use the PCI passthrough feature correctly? And is this a BUG in update_devices_from_hypervisor_resources?

Simon Jones batmanustc at gmail.com
Wed Mar 1 07:20:51 UTC 2023


BTW, this link (
https://docs.openstack.org/neutron/latest/admin/ovn/smartnic_dpu.html) says
I SHOULD add "remote_managed" to /etc/nova/nova.conf. Is that WRONG?

----
Simon Jones


Simon Jones <batmanustc at gmail.com> wrote on Wed, Mar 1, 2023 at 14:51:

> Hi,
>
> 1. I tried the 2nd method, which removes the "remote_managed" tag from
> /etc/nova/nova.conf, but got an ERROR while creating a VM in the compute
> node's nova-compute service. The detailed log is in the LOG-1 section below.
> I think it's because the hypervisor has no neutron agent: since I use a DPU,
> the neutron agent (which is ovn-controller) is on the DPU. Is that right?
>
> 2. So I want to try the 1st method from the email, which is to use
> vnic-type=direct. BUT HOW DO I USE IT? IS THERE ANY DOCUMENT?
>
> THANKS.
>
> LOG-1, the compute node's nova-compute.log:
>
>> ```
>> 2023-03-01 14:24:02.631 504488 DEBUG oslo_concurrency.processutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Running cmd
>> (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824
>> --cpu=30 -- env LC_ALL=C LANG=C qemu-img info
>> /var/lib/nova/instances/a2603eeb-8db0-489b-ba40-dff1d74be21f/disk
>> --force-share --output=json execute
>> /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:384
>> 2023-03-01 14:24:02.654 504488 DEBUG oslo_concurrency.processutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] CMD "/usr/bin/python3
>> -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C
>> qemu-img info
>> /var/lib/nova/instances/a2603eeb-8db0-489b-ba40-dff1d74be21f/disk
>> --force-share --output=json" returned: 0 in 0.023s execute
>> /usr/lib/python3/dist-packages/oslo_concurrency/processutils.py:422
>> 2023-03-01 14:24:02.655 504488 DEBUG nova.virt.disk.api
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Cannot resize image
>> /var/lib/nova/instances/a2603eeb-8db0-489b-ba40-dff1d74be21f/disk to a
>> smaller size. can_resize_image
>> /usr/lib/python3/dist-packages/nova/virt/disk/api.py:172
>> 2023-03-01 14:24:02.655 504488 DEBUG nova.objects.instance
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Lazy-loading
>> 'migration_context' on Instance uuid a2603eeb-8db0-489b-ba40-dff1d74be21f
>> obj_load_attr /usr/lib/python3/dist-packages/nova/objects/instance.py:1099
>> 2023-03-01 14:24:02.673 504488 DEBUG nova.virt.libvirt.driver
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Created local disks _create_image
>> /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:4768
>> 2023-03-01 14:24:02.674 504488 DEBUG nova.virt.libvirt.driver
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Ensure instance console log exists:
>> /var/lib/nova/instances/a2603eeb-8db0-489b-ba40-dff1d74be21f/console.log
>> _ensure_console_log_for_instance
>> /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:4531
>> 2023-03-01 14:24:02.674 504488 DEBUG oslo_concurrency.lockutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock "vgpu_resources"
>> acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" ::
>> waited 0.000s inner
>> /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:386
>> 2023-03-01 14:24:02.675 504488 DEBUG oslo_concurrency.lockutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock "vgpu_resources"
>> "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" ::
>> held 0.000s inner
>> /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:400
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Instance failed network
>> setup after 1 attempt(s): nova.exception.PortBindingFailed: Binding failed
>> for port 2a29da9c-a6db-4eff-a073-e0f1c61fe178, please check neutron logs
>> for more information.
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager Traceback (most
>> recent call last):
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1868, in
>> _allocate_network_async
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager     nwinfo =
>> self.network_api.allocate_for_instance(
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1215, in
>> allocate_for_instance
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> created_port_ids = self._update_ports_for_instance(
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1357, in
>> _update_ports_for_instance
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> vif.destroy()
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in
>> __exit__
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> self.force_reraise()
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in
>> force_reraise
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager     raise
>> self.value
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1326, in
>> _update_ports_for_instance
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> updated_port = self._update_port(
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 584, in
>> _update_port
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> _ensure_no_port_binding_failure(port)
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 293, in
>> _ensure_no_port_binding_failure
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager     raise
>> exception.PortBindingFailed(port_id=port['id'])
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> nova.exception.PortBindingFailed: Binding failed for port
>> 2a29da9c-a6db-4eff-a073-e0f1c61fe178, please check neutron logs for more
>> information.
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> nova.exception.PortBindingFailed: Binding failed for port
>> 2a29da9c-a6db-4eff-a073-e0f1c61fe178, please check neutron logs for more
>> information.
>> 2023-03-01 14:24:03.325 504488 ERROR nova.compute.manager
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Instance failed to spawn:
>> nova.exception.PortBindingFailed: Binding failed for port
>> 2a29da9c-a6db-4eff-a073-e0f1c61fe178, please check neutron logs for more
>> information.
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Traceback (most recent call last):
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2743, in
>> _build_resources
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     yield resources
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2503, in
>> _build_and_run_instance
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     self.driver.spawn(context,
>> instance, image_meta,
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4329, in
>> spawn
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     xml =
>> self._get_guest_xml(context, instance, network_info,
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7288, in
>> _get_guest_xml
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     network_info_str =
>> str(network_info)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/model.py", line 620, in __str__
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     return self._sync_wrapper(fn,
>> *args, **kwargs)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/model.py", line 603, in
>> _sync_wrapper
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     self.wait()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/model.py", line 635, in wait
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     self[:] = self._gt.wait()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 181, in wait
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     return self._exit_event.wait()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/eventlet/event.py", line 125, in wait
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     result = hub.switch()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/eventlet/hubs/hub.py", line 313, in switch
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     return self.greenlet.switch()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/eventlet/greenthread.py", line 221, in main
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     result = function(*args, **kwargs)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/utils.py", line 656, in context_wrapper
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     return func(*args, **kwargs)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1890, in
>> _allocate_network_async
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     raise e
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1868, in
>> _allocate_network_async
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     result = function(*args, **kwargs)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/utils.py", line 656, in context_wrapper
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     return func(*args, **kwargs)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1890, in
>> _allocate_network_async
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     raise e
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1868, in
>> _allocate_network_async
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     nwinfo =
>> self.network_api.allocate_for_instance(
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1215, in
>> allocate_for_instance
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     created_port_ids =
>> self._update_ports_for_instance(
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1357, in
>> _update_ports_for_instance
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     vif.destroy()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in
>> __exit__
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     self.force_reraise()
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in
>> force_reraise
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     raise self.value
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 1326, in
>> _update_ports_for_instance
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     updated_port = self._update_port(
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 584, in
>> _update_port
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]
>> _ensure_no_port_binding_failure(port)
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]   File
>> "/usr/lib/python3/dist-packages/nova/network/neutron.py", line 293, in
>> _ensure_no_port_binding_failure
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]     raise
>> exception.PortBindingFailed(port_id=port['id'])
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] nova.exception.PortBindingFailed:
>> Binding failed for port 2a29da9c-a6db-4eff-a073-e0f1c61fe178, please check
>> neutron logs for more information.
>> 2023-03-01 14:24:03.341 504488 ERROR nova.compute.manager [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f]
>> 2023-03-01 14:24:03.349 504488 INFO nova.compute.manager
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Terminating instance
>> 2023-03-01 14:24:03.349 504488 DEBUG oslo_concurrency.lockutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Acquired lock
>> "refresh_cache-a2603eeb-8db0-489b-ba40-dff1d74be21f" lock
>> /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:294
>> 2023-03-01 14:24:03.350 504488 DEBUG nova.network.neutron
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Building network info cache for
>> instance _get_instance_nw_info
>> /usr/lib/python3/dist-packages/nova/network/neutron.py:2014
>> 2023-03-01 14:24:03.431 504488 DEBUG nova.network.neutron
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Instance cache missing network info.
>> _get_preexisting_port_ids
>> /usr/lib/python3/dist-packages/nova/network/neutron.py:3327
>> 2023-03-01 14:24:03.624 504488 DEBUG nova.network.neutron
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Updating instance_info_cache with
>> network_info: [] update_instance_cache_with_nw_info
>> /usr/lib/python3/dist-packages/nova/network/neutron.py:117
>> 2023-03-01 14:24:03.638 504488 DEBUG oslo_concurrency.lockutils
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] Releasing lock
>> "refresh_cache-a2603eeb-8db0-489b-ba40-dff1d74be21f" lock
>> /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:312
>> 2023-03-01 14:24:03.639 504488 DEBUG nova.compute.manager
>> [req-d4bad4d7-71c7-498e-8fd1-bb6d8884899f ff627ad39ed94479b9c5033bc462cf78
>> 512866f9994f4ad8916d8539a7cdeec9 - default default] [instance:
>> a2603eeb-8db0-489b-ba40-dff1d74be21f] Start destroying the instance on the
>> hypervisor. _shutdown_instance
>> /usr/lib/python3/dist-packages/nova/compute/manager.py:2999
>> 2023-03-01 14:24:03.648 504488 DEBUG nova.virt.libvirt.driver [-]
>> [instance: a2603eeb-8db0-489b-ba40-dff1d74be21f] During wait destroy,
>> instance disappeared. _wait_for_destroy
>> /usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py:1483
>> 2023-03-01 14:24:03.648 504488 INFO nova.virt.libvirt.driver [-]
>> [instance: a2603eeb-8db0-489b-ba40-dff1d74be21f] Instance destroyed
>> successfully.
>> ```
>>
>
> ----
> Simon Jones
>
>
> Sean Mooney <smooney at redhat.com> wrote on Wed, Mar 1, 2023 at 01:18:
>
>> On Tue, 2023-02-28 at 19:43 +0800, Simon Jones wrote:
>> > Hi all,
>> >
>> > I'm working on OpenStack Yoga's PCI passthrough feature, following this
>> link:
>> > https://docs.openstack.org/nova/latest/admin/pci-passthrough.html
>> >
>> > I configured exactly as the link says, but when I create a server with
>> > this command, I get an ERROR:
>> > ```
>> > openstack server create --flavor cirros-os-dpu-test-1 --image cirros \
>> >         --nic net-id=066c8dc2-c98b-4fb8-a541-8b367e8f6e69 \
>> >         --security-group default --key-name mykey provider-instance
>> >
>> >
>> > > fault                               | {'code': 500, 'created':
>> > '2023-02-23T06:13:43Z', 'message': 'No valid host was found. There are
>> not
>> > enough hosts available.', 'details': 'Traceback (most recent call
>> last):\n
>> >  File "/usr/lib/python3/dist-packages/nova/conductor/manager.py", line
>> > 1548, in schedule_and_build_instances\n    host_lists =
>> > self._schedule_instances(context, request_specs[0],\n  File
>> > "/usr/lib/python3/dist-packages/nova/conductor/manager.py", line 908, in
>> > _schedule_instances\n    host_lists =
>> > self.query_client.select_destinations(\n  File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/client/query.py", line
>> 41,
>> > in select_destinations\n    return
>> > self.scheduler_rpcapi.select_destinations(context, spec_obj,\n  File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/rpcapi.py", line 160, in
>> > select_destinations\n    return cctxt.call(ctxt,
>> \'select_destinations\',
>> > **msg_args)\n  File
>> > "/usr/lib/python3/dist-packages/oslo_messaging/rpc/client.py", line
>> 189, in
>> > call\n    result = self.transport._send(\n  File
>> > "/usr/lib/python3/dist-packages/oslo_messaging/transport.py", line 123,
>> in
>> > _send\n    return self._driver.send(target, ctxt, message,\n  File
>> > "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
>> > line 689, in send\n    return self._send(target, ctxt, message,
>> > wait_for_reply, timeout,\n  File
>> > "/usr/lib/python3/dist-packages/oslo_messaging/_drivers/amqpdriver.py",
>> > line 681, in _send\n    raise
>> > result\nnova.exception_Remote.NoValidHost_Remote: No valid host was
>> found.
>> > There are not enough hosts available.\nTraceback (most recent call
>> > last):\n\n  File
>> > "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line
>> 241, in
>> > inner\n    return func(*args, **kwargs)\n\n  File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/manager.py", line 223, in
>> > select_destinations\n    selections = self._select_destinations(\n\n
>> File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/manager.py", line 250, in
>> > _select_destinations\n    selections = self._schedule(\n\n  File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/manager.py", line 416, in
>> > _schedule\n    self._ensure_sufficient_hosts(\n\n  File
>> > "/usr/lib/python3/dist-packages/nova/scheduler/manager.py", line 455, in
>> > _ensure_sufficient_hosts\n    raise
>> > exception.NoValidHost(reason=reason)\n\nnova.exception.NoValidHost: No
>> > valid host was found. There are not enough hosts available.\n\n'} |
>> >
>> > // this is what I configured:
>> >
>> > gyw at c1:~$ openstack flavor show cirros-os-dpu-test-1
>> > +----------------------------+------------------------------+
>> > | Field                      | Value                        |
>> > +----------------------------+------------------------------+
>> > | OS-FLV-DISABLED:disabled   | False                        |
>> > | OS-FLV-EXT-DATA:ephemeral  | 0                            |
>> > | access_project_ids         | None                         |
>> > | description                | None                         |
>> > | disk                       | 1                            |
>> > | id                         | 0                            |
>> > | name                       | cirros-os-dpu-test-1         |
>> > | os-flavor-access:is_public | True                         |
>> > | properties                 | pci_passthrough:alias='a1:1' |
>> > | ram                        | 64                           |
>> > | rxtx_factor                | 1.0                          |
>> > | swap                       |                              |
>> > | vcpus                      | 1                            |
>> > +----------------------------+------------------------------+
>> >
>> > // in controller node /etc/nova/nova.conf
>> >
>> > [filter_scheduler]
>> > enabled_filters = PciPassthroughFilter
>> > available_filters = nova.scheduler.filters.all_filters
>> >
>> > [pci]
>> > passthrough_whitelist = {"vendor_id": "15b3", "product_id": "101e",
>> > "physical_network": null, "remote_managed": "true"}
>> > alias = { "vendor_id":"15b3", "product_id":"101e",
>> "device_type":"type-VF",
>> > "name":"a1" }
>> >
>> > // in compute node /etc/nova/nova.conf
>> >
>> > [pci]
>> > passthrough_whitelist = {"vendor_id": "15b3", "product_id": "101e",
>> > "physical_network": null, "remote_managed": "true"}
>> > alias = { "vendor_id":"15b3", "product_id":"101e",
>> "device_type":"type-VF",
>> > "name":"a1" }
>>
>> "remote_managed": "true" is only valid for neutron sriov ports,
>> not flavor-based pci passthrough.
>>
>> so you need to use vnic_type=direct assuming you are trying to use
>>
>> https://specs.openstack.org/openstack/nova-specs/specs/yoga/implemented/integration-with-off-path-network-backends.html
>>
>> which is not the same as generic pci passthrough.
>>
>> if you just want to use generic pci passthrough via a flavor, remove
>> "remote_managed": "true"
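>>
>> for reference, the vnic-type=direct flow looks roughly like this
>> (a sketch, not tested here; the network and port names are placeholders):
>>
>> ```
>> # pre-create the port in neutron with the direct vnic type
>> openstack port create --network private --vnic-type direct sriov-port-1
>>
>> # then boot the server with --port instead of --nic net-id=...
>> openstack server create --flavor cirros-os-dpu-test-1 --image cirros \
>>     --port sriov-port-1 --security-group default --key-name mykey \
>>     provider-instance
>> ```
>>
>> and for plain flavor-based passthrough, just drop the flag from [pci]
>> on both nodes, e.g.:
>>
>> ```
>> [pci]
>> passthrough_whitelist = {"vendor_id": "15b3", "product_id": "101e",
>> "physical_network": null}
>> alias = { "vendor_id":"15b3", "product_id":"101e",
>> "device_type":"type-VF", "name":"a1" }
>> ```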
>>
>> >
>> > ```
>> >
>> > The detailed ERROR I found is:
>> > - The reason for "There are not enough hosts available" is that
>> > nova-scheduler's log shows "There are 0 hosts available but 1 instances
>> > requested to build", which means no host supports the PCI passthrough
>> feature.
>> >
>> > This is nova-scheduler's log:
>> > ```
>> > 2023-02-28 06:11:58.329 1942637 DEBUG nova.scheduler.manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Starting to schedule
>> > for instances: ['8ddfbe2c-f929-4b62-8b73-67902df8fb60']
>> select_destinations
>> > /usr/lib/python3/dist-packages/nova/scheduler/manager.py:141
>> > 2023-02-28 06:11:58.330 1942637 DEBUG nova.scheduler.request_filter
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default]
>> compute_status_filter
>> > request filter added forbidden trait COMPUTE_STATUS_DISABLED
>> > compute_status_filter
>> > /usr/lib/python3/dist-packages/nova/scheduler/request_filter.py:254
>> > 2023-02-28 06:11:58.330 1942637 DEBUG nova.scheduler.request_filter
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Request filter
>> > 'compute_status_filter' took 0.0 seconds wrapper
>> > /usr/lib/python3/dist-packages/nova/scheduler/request_filter.py:46
>> > 2023-02-28 06:11:58.331 1942637 DEBUG nova.scheduler.request_filter
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Request filter
>> > 'accelerators_filter' took 0.0 seconds wrapper
>> > /usr/lib/python3/dist-packages/nova/scheduler/request_filter.py:46
>> > 2023-02-28 06:11:58.332 1942637 DEBUG nova.scheduler.request_filter
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Request filter
>> > 'remote_managed_ports_filter' took 0.0 seconds wrapper
>> > /usr/lib/python3/dist-packages/nova/scheduler/request_filter.py:46
>> > 2023-02-28 06:11:58.485 1942637 DEBUG oslo_concurrency.lockutils
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock
>> > "567eb2f1-7173-4eee-b9e7-66932ed70fea" acquired by
>> >
>> "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections"
>> > :: waited 0.000s inner
>> > /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:386
>> > 2023-02-28 06:11:58.488 1942637 DEBUG oslo_concurrency.lockutils
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock
>> > "567eb2f1-7173-4eee-b9e7-66932ed70fea" "released" by
>> >
>> "nova.context.set_target_cell.<locals>.get_or_set_cached_cell_and_set_connections"
>> > :: held 0.003s inner
>> > /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:400
>> > 2023-02-28 06:11:58.494 1942637 DEBUG oslo_db.sqlalchemy.engines
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] MySQL server mode
>> set
>> > to
>> >
>> STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION
>> > _check_effective_sql_mode
>> > /usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py:314
>> > 2023-02-28 06:11:58.520 1942637 INFO nova.scheduler.host_manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Host mapping not
>> found
>> > for host c1c2. Not tracking instance info for this host.
>> > 2023-02-28 06:11:58.520 1942637 DEBUG oslo_concurrency.lockutils
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock "('c1c2',
>> 'c1c2')"
>> > acquired by
>> > "nova.scheduler.host_manager.HostState.update.<locals>._locked_update"
>> ::
>> > waited 0.000s inner
>> > /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:386
>> > 2023-02-28 06:11:58.521 1942637 DEBUG nova.scheduler.host_manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Update host state
>> from
>> > compute node: ComputeNode(cpu_allocation_ratio=16.0,cpu_info='{"arch":
>> > "x86_64", "model": "Broadwell-noTSX-IBRS", "vendor": "Intel",
>> "topology":
>> > {"cells": 1, "sockets": 1, "cores": 6, "threads": 2}, "features":
>> > ["sse4.2", "mds-no", "stibp", "pdpe1gb", "xsaveopt", "ht", "intel-pt",
>> > "mtrr", "abm", "tm", "lm", "umip", "mca", "pku", "ds_cpl", "rdrand",
>> "adx",
>> > "rdseed", "lahf_lm", "xgetbv1", "nx", "invpcid", "rdtscp", "tsc",
>> "xsavec",
>> > "pcid", "arch-capabilities", "pclmuldq", "spec-ctrl", "fsgsbase",
>> "avx2",
>> > "md-clear", "vmx", "syscall", "mmx", "ds", "ssse3", "avx", "dtes64",
>> > "fxsr", "msr", "acpi", "vpclmulqdq", "smap", "erms", "pge", "cmov",
>> > "sha-ni", "fsrm", "x2apic", "xsaves", "cx8", "pse", "pse36",
>> "clflushopt",
>> > "vaes", "pni", "ssbd", "movdiri", "movbe", "clwb", "xtpr", "de",
>> "invtsc",
>> > "fpu", "tsc-deadline", "pae", "clflush", "ibrs-all", "waitpkg", "sse",
>> > "sse2", "bmi1", "3dnowprefetch", "cx16", "popcnt", "rdctl-no", "fma",
>> > "tsc_adjust", "xsave", "ss", "skip-l1dfl-vmentry", "sse4.1", "rdpid",
>> > "monitor", "vme", "tm2", "pat", "pschange-mc-no", "movdir64b", "gfni",
>> > "mce", "smep", "sep", "apic", "arat", "f16c", "bmi2", "aes", "pbe",
>> "est",
>> >
>> "pdcm"]}',created_at=2023-02-14T03:19:40Z,current_workload=0,deleted=False,deleted_at=None,disk_allocation_ratio=1.0,disk_available_least=415,free_disk_gb=456,free_ram_mb=31378,host='c1c2',host_ip=192.168.28.21,hypervisor_hostname='c1c2',hypervisor_type='QEMU',hypervisor_version=4002001,id=8,local_gb=456,local_gb_used=0,mapped=0,memory_mb=31890,memory_mb_used=512,metrics='[]',numa_topology='{"
>> > nova_object.name": "NUMATopology", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.2", "nova_object.data": {"cells": [{"
>> > nova_object.name": "NUMACell", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.5", "nova_object.data": {"id": 0, "cpuset":
>> [0,
>> > 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7,
>> 8,
>> > 9, 10, 11], "memory": 31890, "cpu_usage": 0, "memory_usage": 0,
>> > "pinned_cpus": [], "siblings": [[0, 1], [10, 11], [2, 3], [6, 7], [4,
>> 5],
>> > [8, 9]], "mempages": [{"nova_object.name": "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 4, "total": 8163962, "used": 0,
>> "reserved":
>> > 0}, "nova_object.changes": ["size_kb", "used", "reserved", "total"]}, {"
>> > nova_object.name": "NUMAPagesTopology", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 2048,
>> > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes":
>> ["size_kb",
>> > "used", "reserved", "total"]}, {"nova_object.name":
>> "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 1048576, "total": 0, "used": 0,
>> "reserved":
>> > 0}, "nova_object.changes": ["size_kb", "used", "reserved", "total"]}],
>> > "network_metadata": {"nova_object.name": "NetworkMetadata",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.0",
>> > "nova_object.data": {"physnets": [], "tunneled": false},
>> > "nova_object.changes": ["physnets", "tunneled"]}, "socket": 0},
>> > "nova_object.changes": ["cpuset", "memory_usage", "cpu_usage", "id",
>> > "pinned_cpus", "pcpuset", "socket", "network_metadata", "siblings",
>> > "mempages", "memory"]}]}, "nova_object.changes":
>> >
>> ["cells"]}',pci_device_pools=PciDevicePoolList,ram_allocation_ratio=1.5,running_vms=0,service_id=None,stats={failed_builds='0'},supported_hv_specs=[HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec,HVSpec],updated_at=2023-02-28T06:01:33Z,uuid=c360cc82-f0fd-4662-bccd-e1f02b27af51,vcpus=12,vcpus_used=0)
>> > _locked_update
>> > /usr/lib/python3/dist-packages/nova/scheduler/host_manager.py:167
>> > 2023-02-28 06:11:58.524 1942637 DEBUG nova.scheduler.host_manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Update host state
>> with
>> > aggregates: [] _locked_update
>> > /usr/lib/python3/dist-packages/nova/scheduler/host_manager.py:170
>> > 2023-02-28 06:11:58.524 1942637 DEBUG nova.scheduler.host_manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Update host state
>> with
>> > service dict: {'id': 17, 'uuid': '6d0921a6-427d-4a82-a7d2-41dfa003125a',
>> > 'host': 'c1c2', 'binary': 'nova-compute', 'topic': 'compute',
>> > 'report_count': 121959, 'disabled': False, 'disabled_reason': None,
>> > 'last_seen_up': datetime.datetime(2023, 2, 28, 6, 11, 49,
>> > tzinfo=datetime.timezone.utc), 'forced_down': False, 'version': 61,
>> > 'created_at': datetime.datetime(2023, 2, 14, 3, 19, 40,
>> > tzinfo=datetime.timezone.utc), 'updated_at': datetime.datetime(2023, 2,
>> 28,
>> > 6, 11, 49, tzinfo=datetime.timezone.utc), 'deleted_at': None, 'deleted':
>> > False} _locked_update
>> > /usr/lib/python3/dist-packages/nova/scheduler/host_manager.py:173
>> > 2023-02-28 06:11:58.524 1942637 DEBUG nova.scheduler.host_manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Update host state
>> with
>> > instances: [] _locked_update
>> > /usr/lib/python3/dist-packages/nova/scheduler/host_manager.py:176
>> > 2023-02-28 06:11:58.525 1942637 DEBUG oslo_concurrency.lockutils
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Lock "('c1c2',
>> 'c1c2')"
>> > "released" by
>> > "nova.scheduler.host_manager.HostState.update.<locals>._locked_update"
>> ::
>> > held 0.004s inner
>> > /usr/lib/python3/dist-packages/oslo_concurrency/lockutils.py:400
>> > 2023-02-28 06:11:58.525 1942637 DEBUG nova.filters
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Starting with 1
>> host(s)
>> > get_filtered_objects /usr/lib/python3/dist-packages/nova/filters.py:70
>> > 2023-02-28 06:11:58.526 1942637 DEBUG nova.pci.stats
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] ---- before ----
>> > _filter_pools /usr/lib/python3/dist-packages/nova/pci/stats.py:542
>> > 2023-02-28 06:11:58.526 1942637 DEBUG nova.pci.stats
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] [] _filter_pools
>> > /usr/lib/python3/dist-packages/nova/pci/stats.py:543
>> > 2023-02-28 06:11:58.526 1942637 DEBUG nova.pci.stats
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] ---- after ----
>> > _filter_pools /usr/lib/python3/dist-packages/nova/pci/stats.py:545
>> > 2023-02-28 06:11:58.527 1942637 DEBUG nova.pci.stats
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] [] _filter_pools
>> > /usr/lib/python3/dist-packages/nova/pci/stats.py:546
>> > 2023-02-28 06:11:58.527 1942637 DEBUG nova.pci.stats
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Not enough PCI
>> devices
>> > left to satisfy request _filter_pools
>> > /usr/lib/python3/dist-packages/nova/pci/stats.py:556
>> > 2023-02-28 06:11:58.527 1942637 DEBUG
>> > nova.scheduler.filters.pci_passthrough_filter
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] (c1c2, c1c2) ram:
>> > 31378MB disk: 424960MB io_ops: 0 instances: 0 doesn't have the required
>> PCI
>> > devices
>> > (InstancePCIRequests(instance_uuid=<?>,requests=[InstancePCIRequest]))
>> > host_passes
>> >
>> /usr/lib/python3/dist-packages/nova/scheduler/filters/pci_passthrough_filter.py:52
>> > 2023-02-28 06:11:58.528 1942637 INFO nova.filters
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Filter
>> > PciPassthroughFilter returned 0 hosts
>> > 2023-02-28 06:11:58.528 1942637 DEBUG nova.filters
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Filtering removed
>> all
>> > hosts for the request with instance ID
>> > '8ddfbe2c-f929-4b62-8b73-67902df8fb60'. Filter results:
>> > [('PciPassthroughFilter', None)] get_filtered_objects
>> > /usr/lib/python3/dist-packages/nova/filters.py:114
>> > 2023-02-28 06:11:58.528 1942637 INFO nova.filters
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Filtering removed
>> all
>> > hosts for the request with instance ID
>> > '8ddfbe2c-f929-4b62-8b73-67902df8fb60'. Filter results:
>> > ['PciPassthroughFilter: (start: 1, end: 0)']
>> > 2023-02-28 06:11:58.529 1942637 DEBUG nova.scheduler.manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] Filtered []
>> > _get_sorted_hosts
>> > /usr/lib/python3/dist-packages/nova/scheduler/manager.py:610
>> > 2023-02-28 06:11:58.529 1942637 DEBUG nova.scheduler.manager
>> > [req-13b1baee-e02d-40fc-926d-d497e70ca0dc
>> ff627ad39ed94479b9c5033bc462cf78
>> > 512866f9994f4ad8916d8539a7cdeec9 - default default] There are 0 hosts
>> > available but 1 instances requested to build. _ensure_sufficient_hosts
>> > /usr/lib/python3/dist-packages/nova/scheduler/manager.py:450
>> > ```
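
The "Not enough PCI devices left to satisfy request" / empty `_filter_pools` output above means the host's PCI pools are empty, which typically happens when no device on the compute node matches the `[pci]` whitelist in nova.conf. A minimal sketch of that section, with placeholder vendor/product IDs (take the real values from `lspci -nn`; on older releases the first option is spelled `passthrough_whitelist` instead of `device_spec`):

```ini
[pci]
# Whitelist the VFs Nova may hand out. For DPU-backed ports the
# smartnic_dpu guide adds "remote_managed": "true" to this entry.
device_spec = { "vendor_id": "15b3", "product_id": "101e" }
# An alias is only needed for flavor-based passthrough, not for
# Neutron SR-IOV ports requested with vnic-type=direct.
alias = { "vendor_id": "15b3", "product_id": "101e", "device_type": "type-VF", "name": "my-vf" }
```

After editing, restart nova-compute and check that the host's PCI pools (the `pci_stats` column queried below) are no longer empty.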
>> >
>> > Then I searched the database and found that the compute node's PCI
>> > configuration had not been reported:
>> > ```
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 PCI_DEVICE
>> > No inventory of class PCI_DEVICE for
>> c360cc82-f0fd-4662-bccd-e1f02b27af51
>> > (HTTP 404)
>> > gyw at c1:~$ openstack resource class show PCI_DEVICE
>> > +-------+------------+
>> > > Field | Value      |
>> > +-------+------------+
>> > > name  | PCI_DEVICE |
>> > +-------+------------+
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 MEMORY_MB
>> > +------------------+-------+
>> > > Field            | Value |
>> > +------------------+-------+
>> > > allocation_ratio | 1.5   |
>> > > min_unit         | 1     |
>> > > max_unit         | 31890 |
>> > > reserved         | 512   |
>> > > step_size        | 1     |
>> > > total            | 31890 |
>> > > used             | 0     |
>> > +------------------+-------+
>> >     (This 31890 matches the value the compute node resource tracker
>> > reported in the log above.)
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 VCPU
>> > +------------------+-------+
>> > > Field            | Value |
>> > +------------------+-------+
>> > > allocation_ratio | 16.0  |
>> > > min_unit         | 1     |
>> > > max_unit         | 12    |
>> > > reserved         | 0     |
>> > > step_size        | 1     |
>> > > total            | 12    |
>> > > used             | 0     |
>> > +------------------+-------+
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 SRIOV_NET_VF
>> > No inventory of class SRIOV_NET_VF for
>> c360cc82-f0fd-4662-bccd-e1f02b27af51
>> > (HTTP 404)
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 DISK_GB
>> > +------------------+-------+
>> > > Field            | Value |
>> > +------------------+-------+
>> > > allocation_ratio | 1.0   |
>> > > min_unit         | 1     |
>> > > max_unit         | 456   |
>> > > reserved         | 0     |
>> > > step_size        | 1     |
>> > > total            | 456   |
>> > > used             | 0     |
>> > +------------------+-------+
>> > gyw at c1:~$ openstack resource provider inventory show
>> > c360cc82-f0fd-4662-bccd-e1f02b27af51 IPV4_ADDRESS
>> > No inventory of class IPV4_ADDRESS for
>> c360cc82-f0fd-4662-bccd-e1f02b27af51
>> > (HTTP 404)
>> >
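(Note: by default Nova does not report PCI devices to Placement, so the PCI_DEVICE and SRIOV_NET_VF 404s above are expected unless the opt-in `[pci] report_in_placement` feature of recent releases is enabled; the scheduler instead reads the `pci_stats` column of `nova.compute_nodes`, queried below. A short Python check of that column, using the JSON exactly as it appears in the dump:)

```python
import json

# pci_stats as serialized in nova.compute_nodes (copied from the dump below).
# An empty "objects" list means the resource tracker found no PCI device
# matching the [pci] whitelist, so PciPassthroughFilter rejects the host.
raw = (
    '{"nova_object.name": "PciDevicePoolList", '
    '"nova_object.namespace": "nova", '
    '"nova_object.version": "1.1", '
    '"nova_object.data": {"objects": []}, '
    '"nova_object.changes": ["objects"]}'
)

pools = json.loads(raw)["nova_object.data"]["objects"]
print(f"PCI pools reported by this host: {len(pools)}")  # prints 0 for this dump
```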
>> > MariaDB [nova]> select * from compute_nodes;
>> >
>> +---------------------+---------------------+---------------------+----+------------+-------+-----------+----------+------------+----------------+---------------+-----------------+--------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------+--------------+------------------+-------------+---------------------+---------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-----------------+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------+-----------+----------------------+----------------------+--------------------------------------+-----------------------+--------+
>> > > created_at          | updated_at          | deleted_at          | id |
>> > service_id | vcpus | memory_mb | local_gb | vcpus_used | memory_mb_used
>> |
>> > local_gb_used | hypervisor_type | hypervisor_version | cpu_info
>> >                 | disk_available_least | free_ram_mb | free_disk_gb |
>> > current_workload | running_vms | hypervisor_hostname | deleted | host_ip
>> >     | supported_instances
>> >                                                | pci_stats
>> > > metrics | extra_resources | stats                  | numa_topology
>> >                  | host      | ram_allocation_ratio |
>> cpu_allocation_ratio
>> > > uuid                                 | disk_allocation_ratio | mapped
>> |
>> >
>> +---------------------+---------------------+---------------------+----+------------+-------+-----------+----------+------------+----------------+---------------+-----------------+--------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+----------------------+-------------+--------------+------------------+-------------+---------------------+---------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------+-----------------+------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-------------------------------------------------------------------------------------------------+-----------+----------------------+----------------------+--------------------------------------+-----------------------+--------+
>> > > 2023-01-04 01:55:44 | 2023-01-04 03:02:28 | 2023-02-13 08:34:08 |  1 |
>> >     NULL |     4 |      3931 |       60 |          0 |            512 |
>> >         0 | QEMU            |            4002001 | {"arch": "x86_64",
>> > "model": "Broadwell-noTSX-IBRS", "vendor": "Intel", "topology":
>> {"cells":
>> > 1, "sockets": 4, "cores": 1, "threads": 1}, "features": ["pat", "cmov",
>> > "ibrs-all", "pge", "sse4.2", "sse", "mmx", "ibrs", "avx2", "syscall",
>> > "fpu", "mtrr", "xsaves", "mce", "invpcid", "tsc_adjust", "ssbd", "pku",
>> > "ibpb", "xsave", "xsaveopt", "pae", "lm", "pdcm", "bmi1", "avx512vnni",
>> > "stibp", "x2apic", "avx512dq", "pcid", "nx", "bmi2", "erms",
>> > "3dnowprefetch", "de", "avx512bw", "arch-capabilities", "pni", "fma",
>> > "rdctl-no", "sse4.1", "rdseed", "arat", "avx512vl", "avx512f",
>> "pclmuldq",
>> > "msr", "fxsr", "sse2", "amd-stibp", "hypervisor", "tsx-ctrl",
>> "clflushopt",
>> > "cx16", "clwb", "xgetbv1", "xsavec", "adx", "rdtscp", "mds-no", "cx8",
>> > "aes", "tsc-deadline", "pse36", "fsgsbase", "umip", "spec-ctrl",
>> "lahf_lm",
>> > "md-clear", "avx512cd", "amd-ssbd", "vmx", "apic", "f16c", "pse", "tsc",
>> > "movbe", "smep", "ss", "pschange-mc-no", "ssse3", "popcnt", "avx",
>> "vme",
>> > "smap", "pdpe1gb", "mca", "skip-l1dfl-vmentry", "abm", "sep", "clflush",
>> > "rdrand"]} |                   49 |        3419 |           60 |
>> >      0 |           0 | gyw                 |       1 | 192.168.2.99  |
>> > [["i686", "qemu", "hvm"], ["i686", "kvm", "hvm"], ["x86_64", "qemu",
>> > "hvm"], ["x86_64", "kvm", "hvm"]]
>> >                                                 | {"nova_object.name":
>> > "PciDevicePoolList", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"objects": []},
>> > "nova_object.changes": ["objects"]} | []      | NULL            |
>> > {"failed_builds": "0"} | {"nova_object.name": "NUMATopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.2",
>> > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.5",
>> > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3], "pcpuset": [0, 1,
>> 2,
>> > 3], "memory": 3931, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus":
>> [],
>> > "siblings": [[0], [1], [2], [3]], "mempages": [{"nova_object.name":
>> > "NUMAPagesTopology", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4,
>> "total":
>> > 1006396, "used": 0, "reserved": 0}, "nova_object.changes": ["used",
>> > "reserved", "size_kb", "total"]}, {"nova_object.name":
>> "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved":
>> > 0}, "nova_object.changes": ["used", "reserved", "size_kb", "total"]}, {"
>> > nova_object.name": "NUMAPagesTopology", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576,
>> > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["used",
>> > "reserved", "size_kb", "total"]}], "network_metadata": {"
>> nova_object.name":
>> > "NetworkMetadata", "nova_object.namespace": "nova",
>> "nova_object.version":
>> > "1.0", "nova_object.data": {"physnets": [], "tunneled": false},
>> > "nova_object.changes": ["physnets", "tunneled"]}, "socket": null},
>> > "nova_object.changes": ["cpuset", "pinned_cpus", "mempages",
>> > "network_metadata", "cpu_usage", "pcpuset", "memory", "id", "socket",
>> > "siblings", "memory_usage"]}]}, "nova_object.changes": ["cells"]} | gyw
>> >   |                  1.5 |                   16 |
>> > b1bf35bd-a9ad-4f0c-9033-776a5c6d1c9b |                     1 |      1 |
>> > > 2023-01-04 03:12:17 | 2023-01-31 06:36:36 | 2023-02-23 08:50:29 |  2 |
>> >     NULL |     4 |      3931 |       60 |          0 |            512 |
>> >         0 | QEMU            |            4002001 | {"arch": "x86_64",
>> > "model": "Broadwell-noTSX-IBRS", "vendor": "Intel", "topology":
>> {"cells":
>> > 1, "sockets": 4, "cores": 1, "threads": 1}, "features": ["pclmuldq",
>> > "fsgsbase", "f16c", "fxsr", "ibpb", "adx", "movbe", "aes", "x2apic",
>> "abm",
>> > "mtrr", "arat", "sse4.2", "bmi1", "stibp", "sse4.1", "pae", "vme",
>> "msr",
>> > "skip-l1dfl-vmentry", "fma", "pcid", "avx2", "de", "ibrs-all", "ssse3",
>> > "apic", "umip", "xsavec", "3dnowprefetch", "amd-ssbd", "sse", "nx",
>> "fpu",
>> > "pse", "smap", "smep", "lahf_lm", "pni", "spec-ctrl", "xsave", "xsaves",
>> > "rdtscp", "vmx", "avx512f", "cmov", "invpcid", "hypervisor", "erms",
>> > "rdctl-no", "cx16", "cx8", "tsc", "pge", "pdcm", "rdrand", "avx",
>> > "amd-stibp", "avx512vl", "xsaveopt", "mds-no", "popcnt", "clflushopt",
>> > "sse2", "xgetbv1", "rdseed", "pdpe1gb", "pschange-mc-no", "clwb",
>> > "avx512vnni", "mca", "tsx-ctrl", "tsc_adjust", "syscall", "pse36",
>> "mmx",
>> > "avx512cd", "avx512bw", "pku", "tsc-deadline", "arch-capabilities",
>> > "avx512dq", "ssbd", "clflush", "mce", "ss", "pat", "bmi2", "lm", "ibrs",
>> > "sep", "md-clear"]} |                   49 |        3419 |           60
>> |
>> >              0 |           0 | c1c1                |       2 |
>> 192.168.2.99
>> >  | [["i686", "qemu", "hvm"], ["i686", "kvm", "hvm"], ["x86_64", "qemu",
>> > "hvm"], ["x86_64", "kvm", "hvm"]]
>> >                                                 | {"nova_object.name":
>> > "PciDevicePoolList", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"objects": []},
>> > "nova_object.changes": ["objects"]} | []      | NULL            |
>> > {"failed_builds": "0"} | {"nova_object.name": "NUMATopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.2",
>> > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.5",
>> > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3], "pcpuset": [0, 1,
>> 2,
>> > 3], "memory": 3931, "cpu_usage": 0, "memory_usage": 0, "pinned_cpus":
>> [],
>> > "siblings": [[0], [1], [2], [3]], "mempages": [{"nova_object.name":
>> > "NUMAPagesTopology", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4,
>> "total":
>> > 1006393, "used": 0, "reserved": 0}, "nova_object.changes": ["used",
>> > "total", "size_kb", "reserved"]}, {"nova_object.name":
>> "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved":
>> > 0}, "nova_object.changes": ["used", "total", "size_kb", "reserved"]}, {"
>> > nova_object.name": "NUMAPagesTopology", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576,
>> > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["used",
>> > "total", "size_kb", "reserved"]}], "network_metadata": {"
>> nova_object.name":
>> > "NetworkMetadata", "nova_object.namespace": "nova",
>> "nova_object.version":
>> > "1.0", "nova_object.data": {"physnets": [], "tunneled": false},
>> > "nova_object.changes": ["tunneled", "physnets"]}, "socket": null},
>> > "nova_object.changes": ["memory_usage", "socket", "cpuset", "siblings",
>> > "id", "mempages", "pinned_cpus", "memory", "pcpuset",
>> "network_metadata",
>> > "cpu_usage"]}]}, "nova_object.changes": ["cells"]} | c1c1      |
>> >        1.5 |                   16 |
>> 1eac1c8d-d96a-4eeb-9868-5a341a80c6df |
>> >                     1 |      0 |
>> > > 2023-02-07 08:25:27 | 2023-02-07 08:25:27 | 2023-02-13 08:34:22 |  3 |
>> >     NULL |    12 |     31890 |      456 |          0 |            512 |
>> >         0 | QEMU            |            4002001 | {"arch": "x86_64",
>> > "model": "Broadwell-noTSX-IBRS", "vendor": "Intel", "topology":
>> {"cells":
>> > 1, "sockets": 1, "cores": 6, "threads": 2}, "features": ["sha-ni",
>> > "intel-pt", "pat", "monitor", "movbe", "nx", "msr", "avx2", "md-clear",
>> > "popcnt", "rdseed", "pse36", "mds-no", "ds", "sse", "fsrm", "rdctl-no",
>> > "pse", "dtes64", "ds_cpl", "xgetbv1", "lahf_lm", "smep", "waitpkg",
>> "smap",
>> > "fsgsbase", "sep", "tsc_adjust", "cmov", "ibrs-all", "mtrr", "cx16",
>> > "f16c", "arch-capabilities", "pclmuldq", "clflush", "erms", "umip",
>> > "xsaves", "xsavec", "ssse3", "acpi", "tsc", "movdir64b", "vpclmulqdq",
>> > "skip-l1dfl-vmentry", "xsave", "arat", "mmx", "rdpid", "sse2", "ssbd",
>> > "pdpe1gb", "spec-ctrl", "adx", "pcid", "de", "pku", "est", "pae",
>> > "tsc-deadline", "pdcm", "clwb", "vme", "rdtscp", "fxsr",
>> "3dnowprefetch",
>> > "invpcid", "x2apic", "tm", "lm", "fma", "bmi1", "sse4.1", "abm",
>> > "xsaveopt", "pschange-mc-no", "syscall", "clflushopt", "pbe", "avx",
>> "cx8",
>> > "vmx", "gfni", "fpu", "mce", "tm2", "movdiri", "invtsc", "apic", "bmi2",
>> > "mca", "pge", "rdrand", "xtpr", "sse4.2", "stibp", "ht", "ss", "pni",
>> > "vaes", "aes"]} |                  416 |       31378 |          456 |
>> >          0 |           0 | c-MS-7D42           |       3 |
>> 192.168.2.99  |
>> > [["alpha", "qemu", "hvm"], ["armv7l", "qemu", "hvm"], ["aarch64",
>> "qemu",
>> > "hvm"], ["cris", "qemu", "hvm"], ["i686", "qemu", "hvm"], ["i686",
>> "kvm",
>> > "hvm"], ["lm32", "qemu", "hvm"], ["m68k", "qemu", "hvm"], ["microblaze",
>> > "qemu", "hvm"], ["microblazeel", "qemu", "hvm"], ["mips", "qemu",
>> "hvm"],
>> > ["mipsel", "qemu", "hvm"], ["mips64", "qemu", "hvm"], ["mips64el",
>> "qemu",
>> > "hvm"], ["ppc", "qemu", "hvm"], ["ppc64", "qemu", "hvm"], ["ppc64le",
>> > "qemu", "hvm"], ["s390x", "qemu", "hvm"], ["sh4", "qemu", "hvm"],
>> ["sh4eb",
>> > "qemu", "hvm"], ["sparc", "qemu", "hvm"], ["sparc64", "qemu", "hvm"],
>> > ["unicore32", "qemu", "hvm"], ["x86_64", "qemu", "hvm"], ["x86_64",
>> "kvm",
>> > "hvm"], ["xtensa", "qemu", "hvm"], ["xtensaeb", "qemu", "hvm"]] | {"
>> > nova_object.name": "PciDevicePoolList", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"objects": []},
>> > "nova_object.changes": ["objects"]} | []      | NULL            |
>> > {"failed_builds": "0"} | {"nova_object.name": "NUMATopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.2",
>> > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.5",
>> > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
>> 10,
>> > 11], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], "memory": 31890,
>> > "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], "siblings": [[0,
>> 1],
>> > [10, 11], [2, 3], [6, 7], [4, 5], [8, 9]], "mempages": [{"
>> nova_object.name":
>> > "NUMAPagesTopology", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4,
>> "total":
>> > 8163866, "used": 0, "reserved": 0}, "nova_object.changes": ["total",
>> > "reserved", "used", "size_kb"]}, {"nova_object.name":
>> "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved":
>> > 0}, "nova_object.changes": ["total", "reserved", "used", "size_kb"]}, {"
>> > nova_object.name": "NUMAPagesTopology", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576,
>> > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes": ["total",
>> > "reserved", "used", "size_kb"]}], "network_metadata": {"
>> nova_object.name":
>> > "NetworkMetadata", "nova_object.namespace": "nova",
>> "nova_object.version":
>> > "1.0", "nova_object.data": {"physnets": [], "tunneled": false},
>> > "nova_object.changes": ["physnets", "tunneled"]}, "socket": 0},
>> > "nova_object.changes": ["network_metadata", "cpuset", "mempages", "id",
>> > "socket", "cpu_usage", "memory", "pinned_cpus", "pcpuset", "siblings",
>> > "memory_usage"]}]}, "nova_object.changes": ["cells"]} | c-MS-7D42 |
>> >          1.5 |                   16 |
>> f115a1c2-fda3-42c6-945a-8b54fef40daf
>> > >                     1 |      0 |
>> > > 2023-02-07 09:53:12 | 2023-02-13 08:38:04 | 2023-02-13 08:39:33 |  4 |
>> >     NULL |    12 |     31890 |      456 |          0 |            512 |
>> >         0 | QEMU            |            4002001 | {"arch": "x86_64",
>> > "model": "Broadwell-noTSX-IBRS", "vendor": "Intel", "topology":
>> {"cells":
>> > 1, "sockets": 1, "cores": 6, "threads": 2}, "features": ["rdctl-no",
>> > "acpi", "umip", "invpcid", "bmi1", "clflushopt", "pclmuldq",
>> "movdir64b",
>> > "ssbd", "apic", "rdpid", "ht", "fsrm", "pni", "pse", "xsaves", "cx16",
>> > "nx", "f16c", "arat", "popcnt", "mtrr", "vpclmulqdq", "intel-pt",
>> > "spec-ctrl", "syscall", "3dnowprefetch", "ds", "mce", "bmi2", "tm2",
>> > "md-clear", "fpu", "monitor", "pae", "erms", "dtes64", "tsc",
>> "fsgsbase",
>> > "xgetbv1", "est", "mds-no", "tm", "x2apic", "xsavec", "cx8", "stibp",
>> > "clflush", "ssse3", "pge", "movdiri", "pdpe1gb", "vaes", "gfni", "mmx",
>> > "clwb", "waitpkg", "xsaveopt", "pse36", "aes", "pschange-mc-no", "sse2",
>> > "abm", "ss", "pcid", "sep", "rdseed", "mca", "skip-l1dfl-vmentry",
>> "pat",
>> > "smap", "sse", "lahf_lm", "avx", "cmov", "sse4.1", "sse4.2", "ibrs-all",
>> > "smep", "vme", "tsc_adjust", "arch-capabilities", "fma", "movbe", "adx",
>> > "avx2", "xtpr", "pku", "pbe", "rdrand", "tsc-deadline", "pdcm",
>> "ds_cpl",
>> > "de", "invtsc", "xsave", "msr", "fxsr", "lm", "vmx", "sha-ni",
>> "rdtscp"]} |
>> >                  416 |       31378 |          456 |                0 |
>> >       0 | c-MS-7D42           |       4 | 192.168.28.21 | [["alpha",
>> > "qemu", "hvm"], ["armv7l", "qemu", "hvm"], ["aarch64", "qemu", "hvm"],
>> > ["cris", "qemu", "hvm"], ["i686", "qemu", "hvm"], ["i686", "kvm",
>> "hvm"],
>> > ["lm32", "qemu", "hvm"], ["m68k", "qemu", "hvm"], ["microblaze", "qemu",
>> > "hvm"], ["microblazeel", "qemu", "hvm"], ["mips", "qemu", "hvm"],
>> > ["mipsel", "qemu", "hvm"], ["mips64", "qemu", "hvm"], ["mips64el",
>> "qemu",
>> > "hvm"], ["ppc", "qemu", "hvm"], ["ppc64", "qemu", "hvm"], ["ppc64le",
>> > "qemu", "hvm"], ["s390x", "qemu", "hvm"], ["sh4", "qemu", "hvm"],
>> ["sh4eb",
>> > "qemu", "hvm"], ["sparc", "qemu", "hvm"], ["sparc64", "qemu", "hvm"],
>> > ["unicore32", "qemu", "hvm"], ["x86_64", "qemu", "hvm"], ["x86_64",
>> "kvm",
>> > "hvm"], ["xtensa", "qemu", "hvm"], ["xtensaeb", "qemu", "hvm"]] | {"
>> > nova_object.name": "PciDevicePoolList", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"objects": []},
>> > "nova_object.changes": ["objects"]} | []      | NULL            |
>> > {"failed_builds": "0"} | {"nova_object.name": "NUMATopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.2",
>> > "nova_object.data": {"cells": [{"nova_object.name": "NUMACell",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.5",
>> > "nova_object.data": {"id": 0, "cpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
>> 10,
>> > 11], "pcpuset": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], "memory": 31890,
>> > "cpu_usage": 0, "memory_usage": 0, "pinned_cpus": [], "siblings": [[0,
>> 1],
>> > [10, 11], [2, 3], [6, 7], [4, 5], [8, 9]], "mempages": [{"
>> nova_object.name":
>> > "NUMAPagesTopology", "nova_object.namespace": "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 4,
>> "total":
>> > 8163866, "used": 0, "reserved": 0}, "nova_object.changes": ["size_kb",
>> > "total", "used", "reserved"]}, {"nova_object.name":
>> "NUMAPagesTopology",
>> > "nova_object.namespace": "nova", "nova_object.version": "1.1",
>> > "nova_object.data": {"size_kb": 2048, "total": 0, "used": 0, "reserved":
>> > 0}, "nova_object.changes": ["size_kb", "total", "used", "reserved"]}, {"
>> > nova_object.name": "NUMAPagesTopology", "nova_object.namespace":
>> "nova",
>> > "nova_object.version": "1.1", "nova_object.data": {"size_kb": 1048576,
>> > "total": 0, "used": 0, "reserved": 0}, "nova_object.changes":
>> ["size_kb",
>> > "total", "used", "reserved"]}], "network_metadata": {"nova_object.name
>> ":
>> > "NetworkMetadata", "nova_object.namespace": "nova",
>> "nova_object.version":
>> > "1.0", "nova_object.data": {"physnets": [], "tunneled": false},
>> > "nova_object.changes": ["physnets", "tunneled"]}, "socket": 0},
>> > "nova_object.changes": ["siblings", "cpuset", "mempages", "socket",
>> > "pcpuset", "memory", "memory_usage", "id", "network_metadata",
>> "cpu_usage",
>> > "pinned_cpus"]}]}, "nova_object.changes": ["cells"]} | c1c2      |
>> >          1.5 |                   16 |
>> 10ea8254-ad84-4db9-9acd-5c783cb8600e
>> > >                     1 |      0 |
>> > > 2023-02-13 08:41:21 | 2023-02-13 08:41:22 | 2023-02-13 09:56:50 |  5 |
>> >     NULL |    12 |     31890 |      456 |          0 |            512 |
>> >         0 | QEMU            |            4002001 | {"arch": "x86_64",
>> > "model": "Broadwell-noTSX-IBRS", "vendor": "Intel", "topology":
>> {"cells":
>> > 1, "sockets": 1, "cores": 6, "threads": 2}, "features": ["bmi2", "ht",
>> > "pae", "pku", "monitor", "avx2", "sha-ni", "acpi", "ssbd", "syscall",
>> > "mca", "mmx", "mds-no", "erms", "fsrm", "arat", "xsaves", "movbe",
>> > "movdir64b", "fpu", "clflush", "nx", "mce", "pse", "cx8", "aes", "avx",
>> > "xsavec", "invpcid", "est", "xgetbv1", "fxsr", "rdrand", "vaes", "cmov",
>> > "intel-pt", "smep", "dtes64", "f16c", "adx", "sse2", "stibp", "rdseed",
>> > "xsave", "skip-l1dfl-vmentry", "sse4.1", "rdpid", "ds", "umip", "pni",
>> > "rdctl-no", "clwb", "md-clear", "pschange-mc-no", "msr", "popcnt",
>> > "sse4.2", "pge", "tm2", "pat", "xtpr", "fma", "gfni", "sep", "ibrs-all",
>> > "tsc", "ds_cpl", "tm", "clflushopt", "pcid", "de", "rdtscp", "vme",
>> "cx16",
>> > "lahf_lm", "ss", "pdcm", "x2apic", "pbe", "movdiri", "tsc-deadline",
>> > "invtsc", "apic", "fsgsbase", "mtrr", "vpclmulqdq", "ssse3",
>> > "3dnowprefetch", "abm", "xsaveopt", "tsc_adjust", "pse36", "pclmuldq",
>> > "bmi1", "smap", "arch-capabilities", "lm", "vmx", "sse", "pdpe1gb",
>> > "spec-ctrl", "waitpkg"]} |                  416 |       31378 |
>> >  456 |                0 |           0 | c-MS-7D42           |       5 |
>> > 192.168.28.21 | [["alpha", "qemu", "hvm"], ["armv7l", "qemu", "hvm"],
>> > ["aarch64", "qemu", "hvm"], ["cris", "qemu", "hvm"], ["i686", "qemu",
>> > "hvm"], ["i686", "kvm", "hvm"], ["lm32", "qemu", "hvm"], ["m68k",
>> "qemu",
>> > "hvm"], ["microblaze", "qemu", "hvm"], ["microblazeel", "qemu", "hvm"],
>> > ["mips", "qemu", "hvm"], ["mipsel", "qemu", "h
>
>
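One detail worth pulling out of the dump above: in both compute_nodes rows, the pci_stats column is a serialized PciDevicePoolList whose "objects" list is empty, i.e. Nova's PCI tracker registered no passthrough devices on these hosts, so any port or flavor that requests a PCI device will fail scheduling. A minimal sketch for checking this from the raw DB value (using the exact JSON shown above; the interpretation that an empty pool list means no devices matched the `[pci]` device whitelist/device_spec is my reading, not something stated in the dump):

```python
import json

# pci_stats value copied verbatim from the compute_nodes rows above:
# a PciDevicePoolList serialized in nova's versioned-object JSON format.
pci_stats = json.loads(
    '{"nova_object.name": "PciDevicePoolList",'
    ' "nova_object.namespace": "nova",'
    ' "nova_object.version": "1.1",'
    ' "nova_object.data": {"objects": []},'
    ' "nova_object.changes": ["objects"]}'
)

pools = pci_stats["nova_object.data"]["objects"]
if not pools:
    # Empty pool list: the PCI tracker found no devices on this host,
    # so PCI/SR-IOV requests against it cannot be satisfied.
    print("no PCI device pools registered on this compute node")
else:
    for pool in pools:
        # Each entry is itself a versioned object wrapping vendor_id,
        # product_id, count, etc.
        print(pool["nova_object.data"])
```

Running this against the rows above prints the "no PCI device pools" message for both hosts, which is consistent with the scheduling failure described earlier in the thread.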

More information about the openstack-discuss mailing list