This [*] is what appears in the nova-scheduler log after enabling debug logging.

We performed a "yum update", so yes, we also updated libvirt (we are now running v4.5.0).

Thanks, Massimo

[*]

2019-07-23 16:44:34.849 12561 DEBUG nova.scheduler.filters.image_props_filter [req-52638278-51b7-4768-836a-f70d8a8b016a ab573ba3ea014b778193b6922ffffe6d ee1865a76440481cbcff08544c7d580a - default default] Instance contains properties ImageMetaProps(hw_architecture=<?>,hw_auto_disk_config=<?>,hw_boot_menu=<?>,hw_cdrom_bus=<?>,hw_cpu_cores=<?>,hw_cpu_max_cores=<?>,hw_cpu_max_sockets=<?>,hw_cpu_max_threads=<?>,hw_cpu_policy=<?>,hw_cpu_realtime_mask=<?>,hw_cpu_sockets=<?>,hw_cpu_thread_policy=<?>,hw_cpu_threads=<?>,hw_device_id=<?>,hw_disk_bus=<?>,hw_disk_type=<?>,hw_firmware_type=<?>,hw_floppy_bus=<?>,hw_ipxe_boot=<?>,hw_machine_type=<?>,hw_mem_page_size=<?>,hw_numa_cpus=<?>,hw_numa_mem=<?>,hw_numa_nodes=<?>,hw_pointer_model=<?>,hw_qemu_guest_agent=<?>,hw_rescue_bus=<?>,hw_rescue_device=<?>,hw_rng_model=<?>,hw_scsi_model=<?>,hw_serial_port_count=<?>,hw_video_model=<?>,hw_video_ram=<?>,hw_vif_model=<?>,hw_vif_multiqueue_enabled=<?>,hw_vm_mode=<?>,hw_watchdog_action=<?>,img_bdm_v2=<?>,img_bittorrent=<?>,img_block_device_mapping=<?>,img_cache_in_nova=<?>,img_compression_level=<?>,img_config_drive=<?>,img_hide_hypervisor_id=<?>,img_hv_requested_version=<?>,img_hv_type='qemu',img_linked_clone=<?>,img_mappings=<?>,img_owner_id=<?>,img_root_device_name=<?>,img_signature=<?>,img_signature_certificate_uuid=<?>,img_signature_hash_method=<?>,img_signature_key_type=<?>,img_use_agent=<?>,img_version=<?>,os_admin_user=<?>,os_command_line=<?>,os_distro=<?>,os_require_quiesce=<?>,os_secure_boot=<?>,os_skip_agent_inject_files_at_boot=<?>,os_skip_agent_inject_ssh=<?>,os_type=<?>,traits_required=<?>) that are not provided by the compute node supported_instances [[u'i686', u'kvm', u'hvm'], [u'x86_64', u'kvm', u'hvm']] or hypervisor version 2012000 do not match _instance_supported /usr/lib/python2.7/site-packages/nova/scheduler/filters/image_props_filter.py:103
2019-07-23 16:44:34.852 12561 DEBUG nova.scheduler.filters.image_props_filter [req-52638278-51b7-4768-836a-f70d8a8b016a ab573ba3ea014b778193b6922ffffe6d ee1865a76440481cbcff08544c7d580a - default default] Instance contains properties ImageMetaProps(hw_architecture=<?>,hw_auto_disk_config=<?>,hw_boot_menu=<?>,hw_cdrom_bus=<?>,hw_cpu_cores=<?>,hw_cpu_max_cores=<?>,hw_cpu_max_sockets=<?>,hw_cpu_max_threads=<?>,hw_cpu_policy=<?>,hw_cpu_realtime_mask=<?>,hw_cpu_sockets=<?>,hw_cpu_thread_policy=<?>,hw_cpu_threads=<?>,hw_device_id=<?>,hw_disk_bus=<?>,hw_disk_type=<?>,hw_firmware_type=<?>,hw_floppy_bus=<?>,hw_ipxe_boot=<?>,hw_machine_type=<?>,hw_mem_page_size=<?>,hw_numa_cpus=<?>,hw_numa_mem=<?>,hw_numa_nodes=<?>,hw_pointer_model=<?>,hw_qemu_guest_agent=<?>,hw_rescue_bus=<?>,hw_rescue_device=<?>,hw_rng_model=<?>,hw_scsi_model=<?>,hw_serial_port_count=<?>,hw_video_model=<?>,hw_video_ram=<?>,hw_vif_model=<?>,hw_vif_multiqueue_enabled=<?>,hw_vm_mode=<?>,hw_watchdog_action=<?>,img_bdm_v2=<?>,img_bittorrent=<?>,img_block_device_mapping=<?>,img_cache_in_nova=<?>,img_compression_level=<?>,img_config_drive=<?>,img_hide_hypervisor_id=<?>,img_hv_requested_version=<?>,img_hv_type='qemu',img_linked_clone=<?>,img_mappings=<?>,img_owner_id=<?>,img_root_device_name=<?>,img_signature=<?>,img_signature_certificate_uuid=<?>,img_signature_hash_method=<?>,img_signature_key_type=<?>,img_use_agent=<?>,img_version=<?>,os_admin_user=<?>,os_command_line=<?>,os_distro=<?>,os_require_quiesce=<?>,os_secure_boot=<?>,os_skip_agent_inject_files_at_boot=<?>,os_skip_agent_inject_ssh=<?>,os_type=<?>,traits_required=<?>) that are not provided by the compute node supported_instances [[u'i686', u'kvm', u'hvm'], [u'x86_64', u'kvm', u'hvm']] or hypervisor version 2012000 do not match _instance_supported /usr/lib/python2.7/site-packages/nova/scheduler/filters/image_props_filter.py:103
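As an aside, the "hypervisor version 2012000" in that log line is the packed libvirt/QEMU version number (major * 1,000,000 + minor * 1,000 + micro), so the node is reporting QEMU 2.12.0. A quick decoder, just to illustrate the packing (this helper is my own, not Nova code):

```python
def decode_hv_version(packed):
    # Split major * 1_000_000 + minor * 1_000 + micro back into parts.
    major, rest = divmod(packed, 1_000_000)
    minor, micro = divmod(rest, 1_000)
    return (major, minor, micro)

decode_hv_version(2012000)  # (2, 12, 0), i.e. QEMU 2.12.0
```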

On Tue, Jul 23, 2019 at 4:35 PM Matt Riedemann <mriedemos@gmail.com> wrote:
On 7/23/2019 8:57 AM, Massimo Sgaravatto wrote:
>
> When running Ocata I had as property of some images:
>
> hypervisor_type='QEMU'
>
> and this worked
>
> Now in Rocky:
>
> hypervisor_type='QEMU' --> doesn't work (i.e. all hypervisors are
> excluded by ImagePropertiesFilter)
> hypervisor_type='qemu' --> doesn't work (i.e. all hypervisors are
> excluded by ImagePropertiesFilter)
> hypervisor_type='kvm' --> works
>
> "openstack hypervisor list --long"  reports "QEMU" as Hypervisor Type
> for all compute nodes

Apparently the filter doesn't use the ComputeNode.hypervisor_type field
(which is what you see in the API/CLI output) to compare against the
img_hv_type property; it relies instead on the
ComputeNode.supported_instances tuples, which are reported differently
by the driver.

Can you enable debug in the scheduler so we can see this output when
you get the NoValidHost?

https://github.com/openstack/nova/blob/stable/rocky/nova/scheduler/filters/image_props_filter.py#L97
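Roughly, the matching works like this (a simplified sketch of the logic at the link above, not the exact Nova code): each supported_instances entry is an (arch, hypervisor_type, vm_mode) tuple, and a host passes only if at least one tuple is compatible with every image property that is actually set.

```python
def instance_supported(supported_instances, img_arch=None,
                       img_hv_type=None, img_vm_mode=None):
    # Accept the host if any (arch, hv_type, vm_mode) tuple matches
    # all of the image properties that were set; unset props match anything.
    for arch, hv_type, vm_mode in supported_instances:
        if img_arch and img_arch != arch:
            continue
        if img_hv_type and img_hv_type != hv_type:
            continue
        if img_vm_mode and img_vm_mode != vm_mode:
            continue
        return True
    return False

# The tuples from the debug log above:
supported = [['i686', 'kvm', 'hvm'], ['x86_64', 'kvm', 'hvm']]
instance_supported(supported, img_hv_type='qemu')  # False: no 'qemu' tuple
instance_supported(supported, img_hv_type='kvm')   # True
```

With only 'kvm' tuples reported, hypervisor_type='qemu' on the image can never match, which is consistent with what Massimo is seeing.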

Did you upgrade libvirt/qemu as well when you upgraded these nodes to
Rocky? I wonder if the supported instance hypervisor type reported by
the virt driver is kvm now rather than qemu even though the hypervisor
type reported in the API shows QEMU.

FWIW, this is the virt driver code that reports the supported_instances
information for the compute node, which the scheduler filter uses:

https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/driver.py#L5846

--

Thanks,

Matt