[ops][glance][nova] scheduling problem because of ImagePropertiesFilter

Massimo Sgaravatto massimo.sgaravatto at gmail.com
Wed Jul 24 06:37:03 UTC 2019


Melanie: I think this is indeed the problem!
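
For context, the setting in question is CONF.libvirt.virt_type on the compute
nodes; a minimal sketch of what this looks like in nova.conf (paths and values
are deployment-specific):

<verbatim>
# /etc/nova/nova.conf on a compute node
[libvirt]
# Since Rocky, a host configured with virt_type = kvm only advertises the
# 'kvm' hypervisor type (see supported_instances in the scheduler log above),
# so images carrying img_hv_type='qemu' are filtered out by
# ImagePropertiesFilter.
virt_type = kvm
</verbatim>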

But then, if I am not wrong, the note in:

https://docs.openstack.org/nova/rocky/admin/configuration/schedulers.html

<verbatim>
 Note

qemu is used for both QEMU and KVM hypervisor types.
</verbatim>

should be removed.
I can open a bug if you agree ...
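
For anyone hitting the same thing, a possible workaround on the image side,
assuming the guests are actually meant to run under KVM, is to update the
Glance property that Nova maps to img_hv_type (the image ID below is just a
placeholder):

<verbatim>
# hypervisor_type on the Glance image is exposed to Nova as img_hv_type
openstack image set --property hypervisor_type=kvm <image-id>
</verbatim>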

And maybe this is something worth mentioning in the release notes?

Thanks again for your help!

Cheers, Massimo

On Wed, Jul 24, 2019 at 2:11 AM melanie witt <melwittt at gmail.com> wrote:

> On 7/23/19 8:14 AM, Matt Riedemann wrote:
> > On 7/23/2019 9:50 AM, Massimo Sgaravatto wrote:
> >>
> >> This [*] is what appears in nova-scheduler after having enabled the
> >> debug.
> >>
> >> We performed a "yum update" so, yes, we also updated libvirt (now we
> >> are running v. 4.5.0)
> >>
> >> Thanks, Massimo
> >>
> >> [*]
> >>
> >> 2019-07-23 16:44:34.849 12561 DEBUG
> >> nova.scheduler.filters.image_props_filter
> >> [req-52638278-51b7-4768-836a-f70d8a8b016a
> >> ab573ba3ea014b778193b6922ffffe6d ee1865a76440481cbcff08544c7d580a -
> >> default default] Instance contains properties
> >>
> ImageMetaProps(hw_architecture=<?>,hw_auto_disk_config=<?>,hw_boot_menu=<?>,hw_cdrom_bus=<?>,hw_cpu_cores=<?>,hw_cpu_max_cores=<?>,hw_cpu_max_sockets=<?>,hw_cpu_max_threads=<?>,hw_cpu_policy=<?>,hw_cpu_realtime_mask=<?>,hw_cpu_sockets=<?>,hw_cpu_thread_policy=<?>,hw_cpu_threads=<?>,hw_device_id=<?>,hw_disk_bus=<?>,hw_disk_type=<?>,hw_firmware_type=<?>,hw_floppy_bus=<?>,hw_ipxe_boot=<?>,hw_machine_type=<?>,hw_mem_page_size=<?>,hw_numa_cpus=<?>,hw_numa_mem=<?>,hw_numa_nodes=<?>,hw_pointer_model=<?>,hw_qemu_guest_agent=<?>,hw_rescue_bus=<?>,hw_rescue_device=<?>,hw_rng_model=<?>,hw_scsi_model=<?>,hw_serial_port_count=<?>,hw_video_model=<?>,hw_video_ram=<?>,hw_vif_model=<?>,hw_vif_multiqueue_enabled=<?>,hw_vm_mode=<?>,hw_watchdog_action=<?>,img_bdm_v2=<?>,img_bittorrent=<?>,img_block_device_mapping=<?>,img_cache_in_nova=<?>,img_compression_level=<?>,img_config_drive=<?>,img_hide_hypervisor_id=<?>,img_hv_requested_version=<?>,img_hv_type='qemu',img_linked_clone=<?>,img_mappings=<?>,img_owner_id=<?>,img_root_device_name=<?>,img_signature=<?>,img_signature_certificate_uuid=<?>,img_signature_hash_method=<?>,img_signature_key_type=<?>,img_use_agent=<?>,img_version=<?>,os_admin_user=<?>,os_command_line=<?>,os_distro=<?>,os_require_quiesce=<?>,os_secure_boot=<?>,os_skip_agent_inject_files_at_boot=<?>,os_skip_agent_inject_ssh=<?>,os_type=<?>,traits_required=<?>)
>
> >> that are not provided by the compute node supported_instances
> >> [[u'i686', u'kvm', u'hvm'], [u'x86_64', u'kvm', u'hvm']] or hypervisor
> >> version 2012000 do not match _instance_supported
> >>
> /usr/lib/python2.7/site-packages/nova/scheduler/filters/image_props_filter.py:103
>
> >>
> >> 2019-07-23 16:44:34.852 12561 DEBUG
> >> nova.scheduler.filters.image_props_filter
> >> [req-52638278-51b7-4768-836a-f70d8a8b016a
> >> ab573ba3ea014b778193b6922ffffe6d ee1865a76440481cbcff08544c7d580a -
> >> default default] Instance contains properties
> >>
> ImageMetaProps(hw_architecture=<?>,hw_auto_disk_config=<?>,hw_boot_menu=<?>,hw_cdrom_bus=<?>,hw_cpu_cores=<?>,hw_cpu_max_cores=<?>,hw_cpu_max_sockets=<?>,hw_cpu_max_threads=<?>,hw_cpu_policy=<?>,hw_cpu_realtime_mask=<?>,hw_cpu_sockets=<?>,hw_cpu_thread_policy=<?>,hw_cpu_threads=<?>,hw_device_id=<?>,hw_disk_bus=<?>,hw_disk_type=<?>,hw_firmware_type=<?>,hw_floppy_bus=<?>,hw_ipxe_boot=<?>,hw_machine_type=<?>,hw_mem_page_size=<?>,hw_numa_cpus=<?>,hw_numa_mem=<?>,hw_numa_nodes=<?>,hw_pointer_model=<?>,hw_qemu_guest_agent=<?>,hw_rescue_bus=<?>,hw_rescue_device=<?>,hw_rng_model=<?>,hw_scsi_model=<?>,hw_serial_port_count=<?>,hw_video_model=<?>,hw_video_ram=<?>,hw_vif_model=<?>,hw_vif_multiqueue_enabled=<?>,hw_vm_mode=<?>,hw_watchdog_action=<?>,img_bdm_v2=<?>,img_bittorrent=<?>,img_block_device_mapping=<?>,img_cache_in_nova=<?>,img_compression_level=<?>,img_config_drive=<?>,img_hide_hypervisor_id=<?>,img_hv_requested_version=<?>,img_hv_type='qemu',img_linked_clone=<?>,img_mappings=<?>,img_owner_id=<?>,img_root_device_name=<?>,img_signature=<?>,img_signature_certificate_uuid=<?>,img_signature_hash_method=<?>,img_signature_key_type=<?>,img_use_agent=<?>,img_version=<?>,os_admin_user=<?>,os_command_line=<?>,os_distro=<?>,os_require_quiesce=<?>,os_secure_boot=<?>,os_skip_agent_inject_files_at_boot=<?>,os_skip_agent_inject_ssh=<?>,os_type=<?>,traits_required=<?>)
>
> >> that are not provided by the compute node supported_instances
> >> [[u'i686', u'kvm', u'hvm'], [u'x86_64', u'kvm', u'hvm']] or hypervisor
> >> version 2012000 do not match _instance_supported
> >>
> /usr/lib/python2.7/site-packages/nova/scheduler/filters/image_props_filter.py:103
>
> >>
> >
> > Yeah at this point I'm not sure what's going on but the driver is
> > reporting kvm now and your image is requesting qemu so that's why the
> > hosts are getting filtered out. I'm not sure why the upgrade of
> > libvirt/qemu would change what the driver is reporting now, but it's a
> > bit lower level than I'd know about off hand. Maybe some of the Red Hat
> > nova devs would know more about this or have seen it before.
>
> I'm not sure whether this is related, but this thread reminded me of a
> change that landed in Rocky where we started filtering hypervisor
> capabilities by the configured CONF.libvirt.virt_type:
>
> https://review.opendev.org/531347
>
> I didn't see mention so far of how CONF.libvirt.virt_type has been
> configured in this deployment. Is it set to 'kvm' or 'qemu'? If it's set
> to 'kvm', that would cause 'qemu' capabilities to be filtered out, when
> they would not have been prior to Rocky.
>
> Apologies if this was an unrelated tangent.
>
> Cheers,
> -melanie
>
>