Secure Boot VM issues (libvirt / SMM) | Secure boot requires SMM feature enabled

Sean Mooney smooney at redhat.com
Wed Jan 19 17:31:51 UTC 2022


On Wed, 2022-01-19 at 14:21 +0000, Imran Hussain wrote:
> Hi,
> 
> Deployed Wallaby on Ubuntu 20.04 nodes. Having issues with libvirt XML 
> being incorrect, I need the smm bit (<smm state='on'/>) and it isn't 
> being added to the XML. Anyone seen this before? Or any ideas? More info 
> below...
> 
> Error message:
> : libvirt.libvirtError: unsupported configuration: Secure boot requires 
> SMM feature enabled
> 
> Versions:
> libvirt version: 6.0.0, package: 0ubuntu8.15
> QEMU emulator version 4.2.1 (Debian 1:4.2-3ubuntu6.18)
> Nova 23.1.1 (deployed via kolla, so 
> kolla/ubuntu-source-nova-compute:wallaby is the image)
> ovmf 0~20191122.bd85bf54-2ubuntu3.3
> 
> Context:
> https://specs.openstack.org/openstack/nova-specs/specs/wallaby/implemented/allow-secure-boot-for-qemu-kvm-guests.html
> 
> Image metadata:
> 
> hw_firmware_type: uefi
> hw_machine_type: q35
> os_secure_boot: required

ok, those do seem to be aligned with the documentation
https://docs.openstack.org/nova/latest/admin/secure-boot.html
however, in addition to those options, the UEFI firmware image used by qemu (which is provided by the ovmf package) also needs
to be a secure boot capable image
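as a quick way to check that, you can look at the firmware descriptor JSON files qemu ships under /usr/share/qemu/firmware/. a small sketch of what to look for (the helper name is my own, and the descriptor here is a trimmed copy of the one quoted later in this mail, not something read from your host):

```python
import json

# Hypothetical helper: given the text of a qemu firmware descriptor
# (a JSON file from /usr/share/qemu/firmware/), report whether the
# image advertises secure boot, whether it requires SMM, and whether
# the Microsoft keys are enrolled.
def firmware_capabilities(descriptor_text):
    desc = json.loads(descriptor_text)
    features = set(desc.get("features", []))
    return {
        "secure_boot": "secure-boot" in features,
        "requires_smm": "requires-smm" in features,
        "keys_enrolled": "enrolled-keys" in features,
    }

# Trimmed-down copy of 40-edk2-x86_64-secure-enrolled.json from the report.
sample = """
{
  "description": "UEFI firmware for x86_64, with Secure Boot and SMM",
  "interface-types": ["uefi"],
  "features": ["acpi-s3", "amd-sev", "enrolled-keys",
               "requires-smm", "secure-boot", "verbose-dynamic"]
}
"""

print(firmware_capabilities(sample))
```

note that "requires-smm" in the descriptor is exactly why libvirt insists on the smm feature when this image is selected with secure boot.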

what is failing here is the System Management Mode (SMM) feature.

when os_secure_boot is set
we define the "secure" attribute on the loader element:

https://github.com/openstack/nova/blob/7aa3a0f558ddbcac3cb97a7eef58cd878acc3f7a/nova/virt/libvirt/config.py#L2871-L2873

based on the libvirt documentation
https://libvirt.org/formatdomain.html#hypervisor-features

smm should be enabled by default:

smm

    Depending on the state attribute (values on, off, default on) enable or disable System Management Mode. Since 2.1.0

    Optional sub-element tseg can be used to specify the amount of memory dedicated to SMM's extended TSEG. That offers a fourth option size apart
from the existing ones (1 MiB, 2 MiB and 8 MiB) that the guest OS (or rather loader) can choose from. The size can be specified as a value of that
element, optional attribute unit can be used to specify the unit of the aforementioned value (defaults to 'MiB'). If set to 0 the extended size is not
advertised and only the default ones (see above) are available.

    If the VM is booting you should leave this option alone, unless you are very certain you know what you are doing.

    This value is configurable due to the fact that the calculation cannot be done right with the guarantee that it will work correctly. In QEMU, the
user-configurable extended TSEG feature was unavailable up to and including pc-q35-2.9. Starting with pc-q35-2.10 the feature is available, with
default size 16 MiB. That should suffice for up to roughly 272 vCPUs, 5 GiB guest RAM in total, no hotplug memory range, and 32 GiB of 64-bit PCI MMIO
aperture. Or for 48 vCPUs, with 1TB of guest RAM, no hotplug DIMM range, and 32GB of 64-bit PCI MMIO aperture. The values may also vary based on the
loader the VM is using.

    Additional size might be needed for significantly higher vCPU counts or increased address space (that can be memory, maxMemory, 64-bit PCI MMIO
aperture size; roughly 8 MiB of TSEG per 1 TiB of address space) which can also be rounded up.

    Due to the nature of this setting being similar to "how much RAM should the guest have" users are advised to either consult the documentation of
the guest OS or loader (if there is any), or test this by trial-and-error changing the value until the VM boots successfully. Yet another guiding
value for users might be the fact that 48 MiB should be enough for pretty large guests (240 vCPUs and 4TB guest RAM), but it is on purpose not set as
default as 48 MiB of unavailable RAM might be too much for small guests (e.g. with 512 MiB of RAM).

    See Memory Allocation for more details about the unit attribute. Since 4.5.0 (QEMU only)
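for reference, the tseg sizing described above is expressed like this in the domain XML (a hand-written sketch; the 48 MiB value is the "pretty large guests" example from the docs, not a recommendation for this guest):

```
<features>
  <smm state='on'>
    <!-- only needed for very large guests; see the sizing
         guidance quoted above before changing this -->
    <tseg unit='MiB'>48</tseg>
  </smm>
</features>
```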


so my guess is that you are either missing the secure boot capable ovmf image on the host, or there is a bug in your libvirt and smm is not being enabled by
default.
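if it is the latter, you can compare against what libvirt needs to see. for it to accept secure="yes" on the loader, the features block of the domain XML has to carry smm, roughly like this (a hand-written sketch of the expected result, not output from your host):

```
<features>
  <acpi/>
  <apic/>
  <!-- this is the element missing from the XML in your log;
       libvirt raises "Secure boot requires SMM feature enabled"
       when the loader has secure="yes" but smm is not on -->
  <smm state='on'/>
</features>
```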

> os_hidden: false
> 
> hw_disk_bus: scsi
> hw_qemu_guest_agent: yes
> hw_scsi_model: virtio-scsi
> hw_video_model: virtio
> os_require_quiesce: yes
> os_secure_boot: required
> os_hidden: false
> 
> XML snippets taken from nova-compute.log:
>    <sysinfo type="smbios">
>      <system>
>        <entry name="manufacturer">OpenStack Foundation</entry>
>        <entry name="product">OpenStack Nova</entry>
>        <entry name="version">23.1.1</entry>
>        <entry name="serial">2798e3fe-ffae-4c26-955b-ef150b849561</entry>
>        <entry name="uuid">2798e3fe-ffae-4c26-955b-ef150b849561</entry>
>        <entry name="family">Virtual Machine</entry>
>      </system>
>    </sysinfo>
>    <os>
>      <type machine="q35">hvm</type>
>      <loader type="pflash" readonly="yes" 
> secure="yes">/usr/share/OVMF/OVMF_CODE.ms.fd</loader>
>      <nvram template="/usr/share/OVMF/OVMF_VARS.ms.fd"/>
>      <boot dev="cdrom"/>
>      <smbios mode="sysinfo"/>
>    </os>
>    <features>
>      <acpi/>
>      <apic/>
>    </features>
> 
> Other info:
> # cat /usr/share/qemu/firmware/40-edk2-x86_64-secure-enrolled.json
> {
>      "description": "UEFI firmware for x86_64, with Secure Boot and SMM, 
> SB enabled, MS certs enrolled",
>      "interface-types": [
>          "uefi"
>      ],
>      "mapping": {
>          "device": "flash",
>          "executable": {
>              "filename": "/usr/share/OVMF/OVMF_CODE.ms.fd",
>              "format": "raw"
>          },
>          "nvram-template": {
>              "filename": "/usr/share/OVMF/OVMF_VARS.ms.fd",
>              "format": "raw"
>          }
>      },
>      "targets": [
>          {
>              "architecture": "x86_64",
>              "machines": [
>                  "pc-q35-*"
>              ]
>          }
>      ],
>      "features": [
>          "acpi-s3",
>          "amd-sev",
>          "enrolled-keys",
>          "requires-smm",
>          "secure-boot",
>          "verbose-dynamic"
>      ],
>      "tags": [
> 
>      ]
> }
> 
> 
> 




More information about the openstack-discuss mailing list