Hello,

In my company (CloudFerro) we have developed two features related to nova ephemeral storage (with the libvirt virt driver): "SPDK-based ephemeral storage backend" and "Multiple ephemeral storage backend handling" (both described below). Would these features be appropriate to add upstream? If so, should I start by creating a blueprint and proposing a spec for each of them, so it is clearer what we want to introduce? Not both at once, of course, since some code makes one work with the other; for us it would probably be best to upstream "SPDK-based ephemeral storage backend" first and "Multiple ephemeral storage backend handling" second.

Description of SPDK-based ephemeral storage backend:

We add a new possible value for [libvirt]/images_type in nova.conf: spdk_lvol. If this value is set, the local disks of instances are handled as logical volumes (lvols) of a locally running SPDK instance (on the same compute host as the VM); see https://spdk.io/doc/logical_volumes.html#lvol for documentation on this subject. These lvols are essentially slices of a local NVMe disk managed by SPDK. We create and manage the lvols by making JSON-RPC calls (https://spdk.io/doc/jsonrpc.html) from nova-compute to the SPDK instance, using the provided python library (https://github.com/spdk/spdk/blob/master/python/README.md). We attach the lvols to instances by exposing them as vhost-blk devices (see https://spdk.io/doc/vhost.html) and specifying them as disks with source_type='vhostuser'. This method of exposing local NVMe storage allows for much better I/O performance than exposing it through the local filesystem.

We currently have creating, deleting, cold-migrating, shelving, snapshotting and unshelving of VMs working with this storage backend. Live migration does not work yet, but we plan to add it. This feature includes changes only to nova.
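To make the RPC flow concrete, here is a minimal sketch of the two JSON-RPC requests nova-compute would send to the local SPDK target: one to carve an lvol out of an lvolstore, and one to expose it as a vhost-blk controller. The method names (bdev_lvol_create, vhost_create_blk_controller) come from the SPDK JSON-RPC documentation, but the parameter set should be checked against the SPDK version in use; the lvolstore name, lvol name and size are illustrative only, and real code would send these via the SPDK python client rather than printing them.

```python
import json


def jsonrpc_request(method, params, request_id=1):
    """Build a JSON-RPC 2.0 request as understood by the SPDK target."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}


# Create an lvol for the instance's ephemeral disk on an existing
# lvolstore ("lvs0" is an illustrative name); size is given in MiB here.
create_lvol = jsonrpc_request(
    "bdev_lvol_create",
    {"lvs_name": "lvs0", "lvol_name": "instance-0001-disk", "size_in_mib": 10240},
)

# Expose the lvol bdev as a vhost-blk controller, so QEMU can attach it
# through a vhost-user socket (source_type='vhostuser' in the disk config).
create_ctrlr = jsonrpc_request(
    "vhost_create_blk_controller",
    {"ctrlr": "vhost.instance-0001-disk", "dev_name": "lvs0/instance-0001-disk"},
    request_id=2,
)

print(json.dumps(create_lvol))
print(json.dumps(create_ctrlr))
```

The vhost controller name becomes the vhost-user socket the VM connects to, which is what lets the guest do I/O against the NVMe-backed lvol without going through the host filesystem.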
Description of Multiple ephemeral storage backend handling:

We add a new configuration option, [libvirt]/supported_image_types, to the nova.conf of nova-compute, and change the meaning of [libvirt]/images_type to be the default image type. If the libvirt:image_type extra spec is specified in a flavor, the VM is scheduled on a compute node that has the corresponding image type in its [libvirt]/supported_image_types. We use potentially multiple DISK_GB resource providers and construct appropriate request groups to handle this scheduling.

The extra spec is also used to fill the driver_info.libvirt.image_type field of this VM's BDMs with destination_type='local' (driver_info is a new JSON-serialized field of the BDM). Then, on the compute side, if a BDM specifies this field we use its value instead of [libvirt]/images_type to decide which imagebackend to use for it.

This method of handling multiple backends only works after administrators enable it by setting libvirt:image_type on all their flavors and running nova-manage commands that update existing VMs. Without enabling it, everything works as before. This feature includes changes to nova and some new traits in os-traits (one per possible value of [libvirt]/images_type, plus one extra).

Best regards,
Karol Klimaszewski
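As a rough illustration of the scheduling side, the sketch below translates a flavor's libvirt:image_type extra spec into a placement-style request group: DISK_GB resources plus a required trait that only the DISK_GB resource provider for the matching backend would expose. The trait names and the helper are hypothetical, not the actual names the proposal would add to os-traits.

```python
# Hypothetical trait names: the proposal adds one os-traits trait per
# possible [libvirt]/images_type value; these strings are illustrative only.
IMAGE_TYPE_TRAITS = {
    "qcow2": "COMPUTE_EPHEMERAL_BACKEND_QCOW2",
    "raw": "COMPUTE_EPHEMERAL_BACKEND_RAW",
    "lvm": "COMPUTE_EPHEMERAL_BACKEND_LVM",
    "spdk_lvol": "COMPUTE_EPHEMERAL_BACKEND_SPDK_LVOL",
}


def disk_request_group(extra_specs, disk_gb):
    """Build a placement-style request group for the instance's local disk.

    If the flavor carries a libvirt:image_type extra spec, require the
    trait for that backend, so only compute nodes listing the backend in
    [libvirt]/supported_image_types (and exposing the trait on the
    matching DISK_GB provider) can satisfy the group.
    """
    group = {"resources": {"DISK_GB": disk_gb}, "required": []}
    image_type = extra_specs.get("libvirt:image_type")
    if image_type is not None:
        group["required"].append(IMAGE_TYPE_TRAITS[image_type])
    return group


print(disk_request_group({"libvirt:image_type": "spdk_lvol"}, 80))
print(disk_request_group({}, 80))  # no extra spec: no trait, default backend applies
```

When the extra spec is absent the group carries no required trait, which matches the described behavior: nothing changes until administrators opt in by setting libvirt:image_type on their flavors.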