[nova] iothread support with Libvirt

Sean Mooney smooney at redhat.com
Thu Jan 6 14:10:33 UTC 2022

On Thu, 2022-01-06 at 00:12 -0600, Eric K. Miller wrote:
> Hi,
> I haven't found anything that indicates Nova supports adding iothreads
> parameters to the Libvirt XML file.  I had asked various performance
> related questions a couple years back, including asking if iothreads
> were available, but I didn't get any response (so assumed the answer was
> no).  So I'm just checking again to see if this has been a consideration
> to help improve a VM's storage performance - specifically with extremely
> high-speed storage in the host.
Hi, up until recently the advice from our virt team was that iothreads were not really needed
for OpenStack; however, in the last 6 weeks they have actually asked us to consider enabling them.

So work will be happening in qemu/libvirt to always create at least one iothread going forward and
affinitize it to the same set of cores as the emulator threads by default.
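In libvirt domain XML terms, that default would look roughly like the following. This is a hand-written sketch, not nova output; the cpuset values and disk path are made up:

```xml
<domain type='kvm'>
  <!-- create one iothread -->
  <iothreads>1</iothreads>
  <cputune>
    <!-- pin the iothread to the same cores as the emulator threads -->
    <emulatorpin cpuset='0-1'/>
    <iothreadpin iothread='1' cpuset='0-1'/>
  </cputune>
  <devices>
    <disk type='file' device='disk'>
      <!-- bind this disk's I/O to iothread 1 -->
      <driver name='qemu' type='qcow2' iothread='1'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
  </devices>
</domain>
```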

We don't have a downstream RFE currently filed for iothreads specifically, but we do have one for virtio-scsi multiqueue support.
I was proposing that we also enable iothread support as part of that work, but we have not currently prioritized
it internally for any upstream release. Enabling support for iothreads and virtio multiqueue together makes a lot of sense
to me. My understanding is that without iothreads, multiqueue virtio-scsi does not provide as much of
a performance boost as it does with them.
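For reference, the combination of multiqueue virtio-scsi plus an iothread maps to libvirt XML along these lines (again a hand-written sketch; the queue count and iothread id are arbitrary):

```xml
<controller type='scsi' index='0' model='virtio-scsi'>
  <!-- multiqueue virtio-scsi handing I/O off to a dedicated iothread -->
  <driver queues='4' iothread='1'/>
</controller>
```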

If you or others have capacity to work on this I would be happy to work on a spec with ye to enable it.
Effectively what I was planning to propose, if we got around to it, is adding a new config option,
cpu_iothread_set, which would default to the same value as cpu_shared_set.
This would ensure that, without any config updates, all existing deployments will start benefiting
from iothreads, while still allowing you to dedicate a set of cores to running the iothreads separately from cpu_shared_set
if you want this to also benefit floating VMs, not just pinned VMs.
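In nova.conf terms the proposal would look something like this (cpu_iothread_set is hypothetical and does not exist today; the core ranges are illustrative):

```ini
[compute]
# existing option: cores that unpinned vCPUs and emulator threads float over
cpu_shared_set = 0-3
# proposed option (not implemented): cores for iothreads;
# would default to the value of cpu_shared_set if unset
cpu_iothread_set = 4-5
```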

In addition to that, a new flavor extra spec/image property would be added, similar to hw:cpu_emulator_threads.

I'm not quite sure how that extra spec should work, but one option is this:
hw:cpu_iothread_policy would support the same values as hw:cpu_emulator_threads, where
hw:cpu_iothread_policy=shared would allocate an iothread that floats over cpu_iothread_set (which is the same as cpu_shared_set by default)
and hw:cpu_iothread_policy=isolate would allocate an additional iothread from cpu_dedicated_set.
hw:cpu_iothread_policy=shared would be the default behavior if cpu_shared_set or cpu_iothread_set was defined in the config and no flavor extra
spec or image property was defined. Basically, all VMs would have at least 1 iothread that floated over the shared pool if a shared pool was configured
on the host.
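As a sketch, the two hypothetical flavor settings under this option would look like (hw:cpu_iothread_policy is not an implemented extra spec today):

```
# proposed flavor extra specs (illustrative only):
hw:cpu_iothread_policy=shared    # iothread floats over cpu_iothread_set
hw:cpu_iothread_policy=isolate   # iothread pinned to a core from cpu_dedicated_set
```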

That is option (a).

Option (b) would be to additionally support hw:cpu_iothread_count, so you could ask for n iothreads, either from the shared/iothread set or the dedicated set, depending on the value of hw:cpu_iothread_policy.

I'm not really sure there is a need for more than 1 iothread. My understanding is that once you have at least 1 there are diminishing returns:
more iothreads will improve your performance provided you have multiple disks/volumes attached, but not as much as having the initial iothread does.

Is this something you would be willing to work on and implement?
I would be happy to review any spec in this area, and I can bring it up downstream again, but I can't commit to working on this in the Z release.
This would require some minor RPC changes to ensure live migration works properly, as the iothread set or cpu shared set could be different on different
hosts, but beyond that the feature is actually pretty simple to enable.
> Or is there a way to add iothread-related parameters without Nova being
> involved (such as modifying a template)?
No, there is no way to enable them out of band of nova today.
You technically could wrap the qemu binary with a script that injects parameters, but that obviously would not be supported upstream.
It would be a workaround if you really needed it, though.

https://review.opendev.org/c/openstack/devstack/+/817075 is an example of such a script.
It breaks AppArmor and SELinux, but you could probably make it work with enough effort,
although I would suggest just implementing the feature upstream and doing a downstream backport instead.
> Thanks!
> Eric

More information about the openstack-discuss mailing list