On 02/05/2025 01:31, Nguyễn Hữu Khôi wrote:
Hello. If total_bytes_sec=300000000 then it will work.
Yeah, it's actually bytes, so `total_bytes_sec=300` is definitely not going to allow anything other than maybe DOS to function. `total_bytes_sec=300000000` is 300 MB/s combined read and write; that's SATA 2 speed, which is still pretty low, but it should at least be functional.

On the IOPS side, setting it below 120 or so will also break most modern workloads or severely degrade their performance. Even cheap 5400 RPM SMR archive hard drives can typically manage at least 250 4k random IOPS these days, so that would be my recommendation for the lower bound of your least performant flavor. For bandwidth, 300 MB/s should be OK, but I would not advise going much below that; maybe 150 MB/s combined will work for some guests if the workload is not I/O intensive like a DB.

Since you're tuning things, you should also consider setting
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.dis...

```
[libvirt]
disk_cachemodes=file=none,block=none,network=writeback
```

or

```
[libvirt]
disk_cachemodes=file=none,block=none,network=none
```

writeback and none will both flush data when explicitly asked to, so the choice just comes down to whether you want to hide some of the network latency with client-side (QEMU) caching or not. Counter-intuitively, none tends to work better for really I/O- or bandwidth-bound cases because, especially on Windows, the backpressure from the storage system stops the guest from building up a large queue of requests to the storage backend.

You should also evaluate whether virtio-scsi, set by hw_disk_bus=scsi and hw_scsi_model=virtio-scsi, would be better for your use case. The main reasons to use virtio-scsi in the past were trim support and the ability to attach more than 20 or so volumes to one VM. Trim should work with virtio-blk if you have a new enough QEMU/machine type, but it was enabled in virtio-scsi several years earlier.
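For reference, the arithmetic behind the working value is just the MB/s-to-bytes/s conversion (a quick sketch, no OpenStack involved):

```shell
# total_bytes_sec is expressed in bytes per second,
# so a 300 MB/s combined read/write cap works out to:
MB_PER_SEC=300
echo $((MB_PER_SEC * 1000 * 1000))
```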
On Fri, May 2, 2025, 7:14 AM Eugen Block <eblock@nde.ag> wrote:
For me, the *_bytes_sec qos properties don't work, while *_iops_sec seem to work (I haven't checked if IOPS are really limited), but at least the VMs are booting successfully.
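For anyone trying to reproduce this, a minimal sketch of the kind of frontend QoS setup under discussion, using only the IOPS properties that seem to work (the spec name `iops-limit`, the volume type name `my-type`, and the limit values are placeholders; this assumes the standard openstack CLI and a working deployment):

```
# Create a frontend-consumer QoS spec with only IOPS limits,
# since the *_bytes_sec properties appear to hang boot here.
openstack volume qos create --consumer front-end \
    --property read_iops_sec=2000 \
    --property write_iops_sec=2000 \
    iops-limit

# Associate it with an existing volume type (placeholder name).
openstack volume qos associate iops-limit my-type
```

Note that the association only applies to volumes created after it is set; as observed in this thread, existing attached volumes may need to be detached and reattached (or recreated) before a changed or removed QoS takes effect.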
Zitat von Eugen Block <eblock@nde.ag>:
> I can confirm this for Antelope as well. The VM is stuck in "GRUB is
> loading" or something. I haven't found any errors yet in cinder or
> nova, but I haven't looked too closely yet.
> And as OP already mentioned, disassociating the qos from the volume
> type isn't enough; even without a qos associated, the VM is stuck
> booting. I haven't searched launchpad yet for existing bugs, will do
> that next week.
>
>
> Zitat von Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com>:
>
>> I set total_bytes_sec=300 so it causes this error. But when I remove this
>> parameter, this error still occurs.
>>
>> Nguyen Huu Khoi
>>
>>
>> On Wed, Apr 30, 2025 at 10:31 PM Nguyễn Hữu Khôi <nguyenhuukhoinw@gmail.com>
>> wrote:
>>
>>> Hello.
>>> I created a volume type with a qos which uses the frontend consumer, then I
>>> created an instance booted from a volume; however, it got stuck while booting
>>> from the hard disk.
>>>
>>> What could it be? Please give me some ideas on this. Thank you.
>>> Env:
>>>
>>> Openstack 2025.1
>>> KollaAnsible 2025.1 master
>>> Ubuntu 2024.02
>>>
>>> Nguyen Huu Khoi