[nova][gate] status of some gate bugs

Sean Mooney smooney at redhat.com
Thu Mar 19 14:13:19 UTC 2020

On Thu, 2020-03-19 at 08:25 +0000, Arnaud Morin wrote:
> Hey Melanie, all,
> About OVH case (company I work for).
> We are digging into the issue.
> First thing, we no longer limit the IOPS. I don't remember when we
> removed this limit, but this is not new.
> However, the hypervisors are quite old now, and our policy on these old
> servers was to use some swap.
> And we think that the host may slow down when overcommitting on RAM
> (swapping on disk).
> Anyway, we also know that we can have better latency when upgrading
> QEMU. We are currently in the middle of testing a new QEMU version, I
> will push to upgrade your hypervisors first, so we will see if the
> latency on QEMU side can help the gate.
> Finally, we plan to change the hardware and stop doing overcommit on RAM
> (and swapping on disk). However, I have no ETA about that, but for sure,
> this will improve the IOPS.
If you stop doing overcommit on RAM, can I also suggest that you enable hugepages
on the hosts running the VMs? If this is dedicated hardware for the CI then you should not need to do
live migration of these VMs since they are short lived (max 3-4 hours), so you can just disable the host,
let it drain, do your maintenance and enable it again. When there is no oversubscription anyway,
enabling hugepages will not affect capacity, but it will give a 30-40% performance boost.
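A rough sketch of what that could look like on a libvirt/KVM host and in nova; the host name `compute-01`, the page count `1024`, and the flavor name `ci-flavor` are placeholders, not anything OVH actually uses:

```shell
# Drain the host before maintenance: stop scheduling new CI VMs onto it,
# wait for the short-lived guests (max 3-4 hours) to finish, do the
# maintenance, then re-enable it.
openstack compute service set --disable \
    --disable-reason "maintenance" compute-01 nova-compute
# ... maintenance happens here ...
openstack compute service set --enable compute-01 nova-compute

# Reserve 2MiB hugepages on the host. 1024 is a placeholder; size the pool
# to the RAM you dedicate to guests, since hugepages are never swapped.
echo "vm.nr_hugepages = 1024" > /etc/sysctl.d/80-hugepages.conf
sysctl -p /etc/sysctl.d/80-hugepages.conf

# Have the CI flavor request hugepage-backed guest memory via the
# hw:mem_page_size flavor extra spec.
openstack flavor set ci-flavor --property hw:mem_page_size=large
```

Guests booted from that flavor then get their memory backed by the host hugepage pool rather than normal 4KiB pages.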

Anyway, it's just something to consider; it won't allow or prevent any testing we do today, but in addition
to the new QEMU version it should improve the performance of the OVH nodes. Not that they are generally slow:
OVH used to perform quite well even on older hardware. But hugepages will improve all guest memory access
times, and combined with not swapping to disk that should result in a marked improvement in overall performance
and CI job time.

If you don't want to enable hugepages that's fine too, but since you are considering making changes to the hosts anyway,
I thought I would ask.
> I'll keep you in touch.
> Cheers,

More information about the openstack-discuss mailing list