On 22/11/2024 12:31, Gilles Mocellin wrote:
On 2024-11-22 11:22, Tobias Urdin wrote:
Hello,
Hello Tobias,
1. You're probably stuck on copying memory because the instance is dirtying memory faster than you can migrate or page-fault it over.
This can be observed with virsh domjobinfo <instance> on the compute node. We tune the live migration downtime stepping in nova.conf to work around that, and after a while we just force the migration to complete, but your use case might not allow that.
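For reference, the downtime stepping mentioned above is controlled by options in the [libvirt] section of nova.conf. A minimal sketch (the values here are illustrative assumptions, not recommendations; tune them for your environment):

```ini
[libvirt]
# Maximum permitted downtime (ms) at the final switchover; nova steps
# up to this value over live_migration_downtime_steps increments,
# waiting live_migration_downtime_delay seconds between steps.
live_migration_downtime = 500
live_migration_downtime_steps = 10
live_migration_downtime_delay = 75

# Optionally force completion (or abort) if the migration never converges.
live_migration_completion_timeout = 800
live_migration_timeout_action = force_complete
```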
Yes, this is the case, but I thought live_migration_permit_auto_converge would solve that. And the other problem is the libvirt stacktrace when I force the completion...
In general, while live_migration_permit_auto_converge can help, it is much less effective than live_migration_permit_post_copy. I don't know if you are using 1G hugepages? Live migration is almost impossible with 1G hugepages without post-copy, because writing a single bit of a 1G page requires the entire 1G page to be transferred again. Post-copy makes all writes go to the destination instance after an initial memory copy, and reads page-fault across the network as pages are needed while it copies them in the background. Auto-converge just adds micro-pauses that briefly stop the guest CPU cores to allow the migration to make progress. In other words, auto-converge hopes that if we slow CPU execution enough the migration will eventually complete, but it cannot actually guarantee that it ever will.
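Both switches live in the [libvirt] section of nova.conf. A minimal sketch (note that post-copy also needs userfaultfd support in the kernel and a recent enough QEMU/libvirt; verify on your deployment before relying on it):

```ini
[libvirt]
# Prefer post-copy: after the initial memory copy, execution switches to
# the destination and remaining pages are pulled over on demand, so the
# migration is guaranteed to finish.
live_migration_permit_post_copy = true

# Auto-converge can stay enabled as a fallback; it only throttles guest
# CPUs and cannot guarantee convergence on its own.
live_migration_permit_auto_converge = true
```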
2. Not sure, since it's using the Linux bridge agent, which has been moved to experimental; you probably want to plan a migration away from it. Look into enable_qemu_monitor_announce_self in nova.conf, which makes nova do a QEMU announce_self in post-migration that sends out RARP frames after the migration is complete, in case there is a race condition between the RARP frames being sent out and the port being bound, which is the case for us when using the OVS agent.
Mmm, that's interesting. I didn't see that parameter. I'm trying to have a reproducible test and will try enable_qemu_monitor_announce_self. I'll come back here if I have a clear answer.
enable_qemu_monitor_announce_self is a workaround option in nova. It should work with Linux bridge too, though I don't think it's required there, because there are no OpenFlow rules etc. to configure. It won't hurt, however, so it's worth enabling if you are having downtime issues.
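For completeness, the option lives in the [workarounds] section of nova.conf on the compute nodes. A minimal sketch:

```ini
[workarounds]
# After a successful live migration, ask QEMU (via announce_self) to
# re-send RARP frames so switches learn the guest's new location, in
# case the frames sent during the migration raced with port binding.
enable_qemu_monitor_announce_self = true
```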
/Tobias
Thank you, Tobias.
By the way, you asked about `nova host-evacuate-live` and OSC. That command is a client-only implementation that we don't intend to ever port; we no longer support it. It was deprecated when the novaclient CLI was deprecated, and it will be removed when the python-novaclient deliverable is removed. It's effectively just a for loop over the server list for a given host, with no error handling. Operators should not use it or build tooling around it. The same applies to the non-live version; it should not be used.
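If you do need that behaviour, the loop is easy to write yourself with the OSC commands, and then you control the error handling. A hedged sketch, assuming the host name `compute-01` and that your OSC version supports `openstack server migrate --live-migration --wait` (check `openstack server migrate --help` first):

```shell
#!/bin/sh
# Rough equivalent of `nova host-evacuate-live`, but stopping on the
# first failure instead of blindly continuing.
evacuate_live() {
    # $1 is the source compute host; list server UUIDs bound to it.
    for uuid in $(openstack server list --host "$1" -f value -c ID); do
        if ! openstack server migrate --live-migration --wait "$uuid"; then
            echo "live migration of $uuid failed; aborting" >&2
            return 1
        fi
    done
}
```

Usage: `evacuate_live compute-01`. Because each migration is awaited and checked, a stuck or failed migration stops the loop rather than piling up further migrations behind it.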