Regarding Split network plane for live migration

Sean Mooney smooney at
Thu Mar 26 15:33:11 UTC 2020

On Thu, 2020-03-26 at 20:55 +0600, Md. Hejbul Tawhid MUNNA wrote:
> Hi,
> We are using OpenStack Rocky. We are observing that the internal network is
> used while we do live migration. Our internal network interface is 1G, so the
> interface gets saturated and the migration never completes.
> (nova server-migration-list) shows 'Remaining Memory Bytes' increasing and
> decreasing but the migration never completes.
> We have a 10G interface for the public/external network. I have tried the
> following solution but no luck; the internal network is still being used.
> Some Information:
> my_ip = Internal_network_IP
> [libvirt]
> live_migration_uri=qemu+ssh://nova@%s.external
> /system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1
On later releases we generally recommend using live_migration_inbound_addr instead.
Note that this option is ignored if you set live_migration_tunnelled.

The other general advice we have for live migration is to set max_concurrent_live_migrations to 1.
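A minimal nova.conf sketch of both points, assuming each compute host has an address on the 10G network (10.0.10.5 here is a placeholder for the local host's 10G address):

```ini
[DEFAULT]
# serialise migrations so a single transfer gets the full link bandwidth
max_concurrent_live_migrations = 1

[libvirt]
# address on the 10G network of *this* compute host; the source host
# connects here, so migration traffic uses this NIC instead of the 1G one
live_migration_inbound_addr = 10.0.10.5
# inbound_addr is ignored when tunnelling is enabled, so keep it off
live_migration_tunnelled = false
```

Each compute host sets its own live_migration_inbound_addr; nova passes the destination's value to the source when the migration starts.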

If you have an NFV workload or something that is actively writing to memory, it is entirely possible for the migration to never complete. If the rate of change of the dirty pages in the guest exceeds the available bandwidth, the migration will not finish.
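A back-of-the-envelope sketch of that condition (the rates below are illustrative, not measured values): pre-copy live migration only makes forward progress while the link can drain dirty pages faster than the guest produces them.

```python
def migration_converges(dirty_rate_mbps: float, link_mbps: float) -> bool:
    """Pre-copy live migration converges only while the network can copy
    dirty memory faster than the guest re-dirties it."""
    return link_mbps > dirty_rate_mbps

# A busy guest dirtying ~1.5 Gbit/s of pages over a 1 Gbit/s NIC:
# remaining memory oscillates and the migration never finishes.
print(migration_converges(1500, 1000))   # False on the 1G internal network
print(migration_converges(1500, 10000))  # True once traffic moves to the 10G NIC
```

This is why 'Remaining Memory Bytes' can rise and fall forever on the 1G interface: each pre-copy pass transfers less than the guest dirties in the same interval.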

There are ways to help with this: for example, you can enable live_migration_permit_auto_converge and live_migration_permit_post_copy.

Auto-convergence will, after a short timeout, slowly start reducing the execution time of the guest to try to allow forward progress to be made. Post-copy will, after a delay, copy enough memory from the source to the destination to allow execution to start on the destination while memory is still being copied from the source.

As long as you understand the implications of auto-convergence (throttling the guest) and post-copy (remote execution over the network), I would generally recommend turning both on to help in your case, but try using live_migration_inbound_addr first.
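If those trade-offs are acceptable, both mechanisms can be permitted in nova.conf like this (a sketch; libvirt and QEMU on both hosts must be new enough to support the corresponding migration capabilities):

```ini
[libvirt]
# throttle the guest's vCPUs when pre-copy is not making progress
live_migration_permit_auto_converge = true
# allow switching to post-copy so execution can start on the destination;
# the guest then depends on the network link until the copy finishes
live_migration_permit_post_copy = true
```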

> Please let us know if any further information is required.
> Please advise us on how we can solve the issue.
> Regards,
> Munna

More information about the openstack-discuss mailing list