Hi,
I already tried it, but after that it is still using the internal interface bandwidth.
Please find the additional configuration:
[libvirt]
live_migration_inbound_addr=external_network_IP
#live_migration_tunnelled=false
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED
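As a point of comparison, a minimal sketch of a Rocky-era [libvirt] section that should steer migration traffic onto the external interface might look like the following; external_network_IP is a placeholder for this compute node's address on the 10G network, and the key point is that tunnelled migration must be off, since tunnelled data flows over the libvirtd connection and ignores live_migration_inbound_addr:

```
[libvirt]
# Address on the 10G external network that this compute node
# listens on for incoming migration traffic.
live_migration_inbound_addr = external_network_IP
# Must actually be set to false (not left commented out):
# tunnelled migration bypasses live_migration_inbound_addr.
live_migration_tunnelled = false
```

Note the commented-out `#live_migration_tunnelled=false` line in the snippet above has no effect; the option needs to be uncommented for the setting to apply.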
On Thu, 2020-03-26 at 21:12 +0600, Md. Hejbul Tawhid MUNNA wrote:

live_migration_flag is not supported on Rocky, but if it had been, this would have re-enabled tunnelling and prevented live_migration_inbound_addr from working.
Regards, Munna
On Thu, Mar 26, 2020 at 9:07 PM Sa Pham <saphi070@gmail.com> wrote:
I think you can use the live_migration_inbound_addr option in libvirt section. For more information, use [1].
[1] - https://docs.openstack.org/nova/latest/configuration/config.html
On Thu, Mar 26, 2020 at 9:57 PM Md. Hejbul Tawhid MUNNA < munnaeebd@gmail.com> wrote:
Hi,
We are using OpenStack Rocky. We are observing that the internal network is being used while we do live migration. Our internal network interface is 1G, so the interface is getting saturated and the migration never completes.
In (nova server-migration-list) output, 'Remaining Memory Bytes' keeps increasing and decreasing, but the migration never completes.
We have a 10G interface for the public/external network. I have tried the following solution but no luck; it is still using the internal network.
https://specs.openstack.org/openstack/nova-specs/specs/mitaka/implemented/sp...
Some Information:
[DEFAULT]
my_ip = Internal_network_IP
[libvirt]
live_migration_uri=qemu+ssh://nova@%s.external/system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1
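To illustrate how that URI is used: nova substitutes the destination compute node's host name for %s, so with the ".external" suffix each host name must resolve to the node's address on the 10G network. A minimal Python sketch of the substitution, using a hypothetical destination host "compute-2":

```python
# Nova replaces %s in live_migration_uri with the destination
# compute node's host name before handing the URI to libvirt.
uri_template = ("qemu+ssh://nova@%s.external/system"
                "?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1")

dest_host = "compute-2"  # hypothetical destination compute node
uri = uri_template % dest_host
print(uri)
# qemu+ssh://nova@compute-2.external/system?keyfile=/var/lib/nova/.ssh/id_rsa&no_verify=1
```

So the fix also depends on every "<hostname>.external" name resolving to the 10G address on each compute node; if those names resolve to internal addresses (or do not exist), the SSH control connection and any tunnelled data will still use the 1G interface.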
Please let us know if any further information is required.
Please advise us how we can solve the issue.
Regards, Munna
-- Sa Pham Dang Skype: great_bn Phone/Telegram: 0986.849.582