Hi All,
We recently moved to OpenStack Train, and according to the release notes, live migration support for SR-IOV direct ports was added in this version.
We tried to live-migrate a VM that has two SR-IOV direct ports attached to a bond interface inside the guest (direct ports only, no vSwitch or indirect interface), but we have an issue maintaining network connectivity during and after the live migration. According to [1], to maintain connectivity during the migration we need to add one indirect or vSwitch port to the bond; that interface carries the traffic during the migration, and once the migration completes the direct port becomes the primary slave again on the bond in the target VM.
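For reference, the direct ports were created roughly like this (the network, port, flavor and image names below are just placeholders); the extra indirect/vSwitch port that [1] suggests would be created the same way with --vnic-type normal:

openstack port create --network net1 --vnic-type direct sriov-port-1
openstack port create --network net1 --vnic-type direct sriov-port-2
openstack server create --flavor m1.large --image centos7 \
  --nic port-id=sriov-port-1 --nic port-id=sriov-port-2 test-vm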
My question: if we don't add the indirect port, I can understand losing connectivity during the migration, but why doesn't connectivity come back after the migration completes successfully, and why does the bond master come up with no slaves on the target VM?
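For what it's worth, after the migration I check the bond state on the target VM with the commands below, and the slave list comes back empty:

cat /proc/net/bonding/bond1
ip link show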
According to some documents and blueprints I found on Google (including [2] and [3]), the guest OS should receive a virtual hotplug add event, the bond master should enslave those devices, and connectivity should come back, which is not the case here.
So I'm wondering whether I need to add some scripts to handle those events (and if so, where to add them), set some network flags in the ifcfg-* files, or use a specific guest OS.
[3]: https://openstack.nimeyo.com/72653/openstack-nova-neutron-live-migration-with-direct-passthru
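If a script is indeed needed, the untested udev rule below is the kind of thing I had in mind. It assumes the hot-plugged VF shows up as an eth* device and simply forces it back under bond1; I have not verified this on CentOS 7:

# /etc/udev/rules.d/99-sriov-bond.rules (untested sketch)
# On hotplug add of an eth* device, take it down, enslave it to bond1 and bring it back up.
ACTION=="add", SUBSYSTEM=="net", KERNEL=="eth*", \
  RUN+="/bin/sh -c '/usr/sbin/ip link set %k down; /usr/sbin/ip link set %k master bond1; /usr/sbin/ip link set %k up'"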
bond master ifcfg file:
DEVICE=bond1
BONDING_OPTS=mode=active-backup
HOTPLUG=yes
TYPE=Bond
BONDING_MASTER=yes
BOOTPROTO=none
NAME=bond1
ONBOOT=yes
slaves ifcfg files:
TYPE=Ethernet
DEVICE=eth0
HOTPLUG=no
ONBOOT=no
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
guest OS:
CentOS Linux release 7.7.1908 (Core)