Train/SR-IOV direct port mode live-migration - question
smooney at redhat.com
Wed Mar 3 13:05:25 UTC 2021
On Wed, 2021-03-03 at 14:03 +0200, Heib, Mohammad (Nokia - IL/Kfar Sava) wrote:
> Hi All,
> we recently moved to OpenStack Train, and according to the version
> release notes, SR-IOV direct port live-migration support was added in
> this version.
> So we tried to live-migrate a VM with two SR-IOV direct ports attached
> to a bond interface in the VM (no vSwitch or indirect ports, only
> direct ports), but it seems that we have an issue with maintaining the
yes that is expected. live migration with direct-mode SR-IOV involves hot-unplugging
the interface before the migration and hot-plugging it after.
NIC vendors do not actually support SR-IOV migration, so we developed a workaround
for that hardware limitation using PCIe hotplug.
> connectivity during and after the live migration. According to , I see
> that in order to maintain network connectivity during the migration we
> have to add one indirect or vSwitch port to the bond; this interface
> will carry the traffic during the migration, and once the migration
> completes, the direct port will go back to being the primary slave on
> the target VM's bond.
yes, this is correct
> my question is: if we don't add the indirect port, we will lose
> connectivity during the migration, which makes sense to me,
> but why does the connectivity not come back after the migration
> completes successfully, and why does the bond master boot up with no
> slaves on the target VM?
that sounds like the networking setup in your guest is not correctly detecting the change,
e.g. you are missing a udev rule, or your NetworkManager or systemd-networkd configuration
only runs at first boot and not when an interface is added or removed.
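As an illustration of the udev approach, a minimal sketch might look like the following. The rule-file path, helper-script path, bond name bond0, and the eth* match are all assumptions for illustration, not taken from this thread:

```shell
# /etc/udev/rules.d/99-sriov-hotplug.rules   (illustrative path and match)
# Fire a helper whenever a network interface is hot-added, e.g. when the
# VF reappears after the live migration completes.
SUBSYSTEM=="net", ACTION=="add", KERNEL=="eth*", RUN+="/usr/local/sbin/enslave-to-bond.sh %k"

# /usr/local/sbin/enslave-to-bond.sh   (illustrative helper, run as root)
#!/bin/sh
# $1 is the kernel name of the interface udev just added.
IFACE="$1"
# Never try to enslave the bond to itself.
[ "$IFACE" = "bond0" ] && exit 0
ip link set "$IFACE" down
ip link set "$IFACE" master bond0
ip link set "$IFACE" up
```

The exact mechanism (udev rule, NetworkManager dispatcher script, or systemd-networkd [Match] unit) depends on which network stack your guest image uses.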
> according to some documents and blueprints that I found on Google
> (including ,) the guest OS will receive a virtual hotplug add event,
> and the bond master will enslave those devices and connectivity will
> come back, which is not the case here.
> So I am wondering whether I need to add some scripts to handle those
> events (if so, where do I add them?), some network flags in the
> ifcfg-* files, or a specific guest OS?
you would need to, but I do not have example files for this, unfortunately.
> *bond master ifcfg file:*
> *slaves ifcfg files:*
try setting this to yes.
I suspect you need to mark all the slave interfaces as hotpluggable, since they are what will actually be hot-plugged.
You might also want to set ONBOOT, but HOTPLUG seems more relevant.
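For concreteness, here is a hedged sketch of what a slave ifcfg file with hotplug enabled might look like on a CentOS 7 guest. The device name ens4 and bond name bond0 are illustrative, not taken from the stripped attachments:

```shell
# /etc/sysconfig/network-scripts/ifcfg-ens4   (names illustrative)
DEVICE=ens4
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
HOTPLUG=yes        # re-activate this slave when it is hot-plugged back
MASTER=bond0       # enslave to the bond on activation
SLAVE=yes
NM_CONTROLLED=no   # let the network initscripts, not NetworkManager, manage it
```

The bond master's ifcfg file would keep its usual BONDING_OPTS; only the slaves need the hotplug marking, since they are the devices that get unplugged and re-plugged during the migration.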
> *guest OS:*
> CentOS Linux release 7.7.1908 (Core)
> Thanks in advance for any help :)