[openstack-dev] [nova] Removal of live_migration_flag and block_migration_flag config options

Daniel P. Berrange berrange at redhat.com
Wed Aug 3 08:21:16 UTC 2016


On Tue, Aug 02, 2016 at 02:36:32PM +0000, Koniszewski, Pawel wrote:
> In the Mitaka development cycle 'live_migration_flag' and 'block_migration_flag'
> were marked as deprecated for removal. I'm working on a patch [1] to remove both
> of them and want to ask what we should do with the live_migration_tunnelled
> logic.
> 
> The default configuration of both flags contains the VIR_MIGRATE_TUNNELLED option.
> It is there to avoid the need to configure the network to allow direct communication
> between hypervisors. However, the tradeoff is that it slows down all migrations by up
> to 80% due to the increased number of memory copies and the single-threaded encryption
> mechanism in libvirt. By 80% here I mean that the transfer rate between source and
> destination node is around 2Gb/s on a 10Gb network. I believe that this is a
> configuration issue and people deploying OpenStack are not aware that live migrations
> with this flag will not work. I'm not sure that this is something we wanted to achieve.
> AFAIK most operators are turning it OFF in order to make live migration usable.
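
For reference, the Mitaka-era defaults being discussed look roughly like the
following in nova.conf (illustrative only; the exact flag lists vary by release,
but both defaults include VIR_MIGRATE_TUNNELLED):

    [libvirt]
    live_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_TUNNELLED
    block_migration_flag = VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_NON_SHARED_INC,VIR_MIGRATE_TUNNELLED

Operators who hit the throughput penalty typically drop VIR_MIGRATE_TUNNELLED
from both lists, at the cost of sending migration traffic unencrypted between
hypervisors.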

FYI, when you have post-copy migration active, live migration *will* still work.

> Moving on to the new flag that preserves the ability to turn tunnelling on -
> live_migration_tunnelled [2], which is a tri-state boolean - None, False, True:
> 
> * True - means that live migrations will be tunneled through libvirt.
> * False - no tunneling, native hypervisor transport.
> * None - nova will choose default based on, e.g., the availability of native
>   encryption support in the hypervisor. (Default value)
> 
> Right now we don't have any logic implemented for the None value, which is the
> default. So the question here is: should I implement logic so that
> live_migration_tunnelled=None still uses VIR_MIGRATE_TUNNELLED when native
> encryption is not available? Given the impact of this flag I'm not sure that we
> really want to keep it there. Another option is to change the default value of
> live_migration_tunnelled to True. In both cases we will again end up with
> slower LM and people complaining that LM does not work at all in OpenStack.
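
To make the question concrete, a minimal sketch of the None handling being asked
about might look like this (a hypothetical helper, not the actual nova.virt.libvirt
driver code; the flag value is assumed here for illustration):

    # Sketch: map the tri-state live_migration_tunnelled option to libvirt flags.
    VIR_MIGRATE_TUNNELLED = 1 << 2  # libvirt flag; value assumed for illustration

    def migration_flags(tunnelled, native_encryption_available, base_flags=0):
        """Return migration flags with or without VIR_MIGRATE_TUNNELLED.

        tunnelled: True / False / None (the tri-state config value)
        native_encryption_available: whether the hypervisor offers native
            (e.g. QEMU TLS) encryption for the migration stream
        """
        if tunnelled is True:
            return base_flags | VIR_MIGRATE_TUNNELLED
        if tunnelled is False:
            return base_flags
        # tunnelled is None: one possible default is to tunnel only when no
        # native encryption is available, which is the behaviour under debate.
        if not native_encryption_available:
            return base_flags | VIR_MIGRATE_TUNNELLED
        return base_flags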

FWIW, I have compared libvirt tunnelled migration with TLS against native QEMU
TLS encryption and the performance is approximately the same. In both cases the
bottleneck is how fast the CPU can perform AES and we're maxing out a single
thread for that. IOW, there's no getting away from the fact that encryption is
going to have a performance impact on migration when you get into the range of
10-Gig networking.

So the real question is whether we want to default to a secure or an insecure
configuration. If we default to a secure config then, in future with native QEMU
TLS, this will effectively force those deploying nova to deploy x509 certs for
QEMU before they can use live migration. This would be akin to having our default
deployment of the public REST API mandate HTTPS and not listen on HTTP out of the
box. IIUC, we default to HTTP for REST APIs out of the box, which would suggest
doing the same for migration and defaulting to non-encrypted. This would mean
we do *not* need to set TUNNELLED by default.

Second, with some versions of QEMU, it is *not* possible to use tunnelled
migration in combination with block migration. We don't want to have normal
live migration and block live migration use different settings. This strongly
suggests *not* defaulting to tunnelled.

So all three points (performance, x509 deployment requirements, and block
migration limitations) point to not having TUNNELLED in the default flags,
and leaving it as an opt-in.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|


