[openstack-dev] [nova] [RFC] how to enable xbzrle compress for live migration

金运通 yuntongjin at gmail.com
Fri Nov 27 14:25:00 UTC 2015


I think it would be necessary to set live_migration_compression=on|off
dynamically according to the memory and CPU available on the host at the
beginning of a compressed migration. Consider the case where there are 50
VMs on a host and the operator wants to migrate them all in order to
maintain or shut down the host: toggling compression on/off dynamically
would avoid host OOM. With this we could even consider leaving the
scheduler out of it (i.e., not alerting the scheduler about the memory/CPU
consumed by compression).
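
A minimal sketch of what such a dynamic toggle could look like, assuming
libvirt-python and psutil; the thresholds, the 1/4-of-guest-RAM cache
sizing, and the helper name are illustrative assumptions, not an existing
Nova interface:

    # Sketch only: decide per-migration whether XBZRLE is affordable.
    import libvirt
    import psutil

    def live_migrate(dom, dest_uri, guest_ram_bytes):
        flags = libvirt.VIR_MIGRATE_LIVE | libvirt.VIR_MIGRATE_PEER2PEER
        cache = guest_ram_bytes // 4  # rough XBZRLE cache size (assumed)
        mem = psutil.virtual_memory()
        # Enable compression only if the host can spare the cache RAM
        # (keeping 20% of total RAM available) and is not CPU-saturated.
        if (mem.available - cache > mem.total * 0.2 and
                psutil.cpu_percent(interval=1.0) < 80.0):
            flags |= libvirt.VIR_MIGRATE_COMPRESSED
            dom.migrateSetCompressionCache(cache, 0)
        dom.migrateToURI(dest_uri, flags, None, 0)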


BR,
YunTongJin

2015-11-27 21:58 GMT+08:00 Daniel P. Berrange <berrange at redhat.com>:

> On Fri, Nov 27, 2015 at 01:01:15PM +0000, Koniszewski, Pawel wrote:
> > > -----Original Message-----
> > > From: Daniel P. Berrange [mailto:berrange at redhat.com]
> > > Sent: Friday, November 27, 2015 1:24 PM
> > > To: Koniszewski, Pawel
> > > Cc: OpenStack Development Mailing List (not for usage questions); ???;
> > > Feng, Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong
> > > Y; Jin, Yuntong
> > > Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress
> > > for live migration
> > >
> > > On Fri, Nov 27, 2015 at 12:17:06PM +0000, Koniszewski, Pawel wrote:
> > > > > -----Original Message-----
> > > > > > > Doing this though, we still need a solution to the host OOM
> > > > > > > scenario problem. We can't simply check free RAM at the start
> > > > > > > of migration and see if there's enough to spare for the
> > > > > > > compression cache, as the scheduler can spawn a new guest on
> > > > > > > the compute host at any time, pushing us into OOM. We really
> > > > > > > need some way to indicate that there is a (potentially very
> > > > > > > large) extra RAM overhead for the guest during migration.
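
One way to make that overhead explicit, instead of a one-shot free-RAM
check, would be to hold a reservation for the cache for the whole duration
of the migration. A hypothetical sketch (none of these names exist in Nova
today; the real fix would be teaching the resource tracker about the
overhead):

    # Hypothetical per-host ledger for in-flight migration overhead.
    import threading

    class MigrationOverheadLedger(object):
        def __init__(self):
            self._lock = threading.Lock()
            self._reserved = 0  # bytes held by in-flight migrations

        def reserve(self, nbytes, available, headroom):
            """Claim cache RAM up front; refuse rather than risk OOM."""
            with self._lock:
                if available - self._reserved - nbytes < headroom:
                    return False
                self._reserved += nbytes
                return True

        def release(self, nbytes):
            with self._lock:
                self._reserved = max(0, self._reserved - nbytes)
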
> > > >
> > > > What about CPU? We might end up with live migration that degrades
> > > > the performance of other VMs on the source and/or destination node.
> > > > AFAIK CPUs are heavily oversubscribed in many cases and this does
> > > > not help. I'm not sure that this thing fits into Nova, as it
> > > > requires resource monitoring.
> > >
> > > Nova already has the ability to set CPU usage tuning rules against
> > > each VM. Since the CPU overhead is attributed to the QEMU process,
> > > these existing tuning rules will apply. So there would only be an
> > > impact on other VMs if you do not have any CPU tuning rules set in
> > > Nova.
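
For reference, the tuning rules mentioned here are exposed as libvirt
quota extra specs on flavors. A sketch using python-novaclient, with
placeholder credentials and flavor name (the exact client signature may
differ between releases):

    # Weight guest CPU time via flavor extra specs so the whole QEMU
    # process, migration thread included, runs under the same cgroup
    # CPU limits. Credentials and flavor name are placeholders.
    from novaclient import client

    nova = client.Client('2', 'admin', 'secret', 'admin',
                         'http://controller:5000/v2.0')
    flavor = nova.flavors.find(name='m1.small')
    flavor.set_keys({'quota:cpu_shares': '1024'})
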
> >
> > Not sure I understand it correctly; I assume that you are talking about
> > CPU pinning. Does it mean that compression/decompression runs as part
> > of the VM threads?
> >
> > If not, then it will require all VMs to be pinned on both hosts, source
> > and destination (and in the whole cluster, because of the static
> > configuration...). Also, what about operating system performance? Will
> > QEMU somehow distinguish OS processes and avoid affecting them?
>
> The compression runs in the migration thread of QEMU. This is not a vCPU
> thread, but one of the QEMU emulator threads. So CPU usage policy set
> against the QEMU emulator threads applies to the compression CPU overhead.
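
A small illustration of constraining those emulator threads with
libvirt-python; the domain name and CPU map are placeholders:

    # Pin the QEMU emulator threads - which include the migration
    # thread doing the XBZRLE compression - to host CPU 0 only, so
    # compression cannot steal time from vCPU threads.
    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # One boolean per host CPU; this example assumes a 4-CPU host.
    dom.pinEmulator((True, False, False, False),
                    libvirt.VIR_DOMAIN_AFFECT_LIVE)
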
>
> > Also, Nova can reserve some memory for the host. Will QEMU also respect
> > it?
>
> No, it's not QEMU's job to respect that. If you want to reserve resources
> for only the host OS, then you need to set up suitable cgroup partitions
> to separate VM from non-VM processes. The Nova reserved memory setting is
> merely a hint to the scheduler - it has no functional effect on its own.
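
A rough sketch of that kind of partitioning under cgroup v1; the
"machine" path follows libvirt's convention, and the 4 GiB host reserve
is an arbitrary example:

    # Cap every VM process to (total RAM - host reserve) by putting
    # them under one memory cgroup partition. Sketch only; a real
    # deployment would set this up via systemd/libvirt partitions.
    import os

    cg = '/sys/fs/cgroup/memory/machine'
    if not os.path.isdir(cg):
        os.makedirs(cg)
    total = os.sysconf('SC_PHYS_PAGES') * os.sysconf('SC_PAGE_SIZE')
    reserve = 4 * 1024 ** 3  # leave 4 GiB for the host OS (example)
    with open(os.path.join(cg, 'memory.limit_in_bytes'), 'w') as f:
        f.write(str(total - reserve))
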
>
> Regards,
> Daniel
> --
> |: http://berrange.com     -o-   http://www.flickr.com/photos/dberrange/ :|
> |: http://libvirt.org             -o-            http://virt-manager.org :|
> |: http://autobuild.org      -o-        http://search.cpan.org/~danberr/ :|
> |: http://entangle-photo.org      -o-      http://live.gnome.org/gtk-vnc :|
>

