<div dir="ltr">I think it'd be necessary to set <span style="font-size:14px">live_migration_compression=on|</span><span style="font-size:14px">off dynamic according to memory and cpu on host at the beginning of compression migration, consider about the case there are 50 VMs on host and operator want to migration them all to maintain/shutdown the host, having </span><span style="font-size:14px">compression=on|</span><span style="font-size:14px">off dynamically will avoid host OOM, and also with this, we can even consider to left scheduler out (aka, not alert scheduler about memory/cpu consume of compression).</span><div><span style="font-size:14px"><br></span></div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div>BR,</div><div>YunTongJin</div></div></div></div>
<br><div class="gmail_quote">2015-11-27 21:58 GMT+08:00 Daniel P. Berrange <span dir="ltr"><<a href="mailto:berrange@redhat.com" target="_blank">berrange@redhat.com</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Fri, Nov 27, 2015 at 01:01:15PM +0000, Koniszewski, Pawel wrote:<br>
> > -----Original Message-----
> > From: Daniel P. Berrange [mailto:berrange@redhat.com]
> > Sent: Friday, November 27, 2015 1:24 PM
> > To: Koniszewski, Pawel
> > Cc: OpenStack Development Mailing List (not for usage questions); ???; Feng,
> > Shaohe; Xiao, Guangrong; Ding, Jian-feng; Dong, Eddie; Wang, Yong Y; Jin,
> > Yuntong
> > Subject: Re: [openstack-dev] [nova] [RFC] how to enable xbzrle compress for
> > live migration
> >
> > On Fri, Nov 27, 2015 at 12:17:06PM +0000, Koniszewski, Pawel wrote:
> > > > -----Original Message-----
> > > > > > Doing this though, we still need a solution to the host OOM
> > > > > > scenario problem. We can't simply check free RAM at the start of
> > > > > > migration and see if there's enough to spare for the compression
> > > > > > cache, as the scheduler can spawn a new guest on the compute
> > > > > > host at any time, pushing us into OOM. We really need some way
> > > > > > to indicate that there is a (potentially very large) extra RAM
> > > > > > overhead for the guest during migration.
> > >
> > > What about CPU? We might end up with live migration that degrades the
> > > performance of other VMs on the source and/or destination node. AFAIK CPUs
> > > are heavily oversubscribed in many cases, and this does not help.
> > > I'm not sure this fits into Nova, as it requires resource monitoring.
> >
> > Nova already has the ability to set CPU usage tuning rules against each VM.
> > Since the CPU overhead is attributed to the QEMU process, these existing
> > tuning rules will apply. So there would only be an impact on other VMs if
> > you do not have any CPU tuning rules set in Nova.
>
> Not sure I understand it correctly; I assume that you are talking about CPU
> pinning. Does it mean that compression/decompression runs as part of the VM
> threads?
>
> If not, then it will require all VMs to be pinned on both hosts, source
> and destination (and in the whole cluster, because of the static
> configuration...). Also, what about operating system performance? Will QEMU
> distinguish OS processes somehow and avoid affecting them?

The compression runs in the migration thread of QEMU. This is not a vCPU
thread, but one of the QEMU emulator threads, so a CPU usage policy set
against the QEMU emulator threads applies to the compression CPU overhead.
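For example, in Nova such rules can come from flavor extra specs (e.g.
hw:cpu_policy=dedicated), which end up as a <cputune> block in the guest's
libvirt XML; the cpuset values below are purely illustrative:

    <cputune>
      <vcpupin vcpu="0" cpuset="2"/>
      <vcpupin vcpu="1" cpuset="3"/>
      <!-- the emulator threads (which include the migration thread
           doing the compression) are confined separately from vCPUs -->
      <emulatorpin cpuset="0-1"/>
    </cputune>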
<span class=""><br>
> Also, nova can reserve some memory for the host. Will QEMU also respect it?<br>
<br>
No, it's not QEMU's job to respect that. If you want to reserve resources
for only the host OS, then you need to set up suitable cgroup partitions
to separate VM from non-VM processes. The Nova reserved memory setting
is merely a hint to the scheduler - it has no functional effect on its
own.
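As a rough illustration of that split (the slice name assumes a systemd host
where libvirt places guests under machine.slice; the limit values are made
up), enforcement happens at the cgroup level:

    # Cap aggregate VM memory so the host OS keeps guaranteed headroom
    systemctl set-property machine.slice MemoryLimit=56G

whereas the Nova setting is only consumed by the scheduler:

    # nova.conf - a placement hint, not an enforcement mechanism
    reserved_host_memory_mb = 4096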
<span class="im HOEnZb"><br>
Regards,<br>
Daniel<br>
--<br>
|: <a href="http://berrange.com" rel="noreferrer" target="_blank">http://berrange.com</a> -o- <a href="http://www.flickr.com/photos/dberrange/" rel="noreferrer" target="_blank">http://www.flickr.com/photos/dberrange/</a> :|<br>
|: <a href="http://libvirt.org" rel="noreferrer" target="_blank">http://libvirt.org</a> -o- <a href="http://virt-manager.org" rel="noreferrer" target="_blank">http://virt-manager.org</a> :|<br>
|: <a href="http://autobuild.org" rel="noreferrer" target="_blank">http://autobuild.org</a> -o- <a href="http://search.cpan.org/~danberr/" rel="noreferrer" target="_blank">http://search.cpan.org/~danberr/</a> :|<br>
|: <a href="http://entangle-photo.org" rel="noreferrer" target="_blank">http://entangle-photo.org</a> -o- <a href="http://live.gnome.org/gtk-vnc" rel="noreferrer" target="_blank">http://live.gnome.org/gtk-vnc</a> :|<br>