<div dir="ltr">What version of qemu are you running? I was hit by this bug[1] that seems to give off the same faults you are reporting and had to downgrade qemu packages to version '1:2.5+dfsg-5ubuntu10.5'<div><br></div><div>[1] <a href="https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389">https://bugs.launchpad.net/ubuntu/+source/qemu/+bug/1647389</a> </div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Dec 20, 2016 at 12:10 AM, Vladimir Prokofev <span dir="ltr"><<a href="mailto:v@prokofev.me" target="_blank">v@prokofev.me</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hello Ops.<div><br></div><div>I want to pick your brains on a live migration issue, cause I'm kinda stuck atm.</div><div><br></div><div>I'm running a small HA Ubuntu Openstack cloud - 3 controllers(VMs on Ubuntu/KVM) with ha-proxy LB, corosync/pacemaker manages VIP, Galera DB, 3 compute nodes(using KVM hypervisor), 3 network nodes(VMs on Ubuntu/KVM), CEPH cluster(4 OSD nodes, 3 MON nodes(VMs on Ubuntu/KVM)).</div><div>Nova, cinder, and glance use CEPH as backend. Neutron uses linux-bridge.</div><div><br></div><div>We were running Ubuntu Trusty 14.04/Mitaka, and decided to upgrade to Ubuntu Xenial 16.04/Newton. The issue appeared sometime after upgrade of all Mitaka packages to the latest version prior to Xenial upgrade, and stayed even after upgrade to Xenial/Newton. It's hard to tell for sure as we didn't use LM for a while. It worked fine under Trusty/Mitaka, and broke under Xenial/latest-Mitaka.</div><div><br></div><div><br></div><div>With a chance of 20-30% live migration will fail. The instance will pause on a source node, but will not be resumed on target node. Target node will destroy instance assuming LM failed, and on source node instance will stay paused. On source node no new messages appear in nova-compute.log, and commands such as "virsh list" won't provide any output, or even exit for that matter. nova-compute can be restarted, but after normal startup messages it once again doesn't provide any new log entries. Nova considers compute node as up and running.</div><div>In dashboard you can see that instance is now residing at a new host, and is in shutoff state.</div><div><br></div><div>If I restart libvirtd on target or source node then whole system "unfreezes". Source host unpauses instance, and it is live again. But it now resides on a source node, while database thinks it's on a target node and is shutdown. 
>
> Warning messages appear on the source host:
>
> 2016-12-20 01:13:37.095 20025 WARNING nova.compute.manager [req-b3879539-989f-4075-9ef0-d23ef8868102 - - - - -] While synchronizing instance power states, found 3 instances in the database and 4 instances on the hypervisor.
>
> Currently I'm stopping/destroying/undefining the instance on the source node and launching it again via standard OpenStack means, but this leads to an instance reboot.
>
> Last messages in nova-compute.log on the source node after the LM start:
>
> 2016-12-20 00:09:35.961 16834 INFO nova.virt.libvirt.migration [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Increasing downtime to 1251 ms after 0 sec elapsed time
> 2016-12-20 00:09:36.127 16834 INFO nova.virt.libvirt.driver [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)
> 2016-12-20 00:09:36.894 16834 INFO nova.compute.manager [req-3a2a828f-8b76-4a65-b49b-ea8d232a3de5 - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] VM Paused (Lifecycle Event)
> 2016-12-20 00:09:37.046 16834 INFO nova.compute.manager [req-3a2a828f-8b76-4a65-b49b-ea8d232a3de5 - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] During sync_power_state the instance has a pending task (migrating). Skip.
> 2016-12-20 00:09:37.300 16834 INFO nova.virt.libvirt.driver [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Migration operation has completed
> 2016-12-20 00:09:37.301 16834 INFO nova.compute.manager [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] _post_live_migration() is started..
> 2016-12-20 00:09:39.718 16834 INFO os_vif [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] Successfully unplugged vif VIFBridge(active=False,address=fa:16:3e:e3:c4:c7,bridge_name='brq0b7e75ee-b6',has_traffic_filtering=True,id=2a141727-77a7-4b43-b4cd-21fb415a6a55,network=Network(0b7e75ee-b691-4851-9b29-08c3c62a96e0),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap2a141727-77')
> 2016-12-20 00:09:39.777 16834 INFO nova.virt.libvirt.driver [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Deleting instance files /var/lib/nova/instances/cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53_del
> 2016-12-20 00:09:39.778 16834 INFO nova.virt.libvirt.driver [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Deletion of /var/lib/nova/instances/cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53_del complete
> 2016-12-20 00:10:33.644 16834 INFO nova.compute.resource_tracker [req-00c4c403-d039-4347-b7ad-3098834a891c - - - - -] Auditing locally available compute resources for node compute1.cloud1.local
>
> No new messages appear until the libvirtd restart.
>
> Messages during the LM in nova-compute.log on the target node:
>
> 2016-12-20 00:09:32.234 29438 INFO nova.virt.libvirt.driver [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] Instance launched has CPU info: {"vendor": "Intel", "model": "Nehalem", "arch": "x86_64", "features": ["pge", "clflush", "sep", "syscall", "vme", "dtes64", "msr", "vmx", "xtpr", "cmov", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "tsc", "nx", "fxsr", "tm", "sse4.1", "pae", "sse4.2", "acpi", "mmx", "cx8", "mce", "de", "rdtscp", "ht", "dca", "lahf_lm", "pdcm", "mca", "pdpe1gb", "apic", "sse", "pse", "ds", "invtsc", "pni", "tm2", "sse2", "ss", "ds_cpl", "pcid", "fpu", "cx16", "pse36", "mtrr", "popcnt"], "topology": {"cores": 6, "cells": 2, "threads": 2, "sockets": 1}}
> 2016-12-20 00:09:34.476 29438 INFO os_vif [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] Successfully plugged vif VIFBridge(active=True,address=fa:16:3e:e3:c4:c7,bridge_name='brq0b7e75ee-b6',has_traffic_filtering=True,id=2a141727-77a7-4b43-b4cd-21fb415a6a55,network=Network(0b7e75ee-b691-4851-9b29-08c3c62a96e0),plugin='linux_bridge',port_profile=<?>,preserve_on_delete=False,vif_name='tap2a141727-77')
> 2016-12-20 00:09:36.711 29438 INFO nova.compute.manager [req-75abcda4-701a-4b84-8f28-c11f6ad6f933 - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] VM Started (Lifecycle Event)
> 2016-12-20 00:09:39.777 29438 INFO nova.compute.manager [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Post operation of migration started
> 2016-12-20 00:09:40.381 29438 INFO nova.network.neutronv2.api [req-51801b5e-b77a-4d76-ad87-176326ac910e 84498aa7e26443c4908d973f3e86d530 ecee1197f46e453dba25669554226ce5 - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Updating port 2a141727-77a7-4b43-b4cd-21fb415a6a55 with attributes {'binding:profile': {}, 'binding:host_id': 'compute3'}
> 2016-12-20 00:09:52.589 29438 INFO nova.compute.manager [-] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] VM Stopped (Lifecycle Event)
> 2016-12-20 00:09:52.747 29438 WARNING nova.compute.manager [req-e945fe0d-dea0-47ee-8b87-b88991b9796e - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Instance shutdown by itself. Calling the stop API. Current vm_state: active, current task_state: None, original DB power_state: 4, current VM power_state: 4
> 2016-12-20 00:09:52.964 29438 INFO nova.compute.manager [req-e945fe0d-dea0-47ee-8b87-b88991b9796e - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Instance is already powered off in the hypervisor when stop is called.
> 2016-12-20 00:09:53.039 29438 INFO nova.virt.libvirt.driver [req-e945fe0d-dea0-47ee-8b87-b88991b9796e - - - - -] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Instance already shutdown.
> 2016-12-20 00:09:53.043 29438 INFO nova.virt.libvirt.driver [-] [instance: cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53] Instance destroyed successfully.
> 2016-12-20 00:09:56.696 29438 INFO nova.compute.resource_tracker [req-53e5b7a1-aba9-4b26-a3fb-b9b293bf2f17 - - - - -] Auditing locally available compute resources for node compute3.cloud1.local
> 2016-12-20 00:09:57.295 29438 INFO nova.compute.resource_tracker [req-53e5b7a1-aba9-4b26-a3fb-b9b293bf2f17 - - - - -] Total usable vcpus: 24, total allocated vcpus: 39
> 2016-12-20 00:09:57.295 29438 INFO nova.compute.resource_tracker [req-53e5b7a1-aba9-4b26-a3fb-b9b293bf2f17 - - - - -] Final resource view: name=compute3.cloud1.local phys_ram=145060MB used_ram=94720MB phys_disk=14710GB used_disk=45GB total_vcpus=24 used_vcpus=39 pci_stats=[]
>
> The target node then proceeds with normal operation.
>
> Nothing interesting appears in the libvirtd log or syslog on any compute node. I've tried enabling debug logging for libvirt, but the output is unreadable to me - all I can figure out is that libvirtd is still running and that other instances are unaffected.
>
> So, my questions:
> - Is there any workaround to sync the state of the DB and the compute node? I would at least like to avoid the instance reboot, and for that I need some way to tell the DB that the instance is on the source node and running, without a service interruption. Is it OK to update the database directly via SQL? (See the sketch just below this list for what I have in mind.)
> - How can I troubleshoot this further? As I said, I'm somewhat stuck and have no idea where to look next. If the issue appeared 100% of the time it would be a lot easier, but there's no pattern I can figure out - it happens randomly, with heavily loaded instances and idle instances alike.
>
> Any suggestions appreciated - I need a push in the right direction.
> I will provide any logs and/or config files on request.
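>
> To illustrate the kind of direct DB edit I have in mind for the first question (not yet executed; a rough sketch against what I believe the Newton nova schema looks like - the host/node values for this stuck instance and the power_state value 1 = RUNNING are my assumptions):
>
>     # on a controller, with appropriate credentials; back up the nova DB first
>     mysql nova <<'SQL'
>     UPDATE instances
>        SET host = 'compute1',                  -- the source node the VM is actually running on
>            node = 'compute1.cloud1.local',
>            vm_state = 'active',
>            task_state = NULL,
>            power_state = 1                     -- 1 = RUNNING
>      WHERE uuid = 'cd8cb1db-dca3-4b0f-a03e-c0befbbd7b53';
>     SQL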
>
> _______________________________________________
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
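
P.S. For completeness, this is roughly how I checked and then pinned the qemu build on the compute nodes - a sketch only; the exact package set to pin is whatever "dpkg -l | grep qemu" shows installed on your hosts:

    # see which qemu build the hypervisors are running
    dpkg -l | grep qemu-system

    # downgrade to the known-good build from the bug report and hold it there
    sudo apt-get install qemu-system-x86=1:2.5+dfsg-5ubuntu10.5 \
                         qemu-system-common=1:2.5+dfsg-5ubuntu10.5 \
                         qemu-utils=1:2.5+dfsg-5ubuntu10.5
    sudo apt-mark hold qemu-system-x86 qemu-system-common qemu-utils

Note that already-running guests keep the old qemu binary until they are power-cycled or migrated, so the downgrade only takes effect per instance after a restart or a (successful) migration.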