<html><head><meta http-equiv="Content-Type" content="text/html charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; -webkit-line-break: after-white-space;" class="">Return exactly the same tree:<div class=""><br class=""></div><div class=""><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class="">$ diff virsh_capabilities_compute1 virsh_capabilities_compute2</span></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class="">4c4</span></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class="">< <uuid>4c4c4544-0048-4d10-805a-b9c04f544232</uuid></span></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class="">---</span></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class="">> <uuid>4c4c4544-0048-4d10-8056-b9c04f544232</uuid></span></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><br class=""></div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class="">Thanks</div><div style="margin: 0px; font-size: 11px; line-height: normal; font-family: Menlo;" class=""><span style="font-variant-ligatures: no-common-ligatures" class=""><br class=""></span></div><div><blockquote type="cite" class=""><div class="">Le 25 janv. 
2017 à 18:57, David Medberry <<a href="mailto:openstack@medberry.net" class="">openstack@medberry.net</a>> a écrit :</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class="">Does "virsh capabilities" return the same tree on both hosts?</div><div class="gmail_extra"><br class=""><div class="gmail_quote">On Wed, Jan 25, 2017 at 4:10 AM, fabrice grelaud <span dir="ltr" class=""><<a href="mailto:fabrice.grelaud@u-bordeaux.fr" target="_blank" class="">fabrice.grelaud@u-bordeaux.fr</a>></span> wrote:<br class=""><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br class="">
<br class="">
I've got a live migration issue in one direction but not in the other.<br class="">
I deployed OpenStack with OSA (openstack-ansible) on Ubuntu Trusty, stable/newton branch, tag 14.0.5.<br class="">
<br class="">
My two compute nodes are the same host type and run the nova-compute and cinder-volume services (with our Ceph cluster as backend).<br class="">
<br class="">
Live migrating an instance from compute 2 to compute 1 works fine, whereas the reverse does not.<br class="">
See log below:<br class="">
<br class="">
Live migration of the instance from compute 2 to compute 1: OK<br class="">
<br class="">
Compute 2 log<br class="">
2017-01-25 11:00:15.621 28309 INFO nova.virt.libvirt.migration [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Increasing downtime to 46 ms after 0 sec elapsed time<br class="">
2017-01-25 11:00:15.787 28309 INFO nova.virt.libvirt.driver [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)<br class="">
2017-01-25 11:00:17.737 28309 INFO nova.compute.manager [-] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] VM Paused (Lifecycle Event)<br class="">
2017-01-25 11:00:17.794 28309 INFO nova.virt.libvirt.driver [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Migration operation has completed<br class="">
2017-01-25 11:00:17.795 28309 INFO nova.compute.manager [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] _post_live_migration() is started..<br class="">
2017-01-25 11:00:17.815 28309 INFO oslo.privsep.daemon [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmpfL96lI/privsep.sock']<br class="">
2017-01-25 11:00:18.387 28309 INFO oslo.privsep.daemon [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Spawned new privsep daemon via rootwrap<br class="">
2017-01-25 11:00:18.395 28309 INFO oslo.privsep.daemon [-] privsep daemon starting<br class="">
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0<br class="">
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/<wbr class="">none<br class="">
2017-01-25 11:00:18.396 28309 INFO oslo.privsep.daemon [-] privsep daemon running as pid 28815<br class="">
2017-01-25 11:00:18.397 28309 INFO nova.compute.manager [req-aa0997d7-bf5f-480f-abc5-<wbr class="">beadd2d03409 - - - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] During sync_power_state the instance has a pending task (migrating). Skip.<br class="">
2017-01-25 11:00:18.538 28309 INFO nova.compute.manager [req-115a99b8-48ef-43d5-908b-<wbr class="">5ff7aadc3df4 - - - - -] Running instance usage audit for host p-oscompute02 from 2017-01-25 09:00:00 to 2017-01-25 10:00:00. 2 instances.<br class="">
2017-01-25 11:00:18.691 28309 INFO nova.compute.resource_tracker [req-115a99b8-48ef-43d5-908b-<wbr class="">5ff7aadc3df4 - - - - -] Auditing locally available compute resources for node p-oscompute02.openstack.local<br class="">
2017-01-25 11:00:19.634 28309 INFO nova.compute.resource_tracker [req-115a99b8-48ef-43d5-908b-<wbr class="">5ff7aadc3df4 - - - - -] Total usable vcpus: 40, total allocated vcpus: 4<br class="">
2017-01-25 11:00:19.635 28309 INFO nova.compute.resource_tracker [req-115a99b8-48ef-43d5-908b-<wbr class="">5ff7aadc3df4 - - - - -] Final resource view: name=p-oscompute02.openstack.<wbr class="">local phys_ram=128700MB used_ram=6144MB phys_disk=33493GB used_disk=22GB total_vcpus=40 used_vcpus=4 pci_stats=[]<br class="">
2017-01-25 11:00:19.709 28309 INFO nova.compute.resource_tracker [req-115a99b8-48ef-43d5-908b-<wbr class="">5ff7aadc3df4 - - - - -] Compute_service record updated for p-oscompute02:p-oscompute02.<wbr class="">openstack.local<br class="">
2017-01-25 11:00:20.163 28309 INFO os_vif [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Successfully unplugged vif VIFBridge(active=True,address=<wbr class="">fa:16:3e:d2:7c:83,bridge_name=<wbr class="">'brqc434ace8-45',has_traffic_<wbr class="">filtering=True,id=dff20b91-<wbr class="">a654-437d-8a74-dc55aeac8ab7,<wbr class="">network=Network(c434ace8-45f6-<wbr class="">4bb1-95bc-d52dadb557c7),<wbr class="">plugin='linux_bridge',port_<wbr class="">profile=<?>,preserve_on_<wbr class="">delete=False,vif_name='<wbr class="">tapdff20b91-a6')<br class="">
2017-01-25 11:00:20.201 28309 INFO nova.virt.libvirt.driver [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Deleting instance files /var/lib/nova/instances/<wbr class="">c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3_del<br class="">
2017-01-25 11:00:20.202 28309 INFO nova.virt.libvirt.driver [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Deletion of /var/lib/nova/instances/<wbr class="">c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3_del complete<br class="">
2017-01-25 11:00:20.352 28309 INFO nova.compute.resource_tracker [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Auditing locally available compute resources for node p-oscompute02.openstack.local<br class="">
2017-01-25 11:00:21.143 28309 INFO nova.compute.resource_tracker [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Total usable vcpus: 40, total allocated vcpus: 4<br class="">
2017-01-25 11:00:21.143 28309 INFO nova.compute.resource_tracker [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Final resource view: name=p-oscompute02.openstack.<wbr class="">local phys_ram=128700MB used_ram=6144MB phys_disk=33493GB used_disk=22GB total_vcpus=40 used_vcpus=4 pci_stats=[]<br class="">
2017-01-25 11:00:21.204 28309 INFO nova.compute.resource_tracker [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Compute_service record updated for p-oscompute02:p-oscompute02.<wbr class="">openstack.local<br class="">
2017-01-25 11:00:21.214 28309 INFO nova.compute.manager [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Migrating instance to p-oscompute01 finished successfully.<br class="">
2017-01-25 11:00:21.215 28309 INFO nova.compute.manager [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] You may see the error "libvirt: QEMU error: Domain not found: no domain with matching name." This error can be safely ignored.<br class="">
2017-01-25 11:00:33.399 28309 INFO nova.compute.manager [-] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] VM Stopped (Lifecycle Event)<br class="">
<br class="">
Compute 1 log<br class="">
2017-01-25 11:00:13.067 113231 INFO nova.virt.libvirt.driver [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Instance launched has CPU info: {"vendor": "Intel", "model": "Haswell-noTSX", "arch": "x86_64", "features": ["pge", "avx", "clflush", "sep", "syscall", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "vmx", "erms", "xtpr", "cmov", "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "nx", "fxsr", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse", "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10, "cells": 2, "threads": 2, "sockets": 1}}<br class="">
2017-01-25 11:00:13.434 113231 INFO oslo.privsep.daemon [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmp0444cm/privsep.sock']<br class="">
2017-01-25 11:00:13.984 113231 INFO oslo.privsep.daemon [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Spawned new privsep daemon via rootwrap<br class="">
2017-01-25 11:00:13.986 113231 INFO oslo.privsep.daemon [-] privsep daemon starting<br class="">
2017-01-25 11:00:13.986 113231 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0<br class="">
2017-01-25 11:00:13.987 113231 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/<wbr class="">none<br class="">
2017-01-25 11:00:13.987 113231 INFO oslo.privsep.daemon [-] privsep daemon running as pid 113978<br class="">
2017-01-25 11:00:14.856 113231 INFO os_vif [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Successfully plugged vif VIFBridge(active=True,address=<wbr class="">fa:16:3e:d2:7c:83,bridge_name=<wbr class="">'brqc434ace8-45',has_traffic_<wbr class="">filtering=True,id=dff20b91-<wbr class="">a654-437d-8a74-dc55aeac8ab7,<wbr class="">network=Network(c434ace8-45f6-<wbr class="">4bb1-95bc-d52dadb557c7),<wbr class="">plugin='linux_bridge',port_<wbr class="">profile=<?>,preserve_on_<wbr class="">delete=False,vif_name='<wbr class="">tapdff20b91-a6')<br class="">
2017-01-25 11:00:15.931 113231 INFO nova.compute.resource_tracker [req-7d438646-5f1d-4cad-9c29-<wbr class="">98272c1b8352 - - - - -] Auditing locally available compute resources for node p-oscompute01.openstack.local<br class="">
2017-01-25 11:00:16.748 113231 INFO nova.compute.resource_tracker [req-7d438646-5f1d-4cad-9c29-<wbr class="">98272c1b8352 - - - - -] Total usable vcpus: 40, total allocated vcpus: 8<br class="">
2017-01-25 11:00:16.749 113231 INFO nova.compute.resource_tracker [req-7d438646-5f1d-4cad-9c29-<wbr class="">98272c1b8352 - - - - -] Final resource view: name=p-oscompute01.openstack.<wbr class="">local phys_ram=128700MB used_ram=12288MB phys_disk=33493GB used_disk=72GB total_vcpus=40 used_vcpus=8 pci_stats=[]<br class="">
2017-01-25 11:00:16.819 113231 INFO nova.compute.resource_tracker [req-7d438646-5f1d-4cad-9c29-<wbr class="">98272c1b8352 - - - - -] Compute_service record updated for p-oscompute01:p-oscompute01.<wbr class="">openstack.local<br class="">
2017-01-25 11:00:17.227 113231 INFO nova.compute.manager [-] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] VM Started (Lifecycle Event)<br class="">
2017-01-25 11:00:17.896 113231 INFO nova.compute.manager [req-926f2642-43af-42a6-8b0a-<wbr class="">1fcbf8c433ee - - - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] VM Resumed (Lifecycle Event)<br class="">
2017-01-25 11:00:17.959 113231 INFO nova.compute.manager [req-7d438646-5f1d-4cad-9c29-<wbr class="">98272c1b8352 - - - - -] Running instance usage audit for host p-oscompute01 from 2017-01-25 09:00:00 to 2017-01-25 10:00:00. 7 instances.<br class="">
2017-01-25 11:00:18.027 113231 INFO nova.compute.manager [req-926f2642-43af-42a6-8b0a-<wbr class="">1fcbf8c433ee - - - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] During the sync_power process the instance has moved from host p-oscompute02 to host p-oscompute01<br class="">
2017-01-25 11:00:18.028 113231 INFO nova.compute.manager [req-926f2642-43af-42a6-8b0a-<wbr class="">1fcbf8c433ee - - - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] VM Resumed (Lifecycle Event)<br class="">
2017-01-25 11:00:18.135 113231 INFO nova.compute.manager [req-926f2642-43af-42a6-8b0a-<wbr class="">1fcbf8c433ee - - - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] During the sync_power process the instance has moved from host p-oscompute02 to host p-oscompute01<br class="">
2017-01-25 11:00:20.231 113231 INFO nova.compute.manager [req-6f21e4a4-28a8-48e3-bf2f-<wbr class="">2e1ad3b52470 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Post operation of migration started<br class="">
<br class="">
<br class="">
Live migration of the instance from compute 1 to compute 2: FAILS<br class="">
<br class="">
Compute 1 log<br class="">
2017-01-25 11:03:57.676 113231 INFO nova.virt.libvirt.migration [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Increasing downtime to 46 ms after 0 sec elapsed time<br class="">
2017-01-25 11:03:57.812 113231 INFO nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Migration running for 0 secs, memory 100% remaining; (bytes processed=0, remaining=0, total=0)<br class="">
2017-01-25 11:03:58.475 113231 ERROR nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Live Migration failure: Requested operation is not valid: domain 'instance-00000187' is already active<br class="">
2017-01-25 11:03:58.817 113231 ERROR nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Migration operation has aborted<br class="">
<br class="">
Compute 2 log<br class="">
2017-01-25 11:03:55.783 28309 INFO nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Instance launched has CPU info: {"vendor": "Intel", "model": "Haswell-noTSX", "arch": "x86_64", "features": ["pge", "avx", "clflush", "sep", "syscall", "vme", "dtes64", "invpcid", "tsc", "fsgsbase", "xsave", "vmx", "erms", "xtpr", "cmov", "smep", "ssse3", "est", "pat", "monitor", "smx", "pbe", "lm", "msr", "nx", "fxsr", "tm", "sse4.1", "pae", "sse4.2", "pclmuldq", "acpi", "fma", "tsc-deadline", "mmx", "osxsave", "cx8", "mce", "de", "tm2", "ht", "dca", "lahf_lm", "abm", "popcnt", "mca", "pdpe1gb", "apic", "sse", "f16c", "pse", "ds", "invtsc", "pni", "rdtscp", "avx2", "aes", "sse2", "ss", "ds_cpl", "bmi1", "bmi2", "pcid", "fpu", "cx16", "pse36", "mtrr", "movbe", "pdcm", "rdrand", "x2apic"], "topology": {"cores": 10, "cells": 2, "threads": 2, "sockets": 1}}<br class="">
2017-01-25 11:03:56.849 28309 INFO os_vif [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Successfully plugged vif VIFBridge(active=True,address=<wbr class="">fa:16:3e:d2:7c:83,bridge_name=<wbr class="">'brqc434ace8-45',has_traffic_<wbr class="">filtering=True,id=dff20b91-<wbr class="">a654-437d-8a74-dc55aeac8ab7,<wbr class="">network=Network(c434ace8-45f6-<wbr class="">4bb1-95bc-d52dadb557c7),<wbr class="">plugin='linux_bridge',port_<wbr class="">profile=<?>,preserve_on_<wbr class="">delete=False,vif_name='<wbr class="">tapdff20b91-a6')<br class="">
2017-01-25 11:03:58.981 28309 INFO nova.compute.manager [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Detach volume f40efa24-4992-49ac-8d75-<wbr class="">4ace88d9ecf7 from mountpoint /dev/vda<br class="">
2017-01-25 11:03:58.984 28309 WARNING nova.compute.manager [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Detaching volume from unknown instance<br class="">
2017-01-25 11:03:58.986 28309 WARNING nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] During detach_volume, instance disappeared.<br class="">
2017-01-25 11:04:00.033 28309 INFO nova.virt.libvirt.driver [-] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] During wait destroy, instance disappeared.<br class="">
2017-01-25 11:04:00.034 28309 INFO os_vif [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] Successfully unplugged vif VIFBridge(active=True,address=<wbr class="">fa:16:3e:d2:7c:83,bridge_name=<wbr class="">'brqc434ace8-45',has_traffic_<wbr class="">filtering=True,id=dff20b91-<wbr class="">a654-437d-8a74-dc55aeac8ab7,<wbr class="">network=Network(c434ace8-45f6-<wbr class="">4bb1-95bc-d52dadb557c7),<wbr class="">plugin='linux_bridge',port_<wbr class="">profile=<?>,preserve_on_<wbr class="">delete=False,vif_name='<wbr class="">tapdff20b91-a6')<br class="">
2017-01-25 11:04:00.049 28309 INFO nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Deleting instance files /var/lib/nova/instances/<wbr class="">c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3_del<br class="">
2017-01-25 11:04:00.050 28309 INFO nova.virt.libvirt.driver [req-7bd352bf-8818-4f71-9fa0-<wbr class="">04fabccebf9c 0329776bd1634978a7fed35a70c774<wbr class="">79 7531f209e3514e3f98eb58aafa4802<wbr class="">85 - - -] [instance: c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3] Deletion of /var/lib/nova/instances/<wbr class="">c7a5e5d1-bc22-4143-a85a-<wbr class="">ee5c3b6777b3_del complete<br class="">
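The "Requested operation is not valid: domain 'instance-00000187' is already active" error above usually means libvirt on the destination (compute 2) still has a domain for this instance, left over from the earlier migration. A minimal diagnostic sketch, assuming shell access on compute 2 (the domain name is the one from the error; the commands are guarded so the script degrades gracefully where virsh is absent):<br class="">

```shell
# Hypothetical check on the destination host (compute 2): look for a
# leftover libvirt domain that would collide with the incoming migration.
DOMAIN=instance-00000187
if command -v virsh >/dev/null 2>&1; then
    # A stale entry here (running or shut off) would explain the
    # "domain is already active" failure.
    virsh list --all 2>/dev/null | grep "$DOMAIN" \
        || echo "virsh reports no stale domain named $DOMAIN"
    # If a stale copy does show up, it could be removed before retrying
    # (destructive -- verify the instance really lives on the other host):
    #   virsh destroy "$DOMAIN" && virsh undefine "$DOMAIN"
else
    echo "virsh is not installed on this machine"
fi
```

This is only a sketch of where to look, not a confirmed fix; comparing `virsh list --all` on both hosts after a successful 2-to-1 migration would show whether the source domain is being cleaned up properly.<br class="">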
<br class="">
I need some help or some hints to resolve/debug this issue; it is giving me a headache ;-) The compute node hardware is identical on both hosts, and so is the nova/libvirt configuration.<br class="">
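One quick sanity check is to confirm the two hypervisors advertise identical capabilities, ignoring the per-host uuid element, which is expected to differ. A self-contained sketch (the sample files below stand in for real "virsh capabilities" dumps; the uuid values are the ones from the two hosts in this thread):<br class="">

```shell
# Sample stand-ins for "virsh capabilities" output from each compute node.
cat > /tmp/caps_compute1.xml <<'EOF'
<capabilities>
  <host>
    <uuid>4c4c4544-0048-4d10-805a-b9c04f544232</uuid>
    <cpu><arch>x86_64</arch></cpu>
  </host>
</capabilities>
EOF
cat > /tmp/caps_compute2.xml <<'EOF'
<capabilities>
  <host>
    <uuid>4c4c4544-0048-4d10-8056-b9c04f544232</uuid>
    <cpu><arch>x86_64</arch></cpu>
  </host>
</capabilities>
EOF
# Strip the per-host uuid line before diffing; empty diff output means the
# hypervisors expose the same capabilities tree.
if diff <(grep -v '<uuid>' /tmp/caps_compute1.xml) \
        <(grep -v '<uuid>' /tmp/caps_compute2.xml); then
    echo "capabilities match apart from the host uuid"
fi
```

On the real hosts the same filter can be applied directly to `virsh capabilities` output; if that comes back clean, the asymmetry is more likely in leftover state on the destination than in a capabilities mismatch.<br class="">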
<br class="">
Regards,<br class="">
<br class="">
Fabrice Grelaud<br class="">
Université de Bordeaux<br class="">
______________________________<wbr class="">_________________<br class="">
Mailing list: <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" rel="noreferrer" target="_blank" class="">http://lists.openstack.org/<wbr class="">cgi-bin/mailman/listinfo/<wbr class="">openstack</a><br class="">
Post to : <a href="mailto:openstack@lists.openstack.org" class="">openstack@lists.openstack.org</a><br class="">
Unsubscribe : <a href="http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack" rel="noreferrer" target="_blank" class="">http://lists.openstack.org/<wbr class="">cgi-bin/mailman/listinfo/<wbr class="">openstack</a><br class="">
</blockquote></div><br class=""></div>
</div></blockquote></div><br class=""></div></body></html>