[Openstack] Error occurred during volume-backed live migration on Ceph block storage; asking for assistance.

Li Zhuran lizhuran at jd.com
Tue Mar 25 15:09:36 UTC 2014


Hi all,

I'm trying volume-backed live migration with Ceph block storage and am stuck
on a libvirt issue.
The environment of my cluster is as follows:
Hosts in the cluster:
   Havana (controller & compute) and Compute1 (pure compute node)
   Ceph1, Ceph2, Ceph3: Ceph cluster

1. The Ceph cluster is healthy (verified with "ceph health").
2. The OpenStack cluster:
  the ceph client is installed on each node;
  Ceph is configured in glance, cinder, and nova;
  qemu-kvm, qemu-img, and qemu-kvm-tools are installed from the Ceph repository.
3. Image and volume creation work correctly; an instance launches from the
volume on host Havana.
4. Unexpectedly, migrating the instance from Havana to Compute1 fails with
the following errors. I'm struggling hard with this issue and would much
appreciate any clue!
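For reference, the steps above were run roughly as follows (flavor and
instance names are placeholders, not the exact ones I used; the UUIDs are
the ones that appear in the logs below):

```shell
# Sanity-check the Ceph cluster before anything else.
ceph health                          # expect HEALTH_OK

# Boot an instance from the Ceph-backed Cinder volume
# (classic Havana-era block-device-mapping syntax: <dev>=<volume-id>:::0).
nova boot --flavor m1.small \
  --block-device-mapping vda=f5a34a51-b662-43d4-a7d5-4199de8b1d4b:::0 \
  test-instance

# Volume-backed live migration from host Havana to the pure compute node.
nova live-migration aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7 compute1
```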

Compute.log on host compute1:
2014-03-25 18:46:44.346 16948 AUDIT nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Detach volume
f5a34a51-b662-43d4-a7d5-4199de8b1d4b from mountpoint vda
2014-03-25 18:46:44.364 16948 WARNING nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Detaching volume from unknown instance
2014-03-25 18:46:44.375 16948 ERROR nova.compute.manager
[req-defe320c-bae6-4d57-817e-1cca779e204a 714bab91932043e98ad2d855a81f19b0
888df5c4bc47459485b96ffa03c671e6] [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Failed to detach volume
f5a34a51-b662-43d4-a7d5-4199de8b1d4b from vda
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] Traceback (most recent call last):
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3737, in
_detach_volume
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]     encryption=encryption)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1202,
in detach_volume
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]     virt_dom =
self._lookup_by_name(instance_name)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]   File
"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3101,
in _lookup_by_name
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]     raise
exception.InstanceNotFound(instance_id=instance_name)
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7] InstanceNotFound: Instance
instance-0000000c could not be found.
2014-03-25 18:46:44.375 16948 TRACE nova.compute.manager [instance:
aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7]

Libvirtd.log on host compute1:
2014-03-25 10:32:05.952+0000: 1891: warning : qemuDomainObjTaint:1377 :
Domain id=1 name='instance-0000000b'
uuid=4ddb08dc-c7ef-4cdf-8108-80a296eaf457 is tainted: high-privileges
2014-03-25 10:32:07.188+0000: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.411+0000: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.414+0000: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.416+0000: 1891: warning : qemuSetupCgroupForVcpu:566 :
Unable to get vcpus' pids.
2014-03-25 10:33:34.419+0000: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous
2014-03-25 10:33:34.419+0000: 1891: warning :
qemuDomainObjEnterMonitorInternal:1005 : This thread seems to be the async
job owner; entering monitor without asking for a nested job is dangerous

[root at compute1 ~(keystone_admin)]# rpm -qa |grep qemu
qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
[root at compute1 ~(keystone_admin)]# rpm -qa |grep libvirt
libvirt-client-0.10.2-29.el6_5.3.x86_64
libvirt-python-0.10.2-29.el6_5.3.x86_64
libvirt-0.10.2-29.el6_5.3.x86_64
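One thing that stands out: compute.log complains about instance-0000000c,
while libvirtd.log on the same host shows a domain named instance-0000000b.
As a sketch of how the mismatch could be narrowed down (run on both Havana
and Compute1; the UUID is taken from the trace above):

```shell
# What libvirt actually knows about on this host.
virsh list --all

# What Nova believes the libvirt domain name should be
# (reported as OS-EXT-SRV-ATTR:instance_name).
nova show aa44d1f6-1f4f-434d-b05e-f785f7b0a2a7 | grep instance_name
```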




