[Openstack] Volume backed live migration error

Jakub Pavlík J.Pavlik at tcpisek.cz
Wed Jul 2 07:35:53 UTC 2014


Hi Tetsuya,

I raised the bug. Do you suggest anything more I should do?

Thanks


Jakub
________________________________
From: Sodo, Tetsuya [tetsuya.sodo at hp.com]
Sent: 1 July 2014 04:30
To: Jakub Pavlík; openstack at lists.openstack.org
Cc: Adam Skotnický; Vlastimil Mikeš
Subject: RE: Volume backed live migration error

Hi Jakub,

I see the same error message when trying live migration with the simple LVM Cinder driver.
I noticed that the error shows up on the post-migration compute host, not the pre-migration compute host.

This may be bug #1273268, which has been unassigned for 6 months.
https://bugs.launchpad.net/nova/+bug/1273268

Does anyone know the status?
Any information would be appreciated.

Thanks and regards,



Nodes:
Controller
Compute01
Compute02

Procedure:

- Launch a VM (booted on Compute01)

- Execute live migration from Compute01 to Compute02 (a command sketch follows this list)

- You see the following error message.
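(For reference, a minimal repro sketch with the nova and cinder CLIs; the image UUID, flavor, and names here are placeholders, not values from this thread:)

cinder create --image-id <image-uuid> --display-name boot-vol 10
nova boot --flavor m1.small --boot-volume <volume-uuid> test-vm
nova live-migration <instance-uuid> Compute02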

Error logs:
[compute01: pre-migration host]
2014-06-30 13:37:50.223 6643 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-06-30 13:37:50.958 6643 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 6848
2014-06-30 13:37:50.959 6643 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 114
2014-06-30 13:37:50.959 6643 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 3
2014-06-30 13:37:51.107 6643 INFO nova.compute.resource_tracker [-] Compute_service record updated for rhelospcp1.local.localnet:rhelospcp1.local.localnet

[compute02: post-migration host]
2014-06-30 13:37:17.894 6250 AUDIT nova.compute.manager [req-1559e6c0-6ddb-41ee-9eff-f3772f79df38 af07f17b76424dcbb3509da88a994ed0 031751b0c86d457a8388f2523129ba88] [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c] Detach volume fdb5382a-ef06-4635-969c-10febac135e7 from mountpoint vda
2014-06-30 13:37:17.908 6250 WARNING nova.compute.manager [req-1559e6c0-6ddb-41ee-9eff-f3772f79df38 af07f17b76424dcbb3509da88a994ed0 031751b0c86d457a8388f2523129ba88] [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c] Detaching volume from unknown instance
2014-06-30 13:37:17.918 6250 ERROR nova.compute.manager [req-1559e6c0-6ddb-41ee-9eff-f3772f79df38 af07f17b76424dcbb3509da88a994ed0 031751b0c86d457a8388f2523129ba88] [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c] Failed to detach volume fdb5382a-ef06-4635-969c-10febac135e7 from vda
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c] Traceback (most recent call last):
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3746, in _detach_volume
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]     encryption=encryption)
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1214, in detach_volume
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]     virt_dom = self._lookup_by_name(instance_name)
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3129, in _lookup_by_name
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]     raise exception.InstanceNotFound(instance_id=instance_name)
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c] InstanceNotFound: Instance instance-00000011 could not be found.
2014-06-30 13:37:17.918 6250 TRACE nova.compute.manager [instance: 12c7491f-3cd7-4f0d-9aa3-0e4fea69581c]
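(A quick check, assuming shell access to the post-migration host: the trace says libvirt has no such domain defined there, which virsh can confirm directly. The domain name is the one from the trace above:)

virsh list --all | grep instance-00000011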

Tetsuya

From: Jakub Pavlík [mailto:J.Pavlik at tcpisek.cz]
Sent: Monday, June 30, 2014 8:08 PM
To: openstack at lists.openstack.org
Cc: Adam Skotnický; Vlastimil Mikeš
Subject: [Openstack] Volume backed live migration error

Hi guys,

I have a problem with live migration and block migration using the IBM SVC Cinder driver over Fibre Channel. I have an instance (booted from volume) running on compute node ch1nod2 and I am trying to migrate it to a second compute node, ch1nod3. I run the command and it returns in about one second; the instance goes into the MIGRATING task state and a new paused instance appears on ch1nod3. After about 6 minutes I see the errors below.

[root@ctl-112 ~]# nova list
+--------------------------------------+----------------+--------+------------+-------------+------------------------+
| ID                                   | Name           | Status | Task State | Power State | Networks               |
+--------------------------------------+----------------+--------+------------+-------------+------------------------+
| 8a9979f0-8a27-40ee-bc9d-b8cf17dd7265 | WindowsTest    | ACTIVE | -          | Running     | network1=192.168.6.54  |
| 9c5afd75-ab19-44c6-a630-458431ad4eda | centossnap2    | ACTIVE | -          | Running     | network2=192.168.7.252 |
| df725a9f-784a-4207-b7d0-da2a9f34eb9c | next           | ACTIVE | -          | NOSTATE     | network1=192.168.6.253 |
| 2cb09534-5413-4975-8560-b05ff9645c35 | volumesnaphost | ACTIVE | -          | Running     | network2=192.168.7.253 |
+--------------------------------------+----------------+--------+------------+-------------+------------------------+

[root@ctl-112 ~]# nova live-migration 9c5afd75-ab19-44c6-a630-458431ad4eda ch1nod3.12.intra.cloudlab.cz
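(A sanity check worth running before digging into the logs, assuming the hosts should reach each other over libvirt's unauthenticated TCP listener; this mirrors the connection that peer-to-peer migration opens. A sketch only:)

virsh -c qemu+tcp://ch1nod3.12.intra.cloudlab.cz/system list

If that hangs or errors out, the failure is a libvirt connectivity problem rather than a Nova one.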


ch1nod2 - compute node
2014-06-30 11:02:17.376 3079 INFO nova.compute.manager [-] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] During sync_power_state the instance has a pending task. Skip.

2014-06-30 11:08:11.606 3079 INFO nova.compute.resource_tracker [-] Compute_service record updated for ch1nod2.12.intra.cloudlab.cz:ch1nod2.12.intra.cloudlab.cz
2014-06-30 11:09:07.582 3079 INFO nova.compute.manager [-] Lifecycle event 3 on VM 9c5afd75-ab19-44c6-a630-458431ad4eda
2014-06-30 11:09:07.585 3079 ERROR nova.virt.libvirt.driver [-] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] Live Migration failure: operation failed: migration job: unexpectedly failed
2014-06-30 11:09:11.670 3079 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-06-30 11:09:11.873 3079 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 189078
2014-06-30 11:09:11.874 3079 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 156
2014-06-30 11:09:11.874 3079 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 22
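(The "migration job: unexpectedly failed" message above is libvirt's generic wrapper; the underlying QEMU error usually lands in the per-domain log on the source host. A place to look, using the libvirt domain name that appears in the trace further down; paths assume a stock RHEL 6 libvirt layout:)

grep -i error /var/log/libvirt/qemu/instance-000009e0.log
tail -n 50 /var/log/libvirt/libvirtd.log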


ch1nod3 - compute node

2014-06-30 11:09:07.578 29488 INFO nova.compute.manager [-] Lifecycle event 1 on VM 9c5afd75-ab19-44c6-a630-458431ad4eda
2014-06-30 11:09:07.738 29488 INFO nova.compute.manager [-] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] During the sync_power process the instance has moved from host ch1nod3.12.intra.cloudlab.cz to host ch1nod2.12.intra.cloudlab.cz
2014-06-30 11:09:07.939 29488 AUDIT nova.compute.manager [req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad e47d5141f5ac40f8a5fedf76bb40e904] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] Detach volume 5c0160d7-f9e2-4089-9b0b-d3f3ad46006c from mountpoint vda
2014-06-30 11:09:07.941 29488 WARNING nova.compute.manager [req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad e47d5141f5ac40f8a5fedf76bb40e904] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] Detaching volume from unknown instance
2014-06-30 11:09:07.944 29488 ERROR nova.compute.manager [req-6748d84f-4f0c-43ef-97b4-12e74a989b57 6836cb1afded478a802e2f28020b2bad e47d5141f5ac40f8a5fedf76bb40e904] [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] Failed to detach volume 5c0160d7-f9e2-4089-9b0b-d3f3ad46006c from vda
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] Traceback (most recent call last):
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 3725, in _detach_volume
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]     encryption=encryption)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1202, in detach_volume
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]     virt_dom = self._lookup_by_name(instance_name)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 3085, in _lookup_by_name
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]     raise exception.InstanceNotFound(instance_id=instance_name)
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda] InstanceNotFound: Instance instance-000009e0 could not be found.
2014-06-30 11:09:07.944 29488 TRACE nova.compute.manager [instance: 9c5afd75-ab19-44c6-a630-458431ad4eda]


[root@ch1nod2 ~]# grep "tls\|tcp" /etc/libvirt/libvirtd.conf | grep -v "^#"
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
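(One thing to double-check on RHEL 6: listen_tcp = 1 only takes effect if libvirtd is actually started in listening mode. A sketch of the companion setting, assuming the stock sysconfig layout:)

# /etc/sysconfig/libvirtd
LIBVIRTD_ARGS="--listen"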

nova.conf
# Migration flags to be set for live migration (string value)
#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE

# Migration flags to be set for block migration (string value)
block_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_NON_SHARED_INC
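(To take Nova out of the picture, the same flags can be exercised by hand with virsh; a sketch only, best tried on a disposable test instance since it really migrates the domain. The URIs assume the qemu+tcp setup shown above, and the domain name is the one from the traces:)

virsh -c qemu+tcp://ch1nod2.12.intra.cloudlab.cz/system \
  migrate --live --p2p --undefinesource \
  instance-000009e0 qemu+tcp://ch1nod3.12.intra.cloudlab.cz/system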


Can anybody help me with this problem?
Jakub