[nova] volume information was duplicated when Live Migration failed
To whom it may concern,

Hi. I encountered an issue with OpenStack where volume attachment information is duplicated when Live Migration fails. Could you please provide a solution to this issue?

[Issue]
After a Live Migration failure, checking the affected volume with the 'openstack volume list' command shows the following:

+--------------------------------------+------+--------+------+--------------------------------------------------------------+
| ID                                   | Name | Status | Size | Attached to                                                  |
+--------------------------------------+------+--------+------+--------------------------------------------------------------+
| 9ba9734f-575e-41de-8bb4-60839388e0ad |      | in-use |   10 | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
|                                      |      |        |      | Attached to 73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda |
+--------------------------------------+------+--------+------+--------------------------------------------------------------+

Since the volume is attached to only one instance, I expect 'Attached to' to appear once, but it is duplicated for each failed attempt.

The versions of OpenStack and nova are as follows:

OpenStack Zed

$ openstack --version
openstack 6.0.0

$ nova --version
nova CLI is deprecated and will be a removed in a future release
18.1.0

[Details]
The volume display is duplicated under the following condition:

- When Live Migration of an instance fails

If the port required for Live Migration (libvirt's TCP port 16509) is not reachable between the hosts, Live Migration fails when executed. In this environment the port is not listening:

$ sudo netstat -tuln | grep 16509
$

The output of nova-compute.log at the time of the Live Migration failure is as follows:

---
2024-02-28 17:04:32.119 2785 ERROR nova.virt.libvirt.driver [-] [instance: 521eb55f-535d-4fa7-a27e-447b0bbae9b4] Live Migration failure: unable to connect to server at 'XXXXX.com:16509': Connection refused: libvirt.libvirtError: unable to connect to server at 'XXXXX.com:16509': Connection refused
2024-02-28 17:04:32.119 2785 DEBUG nova.virt.libvirt.driver [-] [instance: 521eb55f-535d-4fa7-a27e-447b0bbae9b4] Migration operation thread notification thread_finished /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10516
2024-02-28 17:04:32.607 2785 DEBUG nova.virt.libvirt.migration [-] [instance: 521eb55f-535d-4fa7-a27e-447b0bbae9b4] VM running on src, migration failed _log /usr/lib/python3.9/site-packages/nova/virt/libvirt/migration.py:432
2024-02-28 17:04:32.608 2785 DEBUG nova.virt.libvirt.driver [-] [instance: 521eb55f-535d-4fa7-a27e-447b0bbae9b4] Fixed incorrect job type to be 4 _live_migration_monitor /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:10330
2024-02-28 17:04:32.608 2785 ERROR nova.virt.libvirt.driver [-] [instance: 521eb55f-535d-4fa7-a27e-447b0bbae9b4] Migration operation has aborted
---

After the failed Live Migration, 'openstack volume list' shows the same duplicated output as above: '73c4c6e8-fe34-4f87-9f26-e70b4cc593ba on /dev/vda' is displayed as many times as Live Migration has failed. The duplicate entries did not disappear even after a subsequent Live Migration succeeded.

Thanks,
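[Editor's note] Each duplicated 'Attached to' entry corresponds to a separate attachment record in Cinder, and the records can be inspected individually. A minimal sketch of how to list them, assuming a recent python-openstackclient and Cinder API microversion 3.27 or later (the exact flag names may vary by release):

$ # List every attachment record Cinder holds for the affected volume;
$ # each failed migration attempt should appear as its own attachment ID.
$ openstack --os-volume-api-version 3.27 volume attachment list \
      --volume-id 9ba9734f-575e-41de-8bb4-60839388e0ad

The attachment IDs reported here identify which rows are stale: everything except the attachment backing the running instance.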
Hi Noguchi,

In the event of a live-migration failure, the new record in the cinder.volume_attachment table corresponding to the destination host is not cleaned up correctly, so the volume keeps the extra attachment records you see in your CLI output. My suggestion is to clean up those stale records from the cinder database manually.

--
Best Regards,
Sang Chan - FPT Smart Cloud
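[Editor's note] A minimal sketch of the manual cleanup suggested above, assuming the stale rows live in the cinder.volume_attachment table and that '<stale-attachment-id>' is taken from 'openstack volume attachment list'; verify the schema against your deployment and back up the database first. Where possible, prefer the attachment API over raw SQL, and take care not to delete the attachment backing the running instance:

$ # Preferred: remove the stale attachment through the Cinder attachment API
$ # (requires volume API microversion 3.27 or later).
$ openstack --os-volume-api-version 3.27 volume attachment delete <stale-attachment-id>

$ # Fallback: soft-delete the stale row directly in the cinder database,
$ # following Cinder's soft-delete convention.
$ mysql cinder -e "UPDATE volume_attachment \
      SET deleted = 1, deleted_at = NOW(), attach_status = 'detached' \
      WHERE id = '<stale-attachment-id>' AND deleted = 0;"

After the cleanup, 'openstack volume list' should show a single 'Attached to' entry for the volume.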
Dear Mr. Quoc,

Thank you for your answer. I will carry out the cleanup you suggested.

Best regards,
Junya Noguchi
participants (2)
- Junya Noguchi (Fujitsu)
- Sang Tran Quoc