Hi,

OpenStack Kolla Ansible has been installed on Ubuntu 20 and integrated with Ceph storage. When I terminate an instance with the "Delete Volume on Instance Delete" option enabled, Cinder does not delete the associated volume, which is causing me problems. I have to log in to the server and run cinder commands to manually delete the volumes that are no longer attached to any instance. I would appreciate it if you could provide a solution.

Thanks,
Vijay
Hi,

If you could provide some logs from the cinder-volume and nova-compute services, someone might be able to help. Is this a general issue (all volumes are affected), or is it limited to one or only some volumes? Did it ever work correctly? What are the steps you have to run manually to delete the volumes?

Regards,
Eugen

Zitat von Vijay Thilak S <vijay.thilak@zohocorp.com>:
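(In a standard Kolla Ansible deployment those logs typically live under /var/log/kolla/ on the respective hosts; the paths below are the usual defaults and may differ in your setup:)

# Assumed standard Kolla log locations, a sketch:
tail -f /var/log/kolla/nova/nova-compute.log      # on the compute node
tail -f /var/log/kolla/cinder/cinder-volume.log   # on the cinder-volume node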
Hi Eugen,

Thank you for your response.

---------------------
Nova-compute logs:
---------------------
2023-10-24 16:26:27.636 7 INFO nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.41 seconds to destroy the instance on the hypervisor.
2023-10-24 16:26:27.851 7 INFO nova.compute.manager [-] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.21 seconds to deallocate network for instance.
2023-10-24 16:26:27.948 7 ERROR nova.volume.cinder [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Delete attachment failed for attachment de30e034-b2a6-40c8-b7f4-686ccf85c024. Error: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6) Code: 409: cinderclient.exceptions.ClientException: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6)
2023-10-24 16:26:27.948 7 WARNING nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Ignoring unknown cinder exception for volume eb28224e-9b76-4cda-bc24-4c077bf59439: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6): cinderclient.exceptions.ClientException: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6)
2023-10-24 16:26:27.949 7 INFO nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.10 seconds to detach 1 volumes for instance.
2023-10-24 16:26:27.981 7 WARNING nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Failed to delete volume: eb28224e-9b76-4cda-bc24-4c077bf59439 due to Invalid input received: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots, awaiting a transfer, or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-6417cb1f-ec33-4009-b1d8-f8470b8ceac2): nova.exception.InvalidInput: Invalid input received: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots, awaiting a transfer, or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-6417cb1f-ec33-4009-b1d8-f8470b8ceac2)
2023-10-24 16:26:28.387 7 INFO nova.scheduler.client.report [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Deleted allocations for instance cbbcc461-7355-4c96-97ca-2bc3f37eb680
2023-10-24 16:26:42.434 7 INFO nova.compute.manager [-] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] VM Stopped (Lifecycle Event)
2023-10-24 16:27:14.482 7 WARNING nova.virt.libvirt.driver [None req-cd6ed787-b2d4-4e4b-9a18-1315933cb4ae - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.

---------------------
Cinder-volume logs:
---------------------
I get this error when deleting the volume in the OpenStack console:

Error: You are not allowed to delete volume: e2328224e-9b76-4cda-bc24-4c077bf59438

No log is generated in "cinder-volume.log" while deleting the volume.

---------------------------------------------------------
Commands run to delete the volumes manually
---------------------------------------------------------
cinder reset-state --attach-status detached <volume-id>
cinder delete <volume-id>

2023-10-24 15:54:57.020 30 INFO cinder.volume.manager [req-40ec54ff-3a65-4676-9122-f6b87496421b req-995260a4-0c01-4cfd-8659-be137ac61533 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - - -] Deleted volume successfully.

Yes, all project volumes are affected.

Regards,
Vijay
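(For anyone hitting the same thing, a minimal sketch of that manual cleanup as a loop. It assumes you pass known-orphaned volume IDs as arguments; note that reset-state bypasses Cinder's safety checks, so verify each ID first, e.g. with "openstack volume show".)

#!/bin/bash
# Sketch: force-detach and delete each volume ID given on the command line.
# WARNING: reset-state only changes the database record, it does not talk
# to the backend, so only use it on volumes that are truly no longer attached.
for volume_id in "$@"; do
    cinder reset-state --attach-status detached "$volume_id"
    cinder delete "$volume_id"
done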
According to the logs, the issue is not the deletion itself but the detach that fails. Could it be a permission issue on the Ceph side? Can you share your auth caps for the cinder and nova users? Which Ceph version is it?

Zitat von vj66666@gmail.com:
# ceph auth get client.cinder
[client.cinder]
        key = AQBFOARlBUuJGhAAEqLsDweiiijrSUHEb0Df+w==
        caps mgr = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images"

# ceph auth get client.nova
Error ENOENT: failed to find client.nova in keyring

There is no keyring for nova, so I created one for Nova using the command below. Still getting the same issue.

ceph auth add client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'

Ceph version: Quincy
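(For comparison, a sketch of the caps the Ceph documentation recommends for the cinder user in an OpenStack integration; the pool names are taken from the output above, so double-check against the docs for your Ceph release before applying anything:)

# Roughly the caps from the Ceph/OpenStack integration docs (sketch only):
ceph auth caps client.cinder \
    mon 'profile rbd' \
    osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images' \
    mgr 'profile rbd pool=volumes, profile rbd pool=vms'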
Did you deploy the nova keyring to all compute nodes? I'd suggest enabling debug logs for nova-compute (restart the service) and then seeing where exactly it fails.

Zitat von vj66666@gmail.com:
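(A minimal sketch of one way to enable that in a Kolla Ansible deployment via its config-override mechanism; the /etc/kolla paths and the inventory file name are assumptions based on the usual layout:)

# Assumed standard Kolla override location; merged into nova.conf on reconfigure.
cat >> /etc/kolla/config/nova.conf <<'EOF'
[DEFAULT]
debug = True
EOF
kolla-ansible -i /etc/kolla/multinode reconfigure --tags nova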
No, I haven't deployed the nova keyring to all compute nodes yet. I will try it and let you know. Thanks.
Hi,

This issue is most likely caused by Nova not being correctly configured in your deployment.

Nova needs to be configured to use service tokens, otherwise detach operations are going to fail on the Cinder side to prevent CVE-2023-2088 [1]. If that's the problem, you should also have difficulties detaching normal volumes.

More on configuring service tokens can be found in the documentation [2].

Cheers,
Gorka.

[1]: https://nvd.nist.gov/vuln/detail/CVE-2023-2088
[2]: https://nvd.nist.gov/vuln/detail/CVE-2023-2088
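(A minimal sketch of the settings that documentation describes; the credential values below are placeholders, and the authoritative option list is in the linked doc:)

# nova.conf on the compute nodes (placeholder credentials):
[service_user]
send_service_user_token = true
auth_type = password
auth_url = http://<keystone-host>:5000
project_name = service
project_domain_name = Default
username = nova
user_domain_name = Default
password = <nova-service-password>

# cinder.conf on the API nodes, so Cinder accepts the service token:
[keystone_authtoken]
service_token_roles = service
service_token_roles_required = true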
Sorry, wrong link for [2]; the right link is:
https://docs.openstack.org/cinder/latest/configuration/block-storage/service...
I believe the latest kolla-ansible should already be configuring this during deployment: https://review.opendev.org/q/I2189dafca070accfd8efcd4b8cc4221c6decdc9f
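(One way to check on a deployed compute node whether that landed; the container name and config path assume a standard Kolla setup:)

docker exec nova_compute grep -A 9 '^\[service_user\]' /etc/nova/nova.conf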
Oh right, I still keep forgetting about this one, mainly because I'm still operating older cloud versions. This sounds like a valid explanation. Thanks for the pointer.

Zitat von Gorka Eguileor <geguileo@redhat.com>:
participants (4)
- Eugen Block
- Gorka Eguileor
- Vijay Thilak S
- vj66666@gmail.com