According to the logs, the issue is not the deletion itself but the failed detach. Could it be a permission issue on the Ceph side? Can you share the auth caps for your cinder and nova users? And which Ceph version is it?
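For example (client.cinder and client.nova are just the default names from the OpenStack integration docs; substitute whatever names your deployment uses):

ceph auth get client.cinder
ceph auth get client.nova
ceph versions

A working cinder user typically carries caps along the lines of mon 'profile rbd' and osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'.

Quoting vj66666@gmail.com: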
Hi Eugen,
Thank you for your response.
--------------------- Nova-compute logs: ---------------------
2023-10-24 16:26:27.636 7 INFO nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.41 seconds to destroy the instance on the hypervisor.
2023-10-24 16:26:27.851 7 INFO nova.compute.manager [-] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.21 seconds to deallocate network for instance.
2023-10-24 16:26:27.948 7 ERROR nova.volume.cinder [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Delete attachment failed for attachment de30e034-b2a6-40c8-b7f4-686ccf85c024. Error: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6) Code: 409: cinderclient.exceptions.ClientException: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6)
2023-10-24 16:26:27.948 7 WARNING nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Ignoring unknown cinder exception for volume eb28224e-9b76-4cda-bc24-4c077bf59439: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6): cinderclient.exceptions.ClientException: ConflictNovaUsingAttachment: Detach volume from instance cbbcc461-7355-4c96-97ca-2bc3f37eb680 using the Compute API (HTTP 409) (Request-ID: req-afcf0374-c601-491d-b713-b0c6a9a383b6)
2023-10-24 16:26:27.949 7 INFO nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] Took 0.10 seconds to detach 1 volumes for instance.
2023-10-24 16:26:27.981 7 WARNING nova.compute.manager [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Failed to delete volume: eb28224e-9b76-4cda-bc24-4c077bf59439 due to Invalid input received: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots, awaiting a transfer, or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-6417cb1f-ec33-4009-b1d8-f8470b8ceac2): nova.exception.InvalidInput: Invalid input received: Invalid volume: Volume status must be available or error or error_restoring or error_extending or error_managing and must not be migrating, attached, belong to a group, have snapshots, awaiting a transfer, or be disassociated from snapshots after volume transfer. (HTTP 400) (Request-ID: req-6417cb1f-ec33-4009-b1d8-f8470b8ceac2)
2023-10-24 16:26:28.387 7 INFO nova.scheduler.client.report [None req-417882da-5639-493d-bba7-90ca2c0c574b 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - default default] Deleted allocations for instance cbbcc461-7355-4c96-97ca-2bc3f37eb680
2023-10-24 16:26:42.434 7 INFO nova.compute.manager [-] [instance: cbbcc461-7355-4c96-97ca-2bc3f37eb680] VM Stopped (Lifecycle Event)
2023-10-24 16:27:14.482 7 WARNING nova.virt.libvirt.driver [None req-cd6ed787-b2d4-4e4b-9a18-1315933cb4ae - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
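Note: the HTTP 409 (ConflictNovaUsingAttachment) above appears to be raised by cinder-api itself, which refuses to delete an attachment that still belongs to a Nova instance. The stale attachment record from the ERROR line can be inspected with the attachment API (a sketch, assuming Block Storage API microversion 3.27 or later is available; the attachment ID is taken verbatim from the log):

cinder --os-volume-api-version 3.27 attachment-list
cinder --os-volume-api-version 3.27 attachment-show de30e034-b2a6-40c8-b7f4-686ccf85c024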
-------------------- Cinder-volume logs: ---------------------
We get this error when deleting the volume from the OpenStack console:
Error: You are not allowed to delete volume: eb28224e-9b76-4cda-bc24-4c077bf59439
Nothing is logged in "cinder-volume.log" while the volume is being deleted.
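That is consistent with the 409 being rejected at the API layer before the volume service is involved. If useful, the cinder-api log can be searched for the failing request ID from the nova trace above (the log path is an assumption; adjust it to wherever your deployment writes cinder-api logs):

grep req-afcf0374-c601-491d-b713-b0c6a9a383b6 /var/log/cinder/cinder-api.log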
---------------------------------------------------------
Run the commands below to delete the volumes manually
---------------------------------------------------------
cinder reset-state --attach-status detached $volume_id
cinder delete $volume_id
2023-10-24 15:54:57.020 30 INFO cinder.volume.manager [req-40ec54ff-3a65-4676-9122-f6b87496421b req-995260a4-0c01-4cfd-8659-be137ac61533 612dfccc5046423080013961965aed4e 7b556da67bfb4f41ac2ed23bf9dcb306 - - - -] Deleted volume successfully.
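A possibly cleaner alternative to reset-state (a sketch, assuming microversion 3.27 and credentials that are allowed to manage attachments) is to remove the stale attachment record itself and then delete the volume normally; if cinder answers with the same 409, reset-state remains the fallback:

cinder --os-volume-api-version 3.27 attachment-delete de30e034-b2a6-40c8-b7f4-686ccf85c024
cinder delete $volume_id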
Yes, all project volumes are affected.
Regards,
Vijay