Hi everyone,

We are using a Ceph cluster as the backend for Cinder. The Ceph cluster has two pools: volumes_data_hdd for HDDs and volumes_data_ssd for SSDs. These are assigned to the Cinder volume types ceph-hdd and ceph-ssd, respectively.

To change the type of a ceph-hdd volume to ceph-ssd, I executed the following command:

$ openstack volume set --type ceph-ssd --retype-policy on-demand <ID of ceph-hdd volume>

After the retype, 'ceph df' shows an increase in usage on the destination pool. The following is an example of retyping a newly created, empty 10 GB ceph-hdd volume to ceph-ssd:

$ sudo ceph df | grep volumes_data
volumes_data_hdd   19    1   98 GiB  25.51k  195 GiB  2.24  4.2 TiB
volumes_data_ssd   22    1   91 GiB  23.89k  181 GiB  3.55  2.4 TiB

$ openstack volume list --long | grep 7fd16e56-71ec-430b-8499-8d216a57f1c6
| 7fd16e56-71ec-430b-8499-8d216a57f1c6 | test | available | 10 | ceph-hdd | false | | |

$ openstack volume set --type ceph-ssd --retype-policy on-demand 7fd16e56-71ec-430b-8499-8d216a57f1c6

$ openstack volume list --long | grep 7fd16e56-71ec-430b-8499-8d216a57f1c6
| 7fd16e56-71ec-430b-8499-8d216a57f1c6 | test | available | 10 | ceph-ssd | false | | |

$ sudo ceph df | grep volumes_data
volumes_data_hdd   19    1   98 GiB  25.51k  195 GiB  2.24  4.2 TiB
volumes_data_ssd   22    1  101 GiB  26.45k  201 GiB  3.95  2.4 TiB

Even though the volume is empty, the stored usage of volumes_data_ssd increases by 10 GiB. I suspect this is because unused space is zero-filled during the copy that the retype performs.

Is it possible to change the volume type through Cinder while keeping the actual used capacity the same (i.e. keeping the volume sparse)? If not, I think it would be better for Cinder to use the 'rbd cp' command internally for type changes within the same Ceph cluster, since it should be able to copy only the allocated data and preserve sparseness.

What are your thoughts on this approach?

OpenStack and Ceph versions are:

$ ceph -v
ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy (stable)

$ dnf list --installed | grep openstack-cinder
openstack-cinder.noarch    1:21.3.2-1.el9s    @centos-openstack-zed

Best regards,
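
P.S. For reference, this is roughly the kind of copy I have in mind, as a rough sketch only: it assumes the Cinder RBD backend's default image naming of volume-<UUID>, the pool and image names would need to match the actual deployment, and Cinder's database would still point at the old pool afterwards, so this is only an illustration of the idea, not a drop-in manual workaround.

$ sudo rbd du volumes_data_hdd/volume-7fd16e56-71ec-430b-8499-8d216a57f1c6    # actual allocation of the source image
$ sudo rbd cp volumes_data_hdd/volume-7fd16e56-71ec-430b-8499-8d216a57f1c6 volumes_data_ssd/volume-7fd16e56-71ec-430b-8499-8d216a57f1c6
$ sudo rbd du volumes_data_ssd/volume-7fd16e56-71ec-430b-8499-8d216a57f1c6    # compare allocation of the copy

The 'rbd du' calls are just there to compare provisioned vs. actually used size before and after the copy, which is also how I noticed the difference after the Cinder retype.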