[ops][cinder] Moving volume to new type
Hi All,

I'm currently languishing on Mitaka, so perhaps further back than help can reach, but if anyone can tell me whether this is something dumb I'm doing or a known bug in Mitaka that's preventing me from moving volumes from one type to another, it'd be a big help.

In the further past I did a Cinder backend migration by creating a new volume type and then changing all the existing volumes to the new type. This is how we got from iSCSI to RBD (probably in Grizzly or Havana).

Currently I'm starting to move from one RBD pool to another, and it seems like this should work the same way. Both pools and types exist and I can create volumes in either, but when I run:

openstack volume set --type ssd test-vol

it rather silently fails to do anything (the CLI returns 0). Looking into the scheduler logs I see:

# yup 2 "hosts" to check
DEBUG cinder.scheduler.base_filter Starting with 2 host(s) get_filtered_objects
DEBUG cinder.scheduler.base_filter Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects
DEBUG cinder.scheduler.filters.capacity_filter Space information for volume creation on host nimbus-1@ssdrbd#ssdrbd (requested / avail): 8/47527.78 host_passes
DEBUG cinder.scheduler.base_filter Filter CapacityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base
DEBUG cinder.scheduler.filters.capabilities_filter extra_spec requirement 'ssdrbd' does not match 'rbd' _satisfies_extra_specs /usr/lib/python2.7/dist-
DEBUG cinder.scheduler.filters.capabilities_filter host 'nimbus-1@rbd#rbd': free_capacity_gb: 71127.03, pools: None fails resource_type extra_specs req
DEBUG cinder.scheduler.base_filter Filter CapabilitiesFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/

# after filtering we have one
DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1@ssdrbd#ssdrbd': free_capacity_gb: 47527.78, pools: None] _get_weighted_candidates

# but it fails?
ERROR cinder.scheduler.manager Could not find a host for volume 49299c0b-8bcf-4cdb-a0e1-dec055b0e78c with type bc2bc9ad-b0ad-43d2-93db-456d750f194d.

Successfully creating a volume in ssdrbd is identical up to that point, except rather than the ERROR on the last line it goes to:

# Actually chooses 'nimbus-1@ssdrbd#ssdrbd' as top host
DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1@ssdrbd#ssdrbd': free_capacity_gb: 47527.8, pools: None] _get_weighted_candidates
DEBUG cinder.scheduler.filter_scheduler Choosing nimbus-1@ssdrbd#ssdrbd _choose_top_host

# then goes and makes volume
DEBUG oslo_messaging._drivers.amqpdriver CAST unique_id: 1b7a9d88402a41f8b889b88a2e2a198d exchange 'openstack' topic 'cinder-volume' _send
DEBUG cinder.scheduler.manager Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (e70dcc3f-7d88-4542-abff-f1a1293e90fb) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver

Anyone recognize this situation?

Since I'm retiring the old spinning disks I can also "solve" this on the Ceph side by changing the crush map so that the old rbd pool just picks all SSDs. So this isn't critical, but in the transitional period, until I have enough SSD capacity to really throw *everything* over, there are some hot-spot volumes it would be really nice to move this way.

Thanks,
-Jon
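For anyone reproducing this, note that the zero exit status only means the API accepted the retype request; the outcome only shows up on the volume record afterwards. A quick check, reusing the test-vol name from the post (the volume_type and status fields tell you whether anything changed, and as admin the os-vol-host-attr:host field also shows which backend the volume currently sits on):

cinder show test-vol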
On 30/01, Jonathan Proulx wrote:
Hi All,
I'm currently languishing on Mitaka, so perhaps further back than help can reach, but if anyone can tell me whether this is something dumb I'm doing or a known bug in Mitaka that's preventing me from moving volumes from one type to another, it'd be a big help.
In the further past I did a Cinder backend migration by creating a new volume type and then changing all the existing volumes to the new type. This is how we got from iSCSI to RBD (probably in Grizzly or Havana).
Currently I'm starting to move from one RBD pool to another, and it seems like this should work the same way. Both pools and types exist and I can create volumes in either, but when I run:
openstack volume set --type ssd test-vol
it rather silently fails to do anything (the CLI returns 0). Looking into the scheduler logs I see:
# yup 2 "hosts" to check DEBUG cinder.scheduler.base_filter Starting with 2 host(s) get_filtered_objects DEBUG cinder.scheduler.base_filter Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects DEBUG cinder.scheduler.filters.capacity_filter Space information for volume creation on host nimbus-1@ssdrbd#ssdrbd (requested / avail): 8/47527.78 host_passes DEBUG cinder.scheduler.base_filter Filter CapacityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base DEBUG cinder.scheduler.filters.capabilities_filter extra_spec requirement 'ssdrbd' does not match 'rbd' _satisfies_extra_specs /usr/lib/python2.7/dist- DEBUG cinder.scheduler.filters.capabilities_filter host 'nimbus-1@rbd#rbd': free_capacity_gb: 71127.03, pools: None fails resource_type extra_specs req DEBUG cinder.scheduler.base_filter Filter CapabilitiesFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/
# after filtering we have one
DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1@ssdrbd#ssdrbd': free_capacity_gb: 47527.78, pools: None] _get_weighted_candidates
# but it fails?
ERROR cinder.scheduler.manager Could not find a host for volume 49299c0b-8bcf-4cdb-a0e1-dec055b0e78c with type bc2bc9ad-b0ad-43d2-93db-456d750f194d.
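The "'ssdrbd' does not match 'rbd'" line above is the CapabilitiesFilter comparing the volume type's extra specs, typically a volume_backend_name entry, against what each backend reports. Assuming the new type is the ssd one used in the command above, its requirement can be inspected with:

openstack volume type show ssd

and the old type's entry should point at the rbd backend instead.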
Hi,

This looks like you didn't say that it was OK to migrate volumes on the retype. Did you set the migration policy on the request to "on-demand"?

cinder retype --migration-policy on-demand test-vol ssd

Cheers,
Gorka.
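For reference, newer releases of python-openstackclient expose the same migration policy on volume set as --retype-policy; whether the client that ships alongside Mitaka already has that flag is worth checking first, so treat this as the equivalent form rather than a guaranteed option:

openstack volume set --type ssd --retype-policy on-demand test-vol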
Successfully creating a volume in ssdrbd is identical up to that point, except rather than the ERROR on the last line it goes to:
# Actually chooses 'nimbus-1@ssdrbd#ssdrbd' as top host
DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1@ssdrbd#ssdrbd': free_capacity_gb: 47527.8, pools: None] _get_weighted_candidates
DEBUG cinder.scheduler.filter_scheduler Choosing nimbus-1@ssdrbd#ssdrbd _choose_top_host
# then goes and makes volume
DEBUG oslo_messaging._drivers.amqpdriver CAST unique_id: 1b7a9d88402a41f8b889b88a2e2a198d exchange 'openstack' topic 'cinder-volume' _send
DEBUG cinder.scheduler.manager Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (e70dcc3f-7d88-4542-abff-f1a1293e90fb) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver
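For comparison, the successful path above is what an ordinary create against the new type produces; the volume name and size here are only illustrative:

openstack volume create --size 8 --type ssd test-ssd-vol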
Anyone recognize this situation?
Since I'm retiring the old spinning disks I can also "solve" this on the Ceph side by changing the crush map so that the old rbd pool just picks all SSDs. So this isn't critical, but in the transitional period, until I have enough SSD capacity to really throw *everything* over, there are some hot-spot volumes it would be really nice to move this way.
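How that Ceph-side workaround looks depends on the cluster version: with device classes (Luminous or newer) it is roughly the two commands below, while on the Hammer/Jewel-era releases that usually accompany Mitaka you would decompile and hand-edit the crush map instead. The rule name ssd-only is made up for the example, and the pool is assumed to be literally named rbd as the backend name suggests:

ceph osd crush rule create-replicated ssd-only default host ssd
ceph osd pool set rbd crush_rule ssd-only

Reassigning the rule makes Ceph move the pool's existing data onto the SSD OSDs in the background.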
Thanks, -Jon
participants (2)
- Gorka Eguileor
- Jonathan Proulx