[openstack-dev] [cinder] [nova] Problem of Volume(in-use) Live Migration with ceph backend

Boxiang Zhu bxzhu_5355 at 163.com
Mon Oct 22 03:45:55 UTC 2018



Jay and Melanie, my apologies for causing the misunderstanding; I should have described my problem more clearly. My problem is not migrating volumes between two ceph clusters.


I have two clusters: one is an OpenStack cluster (an all-in-one env whose hostname is dev) and the other is a ceph cluster. I omit the standard OpenStack/ceph integration configuration here.[1] The relevant part of cinder.conf is as follows:


[DEFAULT]
enabled_backends = rbd-1,rbd-2
......
[rbd-1]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes001
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81
[rbd-2]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes002
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot = true
rbd_max_clone_depth = 2
rbd_store_chunk_size = 4
rados_connect_timeout = 5
rbd_user = cinder
rbd_secret_uuid = 86d3922a-b471-4dc1-bb89-b46ab7024e81


This results in two cinder hosts named dev@rbd-1#ceph and dev@rbd-2#ceph.
Then I create a volume type named 'ceph' with the command 'cinder type-create ceph' and add the extra spec 'volume_backend_name=ceph' to it with the command 'cinder type-key <vtype> set volume_backend_name=ceph'.
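For what it's worth, one way to confirm that both pool-level hosts are up is the scheduler pools API. Below is a minimal python-cinderclient sketch; the auth URL and credentials are placeholders, not anything from my env:

# Sketch: list scheduler pools to confirm both backends are up.
# Auth values below are placeholders.
from keystoneauth1 import loading, session
from cinderclient import client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(auth_url='http://dev:5000/v3',
                                username='admin', password='secret',
                                project_name='admin',
                                user_domain_id='default',
                                project_domain_id='default')
sess = session.Session(auth=auth)
cinder = client.Client('3', session=sess)

for pool in cinder.pools.list():
    print(pool.name)  # expect dev@rbd-1#ceph and dev@rbd-2#ceph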


I created a new vm and a new volume with type 'ceph' (so the volume is scheduled to one of the two hosts; assume it landed on dev@rbd-1#ceph this time). The next step was to attach the volume to the vm. Finally I tried to migrate the in-use volume from host dev@rbd-1#ceph to host dev@rbd-2#ceph, but it failed with the exception NotImplementedError(_("Swap only supports host devices")).
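For reference, the reproduction amounts to the following calls; a rough sketch with python-cinderclient, reusing the client from the sketch above (the size and names are arbitrary):

# Sketch: reproduce the failing in-use migration.
vol = cinder.volumes.create(size=1, volume_type='ceph', name='test-vol')
# ... wait for 'available', attach the volume to the vm via nova, then:
cinder.volumes.migrate_volume(vol.id, 'dev@rbd-2#ceph',
                              force_host_copy=False, lock_volume=False)
# With an in-use rbd volume this ends in nova with
# NotImplementedError("Swap only supports host devices").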


So my real question is: is there any work to migrate an in-use (ceph rbd) volume from one host (pool) to another host (pool) within the same ceph cluster?
The only difference between the spec[2] and my scenario is that the spec covers available volumes while mine involves in-use volumes.




[1] http://docs.ceph.com/docs/master/rbd/rbd-openstack/
[2] https://review.openstack.org/#/c/296150


Cheers,
Boxiang
On 10/21/2018 23:19, Jay S. Bryant <jungleboyj at gmail.com> wrote:

Boxiang,

I have not heard any discussion of extending this functionality for Ceph to work between different Ceph clusters.  I wasn't aware, however, that the existing spec was limited to one Ceph cluster.  So, that is good to know.

I would recommend reaching out to Jon Bernard or Eric Harney for guidance on how to proceed.  They work closely with the Ceph driver and could provide insight.

Jay




On 10/19/2018 10:21 AM, Boxiang Zhu wrote:



Hi melanie, thanks for your reply.


My cinder and nova are both Rocky. The scope of the cinder spec[1]
covers only available-volume migration between two pools in the same ceph cluster.
If the volume is in-use[2], cinder falls back to the generic migration function, and then, as you
describe, the nova side raises NotImplementedError(_("Swap only supports host devices")).
The get_config of the net volume driver[3] sets no source_path.
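For anyone following along, [3] looks roughly like this (a paraphrase from memory of the Rocky code, not a verbatim quote):

# Paraphrased sketch of LibvirtNetVolumeDriver.get_config in [3]:
# for an rbd volume it fills in network-disk fields only, so
# conf.source_path stays unset.
def get_config(self, connection_info, disk_info):
    conf = super(LibvirtNetVolumeDriver, self).get_config(
        connection_info, disk_info)
    netdisk_properties = connection_info['data']
    conf.source_type = 'network'
    conf.source_protocol = connection_info['driver_volume_type']  # 'rbd'
    conf.source_name = netdisk_properties.get('name')  # e.g. pool/volume-<uuid>
    conf.source_hosts = netdisk_properties.get('hosts', [])
    conf.source_ports = netdisk_properties.get('ports', [])
    return conf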


So has anyone succeeded in migrating an in-use volume with the ceph backend, or is anyone working on it?


[1] https://review.openstack.org/#/c/296150
[2] https://review.openstack.org/#/c/256091/23/cinder/volume/drivers/rbd.py
[3] https://github.com/openstack/nova/blob/stable/rocky/nova/virt/libvirt/volume/net.py#L101




Cheers,
Boxiang
On 10/19/2018 22:39, melanie witt <melwittt at gmail.com> wrote:
On Fri, 19 Oct 2018 11:33:52 +0800 (GMT+08:00), Boxiang Zhu wrote:
When I use the LVM backend to create a volume and then attach it to a vm,
I can migrate the in-use volume from one host to another; nova libvirt
calls 'rebase' to finish it. But with the ceph backend, it raises the
exception 'Swap only supports host devices', so migrating an in-use
volume is not supported. Is anyone doing this work now? Or is there any
way for me to migrate an in-use volume with the ceph backend?

What version of cinder and nova are you using?

I found this question/answer on ask.openstack.org:

https://ask.openstack.org/en/question/112954/volume-migration-fails-notimplementederror-swap-only-supports-host-devices/

and it looks like there was some work done on the cinder side [1] to
enable migration of in-use volumes with ceph semi-recently (Queens).

On the nova side, the code looks for the source_path in the volume
config, and if one is not present, it raises
NotImplementedError(_("Swap only supports host devices")). So in your
environment, the volume configs must be missing a source_path.
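For context, the check is in the libvirt driver's swap_volume path; paraphrased (not a verbatim quote of the Rocky source):

# Paraphrased sketch of the guard in nova/virt/libvirt/driver.py swap_volume:
# the new volume's config is built first, and anything without a
# source_path (i.e. every network/rbd disk) is rejected.
conf = self._get_volume_config(new_connection_info, disk_info)
if not conf.source_path:
    self._disconnect_volume(context, new_connection_info, instance)
    raise NotImplementedError(_("Swap only supports host devices"))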

If you are using at least the Queens version, then there must be something
additional missing that we would need to address to make the migration work.

[1] https://blueprints.launchpad.net/cinder/+spec/ceph-volume-migrate

Cheers,
-melanie





__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev






