[Openstack-operators] Cinder - migration to LVM from RBD

Josef Zelenka josef.zelenka at cloudevelops.com
Fri Feb 16 09:23:43 UTC 2018


Hi, we've had several issues, mainly very slow recovery (on clusters 
containing RBD volumes, CephFS and RADOSGW pools) - sometimes even ten 
times slower. There has also been a significant performance slowdown 
once the cluster was over ~65% full. Another issue we've battled was 
stuck I/O in the cluster, where a VM had an operation stuck that 
generated 60k IOPS; the only thing that helped was deleting the volume. 
I'm not saying these issues can't be solved, but for the time being, 
we've decided to downgrade :)
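For reference, a utilisation check along these lines (a minimal sketch 
using the python-rados bindings, assuming a readable /etc/ceph/ceph.conf 
and keyring; the ~65% figure is just the threshold mentioned above) will 
report how full the cluster is:

    #!/usr/bin/env python
    # Report overall Ceph cluster utilisation via the python-rados bindings.
    import rados

    WARN_RATIO = 0.65  # rough threshold from the slowdown described above

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        stats = cluster.get_cluster_stats()  # sizes are reported in kB
        used = float(stats['kb_used']) / float(stats['kb'])
        print('cluster is %.1f%% full (%d of %d kB)'
              % (used * 100, stats['kb_used'], stats['kb']))
        if used > WARN_RATIO:
            print('warning: above %.0f%% full, expect degraded performance'
                  % (WARN_RATIO * 100))
    finally:
        cluster.shutdown()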

Josef


On 16/02/18 10:14, Sean Redmond wrote:
> Hi Josef,
>
> I can't help you with the Cinder type migrations, but I am
> interested to know why you find Ceph Luminous with RBD not
> production-ready in your case.
>
> I have a cluster here supporting over 1k instances, all backed by
> RBD, and find it very reliable in production.
>
> Thanks
> Sean Redmond
>
> On Fri, Feb 16, 2018 at 8:56 AM, Josef Zelenka
> <josef.zelenka at cloudevelops.com> wrote:
>
>     Hello everyone,
>
>     I'm currently trying to figure out how to migrate my volumes off
>     my Ceph backend. We're currently running Ceph Luminous, but so far
>     it has not proven production-ready for us, so we want to
>     downgrade. However, our client already has some of his VMs on this
>     Ceph cluster, and downgrading means backing them up somewhere. The
>     best course of action for us would be a live migration to local
>     LVM storage, but that isn't possible via the standard cinder
>     migrate tool - I always get an error and nothing happens. We are
>     running OpenStack Pike. Does anyone have any procedures/ideas for
>     making this work? Our last resort is rbd-exporting the volumes to
>     another cluster and then importing them back after the downgrade,
>     but we'd prefer a live migration. Apparently this has worked in
>     the past. Thanks
>
>     Josef Zelenka
>
>
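The migration the quoted question is struggling with can also be driven 
through the Cinder API rather than the CLI, which makes it easier to 
watch the volume's migration_status. A minimal sketch, assuming Keystone 
v3 credentials in the usual OS_* environment variables; the destination 
backend name and volume UUID below are placeholders:

    # Host-assisted migration of one volume to an LVM backend via python-cinderclient.
    import os

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from cinderclient import client

    DEST_HOST = 'node1@lvm#lvm'     # placeholder - list real backends with `cinder get-pools`
    VOLUME_ID = 'VOLUME_UUID_HERE'  # placeholder

    auth = v3.Password(auth_url=os.environ['OS_AUTH_URL'],
                       username=os.environ['OS_USERNAME'],
                       password=os.environ['OS_PASSWORD'],
                       project_name=os.environ['OS_PROJECT_NAME'],
                       user_domain_name=os.environ.get('OS_USER_DOMAIN_NAME', 'Default'),
                       project_domain_name=os.environ.get('OS_PROJECT_DOMAIN_NAME', 'Default'))
    cinder = client.Client('3', session=session.Session(auth=auth))

    vol = cinder.volumes.get(VOLUME_ID)
    # force_host_copy=True requests the generic byte-for-byte copy path;
    # lock_volume=True keeps the volume from being attached/used mid-copy.
    cinder.volumes.migrate_volume(vol, DEST_HOST,
                                  force_host_copy=True, lock_volume=True)
    print(getattr(cinder.volumes.get(VOLUME_ID), 'migration_status', 'unknown'))

Depending on release and driver, the generic copy path may insist on the 
volume being detached first; retyping to a volume type bound to the LVM 
backend with `cinder retype --migration-policy on-demand` is another 
route worth testing before falling back to export/import.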

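The export/import last resort from the quoted question can be scripted 
per volume. A minimal sketch that streams one image between clusters 
with the rbd CLI - the pool name, the volume-<uuid> naming and the 
second ceph.conf are assumptions to adapt:

    # Stream an RBD image from the current cluster into another one, no temp file.
    import subprocess

    SRC_CONF = '/etc/ceph/ceph.conf'            # current cluster
    DST_CONF = '/etc/ceph/backup-cluster.conf'  # hypothetical target cluster
    POOL = 'volumes'
    IMAGE = 'volume-VOLUME_UUID_HERE'           # placeholder

    export = subprocess.Popen(
        ['rbd', '-c', SRC_CONF, 'export', '%s/%s' % (POOL, IMAGE), '-'],
        stdout=subprocess.PIPE)
    # --image-format 2 keeps layering support on the re-imported image
    ret = subprocess.call(
        ['rbd', '-c', DST_CONF, 'import', '--image-format', '2',
         '-', '%s/%s' % (POOL, IMAGE)],
        stdin=export.stdout)
    export.stdout.close()
    if export.wait() != 0 or ret != 0:
        raise SystemExit('export/import of %s failed' % IMAGE)

This only moves the image data; snapshots would need rbd 
export-diff/import-diff, and the Cinder database may need to be 
repointed if the backend or pool names change after the downgrade.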

