[Openstack-operators] Migrating glance images to a new backend

Fei Long Wang feilong at catalyst.net.nz
Wed Mar 29 01:59:50 UTC 2017


Hi Massimo,

Thanks for providing more information. As you can see from David's blog
and the script (https://github.com/dmsimard/migrate-glance-backend),
the trickiest part is keeping the current image ID; otherwise, all
existing instances will fail to rebuild. The approach I'm suggesting
keeps the image ID, so you don't have to create new images. The steps
are as follows:

1. Download and re-upload

    1.1 Iterate over tenants and images, download each image, and
convert it from qcow2 to raw

    1.2 Upload the images to RBD the same way the Glance RBD driver
does:
https://github.com/openstack/glance_store/blob/master/glance_store/_drivers/rbd.py#L426

2. Create a new location for each image, based on the location obtained
in step #1.2. For this step you need to enable
show_multiple_locations=True. Note: the Glance team would suggest
disabling this again after your migration; however, if you want to use
CoW, you may still need to keep it :(

3. Delete the old locations on the GlusterFS backend

4. All done
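The trickiest piece of step #2 is getting the new location URI into
exactly the form the RBD driver writes, i.e.
rbd://<fsid>/<pool>/<image_id>/<snapshot>. A minimal sketch of that URI
construction (the fsid, pool, and image ID below are placeholders, and
this mirrors, rather than calls, the glance_store driver):

```python
# Sketch: build the RBD location URI for an image, in the same
# "rbd://<fsid>/<pool>/<image_id>/<snapshot>" form the glance_store RBD
# driver produces. The fsid and pool are placeholders -- substitute your
# cluster's fsid (`ceph fsid`) and your Glance pool name.
from urllib.parse import quote


def rbd_location_uri(fsid, pool, image_id, snapshot="snap"):
    """Return the location URI Glance expects for an RBD-backed image."""
    # Each component is URL-quoted, matching the driver's behaviour.
    return "rbd://{}/{}/{}/{}".format(
        quote(fsid), quote(pool), quote(image_id), quote(snapshot))


print(rbd_location_uri("my-cluster-fsid", "images",
                       "9f1c35bc-d2a9-4f0f-ba42-271a0e9b1b2a"))
```

With show_multiple_locations enabled, you can then attach this URI to
the existing image (e.g. via the v2 locations API) and later delete the
old file:// location, all without changing the image ID.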


*NOTE*:

1. For steps #2 and #3, you can follow this blog post on using multiple
locations for an image:
https://www.sebastien-han.fr/blog/2015/05/13/openstack-glance-use-multiple-location-for-an-image/

2. Steps 1 and 2 can be done before your downtime window.

3. Technically, you can keep both locations without deleting the old
one, or at least make the migration smoother, by using a location
strategy. In that case, you can set:

     stores=rbd,file

     location_strategy=store_type

     store_type_preference=rbd,file

     This means that if an image has two locations, Glance will try the
RBD location first, then the filesystem location. See more info at
https://github.com/openstack/glance/blob/master/etc/glance-api.conf#L4388
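As a rough illustration of what the store_type strategy does (this is a
simplified re-implementation for explanation only, not Glance's actual
code): each location URL's scheme is mapped to a store type, and the
locations are ordered by where that store appears in
store_type_preference.

```python
# Simplified illustration of Glance's store_type location strategy:
# order an image's locations so that stores appearing earlier in
# store_type_preference come first. Not Glance's actual code.

# Map a location URL scheme to a store type (a small subset).
SCHEME_TO_STORE = {"rbd": "rbd", "file": "file", "http": "http"}


def order_locations(locations, preference=("rbd", "file")):
    def rank(loc):
        scheme = loc["url"].split("://", 1)[0]
        store = SCHEME_TO_STORE.get(scheme)
        # Stores not in the preference list sort last.
        return preference.index(store) if store in preference else len(preference)
    return sorted(locations, key=rank)


locs = [
    {"url": "file:///var/lib/glance/images/9f1c35bc"},
    {"url": "rbd://fsid/images/9f1c35bc/snap"},
]
for loc in order_locations(locs):
    print(loc["url"])  # the rbd:// location is listed first
```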



On 29/03/17 02:02, Massimo Sgaravatto wrote:
> First of all, thanks for your help
>
> This is a private cloud which is right now using gluster as backend.
> Most of the images are private (i.e. usable only within the project),
> uploaded by the end-users.  
> Most of these images were saved in qcow2 format ... 
>
>
> The ceph cluster is still being benchmarked. I am testing the
> integration between ceph and openstack (and studying the migration) on
> a small openstack testbed.
>
>  Having the glance service running during the migration is not
> strictly needed, i.e. we can plan a scheduled downtime of the service 
>
> Thanks again, Massimo
>
>
> 2017-03-28 5:24 GMT+02:00 Fei Long Wang <feilong at catalyst.net.nz>:
>
>     Hi Massimo,
>
>     I don't have experience with this particular migration, but as
>     the glance RBD driver maintainer and the image service maintainer
>     of our public cloud (Catalyst Cloud, based in NZ), I'm happy to
>     provide some information. Before I say more, would you mind
>     sharing some details about your environment?
>
>     1. Are you using Ceph's CoW (copy-on-write) cloning?
>
>     2. Are you using multi locations? 
>
>     show_multiple_locations=True
>
>     3. Are you expecting to migrate all the images within a
>     maintenance window, or do you want to keep the glance service
>     running for end users during the migration?
>
>     4. Is it a public cloud?
>
>
>     On 25/03/17 04:55, Massimo Sgaravatto wrote:
>>     Hi
>>
>>     In our Mitaka cloud we are currently using Gluster as storage
>>     backend for Glance and Cinder.
>>     We are now starting the migration to ceph; the idea is to
>>     dismiss gluster when we are done.
>>
>>     I have a question concerning Glance. 
>>
>>     I have understood (or at least I hope so) how to add ceph as a
>>     store backend for Glance, so that new images will use ceph while
>>     the previously created ones on the file backend remain usable.
>>
>>     My question is how I can migrate the images from the file backend
>>     to ceph when I decide to dismiss the gluster based storage.
>>
>>     The only documentation I found is this one:
>>
>>     https://dmsimard.com/2015/07/18/migrating-glance-images-to-a-different-backend/
>>
>>
>>     Could you please confirm that there aren't other better
>>     (simpler) approaches for such an image migration?
>>
>>     Thanks, Massimo
>>
>>
>>     _______________________________________________
>>     OpenStack-operators mailing list
>>     OpenStack-operators at lists.openstack.org
>>     http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
>     -- 
>     Cheers & Best regards,
>     Feilong Wang (王飞龙)
>     --------------------------------------------------------------------------
>     Senior Cloud Software Engineer
>     Tel: +64-48032246
>     Email: flwang at catalyst.net.nz
>     Catalyst IT Limited
>     Level 6, Catalyst House, 150 Willis Street, Wellington
>     -------------------------------------------------------------------------- 
>
>
>
-- 
Cheers & Best regards,
Feilong Wang (王飞龙)
--------------------------------------------------------------------------
Senior Cloud Software Engineer
Tel: +64-48032246
Email: flwang at catalyst.net.nz
Catalyst IT Limited
Level 6, Catalyst House, 150 Willis Street, Wellington
-------------------------------------------------------------------------- 