<div dir="ltr"><div><div>> It would be nice if glance was clever enough to convert where appropriate.<br><br></div>You're right, and it looks like that was added in the Kilo cycle: <a href="https://review.openstack.org/#/c/159129/">https://review.openstack.org/#/c/159129/</a><br><br><br></div>-Chris<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 28, 2015 at 3:34 PM, Warren Wang <span dir="ltr"><<a href="mailto:warren@wangspeed.com" target="_blank">warren@wangspeed.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr"><div>Even though we're using Ceph as a backend, we still use qcow2 images as our golden images, since we still have a significant (maybe majority) number of users using true ephemeral disks. It would be nice if glance was clever enough to convert where appropriate.<br><br></div>Warren<span class="HOEnZb"><font color="#888888"><br></font></span></div><div class="gmail_extra"><span class="HOEnZb"><font color="#888888"><br clear="all"><div><div>Warren</div></div></font></span><div><div class="h5">
<br><div class="gmail_quote">On Thu, May 28, 2015 at 3:21 PM, Fox, Kevin M <span dir="ltr"><<a href="mailto:Kevin.Fox@pnnl.gov" target="_blank">Kevin.Fox@pnnl.gov</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I've experienced the opposite problem though. Downloading raw images and uploading them to the cloud is very slow. Doing it through qcow2 allows them to be compressed over the slow links. Ideally, the Ceph driver would take a qcow2 and convert it to raw on glance ingest rather then at boot.<br>

Thanks,
Kevin
________________________________________
From: Dmitry Borodaenko [dborodaenko@mirantis.com]
Sent: Thursday, May 28, 2015 12:10 PM
To: David Medberry
Cc: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] what is the different in use Qcow2 or Raw in Ceph

David is right: Ceph implements volume snapshotting at the RBD level,
not even the RADOS level, a whole two levels of abstraction above the
file system. It doesn't matter whether it's XFS, Btrfs, ext4, or VFAT (if
Ceph supported VFAT): Ceph RBD takes care of it before individual chunks of
an RBD volume are passed to RADOS as objects and get written into the file
system as files by an OSD process.
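For anyone who wants to poke at that layering directly, the rbd CLI shows
it. A quick illustration; the pool names and the @snap / _disk naming below
are just the usual defaults in a Ceph-backed OpenStack setup, so adjust them
for your deployment:

    # snapshot a Glance image at the RBD level, protect it, and clone it;
    # the clone is copy-on-write regardless of the filesystem under the OSDs
    rbd snap create images/<glance-image-id>@snap
    rbd snap protect images/<glance-image-id>@snap
    rbd clone images/<glance-image-id>@snap vms/<instance-uuid>_disk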

The reason the Fuel documentation recommends disabling the QCOW2 format for
images is that RBD does not support QCOW2 disks directly, so Nova and
Cinder have to _convert_ a QCOW2 image into RAW format before passing
it to QEMU's RBD driver. This means that you end up downloading the
QCOW2 image from Ceph to a nova-compute node (first full copy),
converting it (second full copy), and uploading the resulting RAW
image back to Ceph (third full copy) just to launch a VM or create a
volume from an image.
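As a side note, the file extension and the disk_format you pass to Glance
are both just labels, so it's worth checking what a file really contains
before uploading it. A minimal check, assuming qemu-img is installed:

    # reports the actual on-disk format (raw, qcow2, ...) and virtual size
    qemu-img info myimage.img

If it reports qcow2, converting it to raw once before the upload avoids
paying the three-copy dance above on every boot.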

On Thu, May 28, 2015 at 8:33 AM, David Medberry <openstack@medberry.net> wrote:
> yep. It's at the CEPH level (not the XFS level.)
>
> On Thu, May 28, 2015 at 8:40 AM, Stephen Cousins <steve.cousins@maine.edu>
> wrote:
>>
>> Hi David,
>>
>> So Ceph will use copy-on-write even with XFS?
>>
>> Thanks,
>>
>> Steve
>>
>> On Thu, May 28, 2015 at 10:36 AM, David Medberry <openstack@medberry.net>
>> wrote:
>>>
>>> This isn't remotely related to btrfs. It works fine with XFS. Not sure
>>> how that works in Fuel, never used it.
>>>
>>> On Thu, May 28, 2015 at 8:01 AM, Forrest Flagg <fostro.flagg@gmail.com>
>>> wrote:
>>>>
>>>> I'm also curious about this. Here are some other pieces of information
>>>> relevant to the discussion. Maybe someone here can clear this up for me
>>>> as well. The documentation for Fuel 6.0 (not sure what they changed for
>>>> 6.1) [1] states that when using Ceph one should disable qcow2 so that
>>>> images are stored in raw format. This is due to the fact that Ceph
>>>> includes its own mechanisms for copy-on-write and snapshots. According
>>>> to the Ceph documentation [2], this is true only when using a BTRFS file
>>>> system, but in Fuel 6.0 Ceph uses XFS, which doesn't provide this
>>>> functionality. Also, [2] recommends not using BTRFS for production as it
>>>> isn't considered fully mature. In addition, Fuel 6.0 [3] states that
>>>> OpenStack with raw images doesn't support snapshotting.
>>>>
>>>> Given this, why does Fuel suggest not using qcow2 with Ceph? How can
>>>> Ceph be useful if snapshotting isn't an option with raw images and qcow2
>>>> isn't recommended? Are there other factors to take into consideration
>>>> that I'm missing?
>>>>
>>>> [1]
>>>> https://docs.mirantis.com/openstack/fuel/fuel-6.0/terminology.html#qcow2
>>>> [2]
>>>> http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
>>>> [3]
>>>> https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html#qcow-format-ug
>>>>
>>>> Thanks,
>>>>
>>>> Forrest
>>>>
>>>> On Thu, May 28, 2015 at 8:02 AM, David Medberry <openstack@medberry.net>
>>>> wrote:
>>>>>
>>>>> and better explained here:
>>>>> http://ceph.com/docs/master/rbd/qemu-rbd/
>>>>>
>>>>> On Thu, May 28, 2015 at 6:02 AM, David Medberry
>>>>> <openstack@medberry.net> wrote:
>>>>>>
>>>>>> The primary difference is the ability for Ceph to make zero-byte
>>>>>> copies. When you use qcow2, Ceph must actually create a complete copy
>>>>>> instead of a zero-byte copy, as it cannot do its own copy-on-write
>>>>>> tricks with a qcow2 image.
>>>>>>
>>>>>> So, yes, it will work fine with qcow2 images, but it won't be as
>>>>>> performant as it is with RAW. Also, it will actually use more of the
>>>>>> native underlying storage.
>>>>>>
>>>>>> This is also shown as an Important Note in the Ceph docs:
>>>>>> http://ceph.com/docs/master/rbd/rbd-openstack/
>>>>>>
>>>>>> On Thu, May 28, 2015 at 4:12 AM, Shake Chen <shake.chen@gmail.com>
>>>>>> wrote:
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I'm trying to use Fuel 6.1 to deploy OpenStack Juno, with Ceph as the
>>>>>>> cinder, nova and glance backend.
>>>>>>>
>>>>>>> The Fuel documentation suggests using RAW format images with Ceph,
>>>>>>> but if I upload a qcow2 image, it seems to work fine.
>>>>>>>
>>>>>>> What is the difference between using qcow2 and RAW in Ceph?
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Shake Chen
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>> --
>> ________________________________________________________________
>> Steve Cousins          Supercomputer Engineer/Administrator
>> Advanced Computing Group          University of Maine System
>> 244 Neville Hall (UMS Data Center)          (207) 561-3574
>> Orono ME 04469          steve.cousins at maine.edu
>>
>
>

--
Dmitry Borodaenko

_______________________________________________
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators