[Openstack-operators] what is the difference in using Qcow2 or Raw in Ceph

Warren Wang warren at wangspeed.com
Thu May 28 20:49:55 UTC 2015


We have some workloads that are only met by using ephemeral SSDs, so we
can't move those to RBD-backed ephemeral storage. As far as the import
conversion goes, it looks like it would still do a blind, one-time
conversion to raw (or whatever format), so I still wouldn't be able to use
that function.

I guess ideally we would have host aggregates for each type, and both raw
and qcow2 versions of the images, though I fear user education would be
challenging.
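
Something like this is what I have in mind (a rough sketch only; the
aggregate names, hosts, image IDs, and the property key are made up, and it
assumes the AggregateImagePropertiesIsolation scheduler filter is enabled):

    # one aggregate per ephemeral backend, tagged with a custom property
    nova aggregate-create ssd-local
    nova aggregate-set-metadata ssd-local ephemeral_backend=local
    nova aggregate-add-host ssd-local compute01

    nova aggregate-create rbd-ephemeral
    nova aggregate-set-metadata rbd-ephemeral ephemeral_backend=rbd
    nova aggregate-add-host rbd-ephemeral compute02

    # tag each image variant so the scheduler can match it to the right hosts
    glance image-update <raw-image-id> --property ephemeral_backend=rbd
    glance image-update <qcow2-image-id> --property ephemeral_backend=local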

Warren

On Thu, May 28, 2015 at 3:55 PM, Jonathan Proulx <jon at jonproulx.com> wrote:

> On Thu, May 28, 2015 at 3:34 PM, Warren Wang <warren at wangspeed.com> wrote:
> > Even though we're using Ceph as a backend, we still use qcow2 images as
> > our golden images, since we still have a significant (maybe majority)
> > number of users using true ephemeral disks. It would be nice if Glance
> > were clever enough to convert where appropriate.
>
> You can use RBD as your ephemeral backend as well; we do.
>
> This gets us very fast starts and efficient use of storage from RAW
> images, since the ephemeral disk is just a CoW clone of the Glance image.
> We'd previously been using relatively slow local disk (7.2k SATA), and
> our Ceph implementation (SSD for XFS journaling, 7.2k SAS for OSDs)
> has better performance than that for most of our workloads.
>
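> In case it helps, a minimal nova.conf sketch of the relevant bits (the
> pool, user, and secret values here are illustrative, not our actual
> settings):
>
>     [libvirt]
>     images_type = rbd
>     images_rbd_pool = vms
>     images_rbd_ceph_conf = /etc/ceph/ceph.conf
>     rbd_user = cinder
>     rbd_secret_uuid = <libvirt secret UUID>
>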
> Snapshotting instances still takes a long journey, getting written to
> local disk and then pushed back up to Glance; there's work underway to
> take proper RBD snapshots directly, but AFAIK this still has a way to go.
>
> -Jon
>
>
> > Warren
> >
> > On Thu, May 28, 2015 at 3:21 PM, Fox, Kevin M <Kevin.Fox at pnnl.gov> wrote:
> >>
> >> I've experienced the opposite problem though. Downloading raw images and
> >> uploading them to the cloud is very slow. Doing it through qcow2 allows
> >> them to be compressed over the slow links. Ideally, the Ceph driver would
> >> take a qcow2 and convert it to raw on Glance ingest rather than at boot.
> >>
> >> Thanks,
> >> Kevin
> >> ________________________________________
> >> From: Dmitry Borodaenko [dborodaenko at mirantis.com]
> >> Sent: Thursday, May 28, 2015 12:10 PM
> >> To: David Medberry
> >> Cc: openstack-operators at lists.openstack.org
> >> Subject: Re: [Openstack-operators] what is the difference in using Qcow2
> >> or Raw in Ceph
> >>
> >> David is right: Ceph implements volume snapshotting at the RBD level,
> >> not even at the RADOS level: a whole two levels of abstraction above the
> >> file system. It doesn't matter whether it's XFS, BtrFS, Ext4, or VFAT (if
> >> Ceph supported VFAT): Ceph RBD takes care of it before the individual
> >> chunks of an RBD volume are passed to RADOS as objects and get written
> >> into the file system as files by an OSD process.
> >>
> >> The reason the Fuel documentation recommends disabling the QCOW2 format
> >> for images is that RBD does not support QCOW2 disks directly, so Nova and
> >> Cinder have to _convert_ a QCOW2 image into RAW format before passing it
> >> to QEMU's RBD driver. This means that you end up downloading the QCOW2
> >> image from Ceph to a nova-compute node (first full copy), converting it
> >> (second full copy), and uploading the resultant RAW image back to Ceph
> >> (third full copy) just to launch a VM or create a volume from an image.
> >>
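> >> For comparison, converting up front and uploading raw avoids that whole
> >> round trip (filenames here are just illustrative):
> >>
> >>     qemu-img convert -f qcow2 -O raw trusty.qcow2 trusty.raw
> >>     glance image-create --name trusty-raw --disk-format raw \
> >>         --container-format bare --file trusty.raw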
> >>
> >> On Thu, May 28, 2015 at 8:33 AM, David Medberry
> >> <openstack at medberry.net> wrote:
> >> > Yep. It's at the Ceph level (not the XFS level).
> >> >
> >> > On Thu, May 28, 2015 at 8:40 AM, Stephen Cousins
> >> > <steve.cousins at maine.edu>
> >> > wrote:
> >> >>
> >> >> Hi David,
> >> >>
> >> >> So Ceph will use copy-on-write even with XFS?
> >> >>
> >> >> Thanks,
> >> >>
> >> >> Steve
> >> >>
> >> >> On Thu, May 28, 2015 at 10:36 AM, David Medberry
> >> >> <openstack at medberry.net>
> >> >> wrote:
> >> >>>
> >> >>> This isn't remotely related to btrfs. It works fine with XFS. Not sure
> >> >>> how that works in Fuel; I've never used it.
> >> >>>
> >> >>> On Thu, May 28, 2015 at 8:01 AM, Forrest Flagg
> >> >>> <fostro.flagg at gmail.com>
> >> >>> wrote:
> >> >>>>
> >> >>>> I'm also curious about this.  Here are some other pieces of
> >> >>>> information relevant to the discussion.  Maybe someone here can
> >> >>>> clear this up for me as well.  The documentation for Fuel 6.0 (not
> >> >>>> sure what they changed for 6.1) [1] states that when using Ceph one
> >> >>>> should disable qcow2 so that images are stored in raw format.  This
> >> >>>> is due to the fact that Ceph includes its own mechanisms for
> >> >>>> copy-on-write and snapshots.  According to the Ceph documentation
> >> >>>> [2], this is true only when using a BTRFS file system, but in Fuel
> >> >>>> 6.0 Ceph uses XFS, which doesn't provide this functionality.  Also,
> >> >>>> [2] recommends not using BTRFS for production as it isn't considered
> >> >>>> fully mature.  In addition, the Fuel 6.0 documentation [3] states
> >> >>>> that OpenStack with raw images doesn't support snapshotting.
> >> >>>>
> >> >>>> Given this, why does Fuel suggest not using qcow2 with Ceph?  How
> >> >>>> can Ceph be useful if snapshotting isn't an option with raw images
> >> >>>> and qcow2 isn't recommended?  Are there other factors to take into
> >> >>>> consideration that I'm missing?
> >> >>>>
> >> >>>> [1] https://docs.mirantis.com/openstack/fuel/fuel-6.0/terminology.html#qcow2
> >> >>>> [2] http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
> >> >>>> [3] https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html#qcow-format-ug
> >> >>>>
> >> >>>> Thanks,
> >> >>>>
> >> >>>> Forrest
> >> >>>>
> >> >>>> On Thu, May 28, 2015 at 8:02 AM, David Medberry
> >> >>>> <openstack at medberry.net>
> >> >>>> wrote:
> >> >>>>>
> >> >>>>> and better explained here:
> >> >>>>> http://ceph.com/docs/master/rbd/qemu-rbd/
> >> >>>>>
> >> >>>>> On Thu, May 28, 2015 at 6:02 AM, David Medberry
> >> >>>>> <openstack at medberry.net> wrote:
> >> >>>>>>
> >> >>>>>> The primary difference is the ability for Ceph to make zero-byte
> >> >>>>>> copies. When you use qcow2, Ceph must actually create a complete
> >> >>>>>> copy instead of a zero-byte copy, as it cannot do its own
> >> >>>>>> copy-on-write tricks with a qcow2 image.
> >> >>>>>>
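> >> >>>>>> With a raw image, the clone Nova gets amounts to roughly the
> >> >>>>>> following on the Ceph side (pool and image names are illustrative,
> >> >>>>>> not the exact sequence the drivers issue):
> >> >>>>>>
> >> >>>>>>     rbd snap create images/<image-id>@snap
> >> >>>>>>     rbd snap protect images/<image-id>@snap
> >> >>>>>>     rbd clone images/<image-id>@snap vms/<instance-uuid>_disk
> >> >>>>>>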
> >> >>>>>> So, yes, it will work fine with qcow2 images, but it won't be as
> >> >>>>>> performant as it is with RAW. Also, it will actually use more of
> >> >>>>>> the native underlying storage.
> >> >>>>>>
> >> >>>>>> This is also called out in an important note in the Ceph docs:
> >> >>>>>> http://ceph.com/docs/master/rbd/rbd-openstack/
> >> >>>>>>
> >> >>>>>> On Thu, May 28, 2015 at 4:12 AM, Shake Chen
> >> >>>>>> <shake.chen at gmail.com> wrote:
> >> >>>>>>>
> >> >>>>>>> Hi
> >> >>>>>>>
> >> >>>>>>> I am now trying to use Fuel 6.1 to deploy OpenStack Juno, using
> >> >>>>>>> Ceph as the Cinder, Nova, and Glance backend.
> >> >>>>>>>
> >> >>>>>>> The Fuel documentation suggests using the RAW image format with
> >> >>>>>>> Ceph, but if I upload a qcow2 image, it seems to work fine.
> >> >>>>>>>
> >> >>>>>>> What is the difference between using qcow2 and RAW in Ceph?
> >> >>>>>>>
> >> >>>>>>>
> >> >>>>>>> --
> >> >>>>>>> Shake Chen
> >> >>>>>>
> >> >>>>>
> >> >>>>>
> >> >>>>
> >> >>>
> >> >>>
> >> >>
> >> >>
> >> >>
> >> >> --
> >> >> ________________________________________________________________
> >> >>  Steve Cousins             Supercomputer Engineer/Administrator
> >> >>  Advanced Computing Group            University of Maine System
> >> >>  244 Neville Hall (UMS Data Center)              (207) 561-3574
> >> >>  Orono ME 04469                      steve.cousins at maine.edu
> >> >>
> >> >
> >> >
> >>
> >>
> >>
> >> --
> >> Dmitry Borodaenko
> >>
> >
> >
> >
>

