[Openstack-operators] What is the difference between using Qcow2 and Raw in Ceph?
dborodaenko at mirantis.com
Thu May 28 20:21:51 UTC 2015
On Thu, May 28, 2015 at 12:55 PM, Jonathan Proulx <jon at jonproulx.com> wrote:
> On Thu, May 28, 2015 at 3:34 PM, Warren Wang <warren at wangspeed.com> wrote:
>> Even though we're using Ceph as a backend, we still use qcow2 images as our
>> golden images, since we still have a significant (maybe majority) number of
>> users using true ephemeral disks. It would be nice if Glance were clever
>> enough to convert where appropriate.
> You can use RBD as your ephemeral backend as well; we do.
> This gets us very fast starts and efficient use of storage from RAW
> images, since the ephemeral disk is just a CoW clone of the Glance image.
> We'd previously been using relatively slow local disk (7.2k SATA), and
> our Ceph implementation (SSD for XFS journaling, 7.2k SAS for OSDs)
> has better performance than that for most of our workloads.
> Snapshotting instances still takes a long journey: the snapshot is written to
> local disk, then pushed back up to Glance. There's work to make proper
> RBD snapshots directly, but AFAIK this has a way to go.
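For reference, the RBD-backed ephemeral setup Jonathan describes is enabled in the `[libvirt]` section of nova.conf; the pool and user names below are placeholders, not something prescribed by this thread:

```ini
[libvirt]
# Store ephemeral disks in Ceph RBD instead of on local disk
images_type = rbd
images_rbd_pool = vms                       # placeholder pool name
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder                           # placeholder cephx user
rbd_secret_uuid = <libvirt secret UUID>
```

Note that the fast CoW-clone boot path only works when the Glance image is stored in raw format, which is exactly the qcow2-vs-raw trade-off this thread is about.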
True. What you're referring to are these two bugs, one more generic and
the other specific to RBD:
This problem has been debated for 1.5 years now. There's even a fix in
review now, and as of a couple of weeks ago it was reported to actually
work with Kilo (kudos Zoltan, Josh, and Pádraig!):
However, it has met the usual resistance from the Nova core team, so based
on my previous experience with similar RBD-related changes (RBD-backed
live migrations first worked on Havana, landed in Juno), it's likely
to take until the M release of OpenStack before it's actually merged.
Hopefully sooner if more people help with testing and promoting this fix.
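As an aside, Warren's wish that Glance convert images where appropriate can at least be approximated outside Glance: every qcow2 file starts with the magic bytes `QFI\xfb`, so detecting which golden images need conversion is trivial. A minimal sketch (`detect_image_format` is a hypothetical helper, not a Glance API):

```python
QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every qcow2 file

def detect_image_format(path):
    """Return 'qcow2' or 'raw' based on the file's magic bytes.

    Hypothetical helper: raw images have no magic number, so
    anything that is not qcow2 is treated as raw here.
    """
    with open(path, "rb") as f:
        header = f.read(4)
    return "qcow2" if header == QCOW2_MAGIC else "raw"
```

A wrapper script could then run `qemu-img convert -f qcow2 -O raw <src> <dst>` on anything detected as qcow2 before uploading to Glance, so the RBD backend can CoW-clone it directly.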