[openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

Dennis Kramer (DT) dennis at holmes.nl
Thu Jul 17 06:55:45 UTC 2014



Hi Dmitry,

I've been using Ubuntu 14.04 LTS + Icehouse with Ceph as a storage
backend for Glance, Cinder and Nova (KVM/libvirt). I *really* would
love to see this patch series land in Juno. The unnecessary copy
from-and-to Ceph when using the default "boot from image" option has
been a real performance issue, and your fix seems to be the solution.
IMHO this is one of the most important features when using Ceph RBD
as a backend for OpenStack Nova.
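For context, the copy-on-write cloning that the patch series enables can
be sketched at the RBD level with the plain `rbd` CLI. The pool, image,
and snapshot names below are made-up examples, and a real run needs a
Ceph cluster, so this sketch only prints the commands (dry run):

```shell
# Dry-run sketch of RBD copy-on-write cloning: instead of downloading
# the Glance image and re-uploading a full copy as the instance disk,
# the instance disk is created as a COW clone of a protected snapshot.
# All names are hypothetical examples.
POOL=images
IMG=ubuntu-14.04          # the Glance image stored in RBD
SNAP=snap                 # clones must be made from a protected snapshot
CLONE=vms/instance-disk   # the ephemeral disk Nova would create

run() { echo "+ $*"; }    # swap 'echo' for real execution on a live cluster

run rbd snap create $POOL/$IMG@$SNAP
run rbd snap protect $POOL/$IMG@$SNAP
run rbd clone $POOL/$IMG@$SNAP $CLONE
```

The clone shares unmodified blocks with the parent snapshot, which is
why booting from an image becomes near-instant instead of a full copy.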

Can you point me in the right direction on how to apply this patch
series of yours to a default Ubuntu 14.04 LTS + Icehouse installation?
I'm using the stock Ubuntu packages, since Icehouse lives in the Ubuntu
archive, and I'm not sure how to apply the patch series. I would love
to test and review it.
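One plausible route (untested sketch, not instructions from the patch
author) is to pull the change from Gerrit and apply the resulting patch
on top of the packaged Nova. The patchset number and install path below
are assumptions, so the sketch prints the commands instead of running
them:

```shell
# Hypothetical sketch of applying one Gerrit change to packaged Nova.
# "22" in the ref is the last two digits of the change number; the
# patchset number and install path are assumptions to verify first.
CHANGE=91722              # first change in the series (live-migration fix)
PATCHSET=1                # check the review page for the latest patchset
NOVA=/usr/lib/python2.7/dist-packages  # where the Ubuntu package installs nova

run() { echo "+ $*"; }    # drop the 'echo' indirection to really run it

run git clone https://git.openstack.org/openstack/nova
run cd nova
run git fetch https://review.openstack.org/openstack/nova \
    refs/changes/22/$CHANGE/$PATCHSET
run git format-patch -1 --stdout FETCH_HEAD '>' /tmp/$CHANGE.patch
run cd $NOVA
run sudo patch -p1 --dry-run '<' /tmp/$CHANGE.patch
run sudo service nova-compute restart
```

The `--dry-run` pass is there to check the patch applies cleanly before
touching the installed files; the rest of the series would be fetched
the same way, one change at a time.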

With regards,

Dennis

On 07/16/2014 11:18 PM, Dmitry Borodaenko wrote:
> I've got a bit of good news and bad news about the state of landing
> the rbd-ephemeral-clone patch series for Nova in Juno.
> 
> The good news is that the first patch in the series 
> (https://review.openstack.org/91722, which fixes a data-loss-inducing
> bug with live migrations of instances with RBD-backed ephemeral
> drives) was merged yesterday.
> 
> The bad news is that after 2 months of sitting in the review queue
> and only getting its first +1 from a core reviewer on the spec
> approval freeze day, the spec for the blueprint
> rbd-clone-image-handler (https://review.openstack.org/91486)
> wasn't approved in time. Because of that, the blueprint was
> rejected today along with the rest of the commits in the series,
> even though the code itself had been reviewed and approved a
> number of times.
> 
> Our last chance to avoid putting this work on hold for yet another 
> OpenStack release cycle is to petition for a spec freeze exception 
> in the next Nova team meeting: 
> https://wiki.openstack.org/wiki/Meetings/Nova
> 
> If you're using Ceph RBD as a backend for ephemeral disks in Nova
> and are interested in this patch series, please speak up. Since the 
> biggest concern raised about this spec so far has been lack of CI 
> coverage, please let us know if you're already using this patch 
> series with Juno, Icehouse, or Havana.
> 
> I've put together an etherpad with a summary of where things are 
> with this patch series and how we got here: 
> https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status
> 
> Previous thread about this patch series on ceph-users ML: 
> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users at lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





