[openstack-dev] [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed disks

Dmitry Borodaenko dborodaenko at mirantis.com
Thu Jul 17 19:07:29 UTC 2014


The meeting is in 2 hours, so you still have a chance to participate
or at least lurk :)

On Wed, Jul 16, 2014 at 11:55 PM, Somhegyi Benjamin
<somhegyi.benjamin at wigner.mta.hu> wrote:
> Hi Dmitry,
>
> Will you please share with us how things went on the meeting?
>
> Many thanks,
> Benjamin
>
>
>
>> -----Original Message-----
>> From: ceph-users [mailto:ceph-users-bounces at lists.ceph.com] On Behalf Of
>> Dmitry Borodaenko
>> Sent: Wednesday, July 16, 2014 11:18 PM
>> To: ceph-users at lists.ceph.com
>> Cc: OpenStack Development Mailing List (not for usage questions)
>> Subject: [ceph-users] [Nova] [RBD] Copy-on-write cloning for RBD-backed
>> disks
>>
>> I've got a bit of good news and bad news about the state of landing the
>> rbd-ephemeral-clone patch series for Nova in Juno.
>>
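For anyone skimming the thread: without this series, Nova with
images_type=rbd still downloads the Glance image and imports a full copy
into its own pool for each instance; the rbd-clone-image-handler
blueprint makes it clone the image snapshot instead, so only the blocks a
guest actually writes consume space. Roughly, the clone step at the
librbd level looks like this (a minimal sketch with the python-rbd
bindings; the pool names, the 'snap' snapshot name used by the Glance RBD
store, and the image/instance identifiers are illustrative, and error
handling is omitted):

    import rados
    import rbd

    # Connect using the standard cluster config and default keyring.
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    glance_ioctx = cluster.open_ioctx('images')  # pool holding Glance images
    nova_ioctx = cluster.open_ioctx('vms')       # pool for Nova ephemeral disks
    try:
        # Create a copy-on-write child of the protected image snapshot
        # instead of downloading and re-importing the whole image.
        # '<image uuid>' and '<instance uuid>' are placeholders.
        rbd.RBD().clone(glance_ioctx, '<image uuid>', 'snap',
                        nova_ioctx, '<instance uuid>_disk',
                        features=rbd.RBD_FEATURE_LAYERING)
    finally:
        glance_ioctx.close()
        nova_ioctx.close()
        cluster.shutdown()

Besides the disk space, this also takes the image download and import out
of the instance boot path entirely.
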
>> The good news is that the first patch in the series
>> (https://review.openstack.org/91722, fixing a data-loss-inducing bug with
>> live migrations of instances with RBD-backed ephemeral drives) was
>> merged yesterday.
>>
>> The bad news is that after 2 months of sitting in the review queue and
>> only getting its first +1 from a core reviewer on the spec approval freeze
>> day, the spec for the blueprint rbd-clone-image-handler
>> (https://review.openstack.org/91486) wasn't approved in time. Because of
>> that, today the blueprint was rejected along with the rest of the
>> commits in the series, even though the code itself was reviewed and
>> approved a number of times.
>>
>> Our last chance to avoid putting this work on hold for yet another
>> OpenStack release cycle is to petition for a spec freeze exception in
>> the next Nova team meeting:
>> https://wiki.openstack.org/wiki/Meetings/Nova
>>
>> If you're using Ceph RBD as the backend for ephemeral disks in Nova and are
>> interested in this patch series, please speak up. Since the biggest concern
>> raised about this spec so far has been lack of CI coverage, please let
>> us know if you're already using this patch series with Juno, Icehouse,
>> or Havana.
>>
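For reference, "using Ceph RBD as the backend for ephemeral disks" means
something along these lines in nova.conf on the compute nodes (the pool
name, Ceph user and libvirt secret UUID below are site-specific examples,
not requirements of the patch series):

    [libvirt]
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret holding the Ceph key>

If you already run a configuration along these lines together with the
patch series, that's exactly the kind of feedback that would help with
the CI coverage concern.
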
>> I've put together an etherpad with a summary of where things are with
>> this patch series and how we got here:
>> https://etherpad.openstack.org/p/nova-ephemeral-rbd-clone-status
>>
>> Previous thread about this patch series on ceph-users ML:
>> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028097.html
>>
>> --
>> Dmitry Borodaenko
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users at lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



-- 
Dmitry Borodaenko


