[openstack-dev] [Nova] FFE Request: Ephemeral RBD image support

Dmitry Borodaenko dborodaenko at mirantis.com
Thu Mar 6 21:18:05 UTC 2014


+1 on both counts:

Yes, this change has low impact outside of the RBD driver, which has
been out there since September, and I agree that it should be
exempted from the feature freeze.

And yes, the RBD driver in Nova is severely crippled without this
code (which is why this was originally reported as a bug). Please let
me explain why for the benefit of prospective reviewers.

The primary benefit of using Ceph as the storage backend in an
OpenStack deployment is keeping all bulk data in a single storage
pool, eliminating the need to duplicate and transfer image data every
time you launch or snapshot a VM. Ceph achieves this with
copy-on-write object snapshots: when you create a Cinder volume from
a Glance image, all that passes from Ceph to Cinder is an RBD URI
pointing to a new snapshot of the same object. When you write into
the new volume, only the parts that change get new RADOS object
stripes; the rest of the data remains unchanged and unduplicated.
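
To make the copy-on-write mechanics concrete, here's a minimal
sketch using the librbd Python bindings. The pool and image names
('images', 'volumes', 'fedora-20', 'volume-1') are illustrative, not
what Glance or Cinder actually use internally:

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    images = cluster.open_ioctx('images')    # pool with the Glance image
    volumes = cluster.open_ioctx('volumes')  # pool for the Cinder volume

    # Snapshot the source image and protect the snapshot so clones
    # can be layered on top of it.
    img = rbd.Image(images, 'fedora-20')
    img.create_snap('base')
    img.protect_snap('base')
    img.close()

    # The "copy" is pure metadata: no image data is duplicated until
    # the clone is actually written to.
    rbd.RBD().clone(images, 'fedora-20', 'base', volumes, 'volume-1',
                    features=rbd.RBD_FEATURE_LAYERING)

    volumes.close()
    images.close()
    cluster.shutdown()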

Contrast this with the way the current implementation of the RBD
driver in Nova works: when you launch an instance from a Glance image
backed by RBD, the whole image is downloaded from Ceph onto a local
drive on the compute node, only to be uploaded back as a new Ceph RBD
object. This wastes both network and disk capacity; not much when all
you deal with is a dozen snowflake VMs, but a deal-breaker if you
need thousands of nearly identical VMs whose disk contents differ
only in configuration files under /etc.
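
For comparison, here is roughly what the current code path boils down
to, again as an illustrative librbd sketch rather than the actual
Nova code. The full image crosses the network twice and is staged on
the compute node's local disk in between:

    import rados
    import rbd

    CHUNK = 8 << 20  # copy in 8 MiB chunks

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    images = cluster.open_ioctx('images')
    vms = cluster.open_ioctx('vms')

    # 1. Download the whole image from Ceph to a local file.
    src = rbd.Image(images, 'fedora-20')
    size = src.size()
    with open('/tmp/fedora-20.img', 'wb') as f:
        off = 0
        while off < size:
            f.write(src.read(off, min(CHUNK, size - off)))
            off += CHUNK
    src.close()

    # 2. Upload the same bytes back into Ceph as a brand-new, fully
    #    allocated RBD image; nothing is shared with the original.
    rbd.RBD().create(vms, 'instance-disk', size)
    dst = rbd.Image(vms, 'instance-disk')
    with open('/tmp/fedora-20.img', 'rb') as f:
        off = 0
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            dst.write(data, off)
            off += len(data)
    dst.close()
    vms.close()
    images.close()
    cluster.shutdown()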

A limitation like this defeats the whole purpose of having an RBD
driver in Nova; you might as well use local storage on the compute
nodes for ephemeral disks.

Thank you,
-Dmitry Borodaenko

On Thu, Mar 6, 2014 at 3:18 AM, Sebastien Han
<sebastien.han at enovance.com> wrote:
> Big +1 on this.
> Missing such support would make the implementation useless.
>
> ----
> Sébastien Han
> Cloud Engineer
>
> "Always give 100%. Unless you're giving blood."
>
> Phone: +33 (0)1 49 70 99 72
> Mail: sebastien.han at enovance.com
> Address : 11 bis, rue Roquépine - 75008 Paris
> Web : www.enovance.com - Twitter : @enovance
>
> On 06 Mar 2014, at 11:44, Zhi Yan Liu <lzy.dev at gmail.com> wrote:
>
>> +1! Given the low risk and its usefulness for real cloud deployments.
>>
>> zhiyan
>>
>> On Thu, Mar 6, 2014 at 4:20 PM, Andrew Woodward <xarses at gmail.com> wrote:
>>> I'd like to request an FFE for the remaining patches in the Ephemeral
>>> RBD image support chain:
>>>
>>> https://review.openstack.org/#/c/59148/
>>> https://review.openstack.org/#/c/59149/
>>>
>>> These are still open after their dependency,
>>> https://review.openstack.org/#/c/33409/, was merged.
>>>
>>> These should be low risk as:
>>> 1. We have been testing with this code in place.
>>> 2. It's nearly all contained within the RBD driver.
>>>
>>> This is needed because it implements essential functionality that has
>>> been missing from the RBD driver, and this will be the second release
>>> in which merging it has been attempted.
>>>
>>> Andrew
>>> Mirantis
>>> Ceph Community
>>>


