[Glance] Xena and CEPH RBD backend (show_image_direct_url status )

Tony Liu tonyliu0592 at hotmail.com
Tue Mar 15 23:04:20 UTC 2022

Hi Andrew,

When you say it's not working, do users see any error messages, and is there any error or failure logging from Glance or Cinder?
More details would help to look into it.

From: west, andrew <andrew.west-contractor at CGG.COM>
Sent: February 24, 2022 06:29 AM
To: openstack-discuss at lists.openstack.org
Subject: Xena and CEPH RBD backend  (show_image_direct_url status )

Hello experts

We are currently using OpenStack Xena with a Ceph backend (Pacific 16.2.7).

It seems there is a bug (since Wallaby?) where efficient use of a Ceph Pacific RBD backend (i.e. with copy-on-write cloning) is not working:
show_image_direct_url needs to be False to create volumes (or ephemeral root disks for Nova).
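For reference, this is roughly the configuration that is supposed to enable copy-on-write cloning with an RBD backend. It is a sketch only; the pool names, user names, and backend section name below are assumptions, not taken from the original post, so adjust them to your deployment:

```ini
# glance-api.conf -- expose the RBD location of images so that Cinder
# and Nova can clone them directly instead of downloading them.
[DEFAULT]
show_image_direct_url = True

[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images          # assumed pool name
rbd_store_user = glance          # assumed cephx user
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# cinder.conf -- an RBD backend (section name "ceph" is an assumption).
# Note: copy-on-write cloning only applies to raw-format images.
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes               # assumed pool name
rbd_user = cinder                # assumed cephx user
rbd_ceph_conf = /etc/ceph/ceph.conf
```

With this in place, a volume created from a raw image should appear in Ceph as a child of the image snapshot rather than as a full copy.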

Without Ceph's copy-on-write cloning, this can of course be tremendously slow (e.g. Nova ephemeral root disks), since every image has to be fully copied.

As Ceph RBD is the most popular block storage backend for OpenStack, I am wondering how others are coping (or what workarounds have been found).
Which combinations of OpenStack and Ceph are known to work well with copy-on-write cloning?

How is the noted grave security risk of enabling show_image_direct_url mitigated? (For Ceph RBD, I believe it needs to be True for cloning to work efficiently.)
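One commonly suggested mitigation (not from the original post, and a sketch only) is to leave show_image_direct_url enabled but use Glance policy to restrict who can read image locations, so that only privileged service users see the backend URL. The role names below are assumptions; Cinder and Nova normally talk to Glance with service credentials, so verify those still match the rule before deploying:

```yaml
# glance policy.yaml -- hedged sketch: restrict reading/writing image
# locations to admins (adjust to whatever role your service users carry).
"get_image_location": "role:admin"
"set_image_location": "role:admin"
"delete_image_location": "role:admin"
```

Keeping the Glance API off publicly reachable networks and giving the Glance cephx key only the access it needs are complementary measures.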

See another report of this issue here:
Re: Ceph Pacific and Openstack Wallaby - ERROR cinder.scheduler.flows.create_volume — CEPH Filesystem Users (spinics.net)<https://www.spinics.net/lists/ceph-users/msg66016.html>

Thanks for any help or pointers,

Andrew West
Openstack consulting
CGG France

