[Glance] Xena and CEPH RBD backend (show_image_direct_url status)

Zakhar Kirpichenko zakhar at gmail.com
Tue Mar 15 20:08:30 UTC 2022


What's really disappointing is that there have been several discussions about
this in both the Ceph and OpenStack communities over a period of more than a
year, yet the issue persists and isn't getting any attention from either
"camp".

/Z

On Tue, Mar 15, 2022 at 8:22 PM Tony Liu <tonyliu0592 at hotmail.com> wrote:

> Adding [Glance] to the subject to attract attention from Glance experts.
> It would be good if someone could point out what change between Victoria
> and Wallaby caused this breakage, and what the plan is to get it working
> again.
>
> Thanks!
> Tony
> ________________________________________
> From: Eugen Block <eblock at nde.ag>
> Sent: February 28, 2022 06:04 AM
> To: openstack-discuss at lists.openstack.org
> Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status )
>
> Hi,
>
> it's disappointing that this is still an issue.
> We're currently running OpenStack Ussuri with Ceph Nautilus (we plan to
> upgrade to Octopus soon), which works fine without enabling
> show_image_direct_url. The same goes for Victoria with Octopus (one of
> our customers uses that combination).
>
>
> > How is the noted GRAVE security RISK of enabling
> > show_image_direct_url mitigated? (i.e. I think that, for Ceph RBD, it
> > needs to be True for cloning to work efficiently)
>
> I'm also wondering in which cases the location would contain credentials;
> I haven't seen that yet. Depending on how your cloud is used (is it a
> public or a private cloud?), maybe enabling the option is not that big of
> a risk?
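[For reference, the option under discussion lives in glance-api.conf. A minimal sketch of enabling it, assuming a typical Glance deployment (the surrounding config is illustrative, not taken from this thread):

```ini
# glance-api.conf -- sketch of a typical deployment
[DEFAULT]
# Expose the image's backend location (e.g. rbd://...) in API responses.
# Cinder/Nova need this to do RBD copy-on-write cloning from Glance images.
show_image_direct_url = True
```

The glance-api service needs a restart after changing this for the option to take effect.]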
>
> Regards,
> Eugen
>
>
> Zitat von "west, andrew" <andrew.west-contractor at cgg.com>:
>
> > Hello experts
> >
> > Currently using openstack Xena and Ceph backend (Pacific 16.2.7)
> >
> > It seems there is a bug (since Wallaby?) where the efficient use of
> > a Ceph Pacific RBD backend (i.e. with copy-on-write cloning) is not
> > working: show_image_direct_url needs to be False to create volumes (or
> > ephemeral volumes for Nova).
> >
> > Without Ceph's copy-on-write cloning feature, this can of course be
> > tremendously slow (e.g. for a Nova ephemeral root disk).
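[An editorial aside: whether copy-on-write cloning actually happened can be checked on the Ceph side. A sketch, assuming the conventional pool names `volumes` and `images` and placeholder IDs (both are assumptions, not from this thread):

```shell
# Sketch: verify COW cloning on the Ceph side (pool/image names are examples).
# A COW-cloned volume reports the Glance image's snapshot as its parent:
rbd info volumes/volume-<uuid> | grep parent
# A line like "parent: images/<glance-image-id>@snap" means the clone worked;
# no parent line means the image data was fully copied instead.
```
]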
> >
> > As Ceph RBD is THE most popular backend for block storage in
> > OpenStack, I am wondering how others are coping (or what workarounds
> > they have found?).
> > Which combinations of OpenStack and Ceph are known to work well
> > with copy-on-write cloning?
> >
> > How is the noted GRAVE security RISK of enabling
> > show_image_direct_url mitigated? (i.e. I think that, for Ceph RBD, it
> > needs to be True for cloning to work efficiently)
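[An editorial aside: one mitigation that has come up in community discussions (an assumption on my part, not proposed in this thread) is to keep show_image_direct_url = False on the public-facing glance-api and run a separate, internal-only glance-api with it enabled, pointing Cinder and Nova at that internal endpoint so image locations are never exposed publicly. A sketch of the consumer side, with a placeholder endpoint URL:

```ini
# cinder.conf -- sketch; the internal endpoint URL is a placeholder
[DEFAULT]
# Point Cinder at an internal glance-api that has show_image_direct_url = True.
# Note: glance_api_servers is deprecated in newer Cinder releases, which
# resolve Glance via the Keystone service catalog instead.
glance_api_servers = http://glance-internal.example.local:9292
```
]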
> >
> >
> > See another report of this issue here:
> > Re: Ceph Pacific and Openstack Wallaby - ERROR
> > cinder.scheduler.flows.create_volume - CEPH Filesystem Users
> > (spinics.net) <https://www.spinics.net/lists/ceph-users/msg66016.html>
> >
> > Thanks for any help or pointers,
> >
> > Andrew West
> > Openstack consulting
> > CGG France
> >
> >
>