Xena and CEPH RBD backend (show_image_direct_url status)

Sofia Enriquez senrique at redhat.com
Fri Mar 4 19:47:39 UTC 2022


Greetings,

Have you opened a bug report for this issue? If not, it would be a great
help if you could file one: https://bugs.launchpad.net/cinder/.

The Cinder team holds a bug meeting every week. In order to reproduce and
discuss this issue further, we need a bug report with the full context of
the bug; it also makes fixes much easier to track. Please include at least
the following information: a description, version details, crystal-clear
steps to reproduce the bug, environment details, and actual results such
as Cinder logs, if any. [1]
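
For example, log excerpts help a lot. A quick way to pull recent errors
out of the cinder-volume service (a sketch; unit and file names vary by
distribution and deployment, so adjust them to your install):

    # systemd-based deployments (the unit name is illustrative):
    journalctl -u openstack-cinder-volume --since "1 hour ago" | grep -i error

    # file-based logging:
    grep -iE "error|traceback" /var/log/cinder/cinder-volume.log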

Thanks,
Sofia

[1] https://wiki.openstack.org/wiki/BugFilingRecommendations


On Mon, Feb 28, 2022 at 11:07 AM Eugen Block <eblock at nde.ag> wrote:

> Hi,
>
> it's disappointing that this is still an issue.
> We're currently using OpenStack Ussuri with Ceph Nautilus (we plan to
> upgrade to Octopus soon), which works fine without enabling
> show_image_direct_url. The same goes for Victoria and Octopus (one of
> our customers uses this combination).
>
>
> > How is the noted grave security risk of enabling
> > show_image_direct_url mitigated? (I.e., I think that, for Ceph RBD,
> > it needs to be True for cloning to work efficiently.)
>
> I'm also wondering in which cases the location contains credentials; I
> haven't seen that yet. Depending on how your cloud is used (is it a
> public or a private cloud?), maybe enabling the option is not that big
> of a risk?
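>
> For reference, with the option enabled you can check what is actually
> exposed (a sketch; the image ID and output below are illustrative). An
> RBD location carries the cluster fsid, pool and image name, but no keys:
>
>     $ openstack image show <image-id> -f value -c direct_url
>     rbd://<fsid>/images/<image-id>/snap
>
> As far as I know, the warnings in the Glance docs are mainly about
> stores whose locations can embed credentials (e.g. certain Swift
> configurations), which would not apply to RBD URIs like the one above.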
>
> Regards,
> Eugen
>
>
> Quoting "west, andrew" <andrew.west-contractor at cgg.com>:
>
> > Hello experts
> >
> > We are currently using OpenStack Xena with a Ceph backend (Pacific 16.2.7).
> >
> > It seems there is a bug (since Wallaby?) where efficient use of a
> > Ceph Pacific RBD backend (i.e., with copy-on-write cloning) is not
> > working: show_image_direct_url needs to be False to create volumes
> > (or ephemeral volumes for Nova).
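> >
> > (For context, the option lives in glance-api.conf; a minimal sketch of
> > the setting in question -- illustrative, not a recommendation:)
> >
> >     [DEFAULT]
> >     show_image_direct_url = True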
> >
> > Without Ceph's copy-on-write cloning feature, this can of course be
> > tremendously slow (e.g., for Nova ephemeral root disks).
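> >
> > (Roughly what the copy-on-write path does under the hood -- a sketch
> > with illustrative pool and image names. Instead of copying all the
> > bits, the RBD driver clones the protected snapshot that Glance keeps
> > on each raw image:)
> >
> >     # Glance keeps a protected snapshot named "snap" on each raw image:
> >     $ rbd snap ls images/<image-id>
> >     # Cinder (and Nova) then take a COW clone instead of copying data:
> >     $ rbd clone images/<image-id>@snap volumes/volume-<uuid>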
> >
> > As Ceph RBD is the favourite backend for block storage in
> > OpenStack, I am wondering how others are coping (or what workarounds
> > have been found).
> > Which combinations of OpenStack and Ceph are known to work well
> > with copy-on-write cloning?
> >
> > How is the noted grave security risk of enabling
> > show_image_direct_url mitigated? (I.e., I think that, for Ceph RBD,
> > it needs to be True for cloning to work efficiently.)
> >
> >
> > See another report of this issue here:
> > "Re: Ceph Pacific and Openstack Wallaby - ERROR
> > cinder.scheduler.flows.create_volume" (CEPH Filesystem Users):
> > https://www.spinics.net/lists/ceph-users/msg66016.html
> >
> > Thanks for any help or pointers,
> >
> > Andrew West
> > Openstack consulting
> > CGG France
> >
> >

-- 
Sofía Enriquez
she/her
Software Engineer
Red Hat PnT <https://www.redhat.com>
IRC: @enriquetaso

