Xena and CEPH RBD backend (show_image_direct_url status)
Hello experts,

We are currently using OpenStack Xena with a Ceph backend (Pacific 16.2.7). It seems there is a bug (since Wallaby?) where efficient use of a Ceph Pacific RBD backend (i.e. with copy-on-write cloning) is not working: show_image_direct_url needs to be False to create volumes (or ephemeral volumes for Nova). Without Ceph's copy-on-write cloning feature this can of course be tremendously slow (e.g. Nova ephemeral root disks).

As Ceph RBD is the most popular backend for block storage in OpenStack, I am wondering how others are coping (or what workarounds have been found). Which combinations of OpenStack and Ceph are known to work well with copy-on-write cloning? And how is the noted GRAVE security RISK of enabling show_image_direct_url mitigated? (For Ceph RBD, I believe it needs to be True for cloning to work efficiently.)

See another report of this issue here: Re: Ceph Pacifif and Openstack Wallaby - ERROR cinder.scheduler.flows.create_volume - CEPH Filesystem Users (spinics.net) <https://www.spinics.net/lists/ceph-users/msg66016.html>

Thanks for any help or pointers,

Andrew West
OpenStack consulting, CGG France

________________________________
"This e-mail and any accompanying attachments are confidential. The information is intended solely for the use of the individual to whom it is addressed. Any review, disclosure, copying, distribution, or use of the email by others is strictly prohibited. If you are not the intended recipient, you must not review, disclose, copy, distribute or use this e-mail; please delete it from your system and notify the sender immediately."
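[For readers following along: the option under discussion lives in glance-api.conf. A minimal sketch of what enabling it looks like; the path and comments are illustrative, not a verified configuration:]

```ini
# /etc/glance/glance-api.conf (illustrative sketch)
[DEFAULT]
# Exposes the backend location of an image (e.g. rbd://fsid/pool/image/snap)
# in the image API. Cinder/Nova need this to do Ceph copy-on-write clones,
# but it leaks backend details to API users -- hence the documented warning.
show_image_direct_url = True
```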
Hi,

it's disappointing that this is still an issue. We're currently using OpenStack Ussuri with Ceph Nautilus (we plan to upgrade to Octopus soon), which works fine without enabling show_image_direct_url. The same goes for Victoria and Octopus (one of our customers uses that combination).
How is the noted GRAVE Security RISK of enabling Show_image_direct_url mitigated ? (i.e I think , for CEPH RBD, it needs to be True to get cloning to work efficiently)
I'm also wondering in which cases the location contains credentials; I haven't seen that yet. Depending on how your cloud is used (public or private), maybe enabling the option is not that big of a risk?

Regards,
Eugen

Quoting "west, andrew" <andrew.west-contractor@cgg.com>:
Greetings,

Have you opened a bug report for this issue? If not, it would be a great help if you could file one: https://bugs.launchpad.net/cinder/. The Cinder team holds a bug meeting every week; in order to reproduce and discuss this issue further we need a bug report with all of its context. It's also really useful for tracking fixes. Please provide at least the following information: description, version details, crystal-clear steps to reproduce the bug, environment details, and actual results such as Cinder logs, if any. [1]

Thanks,
Sofia

[1] https://wiki.openstack.org/wiki/BugFilingRecommendations

On Mon, Feb 28, 2022 at 11:07 AM Eugen Block <eblock@nde.ag> wrote:
-- Sofía Enriquez (she/her), Software Engineer, Red Hat PnT, IRC: @enriquetaso
Adding [Glance] to attract attention from Glance experts. It would be good if someone could point out what changed between Victoria and Wallaby to cause this breakage, and what the plan is to get it working again.

Thanks!
Tony

________________________________________
From: Eugen Block <eblock@nde.ag>
Sent: February 28, 2022 06:04 AM
To: openstack-discuss@lists.openstack.org
Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status)
What's really disappointing is that there have been several discussions about this in both the Ceph and OpenStack communities over a period of more than a year, but the issue persists and isn't getting any attention from either "camp".

/Z

On Tue, Mar 15, 2022 at 8:22 PM Tony Liu <tonyliu0592@hotmail.com> wrote:
Hi Andrew,

When you say it's not working: are there any error messages shown to the user, or any error or failure logging from Glance or Cinder? More details would help us look into it.

Thanks!
Tony

________________________________________
From: west, andrew <andrew.west-contractor@CGG.COM>
Sent: February 24, 2022 06:29 AM
To: openstack-discuss@lists.openstack.org
Subject: Xena and CEPH RBD backend (show_image_direct_url status)
On Thu, Feb 24, 2022 at 2:37 PM west, andrew <andrew.west-contractor@cgg.com> wrote:
Hi Andrew,

Sorry for the delayed reply; I got distracted and forgot after the first time I noticed this. So far I see you only mentioning the 'show_image_direct_url' setting, but AFAIK 'show_multiple_locations' is also required for these features to work. Is that set to true, and does the issue still persist?

- jokke
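[To illustrate Jokke's point, a sketch of the two Glance options together as they would appear in glance-api.conf. Values and comments are assumptions for illustration, not a verified working configuration:]

```ini
# /etc/glance/glance-api.conf -- illustrative sketch
[DEFAULT]
# Expose the direct backend URL of an image in the image API.
show_image_direct_url = True
# Expose the image locations list; reportedly also needed for
# Ceph copy-on-write cloning from Glance images to work.
show_multiple_locations = True
```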
Hello, Andrew & Jokke:

I have the same problem. If I enable

show_image_direct_url = True
show_multiple_locations = True

in /etc/glance/glance-api.conf, then when I create an instance from an image I get:

[Error: Build of instance 5855874c-860c-4ab8-8b3f-08970e220806 aborted: Volume 0367557b-4efc-40ab-8fda-932ce0a9f542 did not finish being created even after we waited 3 seconds or 2 attempts. And its status is error.]

Thanks!

On 16/3/22 at 10:33, Erno Kuvaja wrote:
Hi Erno,

I have a Xena setup with Ceph. When creating a snapshot of an image, it is a full copy. When creating a volume from an image, it is an incremental copy. show_multiple_locations is true; show_image_direct_url doesn't seem to have any effect, true or false, the result is the same. With Ussuri, both of the above creations were incremental copies. Is there any way we can get incremental snapshots for images?

Thanks!
Tony

________________________________________
From: Erno Kuvaja <ekuvaja@redhat.com>
Sent: March 16, 2022 06:33 AM
To: west, andrew
Cc: openstack-discuss@lists.openstack.org
Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status)
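[One way to tell a full copy from a copy-on-write clone, as discussed above, is to look for a "parent" in the image's `rbd info` output: a COW clone points back at the source snapshot, a full copy does not. A small sketch; the pool/image names are made up, and it assumes the JSON shape of `rbd info --format json`:]

```python
import json
import subprocess

def is_cow_clone(info_json: str) -> bool:
    """Return True if `rbd info --format json` output shows a parent,
    i.e. the image is a copy-on-write clone rather than a full copy."""
    info = json.loads(info_json)
    return "parent" in info and bool(info["parent"])

def rbd_image_is_clone(pool: str, image: str) -> bool:
    # Shells out to the rbd CLI; requires Ceph client credentials on this host.
    out = subprocess.check_output(
        ["rbd", "info", f"{pool}/{image}", "--format", "json"])
    return is_cow_clone(out.decode())
```

[So `rbd_image_is_clone("volumes", "volume-0367...")` returning False would match Tony's observation of a full copy, while a clone would carry a parent block referencing the Glance image's snapshot.]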
To clarify, what I did was create a snapshot of a VM that was based on an image. Is it because Nova doesn't get the image location from Glance?

Thanks!
Tony

________________________________________
From: Tony Liu <tonyliu0592@hotmail.com>
Sent: April 9, 2022 07:28 PM
To: Erno Kuvaja; west, andrew
Cc: openstack-discuss@lists.openstack.org
Subject: Re: Xena and CEPH RBD backend (show_image_direct_url status)
participants (7)
- Erno Kuvaja
- Eugen Block
- Sofia Enriquez
- Tecnologia Charne.Net
- Tony Liu
- west, andrew
- Zakhar Kirpichenko