Hi OpenStack Neutron Team,

I have deployed a multinode OpenStack cluster with kolla-ansible (Zed release). The problem I am facing is that instances cannot pick up any IP addresses (other than the loopback address or an IPv6 link-local address) from the networks I have assigned to them; I have created both provider and self-service networks of the flat network type. As a result I cannot ping or SSH into the instances from the controller node, and the instances cannot reach the internet. Can you guide me on how to resolve these network connectivity issues? I can then send you whatever files are required to make troubleshooting easier.

Thanks

-----Original Message-----
From: openstack-discuss-request@lists.openstack.org <openstack-discuss-request@lists.openstack.org>
Sent: Thursday, December 14, 2023 12:26 AM
To: openstack-discuss@lists.openstack.org
Subject: openstack-discuss Digest, Vol 62, Issue 44

Send openstack-discuss mailing list submissions to
	openstack-discuss@lists.openstack.org

To subscribe or unsubscribe via email, send a message with subject or body 'help' to
	openstack-discuss-request@lists.openstack.org

You can reach the person managing the list at
	openstack-discuss-owner@lists.openstack.org

When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..."

Today's Topics:

   1. What are the correct auth caps for Ceph RBD clients of Cinder / Glance / Nova (Christian Rohmann)
   2. Re: What are the correct auth caps for Ceph RBD clients of Cinder / Glance / Nova (Jonathan Rosser)
   3. Re: [tripleo][ironic][wallaby] Partition Table: loop (Alexey Kashavkin)
   4.
Re: [all][tc][ptls] Eventlet: Python 3.12, and other concerning discoveries (Jay Faulkner)

----------------------------------------------------------------------

Message: 1
Date: Wed, 13 Dec 2023 18:00:46 +0100
From: Christian Rohmann <christian.rohmann@inovex.de>
Subject: What are the correct auth caps for Ceph RBD clients of Cinder / Glance / Nova
To: openstack-discuss <openstack-discuss@lists.openstack.org>
Message-ID: <a65d9057-67a2-4ab0-a2b6-fe327d185f06@inovex.de>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hey openstack-discuss,

I am a little confused about the correct and required Ceph auth (cephx) permissions for the RBD clients in Cinder, Glance and also Nova:

When Glance is asked to delete an image, it checks whether the image has dependent children, see https://opendev.org/openstack/glance_store/src/commit/6f5011d1f05c99894fb8b9.... The children of Glance images are usually (Cinder) volumes, which therefore live in a different RBD pool, "volumes". But if such children do exist, the Glance API throws a 500 error. There is also a bug about this issue on Launchpad [3].

Manually using the RBD client shows the same error:
# rbd -n client.glance -k /etc/ceph/ceph.client.glance.keyring -p images children $IMAGE_ID
2023-12-13T16:51:48.131+0000 7f198cf4e640 -1 librbd::image::OpenRequest: failed to retrieve name: (1) Operation not permitted
2023-12-13T16:51:48.131+0000 7f198d74f640 -1 librbd::ImageState: 0x5639fdd5af60 failed to open image: (1) Operation not permitted
rbd: listing children failed: (1) Operation not permitted
2023-12-13T16:51:48.131+0000 7f1990c474c0 -1 librbd::api::Image: list_descendants: failed to open descendant b7078ed7ace50d from pool instances: (1) Operation not permitted
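The shape of this failure can be modeled in a few lines of Python (a toy sketch with made-up names, not glance_store's actual code): clone children of an image may live in pools other than "images", so merely listing them requires read access to those other pools.

```python
# Toy model (hypothetical names, not glance_store's actual code) of the
# in-use check Glance runs before deleting an RBD-backed image.

class InUseByStore(Exception):
    """The image still has dependent children and must not be deleted."""

# pool -> {rbd image name: name of its clone parent, or None}
CLUSTER = {
    "images":  {"img-1": None},
    "volumes": {"vol-1": "img-1"},   # a Cinder volume cloned from img-1
}

def list_children(image, readable_pools):
    """List clone children of `image` across all pools in the cluster."""
    children = []
    for pool, rbds in CLUSTER.items():
        for name, parent in rbds.items():
            if parent == image:
                if pool not in readable_pools:
                    # mirrors "failed to open descendant ...: (1) Operation not permitted"
                    raise PermissionError(f"cannot open descendant in pool {pool!r}")
                children.append(f"{pool}/{name}")
    return children

def delete_image(image, readable_pools):
    """Delete an image from the images pool unless it has children."""
    if list_children(image, readable_pools):
        raise InUseByStore(image)   # the intended, clean refusal
    del CLUSTER["images"][image]
```

With caps on "images" only, the child listing itself blows up with a permission error (surfacing as Glance's 500); with read access to "volumes" as well, the delete is refused cleanly as "image in use".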
So it's a permission error. Neither the Glance documentation [1] nor the Ceph documentation [2] on configuring the ceph auth caps mentions granting Glance anything on the volumes pool. This is what I currently have configured:
client.cinder
    key: REDACTED
    caps: [mgr] profile rbd pool=volumes, profile rbd-read-only pool=images
    caps: [mon] profile rbd
    caps: [osd] profile rbd pool=volumes, profile rbd-read-only pool=images

client.glance
    key: REDACTED
    caps: [mgr] profile rbd pool=images
    caps: [mon] profile rbd
    caps: [osd] profile rbd pool=images

client.nova
    key: REDACTED
    caps: [mgr] profile rbd pool=instances, profile rbd pool=images
    caps: [mon] profile rbd
    caps: [osd] profile rbd pool=instances, profile rbd pool=images
When granting the glance client e.g. "rbd-read-only" to the volumes pool via:
# ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images, profile rbd-read-only pool=volumes' mgr 'profile rbd pool=images, profile rbd-read-only pool=volumes'
the error is gone.

I am wondering, though, whether this is really just a documentation bug (at OpenStack AND Ceph equally), and whether Glance really needs read-only access to the whole volumes pool, or whether there is some other capability that covers asking for child images.

All in all, I am simply wondering what the correct, least-privilege ceph auth caps for the RBD clients in Cinder, Glance and Nova would look like.

Thanks

Christian

[1] https://docs.openstack.org/glance/latest/configuration/configuring.html#conf...
[2] https://docs.ceph.com/en/latest/rbd/rbd-openstack/#setup-ceph-client-authent...
[3] https://bugs.launchpad.net/glance/+bug/2045158

------------------------------

Message: 2
Date: Wed, 13 Dec 2023 17:40:44 +0000
From: Jonathan Rosser <jonathan.rosser@rd.bbc.co.uk>
Subject: Re: What are the correct auth caps for Ceph RBD clients of Cinder / Glance / Nova
To: openstack-discuss@lists.openstack.org
Message-ID: <fb421a48-b0f6-4a6c-8d72-5b2ed8532b52@rd.bbc.co.uk>
Content-Type: text/plain; charset=UTF-8; format=flowed

Hi Christian,

If you dig through the various deployment tooling then you'll find things like https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/...

Hope this is helpful,
Jonathan.

On 13/12/2023 17:00, Christian Rohmann wrote:
------------------------------

Message: 3
Date: Thu, 14 Dec 2023 01:05:41 +0600
From: Alexey Kashavkin <akashavkin@gmail.com>
Subject: Re: [tripleo][ironic][wallaby] Partition Table: loop
To: Julia Kreger <juliaashleykreger@gmail.com>
Cc: openstack-discuss@lists.openstack.org
Message-ID: <0F672C26-BC74-4D16-92FC-292F1D2BF62F@gmail.com>
Content-Type: multipart/alternative; boundary="Apple-Mail=_BE0F9200-A2C4-458F-B2C4-31E6043ACAA2"

Greetings Julia,

Thank you so much for your reply and clarification.
On 12 Dec 2023, at 20:22, Julia Kreger <juliaashleykreger@gmail.com> wrote:
Greetings Alexey,
That would do it. Whenever we do not see a kernel/ramdisk alongside an image file, we assume it is a whole-disk image, and as a result we don't partition; we only write out what we've been provided. This is rooted in the image storage convention with Glance: you need a kernel/ramdisk to boot a partition image, but you can "just boot" a whole-disk image.
When the disk utilities look at the disk in this specific case, they think they are looking at a partition image on a loopback device, due to the lack of a partition table.
That being said, whole-disk images are the preferred path, given that they give you greater flexibility in the partitioning/structure/layout of the deployed host.
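The distinction can be illustrated with a minimal check for the MBR boot signature (a simplification for illustration only, not Ironic's actual detection logic): a whole-disk image carries its own partition table, so byte offsets 510-511 of its first sector hold 0x55 0xAA, while a bare-filesystem partition image normally does not.

```python
# Illustration only (not Ironic's detection logic): tell a whole-disk
# image from a partition image by the MBR boot signature at the end of
# the first 512-byte sector.

MBR_SIGNATURE = b"\x55\xaa"

def looks_like_whole_disk(first_sector: bytes) -> bool:
    """True if the sector ends with the 0x55AA MBR boot signature."""
    return len(first_sector) >= 512 and first_sector[510:512] == MBR_SIGNATURE

# Fake whole-disk image: zeroed bootstrap code plus the signature.
whole_disk = b"\x00" * 510 + MBR_SIGNATURE
# Fake partition image: a bare filesystem blob with no partition table.
partition_image = b"\x00" * 512
```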
Hope that helps explain and provides the extra context.
-Julia
On Tue, Dec 12, 2023 at 2:56 AM Alexey Kashavkin <akashavkin@gmail.com <mailto:akashavkin@gmail.com>> wrote:
I realized what was wrong in my case. I have a partition image, and adding the ramdisk and kernel paths to 'baremetal_deployment.yaml' helped.
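For anyone hitting the same thing, the relevant piece of my 'baremetal_deployment.yaml' ended up looking roughly like the sketch below (the role name, count and file paths are illustrative placeholders, not my exact values):

```yaml
# Illustrative sketch -- role name and file paths are placeholders.
# With a partition image, the kernel and ramdisk must be supplied
# explicitly; otherwise Ironic assumes a whole-disk image.
- name: Controller
  count: 1
  defaults:
    image:
      href: file:///var/lib/ironic/images/overcloud-full.raw
      kernel: file:///var/lib/ironic/images/overcloud-full.vmlinuz
      ramdisk: file:///var/lib/ironic/images/overcloud-full.initrd
```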