[magnum] Policy doesn't allow certificate:get
Folks,

I have recently deployed OpenStack 2023.1 using kolla-ansible, and after setting up Magnum I noticed the following policy error when retrieving cluster certificates. After reading [1], the policy file looks like it requires the reader role to obtain a certificate. I have assigned the "reader" role to the user, but I am still getting the same error.

$ openstack role add --user user1 --user-domain mydomain1 --project myproject1 reader

# Reload the user's credentials RC file.

$ openstack coe cluster config dev2
Policy doesn't allow certificate:get to be performed (HTTP 403) (Request-ID: req-7445ef3c-52a3-4911-97f6-1fb25d9fac1f)

What else could be wrong here?

1. https://docs.openstack.org/magnum/latest/configuration/sample-policy.html
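For reference, one way to check what the deployed policy actually requires is to render the effective policy and grep for the certificate rules. This is only a sketch; it assumes kolla-ansible's magnum_api container name and that oslo.policy's generator tools are available in the Magnum image:

# Render the effective Magnum policy (defaults merged with any overrides)
# and look at the certificate rules.
$ docker exec magnum_api oslopolicy-policy-generator --namespace magnum | grep -i certificate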
Hi Satish,

Does user1 own the cluster? There is a check for user_id.

Regards,
Jake
Yes, user1 created this cluster. I am user1 and I did it myself. How do I check the user_id of the cluster? I am not able to see it in the cluster status output.

(venv-openstack) root@os-eng-ctrl-01:~# openstack coe cluster show dev2
| Field | Value |
| status | CREATE_COMPLETE |
| health_status | HEALTHY |
| cluster_template_id | a998b58d-fcf5-4cf8-84fb-3febd368e321 |
| node_addresses | [] |
| uuid | e08d6cd5-fe99-4311-a167-077d2c024827 |
| stack_id | kube-3cm4o |
| status_reason | None |
| created_at | 2024-04-23T19:15:02+00:00 |
| updated_at | 2024-04-23T19:20:44+00:00 |
| coe_version | v1.27.4 |
| labels | {'kube_tag': 'v1.27.4', 'ingress_controller': 'octavia', 'cloud_provider_enabled': 'true', 'availability_zone': 'general', 'auto_scaling_enabled': 'False', 'auto_healing_enabled': 'False'} |
| labels_overridden | {} |
| labels_skipped | {} |
| labels_added | {'auto_scaling_enabled': 'False', 'auto_healing_enabled': 'False'} |
| fixed_network | None |
| fixed_subnet | None |
| floating_ip_enabled | True |
| faults | |
| keypair | user1-sshkey |
| api_address | https://10.0.27.218:6443 |
| master_addresses | [] |
| master_lb_enabled | False |
| create_timeout | 60 |
| node_count | 2 |
| discovery_url | |
| docker_volume_size | None |
| master_count | 1 |
| container_version | None |
| name | dev2 |
| master_flavor_id | gen.c4-m8-d40 |
| flavor_id | gen.c4-m8-d40 |
| health_status_reason | {'kube-3cm4o-default-worker-47w4d-8ml4f-n2sjh.Ready': 'True', 'kube-3cm4o-default-worker-47w4d-8ml4f-n4jb4.Ready': 'True', 'kube-3cm4o-zxxwk-lqvdh.Ready': 'True'} |
| project_id | 65261738576843ce92d21899a5f86621 |
Funny thing is, I deployed 2023.1 last year at another site where everything works: I am able to create a cluster, retrieve certificates, etc., even though I didn't add any users to the reader role. It seems this is something added recently that isn't documented anywhere except the policy file.

In the new setup I have integrated Keystone with LDAP (only for username/password authentication, not for assignment, etc.).
On 24/4/2024 10:04 pm, Satish Patel wrote:
> On Wed, Apr 24, 2024 at 7:36 AM Satish Patel <satish.txt@gmail.com> wrote:
>> Yes, user1 created this cluster. I am user1 and I did it myself. How do I check the user_id of the cluster? I am not able to see it in the cluster status output.
It's returned by the API but not shown in the table. You can see it if you do `openstack --debug coe cluster show dev2`. Alternatively, look in the DB at magnum.cluster.user_id.

It may also help if you dump the output of `openstack role assignment list` for user1.
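To make that concrete, something like the following should work (a sketch; the grep just pulls user_id out of the debug API response, and the cluster, user, domain, and project names are the ones from this thread):

# Compare the user id in the current token with the cluster's user_id
# returned by the Magnum API (printed in the --debug output).
$ openstack token issue -f value -c user_id
$ openstack --debug coe cluster show dev2 2>&1 | grep -o '"user_id": *"[^"]*"'

# Dump user1's role assignments on the project.
$ openstack role assignment list --user user1 --user-domain mydomain1 --project myproject1 --names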
> Funny thing is, I deployed 2023.1 last year at another site where everything works: I am able to create a cluster, retrieve certificates, etc., even though I didn't add any users to the reader role. It seems this is something added recently that isn't documented anywhere except the policy file.
Your old cluster is 2023.1 and the new cluster is also 2023.1? I took a look at stable/2023.1 and we didn't backport many policy-related patches. Can you elaborate on "something new added recently"?
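If it helps to compare the two deployments, one way to check the exact Magnum version running in each (a sketch; it assumes kolla's magnum_api container name and that pip is available in the image) is:

$ docker exec magnum_api pip show magnum | grep -i '^version'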
> In the new setup I have integrated Keystone with LDAP (only for username/password authentication, not for assignment, etc.).
Maybe that's it, but I'm not familiar with LDAP setups so I can't help you there. You may want to redeploy the same version of Magnum without the LDAP integration to rule out code or config differences.

HTH,
Jake
Hi Jake,

Thank you for the wonderful debugging tips and suggestions. It turned out to be an LDAP setup issue. I had set up LDAP for authentication but not for role assignment. It works after setting the following lines in keystone.conf:

[assignment]
driver = sql

I didn't add those lines earlier because I thought sql was the default, but maybe my LDAP config overrides that option. Anyway, thanks again for the help.

~S
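For anyone who hits the same thing, here is a minimal sketch of the relevant keystone.conf pieces. The values are illustrative and assume domain-specific LDAP configuration under /etc/keystone/domains; adjust for your own layout:

[identity]
# Identity (users/groups) can come from per-domain LDAP backends.
domain_specific_drivers_enabled = True
domain_config_dir = /etc/keystone/domains

[assignment]
# Keep role assignments in the SQL backend even when identity comes from LDAP.
driver = sql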
participants (2)
- Jake Yip
- Satish Patel